Mirror of https://github.com/catlog22/Claude-Code-Workflow.git (synced 2026-02-13 02:41:50 +08:00)
Compare commits
292 Commits
.claude/CLAUDE.md (new file, 25 lines)

@@ -0,0 +1,25 @@
+# Claude Instructions
+
+- **CLI Tools Usage**: @~/.claude/workflows/cli-tools-usage.md
+- **Coding Philosophy**: @~/.claude/workflows/coding-philosophy.md
+- **Context Requirements**: @~/.claude/workflows/context-tools-ace.md
+- **File Modification**: @~/.claude/workflows/file-modification.md
+- **CLI Endpoints Config**: @.claude/cli-tools.json
+
+## CLI Endpoints
+
+**Strictly follow the @.claude/cli-tools.json configuration**
+
+Available CLI endpoints are dynamically defined by the config file:
+- Built-in tools and their enable/disable status
+- Custom API endpoints registered via the Dashboard
+- Managed through the CCW Dashboard Status page
+
+## Agent Execution
+
+- **Always use `run_in_background: false`** for Task tool agent calls: `Task({ subagent_type: "xxx", prompt: "...", run_in_background: false })` to ensure synchronous execution and immediate result visibility
+- **TaskOutput usage**: Only use `TaskOutput({ task_id: "xxx", block: false })` + sleep loop to poll completion status. NEVER read intermediate output during agent/CLI execution - wait for final result only
+
+## Code Diagnostics
+
+- **Prefer `mcp__ide__getDiagnostics`** for code error checking over shell-based TypeScript compilation
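The `TaskOutput` rule above (poll with `block: false` plus a sleep loop, read only the final result) can be sketched as a small loop. `taskOutput` here is a hypothetical stand-in for the actual tool call, mocked so the sketch is self-contained:

```javascript
// Minimal sketch of the non-blocking poll loop described above.
// `taskOutput` is a hypothetical stand-in for the TaskOutput tool call;
// here it is mocked to report "running" twice before completing.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

let polls = 0;
async function taskOutput({ task_id, block }) {
  polls += 1;
  return polls < 3
    ? { status: "running" } // intermediate output is ignored, per the rule
    : { status: "completed", result: `${task_id} done` };
}

async function waitForAgent(taskId) {
  // Poll with block: false and sleep between attempts; only read the final result.
  for (;;) {
    const out = await taskOutput({ task_id: taskId, block: false });
    if (out.status === "completed") return out.result;
    await sleep(10);
  }
}

waitForAgent("IMPL-001").then((result) => console.log(result));
```

The loop never inspects intermediate output, which mirrors the "wait for final result only" constraint.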
.claude/active_memory_config.json (new file, 4 lines)

@@ -0,0 +1,4 @@
+{
+  "interval": "manual",
+  "tool": "gemini"
+}
@@ -16,11 +16,9 @@ description: |
 color: yellow
 ---
 
-You are a pure execution agent specialized in creating actionable implementation plans. You receive requirements and control flags from the command layer and execute planning tasks without complex decision-making logic.
-
 ## Overview
 
-**Agent Role**: Transform user requirements and brainstorming artifacts into structured, executable implementation plans with quantified deliverables and measurable acceptance criteria.
+**Agent Role**: Pure execution agent that transforms user requirements and brainstorming artifacts into structured, executable implementation plans with quantified deliverables and measurable acceptance criteria. Receives requirements and control flags from the command layer and executes planning tasks without complex decision-making logic.
 
 **Core Capabilities**:
 - Load and synthesize context from multiple sources (session metadata, context packages, brainstorming artifacts)
@@ -33,7 +31,7 @@ You are a pure execution agent specialized in creating actionable implementation
 
 ---
 
-## 1. Execution Process
+## 1. Input & Execution
 
 ### 1.1 Input Processing
 
@@ -43,7 +41,6 @@ You are a pure execution agent specialized in creating actionable implementation
 - `context_package_path`: Context package with brainstorming artifacts catalog
 - **Metadata**: Simple values
   - `session_id`: Workflow session identifier (WFS-[topic])
-  - `execution_mode`: agent-mode | cli-execute-mode
   - `mcp_capabilities`: Available MCP tools (exa_code, exa_web, code_index)
 
 **Legacy Support** (backward compatibility):
@@ -51,7 +48,7 @@ You are a pure execution agent specialized in creating actionable implementation
 - **Control flags**: DEEP_ANALYSIS_REQUIRED, etc.
 - **Task requirements**: Direct task description
 
-### 1.2 Two-Phase Execution Flow
+### 1.2 Execution Flow
 
 #### Phase 1: Context Loading & Assembly
 
@@ -89,6 +86,27 @@ You are a pure execution agent specialized in creating actionable implementation
 6. Assess task complexity (simple/medium/complex)
 ```
 
+**MCP Integration** (when `mcp_capabilities` available):
+
+```javascript
+// Exa Code Context (mcp_capabilities.exa_code = true)
+mcp__exa__get_code_context_exa(
+  query="TypeScript OAuth2 JWT authentication patterns",
+  tokensNum="dynamic"
+)
+
+// Integration in flow_control.pre_analysis
+{
+  "step": "local_codebase_exploration",
+  "action": "Explore codebase structure",
+  "commands": [
+    "bash(rg '^(function|class|interface).*[task_keyword]' --type ts -n --max-count 15)",
+    "bash(find . -name '*[task_keyword]*' -type f | grep -v node_modules | head -10)"
+  ],
+  "output_to": "codebase_structure"
+}
+```
+
 **Context Package Structure** (fields defined by context-search-agent):
 
 **Always Present**:
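A `pre_analysis` step like the one above pairs a `commands` array of `bash(...)` strings with an `output_to` key. One way an executor might interpret such a step can be sketched as follows; the runner is a stub for illustration, since real execution happens in the agent/CLI layer:

```javascript
// Sketch of interpreting a pre_analysis step object: unwrap "bash(...)"
// command strings and collect the results under the step's output_to key.
// `runBash` is a hypothetical callback; the real runner is the agent layer.
function runPreAnalysisStep(step, runBash) {
  const results = (step.commands || []).map((cmd) => {
    const m = /^bash\((.*)\)$/s.exec(cmd);
    return m ? runBash(m[1]) : `unsupported: ${cmd}`;
  });
  return { [step.output_to]: results };
}

const step = {
  step: "local_codebase_exploration",
  commands: ["bash(echo hello)", "bash(echo world)"],
  output_to: "codebase_structure",
};

// Stub runner that just records the command it was handed.
const out = runPreAnalysisStep(step, (cmd) => `ran: ${cmd}`);
console.log(out); // { codebase_structure: ["ran: echo hello", "ran: echo world"] }
```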
@@ -170,30 +188,6 @@ if (contextPackage.brainstorm_artifacts?.role_analyses?.length > 0) {
 5. Update session state for execution readiness
 ```
 
-### 1.3 MCP Integration Guidelines
-
-**Exa Code Context** (`mcp_capabilities.exa_code = true`):
-```javascript
-// Get best practices and examples
-mcp__exa__get_code_context_exa(
-  query="TypeScript OAuth2 JWT authentication patterns",
-  tokensNum="dynamic"
-)
-```
-
-**Integration in flow_control.pre_analysis**:
-```json
-{
-  "step": "local_codebase_exploration",
-  "action": "Explore codebase structure",
-  "commands": [
-    "bash(rg '^(function|class|interface).*[task_keyword]' --type ts -n --max-count 15)",
-    "bash(find . -name '*[task_keyword]*' -type f | grep -v node_modules | head -10)"
-  ],
-  "output_to": "codebase_structure"
-}
-```
-
 ---
 
 ## 2. Output Specifications
@@ -209,15 +203,69 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
   "id": "IMPL-N",
   "title": "Descriptive task name",
   "status": "pending|active|completed|blocked",
-  "context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json"
+  "context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json",
+  "cli_execution_id": "WFS-{session}-IMPL-N",
+  "cli_execution": {
+    "strategy": "new|resume|fork|merge_fork",
+    "resume_from": "parent-cli-id",
+    "merge_from": ["id1", "id2"]
+  }
 }
 ```
 
 **Field Descriptions**:
-- `id`: Task identifier (format: `IMPL-N`)
+- `id`: Task identifier
+  - Single module format: `IMPL-N` (e.g., IMPL-001, IMPL-002)
+  - Multi-module format: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1, IMPL-C1)
+    - Prefix: A, B, C... (assigned by module detection order)
+    - Sequence: 1, 2, 3... (per-module increment)
 - `title`: Descriptive task name summarizing the work
 - `status`: Task state - `pending` (not started), `active` (in progress), `completed` (done), `blocked` (waiting on dependencies)
 - `context_package_path`: Path to smart context package containing project structure, dependencies, and brainstorming artifacts catalog
+- `cli_execution_id`: Unique CLI conversation ID (format: `{session_id}-{task_id}`)
+- `cli_execution`: CLI execution strategy based on task dependencies
+  - `strategy`: Execution pattern (`new`, `resume`, `fork`, `merge_fork`)
+  - `resume_from`: Parent task's cli_execution_id (for resume/fork)
+  - `merge_from`: Array of parent cli_execution_ids (for merge_fork)
+
+**CLI Execution Strategy Rules** (MANDATORY - apply to all tasks):
+
+| Dependency Pattern | Strategy | CLI Command Pattern |
+|--------------------|----------|---------------------|
+| No `depends_on` | `new` | `--id {cli_execution_id}` |
+| 1 parent, parent has 1 child | `resume` | `--resume {resume_from}` |
+| 1 parent, parent has N children | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
+| N parents | `merge_fork` | `--resume {merge_from.join(',')} --id {cli_execution_id}` |
+
+**Strategy Selection Algorithm**:
+```javascript
+function computeCliStrategy(task, allTasks) {
+  const deps = task.context?.depends_on || []
+  const childCount = allTasks.filter(t =>
+    t.context?.depends_on?.includes(task.id)
+  ).length
+
+  if (deps.length === 0) {
+    return { strategy: "new" }
+  } else if (deps.length === 1) {
+    const parentTask = allTasks.find(t => t.id === deps[0])
+    const parentChildCount = allTasks.filter(t =>
+      t.context?.depends_on?.includes(deps[0])
+    ).length
+
+    if (parentChildCount === 1) {
+      return { strategy: "resume", resume_from: parentTask.cli_execution_id }
+    } else {
+      return { strategy: "fork", resume_from: parentTask.cli_execution_id }
+    }
+  } else {
+    const mergeFrom = deps.map(depId =>
+      allTasks.find(t => t.id === depId).cli_execution_id
+    )
+    return { strategy: "merge_fork", merge_from: mergeFrom }
+  }
+}
+```
 
 #### Meta Object
 
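The strategy table can be sanity-checked by running the selection function from this hunk against a small dependency graph. The function body below is lightly condensed from the diff (the unused `childCount` is dropped) so the sketch runs standalone; the task objects are illustrative:

```javascript
// Strategy selection from the diff above, lightly condensed to run standalone.
function computeCliStrategy(task, allTasks) {
  const deps = task.context?.depends_on || []
  if (deps.length === 0) {
    return { strategy: "new" }
  } else if (deps.length === 1) {
    const parentTask = allTasks.find(t => t.id === deps[0])
    const parentChildCount = allTasks.filter(t =>
      t.context?.depends_on?.includes(deps[0])
    ).length
    return parentChildCount === 1
      ? { strategy: "resume", resume_from: parentTask.cli_execution_id }
      : { strategy: "fork", resume_from: parentTask.cli_execution_id }
  }
  return {
    strategy: "merge_fork",
    merge_from: deps.map(d => allTasks.find(t => t.id === d).cli_execution_id)
  }
}

// Illustrative diamond graph: IMPL-2 and IMPL-3 both depend on IMPL-1,
// and IMPL-4 merges both branches.
const tasks = [
  { id: "IMPL-1", cli_execution_id: "WFS-demo-IMPL-1", context: {} },
  { id: "IMPL-2", cli_execution_id: "WFS-demo-IMPL-2", context: { depends_on: ["IMPL-1"] } },
  { id: "IMPL-3", cli_execution_id: "WFS-demo-IMPL-3", context: { depends_on: ["IMPL-1"] } },
  { id: "IMPL-4", cli_execution_id: "WFS-demo-IMPL-4", context: { depends_on: ["IMPL-2", "IMPL-3"] } },
]

const plan = Object.fromEntries(tasks.map(t => [t.id, computeCliStrategy(t, tasks)]))
console.log(plan)
// IMPL-1 starts new; IMPL-2/IMPL-3 fork from IMPL-1; IMPL-4 merge_forks both.
```

Note that both children of IMPL-1 get `fork` rather than `resume`, because the parent has two children and each branch needs its own conversation ID.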
@@ -226,7 +274,14 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
   "meta": {
     "type": "feature|bugfix|refactor|test-gen|test-fix|docs",
     "agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor",
-    "execution_group": "parallel-abc123|null"
+    "execution_group": "parallel-abc123|null",
+    "module": "frontend|backend|shared|null",
+    "execution_config": {
+      "method": "agent|hybrid|cli",
+      "cli_tool": "codex|gemini|qwen|auto",
+      "enable_resume": true,
+      "previous_cli_id": "string|null"
+    }
   }
 }
 ```
@@ -235,6 +290,12 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
 - `type`: Task category - `feature` (new functionality), `bugfix` (fix defects), `refactor` (restructure code), `test-gen` (generate tests), `test-fix` (fix failing tests), `docs` (documentation)
 - `agent`: Assigned agent for execution
 - `execution_group`: Parallelization group ID (tasks with same ID can run concurrently) or `null` for sequential tasks
+- `module`: Module identifier for multi-module projects (e.g., `frontend`, `backend`, `shared`) or `null` for single-module
+- `execution_config`: CLI execution settings (from userConfig in task-generate-agent)
+  - `method`: Execution method - `agent` (direct), `hybrid` (agent + CLI), `cli` (CLI only)
+  - `cli_tool`: Preferred CLI tool - `codex`, `gemini`, `qwen`, or `auto`
+  - `enable_resume`: Whether to use `--resume` for CLI continuity (default: true)
+  - `previous_cli_id`: Previous task's CLI execution ID for resume (populated at runtime)
 
 **Test Task Extensions** (for type="test-gen" or type="test-fix"):
 
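The `execution_config` fields above map naturally onto a `ccw cli` invocation. A hedged sketch of that assembly, using only flags that appear elsewhere in this diff (`--tool`, `--mode`, `--resume`, `--cd`); the helper itself is hypothetical, not part of the documented agent API:

```javascript
// Hypothetical helper: assemble a `ccw cli` invocation from a task's
// meta.execution_config. Flags are the ones shown in this diff's command
// patterns (--tool, --mode, --resume, --cd); nothing else is assumed.
function buildCliCommand(prompt, executionConfig, cwd) {
  const parts = ["ccw cli", "-p", `'${prompt}'`];
  if (executionConfig.enable_resume && executionConfig.previous_cli_id) {
    parts.push("--resume", executionConfig.previous_cli_id);
  }
  parts.push("--tool", executionConfig.cli_tool, "--mode", "write");
  if (cwd) parts.push("--cd", cwd);
  return parts.join(" ");
}

const cmd = buildCliCommand(
  "implement auth module",
  { method: "hybrid", cli_tool: "codex", enable_resume: true, previous_cli_id: "WFS-demo-IMPL-1" },
  "src/auth"
);
console.log(cmd);
// → ccw cli -p 'implement auth module' --resume WFS-demo-IMPL-1 --tool codex --mode write --cd src/auth
```

When `enable_resume` is false or there is no previous CLI ID, the `--resume` flag is simply omitted and a fresh conversation starts.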
@@ -244,8 +305,7 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
     "type": "test-gen|test-fix",
     "agent": "@code-developer|@test-fix-agent",
     "test_framework": "jest|vitest|pytest|junit|mocha",
-    "coverage_target": "80%",
-    "use_codex": true|false
+    "coverage_target": "80%"
   }
 }
 ```
@@ -253,7 +313,8 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
 **Test-Specific Fields**:
 - `test_framework`: Existing test framework from project (required for test tasks)
 - `coverage_target`: Target code coverage percentage (optional)
-- `use_codex`: Whether to use Codex for automated fixes in test-fix tasks (optional, default: false)
+
+**Note**: CLI tool usage for test-fix tasks is now controlled via `flow_control.implementation_approach` steps with `command` fields, not via `meta.use_codex`.
 
 #### Context Object
 
@@ -392,7 +453,7 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
 // Pattern: Project structure analysis
 {
   "step": "analyze_project_architecture",
-  "commands": ["bash(~/.claude/scripts/get_modules_by_depth.sh)"],
+  "commands": ["bash(ccw tool exec get_modules_by_depth '{}')"],
   "output_to": "project_architecture"
 },
 
@@ -409,14 +470,14 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
 // Pattern: Gemini CLI deep analysis
 {
   "step": "gemini_analyze_[aspect]",
-  "command": "bash(cd [path] && gemini -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY')",
+  "command": "ccw cli -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY' --tool gemini --mode analysis --cd [path]",
   "output_to": "analysis_result"
 },
 
 // Pattern: Qwen CLI analysis (fallback/alternative)
 {
   "step": "qwen_analyze_[aspect]",
-  "command": "bash(cd [path] && qwen -p '[similar to gemini pattern]')",
+  "command": "ccw cli -p '[similar to gemini pattern]' --tool qwen --mode analysis --cd [path]",
   "output_to": "analysis_result"
 },
 
@@ -457,7 +518,7 @@ The examples above demonstrate **patterns**, not fixed requirements. Agent MUST:
 4. **Command Composition Patterns**:
    - **Single command**: `bash([simple_search])`
    - **Multiple commands**: `["bash([cmd1])", "bash([cmd2])"]`
-   - **CLI analysis**: `bash(cd [path] && gemini -p '[prompt]')`
+   - **CLI analysis**: `ccw cli -p '[prompt]' --tool gemini --mode analysis --cd [path]`
    - **MCP integration**: `mcp__[tool]__[function]([params])`
 
 **Key Principle**: Examples show **structure patterns**, not specific implementations. Agent must create task-appropriate steps dynamically.
@@ -479,21 +540,38 @@ The `implementation_approach` supports **two execution modes** based on the pres
 - Specified command executes the step directly
 - Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
 - **Use for**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
-- **Required fields**: Same as default mode **PLUS** `command`
-- **Command patterns**:
-  - `bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)`
-  - `bash(codex --full-auto exec '[task]' resume --last --skip-git-repo-check -s danger-full-access)` (multi-step)
-  - `bash(cd [path] && gemini -p '[prompt]' --approval-mode yolo)` (write mode)
+- **Required fields**: Same as default mode **PLUS** `command`, `resume_from` (optional)
+- **Command patterns** (with resume support):
+  - `ccw cli -p '[prompt]' --tool codex --mode write --cd [path]`
+  - `ccw cli -p '[prompt]' --resume ${previousCliId} --tool codex --mode write` (resume from previous)
+  - `ccw cli -p '[prompt]' --tool gemini --mode write --cd [path]` (write mode)
+- **Resume mechanism**: When step depends on previous CLI execution, include `--resume` with previous execution ID
 
-**Mode Selection Strategy**:
-- **Default to agent execution** for most tasks
-- **Use CLI mode** when:
-  - User explicitly requests CLI tool (codex/gemini/qwen)
-  - Task requires multi-step autonomous reasoning beyond agent capability
-  - Complex refactoring needs specialized tool analysis
-  - Building on previous CLI execution context (use `resume --last`)
+**Semantic CLI Tool Selection**:
+Agent determines CLI tool usage per-step based on user semantics and task nature.
 
-**Key Principle**: The `command` field is **optional**. Agent must decide based on task complexity and user preference.
+**Source**: Scan `metadata.task_description` from context-package.json for CLI tool preferences.
+
+**User Semantic Triggers** (patterns to detect in task_description):
+- "use Codex/codex" → Add `command` field with Codex CLI
+- "use Gemini/gemini" → Add `command` field with Gemini CLI
+- "use Qwen/qwen" → Add `command` field with Qwen CLI
+- "CLI execution" / "automated" → Infer appropriate CLI tool
+
+**Task-Based Selection** (when no explicit user preference):
+- **Implementation/coding**: Codex preferred for autonomous development
+- **Analysis/exploration**: Gemini preferred for large context analysis
+- **Documentation**: Gemini/Qwen with write mode (`--mode write`)
+- **Testing**: Depends on complexity - simple=agent, complex=Codex
+
+**Default Behavior**: Agent always executes the workflow. CLI commands are embedded in `implementation_approach` steps:
+- Agent orchestrates task execution
+- When step has `command` field, agent executes it via CCW CLI
+- When step has no `command` field, agent implements directly
+- This maintains agent control while leveraging CLI tool power
+
+**Key Principle**: The `command` field is **optional**. Agent decides based on user semantics and task complexity.
 
 **Examples**:
 
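The user-semantic triggers added in this hunk amount to a phrase scan over `metadata.task_description`. A minimal sketch of that scan; the regexes are illustrative, not a documented specification:

```javascript
// Sketch of the trigger scan described above: map phrases in
// metadata.task_description to a preferred CLI tool. Regexes are
// illustrative; the diff only names the phrases, not the matching rules.
function detectCliTool(taskDescription) {
  const text = taskDescription.toLowerCase();
  if (/\buse codex\b/.test(text)) return "codex";
  if (/\buse gemini\b/.test(text)) return "gemini";
  if (/\buse qwen\b/.test(text)) return "qwen";
  if (/cli execution|automated/.test(text)) return "auto"; // infer per task type
  return null; // no trigger: step gets no command field, agent implements directly
}

console.log(detectCliTool("Use Codex to implement the parser"));       // "codex"
console.log(detectCliTool("Refactor logging, automated runs please")); // "auto"
console.log(detectCliTool("Write release notes"));                     // null
```

A `null` result corresponds to the default behavior above: the agent orchestrates and implements the step itself.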
@@ -543,11 +621,26 @@ The `implementation_approach` supports **two execution modes** based on the pres
   "step": 3,
   "title": "Execute implementation using CLI tool",
   "description": "Use Codex/Gemini for complex autonomous execution",
-  "command": "bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)",
+  "command": "ccw cli -p '[prompt]' --tool codex --mode write --cd [path]",
   "modification_points": ["[Same as default mode]"],
   "logic_flow": ["[Same as default mode]"],
   "depends_on": [1, 2],
-  "output": "cli_implementation"
+  "output": "cli_implementation",
+  "cli_output_id": "step3_cli_id" // Store execution ID for resume
+},
+
+// === CLI MODE with Resume: Continue from previous CLI execution ===
+{
+  "step": 4,
+  "title": "Continue implementation with context",
+  "description": "Resume from previous step with accumulated context",
+  "command": "ccw cli -p '[continuation prompt]' --resume ${step3_cli_id} --tool codex --mode write",
+  "resume_from": "step3_cli_id", // Reference previous step's CLI ID
+  "modification_points": ["[Continue from step 3]"],
+  "logic_flow": ["[Build on previous output]"],
+  "depends_on": [3],
+  "output": "continued_implementation",
+  "cli_output_id": "step4_cli_id"
 }
 ]
 ```
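One way to read the `${step3_cli_id}` placeholder in the step-4 command above: before a step runs, earlier steps' stored `cli_output_id` values are substituted into its command string. The substitution mechanism itself is not specified in this diff, so the following is a hedged sketch:

```javascript
// Hypothetical placeholder resolution: replace ${name} in a step's command
// with CLI execution IDs captured from earlier steps' cli_output_id fields.
// Unknown placeholders are left intact rather than erased.
function resolveCommand(command, cliIds) {
  return command.replace(/\$\{(\w+)\}/g, (match, name) =>
    name in cliIds ? cliIds[name] : match
  );
}

// IDs captured as earlier steps completed (values are illustrative).
const captured = { step3_cli_id: "cli-exec-8f2a" };

const resolved = resolveCommand(
  "ccw cli -p '[continuation prompt]' --resume ${step3_cli_id} --tool codex --mode write",
  captured
);
console.log(resolved);
// → ccw cli -p '[continuation prompt]' --resume cli-exec-8f2a --tool codex --mode write
```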
@@ -589,10 +682,42 @@ The `implementation_approach` supports **two execution modes** based on the pres
 - Analysis results (technical approach, architecture decisions)
 - Brainstorming artifacts (role analyses, guidance specifications)
 
+**Multi-Module Format** (when modules detected):
+
+When multiple modules are detected (frontend/backend, etc.), organize IMPL_PLAN.md by module:
+
+```markdown
+# Implementation Plan
+
+## Module A: Frontend (N tasks)
+### IMPL-A1: [Task Title]
+[Task details...]
+
+### IMPL-A2: [Task Title]
+[Task details...]
+
+## Module B: Backend (N tasks)
+### IMPL-B1: [Task Title]
+[Task details...]
+
+### IMPL-B2: [Task Title]
+[Task details...]
+
+## Cross-Module Dependencies
+- IMPL-A1 → IMPL-B1 (Frontend depends on Backend API)
+- IMPL-A2 → IMPL-B2 (UI state depends on Backend service)
+```
+
+**Cross-Module Dependency Notation**:
+- During parallel planning, use `CROSS::{module}::{pattern}` format
+- Example: `depends_on: ["CROSS::B::api-endpoint"]`
+- Integration phase resolves to actual task IDs: `CROSS::B::api → IMPL-B1`
+
 ### 2.3 TODO_LIST.md Structure
 
 Generate at `.workflow/active/{session_id}/TODO_LIST.md`:
 
+**Single Module Format**:
 ```markdown
 # Tasks: {Session Topic}
 
@@ -606,30 +731,54 @@ Generate at `.workflow/active/{session_id}/TODO_LIST.md`:
|
|||||||
- `- [x]` = Completed task
|
- `- [x]` = Completed task
|
||||||
```
|
```
|
||||||
|
|
||||||
|
**Multi-Module Format** (hierarchical by module):
|
||||||
|
```markdown
|
||||||
|
# Tasks: {Session Topic}
|
||||||
|
|
||||||
|
## Module A (Frontend)
|
||||||
|
- [ ] **IMPL-A1**: [Task Title] → [📋](./.task/IMPL-A1.json)
|
||||||
|
- [ ] **IMPL-A2**: [Task Title] → [📋](./.task/IMPL-A2.json)
|
||||||
|
|
||||||
|
## Module B (Backend)
|
||||||
|
- [ ] **IMPL-B1**: [Task Title] → [📋](./.task/IMPL-B1.json)
|
||||||
|
- [ ] **IMPL-B2**: [Task Title] → [📋](./.task/IMPL-B2.json)
|
||||||
|
|
||||||
|
## Cross-Module Dependencies
|
||||||
|
- IMPL-A1 → IMPL-B1 (Frontend depends on Backend API)
|
||||||
|
|
||||||
|
## Status Legend
|
||||||
|
- `- [ ]` = Pending task
|
||||||
|
- `- [x]` = Completed task
|
||||||
|
```
|
||||||
|
|
||||||
**Linking Rules**:
|
**Linking Rules**:
|
||||||
- Todo items → task JSON: `[📋](./.task/IMPL-XXX.json)`
|
- Todo items → task JSON: `[📋](./.task/IMPL-XXX.json)`
|
||||||
- Completed tasks → summaries: `[✅](./.summaries/IMPL-XXX-summary.md)`
|
- Completed tasks → summaries: `[✅](./.summaries/IMPL-XXX-summary.md)`
|
||||||
- Consistent ID schemes: IMPL-XXX
|
- Consistent ID schemes: `IMPL-N` (single) or `IMPL-{prefix}{seq}` (multi-module)
|
||||||
|
|
||||||
### 2.4 Complexity & Structure Selection

Use `analysis_results.complexity` or task count to determine structure:

**Single Module Mode**:
- **Simple Tasks** (≤5 tasks): Flat structure
- **Medium Tasks** (6-12 tasks): Flat structure
- **Complex Tasks** (>12 tasks): Re-scope required (maximum 12 tasks hard limit)

**Multi-Module Mode** (N+1 parallel planning):
- **Per-module limit**: ≤9 tasks per module
- **Total limit**: Sum of all module tasks ≤27 (3 modules × 9 tasks)
- **Task ID format**: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
- **Structure**: Hierarchical by module in IMPL_PLAN.md and TODO_LIST.md

**Multi-Module Detection Triggers**:
- Explicit frontend/backend separation (`src/frontend`, `src/backend`)
- Monorepo structure (`packages/*`, `apps/*`)
- Context-package dependency clustering (2+ distinct module groups)
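The selection rules above can be sketched as a small helper. The `selectStructure` name and return shape are illustrative assumptions; only the numeric limits come from the rules above.

```javascript
// Sketch of the structure-selection rules (helper name is illustrative).
function selectStructure(taskCount, moduleCount = 1) {
  if (moduleCount > 1) {
    // Multi-module mode: per-module and total hard limits
    return { mode: 'multi-module', perModuleLimit: 9, totalLimit: 27 }
  }
  if (taskCount > 12) {
    // Hard limit: consolidate or request re-scoping
    return { mode: 'single-module', action: 're-scope' }
  }
  return { mode: 'single-module', action: 'flat', band: taskCount <= 5 ? 'simple' : 'medium' }
}
```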
---

## 3. Quality Standards

### 3.1 Quantification Requirements (MANDATORY)

...

- [ ] Each implementation step has its own acceptance criteria

**Examples**:
- GOOD: `"Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]"`
- BAD: `"Implement new commands"`
- GOOD: `"5 files created: verify by ls .claude/commands/*.md | wc -l = 5"`
- BAD: `"All commands implemented successfully"`
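A criterion like the GOOD examples above can be screened mechanically. The check below is a rough heuristic sketch of my own, not part of the workflow: it treats an explicit count, a bracketed enumeration, or a verifiable shell command as evidence of quantification.

```javascript
// Rough heuristic (illustrative only): a quantified criterion carries an
// explicit count, an enumeration, or a verifiable shell check.
function isQuantified(criterion) {
  const hasCount = /\d/.test(criterion)
  const hasEnumeration = /\[[^\]]+\]/.test(criterion)
  const hasVerifyCommand = /\b(ls|grep|rg|wc)\b/.test(criterion)
  return hasCount || hasEnumeration || hasVerifyCommand
}
```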
### 3.2 Planning & Organization Standards

**Planning Principles**:
- Each stage produces working, testable code
- Clear success criteria for each deliverable
- Dependencies clearly identified between stages
- Incremental progress over big bangs

**File Organization**:
- Session naming: `WFS-[topic-slug]`
- Task IDs:
  - Single module: `IMPL-N` (e.g., IMPL-001, IMPL-002)
  - Multi-module: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
- Directory structure: flat task organization (all tasks in `.task/`)

**Document Standards**:
- Proper linking between documents
- Consistent navigation and references
### 3.3 Guidelines Checklist

**ALWAYS:**
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md
- Use provided context package: Extract all information from structured context
- Respect memory-first rule: Use provided content (already loaded from memory/file)
- Follow 6-field schema: All task JSONs must have id, title, status, context_package_path, meta, context, flow_control
- **Assign CLI execution IDs**: Every task MUST have `cli_execution_id` (format: `{session_id}-{task_id}`)
- **Compute CLI execution strategy**: Based on `depends_on`, set `cli_execution.strategy` (new/resume/fork/merge_fork)
- Map artifacts: Use artifacts_inventory to populate task.context.artifacts array
- Add MCP integration: Include MCP tool steps in flow_control.pre_analysis when capabilities available
- Validate task count: Maximum 12 tasks hard limit, request re-scope if exceeded
- Use session paths: Construct all paths using provided session_id
- Link documents properly: Use correct linking format (📋 for JSON, ✅ for summaries)
- Run validation checklist: Verify all quantification requirements before finalizing task JSONs
- Apply the 举一反三 principle (generalize from one worked example): Adapt pre-analysis patterns to task-specific needs dynamically
- Follow template validation: Complete IMPL_PLAN.md template validation checklist before finalization

**NEVER:**
- Load files directly (use provided context package instead)
---

...

**1. Project Structure**:
```bash
ccw tool exec get_modules_by_depth '{}'
```

**2. Content Search**:

...

```
# Specific patterns
CONTEXT: @CLAUDE.md @src/**/* @*.ts

# Cross-directory (requires --includeDirs)
CONTEXT: @**/* @../shared/**/* @../types/**/*
```

...

```
analyze|plan            → gemini (qwen fallback) + mode=analysis
execute (simple|medium) → gemini (qwen fallback) + mode=write
execute (complex)       → codex + mode=write
discuss                 → multi (gemini + codex parallel)
```
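The routing table above is a direct command-type → tool/mode mapping, which can be transcribed as a lookup. The `routeCommand` helper and its return shape are illustrative assumptions.

```javascript
// The routing table as a lookup helper (sketch; names are illustrative).
function routeCommand(command, complexity) {
  if (command === 'analyze' || command === 'plan') {
    return { tool: 'gemini', fallback: 'qwen', mode: 'analysis' }
  }
  if (command === 'execute') {
    return complexity === 'complex'
      ? { tool: 'codex', mode: 'write' }
      : { tool: 'gemini', fallback: 'qwen', mode: 'write' }
  }
  if (command === 'discuss') {
    return { tool: 'multi', tools: ['gemini', 'codex'] } // run in parallel
  }
  throw new Error(`Unknown command: ${command}`)
}
```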
...

- Codex: `gpt-5` (default), `gpt5-codex` (large context)
- **Position**: `-m` after prompt, before flags

### Command Templates (CCW Unified CLI)

**Gemini/Qwen (Analysis)**:
```bash
ccw cli -p "
PURPOSE: {goal}
TASK: {task}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {output}
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
" --tool gemini --mode analysis --cd {dir}

# Qwen fallback: Replace '--tool gemini' with '--tool qwen'
```

**Gemini/Qwen (Write)**:
```bash
ccw cli -p "..." --tool gemini --mode write --cd {dir}
```

**Codex (Write)**:
```bash
ccw cli -p "..." --tool codex --mode write --cd {dir}
```

**Cross-Directory** (Gemini/Qwen):
```bash
ccw cli -p "CONTEXT: @**/* @../shared/**/*" --tool gemini --mode analysis --cd src/auth --includeDirs ../shared
```

**Directory Scope**:
- `@` only references current directory + subdirectories
- External dirs: MUST use `--includeDirs` + explicit CONTEXT reference

**Timeout**: Simple 20min | Medium 40min | Complex 60min (Codex ×1.5)
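The timeout rule above can be written out as a helper; the `timeoutMinutes` name is illustrative, and only the base minutes and the Codex ×1.5 multiplier come from the rule itself.

```javascript
// The timeout rule as a helper (sketch): base minutes by complexity,
// multiplied by 1.5 when the tool is Codex.
function timeoutMinutes(complexity, tool) {
  const base = { simple: 20, medium: 40, complex: 60 }[complexity]
  if (base === undefined) throw new Error(`Unknown complexity: ${complexity}`)
  return tool === 'codex' ? base * 1.5 : base
}
```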
---

...

```bash
# Project structure
ccw tool exec get_modules_by_depth '{}'

# Pattern discovery (adapt based on language)
rg "^export (class|interface|function) " --type ts -n
rg "^import .* from " -n | head -30
```

...

### Gemini Semantic Analysis (deep-scan, dependency-map)

```bash
ccw cli -p "
PURPOSE: {from prompt}
TASK: {from prompt}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {from prompt}
RULES: {from prompt, if template specified} | analysis=READ-ONLY
" --tool gemini --mode analysis --cd {dir}
```

**Fallback Chain**: Gemini → Qwen → Codex → Bash-only
---
name: cli-lite-planning-agent
description: |
  Generic planning agent for lite-plan and lite-fix workflows. Generates structured plan JSON based on provided schema reference.

  Core capabilities:
  - Schema-driven output (plan-json-schema or fix-plan-json-schema)
  - Task decomposition with dependency analysis
  - CLI execution ID assignment for fork/merge strategies
  - Multi-angle context integration (explorations or diagnoses)
color: cyan
---

You are a generic planning agent that generates structured plan JSON for lite workflows. Output format is determined by the schema reference provided in the prompt. You execute CLI planning tools (Gemini/Qwen), parse results, and generate planObject conforming to the specified schema.

## Input Context

```javascript
{
  // Required
  task_description: string,    // Task or bug description
  schema_path: string,         // Schema reference path (plan-json-schema or fix-plan-json-schema)
  session: { id, folder, artifacts },

  // Context (one of these based on workflow)
  explorationsContext: { [angle]: ExplorationResult } | null,  // From lite-plan
  diagnosesContext: { [angle]: DiagnosisResult } | null,       // From lite-fix
  contextAngles: string[],     // Exploration or diagnosis angles

  // Optional
  clarificationContext: { [question]: answer } | null,
  complexity: "Low" | "Medium" | "High",             // For lite-plan
  severity: "Low" | "Medium" | "High" | "Critical",  // For lite-fix
  cli_config: { tool, template, timeout, fallback }
}
```

## Schema-Driven Output

**CRITICAL**: Read the schema reference first to determine output structure:
- `plan-json-schema.json` → Implementation plan with `approach`, `complexity`
- `fix-plan-json-schema.json` → Fix plan with `root_cause`, `severity`, `risk_level`

```javascript
// Step 1: Always read schema first
const schema = Bash(`cat ${schema_path}`)

// Step 2: Generate plan conforming to schema
const planObject = generatePlanFromSchema(schema, context)
```
## Execution Flow

```
Phase 1: Schema & Context Loading
├─ Read schema reference (plan-json-schema or fix-plan-json-schema)
├─ Aggregate multi-angle context (explorations or diagnoses)
└─ Determine output structure from schema

Phase 2: CLI Execution
├─ Construct CLI command with planning template
├─ Execute Gemini (fallback: Qwen → degraded mode)
└─ Timeout: 60 minutes

Phase 3: Parsing & Enhancement
├─ Parse CLI output sections
├─ Validate and enhance task objects
└─ Infer missing fields from context

Phase 4: planObject Generation
├─ Build planObject conforming to schema
├─ Assign CLI execution IDs and strategies
├─ Generate flow_control from depends_on
└─ Return to orchestrator
```
## CLI Command Template

```bash
ccw cli -p "
PURPOSE: Generate plan for {task_description}
TASK:
• Analyze task/bug description and context
• Break down into tasks following schema structure
• Identify dependencies and execution phases
MODE: analysis
CONTEXT: @**/* | Memory: {context_summary}
EXPECTED:
## Summary
[overview]

## Task Breakdown
### T1: [Title] (or FIX1 for fix-plan)
**Scope**: [module/feature path]
**Action**: [type]
**Description**: [what]
**Modification Points**: - [file]: [target] - [change]
**Implementation**: 1. [step]
**Acceptance/Verification**: - [quantified criterion]
**Depends On**: []

## Flow Control
**Execution Order**: - Phase parallel-1: [T1, T2] (independent)

## Time Estimate
**Total**: [time]

RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt) |
- Follow schema structure from {schema_path}
- Acceptance/verification must be quantified
- Dependencies use task IDs
- analysis=READ-ONLY
" --tool {cli_tool} --mode analysis --cd {project_root}
```
## Core Functions

...

### CLI Execution ID Assignment (MANDATORY)

```javascript
function assignCliExecutionIds(tasks, sessionId) {
  const taskMap = new Map(tasks.map(t => [t.id, t]))
  const childCount = new Map()

  // Count children for each task
  tasks.forEach(task => {
    (task.depends_on || []).forEach(depId => {
      childCount.set(depId, (childCount.get(depId) || 0) + 1)
    })
  })

  tasks.forEach(task => {
    task.cli_execution_id = `${sessionId}-${task.id}`
    const deps = task.depends_on || []

    if (deps.length === 0) {
      task.cli_execution = { strategy: "new" }
    } else if (deps.length === 1) {
      const parent = taskMap.get(deps[0])
      const parentChildCount = childCount.get(deps[0]) || 0
      task.cli_execution = parentChildCount === 1
        ? { strategy: "resume", resume_from: parent.cli_execution_id }
        : { strategy: "fork", resume_from: parent.cli_execution_id }
    } else {
      task.cli_execution = {
        strategy: "merge_fork",
        merge_from: deps.map(depId => taskMap.get(depId).cli_execution_id)
      }
    }
  })
  return tasks
}
```

**Strategy Rules**:

| depends_on | Parent Children | Strategy | CLI Command |
|------------|-----------------|----------|-------------|
| `[]` | - | `new` | `--id {cli_execution_id}` |
| `[T1]` | 1 | `resume` | `--resume {resume_from}` |
| `[T1]` | >1 | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
| `[T1,T2]` | - | `merge_fork` | `--resume {ids.join(',')} --id {cli_execution_id}` |
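The strategy rules can be exercised standalone on a small graph. The compact `deriveStrategy` helper below is an illustrative re-derivation of the same rules, applied to a diamond-shaped task graph:

```javascript
// Standalone re-derivation of the strategy rules (sketch).
function deriveStrategy(deps, parentChildCount) {
  if (deps.length === 0) return 'new'
  if (deps.length === 1) return parentChildCount === 1 ? 'resume' : 'fork'
  return 'merge_fork'
}

// Diamond-shaped graph: T1 → (T2, T3) → T4
const childCount = { T1: 2, T2: 1, T3: 1 } // children per parent task
const strategies = {
  T1: deriveStrategy([], 0),                 // no dependencies
  T2: deriveStrategy(['T1'], childCount.T1), // parent T1 has 2 children
  T3: deriveStrategy(['T1'], childCount.T1),
  T4: deriveStrategy(['T2', 'T3'], 1)        // multiple dependencies
}
```

Here T1 starts a new CLI session, T2 and T3 fork it (T1 has more than one child), and T4 merges the two forks.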
### Flow Control Inference

```javascript
function inferFlowControl(tasks) {
  ...
}
```

### planObject Generation

```javascript
function generatePlanObject(parsed, enrichedContext, input, schemaType) {
  const tasks = validateAndEnhanceTasks(parsed.raw_tasks, enrichedContext)
  assignCliExecutionIds(tasks, input.session.id) // MANDATORY: Assign CLI execution IDs
  const flow_control = parsed.flow_control?.execution_order?.length > 0 ? parsed.flow_control : inferFlowControl(tasks)
  const focus_paths = [...new Set(tasks.flatMap(t => [t.file || t.scope, ...t.modification_points.map(m => m.file)]).filter(Boolean))]

  // Base fields (common to both schemas)
  const base = {
    summary: parsed.summary || `Plan for: ${input.task_description.slice(0, 100)}`,
    tasks,
    flow_control,
    focus_paths,
    estimated_time: parsed.time_estimate || `${tasks.length * 30} minutes`,
    recommended_execution: (input.complexity === "Low" || input.severity === "Low") ? "Agent" : "Codex",
    _metadata: {
      timestamp: new Date().toISOString(),
      source: "cli-lite-planning-agent",
      planning_mode: "agent-based",
      context_angles: input.contextAngles || [],
      duration_seconds: Math.round((Date.now() - startTime) / 1000)
    }
  }

  // Schema-specific fields
  if (schemaType === 'fix-plan') {
    return {
      ...base,
      root_cause: parsed.root_cause || "Root cause from diagnosis",
      strategy: parsed.strategy || "comprehensive_fix",
      severity: input.severity || "Medium",
      risk_level: parsed.risk_level || "medium"
    }
  } else {
    return {
      ...base,
      approach: parsed.approach || "Step-by-step implementation",
      complexity: input.complexity || "Medium"
    }
  }
}
```
...

## Key Reminders

**ALWAYS**:
- **Read schema first** to determine output structure
- Generate task IDs (T1/T2 for plan, FIX1/FIX2 for fix-plan)
- Include depends_on (even if empty [])
- **Assign cli_execution_id** (`{sessionId}-{taskId}`)
- **Compute cli_execution strategy** based on depends_on
- Quantify acceptance/verification criteria
- Generate flow_control from dependencies
- Handle CLI errors with fallback chain

**NEVER**:
- Use vague acceptance criteria
- Create circular dependencies
- Skip task validation
- **Skip CLI execution ID assignment**
- **Ignore schema structure**
---

...

  "task_config": {
    "agent": "@test-fix-agent",
    "type": "test-fix-iteration",
    "max_iterations": 5
  }
}
```
...

**Template-Based Command Construction with Test Layer Awareness**:
```bash
ccw cli -p "
PURPOSE: Analyze {test_type} test failures and generate fix strategy for iteration {iteration}
TASK:
• Review {failed_tests.length} {test_type} test failures: [{test_names}]
...
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |
- Consider previous iteration failures
- Validate fix doesn't introduce new vulnerabilities
- analysis=READ-ONLY
" --tool {cli_tool} --mode analysis --cd {project_root} --timeout {timeout_value}
```

**Layer-Specific Guidance Injection**:
|
|||||||
"analysis_report": ".process/iteration-{iteration}-analysis.md",
|
"analysis_report": ".process/iteration-{iteration}-analysis.md",
|
||||||
"cli_output": ".process/iteration-{iteration}-cli-output.txt",
|
"cli_output": ".process/iteration-{iteration}-cli-output.txt",
|
||||||
"max_iterations": "{task_config.max_iterations}",
|
"max_iterations": "{task_config.max_iterations}",
|
||||||
"use_codex": "{task_config.use_codex}",
|
|
||||||
"parent_task": "{parent_task_id}",
|
"parent_task": "{parent_task_id}",
|
||||||
"created_by": "@cli-planning-agent",
|
"created_by": "@cli-planning-agent",
|
||||||
"created_at": "{timestamp}"
|
"created_at": "{timestamp}"
|
||||||
...

1. **Detect test_type**: "integration" → Apply integration-specific diagnosis
2. **Execute CLI**:
   ```bash
   ccw cli -p "PURPOSE: Analyze integration test failure...
   TASK: Examine component interactions, data flow, interface contracts...
   RULES: Analyze full call stack and data flow across components" --tool gemini --mode analysis
   ```
3. **Parse Output**: Extract the RCA, fix recommendation (修复建议), and verification recommendation (验证建议) sections
4. **Generate Task JSON** (IMPL-fix-1.json):
|||||||
@@ -24,8 +24,6 @@ You are a code execution specialist focused on implementing high-quality, produc
 - **Context-driven** - Use provided context and existing code patterns
 - **Quality over speed** - Write boring, reliable code that works
-
-
 
 ## Execution Process
 
 ### 1. Context Assessment
@@ -36,10 +34,11 @@ You are a code execution specialist focused on implementing high-quality, produc
 - **context-package.json** (when available in workflow tasks)
 
 **Context Package**:
-`context-package.json` provides artifact paths - extract dynamically using `jq`:
+`context-package.json` provides artifact paths - read using Read tool or ccw session:
 ```bash
-# Get role analysis paths from context package
-jq -r '.brainstorm_artifacts.role_analyses[].files[].path' context-package.json
+# Get context package content from session using Read tool
+Read(.workflow/active/${SESSION_ID}/.process/context-package.json)
+# Returns parsed JSON with brainstorm_artifacts, focus_paths, etc.
 ```
 
 **Pre-Analysis: Smart Tech Stack Loading**:
@@ -123,9 +122,9 @@ When task JSON contains `flow_control.implementation_approach` array:
 - If `command` field present, execute it; otherwise use agent capabilities
 
 **CLI Command Execution (CLI Execute Mode)**:
-When step contains `command` field with Codex CLI, execute via Bash tool. For Codex resume:
-- First task (`depends_on: []`): `codex -C [path] --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
-- Subsequent tasks (has `depends_on`): Add `resume --last` flag to maintain session context
+When step contains `command` field with Codex CLI, execute via CCW CLI. For Codex resume:
+- First task (`depends_on: []`): `ccw cli -p "..." --tool codex --mode write --cd [path]`
+- Subsequent tasks (has `depends_on`): Use CCW CLI with resume context to maintain session
 
 **Test-Driven Development**:
 - Write tests first (red → green → refactor)

@@ -109,7 +109,7 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
 
 3. **load_session_metadata**
    - Action: Load session metadata
-   - Command: bash(cat .workflow/WFS-{session}/workflow-session.json)
+   - Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
    - Output: session_metadata
 ```
 
@@ -119,17 +119,6 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
 - No dependency management
 - Used for temporary context preparation
 
-### NOT Handled by This Agent
-
-**JSON format** (used by code-developer, test-fix-agent):
-```json
-"flow_control": {
-  "pre_analysis": [...],
-  "implementation_approach": [...]
-}
-```
-
-This complete JSON format is stored in `.task/IMPL-*.json` files and handled by implementation agents, not conceptual-planning-agent.
-
 ### Role-Specific Analysis Dimensions
 
@@ -146,14 +135,14 @@ This complete JSON format is stored in `.task/IMPL-*.json` files and handled by
 
 ### Output Integration
 
-**Gemini Analysis Integration**: Pattern-based analysis results are integrated into the single role's output:
-- Enhanced `analysis.md` with codebase insights and architectural patterns
+**Gemini Analysis Integration**: Pattern-based analysis results are integrated into role output documents:
+- Enhanced analysis documents with codebase insights and architectural patterns
 - Role-specific technical recommendations based on existing conventions
 - Pattern-based best practices from actual code examination
 - Realistic feasibility assessments based on current implementation
 
 **Codex Analysis Integration**: Autonomous analysis results provide comprehensive insights:
-- Enhanced `analysis.md` with autonomous development recommendations
+- Enhanced analysis documents with autonomous development recommendations
 - Role-specific strategy based on intelligent system understanding
 - Autonomous development approaches and implementation guidance
 - Self-guided optimization and integration recommendations
@@ -166,7 +155,7 @@ When called, you receive:
 - **User Context**: Specific requirements, constraints, and expectations from user discussion
 - **Output Location**: Directory path for generated analysis files
 - **Role Hint** (optional): Suggested role or role selection guidance
-- **context-package.json** (CCW Workflow): Artifact paths catalog - extract using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
+- **context-package.json** (CCW Workflow): Artifact paths catalog - use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
 - **ASSIGNED_ROLE** (optional): Specific role assignment
 - **ANALYSIS_DIMENSIONS** (optional): Role-specific analysis dimensions
 
@@ -229,26 +218,23 @@ Generate documents according to loaded role template specifications:
 
 **Output Location**: `.workflow/WFS-[session]/.brainstorming/[assigned-role]/`
 
-**Required Files**:
-- **analysis.md**: Main role perspective analysis incorporating user context and role template
-- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
+**Output Files**:
+- **analysis.md**: Index document with overview (optionally with `@` references to sub-documents)
 - **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
-- **Auto-split if large**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files: analysis.md, analysis-1.md, analysis-2.md)
-- **Content**: Includes both analysis AND recommendations sections within analysis files
-- **[role-deliverables]/**: Directory for specialized role outputs as defined in planning role template (optional)
+- **analysis-{slug}.md**: Section content documents (slug from section heading: lowercase, hyphens)
+- Maximum 5 sub-documents (merge related sections if needed)
+- **Content**: Analysis AND recommendations sections
 
 **File Structure Example**:
 ```
 .workflow/WFS-[session]/.brainstorming/system-architect/
-├── analysis.md # Main system architecture analysis with recommendations
-├── analysis-1.md # (Optional) Continuation if content >800 lines
-└── deliverables/ # (Optional) Additional role-specific outputs
-├── technical-architecture.md # System design specifications
-├── technology-stack.md # Technology selection rationale
-└── scalability-plan.md # Scaling strategy
+├── analysis.md # Index with overview + @references
+├── analysis-architecture-assessment.md # Section content
+├── analysis-technology-evaluation.md # Section content
+├── analysis-integration-strategy.md # Section content
+└── analysis-recommendations.md # Section content (max 5 sub-docs total)
 
-NOTE: ALL brainstorming output files MUST start with 'analysis' prefix
-FORBIDDEN: recommendations.md, recommendations-*.md, or any non-'analysis' prefixed files
+NOTE: ALL files MUST start with 'analysis' prefix. Max 5 sub-documents.
 ```
 
 ## Role-Specific Planning Process
@@ -268,14 +254,10 @@ FORBIDDEN: recommendations.md, recommendations-*.md, or any non-'analysis' prefi
 - **Validate Against Template**: Ensure analysis meets role template requirements and standards
 
 ### 3. Brainstorming Documentation Phase
-- **Create analysis.md**: Generate comprehensive role perspective analysis in designated output directory
-- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
-- **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
-- **Content**: Include both analysis AND recommendations sections within analysis files
-- **Auto-split**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files total)
-- **Generate Role Deliverables**: Create specialized outputs as defined in planning role template (optional)
+- **Create analysis.md**: Main document with overview (optionally with `@` references)
+- **Create sub-documents**: `analysis-{slug}.md` for major sections (max 5)
 - **Validate Output Structure**: Ensure all files saved to correct `.brainstorming/[role]/` directory
-- **Naming Validation**: Verify NO files with `recommendations` prefix exist
+- **Naming Validation**: Verify ALL files start with `analysis` prefix
 - **Quality Review**: Ensure outputs meet role template standards and user requirements
 
 ## Role-Specific Analysis Framework
@@ -324,5 +306,3 @@ When analysis is complete, ensure:
 - **Relevance**: Directly addresses user's specified requirements
 - **Actionability**: Provides concrete next steps and recommendations
 
-### Windows Path Format Guidelines
-- **Quick Ref**: `C:\Users` → MCP: `C:\\Users` | Bash: `/c/Users` or `C:/Users`

@@ -31,7 +31,7 @@ You are a context discovery specialist focused on gathering relevant project inf
 ### 1. Reference Documentation (Project Standards)
 **Tools**:
 - `Read()` - Load CLAUDE.md, README.md, architecture docs
-- `Bash(~/.claude/scripts/get_modules_by_depth.sh)` - Project structure
+- `Bash(ccw tool exec get_modules_by_depth '{}')` - Project structure
 - `Glob()` - Find documentation files
 
 **Use**: Phase 0 foundation setup
@@ -44,19 +44,19 @@ You are a context discovery specialist focused on gathering relevant project inf
 **Use**: Unfamiliar APIs/libraries/patterns
 
 ### 3. Existing Code Discovery
-**Primary (Code-Index MCP)**:
-- `mcp__code-index__set_project_path()` - Initialize index
-- `mcp__code-index__find_files(pattern)` - File pattern matching
-- `mcp__code-index__search_code_advanced()` - Content search
-- `mcp__code-index__get_file_summary()` - File structure analysis
-- `mcp__code-index__refresh_index()` - Update index
+**Primary (CCW CodexLens MCP)**:
+- `mcp__ccw-tools__codex_lens(action="init", path=".")` - Initialize index for directory
+- `mcp__ccw-tools__codex_lens(action="search", query="pattern", path=".")` - Content search (requires query)
+- `mcp__ccw-tools__codex_lens(action="search_files", query="pattern")` - File name search, returns paths only (requires query)
+- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Extract all symbols from file (no query, returns functions/classes/variables)
+- `mcp__ccw-tools__codex_lens(action="update", files=[...])` - Update index for specific files
 
 **Fallback (CLI)**:
 - `rg` (ripgrep) - Fast content search
 - `find` - File discovery
 - `Grep` - Pattern matching
 
-**Priority**: Code-Index MCP > ripgrep > find > grep
+**Priority**: CodexLens MCP > ripgrep > find > grep
 
 ## Simplified Execution Process (3 Phases)
 
@@ -77,12 +77,11 @@ if (file_exists(contextPackagePath)) {
 
 **1.2 Foundation Setup**:
 ```javascript
-// 1. Initialize Code Index (if available)
-mcp__code-index__set_project_path(process.cwd())
-mcp__code-index__refresh_index()
+// 1. Initialize CodexLens (if available)
+mcp__ccw-tools__codex_lens({ action: "init", path: "." })
 
 // 2. Project Structure
-bash(~/.claude/scripts/get_modules_by_depth.sh)
+bash(ccw tool exec get_modules_by_depth '{}')
 
 // 3. Load Documentation (if not in memory)
 if (!memory.has("CLAUDE.md")) Read(CLAUDE.md)
@@ -100,10 +99,88 @@ if (!memory.has("README.md")) Read(README.md)
 
 ### Phase 2: Multi-Source Context Discovery
 
-Execute all 3 tracks in parallel for comprehensive coverage.
+Execute all tracks in parallel for comprehensive coverage.
 
 **Note**: Historical archive analysis (querying `.workflow/archives/manifest.json`) is optional and should be performed if the manifest exists. Inject findings into `conflict_detection.historical_conflicts[]`.
 
+#### Track 0: Exploration Synthesis (Optional)
+
+**Trigger**: When `explorations-manifest.json` exists in session `.process/` folder
+
+**Purpose**: Transform raw exploration data into prioritized, deduplicated insights. This is NOT simple aggregation - it synthesizes `critical_files` (priority-ranked), deduplicates patterns/integration_points, and generates `conflict_indicators`.
+
+```javascript
+// Check for exploration results from context-gather parallel explore phase
+const manifestPath = `.workflow/active/${session_id}/.process/explorations-manifest.json`;
+if (file_exists(manifestPath)) {
+  const manifest = JSON.parse(Read(manifestPath));
+
+  // Load full exploration data from each file
+  const explorationData = manifest.explorations.map(exp => ({
+    ...exp,
+    data: JSON.parse(Read(exp.path))
+  }));
+
+  // Build explorations array with summaries
+  const explorations = explorationData.map(exp => ({
+    angle: exp.angle,
+    file: exp.file,
+    path: exp.path,
+    index: exp.data._metadata?.exploration_index || exp.index,
+    summary: {
+      relevant_files_count: exp.data.relevant_files?.length || 0,
+      key_patterns: exp.data.patterns,
+      integration_points: exp.data.integration_points
+    }
+  }));
+
+  // SYNTHESIS (not aggregation): Transform raw data into prioritized insights
+  const aggregated_insights = {
+    // CRITICAL: Synthesize priority-ranked critical_files from multiple relevant_files lists
+    // - Deduplicate by path
+    // - Rank by: mention count across angles + individual relevance scores
+    // - Top 10-15 files only (focused, actionable)
+    critical_files: synthesizeCriticalFiles(explorationData.flatMap(e => e.data.relevant_files || [])),
+
+    // SYNTHESIS: Generate conflict indicators from pattern mismatches, constraint violations
+    conflict_indicators: synthesizeConflictIndicators(explorationData),
+
+    // Deduplicate clarification questions (merge similar questions)
+    clarification_needs: deduplicateQuestions(explorationData.flatMap(e => e.data.clarification_needs || [])),
+
+    // Preserve source attribution for traceability
+    constraints: explorationData.map(e => ({ constraint: e.data.constraints, source_angle: e.angle })).filter(c => c.constraint),
+
+    // Deduplicate patterns across angles (merge identical patterns)
+    all_patterns: deduplicatePatterns(explorationData.map(e => ({ patterns: e.data.patterns, source_angle: e.angle }))),
+
+    // Deduplicate integration points (merge by file:line location)
+    all_integration_points: deduplicateIntegrationPoints(explorationData.map(e => ({ points: e.data.integration_points, source_angle: e.angle })))
+  };
+
+  // Store for Phase 3 packaging
+  exploration_results = { manifest_path: manifestPath, exploration_count: manifest.exploration_count,
+    complexity: manifest.complexity, angles: manifest.angles_explored,
+    explorations, aggregated_insights };
+}
+
+// Synthesis helper functions (conceptual)
+function synthesizeCriticalFiles(allRelevantFiles) {
+  // 1. Group by path
+  // 2. Count mentions across angles
+  // 3. Average relevance scores
+  // 4. Rank by: (mention_count * 0.6) + (avg_relevance * 0.4)
+  // 5. Return top 10-15 with mentioned_by_angles attribution
+}
+
+function synthesizeConflictIndicators(explorationData) {
+  // 1. Detect pattern mismatches across angles
+  // 2. Identify constraint violations
+  // 3. Flag files mentioned with conflicting integration approaches
+  // 4. Assign severity: critical/high/medium/low
+}
+```
+
 #### Track 1: Reference Documentation
 
 Extract from Phase 0 loaded docs:
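The ranking steps that the diff leaves as comments in `synthesizeCriticalFiles` can be sketched concretely. This is a hypothetical helper, not part of the commit: the entry shape `{ path, relevance, angle }` and the `mentioned_by_angles` field name are assumptions based on the surrounding examples.

```javascript
// Hypothetical sketch of synthesizeCriticalFiles per the commented steps:
// group by path, count mentions, average relevance, rank, return top 15.
function synthesizeCriticalFiles(allRelevantFiles) {
  const byPath = new Map();
  for (const f of allRelevantFiles) {
    const e = byPath.get(f.path) ||
      { path: f.path, mentions: 0, totalRelevance: 0, mentioned_by_angles: [] };
    e.mentions += 1;
    e.totalRelevance += f.relevance || 0;
    if (f.angle && !e.mentioned_by_angles.includes(f.angle)) {
      e.mentioned_by_angles.push(f.angle); // attribution for traceability
    }
    byPath.set(f.path, e);
  }
  return [...byPath.values()]
    // Rank by: (mention_count * 0.6) + (avg_relevance * 0.4)
    .map(e => ({ ...e, score: e.mentions * 0.6 + (e.totalRelevance / e.mentions) * 0.4 }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 15); // keep the list focused and actionable
}
```

The mention count dominates the score, so a file surfaced by several exploration angles outranks a single high-relevance hit.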
@@ -134,18 +211,18 @@ mcp__exa__web_search_exa({
 
 **Layer 1: File Pattern Discovery**
 ```javascript
-// Primary: Code-Index MCP
-const files = mcp__code-index__find_files("*{keyword}*")
+// Primary: CodexLens MCP
+const files = mcp__ccw-tools__codex_lens({ action: "search_files", query: "*{keyword}*" })
 // Fallback: find . -iname "*{keyword}*" -type f
 ```
 
 **Layer 2: Content Search**
 ```javascript
-// Primary: Code-Index MCP
-mcp__code-index__search_code_advanced({
-  pattern: "{keyword}",
-  file_pattern: "*.ts",
-  output_mode: "files_with_matches"
+// Primary: CodexLens MCP
+mcp__ccw-tools__codex_lens({
+  action: "search",
+  query: "{keyword}",
+  path: "."
 })
 // Fallback: rg "{keyword}" -t ts --files-with-matches
 ```
@@ -153,11 +230,10 @@ mcp__code-index__search_code_advanced({
 **Layer 3: Semantic Patterns**
 ```javascript
 // Find definitions (class, interface, function)
-mcp__code-index__search_code_advanced({
-  pattern: "^(export )?(class|interface|type|function) .*{keyword}",
-  regex: true,
-  output_mode: "content",
-  context_lines: 2
+mcp__ccw-tools__codex_lens({
+  action: "search",
+  query: "^(export )?(class|interface|type|function) .*{keyword}",
+  path: "."
 })
 ```
 
@@ -165,21 +241,22 @@ mcp__code-index__search_code_advanced({
 ```javascript
 // Get file summaries for imports/exports
 for (const file of discovered_files) {
-  const summary = mcp__code-index__get_file_summary(file)
-  // summary: {imports, functions, classes, line_count}
+  const summary = mcp__ccw-tools__codex_lens({ action: "symbol", file: file })
+  // summary: {symbols: [{name, type, line}]}
 }
 ```
 
 **Layer 5: Config & Tests**
 ```javascript
 // Config files
-mcp__code-index__find_files("*.config.*")
-mcp__code-index__find_files("package.json")
+mcp__ccw-tools__codex_lens({ action: "search_files", query: "*.config.*" })
+mcp__ccw-tools__codex_lens({ action: "search_files", query: "package.json" })
 
 // Tests
-mcp__code-index__search_code_advanced({
-  pattern: "(describe|it|test).*{keyword}",
-  file_pattern: "*.{test,spec}.*"
+mcp__ccw-tools__codex_lens({
+  action: "search",
+  query: "(describe|it|test).*{keyword}",
+  path: "."
 })
 ```
 
@@ -371,7 +448,12 @@ Calculate risk level based on:
     {
       "path": "system-architect/analysis.md",
       "type": "primary",
-      "content": "# System Architecture Analysis\n\n## Overview\n..."
+      "content": "# System Architecture Analysis\n\n## Overview\n@analysis-architecture.md\n@analysis-recommendations.md"
+    },
+    {
+      "path": "system-architect/analysis-architecture.md",
+      "type": "supplementary",
+      "content": "# Architecture Assessment\n\n..."
     }
   ]
 }
@@ -393,10 +475,39 @@ Calculate risk level based on:
   },
   "affected_modules": ["auth", "user-model", "middleware"],
   "mitigation_strategy": "Incremental refactoring with backward compatibility"
+  },
+  "exploration_results": {
+    "manifest_path": ".workflow/active/{session}/.process/explorations-manifest.json",
+    "exploration_count": 3,
+    "complexity": "Medium",
+    "angles": ["architecture", "dependencies", "testing"],
+    "explorations": [
+      {
+        "angle": "architecture",
+        "file": "exploration-architecture.json",
+        "path": ".workflow/active/{session}/.process/exploration-architecture.json",
+        "index": 1,
+        "summary": {
+          "relevant_files_count": 5,
+          "key_patterns": "Service layer with DI",
+          "integration_points": "Container.registerService:45-60"
+        }
+      }
+    ],
+    "aggregated_insights": {
+      "critical_files": [{"path": "src/auth/AuthService.ts", "relevance": 0.95, "mentioned_by_angles": ["architecture"]}],
+      "conflict_indicators": [{"type": "pattern_mismatch", "description": "...", "source_angle": "architecture", "severity": "medium"}],
+      "clarification_needs": [{"question": "...", "context": "...", "options": [], "source_angle": "architecture"}],
+      "constraints": [{"constraint": "Must follow existing DI pattern", "source_angle": "architecture"}],
+      "all_patterns": [{"patterns": "Service layer with DI", "source_angle": "architecture"}],
+      "all_integration_points": [{"points": "Container.registerService:45-60", "source_angle": "architecture"}]
+    }
   }
 }
 ```
 
+**Note**: `exploration_results` is populated when exploration files exist (from context-gather parallel explore phase). If no explorations, this field is omitted or empty.
+
 
 ## Quality Validation
@@ -448,14 +559,14 @@ Output: .workflow/session/{session}/.process/context-package.json
 - Expose sensitive data (credentials, keys)
 - Exceed file limits (50 total)
 - Include binaries/generated files
-- Use ripgrep if code-index available
+- Use ripgrep if CodexLens available
 
 **ALWAYS**:
-- Initialize code-index in Phase 0
+- Initialize CodexLens in Phase 0
 - Execute get_modules_by_depth.sh
 - Load CLAUDE.md/README.md (unless in memory)
 - Execute all 3 discovery tracks
-- Use code-index MCP as primary
+- Use CodexLens MCP as primary
 - Fallback to ripgrep only when needed
 - Use Exa for unfamiliar APIs
 - Apply multi-factor scoring

@@ -61,9 +61,9 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
 
 **Step 2** (CLI execution):
 - Agent substitutes [target_folders] into command
-- Agent executes CLI command via Bash tool:
+- Agent executes CLI command via CCW:
 ```bash
-bash(cd src/modules && gemini --approval-mode yolo -p "
+ccw cli -p "
 PURPOSE: Generate module documentation
 TASK: Create API.md and README.md for each module
 MODE: write
@@ -71,7 +71,7 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
 ./src/modules/api|code|code:3|dirs:0
 EXPECTED: Documentation files in .workflow/docs/my_project/src/modules/
 RULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt) | Mirror source structure
-")
+" --tool gemini --mode write --cd src/modules
 ```
 
 4. **CLI Execution** (Gemini CLI):
@@ -216,7 +216,7 @@ Before completion, verify:
 {
   "step": "analyze_module_structure",
   "action": "Deep analysis of module structure and API",
-  "command": "bash(cd src/auth && gemini \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\")",
+  "command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\" --tool gemini --mode analysis --cd src/auth",
   "output_to": "module_analysis",
   "on_error": "fail"
 }

@@ -8,7 +8,7 @@ You are a documentation update coordinator for complex projects. Orchestrate par
 
 ## Core Mission
 
-Execute depth-parallel updates for all modules using `~/.claude/scripts/update_module_claude.sh`. **Every module path must be processed**.
+Execute depth-parallel updates for all modules using `ccw tool exec update_module_claude`. **Every module path must be processed**.
 
 ## Input Context
 
@@ -42,12 +42,12 @@ TodoWrite([
 # 3. Launch parallel jobs (max 4)

 # Depth 5 example (Layer 3 - use multi-layer):
-~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/analysis" "gemini" &
-~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/development" "gemini" &
+ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/analysis","tool":"gemini"}' &
+ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/development","tool":"gemini"}' &

 # Depth 1 example (Layer 2 - use single-layer):
-~/.claude/scripts/update_module_claude.sh "single-layer" "./src/auth" "gemini" &
-~/.claude/scripts/update_module_claude.sh "single-layer" "./src/api" "gemini" &
+ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/auth","tool":"gemini"}' &
+ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/api","tool":"gemini"}' &
 # ... up to 4 concurrent jobs

 # 4. Wait for all depth jobs to complete
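The launch-and-wait pattern in this hunk (at most four concurrent jobs, then a final barrier) can be sketched with plain shell job control. This is a minimal illustration only; `process_module` below is a hypothetical stand-in for the real `ccw tool exec update_module_claude` invocation:

```shell
# Hypothetical stand-in for `ccw tool exec update_module_claude '{...}'`
process_module() {
  printf 'done %s\n' "$1"
}

MAX_JOBS=4
count=0
for path in ./src/auth ./src/api ./src/core ./src/utils ./src/web; do
  process_module "$path" &
  count=$((count + 1))
  # Throttle: once MAX_JOBS are in flight, wait for the batch to drain
  if [ "$count" -ge "$MAX_JOBS" ]; then
    wait
    count=0
  fi
done
wait  # Final barrier: all depth jobs complete before moving on
```

A real implementation might prefer a job pool (re-launching as each job exits) over batch draining, but the batch form matches the "launch up to 4, then wait" flow described above.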
@@ -36,10 +36,10 @@ You are a test context discovery specialist focused on gathering test coverage i
 **Use**: Phase 1 source context loading

 ### 2. Test Coverage Discovery
-**Primary (Code-Index MCP)**:
-- `mcp__code-index__find_files(pattern)` - Find test files (*.test.*, *.spec.*)
-- `mcp__code-index__search_code_advanced()` - Search test patterns
-- `mcp__code-index__get_file_summary()` - Analyze test structure
+**Primary (CCW CodexLens MCP)**:
+- `mcp__ccw-tools__codex_lens(action="search_files", query="*.test.*")` - Find test files
+- `mcp__ccw-tools__codex_lens(action="search", query="pattern")` - Search test patterns
+- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Analyze test structure

 **Fallback (CLI)**:
 - `rg` (ripgrep) - Fast test pattern search
@@ -120,9 +120,10 @@ for (const summary_path of summaries) {

 **2.1 Existing Test Discovery**:
 ```javascript
-// Method 1: Code-Index MCP (preferred)
-const test_files = mcp__code-index__find_files({
-  patterns: ["*.test.*", "*.spec.*", "*test_*.py", "*_test.go"]
+// Method 1: CodexLens MCP (preferred)
+const test_files = mcp__ccw-tools__codex_lens({
+  action: "search_files",
+  query: "*.test.* OR *.spec.* OR test_*.py OR *_test.go"
 });

 // Method 2: Fallback CLI
@@ -83,17 +83,18 @@ When task JSON contains implementation_approach array:
 - L1 (Unit): `*.test.*`, `*.spec.*` in `__tests__/`, `tests/unit/`
 - L2 (Integration): `tests/integration/`, `*.integration.test.*`
 - L3 (E2E): `tests/e2e/`, `*.e2e.test.*`, `cypress/`, `playwright/`
-- **context-package.json** (CCW Workflow): Extract artifact paths using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
+- **context-package.json** (CCW Workflow): Use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
 - Identify test commands from project configuration

 ```bash
 # Detect test framework and multi-layered commands
 if [ -f "package.json" ]; then
-  # Extract layer-specific test commands
-  LINT_CMD=$(cat package.json | jq -r '.scripts.lint // "eslint ."')
-  UNIT_CMD=$(cat package.json | jq -r '.scripts["test:unit"] // .scripts.test')
-  INTEGRATION_CMD=$(cat package.json | jq -r '.scripts["test:integration"] // ""')
-  E2E_CMD=$(cat package.json | jq -r '.scripts["test:e2e"] // ""')
+  # Extract layer-specific test commands using Read tool or jq
+  PKG_JSON=$(cat package.json)
+  LINT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts.lint // "eslint ."')
+  UNIT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:unit"] // .scripts.test')
+  INTEGRATION_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:integration"] // ""')
+  E2E_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:e2e"] // ""')
 elif [ -f "pytest.ini" ] || [ -f "setup.py" ]; then
   LINT_CMD="ruff check . || flake8 ."
   UNIT_CMD="pytest tests/unit/"
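The `//` operator in those `jq` filters is what provides the fallback chain (prefer the layer-specific script, else `.scripts.test`, else a default). A self-contained sketch with an inlined sample `package.json`, assuming `jq` is available:

```shell
# Inlined sample package.json: has "test" and "lint" but no "test:unit"/"test:e2e"
PKG_JSON='{"scripts":{"test":"jest","lint":"eslint src"}}'

# `//` falls through null/false: missing "test:unit" resolves to .scripts.test
UNIT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:unit"] // .scripts.test')

# Missing "test:e2e" falls back to the empty string
E2E_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:e2e"] // ""')

echo "unit: $UNIT_CMD"
echo "e2e:  $E2E_CMD"
```

Capturing `package.json` once into `PKG_JSON` (as the new version of the hunk does) avoids re-reading the file for each of the four extractions.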
@@ -142,9 +143,9 @@ run_test_layer "L1-unit" "$UNIT_CMD"

 ### 3. Failure Diagnosis & Fixing Loop

-**Execution Modes**:
+**Execution Modes** (determined by `flow_control.implementation_approach`):

-**A. Manual Mode (Default, meta.use_codex=false)**:
+**A. Agent Mode (Default, no `command` field in steps)**:
 ```
 WHILE tests are failing AND iterations < max_iterations:
   1. Use Gemini to diagnose failure (bug-fix template)
@@ -155,17 +156,17 @@ WHILE tests are failing AND iterations < max_iterations:
 END WHILE
 ```

-**B. Codex Mode (meta.use_codex=true)**:
+**B. CLI Mode (`command` field present in implementation_approach steps)**:
 ```
 WHILE tests are failing AND iterations < max_iterations:
   1. Use Gemini to diagnose failure (bug-fix template)
-  2. Use Codex to apply fixes automatically with resume mechanism
+  2. Execute `command` field (e.g., Codex) to apply fixes automatically
   3. Re-run test suite
   4. Verify fix doesn't break other tests
 END WHILE
 ```

-**Codex Resume in Test-Fix Cycle** (when `meta.use_codex=true`):
+**Codex Resume in Test-Fix Cycle** (when step has `command` with Codex):
 - First iteration: Start new Codex session with full context
 - Subsequent iterations: Use `resume --last` to maintain fix history and apply consistent strategies

@@ -331,6 +332,8 @@ When generating test results for orchestrator (saved to `.process/test-results.j
 - Break existing passing tests
 - Skip final verification
 - Leave tests failing - must achieve 100% pass rate
+- Use `run_in_background` for Bash() commands - always set `run_in_background=false` to ensure tests run in foreground for proper output capture
+- Use complex bash pipe chains (`cmd | grep | awk | sed`) - prefer dedicated tools (Read, Grep, Glob) for file operations and content extraction; simple single-pipe commands are acceptable when necessary

 ## Quality Certification

.claude/cli-tools.json (Normal file, 47 lines added)
@@ -0,0 +1,47 @@
{
  "version": "1.0.0",
  "tools": {
    "gemini": {
      "enabled": true,
      "isBuiltin": true,
      "command": "gemini",
      "description": "Google AI for code analysis"
    },
    "qwen": {
      "enabled": true,
      "isBuiltin": true,
      "command": "qwen",
      "description": "Alibaba AI assistant"
    },
    "codex": {
      "enabled": true,
      "isBuiltin": true,
      "command": "codex",
      "description": "OpenAI code generation"
    },
    "claude": {
      "enabled": true,
      "isBuiltin": true,
      "command": "claude",
      "description": "Anthropic AI assistant"
    }
  },
  "customEndpoints": [],
  "defaultTool": "gemini",
  "settings": {
    "promptFormat": "plain",
    "smartContext": {
      "enabled": false,
      "maxFiles": 10
    },
    "nativeResume": true,
    "recursiveQuery": true,
    "cache": {
      "injectionMode": "auto",
      "defaultPrefix": "",
      "defaultSuffix": ""
    },
    "codeIndexMcp": "ace"
  },
  "$schema": "./cli-tools.schema.json"
}
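A consumer of this config would typically resolve `defaultTool` to a runnable command, skipping disabled entries. A minimal sketch, assuming `jq`; the config is inlined here rather than read from `.claude/cli-tools.json`:

```shell
# Inlined cli-tools.json-style config with one enabled and one disabled tool
CONFIG='{"defaultTool":"gemini","tools":{"gemini":{"enabled":true,"command":"gemini"},"codex":{"enabled":false,"command":"codex"}}}'

DEFAULT_TOOL=$(echo "$CONFIG" | jq -r '.defaultTool')

# select(.enabled) emits nothing for disabled tools, so COMMAND stays empty then
COMMAND=$(echo "$CONFIG" | jq -r --arg t "$DEFAULT_TOOL" '.tools[$t] | select(.enabled) | .command')

echo "default tool: $DEFAULT_TOOL -> $COMMAND"
```

How the real CCW loader resolves this file may differ; the sketch only shows the lookup the schema implies.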
.claude/commands/clean.md (Normal file, 516 lines added)
@@ -0,0 +1,516 @@
---
name: clean
description: Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution
argument-hint: "[--dry-run] [\"focus area\"]"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Glob(*), Bash(*), Write(*)
---

# Clean Command (/clean)

## Overview

Intelligent cleanup command that explores the codebase to identify the development mainline, discovers artifacts that have drifted from it, and safely removes stale sessions, abandoned documents, and dead code.

**Core capabilities:**
- Mainline detection: Identify active development branches and core modules
- Drift analysis: Find sessions, documents, and code that deviate from mainline
- Intelligent discovery: cli-explore-agent based artifact scanning
- Safe execution: Confirmation-based cleanup with dry-run preview

## Usage

```bash
/clean                # Full intelligent cleanup (explore → analyze → confirm → execute)
/clean --dry-run      # Explore and analyze only, no execution
/clean "auth module"  # Focus cleanup on specific area
```

## Execution Process

```
Phase 1: Mainline Detection
├─ Analyze git history for development trends
├─ Identify core modules (high commit frequency)
├─ Map active vs stale branches
└─ Build mainline profile

Phase 2: Drift Discovery (cli-explore-agent)
├─ Scan workflow sessions for orphaned artifacts
├─ Identify documents drifted from mainline
├─ Detect dead code and unused exports
└─ Generate cleanup manifest

Phase 3: Confirmation
├─ Display cleanup summary by category
├─ Show impact analysis (files, size, risk)
└─ AskUserQuestion: Select categories to clean

Phase 4: Execution (unless --dry-run)
├─ Execute cleanup by category
├─ Update manifests and indexes
└─ Report results
```

## Implementation

### Phase 1: Mainline Detection

**Session Setup**:
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `clean-${dateStr}`
const sessionFolder = `.workflow/.clean/${sessionId}`

Bash(`mkdir -p ${sessionFolder}`)
```

**Step 1.1: Git History Analysis**
```bash
# Get commit frequency by directory (last 30 days)
bash(git log --since="30 days ago" --name-only --pretty=format: | grep -v "^$" | cut -d/ -f1-2 | sort | uniq -c | sort -rn | head -20)

# Get recent active branches
bash(git for-each-ref --sort=-committerdate refs/heads/ --format='%(refname:short) %(committerdate:relative)' | head -10)

# Get files with most recent changes
bash(git log --since="7 days ago" --name-only --pretty=format: | grep -v "^$" | sort | uniq -c | sort -rn | head -30)
```

**Step 1.2: Build Mainline Profile**
```javascript
const mainlineProfile = {
  coreModules: [],    // High-frequency directories
  activeFiles: [],    // Recently modified files
  activeBranches: [], // Branches with recent commits
  staleThreshold: {
    sessions: 7,      // Days
    branches: 30,
    documents: 14
  },
  timestamp: getUtc8ISOString()
}

// Parse git log output to identify core modules
// Modules with >5 commits in last 30 days = core
// Modules with 0 commits in last 30 days = potentially stale

Write(`${sessionFolder}/mainline-profile.json`, JSON.stringify(mainlineProfile, null, 2))
```
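The core-module heuristic in the comments above (>5 commits in 30 days = core) amounts to a threshold filter over the `uniq -c` output from Step 1.1. A sketch over canned sample data:

```shell
# Canned sample of `git log ... | cut -d/ -f1-2 | sort | uniq -c | sort -rn` output
FREQ='  12 src/core
   8 src/api
   3 src/utils
   1 docs/guide'

# Column 1 is the commit count, column 2 the module path;
# keep only modules above the core threshold of 5 commits
CORE_MODULES=$(printf '%s\n' "$FREQ" | awk '$1 > 5 { print $2 }')
printf '%s\n' "$CORE_MODULES"
```

The resulting list is what would populate `mainlineProfile.coreModules`; the "0 commits = potentially stale" half of the heuristic needs the full module inventory, which `git log` alone does not provide.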

---

### Phase 2: Drift Discovery

**Launch cli-explore-agent for intelligent artifact scanning**:

```javascript
Task(
  subagent_type="cli-explore-agent",
  run_in_background=false,
  description="Discover stale artifacts",
  prompt=`
## Task Objective
Discover artifacts that have drifted from the development mainline. Identify stale sessions, abandoned documents, and dead code for cleanup.

## Context
- **Session Folder**: ${sessionFolder}
- **Mainline Profile**: ${sessionFolder}/mainline-profile.json
- **Focus Area**: ${focusArea || "entire project"}

## Discovery Categories

### Category 1: Stale Workflow Sessions
Scan and analyze workflow session directories:

**Locations to scan**:
- .workflow/active/WFS-* (active sessions)
- .workflow/archives/WFS-* (archived sessions)
- .workflow/.lite-plan/* (lite-plan sessions)
- .workflow/.debug/DBG-* (debug sessions)

**Staleness criteria**:
- Active sessions: No modification >7 days + no related git commits
- Archives: >30 days old + no feature references in project.json
- Lite-plan: >7 days old + plan.json not executed
- Debug: >3 days old + issue not in recent commits

**Analysis steps**:
1. List all session directories with modification times
2. Cross-reference with git log (are session topics in recent commits?)
3. Check manifest.json for orphan entries
4. Identify sessions with .archiving marker (interrupted)

### Category 2: Drifted Documents
Scan documentation that no longer aligns with code:

**Locations to scan**:
- .claude/rules/tech/* (generated tech rules)
- .workflow/.scratchpad/* (temporary notes)
- **/CLAUDE.md (module documentation)
- **/README.md (outdated descriptions)

**Drift criteria**:
- Tech rules: Referenced files no longer exist
- Scratchpad: Any file (always temporary)
- Module docs: Describe functions/classes that were removed
- READMEs: Reference deleted directories

**Analysis steps**:
1. Parse document content for file/function references
2. Verify referenced entities still exist in codebase
3. Flag documents with >30% broken references

### Category 3: Dead Code
Identify code that is no longer used:

**Scan patterns**:
- Unused exports (exported but never imported)
- Orphan files (not imported anywhere)
- Commented-out code blocks (>10 lines)
- TODO/FIXME comments >90 days old

**Analysis steps**:
1. Build import graph using rg/grep
2. Identify exports with no importers
3. Find files not in import graph
4. Scan for large comment blocks

## Output Format

Write to: ${sessionFolder}/cleanup-manifest.json

\`\`\`json
{
  "generated_at": "ISO timestamp",
  "mainline_summary": {
    "core_modules": ["src/core", "src/api"],
    "active_branches": ["main", "feature/auth"],
    "health_score": 0.85
  },
  "discoveries": {
    "stale_sessions": [
      {
        "path": ".workflow/active/WFS-old-feature",
        "type": "active",
        "age_days": 15,
        "reason": "No related commits in 15 days",
        "size_kb": 1024,
        "risk": "low"
      }
    ],
    "drifted_documents": [
      {
        "path": ".claude/rules/tech/deprecated-lib",
        "type": "tech_rules",
        "broken_references": 5,
        "total_references": 6,
        "drift_percentage": 83,
        "reason": "Referenced library removed",
        "risk": "low"
      }
    ],
    "dead_code": [
      {
        "path": "src/utils/legacy.ts",
        "type": "orphan_file",
        "reason": "Not imported by any file",
        "last_modified": "2025-10-01",
        "risk": "medium"
      }
    ]
  },
  "summary": {
    "total_items": 12,
    "total_size_mb": 45.2,
    "by_category": {
      "stale_sessions": 5,
      "drifted_documents": 4,
      "dead_code": 3
    },
    "by_risk": {
      "low": 8,
      "medium": 3,
      "high": 1
    }
  }
}
\`\`\`

## Execution Commands

\`\`\`bash
# Session directories
find .workflow -type d \( -name "WFS-*" -o -name "DBG-*" \) 2>/dev/null

# Check modification times (Linux/Mac)
stat -c "%Y %n" .workflow/active/WFS-* 2>/dev/null

# Check modification times (Windows PowerShell via bash)
powershell -Command "Get-ChildItem '.workflow/active/WFS-*' | ForEach-Object { Write-Output \"$($_.LastWriteTime) $($_.FullName)\" }"

# Find orphan exports (TypeScript)
rg "export (const|function|class|interface|type)" --type ts -l

# Find imports
rg "import.*from" --type ts

# Find large comment blocks
rg "^\\s*/\\*" -A 10 --type ts

# Find old TODOs
rg "TODO|FIXME" --type ts -n
\`\`\`

## Success Criteria
- [ ] All session directories scanned with age calculation
- [ ] Documents cross-referenced with existing code
- [ ] Dead code detection via import graph analysis
- [ ] cleanup-manifest.json written with complete data
- [ ] Each item has risk level and cleanup reason
`
)
```
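The Category 2 rule (flag documents with >30% broken references) and the manifest's `drift_percentage` field reduce to integer arithmetic over the reference counts:

```shell
# Sample counts, matching the drifted_documents example in the manifest
broken=5
total=6

# Integer percentage, as stored in drift_percentage (5/6 ≈ 83%)
drift_pct=$(( broken * 100 / total ))

if [ "$drift_pct" -gt 30 ]; then
  echo "drifted: ${drift_pct}% of references are broken"
fi
```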

---

### Phase 3: Confirmation

**Step 3.1: Display Summary**
```javascript
const manifest = JSON.parse(Read(`${sessionFolder}/cleanup-manifest.json`))

console.log(`
## Cleanup Discovery Report

**Mainline Health**: ${Math.round(manifest.mainline_summary.health_score * 100)}%
**Core Modules**: ${manifest.mainline_summary.core_modules.join(', ')}

### Summary
| Category | Count | Size | Risk |
|----------|-------|------|------|
| Stale Sessions | ${manifest.summary.by_category.stale_sessions} | - | ${getRiskSummary('sessions')} |
| Drifted Documents | ${manifest.summary.by_category.drifted_documents} | - | ${getRiskSummary('documents')} |
| Dead Code | ${manifest.summary.by_category.dead_code} | - | ${getRiskSummary('code')} |

**Total**: ${manifest.summary.total_items} items, ~${manifest.summary.total_size_mb} MB

### Stale Sessions
${manifest.discoveries.stale_sessions.map(s =>
  `- ${s.path} (${s.age_days}d, ${s.risk}): ${s.reason}`
).join('\n')}

### Drifted Documents
${manifest.discoveries.drifted_documents.map(d =>
  `- ${d.path} (${d.drift_percentage}% broken, ${d.risk}): ${d.reason}`
).join('\n')}

### Dead Code
${manifest.discoveries.dead_code.map(c =>
  `- ${c.path} (${c.type}, ${c.risk}): ${c.reason}`
).join('\n')}
`)
```

**Step 3.2: Dry-Run Exit**
```javascript
if (flags.includes('--dry-run')) {
  console.log(`
---
**Dry-run mode**: No changes made.
Manifest saved to: ${sessionFolder}/cleanup-manifest.json

To execute cleanup: /clean
`)
  return
}
```

**Step 3.3: User Confirmation**
```javascript
AskUserQuestion({
  questions: [
    {
      question: "Which categories to clean?",
      header: "Categories",
      multiSelect: true,
      options: [
        {
          label: "Sessions",
          description: `${manifest.summary.by_category.stale_sessions} stale workflow sessions`
        },
        {
          label: "Documents",
          description: `${manifest.summary.by_category.drifted_documents} drifted documents`
        },
        {
          label: "Dead Code",
          description: `${manifest.summary.by_category.dead_code} unused code files`
        }
      ]
    },
    {
      question: "Risk level to include?",
      header: "Risk",
      multiSelect: false,
      options: [
        { label: "Low only", description: "Safest - only obviously stale items" },
        { label: "Low + Medium", description: "Recommended - includes likely unused items" },
        { label: "All", description: "Aggressive - includes high-risk items" }
      ]
    }
  ]
})
```

---

### Phase 4: Execution

**Step 4.1: Filter Items by Selection**
```javascript
const selectedCategories = userSelection.categories // ['Sessions', 'Documents', ...]
const riskLevel = userSelection.risk                // 'Low only', 'Low + Medium', 'All'

const riskFilter = {
  'Low only': ['low'],
  'Low + Medium': ['low', 'medium'],
  'All': ['low', 'medium', 'high']
}[riskLevel]

const itemsToClean = []

if (selectedCategories.includes('Sessions')) {
  itemsToClean.push(...manifest.discoveries.stale_sessions.filter(s => riskFilter.includes(s.risk)))
}
if (selectedCategories.includes('Documents')) {
  itemsToClean.push(...manifest.discoveries.drifted_documents.filter(d => riskFilter.includes(d.risk)))
}
if (selectedCategories.includes('Dead Code')) {
  itemsToClean.push(...manifest.discoveries.dead_code.filter(c => riskFilter.includes(c.risk)))
}

TodoWrite({
  todos: itemsToClean.map(item => ({
    content: `Clean: ${item.path}`,
    status: "pending",
    activeForm: `Cleaning ${item.path}`
  }))
})
```

**Step 4.2: Execute Cleanup**
```javascript
const results = { deleted: [], failed: [], skipped: [] }

for (const item of itemsToClean) {
  TodoWrite({ todos: [...] }) // Mark current as in_progress

  try {
    if (item.type === 'orphan_file' || item.type === 'dead_export') {
      // Dead code: Delete file or remove export
      Bash({ command: `rm -rf "${item.path}"` })
    } else {
      // Sessions and documents: Delete directory/file
      Bash({ command: `rm -rf "${item.path}"` })
    }

    results.deleted.push(item.path)
    TodoWrite({ todos: [...] }) // Mark as completed
  } catch (error) {
    results.failed.push({ path: item.path, error: error.message })
  }
}
```

**Step 4.3: Update Manifests**
```javascript
// Update archives manifest if sessions were deleted
if (selectedCategories.includes('Sessions')) {
  const archiveManifestPath = '.workflow/archives/manifest.json'
  if (fileExists(archiveManifestPath)) {
    const archiveManifest = JSON.parse(Read(archiveManifestPath))
    const deletedSessionIds = results.deleted
      .filter(p => p.includes('WFS-'))
      .map(p => p.split('/').pop())

    const updatedManifest = archiveManifest.filter(entry =>
      !deletedSessionIds.includes(entry.session_id)
    )

    Write(archiveManifestPath, JSON.stringify(updatedManifest, null, 2))
  }
}

// Update project.json if features referenced deleted sessions
const projectPath = '.workflow/project.json'
if (fileExists(projectPath)) {
  const project = JSON.parse(Read(projectPath))
  const deletedPaths = new Set(results.deleted)

  project.features = project.features.filter(f =>
    !deletedPaths.has(f.traceability?.archive_path)
  )

  project.statistics.total_features = project.features.length
  project.statistics.last_updated = getUtc8ISOString()

  Write(projectPath, JSON.stringify(project, null, 2))
}
```

**Step 4.4: Report Results**
```javascript
console.log(`
## Cleanup Complete

**Deleted**: ${results.deleted.length} items
**Failed**: ${results.failed.length} items
**Skipped**: ${results.skipped.length} items

### Deleted Items
${results.deleted.map(p => `- ${p}`).join('\n')}

${results.failed.length > 0 ? `
### Failed Items
${results.failed.map(f => `- ${f.path}: ${f.error}`).join('\n')}
` : ''}

Cleanup manifest archived to: ${sessionFolder}/cleanup-manifest.json
`)
```

---

## Session Folder Structure

```
.workflow/.clean/clean-{YYYY-MM-DD}/
├── mainline-profile.json   # Git history analysis
└── cleanup-manifest.json   # Discovery results
```

## Risk Level Definitions

| Risk | Description | Examples |
|------|-------------|----------|
| **Low** | Safe to delete, no dependencies | Empty sessions, scratchpad files, 100% broken docs |
| **Medium** | Likely unused, verify before delete | Orphan files, old archives, partially broken docs |
| **High** | May have hidden dependencies | Files with some imports, recent modifications |

## Error Handling

| Situation | Action |
|-----------|--------|
| No git repository | Skip mainline detection, use file timestamps only |
| Session in use (.archiving) | Skip with warning |
| Permission denied | Report error, continue with others |
| Manifest parse error | Regenerate from filesystem scan |
| Empty discovery | Report "codebase is clean" |

## Related Commands

- `/workflow:session:complete` - Properly archive active sessions
- `/memory:compact` - Save session memory before cleanup
- `/workflow:status` - View current workflow state
@@ -191,7 +191,7 @@ target/
 ### Step 2: Workspace Analysis (MANDATORY FIRST)
 ```bash
 # Analyze workspace structure
-bash(~/.claude/scripts/get_modules_by_depth.sh json)
+bash(ccw tool exec get_modules_by_depth '{"format":"json"}')
 ```

 ### Step 3: Technology Detection
.claude/commands/memory/compact.md (Normal file, 383 lines added)
@@ -0,0 +1,383 @@
|
---
|
||||||
|
name: compact
|
||||||
|
description: Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool
|
||||||
|
argument-hint: "[optional: session description]"
|
||||||
|
allowed-tools: mcp__ccw-tools__core_memory(*), Read(*)
|
||||||
|
examples:
|
||||||
|
- /memory:compact
|
||||||
|
- /memory:compact "completed core-memory module"
|
||||||
|
---
|
||||||
|
|
||||||
|
# Memory Compact Command (/memory:compact)
|
||||||
|
|
||||||
|
## 1. Overview
|
||||||
|
|
||||||
|
The `memory:compact` command **compresses current session working memory** into structured text optimized for **session recovery**, extracts critical information, and saves it to persistent storage via MCP `core_memory` tool.
|
||||||
|
|
||||||
|
**Core Philosophy**:
|
||||||
|
- **Session Recovery First**: Capture everything needed to resume work seamlessly
|
||||||
|
- **Minimize Re-exploration**: Include file paths, decisions, and state to avoid redundant analysis
|
||||||
|
- **Preserve Train of Thought**: Keep notes and hypotheses for complex debugging
|
||||||
|
- **Actionable State**: Record last action result and known issues
|
||||||
|
|
||||||
|
## 2. Parameters
|
||||||
|
|
||||||
|
- `"session description"` (Optional): Session description to supplement objective
|
||||||
|
- Example: "completed core-memory module"
|
||||||
|
- Example: "debugging JWT refresh - suspected memory leak"
|
||||||
|
|
||||||
|
## 3. Structured Output Format

```markdown
## Session ID
[WFS-ID if workflow session active, otherwise (none)]

## Project Root
[Absolute path to project root, e.g., D:\Claude_dms3]

## Objective
[High-level goal - the "North Star" of this session]

## Execution Plan
[CRITICAL: Embed the LATEST plan in its COMPLETE and DETAILED form]

### Source: [workflow | todo | user-stated | inferred]

<details>
<summary>Full Execution Plan (Click to expand)</summary>

[PRESERVE COMPLETE PLAN VERBATIM - DO NOT SUMMARIZE]
- ALL phases, tasks, subtasks
- ALL file paths (absolute)
- ALL dependencies and prerequisites
- ALL acceptance criteria
- ALL status markers ([x] done, [ ] pending)
- ALL notes and context

Example:
## Phase 1: Setup
- [x] Initialize project structure
  - Created D:\Claude_dms3\src\core\index.ts
  - Added dependencies: lodash, zod
- [ ] Configure TypeScript
  - Update tsconfig.json for strict mode

## Phase 2: Implementation
- [ ] Implement core API
  - Target: D:\Claude_dms3\src\api\handler.ts
  - Dependencies: Phase 1 complete
  - Acceptance: All tests pass

</details>

## Working Files (Modified)
[Absolute paths to actively modified files]
- D:\Claude_dms3\src\file1.ts (role: main implementation)
- D:\Claude_dms3\tests\file1.test.ts (role: unit tests)

## Reference Files (Read-Only)
[Absolute paths to context files - NOT modified but essential for understanding]
- D:\Claude_dms3\.claude\CLAUDE.md (role: project instructions)
- D:\Claude_dms3\src\types\index.ts (role: type definitions)
- D:\Claude_dms3\package.json (role: dependencies)

## Last Action
[Last significant action and its result/status]

## Decisions
- [Decision]: [Reasoning]
- [Decision]: [Reasoning]

## Constraints
- [User-specified limitation or preference]

## Dependencies
- [Added/changed packages or environment requirements]

## Known Issues
- [Deferred bug or edge case]

## Changes Made
- [Completed modification]

## Pending
- [Next step] or (none)

## Notes
[Unstructured thoughts, hypotheses, debugging trails]
```

## 4. Field Definitions

| Field | Purpose | Recovery Value |
|-------|---------|----------------|
| **Session ID** | Workflow session identifier (WFS-*) | Links memory to a specific stateful task execution |
| **Project Root** | Absolute path to project directory | Enables correct path resolution in new sessions |
| **Objective** | Ultimate goal of the session | Prevents losing track of the broader feature |
| **Execution Plan** | Complete plan from any source (verbatim) | Preserves full planning context, avoids re-planning |
| **Working Files** | Actively modified files (absolute paths) | Immediately identifies where work was happening |
| **Reference Files** | Read-only context files (absolute paths) | Eliminates re-exploration for critical context |
| **Last Action** | Final tool output/status | Immediate state awareness (success/failure) |
| **Decisions** | Architectural choices + reasoning | Prevents re-litigating settled decisions |
| **Constraints** | User-imposed limitations | Maintains personalized coding style |
| **Dependencies** | Package/environment changes | Prevents missing-dependency errors |
| **Known Issues** | Deferred bugs/edge cases | Ensures issues aren't forgotten |
| **Changes Made** | Completed modifications | Clear record of what was done |
| **Pending** | Next steps | Immediate action items |
| **Notes** | Hypotheses, debugging trails | Preserves "train of thought" |

## 5. Execution Flow

### Step 1: Analyze Current Session

Extract the following from conversation history:

```javascript
const sessionAnalysis = {
  sessionId: "",        // WFS-* if workflow session active, null otherwise
  projectRoot: "",      // Absolute path: D:\Claude_dms3
  objective: "",        // High-level goal (1-2 sentences)
  executionPlan: {
    source: "",         // "workflow" | "todo" | "user-stated" | "inferred"
    content: ""         // Full plan content - ALWAYS preserve COMPLETE and DETAILED form
  },
  workingFiles: [],     // {absolutePath, role} - modified files
  referenceFiles: [],   // {absolutePath, role} - read-only context files
  lastAction: "",       // Last significant action + result
  decisions: [],        // {decision, reasoning}
  constraints: [],      // User-specified limitations
  dependencies: [],     // Added/changed packages
  knownIssues: [],      // Deferred bugs
  changesMade: [],      // Completed modifications
  pending: [],          // Next steps
  notes: ""             // Unstructured thoughts
};
```

### Step 2: Generate Structured Text

```javascript
// Helper: Generate execution plan section
const generateExecutionPlan = (plan) => {
  const sourceLabels = {
    'workflow': 'workflow (IMPL_PLAN.md)',
    'todo': 'todo (TodoWrite)',
    'user-stated': 'user-stated',
    'inferred': 'inferred'
  };

  // CRITICAL: Preserve complete plan content verbatim - DO NOT summarize
  return `### Source: ${sourceLabels[plan.source] || plan.source}

<details>
<summary>Full Execution Plan (Click to expand)</summary>

${plan.content}

</details>`;
};

const structuredText = `## Session ID
${sessionAnalysis.sessionId || '(none)'}

## Project Root
${sessionAnalysis.projectRoot}

## Objective
${sessionAnalysis.objective}

## Execution Plan
${generateExecutionPlan(sessionAnalysis.executionPlan)}

## Working Files (Modified)
${sessionAnalysis.workingFiles.map(f => `- ${f.absolutePath} (role: ${f.role})`).join('\n') || '(none)'}

## Reference Files (Read-Only)
${sessionAnalysis.referenceFiles.map(f => `- ${f.absolutePath} (role: ${f.role})`).join('\n') || '(none)'}

## Last Action
${sessionAnalysis.lastAction}

## Decisions
${sessionAnalysis.decisions.map(d => `- ${d.decision}: ${d.reasoning}`).join('\n') || '(none)'}

## Constraints
${sessionAnalysis.constraints.map(c => `- ${c}`).join('\n') || '(none)'}

## Dependencies
${sessionAnalysis.dependencies.map(d => `- ${d}`).join('\n') || '(none)'}

## Known Issues
${sessionAnalysis.knownIssues.map(i => `- ${i}`).join('\n') || '(none)'}

## Changes Made
${sessionAnalysis.changesMade.map(c => `- ${c}`).join('\n') || '(none)'}

## Pending
${sessionAnalysis.pending.length > 0
  ? sessionAnalysis.pending.map(p => `- ${p}`).join('\n')
  : '(none)'}

## Notes
${sessionAnalysis.notes || '(none)'}`;
```
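
The `|| '(none)'` fallbacks above work because `[].map(...).join('\n')` produces an empty string, which is falsy in JavaScript. A minimal sketch (the `section` helper is illustrative, not part of the command spec):

```javascript
// Empty sections fall back to "(none)": [].join('\n') yields "", which is falsy
const section = (items) => items.map(i => `- ${i}`).join('\n') || '(none)';

console.log(section([]));                // (none)
console.log(section(['lodash', 'zod'])); // - lodash
                                         // - zod
```

The same pattern is why `sessionAnalysis.notes || '(none)'` needs no length check.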

### Step 3: Import to Core Memory via MCP

Use the MCP `core_memory` tool to save the structured text:

```javascript
mcp__ccw-tools__core_memory({
  operation: "import",
  text: structuredText
})
```

Or via CLI (pipe structured text to import):

```bash
# Pipe structured text via stdin, then import
echo "$structuredText" | ccw core-memory import

# Or from a file
ccw core-memory import --file /path/to/session-memory.md
```

**Response Format**:
```json
{
  "operation": "import",
  "id": "CMEM-YYYYMMDD-HHMMSS",
  "message": "Created memory: CMEM-YYYYMMDD-HHMMSS"
}
```
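
The `CMEM-YYYYMMDD-HHMMSS` shape above can be sketched as follows. Note this is illustrative only: `makeRecoveryId` is a hypothetical helper, and real IDs are assigned by the MCP server on import.

```javascript
// Sketch: build an ID in the CMEM-YYYYMMDD-HHMMSS shape (hypothetical helper;
// the MCP server assigns the actual recovery ID)
const makeRecoveryId = (d = new Date()) => {
  const pad = (n) => String(n).padStart(2, '0');
  const date = `${d.getFullYear()}${pad(d.getMonth() + 1)}${pad(d.getDate())}`;
  const time = `${pad(d.getHours())}${pad(d.getMinutes())}${pad(d.getSeconds())}`;
  return `CMEM-${date}-${time}`;
};

console.log(makeRecoveryId(new Date(2024, 0, 20, 14, 30, 22))); // CMEM-20240120-143022
```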

### Step 4: Report Recovery ID

After a successful import, **clearly display the Recovery ID** to the user:

```
╔════════════════════════════════════════════════════════════════════════════╗
║  ✓ Session Memory Saved                                                    ║
║                                                                            ║
║  Recovery ID: CMEM-YYYYMMDD-HHMMSS                                         ║
║                                                                            ║
║  To restore: "Please import memory <ID>"                                   ║
║  (MCP: core_memory export | CLI: ccw core-memory export --id <ID>)         ║
╚════════════════════════════════════════════════════════════════════════════╝
```

## 6. Quality Checklist

Before generating:
- [ ] Session ID captured if a workflow session is active (WFS-*)
- [ ] Project Root is an absolute path (e.g., D:\Claude_dms3)
- [ ] Objective clearly states the "North Star" goal
- [ ] Execution Plan: COMPLETE plan preserved VERBATIM (no summarization)
- [ ] Plan Source: Clearly identified (workflow | todo | user-stated | inferred)
- [ ] Plan Details: ALL phases, tasks, file paths, dependencies, status markers included
- [ ] All file paths are ABSOLUTE (not relative)
- [ ] Working Files: 3-8 modified files with roles
- [ ] Reference Files: Key context files (CLAUDE.md, types, configs)
- [ ] Last Action captures the final state (success/failure)
- [ ] Decisions include reasoning, not just choices
- [ ] Known Issues records deferred bugs so they aren't forgotten
- [ ] Notes preserve debugging hypotheses, if any

## 7. Path Resolution Rules

### Project Root Detection
1. Check current working directory from environment
2. Look for project markers: `.git/`, `package.json`, `.claude/`
3. Use the topmost directory containing these markers

### Absolute Path Conversion
```javascript
const path = require('path');

// Convert relative to absolute
const toAbsolutePath = (relativePath, projectRoot) => {
  if (path.isAbsolute(relativePath)) return relativePath;
  return path.join(projectRoot, relativePath);
};

// Example: "src/api/auth.ts" → "D:\Claude_dms3\src\api\auth.ts"
```

### Reference File Categories
| Category | Examples | Priority |
|----------|----------|----------|
| Project Config | `.claude/CLAUDE.md`, `package.json`, `tsconfig.json` | High |
| Type Definitions | `src/types/*.ts`, `*.d.ts` | High |
| Related Modules | Parent/sibling modules with shared interfaces | Medium |
| Test Files | Corresponding test files for modified code | Medium |
| Documentation | `README.md`, `ARCHITECTURE.md` | Low |

## 8. Plan Detection (Priority Order)

### Priority 1: Workflow Session (IMPL_PLAN.md)
```javascript
// Check for active workflow session
const manifest = await mcp__ccw-tools__session_manager({
  operation: "list",
  location: "active"
});

if (manifest.sessions?.length > 0) {
  const session = manifest.sessions[0];
  const plan = await mcp__ccw-tools__session_manager({
    operation: "read",
    session_id: session.id,
    content_type: "plan"
  });
  sessionAnalysis.sessionId = session.id;
  sessionAnalysis.executionPlan.source = "workflow";
  sessionAnalysis.executionPlan.content = plan.content;
}
```

### Priority 2: TodoWrite (Current Session Todos)
```javascript
// Extract from conversation - look for TodoWrite tool calls
// Preserve COMPLETE todo list with all details
const todos = extractTodosFromConversation();
if (todos.length > 0) {
  sessionAnalysis.executionPlan.source = "todo";
  // Format todos with full context - preserve status markers
  sessionAnalysis.executionPlan.content = todos.map(t =>
    `- [${t.status === 'completed' ? 'x' : t.status === 'in_progress' ? '>' : ' '}] ${t.content}`
  ).join('\n');
}
```
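
With hypothetical todo items, the status-marker mapping above renders like this (`extractTodosFromConversation` is a placeholder in the spec, so sample data stands in for it here):

```javascript
// Hypothetical todo items showing the completed / in_progress / pending marker mapping
const todos = [
  { status: 'completed', content: 'Initialize project structure' },
  { status: 'in_progress', content: 'Implement core API' },
  { status: 'pending', content: 'Write unit tests' }
];

const planContent = todos.map(t =>
  `- [${t.status === 'completed' ? 'x' : t.status === 'in_progress' ? '>' : ' '}] ${t.content}`
).join('\n');

console.log(planContent);
// - [x] Initialize project structure
// - [>] Implement core API
// - [ ] Write unit tests
```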

### Priority 3: User-Stated Plan
```javascript
// Look for explicit plan statements in user messages:
// - "Here's my plan: 1. ... 2. ... 3. ..."
// - "I want to: first..., then..., finally..."
// - Numbered or bulleted lists describing steps
const userPlan = extractUserStatedPlan();
if (userPlan) {
  sessionAnalysis.executionPlan.source = "user-stated";
  sessionAnalysis.executionPlan.content = userPlan;
}
```

### Priority 4: Inferred Plan
```javascript
// If no explicit plan, infer from:
// - Task description and breakdown discussion
// - Sequence of actions taken
// - Outstanding work mentioned
const inferredPlan = inferPlanFromDiscussion();
if (inferredPlan) {
  sessionAnalysis.executionPlan.source = "inferred";
  sessionAnalysis.executionPlan.content = inferredPlan;
}
```

## 9. Notes

- **Timing**: Execute at task completion or before a context switch
- **Frequency**: Once per independent task or milestone
- **Recovery**: A new session can immediately continue with full context
- **Knowledge Graph**: Entity relationships are auto-extracted for visualization
- **Absolute Paths**: Critical for reliable path resolution when the session is restored
@@ -101,10 +101,10 @@ src/ (depth 1) → SINGLE STRATEGY
 Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});
 
 // Get module structure with classification
-Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
+Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});
 
 // OR with path parameter
-Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
+Bash({command: "cd <target-path> && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});
 ```
 
 **Parse output** `depth:N|path:<PATH>|type:<code|navigation>|...` to extract module paths, types, and count.
@@ -200,7 +200,7 @@ for (let layer of [3, 2, 1]) {
   let strategy = module.depth >= 3 ? "full" : "single";
   for (let tool of tool_order) {
     Bash({
-      command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "${strategy}" "." "${project_name}" "${tool}"`,
+      command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"${strategy}","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
      run_in_background: false
     });
     if (bash_result.exit_code === 0) {
@@ -263,7 +263,7 @@ MODULES:
 
 TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}
 
-EXECUTION SCRIPT: ~/.claude/scripts/generate_module_docs.sh
+EXECUTION SCRIPT: ccw tool exec generate_module_docs
 - Accepts strategy parameter: full | single
 - Accepts folder type detection: code | navigation
 - Tool execution via direct CLI commands (gemini/qwen/codex)
@@ -273,7 +273,7 @@ EXECUTION FLOW (for each module):
 1. Tool fallback loop (exit on first success):
    for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
      Bash({
-       command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "{{strategy}}" "." "{{project_name}}" "${tool}"`,
+       command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"{{strategy}}","sourcePath":".","projectName":"{{project_name}}","tool":"${tool}"}'`,
        run_in_background: false
      })
      exit_code=$?
@@ -322,7 +322,7 @@ let project_root = get_project_root();
 report("Generating project README.md...");
 for (let tool of tool_order) {
   Bash({
-    command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-readme" "." "${project_name}" "${tool}"`,
+    command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"project-readme","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
     run_in_background: false
   });
   if (bash_result.exit_code === 0) {
@@ -335,7 +335,7 @@ for (let tool of tool_order) {
 report("Generating ARCHITECTURE.md and EXAMPLES.md...");
 for (let tool of tool_order) {
   Bash({
-    command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-architecture" "." "${project_name}" "${tool}"`,
+    command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"project-architecture","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
     run_in_background: false
   });
   if (bash_result.exit_code === 0) {
@@ -350,7 +350,7 @@ if (bash_result.stdout.includes("API_FOUND")) {
 report("Generating HTTP API documentation...");
 for (let tool of tool_order) {
   Bash({
-    command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "http-api" "." "${project_name}" "${tool}"`,
+    command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"http-api","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
     run_in_background: false
   });
   if (bash_result.exit_code === 0) {
@@ -51,7 +51,7 @@ Orchestrates context-aware documentation generation/update for changed modules u
 Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});
 
 // Detect changed modules
-Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
+Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});
 
 // Cache git changes
 Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
@@ -123,7 +123,7 @@ for (let depth of sorted_depths.reverse()) { // N → 0
 return async () => {
   for (let tool of tool_order) {
     Bash({
-      command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "single" "." "${project_name}" "${tool}"`,
+      command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
       run_in_background: false
     });
     if (bash_result.exit_code === 0) {
@@ -207,21 +207,21 @@ EXECUTION:
 For each module above:
 1. Try tool 1:
    Bash({
-     command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_1}}"`,
+     command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_1}}"}'`,
      run_in_background: false
    })
    → Success: Report "✅ {{module_path}} docs generated with {{tool_1}}", proceed to next module
    → Failure: Try tool 2
 2. Try tool 2:
    Bash({
-     command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_2}}"`,
+     command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_2}}"}'`,
      run_in_background: false
    })
    → Success: Report "✅ {{module_path}} docs generated with {{tool_2}}", proceed to next module
    → Failure: Try tool 3
 3. Try tool 3:
    Bash({
-     command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_3}}"`,
+     command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_3}}"}'`,
      run_in_background: false
    })
    → Success: Report "✅ {{module_path}} docs generated with {{tool_3}}", proceed to next module
@@ -64,12 +64,17 @@ Lightweight planner that analyzes project structure, decomposes documentation wo
 ```bash
 # Get target path, project name, and root
 bash(pwd && basename "$(pwd)" && git rev-parse --show-toplevel 2>/dev/null || pwd && date +%Y%m%d-%H%M%S)
+```
 
-# Create session directories (replace timestamp)
-bash(mkdir -p .workflow/active/WFS-docs-{timestamp}/.{task,process,summaries})
+```javascript
+// Create docs session (type: docs)
+SlashCommand(command="/workflow:session:start --type docs --new \"{project_name}-docs-{timestamp}\"")
+// Parse output to get sessionId
+```
 
-# Create workflow-session.json (replace values)
-bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentation","status":"planning","timestamp":"2024-01-20T14:30:22+08:00","path":".","target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' | jq '.' > .workflow/active/WFS-docs-{timestamp}/workflow-session.json)
+```bash
+# Update workflow-session.json with docs-specific fields
+bash(jq '. + {"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' .workflow/active/{sessionId}/workflow-session.json > tmp.json && mv tmp.json .workflow/active/{sessionId}/workflow-session.json)
 ```
 
 ### Phase 2: Analyze Structure
@@ -80,10 +85,10 @@ bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentati
 
 ```bash
 # 1. Run folder analysis
-bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh)
+bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}')
 
 # 2. Get top-level directories (first 2 path levels)
-bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)
+bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}' | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)
 
 # 3. Find existing docs (if directory exists)
 bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null; fi)
@@ -230,12 +235,12 @@ api_id=$((group_count + 3))
 | Mode | cli_execute | Placement | CLI MODE | Approval Flag | Agent Role |
 |------|-------------|-----------|----------|---------------|------------|
 | **Agent** | false | pre_analysis | analysis | (none) | Generate docs in implementation_approach |
-| **CLI** | true | implementation_approach | write | --approval-mode yolo | Execute CLI commands, validate output |
+| **CLI** | true | implementation_approach | write | --mode write | Execute CLI commands, validate output |
 
 **Command Patterns**:
-- Gemini/Qwen: `cd dir && gemini -p "..."`
-- CLI Mode: `cd dir && gemini --approval-mode yolo -p "..."`
-- Codex: `codex -C dir --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
+- Gemini/Qwen: `ccw cli -p "..." --tool gemini --mode analysis --cd dir`
+- CLI Mode: `ccw cli -p "..." --tool gemini --mode write --cd dir`
+- Codex: `ccw cli -p "..." --tool codex --mode write --cd dir`
 
 **Generation Process**:
 1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
@@ -326,7 +331,7 @@ api_id=$((group_count + 3))
 {
   "step": 2,
   "title": "Batch generate documentation via CLI",
-  "command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
+  "command": "ccw cli -p 'PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure' --tool gemini --mode write --cd ${dirs_from_group}",
   "depends_on": [1],
   "output": "generated_docs"
 }
@@ -358,7 +363,7 @@ api_id=$((group_count + 3))
 },
 {
   "step": "analyze_project",
-  "command": "bash(gemini \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\")",
+  "command": "bash(ccw cli -p \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\" --tool gemini --mode analysis)",
   "output_to": "project_outline"
 }
 ],
@@ -398,7 +403,7 @@ api_id=$((group_count + 3))
 "pre_analysis": [
 {"step": "load_existing_docs", "command": "bash(cat .workflow/docs/${project_name}/{ARCHITECTURE,EXAMPLES}.md 2>/dev/null || echo 'No existing docs')", "output_to": "existing_arch_examples"},
 {"step": "load_all_docs", "command": "bash(cat .workflow/docs/${project_name}/README.md && find .workflow/docs/${project_name} -type f -name '*.md' ! -path '*/README.md' ! -path '*/ARCHITECTURE.md' ! -path '*/EXAMPLES.md' ! -path '*/api/*' | xargs cat)", "output_to": "all_docs"},
-{"step": "analyze_architecture", "command": "bash(gemini \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\")", "output_to": "arch_examples_outline"}
+{"step": "analyze_architecture", "command": "bash(ccw cli -p \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\" --tool gemini --mode analysis)", "output_to": "arch_examples_outline"}
 ],
 "implementation_approach": [
 {
@@ -435,7 +440,7 @@ api_id=$((group_count + 3))
 "pre_analysis": [
 {"step": "discover_api", "command": "bash(rg 'router\\.| @(Get|Post)' -g '*.{ts,js}')", "output_to": "endpoint_discovery"},
 {"step": "load_existing_api", "command": "bash(cat .workflow/docs/${project_name}/api/README.md 2>/dev/null || echo 'No existing API docs')", "output_to": "existing_api_docs"},
-{"step": "analyze_api", "command": "bash(gemini \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\")", "output_to": "api_outline"}
+{"step": "analyze_api", "command": "bash(ccw cli -p \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\" --tool gemini --mode analysis)", "output_to": "api_outline"}
 ],
 "implementation_approach": [
 {
@@ -596,7 +601,7 @@ api_id=$((group_count + 3))
 | Mode | CLI Placement | CLI MODE | Approval Flag | Agent Role |
 |------|---------------|----------|---------------|------------|
 | **Agent (default)** | pre_analysis | analysis | (none) | Generates documentation content |
-| **CLI (--cli-execute)** | implementation_approach | write | --approval-mode yolo | Executes CLI commands, validates output |
+| **CLI (--cli-execute)** | implementation_approach | write | --mode write | Executes CLI commands, validates output |

 **Execution Flow**:
 - **Phase 2**: Unified analysis once, results in `.process/`
@@ -5,7 +5,7 @@ argument-hint: "[--tool gemini|qwen] \"task context description\""
 allowed-tools: Task(*), Bash(*)
 examples:
 - /memory:load "在当前前端基础上开发用户认证功能"
-- /memory:load --tool qwen -p "重构支付模块API"
+- /memory:load --tool qwen "重构支付模块API"
 ---

 # Memory Load Command (/memory:load)
@@ -39,7 +39,7 @@ The command fully delegates to **universal-executor agent**, which autonomously:
 1. **Analyzes Project Structure**: Executes `get_modules_by_depth.sh` to understand architecture
 2. **Loads Documentation**: Reads CLAUDE.md, README.md and other key docs
 3. **Extracts Keywords**: Derives core keywords from task description
-4. **Discovers Files**: Uses MCP code-index or rg/find to locate relevant files
+4. **Discovers Files**: Uses CodexLens MCP or rg/find to locate relevant files
 5. **CLI Deep Analysis**: Executes Gemini/Qwen CLI for deep context analysis
 6. **Generates Content Package**: Returns structured JSON core content package

@@ -109,7 +109,7 @@ Task(

 1. **Project Structure**
 \`\`\`bash
-bash(~/.claude/scripts/get_modules_by_depth.sh)
+bash(ccw tool exec get_modules_by_depth '{}')
 \`\`\`

 2. **Core Documentation**
@@ -136,7 +136,7 @@ Task(
 Execute Gemini/Qwen CLI for deep analysis (saves main thread tokens):

 \`\`\`bash
-cd . && ${tool} -p "
+ccw cli -p "
 PURPOSE: Extract project core context for task: ${task_description}
 TASK: Analyze project architecture, tech stack, key patterns, relevant files
 MODE: analysis
@@ -147,7 +147,7 @@ RULES:
 - Identify key architecture patterns and technical constraints
 - Extract integration points and development standards
 - Output concise, structured format
-"
+" --tool ${tool} --mode analysis
 \`\`\`

 ### Step 4: Generate Core Content Package
@@ -212,7 +212,7 @@ Before returning:
 ### Example 2: Using Qwen Tool

 ```bash
-/memory:load --tool qwen -p "重构支付模块API"
+/memory:load --tool qwen "重构支付模块API"
 ```

 Agent uses Qwen CLI for analysis, returns same structured package.
.claude/commands/memory/tech-research-rules.md | 310 lines (Normal file)
@@ -0,0 +1,310 @@
---
name: tech-research-rules
description: "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)"
argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---

# Tech Stack Rules Generator

## Overview

**Purpose**: Generate multi-layered, path-conditional rules that Claude Code automatically loads based on file context.

**Output Structure**:
```
.claude/rules/tech/{tech-stack}/
├── core.md        # paths: **/*.{ext} - Core principles
├── patterns.md    # paths: src/**/*.{ext} - Implementation patterns
├── testing.md     # paths: **/*.{test,spec}.{ext} - Testing rules
├── config.md      # paths: *.config.* - Configuration rules
├── api.md         # paths: **/api/**/* - API rules (backend only)
├── components.md  # paths: **/components/**/* - Component rules (frontend only)
└── metadata.json  # Generation metadata
```

**Templates Location**: `~/.claude/workflows/cli-templates/prompts/rules/`

---

## Core Rules

1. **Start Immediately**: First action is TodoWrite initialization
2. **Path-Conditional Output**: Every rule file includes `paths` frontmatter
3. **Template-Driven**: Agent reads templates before generating content
4. **Agent Produces Files**: Agent writes all rule files directly
5. **No Manual Loading**: Rules auto-activate when Claude works with matching files

---

## 3-Phase Execution

### Phase 1: Prepare Context & Detect Tech Stack

**Goal**: Detect input mode, extract tech stack info, determine file extensions

**Input Mode Detection**:
```bash
input="$1"

if [[ "$input" == WFS-* ]]; then
  MODE="session"
  SESSION_ID="$input"
  # Read workflow-session.json to extract tech stack
else
  MODE="direct"
  TECH_STACK_NAME="$input"
fi
```

**Tech Stack Analysis**:
```javascript
// Decompose composite tech stacks
// "typescript-react-nextjs" → ["typescript", "react", "nextjs"]

const TECH_EXTENSIONS = {
  "typescript": "{ts,tsx}",
  "javascript": "{js,jsx}",
  "python": "py",
  "rust": "rs",
  "go": "go",
  "java": "java",
  "csharp": "cs",
  "ruby": "rb",
  "php": "php"
};

const FRAMEWORK_TYPE = {
  "react": "frontend",
  "vue": "frontend",
  "angular": "frontend",
  "nextjs": "fullstack",
  "nuxt": "fullstack",
  "fastapi": "backend",
  "express": "backend",
  "django": "backend",
  "rails": "backend"
};
```
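The mapping above can be sketched as runnable code. This is an editor's illustration, not part of the command file: `analyzeTechStack` is a hypothetical helper, the tables are trimmed copies, and the fallbacks ("library", first-match framework) are assumptions about how the command resolves composites.

```javascript
// Hypothetical helper illustrating the Phase 1 decomposition described above.
// Trimmed copies of the lookup tables; the command file defines the full versions.
const TECH_EXTENSIONS = { typescript: "{ts,tsx}", javascript: "{js,jsx}", python: "py" };
const FRAMEWORK_TYPE = { react: "frontend", nextjs: "fullstack", fastapi: "backend" };

function analyzeTechStack(name) {
  // "typescript-react-nextjs" → ["typescript", "react", "nextjs"]
  const components = name.toLowerCase().split("-");
  const primaryLang = components.find((c) => c in TECH_EXTENSIONS) || components[0];
  const framework = components.find((c) => c in FRAMEWORK_TYPE);
  return {
    components,
    primaryLang,
    fileExt: TECH_EXTENSIONS[primaryLang] || "*",
    // First matching framework wins in this sketch; "library" when none matches.
    frameworkType: framework ? FRAMEWORK_TYPE[framework] : "library",
  };
}
```

For `"typescript-react-nextjs"` this yields `PRIMARY_LANG = typescript`, `FILE_EXT = {ts,tsx}`, and a frontend/fullstack framework type depending on which component is given precedence.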

**Check Existing Rules**:
```bash
normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
rules_dir=".claude/rules/tech/${normalized_name}"
existing_count=$(find "${rules_dir}" -name "*.md" 2>/dev/null | wc -l || echo 0)
```

**Skip Decision**:
- If `existing_count > 0` AND no `--regenerate` → `SKIP_GENERATION = true`
- If `--regenerate` → Delete existing and regenerate
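The skip decision above reduces to a small predicate. A minimal sketch (the function name and argument names are assumptions for illustration):

```javascript
// Sketch of the skip decision: regenerate always forces a fresh run,
// otherwise existing rule files are reused.
function shouldSkipGeneration(existingCount, regenerate) {
  if (regenerate) return false;
  return existingCount > 0;
}
```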

**Output Variables**:
- `TECH_STACK_NAME`: Normalized name
- `PRIMARY_LANG`: Primary language
- `FILE_EXT`: File extension pattern
- `FRAMEWORK_TYPE`: frontend | backend | fullstack | library
- `COMPONENTS`: Array of tech components
- `SKIP_GENERATION`: Boolean

**TodoWrite**: Mark phase 1 completed

---

### Phase 2: Agent Produces Path-Conditional Rules

**Skip Condition**: Skipped if `SKIP_GENERATION = true`

**Goal**: Delegate to agent for Exa research and rule file generation

**Template Files**:
```
~/.claude/workflows/cli-templates/prompts/rules/
├── tech-rules-agent-prompt.txt  # Agent instructions
├── rule-core.txt                # Core principles template
├── rule-patterns.txt            # Implementation patterns template
├── rule-testing.txt             # Testing rules template
├── rule-config.txt              # Configuration rules template
├── rule-api.txt                 # API rules template (backend)
└── rule-components.txt          # Component rules template (frontend)
```

**Agent Task**:

```javascript
Task({
  subagent_type: "general-purpose",
  description: `Generate tech stack rules: ${TECH_STACK_NAME}`,
  prompt: `
You are generating path-conditional rules for Claude Code.

## Context
- Tech Stack: ${TECH_STACK_NAME}
- Primary Language: ${PRIMARY_LANG}
- File Extensions: ${FILE_EXT}
- Framework Type: ${FRAMEWORK_TYPE}
- Components: ${JSON.stringify(COMPONENTS)}
- Output Directory: .claude/rules/tech/${TECH_STACK_NAME}/

## Instructions

Read the agent prompt template for detailed instructions:
$(cat ~/.claude/workflows/cli-templates/prompts/rules/tech-rules-agent-prompt.txt)

## Execution Steps

1. Execute Exa research queries (see agent prompt)
2. Read each rule template
3. Generate rule files following template structure
4. Write files to output directory
5. Write metadata.json
6. Report completion

## Variable Substitutions

Replace in templates:
- {TECH_STACK_NAME} → ${TECH_STACK_NAME}
- {PRIMARY_LANG} → ${PRIMARY_LANG}
- {FILE_EXT} → ${FILE_EXT}
- {FRAMEWORK_TYPE} → ${FRAMEWORK_TYPE}
`
})
```

**Completion Criteria**:
- 4-6 rule files written with proper `paths` frontmatter
- metadata.json written
- Agent reports files created

**TodoWrite**: Mark phase 2 completed

---

### Phase 3: Verify & Report

**Goal**: Verify generated files and provide usage summary

**Steps**:

1. **Verify Files**:
```bash
find ".claude/rules/tech/${TECH_STACK_NAME}" -name "*.md" -type f
```

2. **Validate Frontmatter**:
```bash
head -5 ".claude/rules/tech/${TECH_STACK_NAME}/core.md"
```

3. **Read Metadata**:
```javascript
Read(`.claude/rules/tech/${TECH_STACK_NAME}/metadata.json`)
```

4. **Generate Summary Report**:
```
Tech Stack Rules Generated

Tech Stack: {TECH_STACK_NAME}
Location: .claude/rules/tech/{TECH_STACK_NAME}/

Files Created:
├── core.md → paths: **/*.{ext}
├── patterns.md → paths: src/**/*.{ext}
├── testing.md → paths: **/*.{test,spec}.{ext}
├── config.md → paths: *.config.*
├── api.md → paths: **/api/**/* (if backend)
└── components.md → paths: **/components/**/* (if frontend)

Auto-Loading:
- Rules apply automatically when editing matching files
- No manual loading required

Example Activation:
- Edit src/components/Button.tsx → core.md + patterns.md + components.md
- Edit tests/api.test.ts → core.md + testing.md
- Edit package.json → config.md
```

**TodoWrite**: Mark phase 3 completed

---

## Path Pattern Reference

| Pattern | Matches |
|---------|---------|
| `**/*.ts` | All .ts files |
| `src/**/*` | All files under src/ |
| `*.config.*` | Config files in root |
| `**/*.{ts,tsx}` | .ts and .tsx files |

| Tech Stack | Core Pattern | Test Pattern |
|------------|--------------|--------------|
| TypeScript | `**/*.{ts,tsx}` | `**/*.{test,spec}.{ts,tsx}` |
| Python | `**/*.py` | `**/test_*.py, **/*_test.py` |
| Rust | `**/*.rs` | `**/tests/**/*.rs` |
| Go | `**/*.go` | `**/*_test.go` |
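The patterns above follow ordinary glob semantics (`**` crosses directory boundaries, `*` stays within one path segment, `{a,b}` is alternation). A minimal matcher sketch, for illustration only: `globToRegExp` is not the loader Claude Code actually uses.

```javascript
// Illustrative glob → RegExp conversion for the path patterns above.
function globToRegExp(glob) {
  const re = glob
    .replace(/[.+^$()|\\]/g, "\\$&")                                          // escape regex metachars
    .replace(/\{([^}]+)\}/g, (_, alts) => `(?:${alts.split(",").join("|")})`) // {ts,tsx} → (?:ts|tsx)
    .replace(/\*\*\//g, "\u0000")                                             // placeholder: any directory prefix
    .replace(/\*/g, "[^/]*")                                                  // * stays within one segment
    .replace(/\u0000/g, "(?:.*/)?");
  return new RegExp(`^${re}$`);
}
```

Under this reading, `*.config.*` matches `vite.config.ts` but not `src/vite.config.ts`, which is why the table describes it as "config files in root".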

---

## Parameters

```bash
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate]
```

**Arguments**:
- **session-id**: `WFS-*` format - Extract from workflow session
- **tech-stack-name**: Direct input - `"typescript"`, `"typescript-react"`
- **--regenerate**: Force regenerate existing rules

---

## Examples

### Single Language

```bash
/memory:tech-research "typescript"
```

**Output**: `.claude/rules/tech/typescript/` with 4 rule files

### Frontend Stack

```bash
/memory:tech-research "typescript-react"
```

**Output**: `.claude/rules/tech/typescript-react/` with 5 rule files (includes components.md)

### Backend Stack

```bash
/memory:tech-research "python-fastapi"
```

**Output**: `.claude/rules/tech/python-fastapi/` with 5 rule files (includes api.md)

### From Session

```bash
/memory:tech-research WFS-user-auth-20251104
```

**Workflow**: Extract tech stack from session → Generate rules

---

## Comparison: Rules vs SKILL

| Aspect | SKILL Memory | Rules |
|--------|--------------|-------|
| Loading | Manual: `Skill("tech")` | Automatic by path |
| Scope | All files when loaded | Only matching files |
| Granularity | Monolithic packages | Per-file-type |
| Context | Full package | Only relevant rules |

**When to Use**:
- **Rules**: Tech stack conventions per file type
- **SKILL**: Reference docs, APIs, examples for manual lookup
@@ -1,477 +0,0 @@
|
|||||||
---
|
|
||||||
name: tech-research
|
|
||||||
description: 3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)
|
|
||||||
argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
|
|
||||||
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
|
|
||||||
---
|
|
||||||
|
|
||||||
# Tech Stack Research SKILL Generator
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
|
|
||||||
**Pure Orchestrator with Agent Delegation**: Prepares context paths and delegates ALL work to agent. Agent produces files directly.
|
|
||||||
|
|
||||||
**Auto-Continue Workflow**: Runs fully autonomously once triggered. Each phase completes and automatically triggers the next phase.
|
|
||||||
|
|
||||||
**Execution Paths**:
|
|
||||||
- **Full Path**: All 3 phases (no existing SKILL OR `--regenerate` specified)
|
|
||||||
- **Skip Path**: Phase 1 → Phase 3 (existing SKILL found AND no `--regenerate` flag)
|
|
||||||
- **Phase 3 Always Executes**: SKILL index is always generated or updated
|
|
||||||
|
|
||||||
**Agent Responsibility**:
|
|
||||||
- Agent does ALL the work: context reading, Exa research, content synthesis, file writing
|
|
||||||
- Orchestrator only provides context paths and waits for completion
|
|
||||||
|
|
||||||
## Core Rules
|
|
||||||
|
|
||||||
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
|
|
||||||
2. **Context Path Delegation**: Pass session directory or tech stack name to agent, let agent do discovery
|
|
||||||
3. **Agent Produces Files**: Agent directly writes all module files, orchestrator does NOT parse agent output
|
|
||||||
4. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
|
|
||||||
5. **No User Prompts**: Never ask user questions or wait for input between phases
|
|
||||||
6. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
|
|
||||||
7. **Lightweight Index**: Phase 3 only generates SKILL.md index by reading existing files
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 3-Phase Execution
|
|
||||||
|
|
||||||
### Phase 1: Prepare Context Paths
|
|
||||||
|
|
||||||
**Goal**: Detect input mode, prepare context paths for agent, check existing SKILL
|
|
||||||
|
|
||||||
**Input Mode Detection**:
|
|
||||||
```bash
|
|
||||||
# Get input parameter
|
|
||||||
input="$1"
|
|
||||||
|
|
||||||
# Detect mode
|
|
||||||
if [[ "$input" == WFS-* ]]; then
|
|
||||||
MODE="session"
|
|
||||||
SESSION_ID="$input"
|
|
||||||
CONTEXT_PATH=".workflow/${SESSION_ID}"
|
|
||||||
else
|
|
||||||
MODE="direct"
|
|
||||||
TECH_STACK_NAME="$input"
|
|
||||||
CONTEXT_PATH="$input" # Pass tech stack name as context
|
|
||||||
fi
|
|
||||||
```
|
|
||||||
|
|
||||||
**Check Existing SKILL**:
|
|
||||||
```bash
|
|
||||||
# For session mode, peek at session to get tech stack name
|
|
||||||
if [[ "$MODE" == "session" ]]; then
|
|
||||||
bash(test -f ".workflow/${SESSION_ID}/workflow-session.json")
|
|
||||||
Read(.workflow/${SESSION_ID}/workflow-session.json)
|
|
||||||
# Extract tech_stack_name (minimal extraction)
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Normalize and check
|
|
||||||
normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
|
|
||||||
bash(test -d ".claude/skills/${normalized_name}" && echo "exists" || echo "not_exists")
|
|
||||||
bash(find ".claude/skills/${normalized_name}" -name "*.md" 2>/dev/null | wc -l || echo 0)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Skip Decision**:
|
|
||||||
```javascript
|
|
||||||
if (existing_files > 0 && !regenerate_flag) {
|
|
||||||
SKIP_GENERATION = true
|
|
||||||
message = "Tech stack SKILL already exists, skipping Phase 2. Use --regenerate to force regeneration."
|
|
||||||
} else if (regenerate_flag) {
|
|
||||||
bash(rm -rf ".claude/skills/${normalized_name}")
|
|
||||||
SKIP_GENERATION = false
|
|
||||||
message = "Regenerating tech stack SKILL from scratch."
|
|
||||||
} else {
|
|
||||||
SKIP_GENERATION = false
|
|
||||||
message = "No existing SKILL found, generating new tech stack documentation."
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Output Variables**:
|
|
||||||
- `MODE`: `session` or `direct`
|
|
||||||
- `SESSION_ID`: Session ID (if session mode)
|
|
||||||
- `CONTEXT_PATH`: Path to session directory OR tech stack name
|
|
||||||
- `TECH_STACK_NAME`: Extracted or provided tech stack name
|
|
||||||
- `SKIP_GENERATION`: Boolean - whether to skip Phase 2
|
|
||||||
|
|
||||||
**TodoWrite**:
|
|
||||||
- If skipping: Mark phase 1 completed, phase 2 completed, phase 3 in_progress
|
|
||||||
- If not skipping: Mark phase 1 completed, phase 2 in_progress
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Phase 2: Agent Produces All Files
|
|
||||||
|
|
||||||
**Skip Condition**: Skipped if `SKIP_GENERATION = true`
|
|
||||||
|
|
||||||
**Goal**: Delegate EVERYTHING to agent - context reading, Exa research, content synthesis, and file writing
|
|
||||||
|
|
||||||
**Agent Task Specification**:
|
|
||||||
|
|
||||||
```
|
|
||||||
Task(
|
|
||||||
subagent_type: "general-purpose",
|
|
||||||
description: "Generate tech stack SKILL: {CONTEXT_PATH}",
|
|
||||||
prompt: "
|
|
||||||
Generate a complete tech stack SKILL package with Exa research.
|
|
||||||
|
|
||||||
**Context Provided**:
|
|
||||||
- Mode: {MODE}
|
|
||||||
- Context Path: {CONTEXT_PATH}
|
|
||||||
|
|
||||||
**Templates Available**:
|
|
||||||
- Module Format: ~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt
|
|
||||||
- SKILL Index: ~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt
|
|
||||||
|
|
||||||
**Your Responsibilities**:
|
|
||||||
|
|
||||||
1. **Extract Tech Stack Information**:
|
|
||||||
|
|
||||||
IF MODE == 'session':
|
|
||||||
- Read `.workflow/active/{session_id}/workflow-session.json`
|
|
||||||
- Read `.workflow/active/{session_id}/.process/context-package.json`
|
|
||||||
- Extract tech_stack: {language, frameworks, libraries}
|
|
||||||
- Build tech stack name: \"{language}-{framework1}-{framework2}\"
|
|
||||||
- Example: \"typescript-react-nextjs\"
|
|
||||||
|
|
||||||
IF MODE == 'direct':
|
|
||||||
- Tech stack name = CONTEXT_PATH
|
|
||||||
- Parse composite: split by '-' delimiter
|
|
||||||
- Example: \"typescript-react-nextjs\" → [\"typescript\", \"react\", \"nextjs\"]
|
|
||||||
|
|
||||||
2. **Execute Exa Research** (4-6 parallel queries):
|
|
||||||
|
|
||||||
Base Queries (always execute):
|
|
||||||
- mcp__exa__get_code_context_exa(query: \"{tech} core principles best practices 2025\", tokensNum: 8000)
|
|
||||||
- mcp__exa__get_code_context_exa(query: \"{tech} common patterns architecture examples\", tokensNum: 7000)
|
|
||||||
- mcp__exa__web_search_exa(query: \"{tech} configuration setup tooling 2025\", numResults: 5)
|
|
||||||
- mcp__exa__get_code_context_exa(query: \"{tech} testing strategies\", tokensNum: 5000)
|
|
||||||
|
|
||||||
Component Queries (if composite):
|
|
||||||
- For each additional component:
|
|
||||||
mcp__exa__get_code_context_exa(query: \"{main_tech} {component} integration\", tokensNum: 5000)
|
|
||||||
|
|
||||||
3. **Read Module Format Template**:
|
|
||||||
|
|
||||||
Read template for structure guidance:
|
|
||||||
```bash
|
|
||||||
Read(~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt)
|
|
||||||
```
|
|
||||||
|
|
||||||
4. **Synthesize Content into 6 Modules**:
|
|
||||||
|
|
||||||
Follow template structure from tech-module-format.txt:
|
|
||||||
- **principles.md** - Core concepts, philosophies (~3K tokens)
|
|
||||||
- **patterns.md** - Implementation patterns with code examples (~5K tokens)
|
|
||||||
- **practices.md** - Best practices, anti-patterns, pitfalls (~4K tokens)
|
|
||||||
- **testing.md** - Testing strategies, frameworks (~3K tokens)
|
|
||||||
- **config.md** - Setup, configuration, tooling (~3K tokens)
|
|
||||||
- **frameworks.md** - Framework integration (only if composite, ~4K tokens)
|
|
||||||
|
|
||||||
Each module follows template format:
|
|
||||||
- Frontmatter (YAML)
|
|
||||||
- Main sections with clear headings
|
|
||||||
- Code examples from Exa research
|
|
||||||
- Best practices sections
|
|
||||||
- References to Exa sources
|
|
||||||
|
|
||||||
5. **Write Files Directly**:
|
|
||||||
|
|
||||||
```javascript
|
|
||||||
// Create directory
|
|
||||||
bash(mkdir -p \".claude/skills/{tech_stack_name}\")
|
|
||||||
|
|
||||||
// Write each module file using Write tool
|
|
||||||
Write({ file_path: \".claude/skills/{tech_stack_name}/principles.md\", content: ... })
|
|
||||||
Write({ file_path: \".claude/skills/{tech_stack_name}/patterns.md\", content: ... })
|
|
||||||
Write({ file_path: \".claude/skills/{tech_stack_name}/practices.md\", content: ... })
|
|
||||||
Write({ file_path: \".claude/skills/{tech_stack_name}/testing.md\", content: ... })
|
|
||||||
Write({ file_path: \".claude/skills/{tech_stack_name}/config.md\", content: ... })
|
|
||||||
// Write frameworks.md only if composite
|
|
||||||
|
|
||||||
// Write metadata.json
|
|
||||||
Write({
|
|
||||||
file_path: \".claude/skills/{tech_stack_name}/metadata.json\",
|
|
||||||
content: JSON.stringify({
|
|
||||||
tech_stack_name,
|
|
||||||
components,
|
|
||||||
is_composite,
|
|
||||||
generated_at: timestamp,
|
|
||||||
source: \"exa-research\",
|
|
||||||
research_summary: { total_queries, total_sources }
|
|
||||||
})
|
|
||||||
})
|
|
||||||
```
|
|
||||||
|
|
||||||
6. **Report Completion**:
|
|
||||||
|
|
||||||
Provide summary:
|
|
||||||
- Tech stack name
|
|
||||||
- Files created (count)
|
|
||||||
- Exa queries executed
|
|
||||||
- Sources consulted
|
|
||||||
|
|
||||||
**CRITICAL**:
|
|
||||||
- MUST read external template files before generating content (step 3 for modules, step 4 for index)
|
|
||||||
- You have FULL autonomy - read files, execute Exa, synthesize content, write files
|
|
||||||
- Do NOT return JSON or structured data - produce actual .md files
|
|
||||||
- Handle errors gracefully (Exa failures, missing files, template read failures)
|
|
||||||
- If tech stack cannot be determined, ask orchestrator to clarify
|
|
||||||
"
|
|
||||||
)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Completion Criteria**:
|
|
||||||
- Agent task executed successfully
|
|
||||||
- 5-6 modular files written to `.claude/skills/{tech_stack_name}/`
|
|
||||||
- metadata.json written
|
|
||||||
- Agent reports completion
|
|
||||||
|
|
||||||
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
### Phase 3: Generate SKILL.md Index
|
|
||||||
|
|
||||||
**Note**: This phase **ALWAYS executes** - generates or updates the SKILL index.
|
|
||||||
|
|
||||||
**Goal**: Read generated module files and create SKILL.md index with loading recommendations
|
|
||||||
|
|
||||||
**Steps**:
|
|
||||||
|
|
||||||
1. **Verify Generated Files**:
|
|
||||||
```bash
|
|
||||||
bash(find ".claude/skills/${TECH_STACK_NAME}" -name "*.md" -type f | sort)
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Read metadata.json**:
|
|
||||||
```javascript
|
|
||||||
Read(.claude/skills/${TECH_STACK_NAME}/metadata.json)
|
|
||||||
// Extract: tech_stack_name, components, is_composite, research_summary
|
|
||||||
```
|
|
||||||
|
|
||||||
3. **Read Module Headers** (optional, first 20 lines):
|
|
||||||
```javascript
|
|
||||||
Read(.claude/skills/${TECH_STACK_NAME}/principles.md, limit: 20)
|
|
||||||
// Repeat for other modules
|
|
||||||
```
|
|
||||||
|
|
||||||
4. **Read SKILL Index Template**:
|
|
||||||
|
|
||||||
```javascript
|
|
||||||
Read(~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt)
|
|
||||||
```
|
|
||||||
|
|
||||||
5. **Generate SKILL.md Index**:
|
|
||||||
|
|
||||||
Follow template from tech-skill-index.txt with variable substitutions:
|
|
||||||
- `{TECH_STACK_NAME}`: From metadata.json
|
|
||||||
- `{MAIN_TECH}`: Primary technology
|
|
||||||
- `{ISO_TIMESTAMP}`: Current timestamp
|
|
||||||
- `{QUERY_COUNT}`: From research_summary
|
|
||||||
- `{SOURCE_COUNT}`: From research_summary
|
|
||||||
- Conditional sections for composite tech stacks
|
|
||||||
|
|
||||||
Template provides structure for:
|
|
||||||
- Frontmatter with metadata
|
|
||||||
- Overview and tech stack description
|
|
||||||
- Module organization (Core/Practical/Config sections)
|
|
||||||
- Loading recommendations (Quick/Implementation/Complete)
|
|
||||||
- Usage guidelines and auto-trigger keywords
|
|
||||||
- Research metadata and version history
|
|
||||||
|
|
||||||
6. **Write SKILL.md**:
|
|
||||||
```javascript
|
|
||||||
Write({
|
|
||||||
file_path: `.claude/skills/${TECH_STACK_NAME}/SKILL.md`,
|
|
||||||
content: generatedIndexMarkdown
|
|
||||||
})
|
|
||||||
```
|
|
||||||
|
|
||||||
**Completion Criteria**:
|
|
||||||
- SKILL.md index written
|
|
||||||
- All module files verified
|
|
||||||
- Loading recommendations included
|
|
||||||
|
|
||||||
**TodoWrite**: Mark phase 3 completed
|
|
||||||
|
|
||||||
**Final Report**:
|
|
||||||
```
|
|
||||||
Tech Stack SKILL Package Complete
|
|
||||||
|
|
||||||
Tech Stack: {TECH_STACK_NAME}
|
|
||||||
Location: .claude/skills/{TECH_STACK_NAME}/
|
|
||||||
|
|
||||||
Files: SKILL.md + 5-6 modules + metadata.json
|
|
||||||
Exa Research: {queries} queries, {sources} sources
|
|
||||||
|
|
||||||
Usage: Skill(command: "{TECH_STACK_NAME}")
|
|
||||||
```
|
|
||||||
|
|
||||||
---

## Implementation Details

### TodoWrite Patterns

**Initialization** (before Phase 1):
```javascript
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "in_progress", "activeForm": "Preparing context paths"},
  {"content": "Agent produces all module files", "status": "pending", "activeForm": "Agent producing files"},
  {"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL index"}
]})
```

**Full Path** (SKIP_GENERATION = false):
```javascript
// After Phase 1
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "completed", ...},
  {"content": "Agent produces all module files", "status": "in_progress", ...},
  {"content": "Generate SKILL.md index", "status": "pending", ...}
]})

// After Phase 2
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "completed", ...},
  {"content": "Agent produces all module files", "status": "completed", ...},
  {"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})

// After Phase 3
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "completed", ...},
  {"content": "Agent produces all module files", "status": "completed", ...},
  {"content": "Generate SKILL.md index", "status": "completed", ...}
]})
```

**Skip Path** (SKIP_GENERATION = true):
```javascript
// After Phase 1 (skip Phase 2)
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "completed", ...},
  {"content": "Agent produces all module files", "status": "completed", ...}, // Skipped
  {"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
```

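The full and skip paths differ only in whether Phase 2 runs. As a minimal sketch of that dispatch (the `runPhase*` callbacks are hypothetical stand-ins for the real phase logic, not part of the command):

```javascript
// Hypothetical sketch of the SKIP_GENERATION dispatch.
// Phase 1 decides the path; Phase 3 always writes or updates the index.
function runPipeline(runPhase1, runPhase2, runPhase3) {
  const { skipGeneration } = runPhase1();
  const executed = ["phase1"];
  if (!skipGeneration) {
    runPhase2(); // Full path: agent writes all module files
    executed.push("phase2");
  }
  runPhase3(); // Index generation runs on both paths
  executed.push("phase3");
  return executed;
}

// Skip path: an existing SKILL was detected, so Phase 2 is bypassed
const order = runPipeline(
  () => ({ skipGeneration: true }),
  () => {},
  () => {}
);
// order is ["phase1", "phase3"]
```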
### Execution Flow

**Full Path**:
```
User → TodoWrite Init → Phase 1 (prepare) → Phase 2 (agent writes files) → Phase 3 (write index) → Report
```

**Skip Path**:
```
User → TodoWrite Init → Phase 1 (detect existing) → Phase 3 (update index) → Report
```

### Error Handling

**Phase 1 Errors**:
- Invalid session ID: Report error, verify session exists
- Missing context-package: Warn, fall back to direct mode
- No tech stack detected: Ask user to specify tech stack name

**Phase 2 Errors (Agent)**:
- Agent task fails: Retry once, report if it fails again
- Exa API failures: Agent handles internally with retries
- Incomplete results: Warn user, proceed with partial data if minimum sections are available

**Phase 3 Errors**:
- Write failures: Report which files failed
- Missing files: Note in SKILL.md, suggest regeneration

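The "retry once, report if it fails again" policy for Phase 2 can be sketched as follows; `runAgentTask` is a hypothetical stand-in for the real agent invocation:

```javascript
// Hypothetical sketch of the Phase 2 "retry once" policy.
function runWithRetry(runAgentTask, maxAttempts = 2) {
  let lastError = null;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return { ok: true, result: runAgentTask(), attempt };
    } catch (err) {
      lastError = err; // First failure: retry; second failure: report below
    }
  }
  return { ok: false, error: String(lastError) };
}

// A task that fails once (e.g. a transient Exa error), then succeeds on the retry
let calls = 0;
const outcome = runWithRetry(() => {
  calls += 1;
  if (calls === 1) throw new Error("transient Exa failure");
  return "modules written";
});
// outcome.ok is true and outcome.attempt is 2
```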
---

## Parameters

```bash
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate] [--tool <gemini|qwen>]
```

**Arguments**:
- **session-id | tech-stack-name**: Input source (auto-detected by WFS- prefix)
  - Session mode: `WFS-user-auth-v2` - Extract tech stack from workflow
  - Direct mode: `"typescript"`, `"typescript-react-nextjs"` - User specifies
- **--regenerate**: Force regenerate existing SKILL (deletes and recreates)
- **--tool**: Reserved for future CLI integration (default: gemini)

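The auto-detection rule above hinges on the `WFS-` prefix. A minimal sketch of that check (`detectMode` is illustrative, not the command's actual implementation):

```javascript
// Hypothetical sketch: session IDs carry the WFS- prefix,
// anything else is treated as a tech stack name (direct mode).
function detectMode(input) {
  if (input.startsWith("WFS-")) {
    return { mode: "session", sessionId: input };
  }
  return { mode: "direct", techStackName: input.toLowerCase() };
}

// detectMode("WFS-user-auth-v2").mode is "session"
// detectMode("typescript-react-nextjs").mode is "direct"
```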
---

## Examples

**Generated File Structure** (for all examples):
```
.claude/skills/{tech-stack}/
├── SKILL.md        # Index (Phase 3)
├── principles.md   # Agent (Phase 2)
├── patterns.md     # Agent
├── practices.md    # Agent
├── testing.md      # Agent
├── config.md       # Agent
├── frameworks.md   # Agent (if composite)
└── metadata.json   # Agent
```

### Direct Mode - Single Stack

```bash
/memory:tech-research "typescript"
```

**Workflow**:
1. Phase 1: Detects direct mode, checks existing SKILL
2. Phase 2: Agent executes 4 Exa queries, writes 5 modules
3. Phase 3: Generates SKILL.md index

### Direct Mode - Composite Stack

```bash
/memory:tech-research "typescript-react-nextjs"
```

**Workflow**:
1. Phase 1: Decomposes into ["typescript", "react", "nextjs"]
2. Phase 2: Agent executes 6 Exa queries (4 base + 2 components), writes 6 modules (adds frameworks.md)
3. Phase 3: Generates SKILL.md index with framework integration

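The decomposition in step 1 assumes composite names are hyphen-delimited, so it reduces to a split; this sketch shows the assumed convention, not the command's actual implementation:

```javascript
// Sketch of composite-stack decomposition (assumed hyphen-delimited naming).
function decomposeStack(name) {
  const components = name.split("-").filter(Boolean);
  return {
    components,
    isComposite: components.length > 1, // composite stacks add frameworks.md
  };
}

const stack = decomposeStack("typescript-react-nextjs");
// stack.components is ["typescript", "react", "nextjs"]; stack.isComposite is true
```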
### Session Mode - Extract from Workflow

```bash
/memory:tech-research WFS-user-auth-20251104
```

**Workflow**:
1. Phase 1: Reads session, extracts tech stack: `python-fastapi-sqlalchemy`
2. Phase 2: Agent researches Python + FastAPI + SQLAlchemy, writes 6 modules
3. Phase 3: Generates SKILL.md index

### Regenerate Existing

```bash
/memory:tech-research "react" --regenerate
```

**Workflow**:
1. Phase 1: Deletes existing SKILL due to --regenerate
2. Phase 2: Agent executes fresh Exa research (latest 2025 practices)
3. Phase 3: Generates updated SKILL.md

### Skip Path - Fast Update

```bash
/memory:tech-research "python"
```

**Scenario**: SKILL already exists with 7 files

**Workflow**:
1. Phase 1: Detects existing SKILL, sets SKIP_GENERATION = true
2. Phase 2: **SKIPPED**
3. Phase 3: Updates SKILL.md index only (5-10x faster)

@@ -99,10 +99,10 @@ src/ (depth 1) → SINGLE-LAYER STRATEGY
 Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
 
 // Get module structure
-Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
+Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});
 
 // OR with --path
-Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
+Bash({command: "cd <target-path> && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});
 ```
 
 **Parse output** `depth:N|path:<PATH>|...` to extract module paths and count.
@@ -185,7 +185,7 @@ for (let layer of [3, 2, 1]) {
 let strategy = module.depth >= 3 ? "multi-layer" : "single-layer";
 for (let tool of tool_order) {
 Bash({
-command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "${strategy}" "." "${tool}"`,
+command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"${strategy}","path":".","tool":"${tool}"}'`,
 run_in_background: false
 });
 if (bash_result.exit_code === 0) {
@@ -244,7 +244,7 @@ MODULES:
 
 TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}
 
-EXECUTION SCRIPT: ~/.claude/scripts/update_module_claude.sh
+EXECUTION SCRIPT: ccw tool exec update_module_claude
 - Accepts strategy parameter: multi-layer | single-layer
 - Tool execution via direct CLI commands (gemini/qwen/codex)
 
@@ -252,7 +252,7 @@ EXECUTION FLOW (for each module):
 1. Tool fallback loop (exit on first success):
 for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
 Bash({
-command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "{{strategy}}" "." "${tool}"`,
+command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"{{strategy}}","path":".","tool":"${tool}"}'`,
 run_in_background: false
 })
 exit_code=$?
@@ -41,7 +41,7 @@ Orchestrates context-aware CLAUDE.md updates for changed modules using batched a
 ```javascript
 // Detect changed modules
-Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
+Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});
 
 // Cache git changes
 Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
@@ -102,7 +102,7 @@ for (let depth of sorted_depths.reverse()) { // N → 0
 return async () => {
 for (let tool of tool_order) {
 Bash({
-command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "${tool}"`,
+command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"${tool}"}'`,
 run_in_background: false
 });
 if (bash_result.exit_code === 0) {
@@ -184,21 +184,21 @@ EXECUTION:
 For each module above:
 1. Try tool 1:
 Bash({
-command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_1}}"`,
+command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_1}}"}'`,
 run_in_background: false
 })
 → Success: Report "✅ {{module_path}} updated with {{tool_1}}", proceed to next module
 → Failure: Try tool 2
 2. Try tool 2:
 Bash({
-command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_2}}"`,
+command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_2}}"}'`,
 run_in_background: false
 })
 → Success: Report "✅ {{module_path}} updated with {{tool_2}}", proceed to next module
 → Failure: Try tool 3
 3. Try tool 3:
 Bash({
-command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_3}}"`,
+command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_3}}"}'`,
 run_in_background: false
 })
 → Success: Report "✅ {{module_path}} updated with {{tool_3}}", proceed to next module
@@ -187,7 +187,7 @@ Objectives:
 
 3. Use Gemini for aggregation (optional):
 Command pattern:
-cd .workflow/.archives/{session_id} && gemini -p "
+ccw cli -p "
 PURPOSE: Extract lessons and conflicts from workflow session
 TASK:
 • Analyze IMPL_PLAN and lessons from manifest
@@ -198,7 +198,7 @@ Objectives:
 CONTEXT: @IMPL_PLAN.md @workflow-session.json
 EXPECTED: Structured lessons and conflicts in JSON format
 RULES: Template reference from skill-aggregation.txt
-"
+" --tool gemini --mode analysis --cd .workflow/.archives/{session_id}
 
 3.5. **Generate SKILL.md Description** (CRITICAL for auto-loading):
 
@@ -334,7 +334,7 @@ Objectives:
 - Sort sessions by date
 
 2. Use Gemini for final aggregation:
-gemini -p "
+ccw cli -p "
 PURPOSE: Aggregate lessons and conflicts from all workflow sessions
 TASK:
 • Group successes by functional domain
@@ -345,7 +345,7 @@ Objectives:
 CONTEXT: [Provide aggregated JSON data]
 EXPECTED: Final aggregated structure for SKILL documents
 RULES: Template reference from skill-aggregation.txt
-"
+" --tool gemini --mode analysis
 
 3. Read templates for formatting (same 4 templates as single mode)
 
@@ -32,12 +32,16 @@ Identify inconsistencies, duplications, ambiguities, and underspecified items be
 IF --session parameter provided:
 session_id = provided session
 ELSE:
-CHECK: find .workflow/active/ -name "WFS-*" -type d
-IF active_session EXISTS:
-session_id = get_active_session()
-ELSE:
+# Auto-detect active session
+active_sessions = bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
+IF active_sessions is empty:
 ERROR: "No active workflow session found. Use --session <session-id>"
 EXIT
+ELSE IF active_sessions has multiple entries:
+# Use most recently modified session
+session_id = bash(ls -td .workflow/active/WFS-*/ 2>/dev/null | head -1 | xargs basename)
+ELSE:
+session_id = basename(active_sessions[0])
 
 # Derive absolute paths
 session_dir = .workflow/active/WFS-{session}
@@ -45,13 +49,15 @@ brainstorm_dir = session_dir/.brainstorming
 task_dir = session_dir/.task
 
 # Validate required artifacts
-SYNTHESIS = brainstorm_dir/role analysis documents
+# Note: "role analysis documents" refers to [role]/analysis.md files (e.g., product-manager/analysis.md)
+SYNTHESIS_DIR = brainstorm_dir # Contains role analysis files: */analysis.md
 IMPL_PLAN = session_dir/IMPL_PLAN.md
 TASK_FILES = Glob(task_dir/*.json)
 
 # Abort if missing
-IF NOT EXISTS(SYNTHESIS):
-ERROR: "role analysis documents not found. Run /workflow:brainstorm:synthesis first"
+SYNTHESIS_FILES = Glob(brainstorm_dir/*/analysis.md)
+IF SYNTHESIS_FILES.count == 0:
+ERROR: "No role analysis documents found in .brainstorming/*/analysis.md. Run /workflow:brainstorm:synthesis first"
 EXIT
 
 IF NOT EXISTS(IMPL_PLAN):
@@ -95,7 +101,7 @@ Load only minimal necessary context from each artifact:
 - Dependencies (depends_on, blocks)
 - Context (requirements, focus_paths, acceptance, artifacts)
 - Flow control (pre_analysis, implementation_approach)
-- Meta (complexity, priority, use_codex)
+- Meta (complexity, priority)
 
 ### 3. Build Semantic Models
 
@@ -135,27 +141,27 @@ Focus on high-signal findings. Limit to 50 findings total; aggregate remainder i
 - **Unmapped Tasks**: Tasks with no clear requirement linkage
 - **NFR Coverage Gaps**: Non-functional requirements (performance, security, scalability) not reflected in tasks
 
-#### B. Consistency Validation
+#### C. Consistency Validation
 
 - **Requirement Conflicts**: Tasks contradicting synthesis requirements
 - **Architecture Drift**: IMPL_PLAN architecture not matching synthesis ADRs
 - **Terminology Drift**: Same concept named differently across IMPL_PLAN and tasks
 - **Data Model Inconsistency**: Tasks referencing entities/fields not in synthesis data model
 
-#### C. Dependency Integrity
+#### D. Dependency Integrity
 
 - **Circular Dependencies**: Task A depends on B, B depends on C, C depends on A
 - **Missing Dependencies**: Task requires outputs from another task but no explicit dependency
 - **Broken Dependencies**: Task depends on non-existent task ID
 - **Logical Ordering Issues**: Implementation tasks before foundational setup without dependency note
 
-#### D. Synthesis Alignment
+#### E. Synthesis Alignment
 
 - **Priority Conflicts**: High-priority synthesis requirements mapped to low-priority tasks
 - **Success Criteria Mismatch**: IMPL_PLAN success criteria not covering synthesis acceptance criteria
 - **Risk Mitigation Gaps**: Critical risks in synthesis without corresponding mitigation tasks
 
-#### E. Task Specification Quality
+#### F. Task Specification Quality
 
 - **Ambiguous Focus Paths**: Tasks with vague or missing focus_paths
 - **Underspecified Acceptance**: Tasks without clear acceptance criteria
@@ -163,12 +169,12 @@ Focus on high-signal findings. Limit to 50 findings total; aggregate remainder i
 - **Weak Flow Control**: Tasks without clear implementation_approach or pre_analysis steps
 - **Missing Target Files**: Tasks without flow_control.target_files specification
 
-#### F. Duplication Detection
+#### G. Duplication Detection
 
 - **Overlapping Task Scope**: Multiple tasks with nearly identical descriptions
 - **Redundant Requirements Coverage**: Same requirement covered by multiple tasks without clear partitioning
 
-#### G. Feasibility Assessment
+#### H. Feasibility Assessment
 
 - **Complexity Misalignment**: Task marked "simple" but requires multiple file modifications
 - **Resource Conflicts**: Parallel tasks requiring same resources/files
@@ -203,7 +209,9 @@ Use this heuristic to prioritize findings:
 
 ### 6. Produce Compact Analysis Report
 
-Output a Markdown report (no file writes) with the following structure:
+**Report Generation**: Generate report content and save to file.
+
+Output a Markdown report with the following structure:
 
 ```markdown
 ## Action Plan Verification Report
@@ -217,7 +225,11 @@ Output a Markdown report (no file writes) with the following structure:
 ### Executive Summary
 
 - **Overall Risk Level**: CRITICAL | HIGH | MEDIUM | LOW
-- **Recommendation**: BLOCK_EXECUTION | PROCEED_WITH_FIXES | PROCEED_WITH_CAUTION | PROCEED
+- **Recommendation**: (See decision matrix below)
+  - BLOCK_EXECUTION: Critical issues exist (must fix before proceeding)
+  - PROCEED_WITH_FIXES: High issues exist, no critical (fix recommended before execution)
+  - PROCEED_WITH_CAUTION: Medium issues only (proceed with awareness)
+  - PROCEED: Low issues only or no issues (safe to execute)
 - **Critical Issues**: {count}
 - **High Issues**: {count}
 - **Medium Issues**: {count}
@@ -322,14 +334,27 @@ Output a Markdown report (no file writes) with the following structure:
 
 #### Action Recommendations
 
-**If CRITICAL Issues Exist**:
-- **BLOCK EXECUTION** - Resolve critical issues before proceeding
-- Use TodoWrite to track all required fixes
-- Fix broken dependencies and circular references
+**Recommendation Decision Matrix**:
 
-**If Only HIGH/MEDIUM/LOW Issues**:
-- **PROCEED WITH CAUTION** - Fix high-priority issues first
-- Use TodoWrite to systematically track and complete all improvements
+| Condition | Recommendation | Action |
+|-----------|----------------|--------|
+| Critical > 0 | BLOCK_EXECUTION | Must resolve all critical issues before proceeding |
+| Critical = 0, High > 0 | PROCEED_WITH_FIXES | Fix high-priority issues before execution |
+| Critical = 0, High = 0, Medium > 0 | PROCEED_WITH_CAUTION | Proceed with awareness of medium issues |
+| Only Low or None | PROCEED | Safe to execute workflow |
+
+**If CRITICAL Issues Exist** (BLOCK_EXECUTION):
+- Resolve all critical issues before proceeding
+- Use TodoWrite to track required fixes
+- Fix broken dependencies and circular references first
+
+**If HIGH Issues Exist** (PROCEED_WITH_FIXES):
+- Fix high-priority issues before execution
+- Use TodoWrite to systematically track and complete improvements
+
+**If Only MEDIUM/LOW Issues** (PROCEED_WITH_CAUTION / PROCEED):
+- Can proceed with execution
+- Address issues during or after implementation
 
 #### TodoWrite-Based Remediation Workflow
 
@@ -359,13 +384,18 @@ Priority Order:
 
 ### 7. Save Report and Execute TodoWrite-Based Remediation
 
-**Save Analysis Report**:
+**Step 7.1: Save Analysis Report**:
 ```bash
 report_path = ".workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md"
 Write(report_path, full_report_content)
 ```
 
-**After Report Generation**:
+**Step 7.2: Display Report Summary to User**:
+- Show executive summary with counts
+- Display recommendation (BLOCK/PROCEED_WITH_FIXES/PROCEED_WITH_CAUTION/PROCEED)
+- List critical and high issues if any
+
+**Step 7.3: After Report Generation**:
 
 1. **Extract Findings**: Parse all issues by severity
 2. **Create TodoWrite Task List**: Convert findings to actionable todos
@@ -81,6 +81,7 @@ ELSE:
 **Framework-Based Analysis** (when guidance-specification.md exists):
 ```bash
 Task(subagent_type="conceptual-planning-agent",
+run_in_background=false,
 prompt="Generate API designer analysis addressing topic framework
 
 ## Framework Integration Required
@@ -136,6 +137,7 @@ Task(subagent_type="conceptual-planning-agent",
 # For existing analysis updates
 IF update_mode = "incremental":
 Task(subagent_type="conceptual-planning-agent",
+run_in_background=false,
 prompt="Update existing API designer analysis
 
 ## Current Analysis Context
@@ -2,452 +2,360 @@
|
|||||||
name: artifacts
|
name: artifacts
|
||||||
description: Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis
|
description: Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis
|
||||||
argument-hint: "topic or challenge description [--count N]"
|
argument-hint: "topic or challenge description [--count N]"
|
||||||
allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*)
|
allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*), AskUserQuestion(*)
|
||||||
---
|
---
|
||||||
|
|
||||||
## Overview
|
## Overview
|
||||||
|
|
||||||
Six-phase workflow: **Automatic project context collection** → Extract topic challenges → Select roles → Generate task-specific questions → Detect conflicts → Generate confirmed guidance (declarative statements only).
|
Seven-phase workflow: **Context collection** → **Topic analysis** → **Role selection** → **Role questions** → **Conflict resolution** → **Final check** → **Generate specification**
|
||||||
|
|
||||||
|
All user interactions use AskUserQuestion tool (max 4 questions per call, multi-round).
|
||||||
|
|
||||||
**Input**: `"GOAL: [objective] SCOPE: [boundaries] CONTEXT: [background]" [--count N]`
|
**Input**: `"GOAL: [objective] SCOPE: [boundaries] CONTEXT: [background]" [--count N]`
|
||||||
**Output**: `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md` (CONFIRMED/SELECTED format)
|
**Output**: `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md`
|
||||||
**Core Principle**: Questions dynamically generated from project context + topic keywords/challenges, NOT from generic templates
|
**Core Principle**: Questions dynamically generated from project context + topic keywords, NOT generic templates
|
||||||
|
|
||||||
**Parameters**:
|
**Parameters**:
|
||||||
- `topic` (required): Topic or challenge description (structured format recommended)
|
- `topic` (required): Topic or challenge description (structured format recommended)
|
||||||
- `--count N` (optional): Number of roles user WANTS to select (system will recommend N+2 options for user to choose from, default: 3)
|
- `--count N` (optional): Number of roles to select (system recommends N+2 options, default: 3)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Quick Reference
|
||||||
|
|
||||||
|
### Phase Summary
|
||||||
|
|
||||||
|
| Phase | Goal | AskUserQuestion | Storage |
|
||||||
|
|-------|------|-----------------|---------|
|
||||||
|
| 0 | Context collection | - | context-package.json |
|
||||||
|
| 1 | Topic analysis | 2-4 questions | intent_context |
|
||||||
|
| 2 | Role selection | 1 multi-select | selected_roles |
|
||||||
|
| 3 | Role questions | 3-4 per role | role_decisions[role] |
|
||||||
|
| 4 | Conflict resolution | max 4 per round | cross_role_decisions |
|
||||||
|
| 4.5 | Final check | progressive rounds | additional_decisions |
|
||||||
|
| 5 | Generate spec | - | guidance-specification.md |
|
||||||
|
|
||||||
|
### AskUserQuestion Pattern
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
// Single-select (Phase 1, 3, 4)
|
||||||
|
AskUserQuestion({
|
||||||
|
questions: [
|
||||||
|
{
|
||||||
|
question: "{问题文本}",
|
||||||
|
header: "{短标签}", // max 12 chars
|
||||||
|
multiSelect: false,
|
||||||
|
options: [
|
||||||
|
{ label: "{选项}", description: "{说明和影响}" },
|
||||||
|
{ label: "{选项}", description: "{说明和影响}" },
|
||||||
|
{ label: "{选项}", description: "{说明和影响}" }
|
||||||
|
]
|
||||||
|
}
|
||||||
|
// ... max 4 questions per call
|
||||||
|
]
|
||||||
|
})
|
||||||
|
|
||||||
|
// Multi-select (Phase 2)
|
||||||
|
AskUserQuestion({
|
||||||
|
questions: [{
|
||||||
|
question: "请选择 {count} 个角色",
|
||||||
|
header: "角色选择",
|
||||||
|
multiSelect: true,
|
||||||
|
options: [/* max 4 options per call */]
|
||||||
|
}]
|
||||||
|
})
|
||||||
|
```
|
||||||
|
|
||||||
|
### Multi-Round Execution
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
const BATCH_SIZE = 4;
|
||||||
|
for (let i = 0; i < allQuestions.length; i += BATCH_SIZE) {
|
||||||
|
const batch = allQuestions.slice(i, i + BATCH_SIZE);
|
||||||
|
AskUserQuestion({ questions: batch });
|
||||||
|
// Store responses before next round
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---

## Task Tracking

**TodoWrite Rule**: EXTEND auto-parallel's task list (NOT replace/overwrite)

**When called from auto-parallel**:
- Find the artifacts parent task → Mark it "in_progress"
- APPEND sub-tasks (Phase 0-5) after it → Mark each as it completes
- When Phase 5 completes → Mark the parent "completed"
- PRESERVE all other auto-parallel tasks (role agents, synthesis)

**Standalone Mode**:
```json
[
  {"content": "Initialize session", "status": "pending", "activeForm": "Initializing"},
  {"content": "Phase 0: Context collection", "status": "pending", "activeForm": "Phase 0"},
  {"content": "Phase 1: Topic analysis (2-4 questions)", "status": "pending", "activeForm": "Phase 1"},
  {"content": "Phase 2: Role selection", "status": "pending", "activeForm": "Phase 2"},
  {"content": "Phase 3: Role questions (per role)", "status": "pending", "activeForm": "Phase 3"},
  {"content": "Phase 4: Conflict resolution", "status": "pending", "activeForm": "Phase 4"},
  {"content": "Phase 4.5: Final clarification", "status": "pending", "activeForm": "Phase 4.5"},
  {"content": "Phase 5: Generate specification", "status": "pending", "activeForm": "Phase 5"}
]
```
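The EXTEND rule amounts to an in-place splice rather than a list rewrite; a minimal sketch of the merge semantics (the actual TodoWrite call is a tool invocation, not shown here):

```javascript
// Sketch: EXTEND semantics for the auto-parallel task list.
// Appends artifacts sub-tasks AFTER the parent task while preserving
// every other task (EXTEND, not replace/overwrite).
function extendTaskList(existing, parentContent, subTasks) {
  const i = existing.findIndex(t => t.content === parentContent);
  if (i === -1) return existing;                 // parent missing: leave list untouched
  const updated = existing.map(t => ({ ...t })); // copy, never mutate caller's list
  updated[i].status = "in_progress";             // mark parent in_progress
  updated.splice(i + 1, 0, ...subTasks);         // insert sub-tasks right after parent
  return updated;                                // all other tasks preserved
}
```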
---
## Execution Phases

### Session Management

- Check `.workflow/active/` for existing sessions
- Multiple → Prompt selection | Single → Use it | None → Create `WFS-[topic-slug]`
- Parse `--count N` parameter (default: 3)
- Store decisions in `workflow-session.json`
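The resolution rule above can be sketched as a pure function (a sketch only; `sessions` is assumed to be the list of directory names found under `.workflow/active/`, and `promptSelection` a hypothetical stand-in for the user prompt):

```javascript
// Sketch: resolve the active session per the rule above.
// sessions: directory names found under .workflow/active/
function resolveSession(sessions, topicSlug, promptSelection) {
  if (sessions.length > 1) return promptSelection(sessions); // multiple → ask the user
  if (sessions.length === 1) return sessions[0];             // single → use it
  return `WFS-${topicSlug}`;                                 // none → create new session
}
```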
### Phase 0: Context Collection

**Goal**: Gather project context BEFORE user interaction

**Steps**:
1. Check if `context-package.json` exists → Skip if valid
2. Invoke `context-search-agent` (BRAINSTORM MODE - lightweight)
3. Output: `.workflow/active/WFS-{session-id}/.process/context-package.json`

**Graceful Degradation**: If the agent fails, continue to Phase 1 without context

```javascript
Task(
  subagent_type="context-search-agent",
  run_in_background=false,
  description="Gather project context for brainstorm",
  prompt=`
Execute context-search-agent in BRAINSTORM MODE (Phase 1-2 only).

Session: ${session_id}
Task: ${task_description}
Output: .workflow/${session_id}/.process/context-package.json

Required fields: metadata, project_context, assets, dependencies, conflict_detection
`
)
```
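Step 1's skip-if-valid check can be sketched as follows (`readJson` is a hypothetical stand-in for the Read tool, returning null when the file is missing; the exact path layout is an assumption):

```javascript
// Sketch of Step 1: skip Phase 0 when a valid context-package
// already exists for this session.
function hasValidContextPackage(readJson, sessionId) {
  const path = `.workflow/active/WFS-${sessionId}/.process/context-package.json`;
  const pkg = readJson(path);                       // null if the file is missing
  return !!pkg && pkg.metadata.session_id === sessionId;
}
```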
### Phase 1: Topic Analysis

**Goal**: Extract keywords/challenges enriched by Phase 0 context

**Steps**:
1. Load Phase 0 context (tech_stack, modules, conflict_risk)
2. Deep topic analysis (entities, challenges, constraints, metrics)
3. Generate 2-4 context-aware probing questions
4. AskUserQuestion → Store to `session.intent_context`

**Example**:
```javascript
AskUserQuestion({
  questions: [
    {
      question: "实时协作平台的主要技术挑战?",
      header: "核心挑战",
      multiSelect: false,
      options: [
        { label: "实时数据同步", description: "100+用户同时在线,状态同步复杂度高" },
        { label: "可扩展性架构", description: "用户规模增长时的系统扩展能力" },
        { label: "冲突解决机制", description: "多用户同时编辑的冲突处理策略" }
      ]
    },
    {
      question: "MVP阶段最关注的指标?",
      header: "优先级",
      multiSelect: false,
      options: [
        { label: "功能完整性", description: "实现所有核心功能" },
        { label: "用户体验", description: "流畅的交互体验和响应速度" },
        { label: "系统稳定性", description: "高可用性和数据一致性" }
      ]
    }
  ]
})
```

**⚠️ CRITICAL**: Questions MUST reference topic keywords. A generic "Project type?" violates dynamic generation.
### Phase 2: Role Selection

**Goal**: User selects roles from intelligent recommendations

**Available Roles**: data-architect, product-manager, product-owner, scrum-master, subject-matter-expert, system-architect, test-strategist, ui-designer, ux-expert

**Steps**:
1. Analyze Phase 1 keywords → Recommend count+2 roles with rationale
2. AskUserQuestion (multiSelect=true) → Store to `session.selected_roles`
3. If count+2 > 4, split into multiple rounds

**Example**:
```javascript
AskUserQuestion({
  questions: [{
    question: "请选择 3 个角色参与头脑风暴分析",
    header: "角色选择",
    multiSelect: true,
    options: [
      { label: "system-architect", description: "实时同步架构设计和技术选型" },
      { label: "ui-designer", description: "协作界面用户体验和状态展示" },
      { label: "product-manager", description: "功能优先级和MVP范围决策" },
      { label: "data-architect", description: "数据同步模型和存储方案设计" }
    ]
  }]
})
```

**⚠️ CRITICAL**: The user MUST interact. NEVER auto-select roles without confirmation.
### Phase 3: Role-Specific Questions

**Goal**: Generate deep questions mapping role expertise to Phase 1 challenges

**Algorithm**:
1. FOR each selected role:
   - Map Phase 1 challenges to the role's domain
   - Generate 3-4 questions (implementation depth, trade-offs, edge cases)
   - AskUserQuestion per role → Store to `session.role_decisions[role]`
2. Process roles sequentially (one at a time for clarity)
3. If a role needs > 4 questions, split into multiple rounds

**Example** (system-architect):
```javascript
AskUserQuestion({
  questions: [
    {
      question: "100+ 用户实时状态同步方案?",
      header: "状态同步",
      multiSelect: false,
      options: [
        { label: "Event Sourcing", description: "完整事件历史,支持回溯,存储成本高" },
        { label: "集中式状态管理", description: "实现简单,单点瓶颈风险" },
        { label: "CRDT", description: "去中心化,自动合并,学习曲线陡" }
      ]
    },
    {
      question: "两个用户同时编辑冲突如何解决?",
      header: "冲突解决",
      multiSelect: false,
      options: [
        { label: "自动合并", description: "用户无感知,可能产生意外结果" },
        { label: "手动解决", description: "用户控制,增加交互复杂度" },
        { label: "版本控制", description: "保留历史,需要分支管理" }
      ]
    }
  ]
})
```
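The per-role loop can be sketched; `generateRoleQuestions` and `ask` are hypothetical stand-ins for the model's question generation and the AskUserQuestion call:

```javascript
// Sketch: process selected roles sequentially, storing each role's
// answers under session.role_decisions[role].
function runRoleQuestions(session, roles, generateRoleQuestions, ask) {
  session.role_decisions = {};
  for (const role of roles) {                        // one role at a time
    const questions = generateRoleQuestions(role);   // 3-4 questions per role
    session.role_decisions[role] = ask(questions);   // max 4 per call
  }
  return session;
}
```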
### Phase 4: Conflict Resolution

**Goal**: Resolve ACTUAL conflicts from Phase 3 answers, not pre-defined relationships

**Algorithm**:
1. Analyze Phase 3 answers for conflicts:
   - Contradictory choices (e.g., "fast iteration" vs "complex Event Sourcing")
   - Missing integration (e.g., "Optimistic updates" but no conflict handling)
   - Implicit dependencies (e.g., "Live cursors" but no auth defined)
2. Generate clarification questions referencing SPECIFIC Phase 3 choices
3. AskUserQuestion (max 4 per call, multi-round) → Store to `session.cross_role_decisions`
4. If NO conflicts: Skip Phase 4 (inform the user: "未检测到跨角色冲突,跳过Phase 4")

**Example**:
```javascript
AskUserQuestion({
  questions: [{
    question: "CRDT 与 UI 回滚期望冲突,如何解决?\n背景:system-architect选择CRDT,ui-designer期望回滚UI",
    header: "架构冲突",
    multiSelect: false,
    options: [
      { label: "采用 CRDT", description: "保持去中心化,调整UI期望" },
      { label: "显示合并界面", description: "增加用户交互,展示冲突详情" },
      { label: "切换到 OT", description: "支持回滚,增加服务器复杂度" }
    ]
  }]
})
```
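Conflict detection itself is AI reasoning rather than a fixed rule set, but the data flow over `role_decisions` can be sketched (`detectConflict` is a hypothetical stand-in for that cross-role judgment):

```javascript
// Sketch: scan every pair of role decisions; collect one clarification
// question object per detected conflict. detectConflict returns such an
// object, or null when the pair is compatible.
function findConflicts(roleDecisions, detectConflict) {
  const roles = Object.keys(roleDecisions);
  const conflicts = [];
  for (let i = 0; i < roles.length; i++) {
    for (let j = i + 1; j < roles.length; j++) {
      const c = detectConflict(roles[i], roleDecisions[roles[i]],
                               roles[j], roleDecisions[roles[j]]);
      if (c) conflicts.push(c);
    }
  }
  return conflicts;   // empty → skip Phase 4
}
```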
### Phase 4.5: Final Clarification

**Purpose**: Ensure no important points are missed before generating the specification

**Steps**:
1. Ask the initial check:
```javascript
AskUserQuestion({
  questions: [{
    question: "在生成最终规范之前,是否有前面未澄清的重点需要补充?",
    header: "补充确认",
    multiSelect: false,
    options: [
      { label: "无需补充", description: "前面的讨论已经足够完整" },
      { label: "需要补充", description: "还有重要内容需要澄清" }
    ]
  }]
})
```
2. If "需要补充":
   - Analyze the user's additional points
   - Generate progressive questions (not role-bound, interconnected)
   - AskUserQuestion (max 4 per round) → Store to `session.additional_decisions`
   - Repeat until the user confirms completion
3. If "无需补充": Proceed to Phase 5

**Progressive Pattern**: Questions are interconnected; each round informs the next; continue until resolved.
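The repeat-until-confirmed flow can be sketched (`ask` and `generateQuestions` are hypothetical stand-ins for AskUserQuestion and the model's follow-up generation):

```javascript
// Sketch of the Phase 4.5 progressive loop: keep asking clarification
// rounds (max 4 questions each) until the user answers "无需补充".
function finalCheckLoop(session, ask, generateQuestions) {
  session.additional_decisions = [];
  while (ask("是否有前面未澄清的重点需要补充?") === "需要补充") {
    const round = generateQuestions(session).slice(0, 4); // max 4 per round
    session.additional_decisions.push(ask(round));        // answers inform the next round
  }
  return session;                                         // then proceed to Phase 5
}
```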
### Phase 5: Generate Specification

**Steps**:
1. Load all decisions: `intent_context` + `selected_roles` + `role_decisions` + `cross_role_decisions` + `additional_decisions`
2. Transform Q&A to declarative: Questions → Headers, Answers → CONFIRMED/SELECTED statements
3. Generate `guidance-specification.md`
4. Update `workflow-session.json` (metadata only)
5. Validate: No interrogative sentences, all decisions traceable

---
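Step 5's "no interrogative sentences" validation can be sketched as a simple line heuristic (an illustration, not an actual validator from the codebase):

```javascript
// Sketch: flag lines of the generated specification that still end with
// a question mark (ASCII ? or fullwidth ?); decisions must be declarative.
function hasInterrogatives(mdText) {
  return mdText.split("\n").some(line => /[??]$/.test(line.trim()));
}
```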
## Question Guidelines

### Core Principle

**Target**: 开发者(理解技术但需要从用户需求出发)

**Question Structure**: `[业务场景/需求前提] + [技术关注点]`

**Option Structure**: `标签:[技术方案] + 说明:[业务影响] + [技术权衡]`

### Quality Rules

**MUST Include**:
- ✅ All questions in Chinese (用中文提问)
- ✅ 业务场景作为问题前提
- ✅ 技术选项的业务影响说明
- ✅ 量化指标和约束条件

**MUST Avoid**:
- ❌ 纯技术选型无业务上下文
- ❌ 过度抽象的用户体验问题
- ❌ 脱离话题的通用架构问题

### Phase-Specific Requirements

| Phase | Focus | Key Requirements |
|-------|-------|------------------|
| 1 | 意图理解 | Reference topic keywords, 用户场景、业务约束、优先级 |
| 2 | 角色推荐 | Intelligent analysis (NOT keyword mapping), explain relevance |
| 3 | 角色问题 | Reference Phase 1 keywords, concrete options with trade-offs |
| 4 | 冲突解决 | Reference SPECIFIC Phase 3 choices, explain impact on both roles |
---

## Output & Governance

### Output Template

**File**: `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md`
## Next Steps
**⚠️ Automatic Continuation** (when called from auto-parallel):
- auto-parallel assigns agents for role-specific analysis
- Each selected role gets a dedicated conceptual-planning-agent
- Agents read this guidance-specification.md for context
## Appendix: Decision Tracking
| Decision ID | Category | Question | Selected | Phase | Rationale |
|-------------|----------|----------|----------|-------|-----------|
| D-003+ | [Role] | [Q] | [A] | 3 | [Why] |
## Question Generation Guidelines

### Core Principle: Developer-Facing Questions with User Context

**Target Audience**: Developers (technically fluent, but reasoning from user needs)

**Generation Philosophy**:
1. **Phase 1**: User scenarios, business constraints, priorities (establish context)
2. **Phase 2**: Intelligent role recommendation based on topic analysis (not keyword mapping)
3. **Phase 3**: Business requirements + technology selection (requirement-driven technical decisions)
4. **Phase 4**: Business trade-offs for technical conflicts (help developers understand the impact)

### Universal Quality Rules

**Question Structure** (all phases):
```
[Business scenario / requirement premise] + [technical concern]
```

**Option Structure** (all phases):
```
Label: [short name of technical approach] + (business characteristics)
Description: [business impact] + [technical trade-offs]
```

**MUST Include** (all phases):
- ✅ All questions in Chinese (用中文提问)
- ✅ Business scenario stated as the question premise
- ✅ Business-impact notes for each technical option
- ✅ Quantified metrics and constraints

**MUST Avoid** (all phases):
- ❌ Pure technology choices without business context
- ❌ Overly abstract user-experience questions
- ❌ Generic architecture questions detached from the topic

### Phase-Specific Requirements

**Phase 1 Requirements**:
- Questions MUST reference topic keywords (NOT generic "Project type?")
- Focus: user scenarios (who uses it? how? how often?), business constraints (budget, timeline, team, compliance)
- Success metrics: performance targets, user-experience goals
- Priority ranking: MVP vs long-term roadmap

**Phase 3 Requirements**:
- Questions MUST reference Phase 1 keywords (e.g., "real-time", "100 users")
- Options MUST be concrete approaches relevant to the topic
- Each option includes trade-offs specific to this use case
- Include requirement-driven technical questions with quantified metrics (concurrency, latency, availability)

**Phase 4 Requirements**:
- Questions MUST reference SPECIFIC Phase 3 choices in background context
- Options address the detected conflict directly
- Each option explains the impact on both conflicting roles
- NEVER use a static "Cross-Role Matrix" - ALWAYS analyze actual Phase 3 answers
- Focus: business trade-offs of technical conflicts; help developers understand the impact of each choice
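The question/option structure above can be sketched as a tiny builder. This is an illustrative sketch only: `buildQuestion` and its field names (`approach`, `businessTrait`, `businessImpact`, `tradeoff`) are hypothetical, not part of the actual workflow schema; the Chinese strings follow the document's own questions-in-Chinese rule.

```javascript
// Hypothetical sketch: assemble a question following the
// "[business premise] + [technical concern]" structure, with options
// labeled "[approach](business trait)" and described as
// "[business impact][technical trade-off]".
function buildQuestion(premise, concern, options) {
  return {
    question: `${premise}${concern}`,
    options: options.map(o => ({
      label: `${o.approach}(${o.businessTrait})`,
      description: `${o.businessImpact}${o.tradeoff}`
    }))
  };
}

const q = buildQuestion(
  "实时协作场景下需支持 100 并发用户,",
  "应选择哪种同步机制?",
  [{ approach: "WebSocket", businessTrait: "低延迟", businessImpact: "毫秒级同步;", tradeoff: "需要长连接运维成本" }]
);
```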
### File Structure

```
.workflow/active/WFS-[topic]/
├── workflow-session.json          # Metadata ONLY
├── .process/
│   └── context-package.json       # Phase 0 output
└── .brainstorming/
    └── guidance-specification.md  # Full guidance content
```
### Session Metadata

**workflow-session.json** (metadata only):

```json
{
  "session_id": "WFS-{topic-slug}",
  ...
}
```

**⚠️ Rule**: Session JSON stores ONLY metadata (session_id, selected_roles[], topic, timestamps). All guidance content goes to guidance-specification.md.
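One way to enforce the metadata-only rule is a small whitelist check. The allowed key list below is an assumption based on the fields this document names (session_id, selected_roles, topic, timestamps); adjust it to the real schema.

```javascript
// Assumed metadata whitelist based on the fields named in this document.
const ALLOWED_KEYS = new Set(["session_id", "selected_roles", "topic", "created_at", "updated_at"]);

// Returns the keys that violate the metadata-only rule (guidance content
// leaking into workflow-session.json); an empty array means compliant.
function violatesMetadataRule(sessionJson) {
  return Object.keys(sessionJson).filter(k => !ALLOWED_KEYS.has(k));
}
```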
### Validation Checklist

Generated guidance-specification.md MUST:

- ✅ No interrogative sentences (use CONFIRMED/SELECTED)
- ✅ Every decision traceable to a user answer
- ✅ Cross-role conflicts resolved or documented
- ✅ Next steps concrete and specific
- ✅ No content duplication between .json and .md

### Update Mechanism

```
IF guidance-specification.md EXISTS:
    Prompt: "Regenerate completely / Update sections / Cancel"
ELSE:
    Run full Phase 0-5 flow
```

### Governance Rules

- All decisions MUST use CONFIRMED/SELECTED (NO "?" in decision sections)
- Every decision MUST trace to a user answer
- Conflicts MUST be resolved (not marked "TBD")
- Next steps MUST be actionable
- Topic preserved as authoritative reference

**CRITICAL**: Guidance is the single source of truth for downstream phases. Ambiguity violates governance.
## Coordinator Role

**This command is a pure orchestrator**: Dispatches 3 phases in sequence (interactive framework → parallel role analysis → synthesis), coordinating specialized commands/agents through the task attachment model.

**Task Attachment Model**:
- SlashCommand dispatch **expands workflow** by attaching sub-tasks to the current TodoWrite
- Task agent dispatch **attaches analysis tasks** to the orchestrator's TodoWrite
- Phase 1: artifacts command attaches its internal tasks (Phase 1-5)
- Phase 2: N conceptual-planning-agent tasks attached in parallel
- Phase 3: synthesis command attaches its internal tasks

This workflow runs **fully autonomously** once triggered. Phase 1 (artifacts) handles user interaction; Phase 2 (role agents) runs in parallel.

1. **User triggers**: `/workflow:brainstorm:auto-parallel "topic" [--count N]`
2. **Dispatch Phase 1** → artifacts command (tasks ATTACHED) → Auto-continues
3. **Dispatch Phase 2** → Parallel role agents (N tasks ATTACHED concurrently) → Auto-continues
4. **Dispatch Phase 3** → Synthesis command (tasks ATTACHED) → Reports final summary

**Auto-Continue Mechanism**:
- TodoList tracks current phase status and dynamically manages task attachment/collapse

## Core Rules

1. **Start Immediately**: First action is TodoWrite initialization; second action is dispatching the Phase 1 command
2. **No Preliminary Analysis**: Do not analyze the topic before Phase 1 - artifacts handles all analysis
3. **Parse Every Output**: Extract selected_roles from workflow-session.json after Phase 1
4. **Auto-Continue via TodoList**: Check TodoList status to dispatch the next pending phase automatically
5. **Track Progress**: Update TodoWrite dynamically with the task attachment/collapse pattern
6. **Task Attachment Model**: SlashCommand and Task dispatches **attach** sub-tasks to the current workflow. The orchestrator **executes** these attached tasks itself, then **collapses** them after completion
7. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and dispatch the next phase
8. **Parallel Execution**: Phase 2 attaches multiple agent tasks simultaneously for concurrent execution
## Usage
### Phase 1: Interactive Framework Generation

**Step 1: Dispatch** - Interactive framework generation via the artifacts command

```javascript
SlashCommand(command="/workflow:brainstorm:artifacts \"{topic}\" --count {N}")
```

**What It Does**:
- Topic analysis: Extract challenges, generate task-specific questions

- workflow-session.json contains selected_roles[] (metadata only, no content duplication)
- Session directory `.workflow/active/WFS-{topic}/.brainstorming/` exists

**TodoWrite Update (Phase 1 SlashCommand dispatched - tasks attached)**:
```json
[
  {"content": "Phase 0: Parameter Parsing", "status": "completed", "activeForm": "Parsing count parameter"},
  ...
]
```

**Note**: SlashCommand dispatch **attaches** artifacts' 5 internal tasks. The orchestrator **executes** these tasks sequentially.

**Next Action**: Tasks attached → **Execute Phase 1.1-1.5** sequentially

OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/{role}/
TOPIC: {user-provided-topic}

## Flow Control Steps
1. load_topic_framework → .workflow/active/WFS-{session}/.brainstorming/guidance-specification.md
2. load_role_template → ~/.claude/workflows/cli-templates/planning-roles/{role}.md
3. load_session_metadata → .workflow/active/WFS-{session}/workflow-session.json
4. load_style_skill (ui-designer only, if style_skill_package) → .claude/skills/style-{style_skill_package}/

## Analysis Requirements
**Primary Reference**: Original user prompt from workflow-session.json is authoritative
**Template Integration**: Apply role template guidelines within the framework structure

## Expected Deliverables
1. **analysis.md** (optionally with analysis-{slug}.md sub-documents)
2. **Framework Reference**: @../guidance-specification.md
3. **User Intent Alignment**: Validate against session_context

## Completion Criteria
- Address each discussion point from guidance-specification.md with {role-name} expertise
"
```
**Parallel Dispatch**:
- Launch N agents simultaneously (one message with multiple Task calls)
- Each agent task **attached** to the orchestrator's TodoWrite
- All agents execute concurrently, each attaching their own analysis sub-tasks

- guidance-specification.md path

**Validation**:
- Each role creates `.workflow/active/WFS-{topic}/.brainstorming/{role}/analysis.md`
- Optionally with `analysis-{slug}.md` sub-documents (max 5)
- **File pattern**: `analysis*.md` for globbing
- **FORBIDDEN**: `recommendations.md` or any non-`analysis` prefixed files
- All N role analyses completed

**TodoWrite Update (Phase 2 agents dispatched - tasks attached in parallel)**:
```json
[
  {"content": "Phase 0: Parameter Parsing", "status": "completed", "activeForm": "Parsing count parameter"},
  ...
]
```

**Note**: Multiple Task dispatches **attach** N role analysis tasks simultaneously. The orchestrator **executes** these tasks in parallel.

**Next Action**: Tasks attached → **Execute Phase 2.1-2.N** concurrently
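The file-naming validation above can be sketched as follows. The `maxFiles` limit of 6 assumes one analysis.md plus up to 5 analysis-{slug}.md sub-documents, as described in the validation rules.

```javascript
// Mirrors the `analysis*.md` glob rule: names must start with "analysis"
// and end with ".md"; recommendations.md (or any other prefix) is rejected.
function isValidAnalysisFile(name) {
  return name.startsWith("analysis") && name.endsWith(".md");
}

// One analysis.md plus up to 5 analysis-{slug}.md sub-documents.
function validateRoleDir(fileNames, maxFiles = 6) {
  const invalid = fileNames.filter(n => !isValidAnalysisFile(n));
  if (invalid.length > 0) throw new Error(`Forbidden file names: ${invalid.join(", ")}`);
  if (!fileNames.includes("analysis.md")) throw new Error("analysis.md missing");
  if (fileNames.length > maxFiles) throw new Error("too many analysis documents");
  return true;
}
```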
### Phase 3: Synthesis Generation

**Step 3: Dispatch** - Synthesis integration via the synthesis command

```javascript
SlashCommand(command="/workflow:brainstorm:synthesis --session {sessionId}")
```

**What It Does**:
- Load original user intent from workflow-session.json

- `.workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.md` exists
- Synthesis references all role analyses

**TodoWrite Update (Phase 3 SlashCommand dispatched - tasks attached)**:
```json
[
  {"content": "Phase 0: Parameter Parsing", "status": "completed", "activeForm": "Parsing count parameter"},
  ...
]
```

**Note**: SlashCommand dispatch **attaches** synthesis' internal tasks. The orchestrator **executes** these tasks sequentially.

**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
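The attach/collapse pattern that all three phases rely on can be sketched as two list operations. The prefix-based matching here is an illustrative simplification of how TodoWrite entries might be grouped.

```javascript
// Illustrative sketch of the attach/collapse lifecycle: dispatching a phase
// appends its sub-tasks; completing the phase collapses them to one entry.
function attach(todos, phase, subTasks) {
  return [...todos, ...subTasks.map(t => ({ content: `${phase}: ${t}`, status: "pending" }))];
}

function collapse(todos, phase, summary) {
  const kept = todos.filter(t => !t.content.startsWith(`${phase}:`));
  return [...kept, { content: summary, status: "completed" }];
}
```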
### Key Principles

1. **Task Attachment** (when SlashCommand/Task dispatched):
   - Sub-command's or agent's internal tasks are **attached** to the orchestrator's TodoWrite
   - Phase 1: `/workflow:brainstorm:artifacts` attaches 5 internal tasks (Phase 1.1-1.5)
   - Phase 2: Multiple `Task(conceptual-planning-agent)` calls attach N role analysis tasks simultaneously

- No user intervention required between phases
- TodoWrite dynamically reflects current execution state

**Lifecycle Summary**: Initial pending tasks → Phase 1 dispatched (artifacts tasks ATTACHED) → Artifacts sub-tasks executed → Phase 1 completed (tasks COLLAPSED) → Phase 2 dispatched (N role tasks ATTACHED in parallel) → Role analyses executed concurrently → Phase 2 completed (tasks COLLAPSED) → Phase 3 dispatched (synthesis tasks ATTACHED) → Synthesis sub-tasks executed → Phase 3 completed (tasks COLLAPSED) → Workflow complete.

### Brainstorming Workflow Specific Features

```
.workflow/active/WFS-{topic}/
├── workflow-session.json          # Session metadata ONLY
└── .brainstorming/
    ├── guidance-specification.md  # Framework (Phase 1)
    ├── {role}/
    │   ├── analysis.md            # Main document (with optional @references)
    │   └── analysis-{slug}.md     # Section documents (max 5)
    └── synthesis-specification.md # Integration (Phase 3)
```
name: synthesis
description: Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent
argument-hint: "[optional: --session session-id]"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*), Edit(*), Glob(*), AskUserQuestion(*)
---

## Overview

Six-phase workflow to eliminate ambiguities and enhance conceptual depth in role analyses:

**Phase 1-2**: Session detection → File discovery → Path preparation

**Phase 3A**: Cross-role analysis agent → Generate recommendations

**Phase 4**: User selects enhancements → User answers clarifications (via AskUserQuestion)

**Phase 5**: Parallel update agents (one per role)

**Phase 6**: Context package update → Metadata update → Completion report

All user interactions use the AskUserQuestion tool (max 4 questions per call, multi-round).

**Document Flow**:
- Input: `[role]/analysis*.md`, `guidance-specification.md`, session metadata
- Output: Updated `[role]/analysis*.md` with Enhancements + Clarifications sections
---

## Quick Reference

### Phase Summary

| Phase | Goal | Executor | Output |
|-------|------|----------|--------|
| 1 | Session detection | Main flow | session_id, brainstorm_dir |
| 2 | File discovery | Main flow | role_analysis_paths |
| 3A | Cross-role analysis | Agent | enhancement_recommendations |
| 4 | User interaction | Main flow + AskUserQuestion | update_plan |
| 5 | Document updates | Parallel agents | Updated analysis*.md |
| 6 | Finalization | Main flow | context-package.json, report |

### AskUserQuestion Pattern

```javascript
// Enhancement selection (multi-select)
AskUserQuestion({
  questions: [{
    question: "请选择要应用的改进建议",
    header: "改进选择",
    multiSelect: true,
    options: [
      { label: "EP-001: API Contract", description: "添加详细的请求/响应 schema 定义" },
      { label: "EP-002: User Intent", description: "明确用户需求优先级和验收标准" }
    ]
  }]
})

// Clarification questions (single-select, multi-round)
AskUserQuestion({
  questions: [
    {
      question: "MVP 阶段的核心目标是什么?",
      header: "用户意图",
      multiSelect: false,
      options: [
        { label: "快速验证", description: "最小功能集,快速上线获取反馈" },
        { label: "技术壁垒", description: "完善架构,为长期发展打基础" },
        { label: "功能完整", description: "覆盖所有规划功能,延迟上线" }
      ]
    }
  ]
})
```

---
## Task Tracking

```json
[
  {"content": "Detect session and validate analyses", "status": "pending", "activeForm": "Detecting session"},
  {"content": "Discover role analysis file paths", "status": "pending", "activeForm": "Discovering paths"},
  {"content": "Execute analysis agent (cross-role analysis)", "status": "pending", "activeForm": "Executing analysis"},
  {"content": "Present enhancements via AskUserQuestion", "status": "pending", "activeForm": "Selecting enhancements"},
  {"content": "Clarification questions via AskUserQuestion", "status": "pending", "activeForm": "Clarifying"},
  {"content": "Execute parallel update agents", "status": "pending", "activeForm": "Updating documents"},
  {"content": "Update context package and metadata", "status": "pending", "activeForm": "Finalizing"}
]
```

---
## Execution Phases

### Phase 1: Discovery & Validation

1. **Detect Session**: Use the `--session` parameter or find `.workflow/active/WFS-*`
2. **Validate Files**:
   - `guidance-specification.md` (optional, warn if missing)
   - `*/analysis*.md` (required, error if empty)
3. **Load User Intent**: Extract from `workflow-session.json`

### Phase 2: Role Discovery & Path Preparation

**Main flow prepares file paths for the Agent**:

1. **Discover Analysis Files**:
   - Glob: `.workflow/active/WFS-{session}/.brainstorming/*/analysis*.md`
   - Supports: analysis.md + analysis-{slug}.md (max 5)
2. **Extract Role Information**:
   - `role_analysis_paths`: Relative paths
   - `participating_roles`: Role names from directories
3. **Pass to Agent**: session_id, brainstorm_dir, role_analysis_paths, participating_roles
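Step 2 above (deriving participating_roles from the globbed paths) is essentially a map-and-dedupe, sketched here; it assumes POSIX-style forward-slash paths relative to brainstorm_dir.

```javascript
// Sketch of step 2: the role name is the first path segment of each
// globbed analysis file; Set de-duplicates roles with multiple documents.
function extractRoles(analysisPaths) {
  return [...new Set(analysisPaths.map(p => p.split("/")[0]))];
}
```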
### Phase 3A: Analysis & Enhancement Agent

**Agent executes cross-role analysis**:

```javascript
Task(conceptual-planning-agent, `
## Agent Mission
Analyze role documents, identify conflicts/gaps, generate enhancement recommendations

## Input
- brainstorm_dir: ${brainstorm_dir}
- role_analysis_paths: ${role_analysis_paths}
- participating_roles: ${participating_roles}

## Flow Control Steps
1. load_session_metadata → Read workflow-session.json
2. load_role_analyses → Read all analysis files
3. cross_role_analysis → Identify consensus, conflicts, gaps, ambiguities
4. generate_recommendations → Format as EP-001, EP-002, ...

## Output Format
[
  {
    "id": "EP-001",
    "title": "API Contract Specification",
    "affected_roles": ["system-architect", "api-designer"],
    "category": "Architecture",
    "current_state": "High-level API descriptions",
    "enhancement": "Add detailed contract definitions",
    "rationale": "Enables precise implementation",
    "priority": "High"
  }
]
`)
```

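Before the main flow presents these recommendations, it can sanity-check the agent's returned array. A minimal sketch, assuming the output format above; `validateRecommendations` and `REQUIRED_FIELDS` are illustrative names, not part of the command spec:

```javascript
// Hypothetical guard: verify the agent's JSON array before Phase 4 consumes it.
const REQUIRED_FIELDS = [
  'id', 'title', 'affected_roles', 'category',
  'current_state', 'enhancement', 'rationale', 'priority'
];

function validateRecommendations(recs) {
  return recs.every(ep =>
    REQUIRED_FIELDS.every(f => f in ep) &&     // all fields present
    /^EP-\d{3}$/.test(ep.id) &&                // sequential EP-001 style ids
    Array.isArray(ep.affected_roles) &&
    ['High', 'Medium', 'Low'].includes(ep.priority)
  );
}
```

A malformed array would then trigger a retry of the agent call rather than a broken selection round.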
### Phase 4: User Interaction

**All interactions via AskUserQuestion (Chinese questions)**

#### Step 1: Enhancement Selection

```javascript
// If enhancements > 4, split into multiple rounds
const enhancements = [...]; // from Phase 3A
const BATCH_SIZE = 4;

for (let i = 0; i < enhancements.length; i += BATCH_SIZE) {
  const batch = enhancements.slice(i, i + BATCH_SIZE);

  AskUserQuestion({
    questions: [{
      question: `请选择要应用的改进建议 (第${Math.floor(i/BATCH_SIZE)+1}轮)`,
      header: "改进选择",
      multiSelect: true,
      options: batch.map(ep => ({
        label: `${ep.id}: ${ep.title}`,
        description: `影响: ${ep.affected_roles.join(', ')} | ${ep.enhancement}`
      }))
    }]
  })

  // Store selections before next round
}

// User can also skip: provide "跳过" option
```

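The round-splitting used above can be isolated as a small helper: with `BATCH_SIZE = 4`, N items produce ceil(N / 4) question rounds of at most 4 options each. A minimal sketch (`toBatches` is an illustrative name):

```javascript
const BATCH_SIZE = 4;

// Split a list into consecutive batches of at most BATCH_SIZE items.
function toBatches(items) {
  const batches = [];
  for (let i = 0; i < items.length; i += BATCH_SIZE) {
    batches.push(items.slice(i, i + BATCH_SIZE));
  }
  return batches;
}
```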
#### Step 2: Clarification Questions

```javascript
// Generate questions based on 9-category taxonomy scan
// Categories: User Intent, Requirements, Architecture, UX, Feasibility, Risk, Process, Decisions, Terminology

const clarifications = [...]; // from analysis
const BATCH_SIZE = 4;

for (let i = 0; i < clarifications.length; i += BATCH_SIZE) {
  const batch = clarifications.slice(i, i + BATCH_SIZE);
  const currentRound = Math.floor(i / BATCH_SIZE) + 1;
  const totalRounds = Math.ceil(clarifications.length / BATCH_SIZE);

  AskUserQuestion({
    questions: batch.map(q => ({
      question: q.question,
      header: q.category.substring(0, 12),
      multiSelect: false,
      options: q.options.map(opt => ({
        label: opt.label,
        description: opt.description
      }))
    }))
  })

  // Store answers before next round
}
```

### Question Guidelines

**Target**: Developers (understand the technology, but need to reason from user requirements)

**Question Structure**: `[cross-role analysis finding] + [decision point to clarify]`
**Option Structure**: `label: [concrete approach] + description: [business impact] + [technical trade-off]`

**9-Category Taxonomy**:

| Category | Focus | Example Question Pattern |
|----------|-------|--------------------------|
| User Intent | User goals | "MVP阶段核心目标?" + 验证/壁垒/完整性 |
| Requirements | Requirement refinement | "功能优先级如何排序?" + 核心/增强/可选 |
| Architecture | Architecture decisions | "技术栈选择考量?" + 熟悉度/先进性/成熟度 |
| UX | User experience | "交互复杂度取舍?" + 简洁/丰富/渐进 |
| Feasibility | Feasibility | "资源约束下的范围?" + 最小/标准/完整 |
| Risk | Risk management | "风险容忍度?" + 保守/平衡/激进 |
| Process | Process norms | "迭代节奏?" + 快速/稳定/灵活 |
| Decisions | Decision confirmation | "冲突解决方案?" + 方案A/方案B/折中 |
| Terminology | Terminology alignment | "统一使用哪个术语?" + 术语A/术语B |

**Quality Rules**:

**MUST Include**:
- ✅ All questions in Chinese (用中文提问)
- ✅ Grounded in specific findings from the cross-role analysis
- ✅ Options include business-impact explanations
- ✅ Resolve an actual ambiguity or conflict

**MUST Avoid**:
- ❌ Generic questions unrelated to the role analyses
- ❌ Repeating content already confirmed in the artifacts phase
- ❌ Overly detailed implementation-level questions

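The "Chinese wording, 2-4 options, each with a description" rules above can be checked mechanically before a question reaches the user. An illustrative lint (not part of the spec; `lintQuestion` and `hasChinese` are assumed names):

```javascript
// True if the string contains at least one CJK Unified Ideograph.
const hasChinese = s => /[\u4e00-\u9fff]/.test(s);

// Hypothetical validator for a generated clarification question object.
function lintQuestion(q) {
  return hasChinese(q.question) &&
    q.options.length >= 2 && q.options.length <= 4 &&
    q.options.every(o => Boolean(o.label) && Boolean(o.description));
}
```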
#### Step 3: Build Update Plan

```javascript
update_plan = {
  "role1": {
    "enhancements": ["EP-001", "EP-003"],
    "clarifications": [
      {"question": "...", "answer": "...", "category": "..."}
    ]
  },
  "role2": {
    "enhancements": ["EP-002"],
    "clarifications": [...]
  }
}
```

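The structure above can be derived from the user's selections by grouping each enhancement under every role it affects. A minimal sketch, assuming clarification objects also carry an `affected_roles` list (`buildUpdatePlan` and that field are illustrative, not spec):

```javascript
// Hypothetical builder: selections + answers → per-role update_plan.
function buildUpdatePlan(selectedEnhancements, answeredClarifications) {
  const plan = {};
  const roleEntry = role =>
    (plan[role] ??= { enhancements: [], clarifications: [] });

  for (const ep of selectedEnhancements) {
    for (const role of ep.affected_roles) {
      roleEntry(role).enhancements.push(ep.id);
    }
  }
  for (const c of answeredClarifications) {
    for (const role of c.affected_roles ?? []) {
      roleEntry(role).clarifications.push({
        question: c.question, answer: c.answer, category: c.category
      });
    }
  }
  return plan;
}
```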
### Phase 5: Parallel Document Update Agents

**Execute in parallel** (one agent per role):

```javascript
// Single message with multiple Task calls for parallelism
Task(conceptual-planning-agent, `
## Agent Mission
Apply enhancements and clarifications to ${role} analysis

## Input
- role: ${role}
- analysis_path: ${brainstorm_dir}/${role}/analysis.md
- enhancements: ${role_enhancements}
- clarifications: ${role_clarifications}
- original_user_intent: ${intent}

## Flow Control Steps
1. load_current_analysis → Read analysis file
2. add_clarifications_section → Insert Q&A section
3. apply_enhancements → Integrate into relevant sections
4. resolve_contradictions → Remove conflicts
5. enforce_terminology → Align terminology
6. validate_intent → Verify alignment with user intent
7. write_updated_file → Save changes

## Output
Updated ${role}/analysis.md
`)
```

**Agent Characteristics**:
- **Isolation**: Each agent updates exactly ONE role (parallel safe)
- **Dependencies**: Zero cross-agent dependencies
- **Validation**: All updates must align with original_user_intent

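Fanning the update plan out into one task per role can be sketched as a pure transform; the resulting objects carry exactly the inputs the agent prompt expects. `planToTaskInputs` is an illustrative helper, not part of the spec:

```javascript
// Hypothetical expansion: update_plan → one task-input object per role,
// ready to be emitted as parallel Task calls in a single message.
function planToTaskInputs(update_plan, brainstorm_dir) {
  return Object.entries(update_plan).map(([role, plan]) => ({
    agent: 'conceptual-planning-agent',
    role,
    analysis_path: `${brainstorm_dir}/${role}/analysis.md`,
    enhancements: plan.enhancements,
    clarifications: plan.clarifications
  }));
}
```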
### Phase 6: Finalization

#### Step 1: Update Context Package

```javascript
// Sync updated analyses to context-package.json
const context_pkg = Read(".workflow/active/WFS-{session}/.process/context-package.json")

// Update guidance-specification if exists
// Update synthesis-specification if exists
// Re-read all role analysis files
// Update metadata timestamps

Write(context_pkg_path, JSON.stringify(context_pkg))
```

#### Step 2: Update Session Metadata

```json
{
  "phases": {
@@ -330,15 +323,13 @@ Updated {role2}/analysis.md with Clarifications section + enhanced content
      "completed_at": "timestamp",
      "participating_roles": [...],
      "clarification_results": {
        "enhancements_applied": ["EP-001", "EP-002"],
        "questions_asked": 3,
        "categories_clarified": ["Architecture", "UX"],
        "roles_updated": ["role1", "role2"]
      },
      "quality_metrics": {
        "user_intent_alignment": "validated",
        "ambiguity_resolution": "complete",
        "terminology_consistency": "enforced"
      }
@@ -347,7 +338,8 @@ Updated {role2}/analysis.md with Clarifications section + enhanced content
}
```

#### Step 3: Completion Report

```markdown
## ✅ Clarification Complete

@@ -359,9 +351,11 @@ Updated {role2}/analysis.md with Clarifications section + enhanced content
✅ PROCEED: `/workflow:plan --session WFS-{session-id}`
```

---

## Output

**Location**: `.workflow/active/WFS-{session}/.brainstorming/[role]/analysis*.md` (in-place updates)

**Updated Structure**:
```markdown
@@ -381,116 +375,24 @@ Updated {role2}/analysis.md with Clarifications section + enhanced content
- Ambiguities resolved, placeholders removed
- Consistent terminology

---

## Quality Checklist

**Content**:
- ✅ All role analyses loaded/analyzed
- ✅ Cross-role analysis (consensus, conflicts, gaps)
- ✅ 9-category ambiguity scan
- ✅ Questions prioritized

**Analysis**:
- ✅ User intent validated
- ✅ Cross-role synthesis complete
- ✅ Ambiguities resolved
- ✅ Terminology consistent

**Documents**:
- ✅ Clarifications section formatted
- ✅ Sections reflect answers
- ✅ No placeholders (TODO/TBD)
- ✅ Valid Markdown

@@ -81,6 +81,7 @@ ELSE:
**Framework-Based Analysis** (when guidance-specification.md exists):
```bash
Task(subagent_type="conceptual-planning-agent",
     run_in_background=false,
     prompt="Generate system architect analysis addressing topic framework

## Framework Integration Required
@@ -136,6 +137,7 @@ Task(subagent_type="conceptual-planning-agent",
# For existing analysis updates
IF update_mode = "incremental":
  Task(subagent_type="conceptual-planning-agent",
       run_in_background=false,
       prompt="Update existing system architect analysis

## Current Analysis Context

.claude/commands/workflow/debug.md (new file, 321 lines)
@@ -0,0 +1,321 @@
---
name: debug
description: Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved
argument-hint: "\"bug description or error message\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---

# Workflow Debug Command (/workflow:debug)

## Overview

Evidence-based interactive debugging command. Systematically identifies root causes through hypothesis-driven logging and iterative verification.

**Core workflow**: Explore → Add Logging → Reproduce → Analyze Log → Fix → Verify

## Usage

```bash
/workflow:debug <BUG_DESCRIPTION>

# Arguments
<bug-description>    Bug description, error message, or stack trace (required)
```

## Execution Process

```
Session Detection:
├─ Check if debug session exists for this bug
├─ EXISTS + debug.log has content → Analyze mode
└─ NOT_FOUND or empty log → Explore mode

Explore Mode:
├─ Locate error source in codebase
├─ Generate testable hypotheses (dynamic count)
├─ Add NDJSON logging instrumentation
└─ Output: Hypothesis list + await user reproduction

Analyze Mode:
├─ Parse debug.log, validate each hypothesis
└─ Decision:
   ├─ Confirmed → Fix root cause
   ├─ Inconclusive → Add more logging, iterate
   └─ All rejected → Generate new hypotheses

Fix & Cleanup:
├─ Apply fix based on confirmed hypothesis
├─ User verifies
├─ Remove debug instrumentation
└─ If not fixed → Return to Analyze mode
```

## Implementation

### Session Setup & Mode Detection

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const bugSlug = bug_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)

const sessionId = `DBG-${bugSlug}-${dateStr}`
const sessionFolder = `.workflow/.debug/${sessionId}`
const debugLogPath = `${sessionFolder}/debug.log`

// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const logHasContent = sessionExists && fs.existsSync(debugLogPath) && fs.statSync(debugLogPath).size > 0

const mode = logHasContent ? 'analyze' : 'explore'

if (!sessionExists) {
  bash(`mkdir -p ${sessionFolder}`)
}
```

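The session-id derivation above can be isolated for illustration (same expressions, hypothetical wrapper name). Note that non-ASCII characters, such as Chinese error text, collapse into `-` because the regex keeps only `[a-z0-9]` runs:

```javascript
// Illustrative wrapper around the slug + date logic used for sessionId.
function makeSessionId(bugDescription, isoDate) {
  const slug = bugDescription.toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')   // each run of other characters → one '-'
    .substring(0, 30);             // cap slug length
  return `DBG-${slug}-${isoDate.substring(0, 10)}`;
}
```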
---

### Explore Mode

**Step 1.1: Locate Error Source**

```javascript
// Extract keywords from bug description
const keywords = extractErrorKeywords(bug_description)
// e.g., ['Stack Length', '未找到', 'registered 0']

// Search codebase for error locations
for (const keyword of keywords) {
  Grep({ pattern: keyword, path: ".", output_mode: "content", "-C": 3 })
}

// Identify affected files and functions
const affectedLocations = [...] // from search results
```

**Step 1.2: Generate Hypotheses (Dynamic)**

```javascript
// Hypothesis categories based on error pattern
const HYPOTHESIS_PATTERNS = {
  "not found|missing|undefined|未找到": "data_mismatch",
  "0|empty|zero|registered 0": "logic_error",
  "timeout|connection|sync": "integration_issue",
  "type|format|parse": "type_mismatch"
}

// Generate hypotheses based on actual issue (NOT fixed count)
function generateHypotheses(bugDescription, affectedLocations) {
  const hypotheses = []

  // Analyze bug and create targeted hypotheses
  // Each hypothesis has:
  // - id: H1, H2, ... (dynamic count)
  // - description: What might be wrong
  // - testable_condition: What to log
  // - logging_point: Where to add instrumentation

  return hypotheses // Could be 1, 3, 5, or more
}

const hypotheses = generateHypotheses(bug_description, affectedLocations)
```

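One way the pattern table could drive classification: try each key as a case-insensitive regex against the bug description, in insertion order. A sketch under that assumption (`classifyBug` is illustrative; the pattern object matches the one defined for Step 1.2):

```javascript
const HYPOTHESIS_PATTERNS = {
  "not found|missing|undefined|未找到": "data_mismatch",
  "0|empty|zero|registered 0": "logic_error",
  "timeout|connection|sync": "integration_issue",
  "type|format|parse": "type_mismatch"
};

// First matching pattern wins; unknown bugs fall through.
function classifyBug(description, patterns = HYPOTHESIS_PATTERNS) {
  for (const [pattern, category] of Object.entries(patterns)) {
    if (new RegExp(pattern, 'i').test(description)) return category;
  }
  return 'unknown';
}
```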
**Step 1.3: Add NDJSON Instrumentation**

For each hypothesis, add logging at the relevant location:

**Python template**:
```python
# region debug [H{n}]
try:
    import json, time
    _dbg = {
        "sid": "{sessionId}",
        "hid": "H{n}",
        "loc": "{file}:{line}",
        "msg": "{testable_condition}",
        "data": {
            # Capture relevant values here
        },
        "ts": int(time.time() * 1000)
    }
    with open(r"{debugLogPath}", "a", encoding="utf-8") as _f:
        _f.write(json.dumps(_dbg, ensure_ascii=False) + "\n")
except: pass
# endregion
```

**JavaScript/TypeScript template**:
```javascript
// region debug [H{n}]
try {
  require('fs').appendFileSync("{debugLogPath}", JSON.stringify({
    sid: "{sessionId}",
    hid: "H{n}",
    loc: "{file}:{line}",
    msg: "{testable_condition}",
    data: { /* Capture relevant values */ },
    ts: Date.now()
  }) + "\n");
} catch(_) {}
// endregion
```

**Output to user**:
```
## Hypotheses Generated

Based on error "{bug_description}", generated {n} hypotheses:

{hypotheses.map(h => `
### ${h.id}: ${h.description}
- Logging at: ${h.logging_point}
- Testing: ${h.testable_condition}
`).join('')}

**Debug log**: ${debugLogPath}

**Next**: Run reproduction steps, then come back for analysis.
```

---

### Analyze Mode

```javascript
// Parse NDJSON log
const entries = Read(debugLogPath).split('\n')
  .filter(l => l.trim())
  .map(l => JSON.parse(l))

// Group by hypothesis
const byHypothesis = groupBy(entries, 'hid')

// Validate each hypothesis
for (const [hid, logs] of Object.entries(byHypothesis)) {
  const hypothesis = hypotheses.find(h => h.id === hid)
  const latestLog = logs[logs.length - 1]

  // Check if evidence confirms or rejects hypothesis
  const verdict = evaluateEvidence(hypothesis, latestLog.data)
  // Returns: 'confirmed' | 'rejected' | 'inconclusive'
}
```

**Output**:
```
## Evidence Analysis

Analyzed ${entries.length} log entries.

${results.map(r => `
### ${r.id}: ${r.description}
- **Status**: ${r.verdict}
- **Evidence**: ${JSON.stringify(r.evidence)}
- **Reason**: ${r.reason}
`).join('')}

${confirmedHypothesis ? `
## Root Cause Identified

**${confirmedHypothesis.id}**: ${confirmedHypothesis.description}

Ready to fix.
` : `
## Need More Evidence

Add more logging or refine hypotheses.
`}
```

|
---
|
||||||
|
|
||||||
|
### Fix & Cleanup
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
// Apply fix based on confirmed hypothesis
|
||||||
|
// ... Edit affected files
|
||||||
|
|
||||||
|
// After user verifies fix works:
|
||||||
|
|
||||||
|
// Remove debug instrumentation (search for region markers)
|
||||||
|
const instrumentedFiles = Grep({
|
||||||
|
pattern: "# region debug|// region debug",
|
||||||
|
output_mode: "files_with_matches"
|
||||||
|
})
|
||||||
|
|
||||||
|
for (const file of instrumentedFiles) {
|
||||||
|
// Remove content between region markers
|
||||||
|
removeDebugRegions(file)
|
||||||
|
}
|
||||||
|
|
||||||
|
console.log(`
|
||||||
|
## Debug Complete
|
||||||
|
|
||||||
|
- Root cause: ${confirmedHypothesis.description}
|
||||||
|
- Fix applied to: ${modifiedFiles.join(', ')}
|
||||||
|
- Debug instrumentation removed
|
||||||
|
`)
|
||||||
|
```

---

## Debug Log Format (NDJSON)

Each line is a JSON object:

```json
{"sid":"DBG-xxx-2025-12-18","hid":"H1","loc":"file.py:func:42","msg":"Check dict keys","data":{"keys":["a","b"],"target":"c","found":false},"ts":1734567890123}
```

| Field | Description |
|-------|-------------|
| `sid` | Session ID |
| `hid` | Hypothesis ID (H1, H2, ...) |
| `loc` | Code location |
| `msg` | What's being tested |
| `data` | Captured values |
| `ts` | Timestamp (ms) |

## Session Folder

```
.workflow/.debug/DBG-{slug}-{date}/
├── debug.log       # NDJSON log (main artifact)
└── resolution.md   # Summary after fix (optional)
```

## Iteration Flow

```
First Call (/workflow:debug "error"):
├─ No session exists → Explore mode
├─ Extract error keywords, search codebase
├─ Generate hypotheses, add logging
└─ Await user reproduction

After Reproduction (/workflow:debug "error"):
├─ Session exists + debug.log has content → Analyze mode
├─ Parse log, evaluate hypotheses
└─ Decision:
   ├─ Confirmed → Fix → User verify
   │   ├─ Fixed → Cleanup → Done
   │   └─ Not fixed → Add logging → Iterate
   ├─ Inconclusive → Add logging → Iterate
   └─ All rejected → New hypotheses → Iterate

Output:
└─ .workflow/.debug/DBG-{slug}-{date}/debug.log
```

## Error Handling

| Situation | Action |
|-----------|--------|
| Empty debug.log | Verify reproduction triggered the code path |
| All hypotheses rejected | Generate new hypotheses with broader scope |
| Fix doesn't work | Iterate with more granular logging |
| >5 iterations | Escalate to `/workflow:lite-fix` with evidence |
@@ -16,9 +16,9 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag

**Lazy Loading**: Task JSONs read **on-demand** during execution, not upfront. TODO_LIST.md + IMPL_PLAN.md provide metadata for planning.

**Loading Strategy**:
- **TODO_LIST.md**: Read in Phase 3 (task metadata, status, dependencies for TodoWrite generation)
- **IMPL_PLAN.md**: Check existence in Phase 2 (normal mode), parse execution strategy in Phase 4A
- **Task JSONs**: Lazy loading - read only when a task is about to execute (Phase 4B)

## Core Rules

**Complete entire workflow autonomously without user interruption, using TodoWrite for comprehensive progress tracking.**

@@ -39,6 +39,52 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag

- **Progress tracking**: Continuous TodoWrite updates throughout entire workflow execution
- **Autonomous completion**: Execute all tasks without user interruption until workflow complete

## Execution Process

```
Normal Mode:
Phase 1: Discovery
├─ Count active sessions
└─ Decision:
   ├─ count=0 → ERROR: No active sessions
   ├─ count=1 → Auto-select session → Phase 2
   └─ count>1 → AskUserQuestion (max 4 options) → Phase 2

Phase 2: Planning Document Validation
├─ Check IMPL_PLAN.md exists
├─ Check TODO_LIST.md exists
└─ Validate .task/ contains IMPL-*.json files

Phase 3: TodoWrite Generation
├─ Update session status to "active" (Step 0)
├─ Parse TODO_LIST.md for task statuses
├─ Generate TodoWrite for entire workflow
└─ Prepare session context paths

Phase 4: Execution Strategy & Task Execution
├─ Step 4A: Parse execution strategy from IMPL_PLAN.md
└─ Step 4B: Execute tasks with lazy loading
   └─ Loop:
      ├─ Get next in_progress task from TodoWrite
      ├─ Lazy load task JSON
      ├─ Launch agent with task context
      ├─ Mark task completed (update IMPL-*.json status)
      │   # Quick fix: Update task status for ccw dashboard
      │   # TS=$(date -Iseconds) && jq --arg ts "$TS" '.status="completed" | .status_history=(.status_history // [])+[{"from":"in_progress","to":"completed","changed_at":$ts}]' IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
      └─ Advance to next task

Phase 5: Completion
├─ Update task statuses in JSON files
├─ Generate summaries
└─ Auto-call /workflow:session:complete

Resume Mode (--resume-session):
├─ Skip Phase 1 & Phase 2
└─ Entry Point: Phase 3 (TodoWrite Generation)
   ├─ Update session status to "active" (if not already)
   └─ Continue: Phase 4 → Phase 5
```

## Execution Lifecycle

### Phase 1: Discovery

@@ -81,16 +127,31 @@ bash(for dir in .workflow/active/WFS-*/; do
done)
```

Use AskUserQuestion to present formatted options (max 4 options shown):
```javascript
// If more than 4 sessions, show most recent 4 with "Other" option for manual input
const sessions = getActiveSessions() // sorted by last modified
const displaySessions = sessions.slice(0, 4)

AskUserQuestion({
  questions: [{
    question: "Multiple active sessions detected. Select one:",
    header: "Session",
    multiSelect: false,
    options: displaySessions.map(s => ({
      label: s.id,
      description: `${s.project} | ${s.progress}`
    }))
    // Note: User can select "Other" to manually enter session ID
  }]
})
```

**Input Validation**:
- If user selects from options: Use selected session ID
- If user selects "Other" and provides input: Validate session exists
- If validation fails: Show error and re-prompt or suggest available sessions

Parse user input (supports: number "1", full ID "WFS-auth-system", or partial "auth"), validate selection, and continue to Phase 2.

#### Step 1.3: Load Session Metadata

@@ -103,24 +164,33 @@ bash(cat .workflow/active/${sessionId}/workflow-session.json)

**Resume Mode**: This entire phase is skipped when `--resume-session="session-id"` flag is provided.

### Phase 2: Planning Document Validation
**Applies to**: Normal mode only (skipped in resume mode)

**Purpose**: Validate planning artifacts exist before execution

**Process**:
1. **Check IMPL_PLAN.md**: Verify file exists (defer detailed parsing to Phase 4A)
2. **Check TODO_LIST.md**: Verify file exists (defer reading to Phase 3)
3. **Validate Task Directory**: Ensure `.task/` contains at least one IMPL-*.json file

**Key Optimization**: Only existence checks here. Actual file reading happens in later phases.
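
The three checks above can be sketched as a small shell guard; the function name and session layout follow the paths used elsewhere in this document, and are illustrative:

```shell
# Phase 2 sketch (illustrative): existence checks only, no file reads
validate_planning_docs() {
  session_dir=".workflow/active/$1"
  [ -f "$session_dir/IMPL_PLAN.md" ] || { echo "ERROR: IMPL_PLAN.md missing"; return 1; }
  [ -f "$session_dir/TODO_LIST.md" ] || { echo "ERROR: TODO_LIST.md missing"; return 1; }
  ls "$session_dir"/.task/IMPL-*.json >/dev/null 2>&1 || { echo "ERROR: no IMPL-*.json tasks"; return 1; }
  echo "Planning artifacts validated"
}
```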

**Resume Mode**: This phase is skipped when `--resume-session` flag is provided. Resume mode entry point is Phase 3.

### Phase 3: TodoWrite Generation
**Applies to**: Both normal and resume modes (resume mode entry point)

**Step 0: Update Session Status to Active**

Before generating TodoWrite, update session status from "planning" to "active":

```bash
# Update session status (idempotent - safe to run if already active)
jq '.status = "active" | .execution_started_at = (.execution_started_at // now | todate)' \
  .workflow/active/${sessionId}/workflow-session.json > tmp.json && \
  mv tmp.json .workflow/active/${sessionId}/workflow-session.json
```

This ensures the dashboard shows the session as "ACTIVE" during execution.

**Process**:
1. **Create TodoWrite List**: Generate task list from TODO_LIST.md (not from task JSONs)
   - Parse TODO_LIST.md to extract all tasks with current statuses
@@ -156,7 +226,7 @@ If IMPL_PLAN.md lacks execution strategy, use intelligent fallback (analyze task

```
while (TODO_LIST.md has pending tasks) {
  next_task_id = getTodoWriteInProgressTask()
  task_json = Read(.workflow/active/{session}/.task/{next_task_id}.json)  // Lazy load
  executeTaskWithAgent(task_json)
  updateTodoListMarkCompleted(next_task_id)
  advanceTodoWriteToNextTask()

@@ -322,6 +392,7 @@ TodoWrite({

```bash
Task(subagent_type="{meta.agent}",
     run_in_background=false,
     prompt="Execute task: {task.title}

{[FLOW_CONTROL]}
@@ -20,9 +20,40 @@ Initialize `.workflow/project.json` with comprehensive project understanding by

/workflow:init --regenerate  # Force regeneration
```

## Execution Process

```
Input Parsing:
└─ Parse --regenerate flag → regenerate = true | false

Decision:
├─ EXISTS + no --regenerate → Exit: "Already initialized"
├─ EXISTS + --regenerate → Backup existing → Continue analysis
└─ NOT_FOUND → Continue analysis

Analysis Flow:
├─ Get project metadata (name, root)
├─ Invoke cli-explore-agent
│   ├─ Structural scan (get_modules_by_depth.sh, find, wc)
│   ├─ Semantic analysis (Gemini CLI)
│   ├─ Synthesis and merge
│   └─ Write .workflow/project.json
└─ Display summary

Output:
└─ .workflow/project.json (+ .backup if regenerate)
```

## Implementation

### Step 1: Parse Input and Check Existing State

**Parse --regenerate flag**:
```javascript
const regenerate = $ARGUMENTS.includes('--regenerate')
```

**Check existing state**:

```bash
bash(test -f .workflow/project.json && echo "EXISTS" || echo "NOT_FOUND")
@@ -55,21 +86,23 @@ bash(cp .workflow/project.json .workflow/project.json.backup)

```javascript
Task(
  subagent_type="cli-explore-agent",
  run_in_background=false,
  description="Deep project analysis",
  prompt=`
Analyze project for workflow initialization and generate .workflow/project.json.

## MANDATORY FIRST STEPS
1. Execute: cat ~/.claude/workflows/cli-templates/schemas/project-json-schema.json (get schema reference)
2. Execute: ccw tool exec get_modules_by_depth '{}' (get project structure)

## Task
Generate complete project.json with:
- project_name: ${projectName}
- initialized_at: current ISO timestamp
- overview: {description, technology_stack, architecture, key_components}
- features: ${regenerate ? 'preserve from backup' : '[] (empty)'}
- development_index: ${regenerate ? 'preserve from backup' : '{feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}'}
- statistics: ${regenerate ? 'preserve from backup' : '{total_features: 0, total_sessions: 0, last_updated}'}
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp, analysis_mode}

## Analysis Requirements
@@ -86,28 +119,11 @@ Generate complete project.json with:

- Patterns: singleton, factory, repository
- Key components: 5-10 modules {name, path, description, importance}

## Execution
1. Structural scan: get_modules_by_depth.sh, find, wc -l
2. Semantic analysis: Gemini for patterns/architecture
3. Synthesis: Merge findings
4. ${regenerate ? 'Merge with preserved features/development_index/statistics from .workflow/project.json.backup' : ''}
5. Write JSON: Write('.workflow/project.json', jsonContent)
6. Report: Return brief completion summary
@@ -136,17 +152,6 @@ Frameworks: ${projectJson.overview.technology_stack.frameworks.join(', ')}

Style: ${projectJson.overview.architecture.style}
Components: ${projectJson.overview.key_components.length} core modules

---
Project state: .workflow/project.json
${regenerate ? 'Backup: .workflow/project.json.backup' : ''}
@@ -138,26 +138,30 @@ If `isPlanJson === false`:

## Execution Process

```
Input Parsing:
└─ Decision (mode detection):
   ├─ --in-memory flag → Mode 1: Load executionContext → Skip user selection
   ├─ Ends with .md/.json/.txt → Mode 3: Read file → Detect format
   │   ├─ Valid plan.json → Use planObject → User selects method + review
   │   └─ Not plan.json → Treat as prompt → User selects method + review
   └─ Other → Mode 2: Prompt description → User selects method + review

Execution:
├─ Step 1: Initialize result tracking (previousExecutionResults = [])
├─ Step 2: Task grouping & batch creation
│   ├─ Extract explicit depends_on (no file/keyword inference)
│   ├─ Group: independent tasks → single parallel batch (maximize utilization)
│   ├─ Group: dependent tasks → sequential phases (respect dependencies)
│   └─ Create TodoWrite list for batches
├─ Step 3: Launch execution
│   ├─ Phase 1: All independent tasks (⚡ single batch, concurrent)
│   └─ Phase 2+: Dependent tasks by dependency order
├─ Step 4: Track progress (TodoWrite updates per batch)
└─ Step 5: Code review (if codeReviewTool ≠ "Skip")

Output:
└─ Execution complete with results in previousExecutionResults[]
```

## Detailed Execution Steps
@@ -177,66 +181,68 @@ previousExecutionResults = []

**Dependency Analysis & Grouping Algorithm**:
```javascript
// Use explicit depends_on from plan.json (no inference from file/keywords)
function extractDependencies(tasks) {
  const taskIdToIndex = {}
  tasks.forEach((t, i) => { taskIdToIndex[t.id] = i })

  return tasks.map((task, i) => {
    // Only use explicit depends_on from plan.json
    const deps = (task.depends_on || [])
      .map(depId => taskIdToIndex[depId])
      .filter(idx => idx !== undefined && idx < i)
    return { ...task, taskIndex: i, dependencies: deps }
  })
}

// Group into batches: maximize parallel execution
function createExecutionCalls(tasks, executionMethod) {
  const tasksWithDeps = extractDependencies(tasks)
  const processed = new Set()
  const calls = []

  // Phase 1: All independent tasks → single parallel batch (maximize utilization)
  const independentTasks = tasksWithDeps.filter(t => t.dependencies.length === 0)
  if (independentTasks.length > 0) {
    independentTasks.forEach(t => processed.add(t.taskIndex))
    calls.push({
      method: executionMethod,
      executionType: "parallel",
      groupId: "P1",
      taskSummary: independentTasks.map(t => t.title).join(' | '),
      tasks: independentTasks
    })
  }

  // Phase 2: Dependent tasks → sequential batches (respect dependencies)
  let sequentialIndex = 1
  let remaining = tasksWithDeps.filter(t => !processed.has(t.taskIndex))

  while (remaining.length > 0) {
    // Find tasks whose dependencies are all satisfied
    const ready = remaining.filter(t =>
      t.dependencies.every(d => processed.has(d))
    )

    if (ready.length === 0) {
      console.warn('Circular dependency detected, forcing remaining tasks')
      ready.push(...remaining)
    }

    // Group ready tasks (can run in parallel within this phase)
    ready.forEach(t => processed.add(t.taskIndex))
    calls.push({
      method: executionMethod,
      executionType: ready.length > 1 ? "parallel" : "sequential",
      groupId: ready.length > 1 ? `P${calls.length + 1}` : `S${sequentialIndex++}`,
      taskSummary: ready.map(t => t.title).join(ready.length > 1 ? ' | ' : ' → '),
      tasks: ready
    })

    remaining = remaining.filter(t => !processed.has(t.taskIndex))
  }

  return calls
}

executionCalls = createExecutionCalls(planObject.tasks, executionMethod).map(c => ({ ...c, id: `[${c.groupId}]` }))
@@ -252,6 +258,33 @@ TodoWrite({

### Step 3: Launch Execution

**Executor Resolution** (task-level executor takes precedence over the global setting):
```javascript
// Resolve a task's executor (prefer executorAssignments, fall back to global executionMethod)
function getTaskExecutor(task) {
  const assignments = executionContext?.executorAssignments || {}
  if (assignments[task.id]) {
    return assignments[task.id].executor // 'gemini' | 'codex' | 'agent'
  }
  // Fallback: map the global executionMethod
  const method = executionContext?.executionMethod || 'Auto'
  if (method === 'Agent') return 'agent'
  if (method === 'Codex') return 'codex'
  // Auto: choose by complexity
  return planObject.complexity === 'Low' ? 'agent' : 'codex'
}

// Group tasks by executor
function groupTasksByExecutor(tasks) {
  const groups = { gemini: [], codex: [], agent: [] }
  tasks.forEach(task => {
    const executor = getTaskExecutor(task)
    groups[executor].push(task)
  })
  return groups
}
```

**Execution Flow**: Parallel batches concurrently → Sequential batches in order
```javascript
const parallel = executionCalls.filter(c => c.executionType === "parallel")
@@ -277,88 +310,117 @@ for (const call of sequential) {

**Option A: Agent Execution**

When to use:
- `getTaskExecutor(task) === "agent"`
- Or `executionMethod = "Agent"` (global fallback)
- Or `executionMethod = "Auto" AND complexity = "Low"` (global fallback)

**Task Formatting Principle**: Each task is a self-contained checklist. The agent only needs to know what THIS task requires, not its position or relation to other tasks.

Agent call format:
```javascript
// Format single task as self-contained checklist
function formatTaskChecklist(task) {
  return `
## ${task.title}

**Target**: \`${task.file}\`
**Action**: ${task.action}

### What to do
${task.description}

### How to do it
${task.implementation.map(step => `- ${step}`).join('\n')}

### Reference
- Pattern: ${task.reference.pattern}
- Examples: ${task.reference.files.join(', ')}
- Notes: ${task.reference.examples}

### Done when
${task.acceptance.map(c => `- [ ] ${c}`).join('\n')}
`
}

// For batch execution: aggregate tasks without numbering
function formatBatchPrompt(batch) {
  const tasksSection = batch.tasks.map(t => formatTaskChecklist(t)).join('\n---\n')

  return `
${originalUserInput ? `## Goal\n${originalUserInput}\n` : ''}

## Tasks

${tasksSection}

${batch.context ? `## Context\n${batch.context}` : ''}

Complete each task according to its "Done when" checklist.
`
}

Task(
  subagent_type="code-developer",
  run_in_background=false,
  description=batch.taskSummary,
  prompt=formatBatchPrompt({
    tasks: batch.tasks,
    context: buildRelevantContext(batch.tasks)
  })
)

// Helper: Build relevant context for batch
// Context serves as REFERENCE ONLY - helps agent understand existing state
function buildRelevantContext(tasks) {
  const sections = []

  // 1. Previous work completion - what's already done (reference for continuity)
  if (previousExecutionResults.length > 0) {
    sections.push(`### Previous Work (Reference)
Use this to understand what's already completed. Avoid duplicating work.

${previousExecutionResults.map(r => `**${r.tasksSummary}**
- Status: ${r.status}
- Outputs: ${r.keyOutputs || 'See git diff'}
${r.notes ? `- Notes: ${r.notes}` : ''}`
).join('\n\n')}`)
  }

## Multi-Angle Code Context

${explorationsContext && Object.keys(explorationsContext).length > 0 ?
|
|
||||||
explorationAngles.map(angle => {
|
|
||||||
const exp = explorationsContext[angle]
|
|
||||||
return `### Exploration Angle: ${angle}
|
|
||||||
|
|
||||||
**Project Structure**: ${exp.project_structure || 'N/A'}
|
|
||||||
**Relevant Files**: ${exp.relevant_files?.join(', ') || 'None'}
|
|
||||||
**Patterns**: ${exp.patterns || 'N/A'}
|
|
||||||
**Dependencies**: ${exp.dependencies || 'N/A'}
|
|
||||||
**Integration Points**: ${exp.integration_points || 'N/A'}
|
|
||||||
**Constraints**: ${exp.constraints || 'N/A'}`
|
|
||||||
}).join('\n\n---\n\n')
|
|
||||||
: "No exploration performed"
|
|
||||||
}
|
}
|
||||||
|
|
||||||
${clarificationContext ? `\n## Clarifications\n${JSON.stringify(clarificationContext, null, 2)}` : ''}
|
// 2. Related files - files that may need to be read/referenced
|
||||||
|
const relatedFiles = extractRelatedFiles(tasks)
|
||||||
|
if (relatedFiles.length > 0) {
|
||||||
|
sections.push(`### Related Files (Reference)
|
||||||
|
These files may contain patterns, types, or utilities relevant to your tasks:
|
||||||
|
|
||||||
${executionContext?.session?.artifacts ? `\n## Exploration Artifact Files
|
${relatedFiles.map(f => `- \`${f}\``).join('\n')}`)
|
||||||
|
}
|
||||||
|
|
||||||
Detailed exploration context available in:
|
// 3. Clarifications from user
|
||||||
${executionContext.session.artifacts.explorations?.map(exp =>
|
if (clarificationContext) {
|
||||||
`- Angle: ${exp.angle} → ${exp.path}`
|
sections.push(`### User Clarifications
|
||||||
).join('\n') || ''}
|
${Object.entries(clarificationContext).map(([q, a]) => `- **${q}**: ${a}`).join('\n')}`)
|
||||||
${executionContext.session.artifacts.explorations_manifest ? `- Manifest: ${executionContext.session.artifacts.explorations_manifest}` : ''}
|
}
|
||||||
- Plan: ${executionContext.session.artifacts.plan}
|
|
||||||
|
|
||||||
Read exploration files for comprehensive context from multiple angles.` : ''}
|
// 4. Artifact files (for deeper context if needed)
|
||||||
|
if (executionContext?.session?.artifacts?.plan) {
|
||||||
|
sections.push(`### Artifacts
|
||||||
|
For detailed planning context, read: ${executionContext.session.artifacts.plan}`)
|
||||||
|
}
|
||||||
|
|
||||||
## Requirements
|
return sections.join('\n\n')
|
||||||
MUST complete ALL ${planObject.tasks.length} tasks listed above in this single execution.
|
}
|
||||||
Return only after all tasks are fully implemented and tested.
|
|
||||||
`
|
// Extract related files from task references
|
||||||
)
|
function extractRelatedFiles(tasks) {
|
||||||
|
const files = new Set()
|
||||||
|
tasks.forEach(task => {
|
||||||
|
// Add reference example files
|
||||||
|
if (task.reference?.files) {
|
||||||
|
task.reference.files.forEach(f => files.add(f))
|
||||||
|
}
|
||||||
|
})
|
||||||
|
return [...files]
|
||||||
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
**Result Collection**: After completion, collect result following `executionResult` structure (see Data Structures section)
|
**Result Collection**: After completion, collect result following `executionResult` structure (see Data Structures section)
|
||||||
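For illustration, a minimal runnable sketch of the checklist formatting above (the sample task object is hypothetical, and the Reference section is omitted for brevity):

```javascript
// Minimal sketch of the self-contained checklist format described above.
// The sample task fields are illustrative, not part of the real workflow data.
const formatTaskChecklist = (task) => `
## ${task.title}

**Target**: \`${task.file}\`
**Action**: ${task.action}

### What to do
${task.description}

### How to do it
${task.implementation.map(step => `- ${step}`).join('\n')}

### Done when
${task.acceptance.map(c => `- [ ] ${c}`).join('\n')}
`

const sample = {
  title: 'Add token refresh endpoint',
  file: 'src/auth/refresh.ts',
  action: 'create',
  description: 'Expose POST /auth/refresh that rotates the JWT refresh token.',
  implementation: ['Define route handler', 'Validate refresh token', 'Issue new token pair'],
  acceptance: ['Endpoint returns 200 with new tokens', 'Invalid token returns 401']
}

console.log(formatTaskChecklist(sample))
```

Note that the rendered checklist contains no task number: the same string is valid whether the task runs first, last, or alone in a batch.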
@@ -366,102 +428,145 @@ ${result.notes ? `Notes: ${result.notes}` : ''}
**Option B: CLI Execution (Codex)**

When to use:
- `getTaskExecutor(task) === "codex"`
- or `executionMethod = "Codex"` (global fallback)
- or `executionMethod = "Auto" AND complexity = "Medium/High"` (global fallback)

**Task Formatting Principle**: Same as Agent - each task is a self-contained checklist. No task numbering or position awareness.

Command format:
```bash
// Format single task as compact checklist for CLI
function formatTaskForCLI(task) {
  return `
## ${task.title}
File: ${task.file}
Action: ${task.action}

What: ${task.description}

How:
${task.implementation.map(step => `- ${step}`).join('\n')}

Reference: ${task.reference.pattern} (see ${task.reference.files.join(', ')})
Notes: ${task.reference.examples}

Done when:
${task.acceptance.map(c => `- [ ] ${c}`).join('\n')}
`
}

// Build CLI prompt for batch
// Context provides REFERENCE information - not requirements to fulfill
function buildCLIPrompt(batch) {
  const tasksSection = batch.tasks.map(t => formatTaskForCLI(t)).join('\n---\n')

  let prompt = `${originalUserInput ? `## Goal\n${originalUserInput}\n\n` : ''}`
  prompt += `## Tasks\n\n${tasksSection}\n`

  // Context section - reference information only
  const contextSections = []

  // 1. Previous work - what's already completed
  if (previousExecutionResults.length > 0) {
    contextSections.push(`### Previous Work (Reference)
Already completed - avoid duplicating:
${previousExecutionResults.map(r => `- ${r.tasksSummary}: ${r.status}${r.keyOutputs ? ` (${r.keyOutputs})` : ''}`).join('\n')}`)
  }

  // 2. Related files from task references
  const relatedFiles = [...new Set(batch.tasks.flatMap(t => t.reference?.files || []))]
  if (relatedFiles.length > 0) {
    contextSections.push(`### Related Files (Reference)
Patterns and examples to follow:
${relatedFiles.map(f => `- ${f}`).join('\n')}`)
  }

  // 3. User clarifications
  if (clarificationContext) {
    contextSections.push(`### Clarifications
${Object.entries(clarificationContext).map(([q, a]) => `- ${q}: ${a}`).join('\n')}`)
  }

  // 4. Plan artifact for deeper context
  if (executionContext?.session?.artifacts?.plan) {
    contextSections.push(`### Artifacts
Detailed plan: ${executionContext.session.artifacts.plan}`)
  }

  if (contextSections.length > 0) {
    prompt += `\n## Context\n${contextSections.join('\n\n')}\n`
  }

  prompt += `\nComplete each task according to its "Done when" checklist.`

  return prompt
}

ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write
```

**Execution with fixed IDs** (predictable ID pattern):
```javascript
// Launch CLI in foreground (NOT background)
// Timeout based on complexity: Low=40min, Medium=60min, High=100min
const timeoutByComplexity = {
  "Low": 2400000,    // 40 minutes
  "Medium": 3600000, // 60 minutes
  "High": 6000000    // 100 minutes
}

// Generate fixed execution ID: ${sessionId}-${groupId}
// This enables predictable ID lookup without relying on resume context chains
const sessionId = executionContext?.session?.id || 'standalone'
const fixedExecutionId = `${sessionId}-${batch.groupId}` // e.g., "implement-auth-2025-12-13-P1"

// Check if resuming from previous failed execution
const previousCliId = batch.resumeFromCliId || null

// Build command with fixed ID (and optional resume for continuation)
const cli_command = previousCliId
  ? `ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId} --resume ${previousCliId}`
  : `ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId}`

bash_result = Bash(
  command=cli_command,
  timeout=timeoutByComplexity[planObject.complexity] || 3600000
)

// Execution ID is now predictable: ${fixedExecutionId}
// Can also extract from output: "ID: implement-auth-2025-12-13-P1"
const cliExecutionId = fixedExecutionId

// Update TodoWrite when execution completes
```

**Resume on Failure** (with fixed ID):
```javascript
// If execution failed or timed out, offer resume option
if (bash_result.status === 'failed' || bash_result.status === 'timeout') {
  console.log(`
⚠️ Execution incomplete. Resume available:
  Fixed ID: ${fixedExecutionId}
  Lookup: ccw cli detail ${fixedExecutionId}
  Resume: ccw cli -p "Continue tasks" --resume ${fixedExecutionId} --tool codex --mode write --id ${fixedExecutionId}-retry
`)

  // Store for potential retry in same session
  batch.resumeFromCliId = fixedExecutionId
}
```

**Result Collection**: After completion, analyze output and collect result following `executionResult` structure (include `cliExecutionId` for resume capability)

**Option C: CLI Execution (Gemini)**

When to use: `getTaskExecutor(task) === "gemini"` (analysis-type tasks)

```bash
# Use the same batch prompt construction as Option B (formatBatchPrompt); switch tool and mode
ccw cli -p "${formatBatchPrompt(batch)}" --tool gemini --mode analysis --id ${sessionId}-${batch.groupId}
```

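The executor routing used above (`getTaskExecutor`) is referenced but not defined in this section. A minimal sketch, assuming task-level `executorAssignments` take precedence over the global `executionMethod` as described in the Data Structures section (the exact resolution order is an assumption):

```javascript
// Hypothetical sketch of getTaskExecutor: task-level assignment first,
// then the global executionMethod as a fallback.
function getTaskExecutor(task, executorAssignments = {}, executionMethod = 'Auto', complexity = 'Medium') {
  const assigned = executorAssignments[task.id]
  if (assigned) return assigned.executor          // "gemini" | "codex" | "agent"
  if (executionMethod === 'Codex') return 'codex' // global fallback
  if (executionMethod === 'Agent') return 'agent'
  // Auto: route Medium/High complexity to codex, otherwise agent
  return (complexity === 'Medium' || complexity === 'High') ? 'codex' : 'agent'
}
```

With this shape, a task explicitly assigned `gemini` is dispatched to Option C even when the global method is `Codex`.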
### Step 4: Progress Tracking

@@ -508,26 +613,93 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-q
# - Report findings directly

# Method 2: Gemini Review (recommended)
ccw cli -p "[Shared Prompt Template with artifacts]" --tool gemini --mode analysis
# CONTEXT includes: @**/* @${plan.json} [@${exploration.json}]

# Method 3: Qwen Review (alternative)
ccw cli -p "[Shared Prompt Template with artifacts]" --tool qwen --mode analysis
# Same prompt as Gemini, different execution engine

# Method 4: Codex Review (autonomous)
ccw cli -p "[Verify plan acceptance criteria at ${plan.json}]" --tool codex --mode write
```

**Multi-Round Review with Fixed IDs**:
```javascript
// Generate fixed review ID
const reviewId = `${sessionId}-review`

// First review pass with fixed ID
const reviewResult = Bash(`ccw cli -p "[Review prompt]" --tool gemini --mode analysis --id ${reviewId}`)

// If issues found, continue review dialog with fixed ID chain
if (hasUnresolvedIssues(reviewResult)) {
  // Resume with follow-up questions
  Bash(`ccw cli -p "Clarify the security concerns you mentioned" --resume ${reviewId} --tool gemini --mode analysis --id ${reviewId}-followup`)
}
```

**Implementation Note**: Replace `[Shared Prompt Template with artifacts]` placeholder with actual template content, substituting:
- `@{plan.json}` → `@${executionContext.session.artifacts.plan}`
- `[@{exploration.json}]` → exploration files from artifacts (if exists)

### Step 6: Update Development Index

**Trigger**: After all executions complete (regardless of code review)

**Skip Condition**: Skip if `.workflow/project.json` does not exist

**Operations**:
```javascript
const projectJsonPath = '.workflow/project.json'
if (!fileExists(projectJsonPath)) return // Silent skip

const projectJson = JSON.parse(Read(projectJsonPath))

// Initialize if needed
if (!projectJson.development_index) {
  projectJson.development_index = { feature: [], enhancement: [], bugfix: [], refactor: [], docs: [] }
}

// Detect category from keywords
function detectCategory(text) {
  text = text.toLowerCase()
  if (/\b(fix|bug|error|issue|crash)\b/.test(text)) return 'bugfix'
  if (/\b(refactor|cleanup|reorganize)\b/.test(text)) return 'refactor'
  if (/\b(doc|readme|comment)\b/.test(text)) return 'docs'
  if (/\b(add|new|create|implement)\b/.test(text)) return 'feature'
  return 'enhancement'
}

// Detect sub_feature from task file paths
function detectSubFeature(tasks) {
  const dirs = tasks.map(t => t.file?.split('/').slice(-2, -1)[0]).filter(Boolean)
  const counts = dirs.reduce((a, d) => { a[d] = (a[d] || 0) + 1; return a }, {})
  return Object.entries(counts).sort((a, b) => b[1] - a[1])[0]?.[0] || 'general'
}

const category = detectCategory(`${planObject.summary} ${planObject.approach}`)
const entry = {
  title: planObject.summary.slice(0, 60),
  sub_feature: detectSubFeature(planObject.tasks),
  date: new Date().toISOString().split('T')[0],
  description: planObject.approach.slice(0, 100),
  status: previousExecutionResults.every(r => r.status === 'completed') ? 'completed' : 'partial',
  session_id: executionContext?.session?.id || null
}

projectJson.development_index[category].push(entry)
projectJson.statistics.last_updated = new Date().toISOString()
Write(projectJsonPath, JSON.stringify(projectJson, null, 2))

console.log(`✓ Development index: [${category}] ${entry.title}`)
```
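The keyword-based category detection can be exercised in isolation (sample descriptions are illustrative):

```javascript
// Standalone copy of detectCategory for demonstration.
// First matching rule wins, so bugfix keywords shadow feature keywords.
function detectCategory(text) {
  text = text.toLowerCase()
  if (/\b(fix|bug|error|issue|crash)\b/.test(text)) return 'bugfix'
  if (/\b(refactor|cleanup|reorganize)\b/.test(text)) return 'refactor'
  if (/\b(doc|readme|comment)\b/.test(text)) return 'docs'
  if (/\b(add|new|create|implement)\b/.test(text)) return 'feature'
  return 'enhancement'
}

console.log(detectCategory('Fix login crash on refresh'))     // → bugfix
console.log(detectCategory('Implement JWT refresh endpoint')) // → feature
console.log(detectCategory('Tune cache eviction policy'))     // → enhancement
```

Because matching is ordered, a description like "Fix bug by implementing a new parser" is classified as `bugfix`, not `feature`.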

## Best Practices

**Input Modes**: In-memory (lite-plan), prompt (standalone), file (JSON/text)
**Task Grouping**: Based on explicit depends_on only; independent tasks run in a single parallel batch
**Execution**: All independent tasks launch concurrently via a single Claude message with multiple tool calls

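The depends_on-based grouping can be sketched as follows. Task objects are assumed to carry an `id` and an optional `depends_on` array of task IDs (field names are assumptions drawn from the plan schema):

```javascript
// Hypothetical sketch: split tasks into sequential batches where each batch
// contains only tasks whose depends_on are satisfied by earlier batches.
// Tasks within one batch are independent and can run in parallel.
function groupIntoBatches(tasks) {
  const done = new Set()
  const remaining = [...tasks]
  const batches = []
  while (remaining.length > 0) {
    const ready = remaining.filter(t => (t.depends_on || []).every(id => done.has(id)))
    if (ready.length === 0) throw new Error('Dependency cycle detected')
    batches.push(ready)
    ready.forEach(t => { done.add(t.id); remaining.splice(remaining.indexOf(t), 1) })
  }
  return batches
}
```

A plan with no `depends_on` edges yields exactly one batch, matching the "single parallel batch" behavior above.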
## Error Handling

@@ -538,8 +710,10 @@ codex --full-auto exec "[Verify plan acceptance criteria at ${plan.json}]" --ski
| Empty file | File exists but no content | Error: "File is empty: {path}. Provide task description." |
| Invalid Enhanced Task JSON | JSON missing required fields | Warning: "Missing required fields. Treating as plain text." |
| Malformed JSON | JSON parsing fails | Treat as plain text (expected for non-JSON files) |
| Execution failure | Agent/Codex crashes | Display error, use fixed ID `${sessionId}-${groupId}` for resume: `ccw cli -p "Continue" --resume <fixed-id> --id <fixed-id>-retry` |
| Execution timeout | CLI exceeded timeout | Use fixed ID for resume with extended timeout |
| Codex unavailable | Codex not installed | Show installation instructions, offer Agent execution |
| Fixed ID not found | Custom ID lookup failed | Check `ccw cli history`, verify date directories |

## Data Structures

@@ -561,10 +735,15 @@ Passed from lite-plan via global variable:
  explorationAngles: string[],       // List of exploration angles
  explorationManifest: {...} | null, // Exploration manifest
  clarificationContext: {...} | null,
  executionMethod: "Agent" | "Codex" | "Auto", // Global default
  codeReviewTool: "Skip" | "Gemini Review" | "Agent Review" | string,
  originalUserInput: string,

  // Task-level executor assignments (take precedence over executionMethod)
  executorAssignments: {
    [taskId]: { executor: "gemini" | "codex" | "agent", reason: string }
  },

  // Session artifacts location (saved by lite-plan)
  session: {
    id: string, // Session identifier: {taskSlug}-{shortTimestamp}
@@ -594,8 +773,20 @@ Collected after each execution call completes:
  tasksSummary: string,      // Brief description of tasks handled
  completionSummary: string, // What was completed
  keyOutputs: string,        // Files created/modified, key changes
  notes: string,             // Important context for next execution
  fixedCliId: string | null  // Fixed CLI execution ID (e.g., "implement-auth-2025-12-13-P1")
}
```

Appended to `previousExecutionResults` array for context continuity in multi-execution scenarios.

**Fixed ID Pattern**: `${sessionId}-${groupId}` enables predictable lookup without auto-generated timestamps.

**Resume Usage**: If `status` is "partial" or "failed", use `fixedCliId` to resume:
```bash
# Lookup previous execution
ccw cli detail ${fixedCliId}

# Resume with new fixed ID for retry
ccw cli -p "Continue from where we left off" --resume ${fixedCliId} --tool codex --mode write --id ${fixedCliId}-retry
```
File diff suppressed because it is too large
@@ -34,30 +34,55 @@ Intelligent lightweight planning command with dynamic workflow adaptation based
|
|||||||
## Execution Process
|
## Execution Process
|
||||||
|
|
||||||
```
|
```
|
||||||
User Input → Task Analysis & Exploration Decision (Phase 1)
|
Phase 1: Task Analysis & Exploration
|
||||||
↓
|
├─ Parse input (description or .md file)
|
||||||
Clarification (Phase 2, optional)
|
├─ intelligent complexity assessment (Low/Medium/High)
|
||||||
↓
|
├─ Exploration decision (auto-detect or --explore flag)
|
||||||
Complexity Assessment & Planning (Phase 3)
|
├─ ⚠️ Context protection: If file reading ≥50k chars → force cli-explore-agent
|
||||||
↓
|
└─ Decision:
|
||||||
Task Confirmation & Execution Selection (Phase 4)
|
├─ needsExploration=true → Launch parallel cli-explore-agents (1-4 based on complexity)
|
||||||
↓
|
└─ needsExploration=false → Skip to Phase 2/3
|
||||||
Dispatch to Execution (Phase 5)
|
|
||||||
|
Phase 2: Clarification (optional, multi-round)
|
||||||
|
├─ Aggregate clarification_needs from all exploration angles
|
||||||
|
├─ Deduplicate similar questions
|
||||||
|
└─ Decision:
|
||||||
|
├─ Has clarifications → AskUserQuestion (max 4 questions per round, multiple rounds allowed)
|
||||||
|
└─ No clarifications → Skip to Phase 3
|
||||||
|
|
||||||
|
Phase 3: Planning (NO CODE EXECUTION - planning only)
|
||||||
|
└─ Decision (based on Phase 1 complexity):
|
||||||
|
├─ Low → Load schema: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json → Direct Claude planning (following schema) → plan.json → MUST proceed to Phase 4
|
||||||
|
└─ Medium/High → cli-lite-planning-agent → plan.json → MUST proceed to Phase 4
|
||||||
|
|
||||||
|
Phase 4: Confirmation & Selection
|
||||||
|
├─ Display plan summary (tasks, complexity, estimated time)
|
||||||
|
└─ AskUserQuestion:
|
||||||
|
├─ Confirm: Allow / Modify / Cancel
|
||||||
|
├─ Execution: Agent / Codex / Auto
|
||||||
|
└─ Review: Gemini / Agent / Skip
|
||||||
|
|
||||||
|
Phase 5: Dispatch
|
||||||
|
├─ Build executionContext (plan + explorations + clarifications + selections)
|
||||||
|
└─ SlashCommand("/workflow:lite-execute --in-memory")
|
||||||
```
|
```
|
||||||
|
|
||||||
## Implementation
|
## Implementation
|
||||||
|
|
||||||
### Phase 1: Intelligent Multi-Angle Exploration
|
### Phase 1: Intelligent Multi-Angle Exploration
|
||||||
|
|
||||||
**Session Setup**:
|
**Session Setup** (MANDATORY - follow exactly):
|
||||||
```javascript
|
```javascript
|
||||||
|
// Helper: Get UTC+8 (China Standard Time) ISO string
|
||||||
|
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
|
||||||
|
|
||||||
const taskSlug = task_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
|
const taskSlug = task_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
|
||||||
const timestamp = new Date().toISOString().replace(/[:.]/g, '-')
|
const dateStr = getUtc8ISOString().substring(0, 10) // Format: 2025-11-29
|
||||||
const shortTimestamp = timestamp.substring(0, 19).replace('T', '-')
|
|
||||||
const sessionId = `${taskSlug}-${shortTimestamp}`
|
const sessionId = `${taskSlug}-${dateStr}` // e.g., "implement-jwt-refresh-2025-11-29"
|
||||||
const sessionFolder = `.workflow/.lite-plan/${sessionId}`
|
const sessionFolder = `.workflow/.lite-plan/${sessionId}`
|
||||||
|
|
||||||
bash(`mkdir -p ${sessionFolder}`)
|
bash(`mkdir -p ${sessionFolder} && test -d ${sessionFolder} && echo "SUCCESS: ${sessionFolder}" || echo "FAILED: ${sessionFolder}"`)
|
||||||
```
|
```
|
||||||
|
|
||||||
**Exploration Decision Logic**:
|
**Exploration Decision Logic**:
|
||||||
@@ -76,34 +101,21 @@ if (!needsExploration) {
|
|||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
**Complexity Assessment & Exploration Count**:
|
**⚠️ Context Protection**: File reading ≥50k chars → force `needsExploration=true` (delegate to cli-explore-agent)
|
||||||
|
|
||||||
|
**Complexity Assessment** (Intelligent Analysis):
|
||||||
```javascript
|
```javascript
|
||||||
// Estimate task complexity based on description
|
// analyzes task complexity based on:
|
||||||
function estimateComplexity(taskDescription) {
|
// - Scope: How many systems/modules are affected?
|
||||||
const wordCount = taskDescription.split(/\s+/).length
|
// - Depth: Surface change vs architectural impact?
|
||||||
const text = taskDescription.toLowerCase()
|
// - Risk: Potential for breaking existing functionality?
|
||||||
|
// - Dependencies: How interconnected is the change?
|
||||||
|
|
||||||
const indicators = {
|
const complexity = analyzeTaskComplexity(task_description)
|
||||||
high: ['refactor', 'migrate', 'redesign', 'architecture', 'system'],
|
// Returns: 'Low' | 'Medium' | 'High'
|
||||||
medium: ['implement', 'add feature', 'integrate', 'modify module'],
|
// Low: Single file, isolated change, minimal risk
|
||||||
low: ['fix', 'update', 'adjust', 'tweak']
|
// Medium: Multiple files, some dependencies, moderate risk
|
||||||
}
|
// High: Cross-module, architectural, high risk
|
||||||
|
|
||||||
let score = 0
|
|
||||||
if (wordCount > 50) score += 2
|
|
||||||
else if (wordCount > 20) score += 1
|
|
||||||
|
|
||||||
if (indicators.high.some(w => text.includes(w))) score += 3
|
|
||||||
else if (indicators.medium.some(w => text.includes(w))) score += 2
|
|
||||||
else if (indicators.low.some(w => text.includes(w))) score += 1
|
|
||||||
|
|
||||||
// 0-2: Low, 3-4: Medium, 5+: High
|
|
||||||
if (score >= 5) return 'High'
|
|
||||||
if (score >= 3) return 'Medium'
|
|
||||||
return 'Low'
|
|
||||||
}
|
|
||||||
|
|
||||||
const complexity = estimateComplexity(task_description)
|
|
||||||
|
|
||||||
// Angle assignment based on task type (orchestrator decides, not agent)
|
// Angle assignment based on task type (orchestrator decides, not agent)
|
||||||
const ANGLE_PRESETS = {
|
const ANGLE_PRESETS = {
|
@@ -128,11 +140,17 @@ function selectAngles(taskDescription, count) {

 const selectedAngles = selectAngles(task_description, complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1))

+// Planning strategy determination
+const planningStrategy = complexity === 'Low'
+  ? 'Direct Claude Planning'
+  : 'cli-lite-planning-agent'

 console.log(`
 ## Exploration Plan

 Task Complexity: ${complexity}
 Selected Angles: ${selectedAngles.join(', ')}
+Planning Strategy: ${planningStrategy}

 Launching ${selectedAngles.length} parallel explorations...
 `)
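The ternary in the hunk above maps complexity to an exploration count (High → 4 angles, Medium → 3, otherwise 1). As a standalone sketch of that mapping (the function name is illustrative, not part of the workflow files):

```javascript
// Mirrors the inline ternary used when calling selectAngles():
// 'High' → 4 exploration angles, 'Medium' → 3, anything else → 1.
function angleCountFor(complexity) {
  return complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1);
}
```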
@@ -140,11 +158,16 @@ Launching ${selectedAngles.length} parallel explorations...

 **Launch Parallel Explorations** - Orchestrator assigns angle to each agent:

+**⚠️ CRITICAL - NO BACKGROUND EXECUTION**:
+- **MUST NOT use `run_in_background: true`** - exploration results are REQUIRED before planning

 ```javascript
 // Launch agents with pre-assigned angles
 const explorationTasks = selectedAngles.map((angle, index) =>
   Task(
     subagent_type="cli-explore-agent",
+    run_in_background=false, // ⚠️ MANDATORY: Must wait for results
     description=`Explore: ${angle}`,
     prompt=`
 ## Task Objective
@@ -158,7 +181,7 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro

 ## MANDATORY FIRST STEPS (Execute by Agent)
 **You (cli-explore-agent) MUST execute these steps in order:**
-1. Run: ~/.claude/scripts/get_modules_by_depth.sh (project structure)
+1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
 2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
 3. Execute: cat ~/.claude/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)

@@ -187,11 +210,14 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
 **Required Fields** (all ${angle} focused):
 - project_structure: Modules/architecture relevant to ${angle}
 - relevant_files: Files affected from ${angle} perspective
+  **IMPORTANT**: Use object format with relevance scores for synthesis:
+  \`[{path: "src/file.ts", relevance: 0.85, rationale: "Core ${angle} logic"}]\`
+  Scores: 0.7+ high priority, 0.5-0.7 medium, <0.5 low
 - patterns: ${angle}-related patterns to follow
 - dependencies: Dependencies relevant to ${angle}
-- integration_points: Where to integrate from ${angle} viewpoint
+- integration_points: Where to integrate from ${angle} viewpoint (include file:line locations)
 - constraints: ${angle}-specific limitations/conventions
-- clarification_needs: ${angle}-related ambiguities (with options array)
+- clarification_needs: ${angle}-related ambiguities (options array + recommended index)
 - _metadata.exploration_angle: "${angle}"

 ## Success Criteria
@@ -202,7 +228,7 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
 - [ ] Integration points include file:line locations
 - [ ] Constraints are project-specific to ${angle}
 - [ ] JSON output follows schema exactly
-- [ ] clarification_needs includes options array
+- [ ] clarification_needs includes options + recommended

 ## Output
 Write: ${sessionFolder}/exploration-${angle}.json
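The required-fields note above asks agents to return relevance-scored file objects. A hedged sketch of how an orchestrator might bucket them by the stated score bands (helper name assumed, not from the workflow files):

```javascript
// Buckets files by the bands stated in the prompt:
// relevance >= 0.7 → high priority, 0.5–0.7 → medium, below 0.5 → low.
function bucketByRelevance(files) {
  const buckets = { high: [], medium: [], low: [] };
  for (const f of files) {
    if (f.relevance >= 0.7) buckets.high.push(f.path);
    else if (f.relevance >= 0.5) buckets.medium.push(f.path);
    else buckets.low.push(f.path);
  }
  return buckets;
}
```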
@@ -225,7 +251,7 @@ const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json"
 const explorationManifest = {
   session_id: sessionId,
   task_description: task_description,
-  timestamp: new Date().toISOString(),
+  timestamp: getUtc8ISOString(),
   complexity: complexity,
   exploration_count: explorationCount,
   explorations: explorationFiles.map(file => {
@@ -261,10 +287,12 @@ Angles explored: ${explorationManifest.explorations.map(e => e.angle).join(', ')

 ---

-### Phase 2: Clarification (Optional)
+### Phase 2: Clarification (Optional, Multi-Round)

 **Skip if**: No exploration or `clarification_needs` is empty across all explorations

+**⚠️ CRITICAL**: AskUserQuestion tool limits max 4 questions per call. **MUST execute multiple rounds** to exhaust all clarification needs - do NOT stop at round 1.

 **Aggregate clarification needs from all exploration angles**:
 ```javascript
 // Load manifest and all exploration files
@@ -287,32 +315,37 @@ explorations.forEach(exp => {
   }
 })

-// Deduplicate by question similarity
-function deduplicateClarifications(clarifications) {
-  const unique = []
-  clarifications.forEach(c => {
-    const isDuplicate = unique.some(u =>
-      u.question.toLowerCase() === c.question.toLowerCase()
-    )
-    if (!isDuplicate) unique.push(c)
-  })
-  return unique
-}
-
-const uniqueClarifications = deduplicateClarifications(allClarifications)
-
-if (uniqueClarifications.length > 0) {
+// Intelligent deduplication: analyze allClarifications by intent
+// - Identify questions with similar intent across different angles
+// - Merge similar questions: combine options, consolidate context
+// - Produce dedupedClarifications with unique intents only
+const dedupedClarifications = intelligentMerge(allClarifications)
+
+// Multi-round clarification: batch questions (max 4 per round)
+if (dedupedClarifications.length > 0) {
+  const BATCH_SIZE = 4
+  const totalRounds = Math.ceil(dedupedClarifications.length / BATCH_SIZE)
+
+  for (let i = 0; i < dedupedClarifications.length; i += BATCH_SIZE) {
+    const batch = dedupedClarifications.slice(i, i + BATCH_SIZE)
+    const currentRound = Math.floor(i / BATCH_SIZE) + 1
+
+    console.log(`### Clarification Round ${currentRound}/${totalRounds}`)

   AskUserQuestion({
-    questions: uniqueClarifications.map(need => ({
+    questions: batch.map(need => ({
       question: `[${need.source_angle}] ${need.question}\n\nContext: ${need.context}`,
-      header: need.source_angle,
+      header: need.source_angle.substring(0, 12),
       multiSelect: false,
-      options: need.options.map(opt => ({
-        label: opt,
-        description: `Use ${opt} approach`
+      options: need.options.map((opt, index) => ({
+        label: need.recommended === index ? `${opt} ★` : opt,
+        description: need.recommended === index ? `Recommended` : `Use ${opt}`
       }))
     }))
   })
+
+    // Store batch responses in clarificationContext before next round
+  }
 }
 ```
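The slicing arithmetic in the new multi-round loop above can be sketched on its own (AskUserQuestion accepts at most 4 questions per call, so clarifications are split into rounds; helper name is illustrative):

```javascript
// Splits a question list into rounds of at most batchSize items,
// matching the BATCH_SIZE slicing in the diff above.
function toRounds(items, batchSize = 4) {
  const rounds = [];
  for (let i = 0; i < items.length; i += batchSize) {
    rounds.push(items.slice(i, i + batchSize));
  }
  return rounds;
}
```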
@@ -320,29 +353,62 @@ if (uniqueClarifications.length > 0) {

 ---

-### Phase 3: Complexity Assessment & Planning
+### Phase 3: Planning

+**Planning Strategy Selection** (based on Phase 1 complexity):
+
+**IMPORTANT**: Phase 3 is **planning only** - NO code execution. All execution happens in Phase 5 via lite-execute.
+
+**Executor Assignment** (assigned intelligently by Claude; applied after the plan is generated):

-**Complexity Assessment**:
 ```javascript
-complexityScore = {
-  file_count: exploration?.relevant_files?.length || 0,
-  integration_points: exploration?.dependencies?.length || 0,
-  architecture_changes: exploration?.constraints?.includes('architecture'),
-  task_scope: estimated_steps > 5
-}
-
-// Low: score < 3, Medium: 3-5, High: > 5
+// Assignment rules (highest priority first):
+// 1. User explicitly specifies: "analyze with gemini..." → gemini, "implement with codex..." → codex
+// 2. Default → agent
+
+const executorAssignments = {} // { taskId: { executor: 'gemini'|'codex'|'agent', reason: string } }
+plan.tasks.forEach(task => {
+  // Claude assigns an executor to each task by semantic analysis against the rules above
+  executorAssignments[task.id] = { executor: '...', reason: '...' }
+})
 ```

 **Low Complexity** - Direct planning by Claude:
-- Generate plan directly, write to `${sessionFolder}/plan.json`
-- No agent invocation
+```javascript
+// Step 1: Read schema
+const schema = Bash(`cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`)
+
+// Step 2: ⚠️ MANDATORY - Read and review ALL exploration files
+const manifest = JSON.parse(Read(`${sessionFolder}/explorations-manifest.json`))
+manifest.explorations.forEach(exp => {
+  const explorationData = Read(exp.path)
+  console.log(`\n### Exploration: ${exp.angle}\n${explorationData}`)
+})
+
+// Step 3: Generate plan following schema (Claude directly, no agent)
+// ⚠️ Plan MUST incorporate insights from exploration files read in Step 2
+const plan = {
+  summary: "...",
+  approach: "...",
+  tasks: [...], // Each task: { id, title, scope, ..., depends_on, execution_group, complexity }
+  estimated_time: "...",
+  recommended_execution: "Agent",
+  complexity: "Low",
+  _metadata: { timestamp: getUtc8ISOString(), source: "direct-planning", planning_mode: "direct" }
+}
+
+// Step 4: Write plan to session folder
+Write(`${sessionFolder}/plan.json`, JSON.stringify(plan, null, 2))
+
+// Step 5: MUST continue to Phase 4 (Confirmation) - DO NOT execute code here
+```

|
**Medium/High Complexity** - Invoke cli-lite-planning-agent:
|
||||||
|
|
||||||
```javascript
|
```javascript
|
||||||
Task(
|
Task(
|
||||||
subagent_type="cli-lite-planning-agent",
|
subagent_type="cli-lite-planning-agent",
|
||||||
|
run_in_background=false,
|
||||||
description="Generate detailed implementation plan",
|
description="Generate detailed implementation plan",
|
||||||
prompt=`
|
prompt=`
|
||||||
Generate implementation plan and write plan.json.
|
Generate implementation plan and write plan.json.
|
||||||
@@ -372,26 +438,26 @@ ${JSON.stringify(clarificationContext) || "None"}
|
|||||||
${complexity}
|
${complexity}
|
||||||
|
|
||||||
## Requirements
|
## Requirements
|
||||||
Generate plan.json with:
|
Generate plan.json following the schema obtained above. Key constraints:
|
||||||
- summary: 2-3 sentence overview
|
- tasks: 2-7 structured tasks (**group by feature/module, NOT by file**)
|
||||||
- approach: High-level implementation strategy (incorporating insights from all exploration angles)
|
- _metadata.exploration_angles: ${JSON.stringify(manifest.explorations.map(e => e.angle))}
|
||||||
- tasks: 3-10 structured tasks with:
|
|
||||||
- title, file, action, description
|
## Task Grouping Rules
|
||||||
- implementation (3-7 steps)
|
1. **Group by feature**: All changes for one feature = one task (even if 3-5 files)
|
||||||
- reference (pattern, files, examples)
|
2. **Group by context**: Tasks with similar context or related functional changes can be grouped together
|
||||||
- acceptance (2-4 criteria)
|
3. **Minimize agent count**: Simple, unrelated tasks can also be grouped to reduce agent execution overhead
|
||||||
- estimated_time, recommended_execution, complexity
|
4. **Avoid file-per-task**: Do NOT create separate tasks for each file
|
||||||
- _metadata:
|
5. **Substantial tasks**: Each task should represent 15-60 minutes of work
|
||||||
- timestamp, source, planning_mode
|
6. **True dependencies only**: Only use depends_on when Task B cannot start without Task A's output
|
||||||
- exploration_angles: ${JSON.stringify(manifest.explorations.map(e => e.angle))}
|
7. **Prefer parallel**: Most tasks should be independent (no depends_on)
|
||||||
|
|
||||||
## Execution
|
## Execution
|
||||||
1. Read ALL exploration files for comprehensive context
|
1. Read schema file (cat command above)
|
||||||
2. Execute CLI planning using Gemini (Qwen fallback)
|
2. Execute CLI planning using Gemini (Qwen fallback)
|
||||||
3. Synthesize findings from multiple exploration angles
|
3. Read ALL exploration files for comprehensive context
|
||||||
4. Parse output and structure plan
|
4. Synthesize findings and generate plan following schema
|
||||||
5. Write JSON: Write('${sessionFolder}/plan.json', jsonContent)
|
5. Write JSON: Write('${sessionFolder}/plan.json', jsonContent)
|
||||||
4. Return brief completion summary
|
6. Return brief completion summary
|
||||||
`
|
`
|
||||||
)
|
)
|
||||||
```
|
```
|
||||||
@@ -463,6 +529,8 @@ AskUserQuestion({
|
|||||||
|
|
||||||
### Phase 5: Dispatch to Execution
|
### Phase 5: Dispatch to Execution
|
||||||
|
|
||||||
|
**CRITICAL**: lite-plan NEVER executes code directly. ALL execution MUST go through lite-execute.
|
||||||
|
|
||||||
**Step 5.1: Build executionContext**
|
**Step 5.1: Build executionContext**
|
||||||
|
|
||||||
```javascript
|
```javascript
|
||||||
@@ -484,9 +552,13 @@ executionContext = {
|
|||||||
explorationAngles: manifest.explorations.map(e => e.angle),
|
explorationAngles: manifest.explorations.map(e => e.angle),
|
||||||
explorationManifest: manifest,
|
explorationManifest: manifest,
|
||||||
clarificationContext: clarificationContext || null,
|
clarificationContext: clarificationContext || null,
|
||||||
executionMethod: userSelection.execution_method,
|
executionMethod: userSelection.execution_method, // 全局默认,可被 executorAssignments 覆盖
|
||||||
codeReviewTool: userSelection.code_review_tool,
|
codeReviewTool: userSelection.code_review_tool,
|
||||||
originalUserInput: task_description,
|
originalUserInput: task_description,
|
||||||
|
|
||||||
|
// 任务级 executor 分配(优先于全局 executionMethod)
|
||||||
|
executorAssignments: executorAssignments, // { taskId: { executor, reason } }
|
||||||
|
|
||||||
session: {
|
session: {
|
||||||
id: sessionId,
|
id: sessionId,
|
||||||
folder: sessionFolder,
|
folder: sessionFolder,
|
||||||
@@ -511,7 +583,7 @@ SlashCommand(command="/workflow:lite-execute --in-memory")
|
|||||||
## Session Folder Structure
|
## Session Folder Structure
|
||||||
|
|
||||||
```
|
```
|
||||||
.workflow/.lite-plan/{task-slug}-{timestamp}/
|
.workflow/.lite-plan/{task-slug}-{YYYY-MM-DD}/
|
||||||
├── exploration-{angle1}.json # Exploration angle 1
|
├── exploration-{angle1}.json # Exploration angle 1
|
||||||
├── exploration-{angle2}.json # Exploration angle 2
|
├── exploration-{angle2}.json # Exploration angle 2
|
||||||
├── exploration-{angle3}.json # Exploration angle 3 (if applicable)
|
├── exploration-{angle3}.json # Exploration angle 3 (if applicable)
|
||||||
|
|||||||
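The executionContext changes above give per-task `executorAssignments` precedence over the global `executionMethod`. That precedence rule can be sketched as (function name illustrative, not part of the workflow files):

```javascript
// Resolves which executor runs a task: a task-level assignment wins,
// otherwise the global execution method is the fallback.
function resolveExecutor(taskId, executorAssignments, globalMethod) {
  const assigned = executorAssignments[taskId];
  return assigned ? assigned.executor : globalMethod;
}
```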
@@ -1,7 +1,7 @@
 ---
 name: plan
-description: 5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs with optional CLI auto-execution
+description: 5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs
-argument-hint: "[--cli-execute] \"text description\"|file.md"
+argument-hint: "\"text description\"|file.md"
 allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
 ---

@@ -9,7 +9,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)

 ## Coordinator Role

-**This command is a pure orchestrator**: Execute 5 slash commands in sequence (including a quality gate), parse their outputs, pass context between them, and ensure complete execution through **automatic continuation**.
+**This command is a pure orchestrator**: Dispatch 5 slash commands in sequence (including a quality gate), parse their outputs, pass context between them, and ensure complete execution through **automatic continuation**.

 **Execution Model - Auto-Continue Workflow with Quality Gate**:

@@ -17,14 +17,14 @@ This workflow runs **fully autonomously** once triggered. Phase 3 (conflict reso

 1. **User triggers**: `/workflow:plan "task"`
-2. **Phase 1 executes** → Session discovery → Auto-continues
-3. **Phase 2 executes** → Context gathering → Auto-continues
-4. **Phase 3 executes** (optional, if conflict_risk ≥ medium) → Conflict resolution → Auto-continues
-5. **Phase 4 executes** → Task generation (task-generate-agent) → Reports final summary
+2. **Phase 1 dispatches** → Session discovery → Auto-continues
+3. **Phase 2 dispatches** → Context gathering → Auto-continues
+4. **Phase 3 dispatches** (optional, if conflict_risk ≥ medium) → Conflict resolution → Auto-continues
+5. **Phase 4 dispatches** → Task generation (task-generate-agent) → Reports final summary

 **Task Attachment Model**:
-- SlashCommand invocation **expands workflow** by attaching sub-tasks to current TodoWrite
-- When a sub-command is invoked (e.g., `/workflow:tools:context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
+- SlashCommand dispatch **expands workflow** by attaching sub-tasks to current TodoWrite
+- When a sub-command is dispatched (e.g., `/workflow:tools:context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
 - Orchestrator **executes these attached tasks** sequentially
 - After completion, attached tasks are **collapsed** back to high-level phase summary
 - This is **task expansion**, not external delegation
@@ -43,13 +43,48 @@ This workflow runs **fully autonomously** once triggered. Phase 3 (conflict reso
 3. **Parse Every Output**: Extract required data from each command/agent output for next phase
 4. **Auto-Continue via TodoList**: Check TodoList status to execute next pending phase automatically
 5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
-6. **Task Attachment Model**: SlashCommand invocation **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
+6. **Task Attachment Model**: SlashCommand dispatch **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
 7. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase

+## Execution Process
+
+```
+Input Parsing:
+  └─ Convert user input to structured format (GOAL/SCOPE/CONTEXT)
+
+Phase 1: Session Discovery
+  └─ /workflow:session:start --auto "structured-description"
+     └─ Output: sessionId (WFS-xxx)
+
+Phase 2: Context Gathering
+  └─ /workflow:tools:context-gather --session sessionId "structured-description"
+     ├─ Tasks attached: Analyze structure → Identify integration → Generate package
+     └─ Output: contextPath + conflict_risk
+
+Phase 3: Conflict Resolution
+  └─ Decision (conflict_risk check):
+     ├─ conflict_risk ≥ medium → Execute /workflow:tools:conflict-resolution
+     │    ├─ Tasks attached: Detect conflicts → Present to user → Apply strategies
+     │    └─ Output: Modified brainstorm artifacts
+     └─ conflict_risk < medium → Skip to Phase 4
+
+Phase 4: Task Generation
+  └─ /workflow:tools:task-generate-agent --session sessionId
+     └─ Output: IMPL_PLAN.md, task JSONs, TODO_LIST.md
+
+Return:
+  └─ Summary with recommended next steps
+```

 ## 5-Phase Execution

 ### Phase 1: Session Discovery
-**Command**: `SlashCommand(command="/workflow:session:start --auto \"[structured-task-description]\"")`
+
+**Step 1.1: Dispatch** - Create or discover workflow session
+
+```javascript
+SlashCommand(command="/workflow:session:start --auto \"[structured-task-description]\"")
+```

 **Task Description Structure**:
 ```
@@ -81,7 +116,12 @@ CONTEXT: Existing user database schema, REST API endpoints
 ---

 ### Phase 2: Context Gathering
-**Command**: `SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[structured-task-description]\"")`
+
+**Step 2.1: Dispatch** - Gather project context and analyze codebase
+
+```javascript
+SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[structured-task-description]\"")
+```

 **Use Same Structured Description**: Pass the same structured format from Phase 1

@@ -95,9 +135,9 @@ CONTEXT: Existing user database schema, REST API endpoints
 - Context package path extracted
 - File exists and is valid JSON

-<!-- TodoWrite: When context-gather invoked, INSERT 3 context-gather tasks, mark first as in_progress -->
+<!-- TodoWrite: When context-gather dispatched, INSERT 3 context-gather tasks, mark first as in_progress -->

-**TodoWrite Update (Phase 2 SlashCommand invoked - tasks attached)**:
+**TodoWrite Update (Phase 2 SlashCommand dispatched - tasks attached)**:
 ```json
 [
   {"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
@@ -109,7 +149,7 @@ CONTEXT: Existing user database schema, REST API endpoints
 ]
 ```

-**Note**: SlashCommand invocation **attaches** context-gather's 3 tasks. Orchestrator **executes** these tasks sequentially.
+**Note**: SlashCommand dispatch **attaches** context-gather's 3 tasks. Orchestrator **executes** these tasks sequentially.

 <!-- TodoWrite: After Phase 2 tasks complete, REMOVE Phase 2.1-2.3, restore to orchestrator view -->

@@ -128,11 +168,15 @@ CONTEXT: Existing user database schema, REST API endpoints

 ---

|
### Phase 3: Conflict Resolution
|
||||||
|
|
||||||
**Trigger**: Only execute when context-package.json indicates conflict_risk is "medium" or "high"
|
**Trigger**: Only execute when context-package.json indicates conflict_risk is "medium" or "high"
|
||||||
|
|
||||||
**Command**: `SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId] --context [contextPath]")`
|
**Step 3.1: Dispatch** - Detect and resolve conflicts with CLI analysis
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId] --context [contextPath]")
|
||||||
|
```
|
||||||
|
|
||||||
**Input**:
|
**Input**:
|
||||||
- sessionId from Phase 1
|
- sessionId from Phase 1
|
||||||
@@ -141,10 +185,10 @@ CONTEXT: Existing user database schema, REST API endpoints
|
|||||||
|
|
||||||
**Parse Output**:
|
**Parse Output**:
|
||||||
- Extract: Execution status (success/skipped/failed)
|
- Extract: Execution status (success/skipped/failed)
|
||||||
- Verify: CONFLICT_RESOLUTION.md file path (if executed)
|
- Verify: conflict-resolution.json file path (if executed)
|
||||||
|
|
||||||
**Validation**:
|
**Validation**:
|
||||||
- File `.workflow/active/[sessionId]/.process/CONFLICT_RESOLUTION.md` exists (if executed)
|
- File `.workflow/active/[sessionId]/.process/conflict-resolution.json` exists (if executed)
|
||||||
|
|
||||||
**Skip Behavior**:
|
**Skip Behavior**:
|
||||||
- If conflict_risk is "none" or "low", skip directly to Phase 3.5
|
- If conflict_risk is "none" or "low", skip directly to Phase 3.5
|
||||||
@@ -152,7 +196,7 @@ CONTEXT: Existing user database schema, REST API endpoints
|
|||||||
|
|
||||||
<!-- TodoWrite: If conflict_risk ≥ medium, INSERT 3 conflict-resolution tasks -->
|
<!-- TodoWrite: If conflict_risk ≥ medium, INSERT 3 conflict-resolution tasks -->
|
||||||
|
|
||||||
**TodoWrite Update (Phase 3 SlashCommand invoked - tasks attached, if conflict_risk ≥ medium)**:
|
**TodoWrite Update (Phase 3 SlashCommand dispatched - tasks attached, if conflict_risk ≥ medium)**:
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
||||||
@@ -165,7 +209,7 @@ CONTEXT: Existing user database schema, REST API endpoints
|
|||||||
]
|
]
|
||||||
```
|
```
|
||||||
|
|
||||||
**Note**: SlashCommand invocation **attaches** conflict-resolution's 3 tasks. Orchestrator **executes** these tasks sequentially.
|
**Note**: SlashCommand dispatch **attaches** conflict-resolution's 3 tasks. Orchestrator **executes** these tasks sequentially.
|
||||||
|
|
||||||
<!-- TodoWrite: After Phase 3 tasks complete, REMOVE Phase 3.1-3.3, restore to orchestrator view -->
|
<!-- TodoWrite: After Phase 3 tasks complete, REMOVE Phase 3.1-3.3, restore to orchestrator view -->
|
||||||
|
|
||||||
@@ -185,9 +229,14 @@ CONTEXT: Existing user database schema, REST API endpoints
|
|||||||
|
|
||||||
**Memory State Check**:
|
**Memory State Check**:
|
||||||
- Evaluate current context window usage and memory state
|
- Evaluate current context window usage and memory state
|
||||||
- If memory usage is high (>110K tokens or approaching context limits):
|
- If memory usage is high (>120K tokens or approaching context limits):
|
||||||
- **Command**: `SlashCommand(command="/compact")`
|
|
||||||
- This optimizes memory before proceeding to Phase 3.5
|
**Step 3.2: Dispatch** - Optimize memory before proceeding
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/compact")
|
||||||
|
```
|
||||||
|
|
||||||
- Memory compaction is particularly important after analysis phase which may generate extensive documentation
|
- Memory compaction is particularly important after analysis phase which may generate extensive documentation
|
||||||
- Ensures optimal performance and prevents context overflow
|
- Ensures optimal performance and prevents context overflow
|
||||||
|
|
||||||
@@ -221,17 +270,13 @@ CONTEXT: Existing user database schema, REST API endpoints
|
|||||||
- Task generation translates high-level role analyses into concrete, actionable work items
|
- Task generation translates high-level role analyses into concrete, actionable work items
|
||||||
- **Intent priority**: Current user prompt > role analysis.md files > guidance-specification.md
|
- **Intent priority**: Current user prompt > role analysis.md files > guidance-specification.md
|
||||||
|
|
||||||
**Command**:
|
**Step 4.1: Dispatch** - Generate implementation plan and task JSONs
|
||||||
```bash
|
|
||||||
# Default (agent mode)
|
|
||||||
SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]")
|
|
||||||
|
|
||||||
# With CLI execution
|
```javascript
|
||||||
SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId] --cli-execute")
|
SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]")
|
||||||
```
|
```
|
||||||
|
|
||||||
**Flag**:
|
**CLI Execution Note**: CLI tool usage is now determined semantically by action-planning-agent based on user's task description. If user specifies "use Codex/Gemini/Qwen for X", the agent embeds `command` fields in relevant `implementation_approach` steps.
|
||||||
- `--cli-execute`: Generate tasks with Codex execution commands
|
|
||||||
|
|
||||||
**Input**: `sessionId` from Phase 1
|
**Input**: `sessionId` from Phase 1
|
||||||
|
|
||||||
@@ -240,9 +285,9 @@ SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]
|
|||||||
- `.workflow/active/[sessionId]/.task/IMPL-*.json` exists (at least one)
|
- `.workflow/active/[sessionId]/.task/IMPL-*.json` exists (at least one)
|
||||||
- `.workflow/active/[sessionId]/TODO_LIST.md` exists
|
- `.workflow/active/[sessionId]/TODO_LIST.md` exists
|
||||||
|
|
||||||
<!-- TodoWrite: When task-generate-agent invoked, ATTACH 1 agent task -->
|
<!-- TodoWrite: When task-generate-agent dispatched, ATTACH 1 agent task -->
|
||||||
|
|
||||||
**TodoWrite Update (Phase 4 SlashCommand invoked - agent task attached)**:
|
**TodoWrite Update (Phase 4 SlashCommand dispatched - agent task attached)**:
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
||||||
@@ -286,7 +331,7 @@ Quality Gate: Consider running /workflow:action-plan-verify to catch issues earl
|
|||||||
|
|
||||||
### Key Principles
|
### Key Principles
|
||||||
|
|
||||||
1. **Task Attachment** (when SlashCommand invoked):
|
1. **Task Attachment** (when SlashCommand dispatched):
|
||||||
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
|
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
|
||||||
- **Phase 2, 3**: Multiple sub-tasks attached (e.g., Phase 2.1, 2.2, 2.3)
|
- **Phase 2, 3**: Multiple sub-tasks attached (e.g., Phase 2.1, 2.2, 2.3)
|
||||||
- **Phase 4**: Single agent task attached (e.g., "Execute task-generate-agent")
|
- **Phase 4**: Single agent task attached (e.g., "Execute task-generate-agent")
|
||||||
@@ -305,7 +350,7 @@ Quality Gate: Consider running /workflow:action-plan-verify to catch issues earl
|
|||||||
- No user intervention required between phases
|
- No user intervention required between phases
|
||||||
- TodoWrite dynamically reflects current execution state
|
- TodoWrite dynamically reflects current execution state
|
||||||
|
|
||||||
**Lifecycle Summary**: Initial pending tasks → Phase invoked (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary for Phase 2/3, or marked completed for Phase 4) → Next phase begins → Repeat until all phases complete.
|
**Lifecycle Summary**: Initial pending tasks → Phase dispatched (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary for Phase 2/3, or marked completed for Phase 4) → Next phase begins → Repeat until all phases complete.
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
@@ -373,7 +418,7 @@ Phase 3: conflict-resolution [AUTO-TRIGGERED if conflict_risk ≥ medium]
|
|||||||
↓ Output: Modified brainstorm artifacts (NO report file)
|
↓ Output: Modified brainstorm artifacts (NO report file)
|
||||||
↓ Skip if conflict_risk is none/low → proceed directly to Phase 4
|
↓ Skip if conflict_risk is none/low → proceed directly to Phase 4
|
||||||
↓
|
↓
|
||||||
Phase 4: task-generate-agent --session sessionId [--cli-execute]
|
Phase 4: task-generate-agent --session sessionId
|
||||||
↓ Input: sessionId + resolved brainstorm artifacts + session memory
|
↓ Input: sessionId + resolved brainstorm artifacts + session memory
|
||||||
↓ Output: IMPL_PLAN.md, task JSONs, TODO_LIST.md
|
↓ Output: IMPL_PLAN.md, task JSONs, TODO_LIST.md
|
||||||
↓
|
↓
|
||||||
@@ -397,7 +442,7 @@ User triggers: /workflow:plan "Build authentication system"
|
|||||||
Phase 1: Session Discovery
|
Phase 1: Session Discovery
|
||||||
→ sessionId extracted
|
→ sessionId extracted
|
||||||
↓
|
↓
|
||||||
Phase 2: Context Gathering (SlashCommand invoked)
|
Phase 2: Context Gathering (SlashCommand dispatched)
|
||||||
→ ATTACH 3 sub-tasks: ← ATTACHED
|
→ ATTACH 3 sub-tasks: ← ATTACHED
|
||||||
- → Analyze codebase structure
|
- → Analyze codebase structure
|
||||||
- → Identify integration points
|
- → Identify integration points
|
||||||
@@ -408,7 +453,7 @@ Phase 2: Context Gathering (SlashCommand invoked)
|
|||||||
↓
|
↓
|
||||||
Conditional Branch: Check conflict_risk
|
Conditional Branch: Check conflict_risk
|
||||||
├─ IF conflict_risk ≥ medium:
|
├─ IF conflict_risk ≥ medium:
|
||||||
│ Phase 3: Conflict Resolution (SlashCommand invoked)
|
│ Phase 3: Conflict Resolution (SlashCommand dispatched)
|
||||||
│ → ATTACH 3 sub-tasks: ← ATTACHED
|
│ → ATTACH 3 sub-tasks: ← ATTACHED
|
||||||
│ - → Detect conflicts with CLI analysis
|
│ - → Detect conflicts with CLI analysis
|
||||||
│ - → Present conflicts to user
|
│ - → Present conflicts to user
|
||||||
@@ -418,7 +463,7 @@ Conditional Branch: Check conflict_risk
|
|||||||
│
|
│
|
||||||
└─ ELSE: Skip Phase 3, proceed to Phase 4
|
└─ ELSE: Skip Phase 3, proceed to Phase 4
|
||||||
↓
|
↓
|
||||||
Phase 4: Task Generation (SlashCommand invoked)
|
Phase 4: Task Generation (SlashCommand dispatched)
|
||||||
→ Single agent task (no sub-tasks)
|
→ Single agent task (no sub-tasks)
|
||||||
→ Agent autonomously completes internally:
|
→ Agent autonomously completes internally:
|
||||||
(discovery → planning → output)
|
(discovery → planning → output)
|
||||||
@@ -428,12 +473,12 @@ Return summary to user
|
|||||||
```
|
```
|
||||||
|
|
||||||
**Key Points**:
|
**Key Points**:
|
||||||
- **← ATTACHED**: Tasks attached to TodoWrite when SlashCommand invoked
|
- **← ATTACHED**: Tasks attached to TodoWrite when SlashCommand dispatched
|
||||||
- Phase 2, 3: Multiple sub-tasks
|
- Phase 2, 3: Multiple sub-tasks
|
||||||
- Phase 4: Single agent task
|
- Phase 4: Single agent task
|
||||||
- **← COLLAPSED**: Sub-tasks collapsed to summary after completion (Phase 2, 3 only)
|
- **← COLLAPSED**: Sub-tasks collapsed to summary after completion (Phase 2, 3 only)
|
||||||
- **Phase 4**: Single agent task, no collapse (just mark completed)
|
- **Phase 4**: Single agent task, no collapse (just mark completed)
|
||||||
- **Conditional Branch**: Phase 3 only executes if conflict_risk ≥ medium
|
- **Conditional Branch**: Phase 3 only dispatches if conflict_risk ≥ medium
|
||||||
- **Continuous Flow**: No user intervention between phases
|
- **Continuous Flow**: No user intervention between phases
|
||||||
|
|
||||||
## Error Handling
|
## Error Handling
|
||||||
@@ -452,11 +497,9 @@ Return summary to user
|
|||||||
- Parse context path from Phase 2 output, store in memory
|
- Parse context path from Phase 2 output, store in memory
|
||||||
- **Extract conflict_risk from context-package.json**: Determine Phase 3 execution
|
- **Extract conflict_risk from context-package.json**: Determine Phase 3 execution
|
||||||
- **If conflict_risk ≥ medium**: Launch Phase 3 conflict-resolution with sessionId and contextPath
|
- **If conflict_risk ≥ medium**: Launch Phase 3 conflict-resolution with sessionId and contextPath
|
||||||
- Wait for Phase 3 to finish executing (if executed), verify CONFLICT_RESOLUTION.md created
|
- Wait for Phase 3 to finish executing (if executed), verify conflict-resolution.json created
|
||||||
- **If conflict_risk is none/low**: Skip Phase 3, proceed directly to Phase 4
|
- **If conflict_risk is none/low**: Skip Phase 3, proceed directly to Phase 4
|
||||||
- **Build Phase 4 command**:
|
- **Build Phase 4 command**: `/workflow:tools:task-generate-agent --session [sessionId]`
|
||||||
- Base command: `/workflow:tools:task-generate-agent --session [sessionId]`
|
|
||||||
- Add `--cli-execute` if flag present
|
|
||||||
- Pass session ID to Phase 4 command
|
- Pass session ID to Phase 4 command
|
||||||
- Verify all Phase 4 outputs
|
- Verify all Phase 4 outputs
|
||||||
- Update TodoWrite after each phase (dynamically adjust for Phase 3 presence)
|
- Update TodoWrite after each phase (dynamically adjust for Phase 3 presence)
|
||||||
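The conflict_risk gate described above reduces to a small predicate. A minimal sketch, assuming context-package.json exposes a `conflict_risk` field with values none/low/medium/high:

```javascript
// Sketch: the Phase 3 dispatch gate. Assumption: context-package.json
// carries a `conflict_risk` field (none | low | medium | high).
function shouldRunConflictResolution(contextPackage) {
  // Phase 3 runs only when conflict_risk >= medium.
  return ['medium', 'high'].includes(contextPackage.conflict_risk);
}
```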
|
|||||||
@@ -48,8 +48,54 @@ Intelligently replans workflow sessions or individual tasks with interactive bou
|
|||||||
/workflow:replan IMPL-1 --interactive
|
/workflow:replan IMPL-1 --interactive
|
||||||
```
|
```
|
||||||
|
|
||||||
|
## Execution Process
|
||||||
|
|
||||||
|
```
|
||||||
|
Input Parsing:
|
||||||
|
├─ Parse flags: --session, --interactive
|
||||||
|
└─ Detect mode: task-id present → Task mode | Otherwise → Session mode
|
||||||
|
|
||||||
|
Phase 1: Mode Detection & Session Discovery
|
||||||
|
├─ Detect operation mode (Task vs Session)
|
||||||
|
├─ Discover/validate session (--session flag or auto-detect)
|
||||||
|
└─ Load session context (workflow-session.json, IMPL_PLAN.md, TODO_LIST.md)
|
||||||
|
|
||||||
|
Phase 2: Interactive Requirement Clarification
|
||||||
|
└─ Decision (by mode):
|
||||||
|
├─ Session mode → 3-4 questions (scope, modules, changes, dependencies)
|
||||||
|
└─ Task mode → 2 questions (update type, ripple effect)
|
||||||
|
|
||||||
|
Phase 3: Impact Analysis & Planning
|
||||||
|
├─ Analyze required changes
|
||||||
|
├─ Generate modification plan
|
||||||
|
└─ User confirmation (Execute / Adjust / Cancel)
|
||||||
|
|
||||||
|
Phase 4: Backup Creation
|
||||||
|
└─ Backup all affected files with manifest
|
||||||
|
|
||||||
|
Phase 5: Apply Modifications
|
||||||
|
├─ Update IMPL_PLAN.md (if needed)
|
||||||
|
├─ Update TODO_LIST.md (if needed)
|
||||||
|
├─ Update/Create/Delete task JSONs
|
||||||
|
└─ Update session metadata
|
||||||
|
|
||||||
|
Phase 6: Verification & Summary
|
||||||
|
├─ Validate consistency (JSON validity, task limits, acyclic dependencies)
|
||||||
|
└─ Generate change summary
|
||||||
|
```
|
||||||
|
|
||||||
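Phase 6's acyclic-dependency check can be sketched as a depth-first search over the task JSONs. The task shape below (`id`, `depends_on`) is an assumption for illustration:

```javascript
// Sketch: detect a dependency cycle among task JSONs.
// Assumed shape: { id: "IMPL-1", depends_on: ["IMPL-2", ...] }
function hasCycle(tasks) {
  const deps = new Map(tasks.map(t => [t.id, t.depends_on || []]));
  const state = new Map(); // undefined=unvisited, 1=in current DFS stack, 2=done
  const visit = (id) => {
    if (state.get(id) === 1) return true;  // back edge => cycle
    if (state.get(id) === 2) return false; // already cleared
    state.set(id, 1);
    for (const d of deps.get(id) || []) if (visit(d)) return true;
    state.set(id, 2);
    return false;
  };
  return tasks.some(t => visit(t.id));
}
```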
## Execution Lifecycle
|
## Execution Lifecycle
|
||||||
|
|
||||||
|
### Input Parsing
|
||||||
|
|
||||||
|
**Parse flags**:
|
||||||
|
```javascript
|
||||||
|
const sessionFlag = $ARGUMENTS.match(/--session\s+(\S+)/)?.[1]
|
||||||
|
const interactive = $ARGUMENTS.includes('--interactive')
|
||||||
|
const taskIdMatch = $ARGUMENTS.match(/\b(IMPL-\d+(?:\.\d+)?)\b/)
|
||||||
|
const taskId = taskIdMatch?.[1]
|
||||||
|
```
|
||||||
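The flag parsing above can be exercised directly with a literal argument string standing in for `$ARGUMENTS` (the sample invocation is illustrative):

```javascript
// Hypothetical argument string standing in for $ARGUMENTS.
const ARGUMENTS = '/workflow:replan IMPL-1.2 --session WFS-demo --interactive';

const sessionFlag = ARGUMENTS.match(/--session\s+(\S+)/)?.[1];
const interactive = ARGUMENTS.includes('--interactive');
const taskId = ARGUMENTS.match(/\b(IMPL-\d+(?:\.\d+)?)\b/)?.[1];

// Mode detection: task-id present => Task mode, otherwise Session mode.
const mode = taskId ? 'task' : 'session';
```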
|
|
||||||
### Phase 1: Mode Detection & Session Discovery
|
### Phase 1: Mode Detection & Session Discovery
|
||||||
|
|
||||||
**Process**:
|
**Process**:
|
||||||
@@ -97,11 +143,10 @@ Options: Dynamically generated from existing tasks' focus_paths
|
|||||||
**Q3: Task Changes** (if scope >= task_restructure)
|
**Q3: Task Changes** (if scope >= task_restructure)
|
||||||
```javascript
|
```javascript
|
||||||
Options:
|
Options:
|
||||||
- Add new task
|
- Add/remove tasks (add_remove)
|
||||||
- Delete existing task
|
- Merge/split tasks (merge_split)
|
||||||
- Merge tasks
|
- Update content only (update_only)
|
||||||
- Split tasks
|
// Note: Max 4 options for AskUserQuestion
|
||||||
- Update content only
|
|
||||||
```
|
```
|
||||||
|
|
||||||
**Q4: Dependency Changes**
|
**Q4: Dependency Changes**
|
||||||
|
|||||||
@@ -46,8 +46,7 @@ Automated fix orchestrator with **two-phase architecture**: AI-powered planning
|
|||||||
1. **Intelligent Planning**: AI-powered analysis identifies optimal grouping and execution strategy
|
1. **Intelligent Planning**: AI-powered analysis identifies optimal grouping and execution strategy
|
||||||
2. **Multi-stage Coordination**: Supports complex parallel + serial execution with dependency management
|
2. **Multi-stage Coordination**: Supports complex parallel + serial execution with dependency management
|
||||||
3. **Conservative Safety**: Mandatory test verification with automatic rollback on failure
|
3. **Conservative Safety**: Mandatory test verification with automatic rollback on failure
|
||||||
4. **Real-time Visibility**: Dashboard shows planning progress, stage timeline, and active agents
|
4. **Resume Support**: Checkpoint-based recovery for interrupted sessions
|
||||||
5. **Resume Support**: Checkpoint-based recovery for interrupted sessions
|
|
||||||
|
|
||||||
### Orchestrator Boundary (CRITICAL)
|
### Orchestrator Boundary (CRITICAL)
|
||||||
- **ONLY command** for automated review finding fixes
|
- **ONLY command** for automated review finding fixes
|
||||||
@@ -59,14 +58,14 @@ Automated fix orchestrator with **two-phase architecture**: AI-powered planning
|
|||||||
|
|
||||||
```
|
```
|
||||||
Phase 1: Discovery & Initialization
|
Phase 1: Discovery & Initialization
|
||||||
└─ Validate export file, create fix session structure, initialize state files → Generate fix-dashboard.html
|
└─ Validate export file, create fix session structure, initialize state files
|
||||||
|
|
||||||
Phase 2: Planning Coordination (@cli-planning-agent)
|
Phase 2: Planning Coordination (@cli-planning-agent)
|
||||||
├─ Analyze findings for patterns and dependencies
|
├─ Analyze findings for patterns and dependencies
|
||||||
├─ Group by file + dimension + root cause similarity
|
├─ Group by file + dimension + root cause similarity
|
||||||
├─ Determine execution strategy (parallel/serial/hybrid)
|
├─ Determine execution strategy (parallel/serial/hybrid)
|
||||||
├─ Generate fix timeline with stages
|
├─ Generate fix timeline with stages
|
||||||
└─ Output: fix-plan.json (dashboard auto-polls for status)
|
└─ Output: fix-plan.json
|
||||||
|
|
||||||
Phase 3: Execution Orchestration (Stage-based)
|
Phase 3: Execution Orchestration (Stage-based)
|
||||||
For each timeline stage:
|
For each timeline stage:
|
||||||
@@ -198,12 +197,10 @@ if (result.passRate < 100%) {
|
|||||||
- Session creation: Generate fix-session-id (`fix-{timestamp}`)
|
- Session creation: Generate fix-session-id (`fix-{timestamp}`)
|
||||||
- Directory structure: Create `{review-dir}/fixes/{fix-session-id}/` with subdirectories
|
- Directory structure: Create `{review-dir}/fixes/{fix-session-id}/` with subdirectories
|
||||||
- State files: Initialize active-fix-session.json (session marker)
|
- State files: Initialize active-fix-session.json (session marker)
|
||||||
- Dashboard generation: Create fix-dashboard.html from template (see Dashboard Generation below)
|
|
||||||
- TodoWrite initialization: Set up 4-phase tracking
|
- TodoWrite initialization: Set up 4-phase tracking
|
||||||
|
|
||||||
**Phase 2: Planning Coordination**
|
**Phase 2: Planning Coordination**
|
||||||
- Launch @cli-planning-agent with findings data and project context
|
- Launch @cli-planning-agent with findings data and project context
|
||||||
- Monitor planning progress (dashboard shows "Planning fixes..." indicator)
|
|
||||||
- Validate fix-plan.json output (schema conformance, includes metadata with session status)
|
- Validate fix-plan.json output (schema conformance, includes metadata with session status)
|
||||||
- Load plan into memory for execution phase
|
- Load plan into memory for execution phase
|
||||||
- TodoWrite update: Mark planning complete, start execution
|
- TodoWrite update: Mark planning complete, start execution
|
||||||
@@ -216,7 +213,6 @@ if (result.passRate < 100%) {
|
|||||||
- Assign agent IDs (agents update their fix-progress-{N}.json)
|
- Assign agent IDs (agents update their fix-progress-{N}.json)
|
||||||
- Handle agent failures gracefully (mark group as failed, continue)
|
- Handle agent failures gracefully (mark group as failed, continue)
|
||||||
- Advance to next stage only when current stage complete
|
- Advance to next stage only when current stage complete
|
||||||
- Dashboard polls and aggregates fix-progress-{N}.json files for display
|
|
||||||
|
|
||||||
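The stage-based orchestration above can be sketched as a loop that records group failures without aborting, advancing only when a stage settles. The stage/group shapes and the group runner are assumptions:

```javascript
// Sketch: run timeline stages in order; a failed group is marked and
// execution continues, per "handle agent failures gracefully".
// Assumed shape: stages = [{ id, groups: [...] }]; runGroup throws on failure.
function runTimeline(stages, runGroup) {
  return stages.map(stage => {
    let failed = 0;
    for (const group of stage.groups) {
      try { runGroup(group); } catch { failed++; } // mark group failed, continue
    }
    // Next iteration (stage) begins only after every group here has settled.
    return { stage: stage.id, failed };
  });
}
```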
**Phase 4: Completion & Aggregation**
|
**Phase 4: Completion & Aggregation**
|
||||||
- Collect final status from all fix-progress-{N}.json files
|
- Collect final status from all fix-progress-{N}.json files
|
||||||
@@ -224,7 +220,7 @@ if (result.passRate < 100%) {
|
|||||||
- Update fix-history.json with new session entry
|
- Update fix-history.json with new session entry
|
||||||
- Remove active-fix-session.json
|
- Remove active-fix-session.json
|
||||||
- TodoWrite completion: Mark all phases done
|
- TodoWrite completion: Mark all phases done
|
||||||
- Output summary to user with dashboard link
|
- Output summary to user
|
||||||
|
|
||||||
**Phase 5: Session Completion (Optional)**
|
**Phase 5: Session Completion (Optional)**
|
||||||
- If all findings fixed successfully (no failures):
|
- If all findings fixed successfully (no failures):
|
||||||
@@ -234,51 +230,12 @@ if (result.passRate < 100%) {
|
|||||||
- Output: "Some findings failed. Review fix-summary.md before completing session."
|
- Output: "Some findings failed. Review fix-summary.md before completing session."
|
||||||
- Do NOT auto-complete session
|
- Do NOT auto-complete session
|
||||||
|
|
||||||
### Dashboard Generation
|
|
||||||
|
|
||||||
**MANDATORY**: Dashboard MUST be generated from template during Phase 1 initialization
|
|
||||||
|
|
||||||
**Template Location**: `~/.claude/templates/fix-dashboard.html`
|
|
||||||
|
|
||||||
**⚠️ POST-GENERATION**: Orchestrator and agents MUST NOT read/write/modify fix-dashboard.html after creation
|
|
||||||
|
|
||||||
**Generation Steps**:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Copy template to fix session directory
|
|
||||||
cp ~/.claude/templates/fix-dashboard.html ${sessionDir}/fixes/${fixSessionId}/fix-dashboard.html
|
|
||||||
|
|
||||||
# 2. Replace SESSION_ID placeholder
|
|
||||||
sed -i "s|{{SESSION_ID}}|${sessionId}|g" ${sessionDir}/fixes/${fixSessionId}/fix-dashboard.html
|
|
||||||
|
|
||||||
# 3. Replace REVIEW_DIR placeholder
|
|
||||||
sed -i "s|{{REVIEW_DIR}}|${reviewDir}|g" ${sessionDir}/fixes/${fixSessionId}/fix-dashboard.html
|
|
||||||
|
|
||||||
# 4. Output dashboard URL
|
|
||||||
echo "🔧 Fix Dashboard: file://$(cd ${sessionDir}/fixes/${fixSessionId} && pwd)/fix-dashboard.html"
|
|
||||||
```
|
|
||||||
|
|
||||||
**Dashboard Features**:
|
|
||||||
- Real-time progress tracking via JSON polling (3-second interval)
|
|
||||||
- Stage timeline visualization with parallel/serial execution modes
|
|
||||||
- Active groups and agents monitoring
|
|
||||||
- Flow control steps tracking for each agent
|
|
||||||
- Fix history drawer with session summaries
|
|
||||||
- Consumes new JSON structure (fix-plan.json with metadata + fix-progress-{N}.json)
|
|
||||||
|
|
||||||
**JSON Consumption**:
|
|
||||||
- `fix-plan.json`: Reads metadata field for session info, timeline stages, groups configuration
|
|
||||||
- `fix-progress-{N}.json`: Polls all progress files to aggregate real-time status
|
|
||||||
- `active-fix-session.json`: Detects active session on load
|
|
||||||
- `fix-history.json`: Loads historical fix sessions
|
|
||||||
|
|
||||||
### Output File Structure
|
### Output File Structure
|
||||||
|
|
||||||
```
|
```
|
||||||
.workflow/active/WFS-{session-id}/.review/
|
.workflow/active/WFS-{session-id}/.review/
|
||||||
├── fix-export-{timestamp}.json # Exported findings (input)
|
├── fix-export-{timestamp}.json # Exported findings (input)
|
||||||
└── fixes/{fix-session-id}/
|
└── fixes/{fix-session-id}/
|
||||||
├── fix-dashboard.html # Interactive dashboard (generated once, auto-polls JSON)
|
|
||||||
├── fix-plan.json # Planning agent output (execution plan with metadata)
|
├── fix-plan.json # Planning agent output (execution plan with metadata)
|
||||||
├── fix-progress-1.json # Group 1 progress (planning agent init → agent updates)
|
├── fix-progress-1.json # Group 1 progress (planning agent init → agent updates)
|
||||||
├── fix-progress-2.json # Group 2 progress (planning agent init → agent updates)
|
├── fix-progress-2.json # Group 2 progress (planning agent init → agent updates)
|
||||||
@@ -289,10 +246,8 @@ echo "🔧 Fix Dashboard: file://$(cd ${sessionDir}/fixes/${fixSessionId} && pwd
|
|||||||
```
|
```
|
||||||
|
|
||||||
**File Producers**:
|
**File Producers**:
|
||||||
- **Orchestrator**: `fix-dashboard.html` (generated once from template during Phase 1)
|
|
||||||
- **Planning Agent**: `fix-plan.json` (with metadata), all `fix-progress-*.json` (initial state)
|
- **Planning Agent**: `fix-plan.json` (with metadata), all `fix-progress-*.json` (initial state)
|
||||||
- **Execution Agents**: Update assigned `fix-progress-{N}.json` in real-time
|
- **Execution Agents**: Update assigned `fix-progress-{N}.json` in real-time
|
||||||
- **Dashboard (Browser)**: Reads `fix-plan.json` + all `fix-progress-*.json`, aggregates in-memory every 3 seconds via JavaScript polling
|
|
||||||
|
|
||||||
|
|
||||||
### Agent Invocation Template
|
### Agent Invocation Template
|
||||||
@@ -345,7 +300,7 @@ For each group (G1, G2, G3, ...), generate fix-progress-{N}.json following templ
|
|||||||
- Flow control: Empty implementation_approach array
|
- Flow control: Empty implementation_approach array
|
||||||
- Errors: Empty array
|
- Errors: Empty array
|
||||||
|
|
||||||
**CRITICAL**: Ensure complete template structure for Dashboard consumption - all fields must be present.
|
**CRITICAL**: Ensure complete template structure - all fields must be present.
|
||||||
|
|
||||||
## Analysis Requirements
|
## Analysis Requirements
|
||||||
|
|
||||||
@@ -417,7 +372,7 @@ Task({
|
|||||||
description: `Fix ${group.findings.length} issues: ${group.group_name}`,
|
description: `Fix ${group.findings.length} issues: ${group.group_name}`,
|
||||||
prompt: `
|
prompt: `
|
||||||
## Task Objective
|
## Task Objective
|
||||||
Execute fixes for code review findings in group ${group.group_id}. Update progress file in real-time with flow control tracking for dashboard visibility.
|
Execute fixes for code review findings in group ${group.group_id}. Update progress file in real-time with flow control tracking.
|
||||||
|
|
||||||
## Assignment
|
## Assignment
|
||||||
- Group ID: ${group.group_id}
|
- Group ID: ${group.group_id}
|
||||||
@@ -547,7 +502,6 @@ When all findings processed:
|
|||||||
|
|
||||||
### Progress File Updates
|
### Progress File Updates
|
||||||
- **MUST update after every significant action** (before/after each step)
|
- **MUST update after every significant action** (before/after each step)
|
||||||
- **Dashboard polls every 3 seconds** - ensure writes are atomic
|
|
||||||
- **Always maintain complete structure** - never write partial updates
|
- **Always maintain complete structure** - never write partial updates
|
||||||
- **Use ISO 8601 timestamps** - e.g., "2025-01-25T14:36:00Z"
|
- **Use ISO 8601 timestamps** - e.g., "2025-01-25T14:36:00Z"
|
||||||
|
|
||||||
@@ -636,9 +590,17 @@ TodoWrite({
|
|||||||
1. **Trust AI Planning**: Planning agent's grouping and execution strategy are based on dependency analysis
|
1. **Trust AI Planning**: Planning agent's grouping and execution strategy are based on dependency analysis
|
||||||
2. **Conservative Approach**: Test verification is mandatory - no fixes kept without passing tests
|
2. **Conservative Approach**: Test verification is mandatory - no fixes kept without passing tests
|
||||||
3. **Parallel Efficiency**: Default 3 concurrent agents balances speed and resource usage
|
3. **Parallel Efficiency**: Default 3 concurrent agents balances speed and resource usage
|
||||||
4. **Monitor Dashboard**: Real-time stage timeline and agent status provide execution visibility
|
4. **Resume Support**: Fix sessions can resume from checkpoints after interruption
|
||||||
5. **Resume Support**: Fix sessions can resume from checkpoints after interruption
|
5. **Manual Review**: Always review failed fixes manually - may require architectural changes
|
||||||
6. **Manual Review**: Always review failed fixes manually - may require architectural changes
|
6. **Incremental Fixing**: Start with small batches (5-10 findings) before large-scale fixes
|
||||||
7. **Incremental Fixing**: Start with small batches (5-10 findings) before large-scale fixes
|
|
||||||
|
## Related Commands
|
||||||
|
|
||||||
|
### View Fix Progress
|
||||||
|
Use `ccw view` to open the workflow dashboard in browser:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
ccw view
|
||||||
|
```
|
||||||
|
|
||||||
|
|
||||||
|
|||||||
@@ -51,14 +51,12 @@ Independent multi-dimensional code review orchestrator with **hybrid parallel-it
|
|||||||
2. **Session-Integrated**: Review results tracked within workflow session for unified management
|
2. **Session-Integrated**: Review results tracked within workflow session for unified management
|
||||||
3. **Comprehensive Coverage**: Same 7 specialized dimensions as session review
|
3. **Comprehensive Coverage**: Same 7 specialized dimensions as session review
|
||||||
4. **Intelligent Prioritization**: Automatic identification of critical issues and cross-cutting concerns
|
4. **Intelligent Prioritization**: Automatic identification of critical issues and cross-cutting concerns
|
||||||
5. **Real-time Visibility**: JSON-based progress tracking with interactive HTML dashboard
|
5. **Unified Archive**: Review results archived with session for historical reference
|
||||||
6. **Unified Archive**: Review results archived with session for historical reference
|
|
||||||
|
|
||||||
### Orchestrator Boundary (CRITICAL)
|
### Orchestrator Boundary (CRITICAL)
|
||||||
- **ONLY command** for independent multi-dimensional module review
|
- **ONLY command** for independent multi-dimensional module review
|
||||||
- Manages: dimension coordination, aggregation, iteration control, progress tracking
|
- Manages: dimension coordination, aggregation, iteration control, progress tracking
|
||||||
- Delegates: Code exploration and analysis to @cli-explore-agent, dimension-specific reviews via Deep Scan mode
|
- Delegates: Code exploration and analysis to @cli-explore-agent, dimension-specific reviews via Deep Scan mode
|
||||||
- **⚠️ DASHBOARD CONSTRAINT**: Dashboard is generated ONCE during Phase 1 initialization. After initialization, orchestrator and agents MUST NOT read, write, or modify dashboard.html - it remains static for user interaction only.
|
|
||||||
|
|
||||||
## How It Works
|
## How It Works
|
||||||
|
|
||||||
@@ -66,7 +64,7 @@ Independent multi-dimensional code review orchestrator with **hybrid parallel-it
|
|||||||
|
|
||||||
```
|
```
|
||||||
Phase 1: Discovery & Initialization
|
Phase 1: Discovery & Initialization
|
||||||
└─ Resolve file patterns, validate paths, initialize state, create output structure → Generate dashboard.html
|
└─ Resolve file patterns, validate paths, initialize state, create output structure
|
||||||
|
|
||||||
Phase 2: Parallel Reviews (for each dimension)
|
Phase 2: Parallel Reviews (for each dimension)
|
||||||
├─ Launch 7 review agents simultaneously
|
├─ Launch 7 review agents simultaneously
|
||||||
@@ -90,7 +88,7 @@ Phase 4: Iterative Deep-Dive (optional)
|
|||||||
└─ Loop until no critical findings OR max iterations
|
└─ Loop until no critical findings OR max iterations
|
||||||
|
|
||||||
Phase 5: Completion
|
Phase 5: Completion
|
||||||
└─ Finalize review-progress.json → Output dashboard path
|
└─ Finalize review-progress.json
|
||||||
```
|
```
|
||||||
|
|
||||||
### Agent Roles
|
### Agent Roles
|
||||||
@@ -188,8 +186,8 @@ const CATEGORIES = {
|
|||||||
|
|
||||||
**Step 1: Session Creation**
|
**Step 1: Session Creation**
|
||||||
```javascript
|
```javascript
|
||||||
// Create workflow session for this review
|
// Create workflow session for this review (type: review)
|
||||||
SlashCommand(command="/workflow:session:start \"Code review for [target_pattern]\"")
|
SlashCommand(command="/workflow:session:start --type review \"Code review for [target_pattern]\"")
|
||||||
|
|
||||||
// Parse output
|
// Parse output
|
||||||
const sessionId = output.match(/SESSION_ID: (WFS-[^\s]+)/)[1];
|
const sessionId = output.match(/SESSION_ID: (WFS-[^\s]+)/)[1];
|
||||||
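The SESSION_ID parse above can be checked in isolation; the sample command output below is illustrative:

```javascript
// Hypothetical session:start output; the regex mirrors the parse step above.
const output = 'SESSION_ID: WFS-20250125-auth-review\nSTATUS: created';
const sessionId = output.match(/SESSION_ID: (WFS-[^\s]+)/)[1];
```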
@@ -219,34 +217,9 @@ done
|
|||||||
|
|
||||||
**Step 4: Initialize Review State**
|
**Step 4: Initialize Review State**
|
||||||
- State initialization: Create `review-state.json` with metadata, dimensions, max_iterations, resolved_files (merged metadata + state)
|
- State initialization: Create `review-state.json` with metadata, dimensions, max_iterations, resolved_files (merged metadata + state)
|
||||||
- Progress tracking: Create `review-progress.json` for dashboard polling
|
- Progress tracking: Create `review-progress.json` for progress tracking
|
||||||
|
|
||||||
**Step 5: Dashboard Generation**
|
**Step 5: TodoWrite Initialization**
|
||||||
|
|
||||||
**Constraints**:
|
|
||||||
- **MANDATORY**: Dashboard MUST be generated from template: `~/.claude/templates/review-cycle-dashboard.html`
|
|
||||||
- **PROHIBITED**: Direct creation or custom generation without template
|
|
||||||
- **POST-GENERATION**: Orchestrator and agents MUST NOT read/write/modify dashboard.html after creation
|
|
||||||
|
|
||||||
**Generation Commands** (3 independent steps):
|
|
||||||
```bash
|
|
||||||
# Step 1: Copy template to output location
|
|
||||||
cp ~/.claude/templates/review-cycle-dashboard.html ${sessionDir}/.review/dashboard.html
|
|
||||||
|
|
||||||
# Step 2: Replace SESSION_ID placeholder
|
|
||||||
sed -i "s|{{SESSION_ID}}|${sessionId}|g" ${sessionDir}/.review/dashboard.html
|
|
||||||
|
|
||||||
# Step 3: Replace REVIEW_TYPE placeholder
|
|
||||||
sed -i "s|{{REVIEW_TYPE}}|module|g" ${sessionDir}/.review/dashboard.html
|
|
||||||
|
|
||||||
# Step 4: Replace REVIEW_DIR placeholder
|
|
||||||
sed -i "s|{{REVIEW_DIR}}|${reviewDir}|g" ${sessionDir}/.review/dashboard.html
|
|
||||||
|
|
||||||
# Output: Dashboard path
|
|
||||||
echo "📊 Dashboard: file://$(cd ${sessionDir} && pwd)/.review/dashboard.html"
|
|
||||||
```
|
|
||||||
|
|
||||||
**Step 6: TodoWrite Initialization**
|
|
||||||
- Set up progress tracking with hierarchical structure
|
- Set up progress tracking with hierarchical structure
|
||||||
- Mark Phase 1 completed, Phase 2 in_progress
|
- Mark Phase 1 completed, Phase 2 in_progress
|
||||||
|
|
||||||
@@ -277,7 +250,6 @@ echo "📊 Dashboard: file://$(cd ${sessionDir} && pwd)/.review/dashboard.html"
|
|||||||
- Finalize review-progress.json with completion statistics
|
- Finalize review-progress.json with completion statistics
|
||||||
- Update review-state.json with completion_time and phase=complete
|
- Update review-state.json with completion_time and phase=complete
|
||||||
- TodoWrite completion: Mark all tasks done
|
- TodoWrite completion: Mark all tasks done
|
||||||
- Output: Dashboard path to user
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
@@ -298,12 +270,11 @@ echo "📊 Dashboard: file://$(cd ${sessionDir} && pwd)/.review/dashboard.html"
|
|||||||
├── iterations/ # Deep-dive results
|
├── iterations/ # Deep-dive results
|
||||||
│ ├── iteration-1-finding-{uuid}.json
|
│ ├── iteration-1-finding-{uuid}.json
|
||||||
│ └── iteration-2-finding-{uuid}.json
|
│ └── iteration-2-finding-{uuid}.json
|
||||||
├── reports/ # Human-readable reports
|
└── reports/ # Human-readable reports
|
||||||
│ ├── security-analysis.md
|
├── security-analysis.md
|
||||||
│ ├── security-cli-output.txt
|
├── security-cli-output.txt
|
||||||
│ ├── deep-dive-1-{uuid}.md
|
├── deep-dive-1-{uuid}.md
|
||||||
│ └── ...
|
└── ...
|
||||||
└── dashboard.html # Interactive dashboard (primary output)
|
|
||||||
```
|
```
|
||||||
|
|
||||||
**Session Context**:
|
**Session Context**:
|
||||||
@@ -420,6 +391,7 @@ echo "📊 Dashboard: file://$(cd ${sessionDir} && pwd)/.review/dashboard.html"
|
|||||||
```javascript
|
```javascript
|
||||||
Task(
|
Task(
|
||||||
subagent_type="cli-explore-agent",
|
subagent_type="cli-explore-agent",
|
||||||
|
run_in_background=false,
|
||||||
description=`Execute ${dimension} review analysis via Deep Scan`,
|
description=`Execute ${dimension} review analysis via Deep Scan`,
|
||||||
prompt=`
|
prompt=`
|
||||||
## Task Objective
|
## Task Objective
|
||||||
@@ -505,6 +477,7 @@ Task(
|
|||||||
```javascript
|
```javascript
|
||||||
Task(
|
Task(
|
||||||
subagent_type="cli-explore-agent",
|
subagent_type="cli-explore-agent",
|
||||||
|
run_in_background=false,
|
||||||
description=`Deep-dive analysis for critical finding: ${findingTitle} via Dependency Map + Deep Scan`,
|
description=`Deep-dive analysis for critical finding: ${findingTitle} via Dependency Map + Deep Scan`,
|
||||||
prompt=`
|
prompt=`
|
||||||
## Task Objective
|
## Task Objective
|
||||||
@@ -769,23 +742,25 @@ TodoWrite({
|
|||||||
3. **Use Glob Wisely**: `src/auth/**` is more efficient than `src/**` when the tree contains many irrelevant files
|
3. **Use Glob Wisely**: `src/auth/**` is more efficient than `src/**` when the tree contains many irrelevant files
|
||||||
4. **Trust Aggregation Logic**: Auto-selection based on proven heuristics
|
4. **Trust Aggregation Logic**: Auto-selection based on proven heuristics
|
||||||
5. **Monitor Logs**: Check reports/ directory for CLI analysis insights
|
5. **Monitor Logs**: Check reports/ directory for CLI analysis insights
|
||||||
6. **Dashboard Polling**: Refresh every 5 seconds for real-time updates
|
|
||||||
7. **Export Results**: Use dashboard export for external tracking tools
|
|
||||||
|
|
||||||
## Related Commands
|
## Related Commands
|
||||||
|
|
||||||
|
### View Review Progress
|
||||||
|
Use `ccw view` to open the review dashboard in browser:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
ccw view
|
||||||
|
```
|
||||||
|
|
||||||
### Automated Fix Workflow
|
### Automated Fix Workflow
|
||||||
After completing a module review, use the dashboard to select findings and export them for automated fixing:
|
After completing a module review, use the generated findings JSON for automated fixing:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
# Step 1: Complete review (this command)
|
# Step 1: Complete review (this command)
|
||||||
/workflow:review-module-cycle src/auth/**
|
/workflow:review-module-cycle src/auth/**
|
||||||
|
|
||||||
# Step 2: Open dashboard, select findings, and export
|
# Step 2: Run automated fixes using dimension findings
|
||||||
# Dashboard generates: fix-export-{timestamp}.json
|
/workflow:review-fix .workflow/active/WFS-{session-id}/.review/
|
||||||
|
|
||||||
# Step 3: Run automated fixes
|
|
||||||
/workflow:review-fix .workflow/active/WFS-{session-id}/.review/fix-export-{timestamp}.json
|
|
||||||
```
|
```
|
||||||
|
|
||||||
See `/workflow:review-fix` for automated fixing with smart grouping, parallel execution, and test verification.
|
See `/workflow:review-fix` for automated fixing with smart grouping, parallel execution, and test verification.
|
||||||
|
|||||||
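This diff reroutes `/workflow:review-fix` from a dashboard export file to the `.review/` directory itself. As a rough sketch of how a consumer of that directory could tally findings, assuming a made-up `findings`/`severity` JSON shape for illustration (the repo's actual schema is not shown here):

```bash
# Hypothetical sketch: tally findings across .review/dimensions/*.json.
# The JSON layout written below is assumed for illustration only.
set -eu
review_dir=$(mktemp -d)
mkdir -p "$review_dir/dimensions"
printf '{"findings":[{"severity":"critical"},{"severity":"low"}]}\n' \
  > "$review_dir/dimensions/security.json"
printf '{"findings":[{"severity":"medium"}]}\n' \
  > "$review_dir/dimensions/quality.json"

total=0
for f in "$review_dir"/dimensions/*.json; do
  # Each "severity" key marks one finding in this assumed schema
  n=$(grep -o '"severity"' "$f" | wc -l)
  total=$((total + n))
done
echo "total findings: $total"
```

Real runs would use `jq` against the actual dimension schema; the `grep` count is only a stand-in.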
@@ -45,13 +45,11 @@ Session-based multi-dimensional code review orchestrator with **hybrid parallel-
 1. **Comprehensive Coverage**: 7 specialized dimensions analyze all quality aspects simultaneously
 2. **Intelligent Prioritization**: Automatic identification of critical issues and cross-cutting concerns
 3. **Actionable Insights**: Deep-dive iterations provide step-by-step remediation plans
-4. **Real-time Visibility**: JSON-based progress tracking with interactive HTML dashboard
 
 ### Orchestrator Boundary (CRITICAL)
 - **ONLY command** for comprehensive multi-dimensional review
 - Manages: dimension coordination, aggregation, iteration control, progress tracking
 - Delegates: Code exploration and analysis to @cli-explore-agent, dimension-specific reviews via Deep Scan mode
-- **⚠️ DASHBOARD CONSTRAINT**: Dashboard is generated ONCE during Phase 1 initialization. After initialization, orchestrator and agents MUST NOT read, write, or modify dashboard.html - it remains static for user interaction only.
 
 ## How It Works
 
@@ -59,7 +57,7 @@ Session-based multi-dimensional code review orchestrator with **hybrid parallel-
 
 ```
 Phase 1: Discovery & Initialization
-└─ Validate session, initialize state, create output structure → Generate dashboard.html
+└─ Validate session, initialize state, create output structure
 
 Phase 2: Parallel Reviews (for each dimension)
 ├─ Launch 7 review agents simultaneously
@@ -83,7 +81,7 @@ Phase 4: Iterative Deep-Dive (optional)
 └─ Loop until no critical findings OR max iterations
 
 Phase 5: Completion
-└─ Finalize review-progress.json → Output dashboard path
+└─ Finalize review-progress.json
 ```
 
 ### Agent Roles
@@ -199,34 +197,9 @@ git log --since="${sessionCreatedAt}" --name-only --pretty=format: | sort -u
 
 **Step 5: Initialize Review State**
 - State initialization: Create `review-state.json` with metadata, dimensions, max_iterations (merged metadata + state)
-- Progress tracking: Create `review-progress.json` for dashboard polling
+- Progress tracking: Create `review-progress.json` for progress tracking
 
-**Step 6: Dashboard Generation**
+**Step 6: TodoWrite Initialization**
 
-**Constraints**:
-- **MANDATORY**: Dashboard MUST be generated from template: `~/.claude/templates/review-cycle-dashboard.html`
-- **PROHIBITED**: Direct creation or custom generation without template
-- **POST-GENERATION**: Orchestrator and agents MUST NOT read/write/modify dashboard.html after creation
 
-**Generation Commands** (3 independent steps):
-```bash
-# Step 1: Copy template to output location
-cp ~/.claude/templates/review-cycle-dashboard.html ${sessionDir}/.review/dashboard.html
 
-# Step 2: Replace SESSION_ID placeholder
-sed -i "s|{{SESSION_ID}}|${sessionId}|g" ${sessionDir}/.review/dashboard.html
 
-# Step 3: Replace REVIEW_TYPE placeholder
-sed -i "s|{{REVIEW_TYPE}}|session|g" ${sessionDir}/.review/dashboard.html
 
-# Step 4: Replace REVIEW_DIR placeholder
-sed -i "s|{{REVIEW_DIR}}|${reviewDir}|g" ${sessionDir}/.review/dashboard.html
 
-# Output: Dashboard path
-echo "📊 Dashboard: file://$(cd ${sessionDir} && pwd)/.review/dashboard.html"
-```
 
-**Step 7: TodoWrite Initialization**
 - Set up progress tracking with hierarchical structure
 - Mark Phase 1 completed, Phase 2 in_progress
@@ -257,7 +230,6 @@ echo "📊 Dashboard: file://$(cd ${sessionDir} && pwd)/.review/dashboard.html"
 - Finalize review-progress.json with completion statistics
 - Update review-state.json with completion_time and phase=complete
 - TodoWrite completion: Mark all tasks done
-- Output: Dashboard path to user
 
 
 
@@ -278,12 +250,11 @@ echo "📊 Dashboard: file://$(cd ${sessionDir} && pwd)/.review/dashboard.html"
 ├── iterations/ # Deep-dive results
 │ ├── iteration-1-finding-{uuid}.json
 │ └── iteration-2-finding-{uuid}.json
-├── reports/ # Human-readable reports
-│ ├── security-analysis.md
-│ ├── security-cli-output.txt
-│ ├── deep-dive-1-{uuid}.md
-│ └── ...
-└── dashboard.html # Interactive dashboard (primary output)
+└── reports/ # Human-readable reports
+├── security-analysis.md
+├── security-cli-output.txt
+├── deep-dive-1-{uuid}.md
+└── ...
 ```
 
 **Session Context**:
@@ -430,6 +401,7 @@ echo "📊 Dashboard: file://$(cd ${sessionDir} && pwd)/.review/dashboard.html"
 ```javascript
 Task(
 subagent_type="cli-explore-agent",
+run_in_background=false,
 description=`Execute ${dimension} review analysis via Deep Scan`,
 prompt=`
 ## Task Objective
@@ -516,6 +488,7 @@ Task(
 ```javascript
 Task(
 subagent_type="cli-explore-agent",
+run_in_background=false,
 description=`Deep-dive analysis for critical finding: ${findingTitle} via Dependency Map + Deep Scan`,
 prompt=`
 ## Task Objective
@@ -780,23 +753,25 @@ TodoWrite({
 2. **Parallel Execution**: ~60 minutes for full initial review (7 dimensions)
 3. **Trust Aggregation Logic**: Auto-selection based on proven heuristics
 4. **Monitor Logs**: Check reports/ directory for CLI analysis insights
-5. **Dashboard Polling**: Refresh every 5 seconds for real-time updates
-6. **Export Results**: Use dashboard export for external tracking tools
 
 ## Related Commands
 
+### View Review Progress
+Use `ccw view` to open the review dashboard in browser:
 
+```bash
+ccw view
+```
 
 ### Automated Fix Workflow
-After completing a review, use the dashboard to select findings and export them for automated fixing:
+After completing a review, use the generated findings JSON for automated fixing:
 
 ```bash
 # Step 1: Complete review (this command)
 /workflow:review-session-cycle
 
-# Step 2: Open dashboard, select findings, and export
-# Dashboard generates: fix-export-{timestamp}.json
+# Step 2: Run automated fixes using dimension findings
+/workflow:review-fix .workflow/active/WFS-{session-id}/.review/
 
-# Step 3: Run automated fixes
-/workflow:review-fix .workflow/active/WFS-{session-id}/.review/fix-export-{timestamp}.json
 ```
 
 See `/workflow:review-fix` for automated fixing with smart grouping, parallel execution, and test verification.
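Phase 5 in this diff now ends by finalizing `review-progress.json` rather than printing a dashboard path. Below is a minimal sketch of how a script might read a completion field from that file; the `phase` field name and value are assumptions taken from the surrounding text, not a documented schema:

```bash
# Hypothetical sketch: read the "phase" field from review-progress.json.
# Field name and value are assumed from the surrounding text.
set -eu
progress=$(mktemp)
printf '{"phase":"complete","dimensions_done":7}\n' > "$progress"
phase=$(sed -n 's/.*"phase"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' "$progress")
echo "phase: $phase"
```

A real consumer would parse the file with `jq` rather than `sed`; the point is only that completion state lives in the JSON file, not in the dashboard.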
@@ -29,6 +29,39 @@ argument-hint: "[--type=security|architecture|action-items|quality] [optional: s
 - For documentation generation, use `/workflow:tools:docs`
 - For CLAUDE.md updates, use `/update-memory-related`
 
+## Execution Process
 
+```
+Input Parsing:
+├─ Parse --type flag (default: quality)
+└─ Parse session-id argument (optional)
 
+Step 1: Session Resolution
+└─ Decision:
+   ├─ session-id provided → Use provided session
+   └─ Not provided → Auto-detect from .workflow/active/
 
+Step 2: Validation
+├─ Check session directory exists
+└─ Check for completed implementation (.summaries/IMPL-*.md exists)
 
+Step 3: Type Check
+└─ Decision:
+   ├─ type=docs → Redirect to /workflow:tools:docs
+   └─ Other types → Continue to analysis
 
+Step 4: Model Analysis Phase
+├─ Load context (summaries, test results, changed files)
+└─ Perform specialized review by type:
+   ├─ security → Security patterns + Gemini analysis
+   ├─ architecture → Qwen architecture analysis
+   ├─ quality → Gemini code quality analysis
+   └─ action-items → Requirements verification
 
+Step 5: Generate Report
+└─ Output: REVIEW-{type}.md
+```
 
 ## Execution Template
 
 ```bash
@@ -79,11 +112,15 @@ After bash validation, the model takes control to:
 
 1. **Load Context**: Read completed task summaries and changed files
 ```bash
-# Load implementation summaries
-cat .workflow/active/${sessionId}/.summaries/IMPL-*.md
+# Load implementation summaries (iterate through .summaries/ directory)
+for summary in .workflow/active/${sessionId}/.summaries/*.md; do
+  cat "$summary"
+done
 
 # Load test results (if available)
-cat .workflow/active/${sessionId}/.summaries/TEST-FIX-*.md 2>/dev/null
+for test_summary in .workflow/active/${sessionId}/.summaries/TEST-FIX-*.md; do
+  cat "$test_summary" 2>/dev/null
+done
 
 # Get changed files
 git log --since="$(cat .workflow/active/${sessionId}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u
@@ -99,51 +136,53 @@ After bash validation, the model takes control to:
 ```
 - Use Gemini for security analysis:
 ```bash
-cd .workflow/active/${sessionId} && gemini -p "
+ccw cli -p "
 PURPOSE: Security audit of completed implementation
 TASK: Review code for security vulnerabilities, insecure patterns, auth/authz issues
 CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
 EXPECTED: Security findings report with severity levels
 RULES: Focus on OWASP Top 10, authentication, authorization, data validation, injection risks
-" --approval-mode yolo
+" --tool gemini --mode write --cd .workflow/active/${sessionId}
 ```
 
 **Architecture Review** (`--type=architecture`):
 - Use Qwen for architecture analysis:
 ```bash
-cd .workflow/active/${sessionId} && qwen -p "
+ccw cli -p "
 PURPOSE: Architecture compliance review
 TASK: Evaluate adherence to architectural patterns, identify technical debt, review design decisions
 CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
 EXPECTED: Architecture assessment with recommendations
 RULES: Check for patterns, separation of concerns, modularity, scalability
-" --approval-mode yolo
+" --tool qwen --mode write --cd .workflow/active/${sessionId}
 ```
 
 **Quality Review** (`--type=quality`):
 - Use Gemini for code quality:
 ```bash
-cd .workflow/active/${sessionId} && gemini -p "
+ccw cli -p "
 PURPOSE: Code quality and best practices review
 TASK: Assess code readability, maintainability, adherence to best practices
 CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
 EXPECTED: Quality assessment with improvement suggestions
 RULES: Check for code smells, duplication, complexity, naming conventions
-" --approval-mode yolo
+" --tool gemini --mode write --cd .workflow/active/${sessionId}
 ```
 
 **Action Items Review** (`--type=action-items`):
 - Verify all requirements and acceptance criteria met:
 ```bash
 # Load task requirements and acceptance criteria
-find .workflow/active/${sessionId}/.task -name "IMPL-*.json" -exec jq -r '
+for task_file in .workflow/active/${sessionId}/.task/*.json; do
+  cat "$task_file" | jq -r '
 "Task: " + .id + "\n" +
 "Requirements: " + (.context.requirements | join(", ")) + "\n" +
 "Acceptance: " + (.context.acceptance | join(", "))
-' {} \;
+  '
+done
 
 # Check implementation summaries against requirements
-cd .workflow/active/${sessionId} && gemini -p "
+ccw cli -p "
 PURPOSE: Verify all requirements and acceptance criteria are met
 TASK: Cross-check implementation summaries against original requirements
 CONTEXT: @.task/IMPL-*.json,.summaries/IMPL-*.md,../.. @../../CLAUDE.md
@@ -157,7 +196,7 @@ After bash validation, the model takes control to:
 - Verify all acceptance criteria are met
 - Flag any incomplete or missing action items
 - Assess deployment readiness
-" --approval-mode yolo
+" --tool gemini --mode write --cd .workflow/active/${sessionId}
 ```
 
 
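The hunks above replace direct `gemini`/`qwen` calls with the `ccw cli` wrapper while keeping the PURPOSE/TASK/CONTEXT/EXPECTED/RULES prompt layout. A small sketch of assembling that prompt shape follows; `build_prompt` is a hypothetical helper for illustration, not part of `ccw`:

```bash
# Hypothetical helper: build the structured prompt used by the review types.
# Not part of ccw; shown only to make the prompt shape concrete.
build_prompt() {
  printf 'PURPOSE: %s\nTASK: %s\nEXPECTED: %s\nRULES: %s\n' "$1" "$2" "$3" "$4"
}

prompt=$(build_prompt \
  "Security audit of completed implementation" \
  "Review code for security vulnerabilities" \
  "Security findings report with severity levels" \
  "Focus on OWASP Top 10")
echo "$prompt"
```

Keeping the prompt assembly separate from the tool flags makes it easy to swap `--tool gemini` for `--tool qwen` without touching the prompt text.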
@@ -8,493 +8,146 @@ examples:
|
|||||||
|
|
||||||
# Complete Workflow Session (/workflow:session:complete)
|
# Complete Workflow Session (/workflow:session:complete)
|
||||||
|
|
||||||
## Overview
|
Mark the currently active workflow session as complete, archive it, and update manifests.
|
||||||
Mark the currently active workflow session as complete, analyze it for lessons learned, move it to the archive directory, and remove the active flag marker.
|
|
||||||
|
## Pre-defined Commands
|
||||||
|
|
||||||
## Usage
|
|
||||||
```bash
|
```bash
|
||||||
/workflow:session:complete # Complete current active session
|
# Phase 1: Find active session
|
||||||
/workflow:session:complete --detailed # Show detailed completion summary
|
SESSION_PATH=$(find .workflow/active/ -maxdepth 1 -name "WFS-*" -type d | head -1)
|
||||||
|
SESSION_ID=$(basename "$SESSION_PATH")
|
||||||
|
|
||||||
|
# Phase 3: Move to archive
|
||||||
|
mkdir -p .workflow/archives/
|
||||||
|
mv .workflow/active/$SESSION_ID .workflow/archives/$SESSION_ID
|
||||||
|
|
||||||
|
# Cleanup marker
|
||||||
|
rm -f .workflow/archives/$SESSION_ID/.archiving
|
||||||
```
|
```
|
||||||
|
|
||||||
## Implementation Flow
|
## Key Files to Read
|
||||||
|
|
||||||
### Phase 1: Pre-Archival Preparation (Transactional Setup)
|
**For manifest.json generation**, read ONLY these files:
|
||||||
|
|
||||||
**Purpose**: Find active session, create archiving marker to prevent concurrent operations. Session remains in active location for agent processing.
|
| File | Extract |
|
||||||
|
|------|---------|
|
||||||
|
| `$SESSION_PATH/workflow-session.json` | session_id, description, started_at, status |
|
||||||
|
| `$SESSION_PATH/IMPL_PLAN.md` | title (first # heading), description (first paragraph) |
|
||||||
|
| `$SESSION_PATH/.tasks/*.json` | count files |
|
||||||
|
| `$SESSION_PATH/.summaries/*.md` | count files |
|
||||||
|
| `$SESSION_PATH/.review/dimensions/*.json` | count + findings summary (optional) |
|
||||||
|
|
||||||
|
## Execution Flow
|
||||||
|
|
||||||
|
### Phase 1: Find Session (2 commands)
|
||||||
|
|
||||||
#### Step 1.1: Find Active Session and Get Name
|
|
||||||
```bash
|
```bash
|
||||||
# Find active session directory
|
# 1. Find and extract session
|
||||||
bash(find .workflow/active/ -name "WFS-*" -type d | head -1)
|
SESSION_PATH=$(find .workflow/active/ -maxdepth 1 -name "WFS-*" -type d | head -1)
|
||||||
|
SESSION_ID=$(basename "$SESSION_PATH")
|
||||||
|
|
||||||
# Extract session name from directory path
|
# 2. Check/create archiving marker
|
||||||
bash(basename .workflow/active/WFS-session-name)
|
test -f "$SESSION_PATH/.archiving" && echo "RESUMING" || touch "$SESSION_PATH/.archiving"
|
||||||
```
|
|
||||||
**Output**: Session name `WFS-session-name`
|
|
||||||
|
|
||||||
#### Step 1.2: Check for Existing Archiving Marker (Resume Detection)
|
|
||||||
```bash
|
|
||||||
# Check if session is already being archived
|
|
||||||
bash(test -f .workflow/active/WFS-session-name/.archiving && echo "RESUMING" || echo "NEW")
|
|
||||||
```
|
```
|
||||||
|
|
||||||
**If RESUMING**:
|
**Output**: `SESSION_ID` = e.g., `WFS-auth-feature`
|
||||||
- Previous archival attempt was interrupted
|
|
||||||
- Skip to Phase 2 to resume agent analysis
|
|
||||||
|
|
||||||
**If NEW**:
|
### Phase 2: Generate Manifest Entry (Read-only)
|
||||||
- Continue to Step 1.3
|
|
||||||
|
|
||||||
#### Step 1.3: Create Archiving Marker
|
Read the key files above, then build this structure:
|
||||||
```bash
|
|
||||||
# Mark session as "archiving in progress"
|
|
||||||
bash(touch .workflow/active/WFS-session-name/.archiving)
|
|
||||||
```
|
|
||||||
**Purpose**:
|
|
||||||
- Prevents concurrent operations on this session
|
|
||||||
- Enables recovery if archival fails
|
|
||||||
- Session remains in `.workflow/active/` for agent analysis
|
|
||||||
|
|
||||||
**Result**: Session still at `.workflow/active/WFS-session-name/` with `.archiving` marker
|
```json
|
||||||
|
{
|
||||||
### Phase 2: Agent Analysis (In-Place Processing)
|
"session_id": "<from workflow-session.json>",
|
||||||
|
"description": "<from workflow-session.json>",
|
||||||
**Purpose**: Agent analyzes session WHILE STILL IN ACTIVE LOCATION. Generates metadata but does NOT move files or update manifest.
|
"archived_at": "<current ISO timestamp>",
|
||||||
|
"archive_path": ".workflow/archives/<SESSION_ID>",
|
||||||
#### Agent Invocation
|
|
||||||
|
|
||||||
Invoke `universal-executor` agent to analyze session and prepare archive metadata.
|
|
||||||
|
|
||||||
**Agent Task**:
|
|
||||||
```
|
|
||||||
Task(
|
|
||||||
subagent_type="universal-executor",
|
|
||||||
description="Analyze session for archival",
|
|
||||||
prompt=`
|
|
||||||
Analyze workflow session for archival preparation. Session is STILL in active location.
|
|
||||||
|
|
||||||
## Context
|
|
||||||
- Session: .workflow/active/WFS-session-name/
|
|
||||||
- Status: Marked as archiving (.archiving marker present)
|
|
||||||
- Location: Active sessions directory (NOT archived yet)
|
|
||||||
|
|
||||||
## Tasks
|
|
||||||
|
|
||||||
1. **Extract session data** from workflow-session.json
|
|
||||||
- session_id, description/topic, started_at, completed_at, status
|
|
||||||
- If status != "completed", update it with timestamp
|
|
||||||
|
|
||||||
2. **Count files**: tasks (.task/*.json) and summaries (.summaries/*.md)
|
|
||||||
|
|
||||||
3. **Extract review data** (if .review/ exists):
|
|
||||||
- Count dimension results: .review/dimensions/*.json
|
|
||||||
- Count deep-dive results: .review/iterations/*.json
|
|
||||||
- Extract findings summary from dimension JSONs (total, critical, high, medium, low)
|
|
||||||
- Check fix results if .review/fixes/ exists (fixed_count, failed_count)
|
|
||||||
- Build review_metrics: {dimensions_analyzed, total_findings, severity_distribution, fix_success_rate}
|
|
||||||
|
|
||||||
4. **Generate lessons**: Use gemini with ~/.claude/workflows/cli-templates/prompts/archive/analysis-simple.txt
|
|
||||||
- Return: {successes, challenges, watch_patterns}
|
|
||||||
- If review data exists, include review-specific lessons (common issue patterns, effective fixes)
|
|
||||||
|
|
||||||
5. **Build archive entry**:
|
|
||||||
- Calculate: duration_hours, success_rate, tags (3-5 keywords)
|
|
||||||
- Construct complete JSON with session_id, description, archived_at, metrics, tags, lessons
|
|
||||||
- Include archive_path: ".workflow/archives/WFS-session-name" (future location)
|
|
||||||
- If review data exists, include review_metrics in metrics object
|
|
||||||
|
|
||||||
6. **Extract feature metadata** (for Phase 4):
|
|
||||||
- Parse IMPL_PLAN.md for title (first # heading)
|
|
||||||
- Extract description (first paragraph, max 200 chars)
|
|
||||||
- Generate feature tags (3-5 keywords from content)
|
|
||||||
|
|
||||||
7. **Return result**: Complete metadata package for atomic commit
|
|
||||||
{
|
|
||||||
"status": "success",
|
|
||||||
"session_id": "WFS-session-name",
|
|
||||||
"archive_entry": {
|
|
||||||
"session_id": "...",
|
|
||||||
"description": "...",
|
|
||||||
"archived_at": "...",
|
|
||||||
"archive_path": ".workflow/archives/WFS-session-name",
|
|
||||||
"metrics": {
|
"metrics": {
|
||||||
"duration_hours": 2.5,
|
"duration_hours": "<(completed_at - started_at) / 3600000>",
|
||||||
"tasks_completed": 5,
|
"tasks_completed": "<count .tasks/*.json>",
|
||||||
"summaries_generated": 3,
|
"summaries_generated": "<count .summaries/*.md>",
|
||||||
"review_metrics": { // Optional, only if .review/ exists
|
"review_metrics": {
|
||||||
"dimensions_analyzed": 4,
|
"dimensions_analyzed": "<count .review/dimensions/*.json>",
|
||||||
"total_findings": 15,
|
"total_findings": "<sum from dimension JSONs>"
|
||||||
"severity_distribution": {"critical": 1, "high": 3, "medium": 8, "low": 3},
|
|
||||||
"fix_success_rate": 0.87 // Optional, only if .review/fixes/ exists
|
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
"tags": [...],
|
"tags": ["<3-5 keywords from IMPL_PLAN.md>"],
|
||||||
"lessons": {...}
|
"lessons": {
|
||||||
},
|
"successes": ["<key wins>"],
|
||||||
"feature_metadata": {
|
"challenges": ["<difficulties>"],
|
||||||
"title": "...",
|
"watch_patterns": ["<patterns to monitor>"]
|
||||||
"description": "...",
|
|
||||||
"tags": [...]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
## Important Constraints
|
|
||||||
- DO NOT move or delete any files
|
|
||||||
- DO NOT update manifest.json yet
|
|
||||||
- Session remains in .workflow/active/ during analysis
|
|
||||||
- Return complete metadata package for orchestrator to commit atomically
|
|
||||||
|
|
||||||
## Error Handling
|
|
||||||
- On failure: return {"status": "error", "task": "...", "message": "..."}
|
|
||||||
- Do NOT modify any files on error
|
|
||||||
`
|
|
||||||
)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Expected Output**:
|
|
||||||
- Agent returns complete metadata package
|
|
||||||
- Session remains in `.workflow/active/` with `.archiving` marker
|
|
||||||
- No files moved or manifests updated yet
|
|
||||||
|
|
||||||
### Phase 3: Atomic Commit (Transactional File Operations)
|
|
||||||
|
|
||||||
**Purpose**: Atomically commit all changes. Only execute if Phase 2 succeeds.
|
|
||||||
|
|
||||||
#### Step 3.1: Create Archive Directory
|
|
||||||
```bash
|
|
||||||
bash(mkdir -p .workflow/archives/)
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Step 3.2: Move Session to Archive
|
|
||||||
```bash
|
|
||||||
bash(mv .workflow/active/WFS-session-name .workflow/archives/WFS-session-name)
|
|
||||||
```
|
|
||||||
**Result**: Session now at `.workflow/archives/WFS-session-name/`
|
|
||||||
|
|
||||||
#### Step 3.3: Update Manifest
|
|
||||||
```bash
|
|
||||||
# Read current manifest (or create empty array if not exists)
|
|
||||||
bash(test -f .workflow/archives/manifest.json && cat .workflow/archives/manifest.json || echo "[]")
|
|
||||||
```
|
|
||||||
|
|
||||||
**JSON Update Logic**:
|
|
||||||
```javascript
|
|
||||||
// Read agent result from Phase 2
|
|
||||||
const agentResult = JSON.parse(agentOutput);
|
|
||||||
const archiveEntry = agentResult.archive_entry;
|
|
||||||
|
|
||||||
// Read existing manifest
|
|
||||||
let manifest = [];
|
|
||||||
try {
|
|
||||||
const manifestContent = Read('.workflow/archives/manifest.json');
|
|
||||||
manifest = JSON.parse(manifestContent);
|
|
||||||
} catch {
|
|
||||||
manifest = []; // Initialize if not exists
|
|
||||||
}
|
|
||||||
|
|
||||||
// Append new entry
|
|
||||||
manifest.push(archiveEntry);
|
|
||||||
|
|
||||||
// Write back
|
|
||||||
Write('.workflow/archives/manifest.json', JSON.stringify(manifest, null, 2));
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Step 3.4: Remove Archiving Marker
|
|
||||||
```bash
|
|
||||||
bash(rm .workflow/archives/WFS-session-name/.archiving)
|
|
||||||
```
|
|
||||||
**Result**: Clean archived session without temporary markers
|
|
||||||
|
|
||||||
**Output Confirmation**:
|
|
||||||
```
|
|
||||||
✓ Session "${sessionId}" archived successfully
|
|
||||||
Location: .workflow/archives/WFS-session-name/
|
|
||||||
Lessons: ${archiveEntry.lessons.successes.length} successes, ${archiveEntry.lessons.challenges.length} challenges
|
|
||||||
Manifest: Updated with ${manifest.length} total sessions
|
|
||||||
${reviewMetrics ? `Review: ${reviewMetrics.total_findings} findings across ${reviewMetrics.dimensions_analyzed} dimensions, ${Math.round(reviewMetrics.fix_success_rate * 100)}% fixed` : ''}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Phase 4: Update Project Feature Registry
|
|
||||||
|
|
||||||
**Purpose**: Record completed session as a project feature in `.workflow/project.json`.
|
|
||||||
|
|
||||||
**Execution**: Uses feature metadata from Phase 2 agent result to update project registry.
|
|
||||||
|
|
||||||
#### Step 4.1: Check Project State Exists
|
|
||||||
```bash
|
|
||||||
bash(test -f .workflow/project.json && echo "EXISTS" || echo "SKIP")
|
|
||||||
```
|
|
||||||
|
|
||||||
**If SKIP**: Output warning and skip Phase 4
|
|
||||||
```
|
|
||||||
WARNING: No project.json found. Run /workflow:session:start to initialize.
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Step 4.2: Extract Feature Information from Agent Result

**Data Processing** (Uses Phase 2 agent output):
```javascript
// Extract feature metadata from agent result
const agentResult = JSON.parse(agentOutput);
const featureMeta = agentResult.feature_metadata;

// Data already prepared by agent:
const title = featureMeta.title;
const description = featureMeta.description;
const tags = featureMeta.tags;

// Create feature ID (lowercase slug)
const featureId = title.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 50);
```
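One edge case of the slug rule above: titles ending in punctuation produce a trailing hyphen. A small sketch with an extra trim step (the trim is an addition for illustration, not part of the command spec):

```javascript
// Slugify a feature title: lowercase, collapse non-alphanumerics to '-',
// cap at 50 chars, then trim any hyphens left at either end.
function toFeatureId(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .substring(0, 50)
    .replace(/^-+|-+$/g, ''); // "Implement OAuth2 Auth!" would otherwise end in '-'
}

console.log(toFeatureId('Implement OAuth2 Auth!')); // → "implement-oauth2-auth"
```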

#### Step 4.3: Update project.json

```bash
# Read current project state
bash(cat .workflow/project.json)
```

**JSON Update Logic**:
```javascript
// Read existing project.json (created by /workflow:init)
// Note: overview field is managed by /workflow:init, not modified here
const projectMeta = JSON.parse(Read('.workflow/project.json'));
const currentTimestamp = new Date().toISOString();
const currentDate = currentTimestamp.split('T')[0]; // YYYY-MM-DD

// Extract tags from IMPL_PLAN.md (simple keyword extraction)
const tags = extractTags(planContent); // e.g., ["auth", "security"]

// Build feature object with complete metadata
const newFeature = {
  id: featureId,
  title: title,
  description: description,
  status: "completed",
  tags: tags,
  timeline: {
    created_at: currentTimestamp,
    implemented_at: currentDate,
    updated_at: currentTimestamp
  },
  traceability: {
    session_id: sessionId,
    archive_path: archivePath, // e.g., ".workflow/archives/WFS-auth-system"
    commit_hash: getLatestCommitHash() || "" // Optional: git rev-parse HEAD
  },
  docs: [], // Placeholder for future doc links
  relations: [] // Placeholder for feature dependencies
};

// Add new feature to array
projectMeta.features.push(newFeature);

// Update statistics
projectMeta.statistics.total_features = projectMeta.features.length;
projectMeta.statistics.total_sessions += 1;
projectMeta.statistics.last_updated = currentTimestamp;

// Write back
Write('.workflow/project.json', JSON.stringify(projectMeta, null, 2));
```
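Because the command is documented as safe to retry, a retry that reaches this step twice would `push` the same feature twice. A hedged sketch of a duplicate-safe alternative (an assumption, not part of the command spec; it reuses the `id` field shown above):

```javascript
// Replace-or-append by id, so re-running Phase 4 after a partial
// failure does not duplicate the feature entry.
function upsertFeature(features, feature) {
  const idx = features.findIndex(f => f.id === feature.id);
  if (idx >= 0) {
    features[idx] = feature; // retry: overwrite the earlier entry
  } else {
    features.push(feature); // first run: append
  }
  return features;
}
```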

**Helper Functions**:
```javascript
// Extract tags from IMPL_PLAN.md content
function extractTags(planContent) {
  const tags = [];

  // Look for common keywords
  const keywords = {
    'auth': /authentication|login|oauth|jwt/i,
    'security': /security|encrypt|hash|token/i,
    'api': /api|endpoint|rest|graphql/i,
    'ui': /component|page|interface|frontend/i,
    'database': /database|schema|migration|sql/i,
    'test': /test|testing|spec|coverage/i
  };

  for (const [tag, pattern] of Object.entries(keywords)) {
    if (pattern.test(planContent)) {
      tags.push(tag);
    }
  }

  return tags.slice(0, 5); // Max 5 tags
}

// Get latest git commit hash (optional)
function getLatestCommitHash() {
  try {
    const result = Bash({
      command: "git rev-parse --short HEAD 2>/dev/null",
      description: "Get latest commit hash"
    });
    return result.trim();
  } catch {
    return "";
  }
}
```
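A quick standalone check of the keyword matcher (same function body as the helper above, run against a sample plan; the sample text is illustrative):

```javascript
// Same keyword table as the extractTags helper, exercised on a sample plan.
function extractTags(planContent) {
  const tags = [];
  const keywords = {
    'auth': /authentication|login|oauth|jwt/i,
    'security': /security|encrypt|hash|token/i,
    'api': /api|endpoint|rest|graphql/i,
    'ui': /component|page|interface|frontend/i,
    'database': /database|schema|migration|sql/i,
    'test': /test|testing|spec|coverage/i
  };
  for (const [tag, pattern] of Object.entries(keywords)) {
    if (pattern.test(planContent)) tags.push(tag);
  }
  return tags.slice(0, 5);
}

const sample = 'Add a JWT login endpoint plus a SQL migration and unit tests';
// jwt/login → auth, endpoint → api, sql/migration → database, tests → test
console.log(extractTags(sample)); // → ["auth", "api", "database", "test"]
```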

#### Step 4.4: Output Confirmation

```
✓ Feature "${title}" added to project registry
ID: ${featureId}
Session: ${sessionId}
Location: .workflow/project.json
```

**Error Handling**:
- If project.json malformed: Output error, skip update
- If feature_metadata missing from agent result: Skip Phase 4
- If extraction fails: Use minimal defaults

**Phase 4 Total Commands**: 1 bash read + JSON manipulation

## Error Recovery

### If Agent Fails (Phase 2)

**Symptoms**:
- Agent returns `{"status": "error", ...}`
- Agent crashes or times out
- Analysis incomplete

**Recovery Steps**:
```bash
# Session still in .workflow/active/WFS-session-name
# Remove archiving marker
bash(rm .workflow/active/WFS-session-name/.archiving)
```

**User Notification**:
```
ERROR: Session archival failed during analysis phase
Reason: [error message from agent]
Session remains active in: .workflow/active/WFS-session-name

Recovery:
1. Fix any issues identified in error message
2. Retry: /workflow:session:complete

Session state: SAFE (no changes committed)
```

### If Move Fails (Phase 3)

**Symptoms**:
- `mv` command fails
- Permission denied
- Disk full

**Recovery Steps**:
```bash
# Archiving marker still present
# Session still in .workflow/active/ (move failed)
# No manifest updated yet
```

**User Notification**:
```
ERROR: Session archival failed during move operation
Reason: [mv error message]
Session remains in: .workflow/active/WFS-session-name

Recovery:
1. Fix filesystem issues (permissions, disk space)
2. Retry: /workflow:session:complete
   - System will detect .archiving marker
   - Will resume from Phase 2 (agent analysis)

Session state: SAFE (analysis complete, ready to retry move)
```

### If Manifest Update Fails (Phase 3)

**Symptoms**:
- JSON parsing error
- Write permission denied
- Session moved but manifest not updated

**Recovery Steps**:
```bash
# Session moved to .workflow/archives/WFS-session-name
# Manifest NOT updated
# Archiving marker still present in archived location
```

**User Notification**:
```
ERROR: Session archived but manifest update failed
Reason: [error message]
Session location: .workflow/archives/WFS-session-name

Recovery:
1. Fix manifest.json issues (syntax, permissions)
2. Manual manifest update:
   - Add archive entry from agent output
   - Remove .archiving marker: rm .workflow/archives/WFS-session-name/.archiving

Session state: PARTIALLY COMPLETE (session archived, manifest needs update)
```

## Workflow Execution Strategy

### Transactional Four-Phase Approach

**Phase 1: Pre-Archival Preparation** (Marker creation)
- Find active session and extract name
- Check for existing `.archiving` marker (resume detection)
- Create `.archiving` marker if new
- **No data processing** - just state tracking
- **Total**: 2-3 bash commands (find + marker check/create)

**Phase 2: Agent Analysis** (Read-only data processing)
- Extract all session data from active location
- Count tasks and summaries
- Extract review data if .review/ exists (dimension results, findings, fix results)
- Generate lessons learned analysis (including review-specific lessons if applicable)
- Extract feature metadata from IMPL_PLAN.md
- Build complete archive + feature metadata package (with review_metrics if applicable)
- **No file modifications** - pure analysis
- **Total**: 1 agent invocation

**Phase 3: Atomic Commit** (Transactional file operations)
- Create archive directory
- Move session to archive location
- Update manifest.json with archive entry
- Remove `.archiving` marker
- **All-or-nothing**: Either all succeed or session remains in safe state
- **Total**: 4 bash commands + JSON manipulation

**Phase 4: Project Registry Update** (Optional feature tracking)
- Check project.json exists
- Use feature metadata from Phase 2 agent result
- Build feature object with complete traceability
- Update project statistics
- **Independent**: Can fail without affecting archival
- **Total**: 1 bash read + JSON manipulation

### Transactional Guarantees

**State Consistency**:
- Session NEVER in inconsistent state
- `.archiving` marker enables safe resume
- Agent failure leaves session in recoverable state
- Move/manifest operations grouped in Phase 3

**Failure Isolation**:
- Phase 1 failure: No changes made
- Phase 2 failure: Session still active, can retry
- Phase 3 failure: Clear error state, manual recovery documented
- Phase 4 failure: Does not affect archival success

**Resume Capability**:
- Detect interrupted archival via `.archiving` marker
- Resume from Phase 2 (skip marker creation)
- Idempotent operations (safe to retry)
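The resume rules above (marker detection, phase ordering) can be sketched as a small decision function; the state flags and return labels here are illustrative, not part of the command spec:

```javascript
// Sketch: decide where to (re)start archival from observed filesystem state.
// state: { inActive, inArchive, hasMarker, inManifest } (booleans)
function nextPhase(state) {
  if (state.inArchive && !state.inManifest) return 'phase3-manifest'; // move done, manifest pending
  if (state.inActive && state.hasMarker) return 'phase2';             // interrupted: resume analysis
  if (state.inActive) return 'phase1';                                // fresh run: create marker
  return 'done';                                                      // fully archived
}
```

Keeping the decision a pure function of observable state is what makes the retry idempotent: re-running after any failure lands on the same next step.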
@@ -1,11 +1,13 @@
 ---
 name: start
 description: Discover existing sessions or start new workflow session with intelligent session management and conflict detection
-argument-hint: [--auto|--new] [optional: task description for new session]
+argument-hint: [--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]
 examples:
 - /workflow:session:start
 - /workflow:session:start --auto "implement OAuth2 authentication"
-- /workflow:session:start --new "fix login bug"
+- /workflow:session:start --type review "Code review for auth module"
+- /workflow:session:start --type tdd --auto "implement user authentication"
+- /workflow:session:start --type test --new "test payment flow"
 ---

 # Start Workflow Session (/workflow:session:start)

@@ -17,6 +19,23 @@ Manages workflow sessions with three operation modes: discovery (manual), auto (
 1. **Project-level initialization** (first-time only): Creates `.workflow/project.json` for feature registry
 2. **Session-level initialization** (always): Creates session directory structure

+## Session Types
+
+The `--type` parameter classifies sessions for CCW dashboard organization:
+
+| Type | Description | Default For |
+|------|-------------|-------------|
+| `workflow` | Standard implementation (default) | `/workflow:plan` |
+| `review` | Code review sessions | `/workflow:review-module-cycle` |
+| `tdd` | TDD-based development | `/workflow:tdd-plan` |
+| `test` | Test generation/fix sessions | `/workflow:test-fix-gen` |
+| `docs` | Documentation sessions | `/memory:docs` |
+
+**Validation**: If `--type` is provided with invalid value, return error:
+```
+ERROR: Invalid session type. Valid types: workflow, review, tdd, test, docs
+```
+
 ## Step 0: Initialize Project State (First-time Only)

 **Executed before all modes** - Ensures project-level state file exists by calling `/workflow:init`.

@@ -86,8 +105,8 @@ bash(mkdir -p .workflow/active/WFS-implement-oauth2-auth/.process)
 bash(mkdir -p .workflow/active/WFS-implement-oauth2-auth/.task)
 bash(mkdir -p .workflow/active/WFS-implement-oauth2-auth/.summaries)

-# Create metadata
+# Create metadata (include type field, default to "workflow" if not specified)
-bash(echo '{"session_id":"WFS-implement-oauth2-auth","project":"implement OAuth2 auth","status":"planning"}' > .workflow/active/WFS-implement-oauth2-auth/workflow-session.json)
+bash(echo '{"session_id":"WFS-implement-oauth2-auth","project":"implement OAuth2 auth","status":"planning","type":"workflow","created_at":"2024-12-04T08:00:00Z"}' > .workflow/active/WFS-implement-oauth2-auth/workflow-session.json)
 ```

 **Output**: `SESSION_ID: WFS-implement-oauth2-auth`

@@ -143,11 +162,16 @@ bash(mkdir -p .workflow/active/WFS-fix-login-bug/.summaries)

 ### Step 3: Create Metadata
 ```bash
-bash(echo '{"session_id":"WFS-fix-login-bug","project":"fix login bug","status":"planning"}' > .workflow/active/WFS-fix-login-bug/workflow-session.json)
+# Include type field from --type parameter (default: "workflow")
+bash(echo '{"session_id":"WFS-fix-login-bug","project":"fix login bug","status":"planning","type":"workflow","created_at":"2024-12-04T08:00:00Z"}' > .workflow/active/WFS-fix-login-bug/workflow-session.json)
 ```

 **Output**: `SESSION_ID: WFS-fix-login-bug`

+## Execution Guideline
+
+- **Non-interrupting**: When called from other commands, this command completes and returns control to the caller without interrupting subsequent tasks.
+
 ## Output Format Specification

 ### Success
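The `--type` validation added in the hunk above could be implemented along these lines; a sketch only, where the function name and argument handling are assumptions:

```javascript
const VALID_TYPES = ['workflow', 'review', 'tdd', 'test', 'docs'];

// Returns the session type to store in workflow-session.json,
// or throws the documented error for an unknown value.
function resolveSessionType(typeArg) {
  if (typeArg === undefined) return 'workflow'; // default when --type is omitted
  if (!VALID_TYPES.includes(typeArg)) {
    throw new Error('ERROR: Invalid session type. Valid types: ' + VALID_TYPES.join(', '));
  }
  return typeArg;
}
```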
@@ -1,328 +0,0 @@
---
name: workflow:status
description: Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view
argument-hint: "[optional: --project|task-id|--validate|--dashboard]"
---

# Workflow Status Command (/workflow:status)

## Overview
Generates on-demand views from project and session data. Supports multiple modes:
1. **Project Overview** (`--project`): Shows completed features and project statistics
2. **Workflow Tasks** (default): Shows current session task progress
3. **HTML Dashboard** (`--dashboard`): Generates interactive HTML task board with active and archived sessions

No synchronization needed - all views are calculated from current JSON state.

## Usage
```bash
/workflow:status              # Show current workflow session overview
/workflow:status --project    # Show project-level feature registry
/workflow:status impl-1       # Show specific task details
/workflow:status --validate   # Validate workflow integrity
/workflow:status --dashboard  # Generate HTML dashboard board
```

## Implementation Flow

### Mode Selection

**Check for --project flag**:
- If `--project` flag present → Execute **Project Overview Mode**
- Otherwise → Execute **Workflow Session Mode** (default)

## Project Overview Mode

### Step 1: Check Project State
```bash
bash(test -f .workflow/project.json && echo "EXISTS" || echo "NOT_FOUND")
```

**If NOT_FOUND**:
```
No project state found.
Run /workflow:session:start to initialize project.
```

### Step 2: Read Project Data
```bash
bash(cat .workflow/project.json)
```

### Step 3: Parse and Display

**Data Processing**:
```javascript
const projectData = JSON.parse(Read('.workflow/project.json'));
const features = projectData.features || [];
const stats = projectData.statistics || {};
const overview = projectData.overview || null;

// Sort features by implementation date (newest first)
// Note: features written by /workflow:session:complete store the date
// under timeline, so fall back to the legacy top-level field
const sortedFeatures = features.sort((a, b) =>
  new Date(b.timeline?.implemented_at || b.implemented_at) -
  new Date(a.timeline?.implemented_at || a.implemented_at)
);
```

**Output Format** (with extended overview):
```
## Project: ${projectData.project_name}
Initialized: ${projectData.initialized_at}

${overview ? `
### Overview
${overview.description}

**Technology Stack**:
${overview.technology_stack.languages.map(l => `- ${l.name}${l.primary ? ' (primary)' : ''}: ${l.file_count} files`).join('\n')}
Frameworks: ${overview.technology_stack.frameworks.join(', ')}

**Architecture**:
Style: ${overview.architecture.style}
Patterns: ${overview.architecture.patterns.join(', ')}

**Key Components** (${overview.key_components.length}):
${overview.key_components.map(c => `- ${c.name} (${c.path})\n  ${c.description}`).join('\n')}

**Metrics**:
- Files: ${overview.metrics.total_files}
- Lines of Code: ${overview.metrics.lines_of_code}
- Complexity: ${overview.metrics.complexity}

---
` : ''}

### Completed Features (${stats.total_features})

${sortedFeatures.map(f => `
- ${f.title} (${f.timeline?.implemented_at || f.implemented_at})
  ${f.description}
  Tags: ${f.tags?.join(', ') || 'none'}
  Session: ${f.traceability?.session_id || f.session_id}
  Archive: ${f.traceability?.archive_path || 'unknown'}
  ${f.traceability?.commit_hash ? `Commit: ${f.traceability.commit_hash}` : ''}
`).join('\n')}

### Project Statistics
- Total Features: ${stats.total_features}
- Total Sessions: ${stats.total_sessions}
- Last Updated: ${stats.last_updated}

### Quick Access
- View session details: /workflow:status
- Archive query: jq '.archives[] | select(.session_id == "SESSION_ID")' .workflow/archives/manifest.json
- Documentation: .workflow/docs/${projectData.project_name}/

### Query Commands
# Find by tag
cat .workflow/project.json | jq '.features[] | select(.tags[] == "auth")'

# View archive
cat ${feature.traceability.archive_path}/IMPL_PLAN.md

# List all tags
cat .workflow/project.json | jq -r '.features[].tags[]' | sort -u
```

**Empty State**:
```
## Project: ${projectData.project_name}
Initialized: ${projectData.initialized_at}

No features completed yet.

Complete your first workflow session to add features:
1. /workflow:plan "feature description"
2. /workflow:execute
3. /workflow:session:complete
```

### Step 4: Show Recent Sessions (Optional)

```bash
# List 5 most recent archived sessions (-d lists the directories themselves,
# not their contents)
bash(ls -1dt .workflow/archives/WFS-* 2>/dev/null | head -5 | xargs -I {} basename {})
```

**Output**:
```
### Recent Sessions
- WFS-auth-system (archived)
- WFS-payment-flow (archived)
- WFS-user-dashboard (archived)

Use /workflow:session:complete to archive current session.
```

## Workflow Session Mode (Default)

### Step 1: Find Active Session
```bash
find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1
```

### Step 2: Load Session Data
```bash
cat .workflow/active/WFS-session/workflow-session.json
```

### Step 3: Scan Task Files
```bash
find .workflow/active/WFS-session/.task/ -name "*.json" -type f 2>/dev/null
```

### Step 4: Generate Task Status
```bash
cat .workflow/active/WFS-session/.task/impl-1.json | jq -r '.status'
```

### Step 5: Count Task Progress
```bash
find .workflow/active/WFS-session/.task/ -name "*.json" -type f | wc -l
find .workflow/active/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
```

### Step 6: Display Overview
```markdown
# Workflow Overview
**Session**: WFS-session-name
**Progress**: 3/8 tasks completed

## Active Tasks
- [IN PROGRESS] impl-1: Current task in progress
- [ ] impl-2: Next pending task

## Completed Tasks
- [COMPLETED] impl-0: Setup completed
```
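Steps 4-5 derive the "3/8 tasks completed" line from per-task JSON files; done in-process, the same tally might look like this (a sketch; the task shape is assumed from the dashboard section below):

```javascript
// Tally progress from task objects like { task_id, status }.
function progress(tasks) {
  const done = tasks.filter(t => t.status === 'completed').length;
  return `${done}/${tasks.length} tasks completed`;
}

const tasks = [
  { task_id: 'impl-0', status: 'completed' },
  { task_id: 'impl-1', status: 'in_progress' },
  { task_id: 'impl-2', status: 'pending' }
];
console.log(progress(tasks)); // → "1/3 tasks completed"
```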

## Dashboard Mode (HTML Board)

### Step 1: Check for --dashboard flag
```bash
# If --dashboard flag present → Execute Dashboard Mode
```

### Step 2: Collect Workflow Data

**Collect Active Sessions**:
```bash
# Find all active sessions
find .workflow/active/ -name "WFS-*" -type d 2>/dev/null

# For each active session, read metadata and tasks
for session in $(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null); do
  cat "$session/workflow-session.json"
  find "$session/.task/" -name "*.json" -type f 2>/dev/null
done
```

**Collect Archived Sessions**:
```bash
# Find all archived sessions
find .workflow/archives/ -name "WFS-*" -type d 2>/dev/null

# Read manifest if exists
cat .workflow/archives/manifest.json 2>/dev/null

# For each archived session, read metadata
for archive in $(find .workflow/archives/ -name "WFS-*" -type d 2>/dev/null); do
  cat "$archive/workflow-session.json" 2>/dev/null
  # Count completed tasks
  find "$archive/.task/" -name "*.json" -type f 2>/dev/null | wc -l
done
```

### Step 3: Process and Structure Data

**Build data structure for dashboard** (pseudocode):
```javascript
const dashboardData = {
  activeSessions: [],
  archivedSessions: [],
  generatedAt: new Date().toISOString()
};

// Process active sessions
for each active_session in active_sessions:
  const sessionData = JSON.parse(Read(active_session/workflow-session.json));
  const tasks = [];

  // Load all tasks for this session
  for each task_file in find(active_session/.task/*.json):
    const taskData = JSON.parse(Read(task_file));
    tasks.push({
      task_id: taskData.task_id,
      title: taskData.title,
      status: taskData.status,
      type: taskData.type
    });

  dashboardData.activeSessions.push({
    session_id: sessionData.session_id,
    project: sessionData.project,
    status: sessionData.status,
    created_at: sessionData.created_at || sessionData.initialized_at,
    tasks: tasks
  });

// Process archived sessions
for each archived_session in archived_sessions:
  const sessionData = JSON.parse(Read(archived_session/workflow-session.json));
  const taskCount = bash(find archived_session/.task/*.json | wc -l);

  dashboardData.archivedSessions.push({
    session_id: sessionData.session_id,
    project: sessionData.project,
    archived_at: sessionData.completed_at || sessionData.archived_at,
    taskCount: parseInt(taskCount),
    archive_path: archived_session
  });
```

### Step 4: Generate HTML from Template

**Load template and inject data**:
```javascript
// Read the HTML template
const template = Read("~/.claude/templates/workflow-dashboard.html");

// Prepare data for injection
const dataJson = JSON.stringify(dashboardData, null, 2);

// Replace placeholder with actual data
const htmlContent = template.replace('{{WORKFLOW_DATA}}', dataJson);

// Ensure .workflow directory exists
bash(mkdir -p .workflow);
```
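One caveat with `String.prototype.replace` worth noting here: when the second argument is a string, sequences like `$&` or `$'` inside the serialized JSON are expanded as replacement patterns; passing a function instead inserts the payload verbatim. A sketch with a hypothetical placeholder matching the one above:

```javascript
const template = '<script>const data = {{WORKFLOW_DATA}};</script>';
const dataJson = JSON.stringify({ note: 'costs $& more' });

// String replacement: "$&" in the payload expands to the matched placeholder
const naive = template.replace('{{WORKFLOW_DATA}}', dataJson);

// Function replacement: payload is inserted verbatim
const safe = template.replace('{{WORKFLOW_DATA}}', () => dataJson);

console.log(naive.includes('{{WORKFLOW_DATA}}')); // → true ($& re-inserted the match)
console.log(safe.includes('costs $& more'));      // → true
```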

### Step 5: Write HTML File

```bash
# Write the generated HTML to .workflow/dashboard.html
Write({
  file_path: ".workflow/dashboard.html",
  content: htmlContent
})
```

### Step 6: Display Success Message

```markdown
Dashboard generated successfully!

Location: .workflow/dashboard.html

Open in browser:
file://$(pwd)/.workflow/dashboard.html

Features:
- 📊 Active sessions overview
- 📦 Archived sessions history
- 🔍 Search and filter
- 📈 Progress tracking
- 🎨 Dark/light theme

Refresh data: Re-run /workflow:status --dashboard
```
@@ -1,7 +1,7 @@
 ---
 name: tdd-plan
 description: TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking
-argument-hint: "[--cli-execute] \"feature description\"|file.md"
+argument-hint: "\"feature description\"|file.md"
 allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
 ---

@@ -9,40 +9,43 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)

 ## Coordinator Role

-**This command is a pure orchestrator**: Execute 6 slash commands in sequence, parse outputs, pass context, and ensure complete TDD workflow creation with Red-Green-Refactor task generation.
+**This command is a pure orchestrator**: Dispatches 6 slash commands in sequence, parses outputs, passes context, and ensures complete TDD workflow creation with Red-Green-Refactor task generation.

-**Execution Modes**:
-- **Agent Mode** (default): Use `/workflow:tools:task-generate-tdd` (autonomous agent-driven)
-- **CLI Mode** (`--cli-execute`): Use `/workflow:tools:task-generate-tdd --cli-execute` (Gemini/Qwen)
+**CLI Tool Selection**: CLI tool usage is determined semantically from the user's task description. Include "use Codex/Gemini/Qwen" in your request for CLI execution.

 **Task Attachment Model**:
-- SlashCommand invocation **expands workflow** by attaching sub-tasks to current TodoWrite
+- SlashCommand dispatch **expands workflow** by attaching sub-tasks to current TodoWrite
-- When a sub-command is invoked (e.g., `/workflow:tools:test-context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
+- When dispatching a sub-command (e.g., `/workflow:tools:test-context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
 - Orchestrator **executes these attached tasks** sequentially
 - After completion, attached tasks are **collapsed** back to high-level phase summary
 - This is **task expansion**, not external delegation

 **Auto-Continue Mechanism**:
 - TodoList tracks current phase status and dynamically manages task attachment/collapse
-- When each phase finishes executing, automatically execute next pending phase
+- When each phase finishes executing, automatically dispatch the next pending phase
 - All phases run autonomously without user interaction
 - **⚠️ CONTINUOUS EXECUTION** - Do not stop until all phases complete

 ## Core Rules

-1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
+1. **Start Immediately**: First action is TodoWrite initialization, second action is dispatching Phase 1
 2. **No Preliminary Analysis**: Do not read files before Phase 1
 3. **Parse Every Output**: Extract required data for next phase
-4. **Auto-Continue via TodoList**: Check TodoList status to execute next pending phase automatically
+4. **Auto-Continue via TodoList**: Check TodoList status to dispatch the next pending phase automatically
 5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
|
||||||
6. **TDD Context**: All descriptions include "TDD:" prefix
|
6. **TDD Context**: All descriptions include "TDD:" prefix
|
||||||
7. **Task Attachment Model**: SlashCommand invocation **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
|
7. **Task Attachment Model**: SlashCommand dispatch **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
|
||||||
8. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
|
8. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and dispatch next phase
|
||||||
|
|
||||||
## 6-Phase Execution (with Conflict Resolution)
|
## 6-Phase Execution (with Conflict Resolution)
|
||||||
|
|
||||||
### Phase 1: Session Discovery
|
### Phase 1: Session Discovery
|
||||||
**Command**: `/workflow:session:start --auto "TDD: [structured-description]"`
|
|
||||||
|
**Step 1.1: Dispatch** - Session discovery and initialization
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/workflow:session:start --type tdd --auto \"TDD: [structured-description]\"")
|
||||||
|
```
|
||||||
|
|
||||||
**TDD Structured Format**:
|
**TDD Structured Format**:
|
||||||
```
|
```
|
||||||
@@ -62,7 +65,12 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
---
|
---
|
||||||
|
|
||||||
### Phase 2: Context Gathering
|
### Phase 2: Context Gathering
|
||||||
**Command**: `/workflow:tools:context-gather --session [sessionId] "TDD: [structured-description]"`
|
|
||||||
|
**Step 2.1: Dispatch** - Context gathering and analysis
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"TDD: [structured-description]\"")
|
||||||
|
```
|
||||||
|
|
||||||
**Use Same Structured Description**: Pass the same structured format from Phase 1
|
**Use Same Structured Description**: Pass the same structured format from Phase 1
|
||||||
|
|
||||||
@@ -83,7 +91,12 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
---
|
---
|
||||||
|
|
||||||
### Phase 3: Test Coverage Analysis
|
### Phase 3: Test Coverage Analysis
|
||||||
**Command**: `/workflow:tools:test-context-gather --session [sessionId]`
|
|
||||||
|
**Step 3.1: Dispatch** - Test coverage analysis and framework detection
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/workflow:tools:test-context-gather --session [sessionId]")
|
||||||
|
```
|
||||||
|
|
||||||
**Purpose**: Analyze existing codebase for:
|
**Purpose**: Analyze existing codebase for:
|
||||||
- Existing test patterns and conventions
|
- Existing test patterns and conventions
|
||||||
@@ -95,9 +108,9 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
|
|
||||||
|
|
||||||
|
|
||||||
<!-- TodoWrite: When test-context-gather invoked, INSERT 3 test-context-gather tasks -->
|
<!-- TodoWrite: When test-context-gather dispatched, INSERT 3 test-context-gather tasks -->
|
||||||
|
|
||||||
**TodoWrite Update (Phase 3 SlashCommand invoked - tasks attached)**:
|
**TodoWrite Update (Phase 3 SlashCommand dispatched - tasks attached)**:
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
||||||
@@ -111,7 +124,7 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
]
|
]
|
||||||
```
|
```
|
||||||
|
|
||||||
**Note**: SlashCommand invocation **attaches** test-context-gather's 3 tasks. Orchestrator **executes** these tasks.
|
**Note**: SlashCommand dispatch **attaches** test-context-gather's 3 tasks. Orchestrator **executes** these tasks.
|
||||||
|
|
||||||
**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
|
**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
|
||||||
|
|
||||||
@@ -138,7 +151,11 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
|
|
||||||
**Trigger**: Only execute when context-package.json indicates conflict_risk is "medium" or "high"
|
**Trigger**: Only execute when context-package.json indicates conflict_risk is "medium" or "high"
|
||||||
|
|
||||||
**Command**: `SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId] --context [contextPath]")`
|
**Step 4.1: Dispatch** - Conflict detection and resolution
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId] --context [contextPath]")
|
||||||
|
```
|
||||||
|
|
||||||
**Input**:
|
**Input**:
|
||||||
- sessionId from Phase 1
|
- sessionId from Phase 1
|
||||||
@@ -147,18 +164,18 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
|
|
||||||
**Parse Output**:
|
**Parse Output**:
|
||||||
- Extract: Execution status (success/skipped/failed)
|
- Extract: Execution status (success/skipped/failed)
|
||||||
- Verify: CONFLICT_RESOLUTION.md file path (if executed)
|
- Verify: conflict-resolution.json file path (if executed)
|
||||||
|
|
||||||
**Validation**:
|
**Validation**:
|
||||||
- File `.workflow/active/[sessionId]/.process/CONFLICT_RESOLUTION.md` exists (if executed)
|
- File `.workflow/active/[sessionId]/.process/conflict-resolution.json` exists (if executed)
|
||||||
|
|
||||||
**Skip Behavior**:
|
**Skip Behavior**:
|
||||||
- If conflict_risk is "none" or "low", skip directly to Phase 5
|
- If conflict_risk is "none" or "low", skip directly to Phase 5
|
||||||
- Display: "No significant conflicts detected, proceeding to TDD task generation"
|
- Display: "No significant conflicts detected, proceeding to TDD task generation"
|
||||||
|
|
||||||
<!-- TodoWrite: If conflict_risk ≥ medium, INSERT 3 conflict-resolution tasks -->
|
<!-- TodoWrite: If conflict_risk ≥ medium, INSERT 3 conflict-resolution tasks when dispatched -->
|
||||||
|
|
||||||
**TodoWrite Update (Phase 4 SlashCommand invoked - tasks attached, if conflict_risk ≥ medium)**:
|
**TodoWrite Update (Phase 4 SlashCommand dispatched - tasks attached, if conflict_risk ≥ medium)**:
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
||||||
@@ -173,7 +190,7 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
]
|
]
|
||||||
```
|
```
|
||||||
|
|
||||||
**Note**: SlashCommand invocation **attaches** conflict-resolution's 3 tasks. Orchestrator **executes** these tasks.
|
**Note**: SlashCommand dispatch **attaches** conflict-resolution's 3 tasks. Orchestrator **executes** these tasks.
|
||||||
|
|
||||||
**Next Action**: Tasks attached → **Execute Phase 4.1-4.3** sequentially
|
**Next Action**: Tasks attached → **Execute Phase 4.1-4.3** sequentially
|
||||||
|
|
||||||
@@ -198,7 +215,13 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
**Memory State Check**:
|
**Memory State Check**:
|
||||||
- Evaluate current context window usage and memory state
|
- Evaluate current context window usage and memory state
|
||||||
- If memory usage is high (>110K tokens or approaching context limits):
|
- If memory usage is high (>110K tokens or approaching context limits):
|
||||||
- **Command**: `SlashCommand(command="/compact")`
|
|
||||||
|
**Step 4.5: Dispatch** - Memory compaction
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/compact")
|
||||||
|
```
|
||||||
|
|
||||||
- This optimizes memory before proceeding to Phase 5
|
- This optimizes memory before proceeding to Phase 5
|
||||||
- Memory compaction is particularly important after analysis phase which may generate extensive documentation
|
- Memory compaction is particularly important after analysis phase which may generate extensive documentation
|
||||||
- Ensures optimal performance and prevents context overflow
|
- Ensures optimal performance and prevents context overflow
|
||||||
@@ -206,9 +229,14 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
---
|
---
|
||||||
|
|
||||||
### Phase 5: TDD Task Generation
|
### Phase 5: TDD Task Generation
|
||||||
**Command**:
|
|
||||||
- Agent Mode (default): `/workflow:tools:task-generate-tdd --session [sessionId]`
|
**Step 5.1: Dispatch** - TDD task generation via action-planning-agent
|
||||||
- CLI Mode (`--cli-execute`): `/workflow:tools:task-generate-tdd --session [sessionId] --cli-execute`
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId]")
|
||||||
|
```
|
||||||
|
|
||||||
|
**Note**: CLI tool usage is determined semantically from user's task description.
|
||||||
|
|
||||||
**Parse**: Extract feature count, task count (not chain count - tasks now contain internal TDD cycles)
|
**Parse**: Extract feature count, task count (not chain count - tasks now contain internal TDD cycles)
|
||||||
|
|
||||||
@@ -223,9 +251,9 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
- IMPL_PLAN.md contains workflow_type: "tdd" in frontmatter
|
- IMPL_PLAN.md contains workflow_type: "tdd" in frontmatter
|
||||||
- Task count ≤10 (compliance with task limit)
|
- Task count ≤10 (compliance with task limit)
|
||||||
|
|
||||||
<!-- TodoWrite: When task-generate-tdd invoked, INSERT 3 task-generate-tdd tasks -->
|
<!-- TodoWrite: When task-generate-tdd dispatched, INSERT 3 task-generate-tdd tasks -->
|
||||||
|
|
||||||
**TodoWrite Update (Phase 5 SlashCommand invoked - tasks attached)**:
|
**TodoWrite Update (Phase 5 SlashCommand dispatched - tasks attached)**:
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
||||||
@@ -239,7 +267,7 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
]
|
]
|
||||||
```
|
```
|
||||||
|
|
||||||
**Note**: SlashCommand invocation **attaches** task-generate-tdd's 3 tasks. Orchestrator **executes** these tasks. Each generated IMPL task will contain internal Red-Green-Refactor cycle.
|
**Note**: SlashCommand dispatch **attaches** task-generate-tdd's 3 tasks. Orchestrator **executes** these tasks. Each generated IMPL task will contain internal Red-Green-Refactor cycle.
|
||||||
|
|
||||||
**Next Action**: Tasks attached → **Execute Phase 5.1-5.3** sequentially
|
**Next Action**: Tasks attached → **Execute Phase 5.1-5.3** sequentially
|
||||||
|
|
||||||
@@ -319,7 +347,7 @@ Quality Gate: Consider running /workflow:action-plan-verify to validate TDD task
|
|||||||
|
|
||||||
### Key Principles
|
### Key Principles
|
||||||
|
|
||||||
1. **Task Attachment** (when SlashCommand invoked):
|
1. **Task Attachment** (when SlashCommand dispatched):
|
||||||
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
|
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
|
||||||
- Example: `/workflow:tools:test-context-gather` attaches 3 sub-tasks (Phase 3.1, 3.2, 3.3)
|
- Example: `/workflow:tools:test-context-gather` attaches 3 sub-tasks (Phase 3.1, 3.2, 3.3)
|
||||||
- First attached task marked as `in_progress`, others as `pending`
|
- First attached task marked as `in_progress`, others as `pending`
|
||||||
@@ -336,7 +364,7 @@ Quality Gate: Consider running /workflow:action-plan-verify to validate TDD task
|
|||||||
- No user intervention required between phases
|
- No user intervention required between phases
|
||||||
- TodoWrite dynamically reflects current execution state
|
- TodoWrite dynamically reflects current execution state
|
||||||
|
|
||||||
**Lifecycle Summary**: Initial pending tasks → Phase invoked (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary) → Next phase begins (conditional Phase 4 if conflict_risk ≥ medium) → Repeat until all phases complete.
|
**Lifecycle Summary**: Initial pending tasks → Phase dispatched (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary) → Next phase begins (conditional Phase 4 if conflict_risk ≥ medium) → Repeat until all phases complete.
|
||||||
|
|
||||||
### TDD-Specific Features
|
### TDD-Specific Features
|
||||||
|
|
||||||
@@ -374,7 +402,7 @@ TDD Workflow Orchestrator
|
|||||||
│ ├─ Phase 4.1: Detect conflicts with CLI
|
│ ├─ Phase 4.1: Detect conflicts with CLI
|
||||||
│ ├─ Phase 4.2: Present conflicts to user
|
│ ├─ Phase 4.2: Present conflicts to user
|
||||||
│ └─ Phase 4.3: Apply resolution strategies
|
│ └─ Phase 4.3: Apply resolution strategies
|
||||||
│ └─ Returns: CONFLICT_RESOLUTION.md ← COLLAPSED
|
│ └─ Returns: conflict-resolution.json ← COLLAPSED
|
||||||
│ ELSE:
|
│ ELSE:
|
||||||
│ └─ Skip to Phase 5
|
│ └─ Skip to Phase 5
|
||||||
│
|
│
|
||||||
@@ -422,8 +450,7 @@ Convert user input to TDD-structured format:
|
|||||||
- `/workflow:tools:test-context-gather` - Phase 3: Analyze existing test patterns and coverage
|
- `/workflow:tools:test-context-gather` - Phase 3: Analyze existing test patterns and coverage
|
||||||
- `/workflow:tools:conflict-resolution` - Phase 4: Detect and resolve conflicts (auto-triggered if conflict_risk ≥ medium)
|
- `/workflow:tools:conflict-resolution` - Phase 4: Detect and resolve conflicts (auto-triggered if conflict_risk ≥ medium)
|
||||||
- `/compact` - Phase 4: Memory optimization (if context approaching limits)
|
- `/compact` - Phase 4: Memory optimization (if context approaching limits)
|
||||||
- `/workflow:tools:task-generate-tdd` - Phase 5: Generate TDD tasks with agent-driven approach (default, autonomous)
|
- `/workflow:tools:task-generate-tdd` - Phase 5: Generate TDD tasks (CLI tool usage determined semantically)
|
||||||
- `/workflow:tools:task-generate-tdd --cli-execute` - Phase 5: Generate TDD tasks with CLI tools (Gemini/Qwen, when `--cli-execute` flag used)
|
|
||||||
|
|
||||||
**Follow-up Commands**:
|
**Follow-up Commands**:
|
||||||
- `/workflow:action-plan-verify` - Recommended: Verify TDD plan quality and structure before execution
|
- `/workflow:action-plan-verify` - Recommended: Verify TDD plan quality and structure before execution
|
||||||

@@ -18,6 +18,39 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(gemini:*)
 - Validate TDD cycle execution
 - Generate compliance report

+## Execution Process
+
+```
+Input Parsing:
+└─ Decision (session argument):
+   ├─ session-id provided → Use provided session
+   └─ No session-id → Auto-detect active session
+
+Phase 1: Session Discovery
+├─ Validate session directory exists
+└─ TodoWrite: Mark phase 1 completed
+
+Phase 2: Task Chain Validation
+├─ Load all task JSONs from .task/
+├─ Extract task IDs and group by feature
+├─ Validate TDD structure:
+│  ├─ TEST-N.M → IMPL-N.M → REFACTOR-N.M chain
+│  ├─ Dependency verification
+│  └─ Meta field validation (tdd_phase, agent)
+└─ TodoWrite: Mark phase 2 completed
+
+Phase 3: Test Execution Analysis
+└─ /workflow:tools:tdd-coverage-analysis
+   ├─ Coverage metrics extraction
+   ├─ TDD cycle verification
+   └─ Compliance score calculation
+
+Phase 4: Compliance Report Generation
+├─ Gemini analysis for comprehensive report
+├─ Generate TDD_COMPLIANCE_REPORT.md
+└─ Return summary to user
+```
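The "Auto-detect active session" branch in the flow above can be exercised in isolation. The snippet below is a self-contained sketch: it builds a throwaway `.workflow/active/` layout and applies the same `find | head | sed` pipeline this command documents; the directory and session name are hypothetical, created only for the demo.

```shell
#!/bin/sh
# Sketch of "Auto-detect active session": pick the first WFS-* session
# directory. The sample layout is hypothetical, created for this demo.
set -eu
root=$(mktemp -d)
mkdir -p "$root/.workflow/active/WFS-user-auth-v2"
session=$(find "$root/.workflow/active/" -name "WFS-*" -type d | head -1 | sed 's/.*\///')
echo "$session"
rm -rf "$root"
```

Because `find` may return sessions in arbitrary order, `head -1` only gives a deterministic answer when a single `WFS-*` directory exists.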

 ## 4-Phase Execution

 ### Phase 1: Session Discovery
@@ -44,18 +77,32 @@ find .workflow/active/ -name "WFS-*" -type d | head -1 | sed 's/.*\///'

 ```bash
 # Load all task JSONs
-find .workflow/active/{sessionId}/.task/ -name '*.json'
+for task_file in .workflow/active/{sessionId}/.task/*.json; do
+  cat "$task_file"
+done

 # Extract task IDs
-find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.id' {} \;
+for task_file in .workflow/active/{sessionId}/.task/*.json; do
+  cat "$task_file" | jq -r '.id'
+done

-# Check dependencies
-find .workflow/active/{sessionId}/.task/ -name 'IMPL-*.json' -exec jq -r '.context.depends_on[]?' {} \;
-find .workflow/active/{sessionId}/.task/ -name 'REFACTOR-*.json' -exec jq -r '.context.depends_on[]?' {} \;
+# Check dependencies - read tasks and filter for IMPL/REFACTOR
+for task_file in .workflow/active/{sessionId}/.task/IMPL-*.json; do
+  cat "$task_file" | jq -r '.context.depends_on[]?'
+done
+
+for task_file in .workflow/active/{sessionId}/.task/REFACTOR-*.json; do
+  cat "$task_file" | jq -r '.context.depends_on[]?'
+done

 # Check meta fields
-find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.tdd_phase' {} \;
-find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
+for task_file in .workflow/active/{sessionId}/.task/*.json; do
+  cat "$task_file" | jq -r '.meta.tdd_phase'
+done
+
+for task_file in .workflow/active/{sessionId}/.task/*.json; do
+  cat "$task_file" | jq -r '.meta.agent'
+done
 ```
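The per-file loops above can each be collapsed into a single `jq` invocation, since `jq` accepts multiple input files. The snippet below is a self-contained sketch run against two hypothetical task JSONs whose schema mirrors the fields checked above (`.id`, `.context.depends_on`, `.meta.tdd_phase`); it requires `jq` on PATH and does not touch any real session data.

```shell
#!/bin/sh
# Self-contained sketch of the Phase 2 field extraction against two
# hypothetical task JSONs created in a temp directory.
set -eu
dir=$(mktemp -d)
printf '%s' '{"id":"TEST-1.1","context":{"depends_on":[]},"meta":{"tdd_phase":"red","agent":"test-agent"}}' > "$dir/TEST-1.1.json"
printf '%s' '{"id":"IMPL-1.1","context":{"depends_on":["TEST-1.1"]},"meta":{"tdd_phase":"green","agent":"impl-agent"}}' > "$dir/IMPL-1.1.json"
# One jq call per field replaces each per-file loop:
ids=$(jq -r '.id' "$dir"/*.json)
deps=$(jq -r '.context.depends_on[]?' "$dir"/IMPL-*.json)
phases=$(jq -r '.meta.tdd_phase' "$dir"/*.json)
echo "$ids"
echo "$deps"
echo "$phases"
rm -rf "$dir"
```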

 **Validation**:
@@ -94,7 +141,7 @@ find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent
 **Gemini analysis for comprehensive TDD compliance report**

 ```bash
-cd project-root && gemini -p "
+ccw cli -p "
 PURPOSE: Generate TDD compliance report
 TASK: Analyze TDD workflow execution and generate quality report
 CONTEXT: @{.workflow/active/{sessionId}/.task/*.json,.workflow/active/{sessionId}/.summaries/*,.workflow/active/{sessionId}/.process/tdd-cycle-report.md}
@@ -106,7 +153,7 @@ EXPECTED:
 - Red-Green-Refactor cycle validation
 - Best practices adherence assessment
 RULES: Focus on TDD best practices and workflow adherence. Be specific about violations and improvements.
-" > .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md
+" --tool gemini --mode analysis --cd project-root > .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md
 ```

 **Output**: TDD_COMPLIANCE_REPORT.md

@@ -221,6 +221,7 @@ return "conservative";
 ```javascript
 Task(
   subagent_type="cli-planning-agent",
+  run_in_background=false,
   description=`Analyze test failures (iteration ${N}) - ${strategy} strategy`,
   prompt=`
 ## Task Objective
@@ -271,6 +272,7 @@ Task(
 ```javascript
 Task(
   subagent_type="test-fix-agent",
+  run_in_background=false,
   description=`Execute ${task.meta.type}: ${task.title}`,
   prompt=`
 ## Task Objective

@@ -1,7 +1,7 @@
 ---
 name: test-fix-gen
 description: Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning
-argument-hint: "[--use-codex] [--cli-execute] (source-session-id | \"feature description\" | /path/to/file.md)"
+argument-hint: "(source-session-id | \"feature description\" | /path/to/file.md)"
 allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
 ---

@@ -43,7 +43,7 @@ fi
 - **Session Isolation**: Creates independent `WFS-test-[slug]` session
 - **Context-First**: Gathers implementation context via appropriate method
 - **Format Reuse**: Creates standard `IMPL-*.json` tasks with `meta.type: "test-fix"`
-- **Manual First**: Default to manual fixes, use `--use-codex` for automation
+- **Semantic CLI Selection**: CLI tool usage determined from user's task description
 - **Automatic Detection**: Input pattern determines execution mode

 ### Coordinator Role
@@ -59,8 +59,8 @@ This command is a **pure planning coordinator**:
 - **All execution delegated to `/workflow:test-cycle-execute`**

 **Task Attachment Model**:
-- SlashCommand invocation **expands workflow** by attaching sub-tasks to current TodoWrite
+- SlashCommand dispatch **expands workflow** by attaching sub-tasks to current TodoWrite
-- When a sub-command is invoked (e.g., `/workflow:tools:test-context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
+- When dispatching a sub-command (e.g., `/workflow:tools:test-context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
 - Orchestrator **executes these attached tasks** sequentially
 - After completion, attached tasks are **collapsed** back to high-level phase summary
 - This is **task expansion**, not external delegation
@@ -79,16 +79,14 @@ This command is a **pure planning coordinator**:

 ```bash
 # Basic syntax
-/workflow:test-fix-gen [FLAGS] <INPUT>
+/workflow:test-fix-gen <INPUT>

-# Flags (optional)
---use-codex    # Enable Codex automated fixes in IMPL-002
---cli-execute  # Enable CLI execution in IMPL-001
-
 # Input
 <INPUT>  # Session ID, description, or file path
 ```

+**Note**: CLI tool usage is determined semantically from the task description. To request CLI execution, include it in your description (e.g., "use Codex for automated fixes").
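The "Automatic Detection" rule above (input pattern determines execution mode) can be sketched as a small classifier. The heuristics below are an assumption for illustration only — the command's real detection logic is not shown in this diff and may differ.

```shell
#!/bin/sh
# Hypothetical sketch of input-mode detection: classify <INPUT> as a
# session ID, a file path, or a free-text description.
classify() {
  case "$1" in
    WFS-*)            echo "session" ;;      # session IDs use the WFS- prefix
    /*|./*|../*|*.md) echo "file" ;;         # absolute/relative paths or .md files
    *)                echo "description" ;;  # anything else is free text
  esac
}
mode_session=$(classify "WFS-user-auth-v2")
mode_file=$(classify "./docs/api-requirements.md")
mode_text=$(classify "Test the user authentication API endpoints")
printf '%s %s %s\n' "$mode_session" "$mode_file" "$mode_text"
```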
|
|
||||||
### Usage Examples
|
### Usage Examples
|
||||||
|
|
||||||
#### Session Mode
|
#### Session Mode
|
||||||
@@ -96,11 +94,8 @@ This command is a **pure planning coordinator**:
|
|||||||
# Test validation for completed implementation
|
# Test validation for completed implementation
|
||||||
/workflow:test-fix-gen WFS-user-auth-v2
|
/workflow:test-fix-gen WFS-user-auth-v2
|
||||||
|
|
||||||
# With automated fixes
|
# With semantic CLI request
|
||||||
/workflow:test-fix-gen --use-codex WFS-api-endpoints
|
/workflow:test-fix-gen WFS-api-endpoints # Add "use Codex" in description for automated fixes
|
||||||
|
|
||||||
# With CLI execution
|
|
||||||
/workflow:test-fix-gen --cli-execute --use-codex WFS-payment-flow
|
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Prompt Mode - Text Description
|
#### Prompt Mode - Text Description
|
||||||
@@ -108,17 +103,14 @@ This command is a **pure planning coordinator**:
|
|||||||
# Generate tests from feature description
|
# Generate tests from feature description
|
||||||
/workflow:test-fix-gen "Test the user authentication API endpoints in src/auth/api.ts"
|
/workflow:test-fix-gen "Test the user authentication API endpoints in src/auth/api.ts"
|
||||||
|
|
||||||
# With automated fixes
|
# With CLI execution (semantic)
|
||||||
/workflow:test-fix-gen --use-codex "Test user registration and login flows"
|
/workflow:test-fix-gen "Test user registration and login flows, use Codex for automated fixes"
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Prompt Mode - File Reference
|
#### Prompt Mode - File Reference
|
||||||
```bash
|
```bash
|
||||||
# Generate tests from requirements file
|
# Generate tests from requirements file
|
||||||
/workflow:test-fix-gen ./docs/api-requirements.md
|
/workflow:test-fix-gen ./docs/api-requirements.md
|
||||||
|
|
||||||
# With flags
|
|
||||||
/workflow:test-fix-gen --use-codex --cli-execute ./specs/feature.md
|
|
||||||
```
|
```
|
||||||
|
|
||||||
### Mode Comparison
|
### Mode Comparison
|
||||||
@@ -136,32 +128,50 @@ This command is a **pure planning coordinator**:
|
|||||||
|
|
||||||
### Core Execution Rules
|
### Core Execution Rules
|
||||||
|
|
||||||
1. **Start Immediately**: First action is TodoWrite, second is Phase 1 session creation
|
1. **Start Immediately**: First action is TodoWrite, second is dispatch Phase 1 session creation
|
||||||
2. **No Preliminary Analysis**: Do not read files before Phase 1
|
2. **No Preliminary Analysis**: Do not read files before Phase 1
|
||||||
3. **Parse Every Output**: Extract required data from each phase for next phase
|
3. **Parse Every Output**: Extract required data from each phase for next phase
|
||||||
4. **Sequential Execution**: Each phase depends on previous phase's output
|
4. **Sequential Execution**: Each phase depends on previous phase's output
|
||||||
5. **Complete All Phases**: Do not return until Phase 5 completes
|
5. **Complete All Phases**: Do not return until Phase 5 completes
|
||||||
6. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
|
6. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
|
||||||
7. **Automatic Detection**: Mode auto-detected from input pattern
|
7. **Automatic Detection**: Mode auto-detected from input pattern
|
||||||
8. **Parse Flags**: Extract `--use-codex` and `--cli-execute` flags for Phase 4
|
8. **Semantic CLI Detection**: CLI tool usage determined from user's task description for Phase 4
|
||||||
9. **Task Attachment Model**: SlashCommand invocation **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
|
9. **Task Attachment Model**: SlashCommand dispatch **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
|
||||||
10. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
|
10. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
|
||||||
|
|
||||||
### 5-Phase Execution
|
### 5-Phase Execution
|
||||||
|
|
||||||
#### Phase 1: Create Test Session
|
#### Phase 1: Create Test Session
|
||||||
|
|
||||||
**Step 1.0: Load Source Session Intent (Session Mode Only)** - Preserve user's original task description for semantic CLI selection

```javascript
// Session Mode: Read source session metadata to get original task description
Read(".workflow/active/[sourceSessionId]/workflow-session.json")
// OR if context-package exists:
Read(".workflow/active/[sourceSessionId]/.process/context-package.json")

// Extract: metadata.task_description or project/description field
// This preserves user's CLI tool preferences (e.g., "use Codex for fixes")
```

**Step 1.1: Dispatch** - Create test workflow session with preserved intent

```javascript
// Session Mode - Include original task description to enable semantic CLI selection
SlashCommand(command="/workflow:session:start --type test --new \"Test validation for [sourceSessionId]: [originalTaskDescription]\"")

// Prompt Mode - User's description already contains their intent
SlashCommand(command="/workflow:session:start --type test --new \"Test generation for: [description]\"")
```
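The extraction in Step 1.0 can be sketched as a small helper. The field precedence (`metadata.task_description`, then `project.description`) follows the comments above; the function itself is illustrative, not part of the command specification.

```javascript
// Hypothetical sketch of Step 1.0's field extraction; not the command's actual code.
function extractTaskDescription(sessionMeta) {
  // Precedence mirrors the spec: metadata.task_description,
  // then project.description, then a top-level description field
  return (sessionMeta.metadata && sessionMeta.metadata.task_description) ||
         (sessionMeta.project && sessionMeta.project.description) ||
         sessionMeta.description || "";
}
```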
**Input**: User argument (session ID, description, or file path)

**Expected Behavior**:
- Creates new session: `WFS-test-[slug]`
- Writes `workflow-session.json` metadata with `type: "test"`
  - **Session Mode**: Additionally includes `source_session_id: "[sourceId]"`, description with original user intent
  - **Prompt Mode**: Uses user's description (already contains intent)
- Returns new session ID

**Parse Output**:
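As an illustrative sketch, parsing the returned session ID might look like the following. The `WFS-test-[slug]` pattern comes from the Expected Behavior above; the regex and function name are assumptions.

```javascript
// Hypothetical output parser; the actual session:start output format is not specified here.
function parseSessionId(output) {
  // Match an ID of the form WFS-test-<slug>
  const m = output.match(/WFS-test-[a-z0-9-]+/i);
  return m ? m[0] : null;
}
```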
@@ -177,9 +187,15 @@ This command is a **pure planning coordinator**:

#### Phase 2: Gather Test Context

**Step 2.1: Dispatch** - Gather test context via appropriate method

```javascript
// Session Mode
SlashCommand(command="/workflow:tools:test-context-gather --session [testSessionId]")

// Prompt Mode
SlashCommand(command="/workflow:tools:context-gather --session [testSessionId] \"[task_description]\"")
```

**Input**: `testSessionId` from Phase 1
@@ -208,7 +224,11 @@ This command is a **pure planning coordinator**:

#### Phase 3: Test Generation Analysis

**Step 3.1: Dispatch** - Generate test requirements using Gemini

```javascript
SlashCommand(command="/workflow:tools:test-concept-enhanced --session [testSessionId] --context [contextPath]")
```

**Input**:
- `testSessionId` from Phase 1
@@ -264,12 +284,16 @@ For each targeted file/function, Gemini MUST generate:

#### Phase 4: Generate Test Tasks

**Step 4.1: Dispatch** - Generate test task JSONs

```javascript
SlashCommand(command="/workflow:tools:test-task-generate --session [testSessionId]")
```

**Input**:
- `testSessionId` from Phase 1

**Note**: CLI tool usage is determined semantically from user's task description.

**Expected Behavior**:
- Parse TEST_ANALYSIS_RESULTS.md from Phase 3 (multi-layered test plan)
@@ -357,7 +381,7 @@ CRITICAL - Next Steps:

#### Key Principles

1. **Task Attachment** (when SlashCommand dispatched):
   - Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
   - Example - Phase 2 with sub-tasks (JSON elided):

@@ -392,7 +416,7 @@ CRITICAL - Next Steps:
- No user intervention required between phases
- TodoWrite dynamically reflects current execution state

**Lifecycle Summary**: Initial pending tasks → Phase dispatched (tasks ATTACHED with mode-specific context gathering) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary) → Next phase begins → Repeat until all phases complete.

#### Test-Fix-Gen Specific Features
@@ -402,7 +426,7 @@ CRITICAL - Next Steps:

- **Phase 2**: Mode-specific context gathering (session summaries vs codebase analysis)
- **Phase 3**: Multi-layered test requirements analysis (L0: Static, L1: Unit, L2: Integration, L3: E2E)
- **Phase 4**: Multi-task generation with quality gate (IMPL-001, IMPL-001.5-review, IMPL-002)
- **Fix Mode Configuration**: CLI tool usage determined semantically from user's task description

---
@@ -501,16 +525,15 @@ If quality gate fails:

- Task ID: `IMPL-002`
- `meta.type: "test-fix"`
- `meta.agent: "@test-fix-agent"`
- `context.depends_on: ["IMPL-001"]`
- `context.requirements`: Execute and fix tests
**Test-Fix Cycle Specification**:
**Note**: This specification describes what test-cycle-execute orchestrator will do. The agent only executes single tasks.
- **Cycle Pattern** (orchestrator-managed): test → gemini_diagnose → fix (agent or CLI) → retest
- **Tools Configuration** (orchestrator-controlled):
  - Gemini for analysis with bug-fix template → surgical fix suggestions
  - Agent fix application (default) OR CLI if `command` field present in implementation_approach
- **Exit Conditions** (orchestrator-enforced):
  - Success: All tests pass
  - Failure: Max iterations reached (5)
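The cycle pattern and exit conditions above can be sketched as a control loop. `runTests`, `diagnose`, and `applyFix` are hypothetical callbacks standing in for the orchestrator's actual tooling; only the retest loop and the 5-iteration cap mirror the specification.

```javascript
// Minimal sketch of the orchestrator-managed test-fix cycle (illustrative only).
function testFixCycle(runTests, diagnose, applyFix, maxIterations = 5) {
  for (let i = 1; i <= maxIterations; i++) {
    // Exit condition: all tests pass
    if (runTests()) return { success: true, iterations: i };
    // Diagnose the failure, then apply a fix (agent or CLI), then retest
    applyFix(diagnose());
  }
  // Exit condition: max iterations reached
  return { success: false, iterations: maxIterations };
}
```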
@@ -556,11 +579,11 @@ WFS-test-[session]/

**File**: `workflow-session.json`

**Session Mode** includes:
- `type: "test"` (set by session:start --type test)
- `source_session_id: "[sourceSessionId]"` (enables automatic cross-session context)

**Prompt Mode** includes:
- `type: "test"` (set by session:start --type test)
- No `source_session_id` field
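For illustration only, Session Mode metadata might look like this (values are invented examples, not tool output):

```json
{
  "type": "test",
  "source_session_id": "WFS-user-auth",
  "description": "Test validation for WFS-user-auth: use Codex for fixes"
}
```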
### Execution Flow Diagram

@@ -654,8 +677,7 @@ Key Points:

4. **Mode Selection**:
   - Use **Session Mode** for completed workflow validation
   - Use **Prompt Mode** for ad-hoc test generation
   - Include "use Codex" in description for autonomous fix application
## Related Commands

@@ -668,9 +690,7 @@ Key Points:

- `/workflow:tools:test-context-gather` - Phase 2 (Session Mode): Gather source session context
- `/workflow:tools:context-gather` - Phase 2 (Prompt Mode): Analyze codebase directly
- `/workflow:tools:test-concept-enhanced` - Phase 3: Generate test requirements using Gemini
- `/workflow:tools:test-task-generate` - Phase 4: Generate test task JSONs (CLI tool usage determined semantically)

**Follow-up Commands**:
- `/workflow:status` - Review generated test tasks
|
@@ -1,7 +1,7 @@

---
name: test-gen
description: Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks
argument-hint: "source-session-id"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
---
@@ -16,11 +16,11 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)

- **Context-First**: Prioritizes gathering code changes and summaries from source session
- **Format Reuse**: Creates standard `IMPL-*.json` task, using `meta.type: "test-fix"` for agent assignment
- **Parameter Simplification**: Tools auto-detect test session type via metadata, no manual cross-session parameters needed
- **Semantic CLI Selection**: CLI tool usage is determined by user's task description (e.g., "use Codex for fixes")

**Task Attachment Model**:
- SlashCommand dispatch **expands workflow** by attaching sub-tasks to current TodoWrite
- When a sub-command is dispatched (e.g., `/workflow:tools:test-context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
- Orchestrator **executes these attached tasks** sequentially
- After completion, attached tasks are **collapsed** back to high-level phase summary
- This is **task expansion**, not external delegation
@@ -48,23 +48,44 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)

5. **Complete All Phases**: Do not return to user until Phase 5 completes (summary returned)
6. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
7. **Automatic Detection**: context-gather auto-detects test session and gathers source session context
8. **Semantic CLI Selection**: CLI tool usage determined from user's task description, passed to Phase 4
9. **Command Boundary**: This command ends at Phase 5 summary. Test execution is NOT part of this command.
10. **Task Attachment Model**: SlashCommand dispatch **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
11. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase

## 5-Phase Execution

### Phase 1: Create Test Session

**Step 1.0: Load Source Session Intent** - Preserve user's original task description for semantic CLI selection

```javascript
// Read source session metadata to get original task description
Read(".workflow/active/[sourceSessionId]/workflow-session.json")
// OR if context-package exists:
Read(".workflow/active/[sourceSessionId]/.process/context-package.json")

// Extract: metadata.task_description or project/description field
// This preserves user's CLI tool preferences (e.g., "use Codex for fixes")
```

**Step 1.1: Dispatch** - Create new test workflow session with preserved intent

```javascript
// Include original task description to enable semantic CLI selection
SlashCommand(command="/workflow:session:start --new \"Test validation for [sourceSessionId]: [originalTaskDescription]\"")
```

**Input**:
- `sourceSessionId` from user argument (e.g., `WFS-user-auth`)
- `originalTaskDescription` from source session metadata (preserves CLI tool preferences)
**Expected Behavior**:
- Creates new session with pattern `WFS-test-[source-slug]` (e.g., `WFS-test-user-auth`)
- Writes metadata to `workflow-session.json`:
  - `workflow_type: "test_session"`
  - `source_session_id: "[sourceSessionId]"`
  - Description includes original user intent for semantic CLI selection
- Returns new session ID for subsequent phases

**Parse Output**:
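As an illustration of the naming rule above (`WFS-test-[source-slug]`), a hypothetical helper might derive the expected test session ID from the source session ID like this; it is not the command's actual parser.

```javascript
// Illustrative only: derive WFS-test-<slug> from a WFS-<slug> source session ID.
function testSessionIdFor(sourceSessionId) {
  // e.g., WFS-user-auth -> WFS-test-user-auth
  return sourceSessionId.replace(/^WFS-/, "WFS-test-");
}
```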
@@ -82,7 +103,12 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)

---

### Phase 2: Gather Test Context

**Step 2.1: Dispatch** - Gather test coverage context from source session

```javascript
SlashCommand(command="/workflow:tools:test-context-gather --session [testSessionId]")
```

**Input**: `testSessionId` from Phase 1 (e.g., `WFS-test-user-auth`)
@@ -104,9 +130,9 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)

- Test framework detected
- Test conventions documented

<!-- TodoWrite: When test-context-gather dispatched, INSERT 3 test-context-gather tasks -->

**TodoWrite Update (Phase 2 SlashCommand dispatched - tasks attached)**:
```json
[
  {"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
  ...
]
```

**Note**: SlashCommand dispatch **attaches** test-context-gather's 3 tasks. Orchestrator **executes** these tasks.

**Next Action**: Tasks attached → **Execute Phase 2.1-2.3** sequentially
@@ -141,7 +167,12 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)

---

### Phase 3: Test Generation Analysis

**Step 3.1: Dispatch** - Analyze test requirements with Gemini

```javascript
SlashCommand(command="/workflow:tools:test-concept-enhanced --session [testSessionId] --context [testContextPath]")
```

**Input**:
- `testSessionId` from Phase 1
@@ -168,9 +199,9 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)

- Implementation Targets (test files to create)
- Success Criteria

<!-- TodoWrite: When test-concept-enhanced dispatched, INSERT 3 concept-enhanced tasks -->

**TodoWrite Update (Phase 3 SlashCommand dispatched - tasks attached)**:
```json
[
  {"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
  ...
]
```

**Note**: SlashCommand dispatch **attaches** test-concept-enhanced's 3 tasks. Orchestrator **executes** these tasks.

**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
@@ -205,12 +236,17 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)

---

### Phase 4: Generate Test Tasks

**Step 4.1: Dispatch** - Generate test task JSON files and planning documents

```javascript
SlashCommand(command="/workflow:tools:test-task-generate --session [testSessionId]")
```

**Input**:
- `testSessionId` from Phase 1

**Note**: CLI tool usage for fixes is determined semantically from user's task description (e.g., "use Codex for automated fixes").
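One possible (hypothetical) shape for that semantic check is a simple keyword match on the preserved description; the real selection logic is not specified in this document.

```javascript
// Naive illustrative heuristic; an actual implementation would be more robust.
function wantsCodexFixes(taskDescription) {
  // Detect phrases like "use Codex" in the user's task description
  return /\buse\s+codex\b/i.test(taskDescription);
}
```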
**Expected Behavior**:
- Parse TEST_ANALYSIS_RESULTS.md from Phase 3

@@ -240,21 +276,20 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)

- Task ID: `IMPL-002`
- `meta.type: "test-fix"`
- `meta.agent: "@test-fix-agent"`
- `context.depends_on: ["IMPL-001"]`
- `context.requirements`: Execute and fix tests
- `flow_control.implementation_approach.test_fix_cycle`: Complete cycle specification
  - **Cycle pattern**: test → gemini_diagnose → fix (agent or CLI based on `command` field) → retest
  - **Tools configuration**: Gemini for analysis with bug-fix template, agent or CLI for fixes
  - **Exit conditions**: Success (all pass) or failure (max iterations)
- `flow_control.implementation_approach.modification_points`: 3-phase execution flow
  - Phase 1: Initial test execution
  - Phase 2: Iterative Gemini diagnosis + fixes (agent or CLI based on step's `command` field)
  - Phase 3: Final validation and certification

<!-- TodoWrite: When test-task-generate dispatched, INSERT 3 test-task-generate tasks -->

**TodoWrite Update (Phase 4 SlashCommand dispatched - tasks attached)**:
```json
[
  {"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
  ...
]
```

**Note**: SlashCommand dispatch **attaches** test-task-generate's 3 tasks. Orchestrator **executes** these tasks.

**Next Action**: Tasks attached → **Execute Phase 4.1-4.3** sequentially
@@ -307,7 +342,7 @@ Artifacts Created:

Test Framework: [detected framework]
Test Files to Generate: [count]
Fix Mode: [Agent|CLI] (based on `command` field in implementation_approach steps)

Review Generated Artifacts:
- Test plan: .workflow/[testSessionId]/IMPL_PLAN.md
@@ -329,7 +364,7 @@ Ready for execution. Use appropriate workflow commands to proceed.

### Key Principles

1. **Task Attachment** (when SlashCommand dispatched):
   - Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
   - Example: `/workflow:tools:test-context-gather` attaches 3 sub-tasks (Phase 2.1, 2.2, 2.3)
   - First attached task marked as `in_progress`, others as `pending`
@@ -346,14 +381,14 @@ Ready for execution. Use appropriate workflow commands to proceed.

- No user intervention required between phases
- TodoWrite dynamically reflects current execution state

**Lifecycle Summary**: Initial pending tasks → Phase dispatched (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary) → Next phase begins → Repeat until all phases complete.

### Test-Gen Specific Features

- **Phase 2**: Cross-session context gathering from source implementation session
- **Phase 3**: Test requirements analysis with Gemini for generation strategy
- **Phase 4**: Dual-task generation (IMPL-001 for test generation, IMPL-002 for test execution)
- **Fix Mode Configuration**: CLI tool usage determined semantically from user's task description
@@ -424,7 +459,7 @@ Generates two task definition files:

- Agent: @test-fix-agent
- Dependency: IMPL-001 must complete first
- Max iterations: 5
- Fix mode: Agent or CLI (based on `command` field in implementation_approach)

See `/workflow:tools:test-task-generate` for complete task JSON schemas.
@@ -461,11 +496,10 @@ Created in `.workflow/active/WFS-test-[session]/`:

**IMPL-002.json Structure**:
- `meta.type: "test-fix"`
- `meta.agent: "@test-fix-agent"`
- `context.depends_on: ["IMPL-001"]`
- `flow_control.implementation_approach.test_fix_cycle`: Complete cycle specification
  - Gemini diagnosis template
  - Fix application mode (agent or CLI based on `command` field)
  - Max iterations: 5
- `flow_control.implementation_approach.modification_points`: 3-phase flow
@@ -483,13 +517,11 @@ See `/workflow:tools:test-task-generate` for complete JSON schemas.

**Prerequisite Commands**:
- `/workflow:plan` or `/workflow:execute` - Complete implementation session that needs test validation

**Dispatched by This Command** (4 phases):
- `/workflow:session:start` - Phase 1: Create independent test workflow session
- `/workflow:tools:test-context-gather` - Phase 2: Analyze test coverage and gather source session context
- `/workflow:tools:test-concept-enhanced` - Phase 3: Generate test requirements and strategy using Gemini
- `/workflow:tools:test-task-generate` - Phase 4: Generate test task JSONs (CLI tool usage determined semantically)

**Follow-up Commands**:
- `/workflow:status` - Review generated test tasks
|
|||||||
@@ -59,6 +59,41 @@ Analyzes conflicts between implementation plans and existing codebase, **includi
 - Module merge/split decisions
 - **Requires iterative clarification until uniqueness confirmed**
 
+## Execution Process
+
+```
+Input Parsing:
+├─ Parse flags: --session, --context
+└─ Validation: Both REQUIRED, conflict_risk >= medium
+
+Phase 1: Validation
+├─ Step 1: Verify session directory exists
+├─ Step 2: Load context-package.json
+├─ Step 3: Check conflict_risk (skip if none/low)
+└─ Step 4: Prepare agent task prompt
+
+Phase 2: CLI-Powered Analysis (Agent)
+├─ Execute Gemini analysis (Qwen fallback)
+├─ Detect conflicts including ModuleOverlap category
+└─ Generate 2-4 strategies per conflict with modifications
+
+Phase 3: Iterative User Interaction
+└─ FOR each conflict (one by one):
+   ├─ Display conflict with overlap_analysis (if ModuleOverlap)
+   ├─ Display strategies (2-4 + custom option)
+   ├─ User selects strategy
+   └─ IF clarification_needed:
+      ├─ Collect answers
+      ├─ Agent re-analysis
+      └─ Loop until uniqueness_confirmed (max 10 rounds)
+
+Phase 4: Apply Modifications
+├─ Step 1: Extract modifications from resolved strategies
+├─ Step 2: Apply using Edit tool
+├─ Step 3: Update context-package.json (mark resolved)
+└─ Step 4: Output custom conflict summary (if any)
+```
+
 ## Execution Flow
 
 ### Phase 1: Validation
@@ -73,42 +108,51 @@ Analyzes conflicts between implementation plans and existing codebase, **includi
 
 **Agent Delegation**:
 ```javascript
-Task(subagent_type="cli-execution-agent", prompt=`
+Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
 ## Context
 - Session: {session_id}
 - Risk: {conflict_risk}
 - Files: {existing_files_list}
 
+## Exploration Context (from context-package.exploration_results)
+- Exploration Count: ${contextPackage.exploration_results?.exploration_count || 0}
+- Angles Analyzed: ${JSON.stringify(contextPackage.exploration_results?.angles || [])}
+- Pre-identified Conflict Indicators: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.conflict_indicators || [])}
+- Critical Files: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.critical_files?.map(f => f.path) || [])}
+- All Patterns: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.all_patterns || [])}
+- All Integration Points: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.all_integration_points || [])}
+
 ## Analysis Steps
 
 ### 1. Load Context
 - Read existing files from conflict_detection.existing_files
 - Load plan from .workflow/active/{session_id}/.process/context-package.json
+- **NEW**: Load exploration_results and use aggregated_insights for enhanced analysis
 - Extract role analyses and requirements
 
-### 2. Execute CLI Analysis (Enhanced with Scenario Uniqueness Detection)
+### 2. Execute CLI Analysis (Enhanced with Exploration + Scenario Uniqueness)
 
 Primary (Gemini):
-cd {project_root} && gemini -p "
+ccw cli -p "
-PURPOSE: Detect conflicts between plan and codebase, including module scenario overlaps
+PURPOSE: Detect conflicts between plan and codebase, using exploration insights
 TASK:
-• Compare architectures
+• **Review pre-identified conflict_indicators from exploration results**
+• Compare architectures (use exploration key_patterns)
 • Identify breaking API changes
 • Detect data model incompatibilities
 • Assess dependency conflicts
-• **NEW: Analyze module scenario uniqueness**
+• **Analyze module scenario uniqueness**
-  - Extract new module functionality from plan
+  - Use exploration integration_points for precise locations
-  - Search all existing modules with similar functionality
+  - Cross-validate with exploration critical_files
-  - Compare scenario coverage and identify overlaps
   - Generate clarification questions for boundary definition
 MODE: analysis
 CONTEXT: @**/*.ts @**/*.js @**/*.tsx @**/*.jsx @.workflow/active/{session_id}/**/*
-EXPECTED: Conflict list with severity ratings, including ModuleOverlap conflicts with:
+EXPECTED: Conflict list with severity ratings, including:
-- Existing module list with scenarios
+- Validation of exploration conflict_indicators
-- Overlap analysis matrix
+- ModuleOverlap conflicts with overlap_analysis
 - Targeted clarification questions
-RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on breaking changes, migration needs, and functional overlaps | analysis=READ-ONLY
+RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
-"
+" --tool gemini --mode analysis --cd {project_root}
 
 Fallback: Qwen (same prompt) → Claude (manual analysis)
 
@@ -125,7 +169,7 @@ Task(subagent_type="cli-execution-agent", prompt=`
 
 ### 4. Return Structured Conflict Data
 
-⚠️ DO NOT generate CONFLICT_RESOLUTION.md file
+⚠️ Output to conflict-resolution.json (generated in Phase 4)
 
 Return JSON format for programmatic processing:
 
@@ -423,14 +467,30 @@ selectedStrategies.forEach(item => {
 
 console.log(`\nApplying ${modifications.length} modifications...`);
 
-// 2. Apply each modification using Edit tool
+// 2. Apply each modification using Edit tool (with fallback to context-package.json)
 const appliedModifications = [];
 const failedModifications = [];
+const fallbackConstraints = []; // For files that don't exist
+
 modifications.forEach((mod, idx) => {
   try {
     console.log(`[${idx + 1}/${modifications.length}] Modifying ${mod.file}...`);
+
+    // Check if target file exists (brainstorm files may not exist in lite workflow)
+    if (!file_exists(mod.file)) {
+      console.log(`  ⚠️ File does not exist; recording as a constraint in context-package.json`);
+      fallbackConstraints.push({
+        source: "conflict-resolution",
+        conflict_id: mod.conflict_id,
+        target_file: mod.file,
+        section: mod.section,
+        change_type: mod.change_type,
+        content: mod.new_content,
+        rationale: mod.rationale
+      });
+      return; // Skip to next modification
+    }
+
     if (mod.change_type === "update") {
       Edit({
         file_path: mod.file,
@@ -458,14 +518,45 @@ modifications.forEach((mod, idx) => {
   }
 });
 
-// 3. Update context-package.json with resolution details
+// 2b. Generate conflict-resolution.json output file
-const contextPackage = JSON.parse(Read(contextPath));
+const resolutionOutput = {
-contextPackage.conflict_detection.conflict_risk = "resolved";
+  session_id: sessionId,
-contextPackage.conflict_detection.resolved_conflicts = selectedStrategies.map(s => ({
+  resolved_at: new Date().toISOString(),
+  summary: {
+    total_conflicts: conflicts.length,
+    resolved_with_strategy: selectedStrategies.length,
+    custom_handling: customConflicts.length,
+    fallback_constraints: fallbackConstraints.length
+  },
+  resolved_conflicts: selectedStrategies.map(s => ({
   conflict_id: s.conflict_id,
   strategy_name: s.strategy.name,
-  clarifications: s.clarifications
+    strategy_approach: s.strategy.approach,
-}));
+    clarifications: s.clarifications || [],
+    modifications_applied: s.strategy.modifications?.filter(m =>
+      appliedModifications.some(am => am.conflict_id === s.conflict_id)
+    ) || []
+  })),
+  custom_conflicts: customConflicts.map(c => ({
+    id: c.id,
+    brief: c.brief,
+    category: c.category,
+    suggestions: c.suggestions,
+    overlap_analysis: c.overlap_analysis || null
+  })),
+  planning_constraints: fallbackConstraints, // Constraints for files that don't exist
+  failed_modifications: failedModifications
+};
+
+const resolutionPath = `.workflow/active/${sessionId}/.process/conflict-resolution.json`;
+Write(resolutionPath, JSON.stringify(resolutionOutput, null, 2));
+console.log(`\n📄 Conflict resolution saved: ${resolutionPath}`);
+
+// 3. Update context-package.json with resolution details (reference to JSON file)
+const contextPackage = JSON.parse(Read(contextPath));
+contextPackage.conflict_detection.conflict_risk = "resolved";
+contextPackage.conflict_detection.resolution_file = resolutionPath; // Reference to detailed JSON
+contextPackage.conflict_detection.resolved_conflicts = selectedStrategies.map(s => s.conflict_id);
 contextPackage.conflict_detection.custom_conflicts = customConflicts.map(c => c.id);
 contextPackage.conflict_detection.resolved_at = new Date().toISOString();
 Write(contextPath, JSON.stringify(contextPackage, null, 2));
@@ -538,12 +629,50 @@ return {
 ✓ Agent log saved to .workflow/active/{session_id}/.chat/
 ```
 
-## Output Format: Agent JSON Response
+## Output Format
 
+### Primary Output: conflict-resolution.json
+
+**Path**: `.workflow/active/{session_id}/.process/conflict-resolution.json`
+
+**Schema**:
+```json
+{
+  "session_id": "WFS-xxx",
+  "resolved_at": "ISO timestamp",
+  "summary": {
+    "total_conflicts": 3,
+    "resolved_with_strategy": 2,
+    "custom_handling": 1,
+    "fallback_constraints": 0
+  },
+  "resolved_conflicts": [
+    {
+      "conflict_id": "CON-001",
+      "strategy_name": "strategy name",
+      "strategy_approach": "implementation approach",
+      "clarifications": [],
+      "modifications_applied": []
+    }
+  ],
+  "custom_conflicts": [
+    {
+      "id": "CON-002",
+      "brief": "conflict summary",
+      "category": "ModuleOverlap",
+      "suggestions": ["suggestion 1", "suggestion 2"],
+      "overlap_analysis": null
+    }
+  ],
+  "planning_constraints": [],
+  "failed_modifications": []
+}
+```
+
+### Secondary: Agent JSON Response (stdout)
+
 **Focus**: Structured conflict data with actionable modifications for programmatic processing.
 
-**Format**: JSON to stdout (NO file generation)
 
 **Structure**: Defined in Phase 2, Step 4 (agent prompt)
 
 ### Key Requirements
@@ -591,11 +720,12 @@ If Edit tool fails mid-application:
 - Requires: `conflict_risk ≥ medium`
 
 **Output**:
-- Modified files:
+- Generated file:
+  - `.workflow/active/{session_id}/.process/conflict-resolution.json` (primary output)
+- Modified files (if exist):
   - `.workflow/active/{session_id}/.brainstorm/guidance-specification.md`
   - `.workflow/active/{session_id}/.brainstorm/{role}/analysis.md`
-  - `.workflow/active/{session_id}/.process/context-package.json` (conflict_risk → resolved)
+  - `.workflow/active/{session_id}/.process/context-package.json` (conflict_risk → resolved, resolution_file reference)
-- NO report file generation
 
 **User Interaction**:
 - **Iterative conflict processing**: One conflict at a time, not in batches
@@ -623,7 +753,7 @@ If Edit tool fails mid-application:
 ✓ guidance-specification.md updated with resolved conflicts
 ✓ Role analyses (*.md) updated with resolved conflicts
 ✓ context-package.json marked as "resolved" with clarification records
-✓ No CONFLICT_RESOLUTION.md file generated
+✓ conflict-resolution.json generated with full resolution details
 ✓ Modification summary includes:
   - Total conflicts
   - Resolved with strategy (count)
@@ -24,6 +24,37 @@ Orchestrator command that invokes `context-search-agent` to gather comprehensive
 - **Plan Mode**: Full comprehensive analysis (vs lightweight brainstorm mode)
 - **Standardized Output**: Generate `.workflow/active/{session}/.process/context-package.json`
 
+## Execution Process
+
+```
+Input Parsing:
+├─ Parse flags: --session
+└─ Parse: task_description (required)
+
+Step 1: Context-Package Detection
+└─ Decision (existing package):
+   ├─ Valid package exists → Return existing (skip execution)
+   └─ No valid package → Continue to Step 2
+
+Step 2: Complexity Assessment & Parallel Explore (NEW)
+├─ Analyze task_description → classify Low/Medium/High
+├─ Select exploration angles (1-4 based on complexity)
+├─ Launch N cli-explore-agents in parallel
+│  └─ Each outputs: exploration-{angle}.json
+└─ Generate explorations-manifest.json
+
+Step 3: Invoke Context-Search Agent (with exploration input)
+├─ Phase 1: Initialization & Pre-Analysis
+├─ Phase 2: Multi-Source Discovery
+│  ├─ Track 0: Exploration Synthesis (prioritize & deduplicate)
+│  └─ Track 1-4: Existing tracks
+└─ Phase 3: Synthesis & Packaging
+   └─ Generate context-package.json with exploration_results
+
+Step 4: Output Verification
+└─ Verify context-package.json contains exploration_results
+```
+
 ## Execution Flow
 
 ### Step 1: Context-Package Detection
@@ -48,13 +79,144 @@ if (file_exists(contextPackagePath)) {
 }
 ```
 
-### Step 2: Invoke Context-Search Agent
+### Step 2: Complexity Assessment & Parallel Explore
 
 **Only execute if Step 1 finds no valid package**
 
+```javascript
+// 2.1 Complexity Assessment
+function analyzeTaskComplexity(taskDescription) {
+  const text = taskDescription.toLowerCase();
+  if (/architect|refactor|restructure|modular|cross-module/.test(text)) return 'High';
+  if (/multiple|several|integrate|migrate|extend/.test(text)) return 'Medium';
+  return 'Low';
+}
+
+const ANGLE_PRESETS = {
+  architecture: ['architecture', 'dependencies', 'modularity', 'integration-points'],
+  security: ['security', 'auth-patterns', 'dataflow', 'validation'],
+  performance: ['performance', 'bottlenecks', 'caching', 'data-access'],
+  bugfix: ['error-handling', 'dataflow', 'state-management', 'edge-cases'],
+  feature: ['patterns', 'integration-points', 'testing', 'dependencies'],
+  refactor: ['architecture', 'patterns', 'dependencies', 'testing']
+};
+
+function selectAngles(taskDescription, complexity) {
+  const text = taskDescription.toLowerCase();
+  let preset = 'feature';
+  if (/refactor|architect|restructure/.test(text)) preset = 'architecture';
+  else if (/security|auth|permission/.test(text)) preset = 'security';
+  else if (/performance|slow|optimi/.test(text)) preset = 'performance';
+  else if (/fix|bug|error|issue/.test(text)) preset = 'bugfix';
+
+  const count = complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1);
+  return ANGLE_PRESETS[preset].slice(0, count);
+}
+
+const complexity = analyzeTaskComplexity(task_description);
+const selectedAngles = selectAngles(task_description, complexity);
+const sessionFolder = `.workflow/active/${session_id}/.process`;
+
+// 2.2 Launch Parallel Explore Agents
+const explorationTasks = selectedAngles.map((angle, index) =>
+  Task(
+    subagent_type="cli-explore-agent",
+    run_in_background=false,
+    description=`Explore: ${angle}`,
+    prompt=`
+## Task Objective
+Execute **${angle}** exploration for task planning context. Analyze codebase from this specific angle to discover relevant structure, patterns, and constraints.
+
+## Assigned Context
+- **Exploration Angle**: ${angle}
+- **Task Description**: ${task_description}
+- **Session ID**: ${session_id}
+- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
+- **Output File**: ${sessionFolder}/exploration-${angle}.json
+
+## MANDATORY FIRST STEPS (Execute by Agent)
+**You (cli-explore-agent) MUST execute these steps in order:**
+1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
+2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
+3. Execute: cat ~/.claude/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)
+
+## Exploration Strategy (${angle} focus)
+
+**Step 1: Structural Scan** (Bash)
+- get_modules_by_depth.sh → identify modules related to ${angle}
+- find/rg → locate files relevant to ${angle} aspect
+- Analyze imports/dependencies from ${angle} perspective
+
+**Step 2: Semantic Analysis** (Gemini CLI)
+- How does existing code handle ${angle} concerns?
+- What patterns are used for ${angle}?
+- Where would new code integrate from ${angle} viewpoint?
+
+**Step 3: Write Output**
+- Consolidate ${angle} findings into JSON
+- Identify ${angle}-specific clarification needs
+
+## Expected Output
+
+**File**: ${sessionFolder}/exploration-${angle}.json
+
+**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 3, follow schema exactly
+
+**Required Fields** (all ${angle} focused):
+- project_structure: Modules/architecture relevant to ${angle}
+- relevant_files: Files affected from ${angle} perspective
+  **IMPORTANT**: Use object format with relevance scores for synthesis:
+  \`[{path: "src/file.ts", relevance: 0.85, rationale: "Core ${angle} logic"}]\`
+  Scores: 0.7+ high priority, 0.5-0.7 medium, <0.5 low
+- patterns: ${angle}-related patterns to follow
+- dependencies: Dependencies relevant to ${angle}
+- integration_points: Where to integrate from ${angle} viewpoint (include file:line locations)
+- constraints: ${angle}-specific limitations/conventions
+- clarification_needs: ${angle}-related ambiguities (options array + recommended index)
+- _metadata.exploration_angle: "${angle}"
+
+## Success Criteria
+- [ ] Schema obtained via cat explore-json-schema.json
+- [ ] get_modules_by_depth.sh executed
+- [ ] At least 3 relevant files identified with ${angle} rationale
+- [ ] Patterns are actionable (code examples, not generic advice)
+- [ ] Integration points include file:line locations
+- [ ] Constraints are project-specific to ${angle}
+- [ ] JSON output follows schema exactly
+- [ ] clarification_needs includes options + recommended
+
+## Output
+Write: ${sessionFolder}/exploration-${angle}.json
+Return: 2-3 sentence summary of ${angle} findings
+`
+  )
+);
+
+// 2.3 Generate Manifest after all complete
+const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`).split('\n').filter(f => f.trim());
+const explorationManifest = {
+  session_id,
+  task_description,
+  timestamp: new Date().toISOString(),
+  complexity,
+  exploration_count: selectedAngles.length,
+  angles_explored: selectedAngles,
+  explorations: explorationFiles.map(file => {
+    const data = JSON.parse(Read(file));
+    return { angle: data._metadata.exploration_angle, file: file.split('/').pop(), path: file, index: data._metadata.exploration_index };
+  })
+};
+Write(`${sessionFolder}/explorations-manifest.json`, JSON.stringify(explorationManifest, null, 2));
+```
+
+### Step 3: Invoke Context-Search Agent
+
+**Only execute after Step 2 completes**
+
 ```javascript
 Task(
   subagent_type="context-search-agent",
+  run_in_background=false,
   description="Gather comprehensive context for plan",
   prompt=`
 ## Execution Mode
@@ -65,17 +227,24 @@ Task(
 - **Task Description**: ${task_description}
 - **Output Path**: .workflow/${session_id}/.process/context-package.json
 
+## Exploration Input (from Step 2)
+- **Manifest**: ${sessionFolder}/explorations-manifest.json
+- **Exploration Count**: ${explorationManifest.exploration_count}
+- **Angles**: ${explorationManifest.angles_explored.join(', ')}
+- **Complexity**: ${complexity}
+
 ## Mission
 Execute complete context-search-agent workflow for implementation planning:
 
 ### Phase 1: Initialization & Pre-Analysis
 1. **Project State Loading**: Read and parse `.workflow/project.json`. Use its `overview` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components. If file doesn't exist, proceed with fresh analysis.
 2. **Detection**: Check for existing context-package (early exit if valid)
-3. **Foundation**: Initialize code-index, get project structure, load docs
+3. **Foundation**: Initialize CodexLens, get project structure, load docs
 4. **Analysis**: Extract keywords, determine scope, classify complexity based on task description and project state
 
 ### Phase 2: Multi-Source Context Discovery
-Execute all 4 discovery tracks:
+Execute all discovery tracks:
+- **Track 0**: Exploration Synthesis (load ${sessionFolder}/explorations-manifest.json, prioritize critical_files, deduplicate patterns/integration_points)
 - **Track 1**: Historical archive analysis (query manifest.json for lessons learned)
 - **Track 2**: Reference documentation (CLAUDE.md, architecture docs)
 - **Track 3**: Web examples (use Exa MCP for unfamiliar tech/APIs)
@@ -84,7 +253,7 @@ Execute all 4 discovery tracks:
 ### Phase 3: Synthesis, Assessment & Packaging
 1. Apply relevance scoring and build dependency graph
 2. **Synthesize 4-source data**: Merge findings from all sources (archive > docs > code > web). **Prioritize the context from `project.json`** for architecture and tech stack unless code analysis reveals it's outdated.
-3. **Populate `project_context`**: Directly use the `overview` from `project.json` to fill the `project_context` section of the output `context-package.json`. Include technology_stack, architecture, key_components, and entry_points.
+3. **Populate `project_context`**: Directly use the `overview` from `project.json` to fill the `project_context` section of the output `context-package.json`. Include description, technology_stack, architecture, and key_components.
 4. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
 5. Perform conflict detection with risk assessment
 6. **Inject historical conflicts** from archive analysis into conflict_detection
@@ -93,11 +262,12 @@ Execute all 4 discovery tracks:
 ## Output Requirements
 Complete context-package.json with:
 - **metadata**: task_description, keywords, complexity, tech_stack, session_id
-- **project_context**: architecture_patterns, coding_conventions, tech_stack (sourced from `project.json` overview)
+- **project_context**: description, technology_stack, architecture, key_components (sourced from `project.json` overview)
 - **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
 - **dependencies**: {internal[], external[]} with dependency graph
 - **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
 - **conflict_detection**: {risk_level, risk_factors, affected_modules[], mitigation_strategy, historical_conflicts[]}
+- **exploration_results**: {manifest_path, exploration_count, angles, explorations[], aggregated_insights} (from Track 0)
 
 ## Quality Validation
 Before completion verify:
@@ -114,7 +284,7 @@ Report completion with statistics.
 )
 ```
 
-### Step 3: Output Verification
+### Step 4: Output Verification
 
 After agent completes, verify output:
 
@@ -124,6 +294,12 @@ const outputPath = `.workflow/${session_id}/.process/context-package.json`;
 if (!file_exists(outputPath)) {
   throw new Error("❌ Agent failed to generate context-package.json");
 }
+
+// Verify exploration_results included
+const pkg = JSON.parse(Read(outputPath));
+if (pkg.exploration_results?.exploration_count > 0) {
+  console.log(`✅ Exploration results aggregated: ${pkg.exploration_results.exploration_count} angles`);
+}
 ```
 
 ## Parameter Reference
@@ -144,6 +320,7 @@ Refer to `context-search-agent.md` Phase 3.7 for complete `context-package.json`
|
|||||||
- **dependencies**: Internal and external dependency graphs
|
- **dependencies**: Internal and external dependency graphs
|
||||||
- **brainstorm_artifacts**: Brainstorm documents with full content (if exists)
|
- **brainstorm_artifacts**: Brainstorm documents with full content (if exists)
|
||||||
- **conflict_detection**: Risk assessment with mitigation strategies and historical conflicts
|
- **conflict_detection**: Risk assessment with mitigation strategies and historical conflicts
|
||||||
|
- **exploration_results**: Aggregated exploration insights (from parallel explore phase)
|
||||||
|
|
||||||
## Historical Archive Analysis
|
## Historical Archive Analysis
|
||||||
|
|
||||||
|
|||||||
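The Step 4 verification above can be exercised on its own. A minimal sketch, assuming only the parsed package object (`verifyContextPackage` is an illustrative helper name, not part of the workflow API):

```javascript
// Hypothetical standalone check mirroring the Step 4 verification:
// fail hard when the package is missing, report exploration aggregation otherwise.
function verifyContextPackage(pkg) {
  if (pkg === null || typeof pkg !== 'object') {
    throw new Error('Agent failed to generate context-package.json');
  }
  const count = pkg.exploration_results?.exploration_count ?? 0;
  return count > 0
    ? `Exploration results aggregated: ${count} angles`
    : 'No exploration results (Track 0 skipped)';
}
```

Note the optional chaining: a package generated without Track 0 simply lacks `exploration_results`, and the check degrades to an informational message rather than an error.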
@@ -1,10 +1,9 @@
 ---
 name: task-generate-agent
 description: Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation
-argument-hint: "--session WFS-session-id [--cli-execute]"
+argument-hint: "--session WFS-session-id"
 examples:
   - /workflow:tools:task-generate-agent --session WFS-auth
-  - /workflow:tools:task-generate-agent --session WFS-auth --cli-execute
 ---
 
 # Generate Implementation Plan Command
@@ -15,18 +14,134 @@ Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.
 ## Core Philosophy
 - **Planning Only**: Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) - does NOT implement code
 - **Agent-Driven Document Generation**: Delegate plan generation to action-planning-agent
+- **N+1 Parallel Planning**: Auto-detect multi-module projects, enable parallel planning (2+1 or 3+1 mode)
 - **Progressive Loading**: Load context incrementally (Core → Selective → On-Demand) due to analysis.md file size
-- **Two-Phase Flow**: Discovery (context gathering) → Output (planning document generation)
 - **Memory-First**: Reuse loaded documents from conversation memory
 - **Smart Selection**: Load synthesis_output OR guidance + relevant role analyses, NOT all role analyses
 - **MCP-Enhanced**: Use MCP tools for advanced code analysis and research
 - **Path Clarity**: All `focus_paths` prefer absolute paths (e.g., `D:\\project\\src\\module`), or clear relative paths from project root (e.g., `./src/module`)
 
+## Execution Process
+
+```
+Input Parsing:
+├─ Parse flags: --session
+└─ Validation: session_id REQUIRED
+
+Phase 0: User Configuration (Interactive)
+├─ Question 1: Supplementary materials/guidelines?
+├─ Question 2: Execution method preference (Agent/CLI/Hybrid)
+├─ Question 3: CLI tool preference (if CLI selected)
+└─ Store: userConfig for agent prompt
+
+Phase 1: Context Preparation & Module Detection (Command)
+├─ Assemble session paths (metadata, context package, output dirs)
+├─ Provide metadata (session_id, mcp_capabilities)
+├─ Auto-detect modules from context-package + directory structure
+└─ Decision:
+   ├─ modules.length == 1 → Single Agent Mode (Phase 2A)
+   └─ modules.length >= 2 → Parallel Mode (Phase 2B + Phase 3)
+
+Phase 2A: Single Agent Planning (Original Flow)
+├─ Load context package (progressive loading strategy)
+├─ Generate Task JSON Files (.task/IMPL-*.json)
+├─ Create IMPL_PLAN.md
+└─ Generate TODO_LIST.md
+
+Phase 2B: N Parallel Planning (Multi-Module)
+├─ Launch N action-planning-agents simultaneously (one per module)
+├─ Each agent generates module-scoped tasks (IMPL-{prefix}{seq}.json)
+├─ Task ID format: IMPL-A1, IMPL-A2... / IMPL-B1, IMPL-B2...
+└─ Each module limited to ≤9 tasks
+
+Phase 3: Integration (+1 Coordinator, Multi-Module Only)
+├─ Collect all module task JSONs
+├─ Resolve cross-module dependencies (CROSS::{module}::{pattern} → actual ID)
+├─ Generate unified IMPL_PLAN.md (grouped by module)
+└─ Generate TODO_LIST.md (hierarchical: module → tasks)
+```
 
 ## Document Generation Lifecycle
 
-### Phase 1: Context Preparation (Command Responsibility)
+### Phase 0: User Configuration (Interactive)
 
-**Command prepares session paths and metadata for planning document generation.**
+**Purpose**: Collect user preferences before task generation to ensure generated tasks match execution expectations.
 
+**User Questions**:
+```javascript
+AskUserQuestion({
+  questions: [
+    {
+      question: "Do you have supplementary materials or guidelines to include?",
+      header: "Materials",
+      multiSelect: false,
+      options: [
+        { label: "No additional materials", description: "Use existing context only" },
+        { label: "Provide file paths", description: "I'll specify paths to include" },
+        { label: "Provide inline content", description: "I'll paste content directly" }
+      ]
+    },
+    {
+      question: "Select execution method for generated tasks:",
+      header: "Execution",
+      multiSelect: false,
+      options: [
+        { label: "Agent (Recommended)", description: "Claude agent executes tasks directly" },
+        { label: "Hybrid", description: "Agent orchestrates, calls CLI for complex steps" },
+        { label: "CLI Only", description: "All execution via CLI tools (codex/gemini/qwen)" }
+      ]
+    },
+    {
+      question: "If using CLI, which tool do you prefer?",
+      header: "CLI Tool",
+      multiSelect: false,
+      options: [
+        { label: "Codex (Recommended)", description: "Best for implementation tasks" },
+        { label: "Gemini", description: "Best for analysis and large context" },
+        { label: "Qwen", description: "Alternative analysis tool" },
+        { label: "Auto", description: "Let agent decide per-task" }
+      ]
+    }
+  ]
+})
+```
+
+**Handle Materials Response**:
+```javascript
+if (userConfig.materials === "Provide file paths") {
+  // Follow-up question for file paths
+  const pathsResponse = AskUserQuestion({
+    questions: [{
+      question: "Enter file paths to include (comma-separated or one per line):",
+      header: "Paths",
+      multiSelect: false,
+      options: [
+        { label: "Enter paths", description: "Provide paths in text input" }
+      ]
+    }]
+  })
+  userConfig.supplementaryPaths = parseUserPaths(pathsResponse)
+}
+```
+
+**Build userConfig**:
+```javascript
+const userConfig = {
+  supplementaryMaterials: {
+    type: "none|paths|inline",
+    content: [...], // Parsed paths or inline content
+  },
+  executionMethod: "agent|hybrid|cli",
+  preferredCliTool: "codex|gemini|qwen|auto",
+  enableResume: true // Always enable resume for CLI executions
+}
+```
+
+**Pass to Agent**: Include `userConfig` in agent prompt for Phase 2A/2B.
+
+### Phase 1: Context Preparation & Module Detection (Command Responsibility)
+
+**Command prepares session paths, metadata, and detects module structure.**
+
 **Session Path Structure**:
 ```
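Collapsing the Phase 0 answers into `userConfig` can be sketched as a pure mapping. This is a hedged illustration: `buildUserConfig` and its label-to-value tables are assumptions inferred from the question options above, not part of the documented API:

```javascript
// Illustrative sketch: map AskUserQuestion answer labels onto the userConfig shape.
// The label strings are assumptions taken from the Phase 0 question options.
function buildUserConfig(answers) {
  const methodByLabel = {
    'Agent (Recommended)': 'agent',
    'Hybrid': 'hybrid',
    'CLI Only': 'cli'
  };
  const toolByLabel = {
    'Codex (Recommended)': 'codex',
    'Gemini': 'gemini',
    'Qwen': 'qwen',
    'Auto': 'auto'
  };
  return {
    supplementaryMaterials: {
      type: answers.materials === 'No additional materials' ? 'none'
          : answers.materials === 'Provide file paths' ? 'paths' : 'inline',
      content: answers.materialContent ?? []
    },
    executionMethod: methodByLabel[answers.execution] ?? 'agent',
    preferredCliTool: toolByLabel[answers.cliTool] ?? 'auto',
    enableResume: true // Always enable resume for CLI executions
  };
}
```

Defaulting to `agent` / `auto` keeps the config valid even if a label changes upstream.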
@@ -35,8 +150,12 @@ Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.
 ├── .process/
 │   └── context-package.json    # Context package with artifact catalog
 ├── .task/                      # Output: Task JSON files
-├── IMPL_PLAN.md                # Output: Implementation plan
-└── TODO_LIST.md                # Output: TODO list
+│   ├── IMPL-A1.json            # Multi-module: prefixed by module
+│   ├── IMPL-A2.json
+│   ├── IMPL-B1.json
+│   └── ...
+├── IMPL_PLAN.md                # Output: Implementation plan (grouped by module)
+└── TODO_LIST.md                # Output: TODO list (hierarchical)
 ```
 
 **Command Preparation**:
@@ -47,10 +166,51 @@ Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.
 
 2. **Provide Metadata** (simple values):
    - `session_id`
-   - `execution_mode` (agent-mode | cli-execute-mode)
    - `mcp_capabilities` (available MCP tools)
 
-### Phase 2: Planning Document Generation (Agent Responsibility)
+3. **Auto Module Detection** (determines single vs parallel mode):
+```javascript
+function autoDetectModules(contextPackage, projectRoot) {
+  // === Complexity Gate: Only parallelize for High complexity ===
+  const complexity = contextPackage.metadata?.complexity || 'Medium';
+  if (complexity !== 'High') {
+    // Force single agent mode for Low/Medium complexity
+    // This maximizes agent context reuse for related tasks
+    return [{ name: 'main', prefix: '', paths: ['.'] }];
+  }
+
+  // Priority 1: Explicit frontend/backend separation
+  if (exists('src/frontend') && exists('src/backend')) {
+    return [
+      { name: 'frontend', prefix: 'A', paths: ['src/frontend'] },
+      { name: 'backend', prefix: 'B', paths: ['src/backend'] }
+    ];
+  }
+
+  // Priority 2: Monorepo structure
+  if (exists('packages/*') || exists('apps/*')) {
+    return detectMonorepoModules(); // Returns 2-3 main packages
+  }
+
+  // Priority 3: Context-package dependency clustering
+  const modules = clusterByDependencies(contextPackage.dependencies?.internal);
+  if (modules.length >= 2) return modules.slice(0, 3);
+
+  // Default: Single module (original flow)
+  return [{ name: 'main', prefix: '', paths: ['.'] }];
+}
+```
+
+**Decision Logic**:
+- `complexity !== 'High'` → Force Phase 2A (Single Agent, maximize context reuse)
+- `modules.length == 1` → Phase 2A (Single Agent, original flow)
+- `modules.length >= 2 && complexity == 'High'` → Phase 2B + Phase 3 (N+1 Parallel)
+
+**Note**: CLI tool usage is now determined semantically by action-planning-agent based on user's task description, not by flags.
+
+### Phase 2A: Single Agent Planning (Original Flow)
+
+**Condition**: `modules.length == 1` (no multi-module detected)
 
 **Purpose**: Generate IMPL_PLAN.md, task JSONs, and TODO_LIST.md - planning documents only, NOT code implementation.
 
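The decision logic above reduces to a small gate over the detection result. A minimal sketch (`choosePlanningMode` is an illustrative name; the real command inlines this decision):

```javascript
// Sketch of the Phase 1 decision gate: map the detected modules plus the
// complexity rating onto a planning mode. Parallel mode uses N planners + 1 coordinator.
function choosePlanningMode(modules, complexity) {
  if (complexity !== 'High' || modules.length === 1) {
    return { mode: 'single', phases: ['2A'] };
  }
  return { mode: 'parallel', phases: ['2B', '3'], agents: modules.length + 1 };
}
```

So a two-module High-complexity project yields three agents in total (2+1 mode), while any Low/Medium project collapses to Phase 2A regardless of how many modules were detected.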
@@ -58,6 +218,7 @@ Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.
 ```javascript
 Task(
   subagent_type="action-planning-agent",
+  run_in_background=false,
   description="Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
   prompt=`
 ## TASK OBJECTIVE
@@ -79,15 +240,47 @@ Output:
 
 ## CONTEXT METADATA
 Session ID: {session-id}
-Planning Mode: {agent-mode | cli-execute-mode}
 MCP Capabilities: {exa_code, exa_web, code_index}
 
+## USER CONFIGURATION (from Phase 0)
+Execution Method: ${userConfig.executionMethod}      // agent|hybrid|cli
+Preferred CLI Tool: ${userConfig.preferredCliTool}   // codex|gemini|qwen|auto
+Supplementary Materials: ${userConfig.supplementaryMaterials}
+
+## CLI TOOL SELECTION
+Based on userConfig.executionMethod:
+- "agent": No command field in implementation_approach steps
+- "hybrid": Add command field to complex steps only (agent handles simple steps)
+- "cli": Add command field to ALL implementation_approach steps
+
+CLI Resume Support (MANDATORY for all CLI commands):
+- Use --resume parameter to continue from previous task execution
+- Read previous task's cliExecutionId from session state
+- Format: ccw cli -p "[prompt]" --resume ${previousCliId} --tool ${tool} --mode write
+
+## EXPLORATION CONTEXT (from context-package.exploration_results)
+- Load exploration_results from context-package.json
+- Use aggregated_insights.critical_files for focus_paths generation
+- Apply aggregated_insights.constraints to acceptance criteria
+- Reference aggregated_insights.all_patterns for implementation approach
+- Use aggregated_insights.all_integration_points for precise modification locations
+- Use conflict_indicators for risk-aware task sequencing
+
+## CONFLICT RESOLUTION CONTEXT (if exists)
+- Check context-package.conflict_detection.resolution_file for conflict-resolution.json path
+- If exists, load .process/conflict-resolution.json:
+  - Apply planning_constraints as task constraints (for brainstorm-less workflows)
+  - Reference resolved_conflicts for implementation approach alignment
+  - Handle custom_conflicts with explicit task notes
+
 ## EXPECTED DELIVERABLES
 1. Task JSON Files (.task/IMPL-*.json)
    - 6-field schema (id, title, status, context_package_path, meta, context, flow_control)
    - Quantified requirements with explicit counts
    - Artifacts integration from context package
-   - Flow control with pre_analysis steps
+   - **focus_paths enhanced with exploration critical_files**
+   - Flow control with pre_analysis steps (include exploration integration_points analysis)
+   - **CLI Execution IDs and strategies (MANDATORY)**
 
 2. Implementation Plan (IMPL_PLAN.md)
    - Context analysis and artifact references
@@ -99,9 +292,30 @@ MCP Capabilities: {exa_code, exa_web, code_index}
    - Links to task JSONs and summaries
    - Matches task JSON hierarchy
 
+## CLI EXECUTION ID REQUIREMENTS (MANDATORY)
+Each task JSON MUST include:
+- **cli_execution_id**: Unique ID for CLI execution (format: `{session_id}-{task_id}`)
+- **cli_execution**: Strategy object based on depends_on:
+  - No deps → `{ "strategy": "new" }`
+  - 1 dep (single child) → `{ "strategy": "resume", "resume_from": "parent-cli-id" }`
+  - 1 dep (multiple children) → `{ "strategy": "fork", "resume_from": "parent-cli-id" }`
+  - N deps → `{ "strategy": "merge_fork", "merge_from": ["id1", "id2", ...] }`
+
+**CLI Execution Strategy Rules**:
+1. **new**: Task has no dependencies - starts fresh CLI conversation
+2. **resume**: Task has 1 parent AND that parent has only this child - continues same conversation
+3. **fork**: Task has 1 parent BUT parent has multiple children - creates new branch with parent context
+4. **merge_fork**: Task has multiple parents - merges all parent contexts into new conversation
+
+**Execution Command Patterns**:
+- new: `ccw cli -p "[prompt]" --tool [tool] --mode write --id [cli_execution_id]`
+- resume: `ccw cli -p "[prompt]" --resume [resume_from] --tool [tool] --mode write`
+- fork: `ccw cli -p "[prompt]" --resume [resume_from] --id [cli_execution_id] --tool [tool] --mode write`
+- merge_fork: `ccw cli -p "[prompt]" --resume [merge_from.join(',')] --id [cli_execution_id] --tool [tool] --mode write`
+
 ## QUALITY STANDARDS
 Hard Constraints:
-- Task count <= 12 (hard limit - request re-scope if exceeded)
+- Task count <= 18 (hard limit - request re-scope if exceeded)
 - All requirements quantified (explicit counts and enumerated lists)
 - Acceptance criteria measurable (include verification commands)
 - Artifact references mapped from context package
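The four strategy rules above can be sketched as a single selector over the dependency graph. This is an illustrative sketch, not the planner's actual code: `chooseCliStrategy` and `childCountOf` are hypothetical names, and callers would derive the child counts from the generated task JSONs.

```javascript
// Hedged sketch of the strategy rules: pick new/resume/fork/merge_fork from a
// task's depends_on list. `childCountOf(parentId)` returns how many tasks
// depend on that parent (derived from the task graph by the caller).
function chooseCliStrategy(task, childCountOf) {
  const deps = task.depends_on ?? [];
  if (deps.length === 0) return { strategy: 'new' };
  if (deps.length === 1) {
    const parent = deps[0];
    return childCountOf(parent) > 1
      ? { strategy: 'fork', resume_from: parent }   // parent context branches
      : { strategy: 'resume', resume_from: parent } // continue same conversation
  }
  return { strategy: 'merge_fork', merge_from: deps };
}
```

The `fork` vs `resume` split is the subtle case: it depends not on the task itself but on how many siblings share its parent, which is why the child count must come from the whole graph.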
@@ -117,4 +331,160 @@ Hard Constraints:
 )
 ```
 
-、
+### Phase 2B: N Parallel Planning (Multi-Module)
+
+**Condition**: `modules.length >= 2` (multi-module detected)
+
+**Purpose**: Launch N action-planning-agents simultaneously, one per module, for parallel task JSON generation.
+
+**Note**: Phase 2B agents generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md are generated by Phase 3 Coordinator.
+
+**Parallel Agent Invocation**:
+```javascript
+// Launch N agents in parallel (one per module)
+const planningTasks = modules.map(module =>
+  Task(
+    subagent_type="action-planning-agent",
+    run_in_background=false,
+    description=`Generate ${module.name} module task JSONs`,
+    prompt=`
+## TASK OBJECTIVE
+Generate task JSON files for ${module.name} module within workflow session
+
+IMPORTANT: This is PLANNING ONLY - generate task JSONs, NOT implementing code.
+IMPORTANT: Generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md by Phase 3 Coordinator.
+
+CRITICAL: Follow progressive loading strategy in agent specification
+
+## MODULE SCOPE
+- Module: ${module.name} (${module.type})
+- Focus Paths: ${module.paths.join(', ')}
+- Task ID Prefix: IMPL-${module.prefix}
+- Task Limit: ≤9 tasks
+- Other Modules: ${otherModules.join(', ')}
+
+## SESSION PATHS
+Input:
+- Session Metadata: .workflow/active/{session-id}/workflow-session.json
+- Context Package: .workflow/active/{session-id}/.process/context-package.json
+Output:
+- Task Dir: .workflow/active/{session-id}/.task/
+
+## CONTEXT METADATA
+Session ID: {session-id}
+MCP Capabilities: {exa_code, exa_web, code_index}
+
+## CROSS-MODULE DEPENDENCIES
+- Use placeholder: depends_on: ["CROSS::{module}::{pattern}"]
+- Example: depends_on: ["CROSS::B::api-endpoint"]
+- Phase 3 Coordinator resolves to actual task IDs
+
+## EXPECTED DELIVERABLES
+Task JSON Files (.task/IMPL-${module.prefix}*.json):
+- 6-field schema per agent specification
+- Task ID format: IMPL-${module.prefix}1, IMPL-${module.prefix}2, ...
+- Focus ONLY on ${module.name} module scope
+
+## SUCCESS CRITERIA
+- Task JSONs saved to .task/ with IMPL-${module.prefix}* naming
+- Cross-module dependencies use CROSS:: placeholder format
+- Return task count and brief summary
+`
+  )
+);
+
+// Execute all in parallel
+await Promise.all(planningTasks);
+```
+
+**Output Structure** (direct to .task/):
+```
+.task/
+├── IMPL-A1.json    # Module A (e.g., frontend)
+├── IMPL-A2.json
+├── IMPL-B1.json    # Module B (e.g., backend)
+├── IMPL-B2.json
+└── IMPL-C1.json    # Module C (e.g., shared)
+```
+
+**Task ID Naming**:
+- Format: `IMPL-{prefix}{seq}.json`
+- Prefix: A, B, C... (assigned by detection order)
+- Sequence: 1, 2, 3... (per-module increment)
+
+### Phase 3: Integration (+1 Coordinator Agent, Multi-Module Only)
+
+**Condition**: Only executed when `modules.length >= 2`
+
+**Purpose**: Collect all module tasks, resolve cross-module dependencies, generate unified IMPL_PLAN.md and TODO_LIST.md documents.
+
+**Coordinator Agent Invocation**:
+```javascript
+// Wait for all Phase 2B agents to complete
+const moduleResults = await Promise.all(planningTasks);
+
+// Launch +1 Coordinator Agent
+Task(
+  subagent_type="action-planning-agent",
+  run_in_background=false,
+  description="Integrate module tasks and generate unified documents",
+  prompt=`
+## TASK OBJECTIVE
+Integrate all module task JSONs, resolve cross-module dependencies, and generate unified IMPL_PLAN.md and TODO_LIST.md
+
+IMPORTANT: This is INTEGRATION ONLY - consolidate existing task JSONs, NOT creating new tasks.
+
+## SESSION PATHS
+Input:
+- Session Metadata: .workflow/active/{session-id}/workflow-session.json
+- Context Package: .workflow/active/{session-id}/.process/context-package.json
+- Task JSONs: .workflow/active/{session-id}/.task/IMPL-*.json (from Phase 2B)
+Output:
+- Updated Task JSONs: .workflow/active/{session-id}/.task/IMPL-*.json (resolved dependencies)
+- IMPL_PLAN: .workflow/active/{session-id}/IMPL_PLAN.md
+- TODO_LIST: .workflow/active/{session-id}/TODO_LIST.md
+
+## CONTEXT METADATA
+Session ID: {session-id}
+Modules: ${modules.map(m => m.name + '(' + m.prefix + ')').join(', ')}
+Module Count: ${modules.length}
+
+## INTEGRATION STEPS
+1. Collect all .task/IMPL-*.json, group by module prefix
+2. Resolve CROSS:: dependencies → actual task IDs, update task JSONs
+3. Generate IMPL_PLAN.md (multi-module format per agent specification)
+4. Generate TODO_LIST.md (hierarchical format per agent specification)
+
+## CROSS-MODULE DEPENDENCY RESOLUTION
+- Pattern: CROSS::{module}::{pattern} → IMPL-{module}* matching title/context
+- Example: CROSS::B::api-endpoint → IMPL-B1 (if B1 title contains "api-endpoint")
+- Log unresolved as warnings
+
+## EXPECTED DELIVERABLES
+1. Updated Task JSONs with resolved dependency IDs
+2. IMPL_PLAN.md - multi-module format with cross-dependency section
+3. TODO_LIST.md - hierarchical by module with cross-dependency section
+
+## SUCCESS CRITERIA
+- No CROSS:: placeholders remaining in task JSONs
+- IMPL_PLAN.md and TODO_LIST.md generated with multi-module structure
+- Return: task count, per-module breakdown, resolved dependency count
+`
+)
+```
+
+**Dependency Resolution Algorithm**:
+```javascript
+function resolveCrossModuleDependency(placeholder, allTasks) {
+  const [, targetModule, pattern] = placeholder.match(/CROSS::(\w+)::(.+)/);
+  const candidates = allTasks.filter(t =>
+    t.id.startsWith(`IMPL-${targetModule}`) &&
+    (t.title.toLowerCase().includes(pattern.toLowerCase()) ||
+     t.context?.description?.toLowerCase().includes(pattern.toLowerCase()))
+  );
+  return candidates.length > 0
+    ? candidates.sort((a, b) => a.id.localeCompare(b.id))[0].id
+    : placeholder; // Keep for manual resolution
+}
+```
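The resolution algorithm is directly runnable against a couple of hypothetical module-B tasks (the task titles below are invented for illustration; only the function itself comes from the document):

```javascript
// Resolution algorithm as documented, exercised on two sample tasks.
function resolveCrossModuleDependency(placeholder, allTasks) {
  const [, targetModule, pattern] = placeholder.match(/CROSS::(\w+)::(.+)/);
  const candidates = allTasks.filter(t =>
    t.id.startsWith(`IMPL-${targetModule}`) &&
    (t.title.toLowerCase().includes(pattern.toLowerCase()) ||
     t.context?.description?.toLowerCase().includes(pattern.toLowerCase()))
  );
  return candidates.length > 0
    ? candidates.sort((a, b) => a.id.localeCompare(b.id))[0].id
    : placeholder; // Keep for manual resolution
}

// Hypothetical sample tasks: both mention "api-endpoint", so sorting by id
// makes the resolution deterministic (lowest id wins).
const sampleTasks = [
  { id: 'IMPL-B2', title: 'Persist sessions', context: { description: 'api-endpoint consumers' } },
  { id: 'IMPL-B1', title: 'Expose /auth api-endpoint' }
];
```

Note the matching is a plain substring test, so `CROSS::B::api-endpoint` will not match a title that spells it "API endpoint" with a space; module planners and the coordinator must agree on pattern spelling.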
@@ -1,24 +1,23 @@
 ---
 name: task-generate-tdd
 description: Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation
-argument-hint: "--session WFS-session-id [--cli-execute]"
+argument-hint: "--session WFS-session-id"
 examples:
   - /workflow:tools:task-generate-tdd --session WFS-auth
-  - /workflow:tools:task-generate-tdd --session WFS-auth --cli-execute
 ---
 
 # Autonomous TDD Task Generation Command
 
 ## Overview
-Autonomous TDD task JSON and IMPL_PLAN.md generation using action-planning-agent with two-phase execution: discovery and document generation. Supports both agent-driven execution (default) and CLI tool execution modes. Generates complete Red-Green-Refactor cycles contained within each task.
+Autonomous TDD task JSON and IMPL_PLAN.md generation using action-planning-agent with two-phase execution: discovery and document generation. Generates complete Red-Green-Refactor cycles contained within each task.
 
 ## Core Philosophy
 - **Agent-Driven**: Delegate execution to action-planning-agent for autonomous operation
 - **Two-Phase Flow**: Discovery (context gathering) → Output (document generation)
 - **Memory-First**: Reuse loaded documents from conversation memory
 - **MCP-Enhanced**: Use MCP tools for advanced code analysis and research
-- **Pre-Selected Templates**: Command selects correct TDD template based on `--cli-execute` flag **before** invoking agent
-- **Agent Simplicity**: Agent receives pre-selected template and focuses only on content generation
+- **Semantic CLI Selection**: CLI tool usage determined from user's task description, not flags
+- **Agent Simplicity**: Agent generates content with semantic CLI detection
 - **Path Clarity**: All `focus_paths` prefer absolute paths (e.g., `D:\\project\\src\\module`), or clear relative paths from project root (e.g., `./src/module`)
 - **TDD-First**: Every feature starts with a failing test (Red phase)
 - **Feature-Complete Tasks**: Each task contains complete Red-Green-Refactor cycle
@@ -43,16 +42,40 @@ Autonomous TDD task JSON and IMPL_PLAN.md generation using action-planning-agent
 - Different tech stacks or domains within feature
 
 ### Task Limits
-- **Maximum 10 tasks** (hard limit for TDD workflows)
+- **Maximum 18 tasks** (hard limit for TDD workflows)
 - **Feature-based**: Complete functional units with internal TDD cycles
 - **Hierarchy**: Flat (≤5 simple features) | Two-level (6-10 for complex features with sub-features)
-- **Re-scope**: If >10 tasks needed, break project into multiple TDD workflow sessions
+- **Re-scope**: If >18 tasks needed, break project into multiple TDD workflow sessions
 
 ### TDD Cycle Mapping
 - **Old approach**: 1 feature = 3 tasks (TEST-N.M, IMPL-N.M, REFACTOR-N.M)
 - **Current approach**: 1 feature = 1 task (IMPL-N with internal Red-Green-Refactor phases)
 - **Complex features**: 1 container (IMPL-N) + subtasks (IMPL-N.M) when necessary
 
+## Execution Process
+
+```
+Input Parsing:
+├─ Parse flags: --session
+└─ Validation: session_id REQUIRED
+
+Phase 1: Discovery & Context Loading (Memory-First)
+├─ Load session context (if not in memory)
+├─ Load context package (if not in memory)
+├─ Load test context package (if not in memory)
+├─ Extract & load role analyses from context package
+├─ Load conflict resolution (if exists)
+└─ Optional: MCP external research
+
+Phase 2: Agent Execution (Document Generation)
+├─ Pre-agent template selection (semantic CLI detection)
+├─ Invoke action-planning-agent
+├─ Generate TDD Task JSON Files (.task/IMPL-*.json)
+│ └─ Each task: complete Red-Green-Refactor cycle internally
+├─ Create IMPL_PLAN.md (TDD variant)
+└─ Generate TODO_LIST.md with TDD phase indicators
+```
+
 ## Execution Lifecycle
 
 ### Phase 1: Discovery & Context Loading
@@ -62,11 +85,8 @@ Autonomous TDD task JSON and IMPL_PLAN.md generation using action-planning-agent
 ```javascript
 {
 "session_id": "WFS-[session-id]",
-"execution_mode": "agent-mode" | "cli-execute-mode", // Determined by flag
-"task_json_template_path": "~/.claude/workflows/cli-templates/prompts/workflow/task-json-agent-mode.txt"
-| "~/.claude/workflows/cli-templates/prompts/workflow/task-json-cli-mode.txt",
-// Path selected by command based on --cli-execute flag, agent reads it
 "workflow_type": "tdd",
+// Note: CLI tool usage is determined semantically by action-planning-agent based on user's task description
 "session_metadata": {
 // If in memory: use cached content
 // Else: Load from .workflow/active//{session-id}/workflow-session.json
@@ -93,7 +113,7 @@ Autonomous TDD task JSON and IMPL_PLAN.md generation using action-planning-agent
 // Existing test patterns and coverage analysis
 },
 "mcp_capabilities": {
-"code_index": true,
+"codex_lens": true,
 "exa_code": true,
 "exa_web": true
 }
@@ -132,9 +152,14 @@ Autonomous TDD task JSON and IMPL_PLAN.md generation using action-planning-agent
 roleAnalysisPaths.forEach(path => Read(path));
 ```
 
-5. **Load Conflict Resolution** (from context-package.json, if exists)
+5. **Load Conflict Resolution** (from conflict-resolution.json, if exists)
 ```javascript
-if (contextPackage.brainstorm_artifacts.conflict_resolution?.exists) {
+// Check for new conflict-resolution.json format
+if (contextPackage.conflict_detection?.resolution_file) {
+Read(contextPackage.conflict_detection.resolution_file) // .process/conflict-resolution.json
+}
+// Fallback: legacy brainstorm_artifacts path
+else if (contextPackage.brainstorm_artifacts?.conflict_resolution?.exists) {
 Read(contextPackage.brainstorm_artifacts.conflict_resolution.path)
 }
 ```
@@ -169,14 +194,14 @@ const templatePath = hasCliExecuteFlag
 ```javascript
 Task(
 subagent_type="action-planning-agent",
+run_in_background=false,
 description="Generate TDD task JSON and implementation plan",
 prompt=`
 ## Execution Context
 
 **Session ID**: WFS-{session-id}
 **Workflow Type**: TDD
-**Execution Mode**: {agent-mode | cli-execute-mode}
-**Task JSON Template Path**: {template_path}
+**Note**: CLI tool usage is determined semantically from user's task description
 
 ## Phase 1: Discovery Results (Provided Context)
 
@@ -204,7 +229,7 @@ If conflict_risk was medium/high, modifications have been applied to:
 - **guidance-specification.md**: Design decisions updated to resolve conflicts
 - **Role analyses (*.md)**: Recommendations adjusted for compatibility
 - **context-package.json**: Marked as "resolved" with conflict IDs
-- NO separate CONFLICT_RESOLUTION.md file (conflicts resolved in-place)
+- Conflict resolution results stored in conflict-resolution.json
 
 ### MCP Analysis Results (Optional)
 **Code Structure**: {mcp_code_index_results}
@@ -230,7 +255,7 @@ Refer to: @.claude/agents/action-planning-agent.md for:
 - Each task executes Red-Green-Refactor phases sequentially
 - Task count = Feature count (typically 5 features = 5 tasks)
 - Subtasks only when complexity >2500 lines or >6 files per cycle
-- **Maximum 10 tasks** (hard limit for TDD workflows)
+- **Maximum 18 tasks** (hard limit for TDD workflows)
 
 #### TDD Cycle Mapping
 - **Simple features**: IMPL-N with internal Red-Green-Refactor phases
@@ -241,16 +266,15 @@ Refer to: @.claude/agents/action-planning-agent.md for:
 
 ##### 1. TDD Task JSON Files (.task/IMPL-*.json)
 - **Location**: `.workflow/active//{session-id}/.task/`
-- **Template**: Read from `{template_path}` (pre-selected by command based on `--cli-execute` flag)
 - **Schema**: 5-field structure with TDD-specific metadata
 - `meta.tdd_workflow`: true (REQUIRED)
 - `meta.max_iterations`: 3 (Green phase test-fix cycle limit)
-- `meta.use_codex`: false (manual fixes by default)
 - `context.tdd_cycles`: Array with quantified test cases and coverage
 - `flow_control.implementation_approach`: Exactly 3 steps with `tdd_phase` field
 1. Red Phase (`tdd_phase: "red"`): Write failing tests
 2. Green Phase (`tdd_phase: "green"`): Implement to pass tests
 3. Refactor Phase (`tdd_phase: "refactor"`): Improve code quality
+- CLI tool usage determined semantically (add `command` field when user requests CLI execution)
 - **Details**: See action-planning-agent.md § TDD Task JSON Generation
 
 ##### 2. IMPL_PLAN.md (TDD Variant)
@@ -300,7 +324,7 @@ Refer to: @.claude/agents/action-planning-agent.md for:
 
 **Quality Gates** (Full checklist in action-planning-agent.md):
 - ✓ Quantification requirements enforced (explicit counts, measurable acceptance, exact targets)
-- ✓ Task count ≤10 (hard limit)
+- ✓ Task count ≤18 (hard limit)
 - ✓ Each task has meta.tdd_workflow: true
 - ✓ Each task has exactly 3 implementation steps with tdd_phase field
 - ✓ Green phase includes test-fix cycle logic
@@ -315,7 +339,7 @@ Generate all three documents and report completion status:
 - TDD cycles configured: N cycles with quantified test cases
 - Artifacts integrated: synthesis-spec, guidance-specification, N role analyses
 - Test context integrated: existing patterns and coverage
-- MCP enhancements: code-index, exa-research
+- MCP enhancements: CodexLens, exa-research
 - Session ready for TDD execution: /workflow:execute
 `
 )
@@ -355,10 +379,12 @@ const agentContext = {
 .flatMap(role => role.files)
 .map(file => Read(file.path)),
 
-// Load conflict resolution if exists (from context package)
-conflict_resolution: brainstorm_artifacts.conflict_resolution?.exists
+// Load conflict resolution if exists (prefer new JSON format)
+conflict_resolution: context_package.conflict_detection?.resolution_file
+? Read(context_package.conflict_detection.resolution_file) // .process/conflict-resolution.json
+: (brainstorm_artifacts?.conflict_resolution?.exists
 ? Read(brainstorm_artifacts.conflict_resolution.path)
-: null,
+: null),
 
 // Optional MCP enhancements
 mcp_analysis: executeMcpDiscovery()
@@ -390,7 +416,7 @@ This section provides quick reference for TDD task JSON structure. For complete
 │ ├── IMPL-3.2.json # Complex feature subtask (if needed)
 │ └── ...
 └── .process/
-├── CONFLICT_RESOLUTION.md # Conflict resolution strategies (if conflict_risk ≥ medium)
+├── conflict-resolution.json # Conflict resolution results (if conflict_risk ≥ medium)
 ├── test-context-package.json # Test coverage analysis
 ├── context-package.json # Input from context-gather
 ├── context_package_path # Path to smart context package
@@ -451,16 +477,14 @@ This section provides quick reference for TDD task JSON structure. For complete
 
 **Basic Usage**:
 ```bash
-# Agent mode (default, autonomous execution)
+# Standard execution
 /workflow:tools:task-generate-tdd --session WFS-auth
 
-# CLI tool mode (use Gemini/Qwen for generation)
-/workflow:tools:task-generate-tdd --session WFS-auth --cli-execute
+# With semantic CLI request (include in task description)
+# e.g., "Generate TDD tasks for auth module, use Codex for implementation"
 ```
 
-**Execution Modes**:
-- **Agent mode** (default): Uses `action-planning-agent` with agent-mode task template
-- **CLI mode** (`--cli-execute`): Uses Gemini/Qwen with cli-mode task template
+**CLI Tool Selection**: Determined semantically from user's task description. Include "use Codex/Gemini/Qwen" in your request for CLI execution.
 
 **Output**:
 - TDD task JSON files in `.task/` directory (IMPL-N.json format)
@@ -489,7 +513,7 @@ IMPL (Green phase) tasks include automatic test-fix cycle:
 3. **Success Path**: Tests pass → Complete task
 4. **Failure Path**: Tests fail → Enter iterative fix cycle:
 - **Gemini Diagnosis**: Analyze failures with bug-fix template
-- **Fix Application**: Manual (default) or Codex (if meta.use_codex=true)
+- **Fix Application**: Agent (default) or CLI (if `command` field present)
 - **Retest**: Verify fix resolves failures
 - **Repeat**: Up to max_iterations (default: 3)
 5. **Safety Net**: Auto-revert all changes if max iterations reached
@@ -498,5 +522,5 @@ IMPL (Green phase) tasks include automatic test-fix cycle:
 
 ## Configuration Options
 - **meta.max_iterations**: Number of fix attempts (default: 3 for TDD, 5 for test-gen)
-- **meta.use_codex**: Enable Codex automated fixes (default: false, manual)
+- **CLI tool usage**: Determined semantically from user's task description via `command` field in implementation_approach
 
@@ -17,6 +17,38 @@ Analyze test coverage and verify Red-Green-Refactor cycle execution for TDD work
 - Verify TDD cycle execution (Red -> Green -> Refactor)
 - Generate coverage and cycle reports
 
+## Execution Process
+
+```
+Input Parsing:
+├─ Parse flags: --session
+└─ Validation: session_id REQUIRED
+
+Phase 1: Extract Test Tasks
+└─ Find TEST-*.json files and extract focus_paths
+
+Phase 2: Run Test Suite
+└─ Decision (test framework):
+├─ Node.js → npm test --coverage --json
+├─ Python → pytest --cov --json-report
+└─ Other → [test_command] --coverage --json
+
+Phase 3: Parse Coverage Data
+├─ Extract line coverage percentage
+├─ Extract branch coverage percentage
+├─ Extract function coverage percentage
+└─ Identify uncovered lines/branches
+
+Phase 4: Verify TDD Cycle
+└─ FOR each TDD chain (TEST-N.M → IMPL-N.M → REFACTOR-N.M):
+├─ Red Phase: Verify tests created and failed initially
+├─ Green Phase: Verify tests now pass
+└─ Refactor Phase: Verify code quality improved
+
+Phase 5: Generate Analysis Report
+└─ Create tdd-cycle-report.md with coverage metrics and cycle verification
+```
+
 ## Execution Lifecycle
 
 ### Phase 1: Extract Test Tasks
@@ -24,6 +24,29 @@ Workflow coordinator that delegates test analysis to cli-execution-agent. Agent
 - Execute Gemini analysis via agent for test strategy generation
 - Validate agent outputs (gemini-test-analysis.md, TEST_ANALYSIS_RESULTS.md)
 
+## Execution Process
+
+```
+Input Parsing:
+├─ Parse flags: --session, --context
+└─ Validation: Both REQUIRED
+
+Phase 1: Context Preparation (Command)
+├─ Load workflow-session.json
+├─ Verify test session type is "test-gen"
+├─ Validate test-context-package.json
+└─ Determine strategy (Simple: 1-3 files | Medium: 4-6 | Complex: >6)
+
+Phase 2: Test Analysis Execution (Agent)
+├─ Execute Gemini analysis via cli-execution-agent
+└─ Generate TEST_ANALYSIS_RESULTS.md
+
+Phase 3: Output Validation (Command)
+├─ Verify gemini-test-analysis.md exists
+├─ Validate TEST_ANALYSIS_RESULTS.md
+└─ Confirm test requirements are actionable
+```
+
 ## Execution Lifecycle
 
 ### Phase 1: Context Preparation (Command Responsibility)
@@ -53,6 +76,7 @@ Workflow coordinator that delegates test analysis to cli-execution-agent. Agent
 ```javascript
 Task(
 subagent_type="cli-execution-agent",
+run_in_background=false,
 description="Analyze test coverage gaps and generate test strategy",
 prompt=`
 ## TASK OBJECTIVE
@@ -66,7 +90,7 @@ Template: ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.t
 
 ## EXECUTION STEPS
 1. Execute Gemini analysis:
-cd .workflow/active/{test_session_id}/.process && gemini -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --approval-mode yolo
+ccw cli -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --tool gemini --mode write --cd .workflow/active/{test_session_id}/.process
 
 2. Generate TEST_ANALYSIS_RESULTS.md:
 Synthesize gemini-test-analysis.md into standardized format for task generation
@@ -24,6 +24,36 @@ Orchestrator command that invokes `test-context-search-agent` to gather comprehe
 - **Source Context Loading**: Import implementation summaries from source session
 - **Standardized Output**: Generate `.workflow/active/{test_session_id}/.process/test-context-package.json`
 
+## Execution Process
+
+```
+Input Parsing:
+├─ Parse flags: --session
+└─ Validation: test_session_id REQUIRED
+
+Step 1: Test-Context-Package Detection
+└─ Decision (existing package):
+├─ Valid package exists → Return existing (skip execution)
+└─ No valid package → Continue to Step 2
+
+Step 2: Invoke Test-Context-Search Agent
+├─ Phase 1: Session Validation & Source Context Loading
+│ ├─ Detection: Check for existing test-context-package
+│ ├─ Test session validation
+│ └─ Source context loading (summaries, changed files)
+├─ Phase 2: Test Coverage Analysis
+│ ├─ Track 1: Existing test discovery
+│ ├─ Track 2: Coverage gap analysis
+│ └─ Track 3: Coverage statistics
+└─ Phase 3: Framework Detection & Packaging
+├─ Framework identification
+├─ Convention analysis
+└─ Generate test-context-package.json
+
+Step 3: Output Verification
+└─ Verify test-context-package.json created
+```
+
 ## Execution Flow
 
 ### Step 1: Test-Context-Package Detection
@@ -56,6 +86,7 @@ if (file_exists(testContextPath)) {
 ```javascript
 Task(
 subagent_type="test-context-search-agent",
+run_in_background=false,
 description="Gather test coverage context",
 prompt=`
 You are executing as test-context-search-agent (.claude/agents/test-context-search-agent.md).
@@ -1,11 +1,9 @@
 ---
 name: test-task-generate
 description: Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests
-argument-hint: "[--use-codex] [--cli-execute] --session WFS-test-session-id"
+argument-hint: "--session WFS-test-session-id"
 examples:
 - /workflow:tools:test-task-generate --session WFS-test-auth
-- /workflow:tools:test-task-generate --use-codex --session WFS-test-auth
-- /workflow:tools:test-task-generate --cli-execute --session WFS-test-auth
 ---
 
 # Generate Test Planning Documents Command
@@ -26,11 +24,34 @@ Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) u
 
 ### Test Generation (IMPL-001)
 - **Agent Mode** (default): @code-developer generates tests within agent context
-- **CLI Execute Mode** (`--cli-execute`): Use Codex CLI for autonomous test generation
+- **CLI Mode**: Use CLI tools when `command` field present in implementation_approach (determined semantically)
 
 ### Test Execution & Fix (IMPL-002+)
-- **Manual Mode** (default): Gemini diagnosis → user applies fixes
-- **Codex Mode** (`--use-codex`): Gemini diagnosis → Codex applies fixes with resume mechanism
+- **Agent Mode** (default): Gemini diagnosis → agent applies fixes
+- **CLI Mode**: Gemini diagnosis → CLI applies fixes (when `command` field present in implementation_approach)
 
+## Execution Process
+
+```
+Input Parsing:
+├─ Parse flags: --session
+└─ Validation: session_id REQUIRED
+
+Phase 1: Context Preparation (Command)
+├─ Assemble test session paths
+│ ├─ session_metadata_path
+│ ├─ test_analysis_results_path (REQUIRED)
+│ └─ test_context_package_path
+└─ Provide metadata (session_id, source_session_id)
+
+Phase 2: Test Document Generation (Agent)
+├─ Load TEST_ANALYSIS_RESULTS.md as primary requirements source
+├─ Generate Test Task JSON Files (.task/IMPL-*.json)
+│ ├─ IMPL-001: Test generation (meta.type: "test-gen")
+│ └─ IMPL-002+: Test execution & fix (meta.type: "test-fix")
+├─ Create IMPL_PLAN.md (test_session variant)
+└─ Generate TODO_LIST.md with test phase indicators
+```
+
 ## Document Generation Lifecycle
 
@@ -60,11 +81,11 @@ Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) u
 
 2. **Provide Metadata** (simple values):
 - `session_id`
-- `execution_mode` (agent-mode | cli-execute-mode)
-- `use_codex` flag (true | false)
 - `source_session_id` (if exists)
 - `mcp_capabilities` (available MCP tools)
 
+**Note**: CLI tool usage is now determined semantically from user's task description, not by flags.
+
 ### Phase 2: Test Document Generation (Agent Responsibility)
 
 **Purpose**: Generate test-specific IMPL_PLAN.md, task JSONs, and TODO_LIST.md - planning documents only, NOT test execution.
@@ -73,6 +94,7 @@ Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) u
 ```javascript
 Task(
 subagent_type="action-planning-agent",
+run_in_background=false,
 description="Generate test planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
 prompt=`
 ## TASK OBJECTIVE
@@ -111,11 +133,14 @@ Output:
 ## CONTEXT METADATA
 Session ID: {test-session-id}
 Workflow Type: test_session
-Planning Mode: {agent-mode | cli-execute-mode}
-Use Codex: {true | false}
 Source Session: {source-session-id} (if exists)
 MCP Capabilities: {exa_code, exa_web, code_index}
 
+## CLI TOOL SELECTION
+Determine CLI tool usage per-step based on user's task description:
+- If user specifies "use Codex/Gemini/Qwen for X" → Add command field to relevant steps
+- Default: Agent execution (no command field) unless user explicitly requests CLI
+
 ## TEST-SPECIFIC REQUIREMENTS SUMMARY
 (Detailed specifications in your agent definition)
 
@@ -126,25 +151,35 @@ MCP Capabilities: {exa_code, exa_web, code_index}
 Task Configuration:
 IMPL-001 (Test Generation):
 - meta.type: "test-gen"
-- meta.agent: "@code-developer" (agent-mode) OR CLI execution (cli-execute-mode)
+- meta.agent: "@code-developer"
 - meta.test_framework: Specify existing framework (e.g., "jest", "vitest", "pytest")
 - flow_control: Test generation strategy from TEST_ANALYSIS_RESULTS.md
+- CLI execution: Add `command` field when user requests (determined semantically)
 
 IMPL-002+ (Test Execution & Fix):
 - meta.type: "test-fix"
 - meta.agent: "@test-fix-agent"
-- meta.use_codex: true/false (based on flag)
 - flow_control: Test-fix cycle with iteration limits and diagnosis configuration
+- CLI execution: Add `command` field when user requests (determined semantically)
 
 ### Test-Fix Cycle Specification (IMPL-002+)
 Required flow_control fields:
 - max_iterations: 5
 - diagnosis_tool: "gemini"
 - diagnosis_template: "~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt"
-- fix_mode: "manual" OR "codex" (based on use_codex flag)
 - cycle_pattern: "test → gemini_diagnose → fix → retest"
 - exit_conditions: ["all_tests_pass", "max_iterations_reached"]
 - auto_revert_on_failure: true
+- CLI fix: Add `command` field when user specifies CLI tool usage
 
+### Automation Framework Configuration
+Select automation tools based on test requirements from TEST_ANALYSIS_RESULTS.md:
+- UI interaction testing → E2E browser automation (meta.e2e_framework)
+- API/database integration → integration test tools (meta.test_tools)
+- Performance metrics → load testing tools (meta.perf_framework)
+- Logic verification → unit test framework (meta.test_framework)
+
+**Tool Selection**: Detect from project config > suggest based on requirements
+
 ### TEST_ANALYSIS_RESULTS.md Mapping
 PRIMARY requirements source - extract and map to task JSONs:
@@ -159,8 +194,9 @@ PRIMARY requirements source - extract and map to task JSONs:
 ## EXPECTED DELIVERABLES
 1. Test Task JSON Files (.task/IMPL-*.json)
 - 6-field schema with quantified requirements from TEST_ANALYSIS_RESULTS.md
-- Test-specific metadata: type, agent, use_codex, test_framework, coverage_target
+- Test-specific metadata: type, agent, test_framework, coverage_target
 - flow_control includes: reusable_test_tools, test_commands (from project config)
+- CLI execution via `command` field when user requests (determined semantically)
 - Artifact references from test-context-package.json
 - Absolute paths in context.files_to_test
 
@@ -177,13 +213,13 @@ PRIMARY requirements source - extract and map to task JSONs:
 
 ## QUALITY STANDARDS
 Hard Constraints:
-- Task count: minimum 2, maximum 12
+- Task count: minimum 2, maximum 18
 - All requirements quantified from TEST_ANALYSIS_RESULTS.md
 - Test framework matches existing project framework
 - flow_control includes reusable_test_tools and test_commands from project
-- use_codex flag correctly set in IMPL-002+ tasks
 - Absolute paths for all focus_paths
 - Acceptance criteria include verification commands
+- CLI `command` field added only when user explicitly requests CLI tool usage
 
 ## SUCCESS CRITERIA
 - All test planning documents generated successfully
@@ -201,21 +237,18 @@ Hard Constraints:
|
|||||||
|
|
||||||
### Usage Examples
|
### Usage Examples
|
||||||
```bash
|
```bash
|
||||||
# Agent mode (default)
|
# Standard execution
|
||||||
/workflow:tools:test-task-generate --session WFS-test-auth
|
/workflow:tools:test-task-generate --session WFS-test-auth
|
||||||
|
|
||||||
# With automated Codex fixes
|
# With semantic CLI request (include in task description)
|
||||||
/workflow:tools:test-task-generate --use-codex --session WFS-test-auth
|
# e.g., "Generate tests, use Codex for implementation and fixes"
|
||||||
|
|
||||||
# CLI execution mode for test generation
|
|
||||||
/workflow:tools:test-task-generate --cli-execute --session WFS-test-auth
|
|
||||||
```
|
```
|
||||||
|
|
||||||
### Flag Behavior
|
### CLI Tool Selection
|
||||||
- **No flags**: `meta.use_codex=false` (manual fixes), agent-mode test generation
|
CLI tool usage is determined semantically from user's task description:
|
||||||
- **--use-codex**: `meta.use_codex=true` (Codex automated fixes in IMPL-002+)
|
- Include "use Codex" for automated fixes
|
||||||
- **--cli-execute**: CLI tool execution mode for IMPL-001 test generation
|
- Include "use Gemini" for analysis
|
||||||
- **Both flags**: CLI generation + automated Codex fixes
|
- Default: Agent execution (no `command` field)
|
||||||
|
|
||||||
### Output
|
### Output
|
||||||
- Test task JSON files in `.task/` directory (minimum 2)
|
- Test task JSON files in `.task/` directory (minimum 2)
|
||||||
|
|||||||
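The semantic CLI selection described in this diff (flags replaced by intent inferred from the task description) can be sketched as a simple keyword matcher. This is an illustrative assumption, not the repository's actual implementation; the function name `detectCliTool` and the exact keyword list are hypothetical.

```javascript
// Hypothetical sketch: infer CLI tool usage from a free-form task description.
// Mirrors the documented rules: "use Codex" → automated fixes, "use Gemini" →
// analysis, otherwise default agent execution (no `command` field emitted).
function detectCliTool(description) {
  const text = description.toLowerCase();
  if (text.includes("use codex")) return { tool: "codex", role: "fixes" };
  if (text.includes("use gemini")) return { tool: "gemini", role: "analysis" };
  return { tool: null, role: "agent" }; // default: agent execution
}

console.log(detectCliTool("Generate tests, use Codex for implementation and fixes"));
// → { tool: 'codex', role: 'fixes' }
```

A matcher like this keeps the command surface small: the same task text drives both planning and tool selection, so no extra flags need documenting.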
@@ -23,6 +23,44 @@ Extract animation and transition patterns from prompt inference and image refere

 - **Production-Ready**: CSS var() format, WCAG-compliant, semantic naming
 - **Default Behavior**: Non-interactive mode uses inferred patterns + best practices
+
+## Execution Process
+
+```
+Input Parsing:
+├─ Parse flags: --design-id, --session, --images, --focus, --interactive, --refine
+└─ Decision (mode detection):
+   ├─ --refine flag → Refinement Mode
+   └─ No --refine → Exploration Mode
+
+Phase 0: Setup & Input Validation
+├─ Step 1: Detect input mode & base path
+├─ Step 2: Prepare image references (if available)
+├─ Step 3: Load design tokens context
+└─ Step 4: Memory check (skip if exists)
+
+Phase 1: Animation Specification Generation
+├─ Step 1: Load project context
+├─ Step 2: Generate animation specification options (Agent Task 1)
+│  └─ Decision:
+│     ├─ Exploration Mode → Generate specification questions
+│     └─ Refinement Mode → Generate refinement options
+└─ Step 3: Verify options file created
+
+Phase 1.5: User Confirmation (Optional)
+└─ Decision (--interactive flag):
+   ├─ --interactive present → Present options, capture selection
+   └─ No --interactive → Skip to Phase 2
+
+Phase 2: Animation System Generation
+├─ Step 1: Load user selection or use defaults
+├─ Step 2: Create output directory
+└─ Step 3: Launch animation generation task (Agent Task 2)
+
+Phase 3: Verify Output
+├─ Step 1: Check files created
+└─ Step 2: Verify file sizes
+```

 ## Phase 0: Setup & Input Validation

 ### Step 1: Detect Input Mode & Base Path
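The mode decisions at the top of the animation-extract flow (Refinement vs Exploration, plus the optional Phase 1.5 confirmation) are plain flag checks. A minimal sketch; `detectMode` and `shouldConfirm` are hypothetical names, not functions from this repository.

```javascript
// Hypothetical sketch of the flag-driven decisions in the Execution Process tree.
function detectMode(argv) {
  // --refine selects Refinement Mode; anything else is Exploration Mode
  return argv.includes("--refine") ? "refinement" : "exploration";
}

function shouldConfirm(argv) {
  // Phase 1.5 (user confirmation) runs only when --interactive is present
  return argv.includes("--interactive");
}
```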
@@ -19,6 +19,48 @@ Synchronize finalized design system references to brainstorming artifacts, prepa

 - **Plan-Ready Output**: Ensure design artifacts discoverable by task-generate
 - **Minimal Reading**: Verify file existence, don't read design content
+
+## Execution Process
+
+```
+Input Parsing:
+├─ Parse flags: --session, --selected-prototypes
+└─ Validation: session_id REQUIRED
+
+Phase 1: Session & Artifact Validation
+├─ Step 1: Validate session exists
+├─ Step 2: Find latest design run
+├─ Step 3: Detect design system structure
+└─ Step 4: Select prototypes (--selected-prototypes OR all)
+
+Phase 1.1: Memory Check (Conditional)
+└─ Decision (current design run in synthesis):
+   ├─ Already updated → Skip Phase 2-5, EXIT
+   └─ Not found → Continue to Phase 2
+
+Phase 2: Load Target Artifacts
+├─ Read role analysis documents (files to update)
+├─ Read ui-designer/analysis.md (if exists)
+└─ Read prototype notes (minimal context)
+
+Phase 3: Update Synthesis Specification
+└─ Edit role analysis documents with UI/UX Guidelines section
+
+Phase 4A: Update Relevant Role Analysis Documents
+├─ ui-designer/analysis.md (always)
+├─ ux-expert/analysis.md (if animations exist)
+├─ system-architect/analysis.md (if layouts exist)
+└─ product-manager/analysis.md (if prototypes)
+
+Phase 4B: Create UI Designer Design System Reference
+└─ Write ui-designer/design-system-reference.md
+
+Phase 5: Update Context Package
+└─ Update context-package.json with design system references
+
+Phase 6: Completion
+└─ Report updated artifacts
+```

 ## Execution Protocol

 ### Phase 1: Session & Artifact Validation
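Phase 4A's conditional fan-out can be sketched as a small mapping from what the design run produced to the role documents that get updated. Illustrative only; the `rolesToUpdate` helper and its boolean run fields are assumptions, not the repository's API.

```javascript
// Hypothetical sketch: decide which role analysis documents receive
// design-system references, per the Phase 4A rules above.
function rolesToUpdate(run) {
  const roles = ["ui-designer"];                         // always updated
  if (run.hasAnimations) roles.push("ux-expert");        // if animations exist
  if (run.hasLayouts) roles.push("system-architect");    // if layouts exist
  if (run.hasPrototypes) roles.push("product-manager");  // if prototypes
  return roles.map((r) => `${r}/analysis.md`);
}
```

Keeping the rule table in one function makes the "minimal reading" principle easy to honor: the sync step only needs existence booleans, never the design content itself.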
@@ -27,8 +27,8 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*

 6. **Phase 10 (ui-assembly)** → **Attach tasks → Execute → Collapse** → Workflow complete

 **Phase Transition Mechanism**:
-- **Phase 5 (User Interaction)**: User confirms targets → IMMEDIATELY triggers Phase 7
-- **Phase 7-10 (Autonomous)**: `SlashCommand` invocation **ATTACHES** tasks to current workflow
+- **Phase 5 (User Interaction)**: User confirms targets → IMMEDIATELY dispatches Phase 7
+- **Phase 7-10 (Autonomous)**: SlashCommand dispatch **ATTACHES** tasks to current workflow
 - **Task Execution**: Orchestrator **EXECUTES** these attached tasks itself
 - **Task Collapse**: After tasks complete, collapse them into phase summary
 - **Phase Transition**: Automatically execute next phase after collapsing

@@ -36,10 +36,55 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*

 **Auto-Continue Mechanism**: TodoWrite tracks phase status with dynamic task attachment/collapse. After executing all attached tasks, you MUST immediately collapse them, restore phase summary, and execute the next phase. No user intervention required. The workflow is NOT complete until Phase 10 (UI assembly) finishes.

-**Task Attachment Model**: SlashCommand invocation is NOT delegation - it's task expansion. The orchestrator executes these attached tasks itself, not waiting for external completion.
+**Task Attachment Model**: SlashCommand dispatch is NOT delegation - it's task expansion. The orchestrator executes these attached tasks itself, not waiting for external completion.

 **Target Type Detection**: Automatically inferred from prompt/targets, or explicitly set via `--target-type`.

+## Execution Process
+
+```
+Input Parsing:
+├─ Parse flags: --input, --targets, --target-type, --device-type, --session, --style-variants, --layout-variants
+└─ Decision (input detection):
+   ├─ Contains * or glob matches → images_input (visual)
+   ├─ File/directory exists → code import source
+   └─ Pure text → design prompt
+
+Phase 1-4: Parameter Parsing & Initialization
+├─ Phase 1: Normalize parameters (legacy deprecation warning)
+├─ Phase 2: Intelligent prompt parsing (extract variant counts)
+├─ Phase 3: Device type inference (explicit > keywords > target_type > default)
+└─ Phase 4: Run initialization and directory setup
+
+Phase 5: Unified Target Inference
+├─ Priority: --pages/--components (legacy) → --targets → prompt analysis → synthesis → default
+├─ Display confirmation with modification options
+└─ User confirms → IMMEDIATELY triggers Phase 7
+
+Phase 6: Code Import (Conditional)
+└─ Decision (design_source):
+   ├─ code_only | hybrid → Dispatch /workflow:ui-design:import-from-code
+   └─ visual_only → Skip to Phase 7
+
+Phase 7: Style Extraction
+└─ Decision (needs_visual_supplement):
+   ├─ visual_only OR supplement needed → Dispatch /workflow:ui-design:style-extract
+   └─ code_only AND style_complete → Use code import
+
+Phase 8: Animation Extraction
+└─ Decision (should_extract_animation):
+   ├─ visual_only OR incomplete OR regenerate → Dispatch /workflow:ui-design:animation-extract
+   └─ code_only AND animation_complete → Use code import
+
+Phase 9: Layout Extraction
+└─ Decision (needs_visual_supplement OR NOT layout_complete):
+   ├─ True → Dispatch /workflow:ui-design:layout-extract
+   └─ False → Use code import
+
+Phase 10: UI Assembly
+└─ Dispatch /workflow:ui-design:generate → Workflow complete
+```

 ## Core Rules

 1. **Start Immediately**: TodoWrite initialization → Phase 7 execution

@@ -47,7 +92,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*

 3. **Parse & Pass**: Extract data from each output for next phase
 4. **Default to All**: When selecting variants/prototypes, use ALL generated items
 5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
-6. **⚠️ CRITICAL: Task Attachment Model** - SlashCommand invocation **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these attached tasks itself, not waiting for external completion. This is NOT delegation - it's task expansion.
+6. **⚠️ CRITICAL: Task Attachment Model** - SlashCommand dispatch **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these attached tasks itself, not waiting for external completion. This is NOT delegation - it's task expansion.
 7. **⚠️ CRITICAL: DO NOT STOP** - This is a continuous multi-phase workflow. After executing all attached tasks, you MUST immediately collapse them and execute the next phase. Workflow is NOT complete until Phase 10 (UI assembly) finishes.

 ## Parameter Requirements

@@ -310,13 +355,16 @@ detect_target_type(target_list):

 ```

 ### Phase 6: Code Import & Completeness Assessment (Conditional)
-```bash
+**Step 6.1: Dispatch** - Import design system from code files
+
+```javascript
 IF design_source IN ["code_only", "hybrid"]:
   REPORT: "🔍 Phase 6: Code Import ({design_source})"
   command = "/workflow:ui-design:import-from-code --design-id \"{design_id}\" --source \"{code_base_path}\""

   TRY:
-    # SlashCommand invocation ATTACHES import-from-code's tasks to current workflow
+    # SlashCommand dispatch ATTACHES import-from-code's tasks to current workflow
     # Orchestrator will EXECUTE these attached tasks itself:
     # - Phase 0: Discover and categorize code files
     # - Phase 1.1-1.3: Style/Animation/Layout Agent extraction

@@ -420,7 +468,10 @@ IF design_source IN ["code_only", "hybrid"]:

 ```

 ### Phase 7: Style Extraction
-```bash
+**Step 7.1: Dispatch** - Extract style design systems
+
+```javascript
 IF design_source == "visual_only" OR needs_visual_supplement:
   REPORT: "🎨 Phase 7: Style Extraction (variants: {style_variants})"
   command = "/workflow:ui-design:style-extract --design-id \"{design_id}\" " +

@@ -428,7 +479,7 @@ IF design_source == "visual_only" OR needs_visual_supplement:

             (prompt_text ? "--prompt \"{prompt_text}\" " : "") +
             "--variants {style_variants} --interactive"

-  # SlashCommand invocation ATTACHES style-extract's tasks to current workflow
+  # SlashCommand dispatch ATTACHES style-extract's tasks to current workflow
   # Orchestrator will EXECUTE these attached tasks itself
   SlashCommand(command)

@@ -438,7 +489,10 @@ ELSE:

 ```

 ### Phase 8: Animation Extraction
-```bash
+**Step 8.1: Dispatch** - Extract animation patterns
+
+```javascript
 # Determine if animation extraction is needed
 should_extract_animation = false

@@ -468,7 +522,7 @@ IF should_extract_animation:

 command = " ".join(command_parts)

-# SlashCommand invocation ATTACHES animation-extract's tasks to current workflow
+# SlashCommand dispatch ATTACHES animation-extract's tasks to current workflow
 # Orchestrator will EXECUTE these attached tasks itself
 SlashCommand(command)

@@ -481,7 +535,10 @@ ELSE:

 ```

 ### Phase 9: Layout Extraction
-```bash
+**Step 9.1: Dispatch** - Extract layout templates
+
+```javascript
 targets_string = ",".join(inferred_target_list)

 IF (design_source == "visual_only" OR needs_visual_supplement) OR (NOT layout_complete):

@@ -491,7 +548,7 @@ IF (design_source == "visual_only" OR needs_visual_supplement) OR (NOT layout_co

   (prompt_text ? "--prompt \"{prompt_text}\" " : "") +
   "--targets \"{targets_string}\" --variants {layout_variants} --device-type \"{device_type}\" --interactive"

-  # SlashCommand invocation ATTACHES layout-extract's tasks to current workflow
+  # SlashCommand dispatch ATTACHES layout-extract's tasks to current workflow
   # Orchestrator will EXECUTE these attached tasks itself
   SlashCommand(command)

@@ -501,7 +558,10 @@ ELSE:

 ```

 ### Phase 10: UI Assembly
-```bash
+**Step 10.1: Dispatch** - Assemble UI prototypes from design tokens and layout templates
+
+```javascript
 command = "/workflow:ui-design:generate --design-id \"{design_id}\"" + (--session ? " --session {session_id}" : "")

 total = style_variants × layout_variants × len(inferred_target_list)

@@ -511,7 +571,7 @@ REPORT: " → Pure assembly: Combining layout templates + design tokens"

 REPORT: " → Device: {device_type} (from layout templates)"
 REPORT: " → Assembly tasks: {total} combinations"

-# SlashCommand invocation ATTACHES generate's tasks to current workflow
+# SlashCommand dispatch ATTACHES generate's tasks to current workflow
 # Orchestrator will EXECUTE these attached tasks itself
 SlashCommand(command)

@@ -536,10 +596,10 @@ TodoWrite({todos: [

 // ⚠️ CRITICAL: Dynamic TodoWrite task attachment strategy:
 //
-// **Key Concept**: SlashCommand invocation ATTACHES tasks to current workflow.
+// **Key Concept**: SlashCommand dispatch ATTACHES tasks to current workflow.
 // Orchestrator EXECUTES these attached tasks itself, not waiting for external completion.
 //
-// Phase 7-10 SlashCommand Invocation Pattern (when tasks are attached):
+// Phase 7-10 SlashCommand Dispatch Pattern (when tasks are attached):
 // Example - Phase 7 with sub-tasks:
 // [
 //   {"content": "Phase 7: Style Extraction", "status": "in_progress", "activeForm": "Executing style extraction"},
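The assembly-count arithmetic reported in Phase 10 (`total = style_variants × layout_variants × len(inferred_target_list)`) is a plain product. A minimal sketch, with a hypothetical function name:

```javascript
// Sketch of Phase 10's reported task count: every style variant is combined
// with every layout variant for every inferred target.
function assemblyTotal(styleVariants, layoutVariants, targets) {
  return styleVariants * layoutVariants * targets.length;
}

// 3 styles × 2 layouts × 2 targets → 12 assembly combinations
console.log(assemblyTotal(3, 2, ["home", "dashboard"]));
// → 12
```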
@@ -21,6 +21,36 @@ Pure assembler that combines pre-extracted layout templates with design tokens t

 - `/workflow:ui-design:style-extract` → Complete design systems (design-tokens.json + style-guide.md)
 - `/workflow:ui-design:layout-extract` → Layout structure
+
+## Execution Process
+
+```
+Input Parsing:
+├─ Parse flags: --design-id, --session
+└─ Decision (base path resolution):
+   ├─ --design-id provided → Exact match by design ID
+   ├─ --session provided → Latest in session
+   └─ No flags → Latest globally
+
+Phase 1: Setup & Validation
+├─ Step 1: Resolve base path & parse configuration
+├─ Step 2: Load layout templates
+├─ Step 3: Validate design tokens
+└─ Step 4: Load animation tokens (optional)
+
+Phase 2: Assembly (Agent)
+├─ Step 1: Calculate agent grouping plan
+│  └─ Grouping rules:
+│     ├─ Style isolation: Each agent processes ONE style
+│     ├─ Balanced distribution: Layouts evenly split
+│     └─ Max 10 layouts per agent, max 6 concurrent agents
+├─ Step 2: Launch batched assembly tasks (parallel)
+└─ Step 3: Verify generated files
+
+Phase 3: Generate Preview Files
+├─ Step 1: Run preview generation script
+└─ Step 2: Verify preview files
+```

 ## Phase 1: Setup & Validation

 ### Step 1: Resolve Base Path & Parse Configuration

@@ -290,7 +320,7 @@ Read({base_path}/prototypes/{target}-style-{style_id}-layout-{layout_id}.html)

 ### Step 1: Run Preview Generation Script
 ```bash
-bash(~/.claude/scripts/ui-generate-preview.sh "{base_path}/prototypes")
+bash(ccw tool exec ui_generate_preview '{"prototypesDir":"{base_path}/prototypes"}')
 ```

 **Script generates**:

@@ -402,7 +432,7 @@ bash(test -f {base_path}/prototypes/compare.html && echo "exists")

 bash(mkdir -p {base_path}/prototypes)

 # Run preview script
-bash(~/.claude/scripts/ui-generate-preview.sh "{base_path}/prototypes")
+bash(ccw tool exec ui_generate_preview '{"prototypesDir":"{base_path}/prototypes"}')
 ```

 ## Output Structure

@@ -437,7 +467,7 @@ ERROR: Agent assembly failed

 → Check inputs exist, validate JSON structure

 ERROR: Script permission denied
-→ chmod +x ~/.claude/scripts/ui-generate-preview.sh
+→ Verify ccw tool is available: ccw tool list
 ```

 ### Recovery Strategies
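The Phase 2 grouping rules (style isolation, at most 10 layouts per agent, at most 6 concurrent agents) can be sketched as a chunking plan. This is a hedged illustration under those three documented constraints; `planAgents` and its batch structure are hypothetical, not the repository's implementation.

```javascript
// Hypothetical sketch of Phase 2's agent grouping plan: each agent handles
// ONE style, layouts are chunked at most 10 per agent, and agents run at
// most 6 at a time (later batches run after earlier ones finish).
const MAX_LAYOUTS_PER_AGENT = 10;
const MAX_CONCURRENT_AGENTS = 6;

function planAgents(styles, layouts) {
  const agents = [];
  for (const style of styles) { // style isolation: never mix styles per agent
    for (let i = 0; i < layouts.length; i += MAX_LAYOUTS_PER_AGENT) {
      agents.push({ style, layouts: layouts.slice(i, i + MAX_LAYOUTS_PER_AGENT) });
    }
  }
  // Cap concurrency by splitting the agent list into sequential batches.
  const batches = [];
  for (let i = 0; i < agents.length; i += MAX_CONCURRENT_AGENTS) {
    batches.push(agents.slice(i, i + MAX_CONCURRENT_AGENTS));
  }
  return batches;
}
```

For example, 2 styles with 12 layouts each would yield 4 agents (10 + 2 layouts per style) in a single concurrent batch.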
@@ -26,7 +26,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Write(*), Bash(*)

 7. Phase 4: Design system integration → **Execute orchestrator task** → Reports completion

 **Phase Transition Mechanism**:
-- **Task Attachment**: `SlashCommand` invocation **ATTACHES** tasks to current workflow
+- **Task Attachment**: SlashCommand dispatch **ATTACHES** tasks to current workflow
 - **Task Execution**: Orchestrator **EXECUTES** these attached tasks itself
 - **Task Collapse**: After tasks complete, collapse them into phase summary
 - **Phase Transition**: Automatically execute next phase after collapsing

@@ -34,7 +34,51 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Write(*), Bash(*)

 **Auto-Continue Mechanism**: TodoWrite tracks phase status with dynamic task attachment/collapse. After executing all attached tasks, you MUST immediately collapse them, restore phase summary, and execute the next phase. No user intervention required. The workflow is NOT complete until reaching Phase 4.

-**Task Attachment Model**: SlashCommand invocation is NOT delegation - it's task expansion. The orchestrator executes these attached tasks itself, not waiting for external completion.
+**Task Attachment Model**: SlashCommand dispatch is NOT delegation - it's task expansion. The orchestrator executes these attached tasks itself, not waiting for external completion.
+
+## Execution Process
+
+```
+Input Parsing:
+├─ Parse flags: --input, --session (legacy: --images, --prompt)
+└─ Decision (input detection):
+   ├─ Contains * or glob matches → images_input (visual)
+   ├─ File/directory exists → code import source
+   └─ Pure text → design prompt
+
+Phase 0: Parameter Parsing & Input Detection
+├─ Step 1: Normalize parameters (legacy deprecation warning)
+├─ Step 2: Detect design source (hybrid | code_only | visual_only)
+└─ Step 3: Initialize directories and metadata
+
+Phase 0.5: Code Import (Conditional)
+└─ Decision (design_source):
+   ├─ hybrid → Dispatch /workflow:ui-design:import-from-code
+   └─ Other → Skip to Phase 2
+
+Phase 2: Style Extraction
+└─ Decision (skip_style):
+   ├─ code_only AND style_complete → Use code import
+   └─ Otherwise → Dispatch /workflow:ui-design:style-extract
+
+Phase 2.3: Animation Extraction
+└─ Decision (skip_animation):
+   ├─ code_only AND animation_complete → Use code import
+   └─ Otherwise → Dispatch /workflow:ui-design:animation-extract
+
+Phase 2.5: Layout Extraction
+└─ Decision (skip_layout):
+   ├─ code_only AND layout_complete → Use code import
+   └─ Otherwise → Dispatch /workflow:ui-design:layout-extract
+
+Phase 3: UI Assembly
+└─ Dispatch /workflow:ui-design:generate
+
+Phase 4: Design System Integration
+└─ Decision (session_id):
+   ├─ Provided → Dispatch /workflow:ui-design:update
+   └─ Not provided → Standalone completion
+```

 ## Core Rules

@@ -42,7 +86,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Write(*), Bash(*)

 2. **No Preliminary Validation**: Sub-commands handle their own validation
 3. **Parse & Pass**: Extract data from each output for next phase
 4. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
-5. **⚠️ CRITICAL: Task Attachment Model** - SlashCommand invocation **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these attached tasks itself, not waiting for external completion. This is NOT delegation - it's task expansion.
+5. **⚠️ CRITICAL: Task Attachment Model** - SlashCommand dispatch **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these attached tasks itself, not waiting for external completion. This is NOT delegation - it's task expansion.
 6. **⚠️ CRITICAL: DO NOT STOP** - This is a continuous multi-phase workflow. After executing all attached tasks, you MUST immediately collapse them and execute the next phase. Workflow is NOT complete until Phase 4.

 ## Parameter Requirements

@@ -232,7 +276,9 @@ TodoWrite({todos: [

 ### Phase 0.5: Code Import & Completeness Assessment (Conditional)

-```bash
+**Step 0.5.1: Dispatch** - Import design system from code files
+
+```javascript
 # Only execute if code files detected
 IF design_source == "hybrid":
   REPORT: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

@@ -245,7 +291,7 @@ IF design_source == "hybrid":

   "--source \"{code_base_path}\""

   TRY:
-    # SlashCommand invocation ATTACHES import-from-code's tasks to current workflow
+    # SlashCommand dispatch ATTACHES import-from-code's tasks to current workflow
     # Orchestrator will EXECUTE these attached tasks itself:
     # - Phase 0: Discover and categorize code files
     # - Phase 1.1-1.3: Style/Animation/Layout Agent extraction

@@ -336,7 +382,9 @@ TodoWrite(mark_completed: "Initialize and detect design source",

 ### Phase 2: Style Extraction

-```bash
+**Step 2.1: Dispatch** - Extract style design system
+
+```javascript
 # Determine if style extraction needed
 skip_style = (design_source == "code_only" AND style_complete)

@@ -361,7 +409,7 @@ ELSE:

 extract_command = " ".join(command_parts)

-# SlashCommand invocation ATTACHES style-extract's tasks to current workflow
+# SlashCommand dispatch ATTACHES style-extract's tasks to current workflow
 # Orchestrator will EXECUTE these attached tasks itself
 SlashCommand(extract_command)

@@ -371,7 +419,9 @@ ELSE:

 ### Phase 2.3: Animation Extraction

-```bash
+**Step 2.3.1: Dispatch** - Extract animation patterns
+
+```javascript
 skip_animation = (design_source == "code_only" AND animation_complete)

 IF skip_animation:

@@ -392,7 +442,7 @@ ELSE:

 animation_extract_command = " ".join(command_parts)

-# SlashCommand invocation ATTACHES animation-extract's tasks to current workflow
+# SlashCommand dispatch ATTACHES animation-extract's tasks to current workflow
 # Orchestrator will EXECUTE these attached tasks itself
 SlashCommand(animation_extract_command)

@@ -402,7 +452,9 @@ ELSE:

 ### Phase 2.5: Layout Extraction

-```bash
+**Step 2.5.1: Dispatch** - Extract layout templates
+
+```javascript
 skip_layout = (design_source == "code_only" AND layout_complete)
|
||||||
|
|
||||||
IF skip_layout:
|
IF skip_layout:
|
||||||
@@ -425,7 +477,7 @@ ELSE:
|
|||||||
|
|
||||||
layout_extract_command = " ".join(command_parts)
|
layout_extract_command = " ".join(command_parts)
|
||||||
|
|
||||||
# SlashCommand invocation ATTACHES layout-extract's tasks to current workflow
|
# SlashCommand dispatch ATTACHES layout-extract's tasks to current workflow
|
||||||
# Orchestrator will EXECUTE these attached tasks itself
|
# Orchestrator will EXECUTE these attached tasks itself
|
||||||
SlashCommand(layout_extract_command)
|
SlashCommand(layout_extract_command)
|
||||||
|
|
||||||
@@ -435,11 +487,13 @@ ELSE:
|
|||||||
|
|
||||||
### Phase 3: UI Assembly
|
### Phase 3: UI Assembly
|
||||||
|
|
||||||
```bash
|
**Step 3.1: Dispatch** - Assemble UI prototypes from design tokens and layout templates
|
||||||
|
|
||||||
|
```javascript
|
||||||
REPORT: "🚀 Phase 3: UI Assembly"
|
REPORT: "🚀 Phase 3: UI Assembly"
|
||||||
generate_command = f"/workflow:ui-design:generate --design-id \"{design_id}\""
|
generate_command = f"/workflow:ui-design:generate --design-id \"{design_id}\""
|
||||||
|
|
||||||
# SlashCommand invocation ATTACHES generate's tasks to current workflow
|
# SlashCommand dispatch ATTACHES generate's tasks to current workflow
|
||||||
# Orchestrator will EXECUTE these attached tasks itself
|
# Orchestrator will EXECUTE these attached tasks itself
|
||||||
SlashCommand(generate_command)
|
SlashCommand(generate_command)
|
||||||
|
|
||||||
@@ -449,12 +503,14 @@ TodoWrite(mark_completed: "Assemble UI", mark_in_progress: session_id ? "Integra
|
|||||||
|
|
||||||
### Phase 4: Design System Integration
|
### Phase 4: Design System Integration
|
||||||
|
|
||||||
```bash
|
**Step 4.1: Dispatch** - Integrate design system into workflow session
|
||||||
|
|
||||||
|
```javascript
|
||||||
IF session_id:
|
IF session_id:
|
||||||
REPORT: "🚀 Phase 4: Design System Integration"
|
REPORT: "🚀 Phase 4: Design System Integration"
|
||||||
update_command = f"/workflow:ui-design:update --session {session_id}"
|
update_command = f"/workflow:ui-design:update --session {session_id}"
|
||||||
|
|
||||||
# SlashCommand invocation ATTACHES update's tasks to current workflow
|
# SlashCommand dispatch ATTACHES update's tasks to current workflow
|
||||||
# Orchestrator will EXECUTE these attached tasks itself
|
# Orchestrator will EXECUTE these attached tasks itself
|
||||||
SlashCommand(update_command)
|
SlashCommand(update_command)
|
||||||
|
|
||||||
@@ -580,10 +636,10 @@ TodoWrite({todos: [
|
|||||||
|
|
||||||
// ⚠️ CRITICAL: Dynamic TodoWrite task attachment strategy:
|
// ⚠️ CRITICAL: Dynamic TodoWrite task attachment strategy:
|
||||||
//
|
//
|
||||||
// **Key Concept**: SlashCommand invocation ATTACHES tasks to current workflow.
|
// **Key Concept**: SlashCommand dispatch ATTACHES tasks to current workflow.
|
||||||
// Orchestrator EXECUTES these attached tasks itself, not waiting for external completion.
|
// Orchestrator EXECUTES these attached tasks itself, not waiting for external completion.
|
||||||
//
|
//
|
||||||
// Phase 2-4 SlashCommand Invocation Pattern (when tasks are attached):
|
// Phase 2-4 SlashCommand Dispatch Pattern (when tasks are attached):
|
||||||
// Example - Phase 2 with sub-tasks:
|
// Example - Phase 2 with sub-tasks:
|
||||||
// [
|
// [
|
||||||
// {"content": "Phase 0: Initialize and Detect Design Source", "status": "completed", "activeForm": "Initializing"},
|
// {"content": "Phase 0: Initialize and Detect Design Source", "status": "completed", "activeForm": "Initializing"},
|
||||||
@@ -646,7 +702,7 @@ TodoWrite({todos: [
|
|||||||
|
|
||||||
- **Input**: `--images` (glob pattern) and/or `--prompt` (text/file paths) + optional `--session`
|
- **Input**: `--images` (glob pattern) and/or `--prompt` (text/file paths) + optional `--session`
|
||||||
- **Output**: Complete design system in `{base_path}/` (style-extraction, layout-extraction, prototypes)
|
- **Output**: Complete design system in `{base_path}/` (style-extraction, layout-extraction, prototypes)
|
||||||
- **Sub-commands Called**:
|
- **Sub-commands Dispatched**:
|
||||||
1. `/workflow:ui-design:import-from-code` (Phase 0.5, conditional - if code files detected)
|
1. `/workflow:ui-design:import-from-code` (Phase 0.5, conditional - if code files detected)
|
||||||
2. `/workflow:ui-design:style-extract` (Phase 2 - complete design systems)
|
2. `/workflow:ui-design:style-extract` (Phase 2 - complete design systems)
|
||||||
3. `/workflow:ui-design:animation-extract` (Phase 2.3 - animation tokens)
|
3. `/workflow:ui-design:animation-extract` (Phase 2.3 - animation tokens)
|
||||||
@@ -43,6 +43,25 @@ Extract design system tokens from source code files (CSS/SCSS/JS/TS/HTML) using

 ## Execution Process

+```
+Input Parsing:
+├─ Parse flags: --design-id, --session, --source
+└─ Decision (base path resolution):
+   ├─ --design-id provided → Exact match by design ID
+   ├─ --session provided → Latest design run in session
+   └─ Neither → ERROR: Must provide --design-id or --session
+
+Phase 0: Setup & File Discovery
+├─ Step 1: Resolve base path
+├─ Step 2: Initialize directories
+└─ Step 3: Discover files using script
+
+Phase 1: Parallel Agent Analysis (3 agents)
+├─ Style Agent → design-tokens.json + code_snippets
+├─ Animation Agent → animation-tokens.json + code_snippets
+└─ Layout Agent → layout-templates.json + code_snippets
+```
+
 ### Step 1: Setup & File Discovery

 **Purpose**: Initialize session, discover and categorize code files
@@ -87,7 +106,7 @@ echo " Output: $base_path"

 # 3. Discover files using script
 discovery_file="${intermediates_dir}/discovered-files.json"
-~/.claude/scripts/discover-design-files.sh "$source" "$discovery_file"
+ccw tool exec discover_design_files '{"sourceDir":"'"$source"'","outputPath":"'"$discovery_file"'"}'

 echo " Output: $discovery_file"
 ```
@@ -142,6 +161,7 @@ echo "[Phase 1] Starting parallel agent analysis (3 agents)"

 ```javascript
 Task(subagent_type="ui-design-agent",
+     run_in_background=false,
 prompt="[STYLE_TOKENS_EXTRACTION]
 Extract visual design tokens from code files using code import extraction pattern.

@@ -161,14 +181,14 @@ Task(subagent_type="ui-design-agent",
 - Pattern: rg → Extract values → Compare → If different → Read full context with comments → Record conflict
 - Alternative (if many files): Execute CLI analysis for comprehensive report:
 \`\`\`bash
-cd ${source} && gemini -p \"
+ccw cli -p \"
 PURPOSE: Detect color token conflicts across all CSS/SCSS/JS files
 TASK: • Scan all files for color definitions • Identify conflicting values • Extract semantic comments
 MODE: analysis
 CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts
 EXPECTED: JSON report listing conflicts with file:line, values, semantic context
 RULES: Focus on core tokens | Report ALL variants | analysis=READ-ONLY
-\"
+\" --tool gemini --mode analysis --cd ${source}
 \`\`\`

 **Step 1: Load file list**
@@ -257,6 +277,7 @@ Task(subagent_type="ui-design-agent",

 ```javascript
 Task(subagent_type="ui-design-agent",
+     run_in_background=false,
 prompt="[ANIMATION_TOKEN_GENERATION_TASK]
 Extract animation tokens from code files using code import extraction pattern.

@@ -276,14 +297,14 @@ Task(subagent_type="ui-design-agent",
 - Pattern: rg → Identify animation types → Map framework usage → Prioritize extraction targets
 - Alternative (if complex framework mix): Execute CLI analysis for comprehensive report:
 \`\`\`bash
-cd ${source} && gemini -p \"
+ccw cli -p \"
 PURPOSE: Detect animation frameworks and patterns
 TASK: • Identify frameworks • Map animation patterns • Categorize by complexity
 MODE: analysis
 CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts
 EXPECTED: JSON report listing frameworks, animation types, file locations
 RULES: Focus on framework consistency | Map all animations | analysis=READ-ONLY
-\"
+\" --tool gemini --mode analysis --cd ${source}
 \`\`\`

 **Step 1: Load file list**
@@ -336,6 +357,7 @@ Task(subagent_type="ui-design-agent",

 ```javascript
 Task(subagent_type="ui-design-agent",
+     run_in_background=false,
 prompt="[LAYOUT_TEMPLATE_GENERATION_TASK]
 Extract layout patterns from code files using code import extraction pattern.

@@ -355,14 +377,14 @@ Task(subagent_type="ui-design-agent",
 - Pattern: rg → Count occurrences → Classify by frequency → Prioritize universal components
 - Alternative (if large codebase): Execute CLI analysis for comprehensive categorization:
 \`\`\`bash
-cd ${source} && gemini -p \"
+ccw cli -p \"
 PURPOSE: Classify components as universal vs specialized
 TASK: • Identify UI components • Classify reusability • Map layout systems
 MODE: analysis
 CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts @**/*.html
 EXPECTED: JSON report categorizing components, layout patterns, naming conventions
 RULES: Focus on component reusability | Identify layout systems | analysis=READ-ONLY
-\"
+\" --tool gemini --mode analysis --cd ${source}
 \`\`\`

 **Step 1: Load file list**
@@ -23,6 +23,39 @@ This command separates the "scaffolding" (HTML structure and CSS layout) from th
 - **Device-Aware**: Optimized for specific device types (desktop, mobile, tablet, responsive)
 - **Token-Based**: CSS uses `var()` placeholders for spacing and breakpoints

+## Execution Process
+
+```
+Input Parsing:
+├─ Parse flags: --design-id, --session, --images, --prompt, --targets, --variants, --device-type, --interactive, --refine
+└─ Decision (mode detection):
+   ├─ --refine flag → Refinement Mode (variants_count = 1)
+   └─ No --refine → Exploration Mode (variants_count = --variants OR 3)
+
+Phase 0: Setup & Input Validation
+├─ Step 1: Detect input, mode & targets
+├─ Step 2: Load inputs & create directories
+└─ Step 3: Memory check (skip if cached)
+
+Phase 1: Layout Concept/Refinement Options Generation
+├─ Step 0.5: Load existing layout (Refinement Mode only)
+├─ Step 1: Generate options (Agent Task 1)
+│  └─ Decision:
+│     ├─ Exploration Mode → Generate contrasting layout concepts
+│     └─ Refinement Mode → Generate refinement options
+└─ Step 2: Verify options file created
+
+Phase 1.5: User Confirmation (Optional)
+└─ Decision (--interactive flag):
+   ├─ --interactive present → Present options, capture selection
+   └─ No --interactive → Skip to Phase 2
+
+Phase 2: Layout Template Generation
+├─ Step 1: Load user selections or default to all
+├─ Step 2: Launch parallel agent tasks
+└─ Step 3: Verify output files
+```
+
 ## Phase 0: Setup & Input Validation

 ### Step 1: Detect Input, Mode & Targets
@@ -33,6 +33,29 @@ Converts design run extraction results into shareable reference package with:

 ## Execution Process

+```
+Input Parsing:
+├─ Parse flags: --design-run, --package-name, --output-dir
+└─ Validation:
+   ├─ --design-run and --package-name REQUIRED
+   └─ Package name format: lowercase, alphanumeric, hyphens only
+
+Phase 0: Setup & Validation
+├─ Step 1: Validate required parameters
+├─ Step 2: Validate package name format
+├─ Step 3: Validate design run exists
+├─ Step 4: Check required extraction files (design-tokens.json, layout-templates.json)
+└─ Step 5: Setup output directory
+
+Phase 1: Prepare Component Data
+├─ Step 1: Copy layout templates
+├─ Step 2: Copy design tokens
+└─ Step 3: Copy animation tokens (optional)
+
+Phase 2: Preview Generation (Agent)
+└─ Generate preview.html + preview.css via ui-design-agent
+```
+
 ### Phase 0: Setup & Validation

 **Purpose**: Validate inputs, prepare output directory
@@ -19,6 +19,43 @@ Extract design style from reference images or text prompts using Claude's built-
 - **Dual Mode**: Exploration (multiple contrasting variants) or Refinement (single design fine-tuning)
 - **Production-Ready**: WCAG AA compliant, OKLCH colors, semantic naming

+## Execution Process
+
+```
+Input Parsing:
+├─ Parse flags: --design-id, --session, --images, --prompt, --variants, --interactive, --refine
+└─ Decision (mode detection):
+   ├─ --refine flag → Refinement Mode (variants_count = 1)
+   └─ No --refine → Exploration Mode (variants_count = --variants OR 3)
+
+Phase 0: Setup & Input Validation
+├─ Step 1: Detect input mode, extraction mode & base path
+├─ Step 2: Load inputs
+└─ Step 3: Memory check (skip if exists)
+
+Phase 1: Design Direction/Refinement Options Generation
+├─ Step 1: Load project context
+├─ Step 2: Generate options (Agent Task 1)
+│  └─ Decision:
+│     ├─ Exploration Mode → Generate contrasting design directions
+│     └─ Refinement Mode → Generate refinement options
+└─ Step 3: Verify options file created
+
+Phase 1.5: User Confirmation (Optional)
+└─ Decision (--interactive flag):
+   ├─ --interactive present → Present options, capture selection
+   └─ No --interactive → Skip to Phase 2
+
+Phase 2: Design System Generation
+├─ Step 1: Load user selection or default to all
+├─ Step 2: Create output directories
+└─ Step 3: Launch agent tasks (parallel)
+
+Phase 3: Verify Output
+├─ Step 1: Check files created
+└─ Step 2: Verify file sizes
+```
+
 ## Phase 0: Setup & Input Validation

 ### Step 1: Detect Input Mode, Extraction Mode & Base Path
@@ -1,35 +0,0 @@
-#!/bin/bash
-# Classify folders by type for documentation generation
-# Usage: get_modules_by_depth.sh | classify-folders.sh
-# Output: folder_path|folder_type|code:N|dirs:N
-
-while IFS='|' read -r depth_info path_info files_info types_info claude_info; do
-    # Extract folder path from format "path:./src/modules"
-    folder_path=$(echo "$path_info" | cut -d':' -f2-)
-
-    # Skip if path extraction failed
-    [[ -z "$folder_path" || ! -d "$folder_path" ]] && continue
-
-    # Count code files (maxdepth 1)
-    code_files=$(find "$folder_path" -maxdepth 1 -type f \
-        \( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" \
-        -o -name "*.py" -o -name "*.go" -o -name "*.java" -o -name "*.rs" \
-        -o -name "*.c" -o -name "*.cpp" -o -name "*.cs" \) \
-        2>/dev/null | wc -l)
-
-    # Count subdirectories
-    subfolders=$(find "$folder_path" -maxdepth 1 -type d \
-        -not -path "$folder_path" 2>/dev/null | wc -l)
-
-    # Determine folder type
-    if [[ $code_files -gt 0 ]]; then
-        folder_type="code"       # API.md + README.md
-    elif [[ $subfolders -gt 0 ]]; then
-        folder_type="navigation" # README.md only
-    else
-        folder_type="skip"       # Empty or no relevant content
-    fi
-
-    # Output classification result
-    echo "${folder_path}|${folder_type}|code:${code_files}|dirs:${subfolders}"
-done
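The removed classify-folders.sh script's decision rule (code files at depth 1 → `code`, only subdirectories → `navigation`, otherwise `skip`) can be sketched as a standalone function. This is a minimal illustration, not the original script: `classify_dir` is a hypothetical name and the extension list is trimmed to three types.

```shell
# Hypothetical sketch of the deleted classifier's decision rule.
# classify_dir is an illustrative name; the real script read pipe-delimited
# module records from stdin and checked many more extensions.
classify_dir() {
    dir="$1"
    # Code files directly inside the folder (depth 1 only)
    code_files=$(find "$dir" -maxdepth 1 -type f \( -name "*.ts" -o -name "*.js" -o -name "*.py" \) 2>/dev/null | wc -l)
    # Immediate subdirectories, excluding the folder itself
    subfolders=$(find "$dir" -maxdepth 1 -type d -not -path "$dir" 2>/dev/null | wc -l)
    if [ "$code_files" -gt 0 ]; then
        echo "code"        # would get API.md + README.md
    elif [ "$subfolders" -gt 0 ]; then
        echo "navigation"  # would get README.md only
    else
        echo "skip"        # nothing to document
    fi
}

# Demo on throwaway directories
tmp=$(mktemp -d)
mkdir -p "$tmp/src" "$tmp/docs/sub"
touch "$tmp/src/app.ts"
classify_dir "$tmp/src"       # → code
classify_dir "$tmp/docs"      # → navigation
classify_dir "$tmp/docs/sub"  # → skip
rm -rf "$tmp"
```

Note that a folder containing both code files and subdirectories classifies as `code`: the first branch wins, matching the original `if`/`elif` ordering.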
@@ -1,225 +0,0 @@
-#!/bin/bash
-# Convert design-tokens.json to tokens.css with Google Fonts import and global font rules
-# Usage: cat design-tokens.json | ./convert_tokens_to_css.sh > tokens.css
-# Or: ./convert_tokens_to_css.sh < design-tokens.json > tokens.css
-
-# Read JSON from stdin
-json_input=$(cat)
-
-# Extract metadata for header comment
-style_name=$(echo "$json_input" | jq -r '.meta.name // "Unknown Style"' 2>/dev/null || echo "Design Tokens")
-
-# Generate header
-cat <<EOF
-/* ========================================
-   Design Tokens: ${style_name}
-   Auto-generated from design-tokens.json
-   ======================================== */
-
-EOF
-
-# ========================================
-# Google Fonts Import Generation
-# ========================================
-# Extract font families and generate Google Fonts import URL
-fonts=$(echo "$json_input" | jq -r '
-    .typography.font_family | to_entries[] | .value
-' 2>/dev/null | sed "s/'//g" | cut -d',' -f1 | sort -u)
-
-# Build Google Fonts URL
-google_fonts_url="https://fonts.googleapis.com/css2?"
-font_params=""
-
-while IFS= read -r font; do
-    # Skip system fonts and empty lines
-    if [[ -z "$font" ]] || [[ "$font" =~ ^(system-ui|sans-serif|serif|monospace|cursive|fantasy)$ ]]; then
-        continue
-    fi
-
-    # Special handling for common web fonts with weights
-    case "$font" in
-        "Comic Neue")
-            font_params+="family=Comic+Neue:wght@300;400;700&"
-            ;;
-        "Patrick Hand"|"Caveat"|"Dancing Script"|"Architects Daughter"|"Indie Flower"|"Shadows Into Light"|"Permanent Marker")
-            # URL-encode font name and add common weights
-            encoded_font=$(echo "$font" | sed 's/ /+/g')
-            font_params+="family=${encoded_font}:wght@400;700&"
-            ;;
-        "Segoe Print"|"Bradley Hand"|"Chilanka")
-            # These are system fonts, skip
-            ;;
-        *)
-            # Generic font: add with default weights
-            encoded_font=$(echo "$font" | sed 's/ /+/g')
-            font_params+="family=${encoded_font}:wght@400;500;600;700&"
-            ;;
-    esac
-done <<< "$fonts"
-
-# Generate @import if we have fonts
-if [[ -n "$font_params" ]]; then
-    # Remove trailing &
-    font_params="${font_params%&}"
-    echo "/* Import Web Fonts */"
-    echo "@import url('${google_fonts_url}${font_params}&display=swap');"
-    echo ""
-fi
-
-# ========================================
-# CSS Custom Properties Generation
-# ========================================
-echo ":root {"
-
-# Colors - Brand
-echo " /* Colors - Brand */"
-echo "$json_input" | jq -r '
-    .colors.brand | to_entries[] |
-    " --color-brand-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Colors - Surface
-echo " /* Colors - Surface */"
-echo "$json_input" | jq -r '
-    .colors.surface | to_entries[] |
-    " --color-surface-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Colors - Semantic
-echo " /* Colors - Semantic */"
-echo "$json_input" | jq -r '
-    .colors.semantic | to_entries[] |
-    " --color-semantic-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Colors - Text
-echo " /* Colors - Text */"
-echo "$json_input" | jq -r '
-    .colors.text | to_entries[] |
-    " --color-text-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Colors - Border
-echo " /* Colors - Border */"
-echo "$json_input" | jq -r '
-    .colors.border | to_entries[] |
-    " --color-border-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Typography - Font Family
-echo " /* Typography - Font Family */"
-echo "$json_input" | jq -r '
-    .typography.font_family | to_entries[] |
-    " --font-family-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Typography - Font Size
-echo " /* Typography - Font Size */"
-echo "$json_input" | jq -r '
-    .typography.font_size | to_entries[] |
-    " --font-size-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Typography - Font Weight
-echo " /* Typography - Font Weight */"
-echo "$json_input" | jq -r '
-    .typography.font_weight | to_entries[] |
-    " --font-weight-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Typography - Line Height
-echo " /* Typography - Line Height */"
-echo "$json_input" | jq -r '
-    .typography.line_height | to_entries[] |
-    " --line-height-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Typography - Letter Spacing
-echo " /* Typography - Letter Spacing */"
-echo "$json_input" | jq -r '
-    .typography.letter_spacing | to_entries[] |
-    " --letter-spacing-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Spacing
-echo " /* Spacing */"
-echo "$json_input" | jq -r '
-    .spacing | to_entries[] |
-    " --spacing-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Border Radius
-echo " /* Border Radius */"
-echo "$json_input" | jq -r '
-    .border_radius | to_entries[] |
-    " --border-radius-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Shadows
-echo " /* Shadows */"
-echo "$json_input" | jq -r '
-    .shadows | to_entries[] |
-    " --shadow-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Breakpoints
-echo " /* Breakpoints */"
-echo "$json_input" | jq -r '
-    .breakpoints | to_entries[] |
-    " --breakpoint-\(.key): \(.value);"
-' 2>/dev/null
-
-echo "}"
-echo ""
-
-# ========================================
-# Global Font Application
-# ========================================
-echo "/* ========================================"
-echo " Global Font Application"
-echo " ======================================== */"
-echo ""
-echo "body {"
-echo " font-family: var(--font-family-body);"
-echo " font-size: var(--font-size-base);"
-echo " line-height: var(--line-height-normal);"
-echo " color: var(--color-text-primary);"
-echo " background-color: var(--color-surface-background);"
-echo "}"
-echo ""
-echo "h1, h2, h3, h4, h5, h6, legend {"
-echo " font-family: var(--font-family-heading);"
-echo "}"
-echo ""
-echo "/* Reset default margins for better control */"
-echo "* {"
-echo " margin: 0;"
-echo " padding: 0;"
-echo " box-sizing: border-box;"
-echo "}"
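The removed converter's Google Fonts step URL-encodes each family name and appends weight parameters before joining everything into a single `@import` URL. A minimal sketch of the generic branch of that logic follows; `encode_font` is a hypothetical helper name introduced for illustration, and the special per-font weight cases are omitted.

```shell
# Hypothetical sketch of the deleted convert_tokens_to_css.sh font step:
# spaces in the family name become '+', default weights are appended
# (the script's generic branch), and the trailing '&' is stripped.
encode_font() {
    encoded=$(echo "$1" | sed 's/ /+/g')
    echo "family=${encoded}:wght@400;500;600;700&"
}

params=""
for font in "Patrick Hand" "Caveat"; do
    params="${params}$(encode_font "$font")"
done
params="${params%&}"  # remove trailing '&', as the original does
echo "https://fonts.googleapis.com/css2?${params}&display=swap"
```

Running this prints one css2 URL carrying both families, which is the shape the original script embedded in its generated `@import url('…')` line.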
@@ -1,157 +0,0 @@
-#!/bin/bash
-# Detect modules affected by git changes or recent modifications
-# Usage: detect_changed_modules.sh [format]
-# format: list|grouped|paths (default: paths)
-#
-# Features:
-# - Respects .gitignore patterns (current directory or git root)
-# - Detects git changes (staged, unstaged, or last commit)
-# - Falls back to recently modified files (last 24 hours)
-
-# Build exclusion filters from .gitignore
-build_exclusion_filters() {
-    local filters=""
-
-    # Common system/cache directories to exclude
-    local system_excludes=(
-        ".git" "__pycache__" "node_modules" ".venv" "venv" "env"
-        "dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
-        "coverage" ".nyc_output" "logs" "tmp" "temp"
-    )
-
-    for exclude in "${system_excludes[@]}"; do
-        filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
-    done
-
-    # Find and parse .gitignore (current dir first, then git root)
-    local gitignore_file=""
-
-    # Check current directory first
-    if [ -f ".gitignore" ]; then
-        gitignore_file=".gitignore"
-    else
-        # Try to find git root and check for .gitignore there
-        local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
-        if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
-            gitignore_file="$git_root/.gitignore"
-        fi
-    fi
-
-    # Parse .gitignore if found
-    if [ -n "$gitignore_file" ]; then
-        while IFS= read -r line; do
-            # Skip empty lines and comments
-            [[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue
-
-            # Remove trailing slash and whitespace
-            line=$(echo "$line" | sed 's|/$||' | xargs)
-
-            # Skip wildcards patterns (too complex for simple find)
-            [[ "$line" =~ \* ]] && continue
-
-            # Add to filters
-            filters+=" -not -path '*/$line' -not -path '*/$line/*'"
-        done < "$gitignore_file"
-    fi
-
-    echo "$filters"
-}
-
-detect_changed_modules() {
-    local format="${1:-paths}"
-    local changed_files=""
-    local affected_dirs=""
-    local exclusion_filters=$(build_exclusion_filters)
-
-    # Step 1: Try to get git changes (staged + unstaged)
-    if git rev-parse --git-dir > /dev/null 2>&1; then
-        changed_files=$(git diff --name-only HEAD 2>/dev/null; git diff --name-only --cached 2>/dev/null)
-
-        # If no changes in working directory, check last commit
-        if [ -z "$changed_files" ]; then
-            changed_files=$(git diff --name-only HEAD~1 HEAD 2>/dev/null)
-        fi
-    fi
-
-    # Step 2: If no git changes, find recently modified source files (last 24 hours)
-    # Apply exclusion filters from .gitignore
-    if [ -z "$changed_files" ]; then
-        changed_files=$(eval "find . -type f \( \
-            -name '*.md' -o \
-            -name '*.js' -o -name '*.ts' -o -name '*.jsx' -o -name '*.tsx' -o \
-            -name '*.py' -o -name '*.go' -o -name '*.rs' -o \
-            -name '*.java' -o -name '*.cpp' -o -name '*.c' -o -name '*.h' -o \
-            -name '*.sh' -o -name '*.ps1' -o \
-            -name '*.json' -o -name '*.yaml' -o -name '*.yml' \
-            \) $exclusion_filters -mtime -1 2>/dev/null")
-    fi
-
-    # Step 3: Extract unique parent directories
-    if [ -n "$changed_files" ]; then
-        affected_dirs=$(echo "$changed_files" | \
-            sed 's|/[^/]*$||' | \
-            grep -v '^\.$' | \
-            sort -u)
-
-        # Add current directory if files are in root
|
|
||||||
if echo "$changed_files" | grep -q '^[^/]*$'; then
|
|
||||||
affected_dirs=$(echo -e ".\n$affected_dirs" | sort -u)
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Step 4: Output in requested format
|
|
||||||
case "$format" in
|
|
||||||
"list")
|
|
||||||
if [ -n "$affected_dirs" ]; then
|
|
||||||
echo "$affected_dirs" | while read dir; do
|
|
||||||
if [ -d "$dir" ]; then
|
|
||||||
local file_count=$(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l)
|
|
||||||
local depth=$(echo "$dir" | tr -cd '/' | wc -c)
|
|
||||||
if [ "$dir" = "." ]; then depth=0; fi
|
|
||||||
|
|
||||||
local types=$(find "$dir" -maxdepth 1 -type f -name "*.*" 2>/dev/null | \
|
|
||||||
grep -E '\.[^/]*$' | sed 's/.*\.//' | sort -u | tr '\n' ',' | sed 's/,$//')
|
|
||||||
local has_claude="no"
|
|
||||||
[ -f "$dir/CLAUDE.md" ] && has_claude="yes"
|
|
||||||
echo "depth:$depth|path:$dir|files:$file_count|types:[$types]|has_claude:$has_claude|status:changed"
|
|
||||||
fi
|
|
||||||
done
|
|
||||||
fi
|
|
||||||
;;
|
|
||||||
|
|
||||||
"grouped")
|
|
||||||
if [ -n "$affected_dirs" ]; then
|
|
||||||
echo "📊 Affected modules by changes:"
|
|
||||||
# Group by depth
|
|
||||||
echo "$affected_dirs" | while read dir; do
|
|
||||||
if [ -d "$dir" ]; then
|
|
||||||
local depth=$(echo "$dir" | tr -cd '/' | wc -c)
|
|
||||||
if [ "$dir" = "." ]; then depth=0; fi
|
|
||||||
local claude_indicator=""
|
|
||||||
[ -f "$dir/CLAUDE.md" ] && claude_indicator=" [✓]"
|
|
||||||
echo "$depth:$dir$claude_indicator"
|
|
||||||
fi
|
|
||||||
done | sort -n | awk -F: '
|
|
||||||
{
|
|
||||||
if ($1 != prev_depth) {
|
|
||||||
if (prev_depth != "") print ""
|
|
||||||
print " 📁 Depth " $1 ":"
|
|
||||||
prev_depth = $1
|
|
||||||
}
|
|
||||||
print " - " $2 " (changed)"
|
|
||||||
}'
|
|
||||||
else
|
|
||||||
echo "📊 No recent changes detected"
|
|
||||||
fi
|
|
||||||
;;
|
|
||||||
|
|
||||||
"paths"|*)
|
|
||||||
echo "$affected_dirs"
|
|
||||||
;;
|
|
||||||
esac
|
|
||||||
}
|
|
||||||
|
|
||||||
# Execute function if script is run directly
|
|
||||||
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
|
|
||||||
detect_changed_modules "$@"
|
|
||||||
fi
|
|
||||||
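Step 3 of the script above derives module directories by stripping each path's final component (`sed 's|/[^/]*$||'`) and de-duplicating with `sort -u`. The same derivation can be sketched in JavaScript for clarity (illustrative only — `affectedDirs` is not part of the repository):

```javascript
// Derive unique parent directories from changed file paths,
// mirroring: sed 's|/[^/]*$||' | grep -v '^\.$' | sort -u
// (root-level files map to ".", as in the script's extra grep check)
function affectedDirs(changedFiles) {
  const dirs = new Set();
  for (const file of changedFiles) {
    const idx = file.lastIndexOf('/');
    dirs.add(idx === -1 ? '.' : file.slice(0, idx));
  }
  return Array.from(dirs).sort();
}

console.log(affectedDirs(['src/app/main.ts', 'src/app/util.ts', 'README.md']));
// → [ '.', 'src/app' ]
```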
@@ -1,83 +0,0 @@
#!/usr/bin/env bash
# discover-design-files.sh - Discover design-related files and output JSON
# Usage: discover-design-files.sh <source_dir> <output_json>

set -euo pipefail

source_dir="${1:-.}"
output_json="${2:-discovered-files.json}"

# Function to find and format files as JSON array
find_files() {
    local pattern="$1"
    local files
    files=$(eval "find \"$source_dir\" -type f $pattern \
        ! -path \"*/node_modules/*\" \
        ! -path \"*/dist/*\" \
        ! -path \"*/.git/*\" \
        ! -path \"*/build/*\" \
        ! -path \"*/coverage/*\" \
        2>/dev/null | sort || true")

    local count
    if [ -z "$files" ]; then
        count=0
    else
        count=$(echo "$files" | grep -c . || echo 0)
    fi
    local json_files=""

    if [ "$count" -gt 0 ]; then
        json_files=$(echo "$files" | awk '{printf "\"%s\"%s\n", $0, (NR<'$count'?",":"")}' | tr '\n' ' ')
    fi

    echo "$count|$json_files"
}

# Discover CSS/SCSS files
css_result=$(find_files '\( -name "*.css" -o -name "*.scss" \)')
css_count=${css_result%%|*}
css_files=${css_result#*|}

# Discover JS/TS files (all framework files)
js_result=$(find_files '\( -name "*.js" -o -name "*.ts" -o -name "*.jsx" -o -name "*.tsx" -o -name "*.mjs" -o -name "*.cjs" -o -name "*.vue" -o -name "*.svelte" \)')
js_count=${js_result%%|*}
js_files=${js_result#*|}

# Discover HTML files
html_result=$(find_files '-name "*.html"')
html_count=${html_result%%|*}
html_files=${html_result#*|}

# Calculate total
total_count=$((css_count + js_count + html_count))

# Generate JSON
cat > "$output_json" << EOF
{
  "discovery_time": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
  "source_directory": "$(cd "$source_dir" && pwd)",
  "file_types": {
    "css": {
      "count": $css_count,
      "files": [${css_files}]
    },
    "js": {
      "count": $js_count,
      "files": [${js_files}]
    },
    "html": {
      "count": $html_count,
      "files": [${html_files}]
    }
  },
  "total_files": $total_count
}
EOF

# Ensure file is fully written and synchronized to disk
# This prevents race conditions when the file is immediately read by another process
sync "$output_json" 2>/dev/null || sync  # Sync specific file, fallback to full sync
sleep 0.1  # Additional safety: 100ms delay for filesystem metadata update

echo "Discovered: CSS=$css_count, JS=$js_count, HTML=$html_count (Total: $total_count)" >&2
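The JSON written by the script above is intended to be read immediately by another process. A sketch of how a consumer might sanity-check its shape (the `sample` object below is invented for illustration, not real script output):

```javascript
// Invented sample matching the structure emitted by discover-design-files.sh
const sample = {
  discovery_time: "2025-01-01T00:00:00Z",
  source_directory: "/tmp/project",
  file_types: {
    css:  { count: 1, files: ["styles/main.css"] },
    js:   { count: 2, files: ["src/app.ts", "src/ui.vue"] },
    html: { count: 0, files: [] }
  },
  total_files: 3
};

// Invariant: per-type counts must add up to total_files
const sum = Object.values(sample.file_types)
  .reduce((acc, t) => acc + t.count, 0);
console.log(sum === sample.total_files); // → true
```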
@@ -1,243 +0,0 @@
/**
 * Animation & Transition Extraction Script
 *
 * Extracts CSS animations, transitions, and transform patterns from a live web page.
 * This script runs in the browser context via Chrome DevTools Protocol.
 *
 * @returns {Object} Structured animation data
 */
(() => {
  const extractionTimestamp = new Date().toISOString();
  const currentUrl = window.location.href;

  /**
   * Parse transition shorthand or individual properties
   */
  function parseTransition(element, computedStyle) {
    const transition = computedStyle.transition || computedStyle.webkitTransition;

    if (!transition || transition === 'none' || transition === 'all 0s ease 0s') {
      return null;
    }

    // Parse shorthand: "property duration easing delay"
    const transitions = [];
    const parts = transition.split(/,\s*/);

    parts.forEach(part => {
      const match = part.match(/^(\S+)\s+([\d.]+m?s)\s+(\S+)(?:\s+([\d.]+m?s))?/);
      if (match) {
        transitions.push({
          property: match[1],
          duration: match[2],
          easing: match[3],
          delay: match[4] || '0s'
        });
      }
    });

    return transitions.length > 0 ? transitions : null;
  }

  /**
   * Extract animation name and properties
   */
  function parseAnimation(element, computedStyle) {
    const animationName = computedStyle.animationName || computedStyle.webkitAnimationName;

    if (!animationName || animationName === 'none') {
      return null;
    }

    return {
      name: animationName,
      duration: computedStyle.animationDuration || computedStyle.webkitAnimationDuration,
      easing: computedStyle.animationTimingFunction || computedStyle.webkitAnimationTimingFunction,
      delay: computedStyle.animationDelay || computedStyle.webkitAnimationDelay || '0s',
      iterationCount: computedStyle.animationIterationCount || computedStyle.webkitAnimationIterationCount || '1',
      direction: computedStyle.animationDirection || computedStyle.webkitAnimationDirection || 'normal',
      fillMode: computedStyle.animationFillMode || computedStyle.webkitAnimationFillMode || 'none'
    };
  }

  /**
   * Extract transform value
   */
  function parseTransform(computedStyle) {
    const transform = computedStyle.transform || computedStyle.webkitTransform;

    if (!transform || transform === 'none') {
      return null;
    }

    return transform;
  }

  /**
   * Get element selector (simplified for readability)
   */
  function getSelector(element) {
    if (element.id) {
      return `#${element.id}`;
    }

    if (element.className && typeof element.className === 'string') {
      const classes = element.className.trim().split(/\s+/).slice(0, 2).join('.');
      if (classes) {
        return `.${classes}`;
      }
    }

    return element.tagName.toLowerCase();
  }

  /**
   * Extract all stylesheets and find @keyframes rules
   */
  function extractKeyframes() {
    const keyframes = {};

    try {
      // Iterate through all stylesheets
      Array.from(document.styleSheets).forEach(sheet => {
        try {
          // Skip external stylesheets due to CORS
          if (sheet.href && !sheet.href.startsWith(window.location.origin)) {
            return;
          }

          Array.from(sheet.cssRules || sheet.rules || []).forEach(rule => {
            // Check for @keyframes rules
            if (rule.type === CSSRule.KEYFRAMES_RULE || rule.type === CSSRule.WEBKIT_KEYFRAMES_RULE) {
              const name = rule.name;
              const frames = {};

              Array.from(rule.cssRules || []).forEach(keyframe => {
                const key = keyframe.keyText; // e.g., "0%", "50%", "100%"
                frames[key] = keyframe.style.cssText;
              });

              keyframes[name] = frames;
            }
          });
        } catch (e) {
          // Skip stylesheets that can't be accessed (CORS)
          console.warn('Cannot access stylesheet:', sheet.href, e.message);
        }
      });
    } catch (e) {
      console.error('Error extracting keyframes:', e);
    }

    return keyframes;
  }

  /**
   * Scan visible elements for animations and transitions
   */
  function scanElements() {
    const elements = document.querySelectorAll('*');
    const transitionData = [];
    const animationData = [];
    const transformData = [];

    const uniqueTransitions = new Set();
    const uniqueAnimations = new Set();
    const uniqueEasings = new Set();
    const uniqueDurations = new Set();

    elements.forEach(element => {
      // Skip invisible elements
      const rect = element.getBoundingClientRect();
      if (rect.width === 0 && rect.height === 0) {
        return;
      }

      const computedStyle = window.getComputedStyle(element);

      // Extract transitions
      const transitions = parseTransition(element, computedStyle);
      if (transitions) {
        const selector = getSelector(element);
        transitions.forEach(t => {
          const key = `${t.property}-${t.duration}-${t.easing}`;
          if (!uniqueTransitions.has(key)) {
            uniqueTransitions.add(key);
            transitionData.push({
              selector,
              ...t
            });
            uniqueEasings.add(t.easing);
            uniqueDurations.add(t.duration);
          }
        });
      }

      // Extract animations
      const animation = parseAnimation(element, computedStyle);
      if (animation) {
        const selector = getSelector(element);
        const key = `${animation.name}-${animation.duration}`;
        if (!uniqueAnimations.has(key)) {
          uniqueAnimations.add(key);
          animationData.push({
            selector,
            ...animation
          });
          uniqueEasings.add(animation.easing);
          uniqueDurations.add(animation.duration);
        }
      }

      // Extract transforms (on hover/active, we only get current state)
      const transform = parseTransform(computedStyle);
      if (transform) {
        const selector = getSelector(element);
        transformData.push({
          selector,
          transform
        });
      }
    });

    return {
      transitions: transitionData,
      animations: animationData,
      transforms: transformData,
      uniqueEasings: Array.from(uniqueEasings),
      uniqueDurations: Array.from(uniqueDurations)
    };
  }

  /**
   * Main extraction function
   */
  function extractAnimations() {
    const elementData = scanElements();
    const keyframes = extractKeyframes();

    return {
      metadata: {
        timestamp: extractionTimestamp,
        url: currentUrl,
        method: 'chrome-devtools',
        version: '1.0.0'
      },
      transitions: elementData.transitions,
      animations: elementData.animations,
      transforms: elementData.transforms,
      keyframes: keyframes,
      summary: {
        total_transitions: elementData.transitions.length,
        total_animations: elementData.animations.length,
        total_transforms: elementData.transforms.length,
        total_keyframes: Object.keys(keyframes).length,
        unique_easings: elementData.uniqueEasings,
        unique_durations: elementData.uniqueDurations
      }
    };
  }

  // Execute extraction
  return extractAnimations();
})();
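The transition parsing above hinges on a single shorthand regex applied to each comma-separated entry (`property duration easing [delay]`). That regex is pure and can be exercised outside the browser; the wrapper name `parseTransitionShorthand` below is ours, not the script's:

```javascript
// The same shorthand regex the extraction script applies per entry
const TRANSITION_RE = /^(\S+)\s+([\d.]+m?s)\s+(\S+)(?:\s+([\d.]+m?s))?/;

function parseTransitionShorthand(value) {
  // Split on commas, keep only entries the regex recognizes,
  // defaulting a missing delay to '0s' as the script does
  return value.split(/,\s*/).flatMap(part => {
    const m = part.match(TRANSITION_RE);
    return m
      ? [{ property: m[1], duration: m[2], easing: m[3], delay: m[4] || '0s' }]
      : [];
  });
}

console.log(parseTransitionShorthand('opacity 0.3s ease-in-out 0.1s, transform 200ms linear'));
```

Note that the duration pattern `[\d.]+m?s` accepts both second (`0.3s`) and millisecond (`200ms`) units.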
@@ -1,118 +0,0 @@
/**
 * Extract Computed Styles from DOM
 *
 * This script extracts real CSS computed styles from a webpage's DOM
 * to provide accurate design tokens for UI replication.
 *
 * Usage: Execute this function via Chrome DevTools evaluate_script
 */

(() => {
  /**
   * Extract unique values from a set and sort them
   */
  const uniqueSorted = (set) => {
    return Array.from(set)
      .filter(v => v && v !== 'none' && v !== '0px' && v !== 'rgba(0, 0, 0, 0)')
      .sort();
  };

  /**
   * Parse rgb/rgba to OKLCH format (placeholder - returns original for now)
   */
  const toOKLCH = (color) => {
    // TODO: Implement actual RGB to OKLCH conversion
    // For now, return the original color with a note
    return `${color} /* TODO: Convert to OKLCH */`;
  };

  /**
   * Extract only key styles from an element
   */
  const extractKeyStyles = (element) => {
    const s = window.getComputedStyle(element);
    return {
      color: s.color,
      bg: s.backgroundColor,
      borderRadius: s.borderRadius,
      boxShadow: s.boxShadow,
      fontSize: s.fontSize,
      fontWeight: s.fontWeight,
      padding: s.padding,
      margin: s.margin
    };
  };

  /**
   * Main extraction function - extract all critical design tokens
   */
  const extractDesignTokens = () => {
    // Include all key UI elements
    const selectors = [
      'button', '.btn', '[role="button"]',
      'input', 'textarea', 'select',
      'h1', 'h2', 'h3', 'h4', 'h5', 'h6',
      '.card', 'article', 'section',
      'a', 'p', 'nav', 'header', 'footer'
    ];

    // Collect all design tokens
    const tokens = {
      colors: new Set(),
      borderRadii: new Set(),
      shadows: new Set(),
      fontSizes: new Set(),
      fontWeights: new Set(),
      spacing: new Set()
    };

    // Extract from all elements
    selectors.forEach(selector => {
      try {
        const elements = document.querySelectorAll(selector);
        elements.forEach(element => {
          const s = extractKeyStyles(element);

          // Collect all tokens (no limits)
          if (s.color && s.color !== 'rgba(0, 0, 0, 0)') tokens.colors.add(s.color);
          if (s.bg && s.bg !== 'rgba(0, 0, 0, 0)') tokens.colors.add(s.bg);
          if (s.borderRadius && s.borderRadius !== '0px') tokens.borderRadii.add(s.borderRadius);
          if (s.boxShadow && s.boxShadow !== 'none') tokens.shadows.add(s.boxShadow);
          if (s.fontSize) tokens.fontSizes.add(s.fontSize);
          if (s.fontWeight) tokens.fontWeights.add(s.fontWeight);

          // Extract all spacing values
          [s.padding, s.margin].forEach(val => {
            if (val && val !== '0px') {
              val.split(' ').forEach(v => {
                if (v && v !== '0px') tokens.spacing.add(v);
              });
            }
          });
        });
      } catch (e) {
        console.warn(`Error: ${selector}`, e);
      }
    });

    // Return all tokens (no element details to save context)
    return {
      metadata: {
        extractedAt: new Date().toISOString(),
        url: window.location.href,
        method: 'computed-styles'
      },
      tokens: {
        colors: uniqueSorted(tokens.colors),
        borderRadii: uniqueSorted(tokens.borderRadii), // ALL radius values
        shadows: uniqueSorted(tokens.shadows), // ALL shadows
        fontSizes: uniqueSorted(tokens.fontSizes),
        fontWeights: uniqueSorted(tokens.fontWeights),
        spacing: uniqueSorted(tokens.spacing)
      }
    };
  };

  // Execute and return results
  return extractDesignTokens();
})();
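The `uniqueSorted` helper above is a pure function, so its dedupe/filter behavior can be checked outside the browser; a standalone sketch with invented sample values:

```javascript
// Same dedupe/filter used by the computed-styles extractor: drop empty,
// 'none', '0px', and transparent values, then sort the unique remainder
const uniqueSorted = (set) =>
  Array.from(set)
    .filter(v => v && v !== 'none' && v !== '0px' && v !== 'rgba(0, 0, 0, 0)')
    .sort();

// The Set already collapses the duplicate '4px'; the filter removes the
// zero radius and 'none'
const radii = new Set(['4px', '0px', '8px', '4px', 'none']);
console.log(uniqueSorted(radii)); // → [ '4px', '8px' ]
```

One caveat worth knowing: `.sort()` compares lexically, so mixed-magnitude values such as `'12px'` sort before `'4px'`.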
@@ -1,411 +0,0 @@
/**
 * Extract Layout Structure from DOM - Enhanced Version
 *
 * Extracts real layout information from DOM to provide accurate
 * structural data for UI replication.
 *
 * Features:
 * - Framework detection (Nuxt.js, Next.js, React, Vue, Angular)
 * - Multi-strategy container detection (strict → relaxed → class-based → framework-specific)
 * - Intelligent main content detection with common class names support
 * - Supports modern SPA frameworks
 * - Detects non-semantic main containers (.main, .content, etc.)
 * - Progressive exploration: Auto-discovers missing selectors when standard patterns fail
 * - Suggests new class names to add to script based on actual page structure
 *
 * Progressive Exploration:
 * When fewer than 3 main containers are found, the script automatically:
 * 1. Analyzes all large visible containers (≥500×300px)
 * 2. Extracts class name patterns (main/content/wrapper/container/page/etc.)
 * 3. Suggests new selectors to add to the script
 * 4. Returns exploration data in result.exploration
 *
 * Usage: Execute via Chrome DevTools evaluate_script
 * Version: 2.2.0
 */

(() => {
  /**
   * Get element's bounding box relative to viewport
   */
  const getBounds = (element) => {
    const rect = element.getBoundingClientRect();
    return {
      x: Math.round(rect.x),
      y: Math.round(rect.y),
      width: Math.round(rect.width),
      height: Math.round(rect.height)
    };
  };

  /**
   * Extract layout properties from an element
   */
  const extractLayoutProps = (element) => {
    const s = window.getComputedStyle(element);

    return {
      // Core layout
      display: s.display,
      position: s.position,

      // Flexbox
      flexDirection: s.flexDirection,
      justifyContent: s.justifyContent,
      alignItems: s.alignItems,
      flexWrap: s.flexWrap,
      gap: s.gap,

      // Grid
      gridTemplateColumns: s.gridTemplateColumns,
      gridTemplateRows: s.gridTemplateRows,
      gridAutoFlow: s.gridAutoFlow,

      // Dimensions
      width: s.width,
      height: s.height,
      maxWidth: s.maxWidth,
      minWidth: s.minWidth,

      // Spacing
      padding: s.padding,
      margin: s.margin
    };
  };

  /**
   * Identify layout pattern for an element
   */
  const identifyPattern = (props) => {
    const { display, flexDirection, gridTemplateColumns } = props;

    if (display === 'flex' || display === 'inline-flex') {
      if (flexDirection === 'column') return 'flex-column';
      if (flexDirection === 'row') return 'flex-row';
      return 'flex';
    }

    if (display === 'grid') {
      const cols = gridTemplateColumns;
      if (cols && cols !== 'none') {
        const colCount = cols.split(' ').length;
        return `grid-${colCount}col`;
      }
      return 'grid';
    }

    if (display === 'block') return 'block';

    return display;
  };

  /**
   * Detect frontend framework
   */
  const detectFramework = () => {
    if (document.querySelector('#__nuxt')) return { name: 'Nuxt.js', version: 'unknown' };
    if (document.querySelector('#__next')) return { name: 'Next.js', version: 'unknown' };
    if (document.querySelector('[data-reactroot]')) return { name: 'React', version: 'unknown' };
    if (document.querySelector('[ng-version]')) return { name: 'Angular', version: 'unknown' };
    if (window.Vue) return { name: 'Vue.js', version: window.Vue.version || 'unknown' };
    return { name: 'Unknown', version: 'unknown' };
  };

  /**
   * Build layout tree recursively
   */
  const buildLayoutTree = (element, depth = 0, maxDepth = 3) => {
    if (depth > maxDepth) return null;

    const props = extractLayoutProps(element);
    const bounds = getBounds(element);
    const pattern = identifyPattern(props);

    // Get semantic role
    const tagName = element.tagName.toLowerCase();
    const classes = Array.from(element.classList).slice(0, 3); // Max 3 classes
    const role = element.getAttribute('role');

    // Build node
    const node = {
      tag: tagName,
      classes: classes,
      role: role,
      pattern: pattern,
      bounds: bounds,
      layout: {
        display: props.display,
        position: props.position
      }
    };

    // Add flex/grid specific properties
    if (props.display === 'flex' || props.display === 'inline-flex') {
      node.layout.flexDirection = props.flexDirection;
      node.layout.justifyContent = props.justifyContent;
      node.layout.alignItems = props.alignItems;
      node.layout.gap = props.gap;
    }

    if (props.display === 'grid') {
      node.layout.gridTemplateColumns = props.gridTemplateColumns;
      node.layout.gridTemplateRows = props.gridTemplateRows;
      node.layout.gap = props.gap;
    }

    // Process children for container elements
    if (props.display === 'flex' || props.display === 'grid' || props.display === 'block') {
      const children = Array.from(element.children);
      if (children.length > 0 && children.length < 50) { // Limit to 50 children
        node.children = children
          .map(child => buildLayoutTree(child, depth + 1, maxDepth))
          .filter(child => child !== null);
      }
    }

    return node;
  };

  /**
   * Find main layout containers with multi-strategy approach
   */
  const findMainContainers = () => {
    const containers = [];
    const found = new Set();

    // Strategy 1: Strict selectors (body direct children)
    const strictSelectors = [
      'body > header',
      'body > nav',
      'body > main',
      'body > footer'
    ];

    // Strategy 2: Relaxed selectors (any level)
    const relaxedSelectors = [
      'header',
      'nav',
      'main',
      'footer',
      '[role="banner"]',
      '[role="navigation"]',
      '[role="main"]',
      '[role="contentinfo"]'
    ];

    // Strategy 3: Common class-based main content selectors
    const commonClassSelectors = [
      '.main',
      '.content',
      '.main-content',
      '.page-content',
      '.container.main',
      '.wrapper > .main',
      'div[class*="main-wrapper"]',
      'div[class*="content-wrapper"]'
    ];

    // Strategy 4: Framework-specific selectors
    const frameworkSelectors = [
      '#__nuxt header', '#__nuxt .main', '#__nuxt main', '#__nuxt footer',
      '#__next header', '#__next .main', '#__next main', '#__next footer',
      '#app header', '#app .main', '#app main', '#app footer',
      '[data-app] header', '[data-app] .main', '[data-app] main', '[data-app] footer'
    ];

    // Try all strategies
    const allSelectors = [...strictSelectors, ...relaxedSelectors, ...commonClassSelectors, ...frameworkSelectors];

    allSelectors.forEach(selector => {
      try {
        const elements = document.querySelectorAll(selector);
        elements.forEach(element => {
          // Avoid duplicates and invisible elements
          if (!found.has(element) && element.offsetParent !== null) {
            found.add(element);
            const tree = buildLayoutTree(element, 0, 3);
            if (tree && tree.bounds.width > 0 && tree.bounds.height > 0) {
              containers.push(tree);
            }
          }
        });
      } catch (e) {
        console.warn(`Selector failed: ${selector}`, e);
      }
    });

    // Fallback: If no containers found, use body's direct children
    if (containers.length === 0) {
      Array.from(document.body.children).forEach(child => {
        if (child.offsetParent !== null && !found.has(child)) {
          const tree = buildLayoutTree(child, 0, 2);
          if (tree && tree.bounds.width > 100 && tree.bounds.height > 100) {
            containers.push(tree);
          }
        }
      });
    }

    return containers;
  };

  /**
   * Progressive exploration: Discover main containers when standard selectors fail
   * Analyzes large visible containers and suggests class name patterns
   */
  const exploreMainContainers = () => {
    const candidates = [];
    const minWidth = 500;
    const minHeight = 300;

    // Find all large visible divs
    const allDivs = document.querySelectorAll('div');
    allDivs.forEach(div => {
      const rect = div.getBoundingClientRect();
      const style = window.getComputedStyle(div);

      // Filter: large size, visible, not header/footer
      if (rect.width >= minWidth &&
          rect.height >= minHeight &&
          div.offsetParent !== null &&
          !div.closest('header') &&
          !div.closest('footer')) {

        const classes = Array.from(div.classList);
        const area = rect.width * rect.height;

        candidates.push({
          element: div,
          classes: classes,
          area: area,
          bounds: {
            width: Math.round(rect.width),
            height: Math.round(rect.height)
          },
          display: style.display,
          depth: getElementDepth(div)
        });
      }
    });

    // Sort by area (largest first) and take top candidates
    candidates.sort((a, b) => b.area - a.area);

    // Extract unique class patterns from top candidates
    const classPatterns = new Set();
    candidates.slice(0, 20).forEach(c => {
      c.classes.forEach(cls => {
        // Identify potential main content class patterns
        if (cls.match(/main|content|container|wrapper|page|body|layout|app/i)) {
          classPatterns.add(cls);
        }
      });
    });

    return {
      candidates: candidates.slice(0, 10).map(c => ({
        classes: c.classes,
        bounds: c.bounds,
        display: c.display,
        depth: c.depth
      })),
      suggestedSelectors: Array.from(classPatterns).map(cls => `.${cls}`)
    };
  };

  /**
   * Get element depth in DOM tree
   */
  const getElementDepth = (element) => {
    let depth = 0;
    let current = element;
    while (current.parentElement) {
      depth++;
      current = current.parentElement;
    }
    return depth;
  };

  /**
   * Analyze layout patterns
   */
  const analyzePatterns = (containers) => {
    const patterns = {
      flexColumn: 0,
      flexRow: 0,
      grid: 0,
      sticky: 0,
      fixed: 0
    };

    const analyze = (node) => {
      if (!node) return;

      if (node.pattern === 'flex-column') patterns.flexColumn++;
      if (node.pattern === 'flex-row') patterns.flexRow++;
      if (node.pattern && node.pattern.startsWith('grid')) patterns.grid++;
      if (node.layout.position === 'sticky') patterns.sticky++;
      if (node.layout.position === 'fixed') patterns.fixed++;

      if (node.children) {
        node.children.forEach(analyze);
      }
    };

    containers.forEach(analyze);
    return patterns;
  };

  /**
   * Main extraction function with progressive exploration
   */
  const extractLayout = () => {
    const framework = detectFramework();
    const containers = findMainContainers();
    const patterns = analyzePatterns(containers);

    // Progressive exploration: if too few containers found, explore and suggest
    let exploration = null;
    const minExpectedContainers = 3; // At least header, main, footer

    if (containers.length < minExpectedContainers) {
      exploration = exploreMainContainers();

      // Add warning message
      exploration.warning = `Only ${containers.length} containers found. Consider adding these selectors to the script:`;
      exploration.recommendation = exploration.suggestedSelectors.join(', ');
|
|
||||||
}
|
|
||||||
|
|
||||||
const result = {
|
|
||||||
metadata: {
|
|
||||||
extractedAt: new Date().toISOString(),
|
|
||||||
url: window.location.href,
|
|
||||||
framework: framework,
|
|
||||||
method: 'layout-structure-enhanced',
|
|
||||||
version: '2.2.0'
|
|
||||||
},
|
|
||||||
statistics: {
|
|
||||||
totalContainers: containers.length,
|
|
||||||
patterns: patterns
|
|
||||||
},
|
|
||||||
structure: containers
|
|
||||||
};
|
|
||||||
|
|
||||||
// Add exploration results if triggered
|
|
||||||
if (exploration) {
|
|
||||||
result.exploration = {
|
|
||||||
triggered: true,
|
|
||||||
reason: 'Insufficient containers found with standard selectors',
|
|
||||||
discoveredCandidates: exploration.candidates,
|
|
||||||
suggestedSelectors: exploration.suggestedSelectors,
|
|
||||||
warning: exploration.warning,
|
|
||||||
recommendation: exploration.recommendation
|
|
||||||
};
|
|
||||||
}
|
|
||||||
|
|
||||||
return result;
|
|
||||||
};
|
|
||||||
|
|
||||||
// Execute and return results
|
|
||||||
return extractLayout();
|
|
||||||
})();
|
|
||||||
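The class-pattern heuristic above (matching class names like `main-content` or `app-shell` against a keyword regex) can be exercised outside the browser. The sketch below is a minimal, DOM-free restatement of that filtering step; `suggestSelectors` is a hypothetical helper name, not part of the script itself:

```javascript
// Standalone sketch of the class-pattern heuristic (no DOM required).
const CONTENT_CLASS_RE = /main|content|container|wrapper|page|body|layout|app/i;

function suggestSelectors(classLists) {
  const patterns = new Set();
  classLists.forEach(classes => {
    classes.forEach(cls => {
      if (CONTENT_CLASS_RE.test(cls)) {
        patterns.add(cls);
      }
    });
  });
  return Array.from(patterns).map(cls => `.${cls}`);
}

console.log(suggestSelectors([
  ['main-content', 'px-4'],
  ['app-shell', 'dark'],
  ['btn', 'btn-primary']
]));
// → [ '.main-content', '.app-shell' ]
```

Only the keyword-bearing class names survive the filter, which is why utility classes like `px-4` never end up in `suggestedSelectors`.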
@@ -1,713 +0,0 @@
#!/bin/bash
# Generate documentation for modules and projects with multiple strategies
# Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]
#   strategy: full|single|project-readme|project-architecture|http-api
#   source_path: Path to the source module directory (or project root for project-level docs)
#   project_name: Project name for output path (e.g., "myproject")
#   tool: gemini|qwen|codex (default: gemini)
#   model: Model name (optional, uses tool defaults)
#
# Default Models:
#   gemini: gemini-2.5-flash
#   qwen: coder-model
#   codex: gpt5-codex
#
# Module-Level Strategies:
#   full: Full documentation generation
#     - Read: All files in current and subdirectories (@**/*)
#     - Generate: API.md + README.md for each directory containing code files
#     - Use: Deep directories (Layer 3), comprehensive documentation
#
#   single: Single-layer documentation
#     - Read: Current directory code + child API.md/README.md files
#     - Generate: API.md + README.md only in current directory
#     - Use: Upper layers (Layer 1-2), incremental updates
#
# Project-Level Strategies:
#   project-readme: Project overview documentation
#     - Read: All module API.md and README.md files
#     - Generate: README.md (project root)
#     - Use: After all module docs are generated
#
#   project-architecture: System design documentation
#     - Read: All module docs + project README
#     - Generate: ARCHITECTURE.md + EXAMPLES.md
#     - Use: After project README is generated
#
#   http-api: HTTP API documentation
#     - Read: API route files + existing docs
#     - Generate: api/README.md
#     - Use: For projects with HTTP APIs
#
# Output Structure:
#   Module docs:  .workflow/docs/{project_name}/{source_path}/API.md
#   Module docs:  .workflow/docs/{project_name}/{source_path}/README.md
#   Project docs: .workflow/docs/{project_name}/README.md
#   Project docs: .workflow/docs/{project_name}/ARCHITECTURE.md
#   Project docs: .workflow/docs/{project_name}/EXAMPLES.md
#   API docs:     .workflow/docs/{project_name}/api/README.md
#
# Features:
#   - Path mirroring: source structure → docs structure
#   - Template-driven generation
#   - Respects .gitignore patterns
#   - Detects code vs navigation folders
#   - Tool fallback support

# Build exclusion filters from .gitignore
build_exclusion_filters() {
    local filters=""

    # Common system/cache directories to exclude
    local system_excludes=(
        ".git" "__pycache__" "node_modules" ".venv" "venv" "env"
        "dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
        "coverage" ".nyc_output" "logs" "tmp" "temp" ".workflow"
    )

    for exclude in "${system_excludes[@]}"; do
        filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
    done

    # Find and parse .gitignore (current dir first, then git root)
    local gitignore_file=""

    # Check current directory first
    if [ -f ".gitignore" ]; then
        gitignore_file=".gitignore"
    else
        # Try to find git root and check for .gitignore there
        local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
        if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
            gitignore_file="$git_root/.gitignore"
        fi
    fi

    # Parse .gitignore if found
    if [ -n "$gitignore_file" ]; then
        while IFS= read -r line; do
            # Skip empty lines and comments
            [[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue

            # Remove trailing slash and whitespace
            line=$(echo "$line" | sed 's|/$||' | xargs)

            # Skip wildcard patterns (too complex for simple find)
            [[ "$line" =~ \* ]] && continue

            # Add to filters
            filters+=" -not -path '*/$line' -not -path '*/$line/*'"
        done < "$gitignore_file"
    fi

    echo "$filters"
}

# Detect folder type (code vs navigation)
detect_folder_type() {
    local target_path="$1"
    local exclusion_filters="$2"

    # Count code files (primary indicators)
    local code_count=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)

    if [ "$code_count" -gt 0 ]; then
        echo "code"
    else
        echo "navigation"
    fi
}

# Scan directory structure and generate structured information
scan_directory_structure() {
    local target_path="$1"
    local strategy="$2"

    if [ ! -d "$target_path" ]; then
        echo "Directory not found: $target_path"
        return 1
    fi

    local exclusion_filters=$(build_exclusion_filters)
    local structure_info=""

    # Get basic directory info
    local dir_name=$(basename "$target_path")
    local total_files=$(eval "find \"$target_path\" -type f $exclusion_filters 2>/dev/null" | wc -l)
    local total_dirs=$(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null" | wc -l)
    local folder_type=$(detect_folder_type "$target_path" "$exclusion_filters")

    structure_info+="Directory: $dir_name\n"
    structure_info+="Total files: $total_files\n"
    structure_info+="Total directories: $total_dirs\n"
    structure_info+="Folder type: $folder_type\n\n"

    if [ "$strategy" = "full" ]; then
        # For full: show all subdirectories with file counts
        structure_info+="Subdirectories with files:\n"
        while IFS= read -r dir; do
            if [ -n "$dir" ] && [ "$dir" != "$target_path" ]; then
                local rel_path=${dir#$target_path/}
                local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
                if [ "$file_count" -gt 0 ]; then
                    local subdir_type=$(detect_folder_type "$dir" "$exclusion_filters")
                    structure_info+=" - $rel_path/ ($file_count files, type: $subdir_type)\n"
                fi
            fi
        done < <(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null")
    else
        # For single: show direct children only
        structure_info+="Direct subdirectories:\n"
        while IFS= read -r dir; do
            if [ -n "$dir" ]; then
                local dir_name=$(basename "$dir")
                local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
                local has_api=$([ -f "$dir/API.md" ] && echo " [has API.md]" || echo "")
                local has_readme=$([ -f "$dir/README.md" ] && echo " [has README.md]" || echo "")
                structure_info+=" - $dir_name/ ($file_count files)$has_api$has_readme\n"
            fi
        done < <(eval "find \"$target_path\" -maxdepth 1 -type d $exclusion_filters 2>/dev/null" | grep -v "^$target_path$")
    fi

    # Show main file types in current directory
    structure_info+="\nCurrent directory files:\n"
    local code_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)
    local config_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.json' -o -name '*.yaml' -o -name '*.yml' -o -name '*.toml' \\) $exclusion_filters 2>/dev/null" | wc -l)
    local doc_files=$(eval "find \"$target_path\" -maxdepth 1 -type f -name '*.md' $exclusion_filters 2>/dev/null" | wc -l)

    structure_info+=" - Code files: $code_files\n"
    structure_info+=" - Config files: $config_files\n"
    structure_info+=" - Documentation: $doc_files\n"

    printf "%b" "$structure_info"
}

# Calculate output path based on source path and project name
calculate_output_path() {
    local source_path="$1"
    local project_name="$2"
    local project_root="$3"

    # Get absolute path of source (normalize to Unix-style path)
    local abs_source=$(cd "$source_path" && pwd)

    # Normalize project root to same format
    local norm_project_root=$(cd "$project_root" && pwd)

    # Calculate relative path from project root
    local rel_path="${abs_source#$norm_project_root}"

    # Remove leading slash if present
    rel_path="${rel_path#/}"

    # If source is project root, use project name directly
    if [ "$abs_source" = "$norm_project_root" ] || [ -z "$rel_path" ]; then
        echo "$norm_project_root/.workflow/docs/$project_name"
    else
        echo "$norm_project_root/.workflow/docs/$project_name/$rel_path"
    fi
}

generate_module_docs() {
|
|
||||||
local strategy="$1"
|
|
||||||
local source_path="$2"
|
|
||||||
local project_name="$3"
|
|
||||||
local tool="${4:-gemini}"
|
|
||||||
local model="$5"
|
|
||||||
|
|
||||||
# Validate parameters
|
|
||||||
if [ -z "$strategy" ] || [ -z "$source_path" ] || [ -z "$project_name" ]; then
|
|
||||||
echo "❌ Error: Strategy, source path, and project name are required"
|
|
||||||
echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
|
|
||||||
echo "Module strategies: full, single"
|
|
||||||
echo "Project strategies: project-readme, project-architecture, http-api"
|
|
||||||
return 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Validate strategy
|
|
||||||
local valid_strategies=("full" "single" "project-readme" "project-architecture" "http-api")
|
|
||||||
local strategy_valid=false
|
|
||||||
for valid_strategy in "${valid_strategies[@]}"; do
|
|
||||||
if [ "$strategy" = "$valid_strategy" ]; then
|
|
||||||
strategy_valid=true
|
|
||||||
break
|
|
||||||
fi
|
|
||||||
done
|
|
||||||
|
|
||||||
if [ "$strategy_valid" = false ]; then
|
|
||||||
echo "❌ Error: Invalid strategy '$strategy'"
|
|
||||||
echo "Valid module strategies: full, single"
|
|
||||||
echo "Valid project strategies: project-readme, project-architecture, http-api"
|
|
||||||
return 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
if [ ! -d "$source_path" ]; then
|
|
||||||
echo "❌ Error: Source directory '$source_path' does not exist"
|
|
||||||
return 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Set default models if not specified
|
|
||||||
if [ -z "$model" ]; then
|
|
||||||
case "$tool" in
|
|
||||||
gemini)
|
|
||||||
model="gemini-2.5-flash"
|
|
||||||
;;
|
|
||||||
qwen)
|
|
||||||
model="coder-model"
|
|
||||||
;;
|
|
||||||
codex)
|
|
||||||
model="gpt5-codex"
|
|
||||||
;;
|
|
||||||
*)
|
|
||||||
model=""
|
|
||||||
;;
|
|
||||||
esac
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Build exclusion filters
|
|
||||||
local exclusion_filters=$(build_exclusion_filters)
|
|
||||||
|
|
||||||
# Get project root
|
|
||||||
local project_root=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
|
|
||||||
|
|
||||||
# Determine if this is a project-level strategy
|
|
||||||
local is_project_level=false
|
|
||||||
if [[ "$strategy" =~ ^project- ]] || [ "$strategy" = "http-api" ]; then
|
|
||||||
is_project_level=true
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Calculate output path
|
|
||||||
local output_path
|
|
||||||
if [ "$is_project_level" = true ]; then
|
|
||||||
# Project-level docs go to project root
|
|
||||||
if [ "$strategy" = "http-api" ]; then
|
|
||||||
output_path="$project_root/.workflow/docs/$project_name/api"
|
|
||||||
else
|
|
||||||
output_path="$project_root/.workflow/docs/$project_name"
|
|
||||||
fi
|
|
||||||
else
|
|
||||||
output_path=$(calculate_output_path "$source_path" "$project_name" "$project_root")
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Create output directory
|
|
||||||
mkdir -p "$output_path"
|
|
||||||
|
|
||||||
# Detect folder type (only for module-level strategies)
|
|
||||||
local folder_type=""
|
|
||||||
if [ "$is_project_level" = false ]; then
|
|
||||||
folder_type=$(detect_folder_type "$source_path" "$exclusion_filters")
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Load templates based on strategy
|
|
||||||
local api_template=""
|
|
||||||
local readme_template=""
|
|
||||||
local template_content=""
|
|
||||||
|
|
||||||
if [ "$is_project_level" = true ]; then
|
|
||||||
# Project-level templates
|
|
||||||
case "$strategy" in
|
|
||||||
project-readme)
|
|
||||||
local proj_readme_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-readme.txt"
|
|
||||||
if [ -f "$proj_readme_path" ]; then
|
|
||||||
template_content=$(cat "$proj_readme_path")
|
|
||||||
echo " 📋 Loaded Project README template: $(wc -l < "$proj_readme_path") lines"
|
|
||||||
fi
|
|
||||||
;;
|
|
||||||
project-architecture)
|
|
||||||
local arch_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-architecture.txt"
|
|
||||||
local examples_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-examples.txt"
|
|
||||||
if [ -f "$arch_path" ]; then
|
|
||||||
template_content=$(cat "$arch_path")
|
|
||||||
echo " 📋 Loaded Architecture template: $(wc -l < "$arch_path") lines"
|
|
||||||
fi
|
|
||||||
if [ -f "$examples_path" ]; then
|
|
||||||
template_content="$template_content
|
|
||||||
|
|
||||||
EXAMPLES TEMPLATE:
|
|
||||||
$(cat "$examples_path")"
|
|
||||||
echo " 📋 Loaded Examples template: $(wc -l < "$examples_path") lines"
|
|
||||||
fi
|
|
||||||
;;
|
|
||||||
http-api)
|
|
||||||
local api_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
|
|
||||||
if [ -f "$api_path" ]; then
|
|
||||||
template_content=$(cat "$api_path")
|
|
||||||
echo " 📋 Loaded HTTP API template: $(wc -l < "$api_path") lines"
|
|
||||||
fi
|
|
||||||
;;
|
|
||||||
esac
|
|
||||||
else
|
|
||||||
# Module-level templates
|
|
||||||
local api_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
|
|
||||||
local readme_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/module-readme.txt"
|
|
||||||
local nav_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/folder-navigation.txt"
|
|
||||||
|
|
||||||
if [ "$folder_type" = "code" ]; then
|
|
||||||
if [ -f "$api_template_path" ]; then
|
|
||||||
api_template=$(cat "$api_template_path")
|
|
||||||
echo " 📋 Loaded API template: $(wc -l < "$api_template_path") lines"
|
|
||||||
fi
|
|
||||||
if [ -f "$readme_template_path" ]; then
|
|
||||||
readme_template=$(cat "$readme_template_path")
|
|
||||||
echo " 📋 Loaded README template: $(wc -l < "$readme_template_path") lines"
|
|
||||||
fi
|
|
||||||
else
|
|
||||||
# Navigation folder uses navigation template
|
|
||||||
if [ -f "$nav_template_path" ]; then
|
|
||||||
readme_template=$(cat "$nav_template_path")
|
|
||||||
echo " 📋 Loaded Navigation template: $(wc -l < "$nav_template_path") lines"
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Scan directory structure (only for module-level strategies)
|
|
||||||
local structure_info=""
|
|
||||||
if [ "$is_project_level" = false ]; then
|
|
||||||
echo " 🔍 Scanning directory structure..."
|
|
||||||
structure_info=$(scan_directory_structure "$source_path" "$strategy")
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Prepare logging info
|
|
||||||
local module_name=$(basename "$source_path")
|
|
||||||
|
|
||||||
echo "⚡ Generating docs: $source_path → $output_path"
|
|
||||||
echo " Strategy: $strategy | Tool: $tool | Model: $model | Type: $folder_type"
|
|
||||||
echo " Output: $output_path"
|
|
||||||
|
|
||||||
# Build strategy-specific prompt
|
|
||||||
local final_prompt=""
|
|
||||||
|
|
||||||
# Project-level strategies
|
|
||||||
if [ "$strategy" = "project-readme" ]; then
|
|
||||||
final_prompt="PURPOSE: Generate comprehensive project overview documentation
|
|
||||||
|
|
||||||
PROJECT: $project_name
|
|
||||||
OUTPUT: Current directory (file will be moved to final location)
|
|
||||||
|
|
||||||
Read: @.workflow/docs/$project_name/**/*.md
|
|
||||||
|
|
||||||
Context: All module documentation files from the project
|
|
||||||
|
|
||||||
Generate ONE documentation file in current directory:
|
|
||||||
- README.md - Project root documentation
|
|
||||||
|
|
||||||
Template:
|
|
||||||
$template_content
|
|
||||||
|
|
||||||
Instructions:
|
|
||||||
- Create README.md in CURRENT DIRECTORY
|
|
||||||
- Synthesize information from all module docs
|
|
||||||
- Include project overview, getting started, and navigation
|
|
||||||
- Create clear module navigation with links
|
|
||||||
- Follow template structure exactly"
|
|
||||||
|
|
||||||
elif [ "$strategy" = "project-architecture" ]; then
|
|
||||||
final_prompt="PURPOSE: Generate system design and usage examples documentation
|
|
||||||
|
|
||||||
PROJECT: $project_name
|
|
||||||
OUTPUT: Current directory (files will be moved to final location)
|
|
||||||
|
|
||||||
Read: @.workflow/docs/$project_name/**/*.md
|
|
||||||
|
|
||||||
Context: All project documentation including module docs and project README
|
|
||||||
|
|
||||||
Generate TWO documentation files in current directory:
|
|
||||||
1. ARCHITECTURE.md - System architecture and design patterns
|
|
||||||
2. EXAMPLES.md - End-to-end usage examples
|
|
||||||
|
|
||||||
Template:
|
|
||||||
$template_content
|
|
||||||
|
|
||||||
Instructions:
|
|
||||||
- Create both ARCHITECTURE.md and EXAMPLES.md in CURRENT DIRECTORY
|
|
||||||
- Synthesize architectural patterns from module documentation
|
|
||||||
- Document system structure, module relationships, and design decisions
|
|
||||||
- Provide practical code examples and usage scenarios
|
|
||||||
- Follow template structure for both files"
|
|
||||||
|
|
||||||
elif [ "$strategy" = "http-api" ]; then
|
|
||||||
final_prompt="PURPOSE: Generate HTTP API reference documentation
|
|
||||||
|
|
||||||
PROJECT: $project_name
|
|
||||||
OUTPUT: Current directory (file will be moved to final location)
|
|
||||||
|
|
||||||
Read: @**/*.{ts,js,py,go,rs} @.workflow/docs/$project_name/**/*.md
|
|
||||||
|
|
||||||
Context: API route files and existing documentation
|
|
||||||
|
|
||||||
Generate ONE documentation file in current directory:
|
|
||||||
- README.md - HTTP API documentation (in api/ subdirectory)
|
|
||||||
|
|
||||||
Template:
|
|
||||||
$template_content
|
|
||||||
|
|
||||||
Instructions:
|
|
||||||
- Create README.md in CURRENT DIRECTORY
|
|
||||||
- Document all HTTP endpoints (routes, methods, parameters, responses)
|
|
||||||
- Include authentication requirements and error codes
|
|
||||||
- Provide request/response examples
|
|
||||||
- Follow template structure (Part B: HTTP API documentation)"
|
|
||||||
|
|
||||||
# Module-level strategies
|
|
||||||
elif [ "$strategy" = "full" ]; then
|
|
||||||
# Full strategy: read all files, generate for each directory
|
|
||||||
if [ "$folder_type" = "code" ]; then
|
|
||||||
final_prompt="PURPOSE: Generate comprehensive API and module documentation
|
|
||||||
|
|
||||||
Directory Structure Analysis:
|
|
||||||
$structure_info
|
|
||||||
|
|
||||||
SOURCE: $source_path
|
|
||||||
OUTPUT: Current directory (files will be moved to final location)
|
|
||||||
|
|
||||||
Read: @**/*
|
|
||||||
|
|
||||||
Generate TWO documentation files in current directory:
|
|
||||||
1. API.md - Code API documentation (functions, classes, interfaces)
|
|
||||||
Template:
|
|
||||||
$api_template
|
|
||||||
|
|
||||||
2. README.md - Module overview documentation
|
|
||||||
Template:
|
|
||||||
$readme_template
|
|
||||||
|
|
||||||
Instructions:
|
|
||||||
- Generate both API.md and README.md in CURRENT DIRECTORY
|
|
||||||
- If subdirectories contain code files, generate their docs too (recursive)
|
|
||||||
- Work bottom-up: deepest directories first
|
|
||||||
- Follow template structure exactly
|
|
||||||
- Use structure analysis for context"
|
|
||||||
else
|
|
||||||
# Navigation folder - README only
|
|
||||||
final_prompt="PURPOSE: Generate navigation documentation for folder structure
|
|
||||||
|
|
||||||
Directory Structure Analysis:
|
|
||||||
$structure_info
|
|
||||||
|
|
||||||
SOURCE: $source_path
|
|
||||||
OUTPUT: Current directory (file will be moved to final location)
|
|
||||||
|
|
||||||
Read: @**/*
|
|
||||||
|
|
||||||
Generate ONE documentation file in current directory:
|
|
||||||
- README.md - Navigation and folder overview
|
|
||||||
|
|
||||||
Template:
|
|
||||||
$readme_template
|
|
||||||
|
|
||||||
Instructions:
|
|
||||||
- Create README.md in CURRENT DIRECTORY
|
|
||||||
- Focus on folder structure and navigation
|
|
||||||
- Link to subdirectory documentation
|
|
||||||
- Use structure analysis for context"
|
|
||||||
fi
|
|
||||||
else
|
|
||||||
# Single strategy: read current + child docs only
|
|
||||||
if [ "$folder_type" = "code" ]; then
|
|
||||||
final_prompt="PURPOSE: Generate API and module documentation for current directory
|
|
||||||
|
|
||||||
Directory Structure Analysis:
|
|
||||||
$structure_info
|
|
||||||
|
|
||||||
SOURCE: $source_path
|
|
||||||
OUTPUT: Current directory (files will be moved to final location)
|
|
||||||
|
|
||||||
Read: @*/API.md @*/README.md @*.ts @*.tsx @*.js @*.jsx @*.py @*.sh @*.go @*.rs @*.md @*.json @*.yaml @*.yml
|
|
||||||
|
|
||||||
Generate TWO documentation files in current directory:
|
|
||||||
1. API.md - Code API documentation
|
|
||||||
Template:
|
|
||||||
$api_template
|
|
||||||
|
|
||||||
2. README.md - Module overview
|
|
||||||
Template:
|
|
||||||
$readme_template
|
|
||||||
|
|
||||||
Instructions:
|
|
||||||
- Generate both API.md and README.md in CURRENT DIRECTORY
|
|
||||||
- Reference child documentation, do not duplicate
|
|
||||||
- Follow template structure
|
|
||||||
- Use structure analysis for current directory context"
|
|
||||||
else
|
|
||||||
# Navigation folder - README only
|
|
||||||
final_prompt="PURPOSE: Generate navigation documentation
|
|
||||||
|
|
||||||
Directory Structure Analysis:
|
|
||||||
$structure_info
|
|
||||||
|
|
||||||
SOURCE: $source_path
|
|
||||||
OUTPUT: Current directory (file will be moved to final location)
|
|
||||||
|
|
||||||
Read: @*/API.md @*/README.md @*.md
|
|
||||||
|
|
||||||
Generate ONE documentation file in current directory:
|
|
||||||
- README.md - Navigation and overview
|
|
||||||
|
|
||||||
Template:
|
|
||||||
$readme_template
|
|
||||||
|
|
||||||
Instructions:
|
|
||||||
- Create README.md in CURRENT DIRECTORY
|
|
||||||
- Link to child documentation
|
|
||||||
- Use structure analysis for navigation context"
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Execute documentation generation
|
|
||||||
local start_time=$(date +%s)
|
|
||||||
echo " 🔄 Starting documentation generation..."
|
|
||||||
|
|
||||||
if cd "$source_path" 2>/dev/null; then
|
|
||||||
local tool_result=0
|
|
||||||
|
|
||||||
# Store current output path for CLI context
|
|
||||||
export DOC_OUTPUT_PATH="$output_path"
|
|
||||||
|
|
||||||
# Record git HEAD before CLI execution (to detect unwanted auto-commits)
|
|
||||||
local git_head_before=""
|
|
||||||
if git rev-parse --git-dir >/dev/null 2>&1; then
|
|
||||||
git_head_before=$(git rev-parse HEAD 2>/dev/null)
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Execute with selected tool
|
|
||||||
case "$tool" in
|
|
||||||
qwen)
|
|
||||||
if [ "$model" = "coder-model" ]; then
|
|
||||||
qwen -p "$final_prompt" --yolo 2>&1
|
|
||||||
else
|
|
||||||
qwen -p "$final_prompt" -m "$model" --yolo 2>&1
|
|
||||||
fi
|
|
||||||
tool_result=$?
|
|
||||||
;;
|
|
||||||
codex)
|
|
||||||
codex --full-auto exec "$final_prompt" -m "$model" --skip-git-repo-check -s danger-full-access 2>&1
|
|
||||||
tool_result=$?
|
|
||||||
;;
|
|
||||||
gemini)
|
|
||||||
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
|
|
||||||
tool_result=$?
|
|
||||||
;;
|
|
||||||
*)
|
|
||||||
echo " ⚠️ Unknown tool: $tool, defaulting to gemini"
|
|
||||||
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
|
|
||||||
tool_result=$?
|
|
||||||
;;
|
|
||||||
esac
|
|
||||||
|
|
||||||
# Move generated files to output directory
|
|
||||||
local docs_created=0
|
|
||||||
local moved_files=""
|
|
||||||
|
|
||||||
if [ $tool_result -eq 0 ]; then
|
|
||||||
if [ "$is_project_level" = true ]; then
|
|
||||||
# Project-level documentation files
|
|
||||||
case "$strategy" in
|
|
||||||
project-readme)
|
|
||||||
if [ -f "README.md" ]; then
|
|
||||||
mv "README.md" "$output_path/README.md" 2>/dev/null && {
|
|
||||||
docs_created=$((docs_created + 1))
|
|
||||||
moved_files+="README.md "
|
|
||||||
}
|
|
||||||
fi
|
|
||||||
;;
|
|
||||||
project-architecture)
|
|
||||||
if [ -f "ARCHITECTURE.md" ]; then
|
|
||||||
mv "ARCHITECTURE.md" "$output_path/ARCHITECTURE.md" 2>/dev/null && {
|
|
||||||
docs_created=$((docs_created + 1))
|
|
||||||
moved_files+="ARCHITECTURE.md "
|
|
||||||
}
|
|
||||||
fi
|
|
||||||
if [ -f "EXAMPLES.md" ]; then
|
|
||||||
mv "EXAMPLES.md" "$output_path/EXAMPLES.md" 2>/dev/null && {
|
|
||||||
docs_created=$((docs_created + 1))
|
|
||||||
moved_files+="EXAMPLES.md "
|
|
||||||
}
|
|
||||||
fi
|
|
||||||
;;
|
|
||||||
http-api)
|
|
||||||
if [ -f "README.md" ]; then
|
|
||||||
mv "README.md" "$output_path/README.md" 2>/dev/null && {
|
|
||||||
docs_created=$((docs_created + 1))
|
|
||||||
moved_files+="api/README.md "
|
|
||||||
}
|
|
||||||
fi
|
|
||||||
;;
|
|
||||||
esac
|
|
||||||
else
|
|
||||||
# Module-level documentation files
|
|
||||||
# Check and move API.md if it exists
|
|
||||||
if [ "$folder_type" = "code" ] && [ -f "API.md" ]; then
|
|
||||||
mv "API.md" "$output_path/API.md" 2>/dev/null && {
|
|
||||||
docs_created=$((docs_created + 1))
|
|
||||||
moved_files+="API.md "
|
|
||||||
}
|
|
||||||
fi
|
|
||||||
|
|
||||||
                # Check and move README.md if it exists
                if [ -f "README.md" ]; then
                    mv "README.md" "$output_path/README.md" 2>/dev/null && {
                        docs_created=$((docs_created + 1))
                        moved_files+="README.md "
                    }
                fi
            fi
        fi

        # Check if CLI tool auto-committed (and revert if needed)
        if [ -n "$git_head_before" ]; then
            local git_head_after=$(git rev-parse HEAD 2>/dev/null)
            if [ "$git_head_before" != "$git_head_after" ]; then
                echo " ⚠️ Detected unwanted auto-commit by CLI tool, reverting..."
                git reset --soft "$git_head_before" 2>/dev/null
                echo " ✅ Auto-commit reverted (files remain staged)"
            fi
        fi

        if [ $docs_created -gt 0 ]; then
            local end_time=$(date +%s)
            local duration=$((end_time - start_time))
            echo " ✅ Generated $docs_created doc(s) in ${duration}s: $moved_files"
            cd - > /dev/null
            return 0
        else
            echo " ❌ Documentation generation failed for $source_path"
            cd - > /dev/null
            return 1
        fi
    else
        echo " ❌ Cannot access directory: $source_path"
        return 1
    fi
}

# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    # Show help if no arguments or help requested
    if [ $# -eq 0 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
        echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
        echo ""
        echo "Module-Level Strategies:"
        echo "  full   - Generate docs for all subdirectories with code"
        echo "  single - Generate docs only for current directory"
        echo ""
        echo "Project-Level Strategies:"
        echo "  project-readme       - Generate project root README.md"
        echo "  project-architecture - Generate ARCHITECTURE.md + EXAMPLES.md"
        echo "  http-api             - Generate HTTP API documentation (api/README.md)"
        echo ""
        echo "Tools: gemini (default), qwen, codex"
        echo "Models: Use tool defaults if not specified"
        echo ""
        echo "Module Examples:"
        echo "  ./generate_module_docs.sh full ./src/auth myproject"
        echo "  ./generate_module_docs.sh single ./components myproject gemini"
        echo ""
        echo "Project Examples:"
        echo "  ./generate_module_docs.sh project-readme . myproject"
        echo "  ./generate_module_docs.sh project-architecture . myproject qwen"
        echo "  ./generate_module_docs.sh http-api . myproject"
        exit 0
    fi

    generate_module_docs "$@"
fi
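The auto-commit guard above hinges on `git reset --soft` undoing the commit while leaving the index (the staged files) intact. A minimal standalone sketch of the same check, using a throwaway repository — the paths, file names, and commit messages are illustrative, not part of the workflow:

```shell
#!/usr/bin/env bash
# Sketch: detect an unwanted commit made by a tool and undo it softly.
set -euo pipefail

repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "baseline"

head_before=$(git rev-parse HEAD)

# Simulate a CLI tool that stages a file and auto-commits it.
echo "generated" > README.md
git add README.md
git -c user.email=a@b -c user.name=a commit -q -m "unwanted auto-commit"

head_after=$(git rev-parse HEAD)
if [ "$head_before" != "$head_after" ]; then
    # Soft reset moves HEAD back but keeps README.md staged.
    git reset --soft "$head_before"
fi

git diff --cached --name-only   # README.md is still in the index
```

Because the reset is `--soft`, the caller can still decide what to do with the staged changes afterwards.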
@@ -1,166 +0,0 @@
#!/bin/bash
# Get modules organized by directory depth (deepest first)
# Usage: get_modules_by_depth.sh [format]
#   format: list|grouped|json (default: list)

# Parse .gitignore patterns and build exclusion filters
build_exclusion_filters() {
    local filters=""

    # Always exclude these system/cache directories and common web dev packages
    local system_excludes=(
        # Version control and IDE
        ".git" ".gitignore" ".gitmodules" ".gitattributes"
        ".svn" ".hg" ".bzr"
        ".history" ".vscode" ".idea" ".vs" ".vscode-test"
        ".sublime-text" ".atom"

        # Python
        "__pycache__" ".pytest_cache" ".mypy_cache" ".tox"
        ".coverage" "htmlcov" ".nox" ".venv" "venv" "env"
        ".egg-info" "*.egg-info" ".eggs" ".wheel"
        "site-packages" ".python-version" ".pyc"

        # Node.js/JavaScript
        "node_modules" ".npm" ".yarn" ".pnpm" "yarn-error.log"
        ".nyc_output" "coverage" ".next" ".nuxt"
        ".cache" ".parcel-cache" ".vite" "dist" "build"
        ".turbo" ".vercel" ".netlify"

        # Package managers
        ".pnpm-store" "pnpm-lock.yaml" "yarn.lock" "package-lock.json"
        ".bundle" "vendor/bundle" "Gemfile.lock"
        ".gradle" "gradle" "gradlew" "gradlew.bat"
        ".mvn" "target" ".m2"

        # Build/compile outputs
        "dist" "build" "out" "output" "_site" "public"
        ".output" ".generated" "generated" "gen"
        "bin" "obj" "Debug" "Release"

        # Testing
        ".pytest_cache" ".coverage" "htmlcov" "test-results"
        ".nyc_output" "junit.xml" "test_results"
        "cypress/screenshots" "cypress/videos"
        "playwright-report" ".playwright"

        # Logs and temp files
        "logs" "*.log" "log" "tmp" "temp" ".tmp" ".temp"
        ".env" ".env.local" ".env.*.local"
        ".DS_Store" "Thumbs.db" "*.tmp" "*.swp" "*.swo"

        # Documentation build outputs
        "_book" "_site" "docs/_build" "site" "gh-pages"
        ".docusaurus" ".vuepress" ".gitbook"

        # Database files
        "*.sqlite" "*.sqlite3" "*.db" "data.db"

        # OS and editor files
        ".DS_Store" "Thumbs.db" "desktop.ini"
        "*.stackdump" "*.core"

        # Cloud and deployment
        ".serverless" ".terraform" "terraform.tfstate"
        ".aws" ".azure" ".gcp"

        # Mobile development
        ".gradle" "build" ".expo" ".metro"
        "android/app/build" "ios/build" "DerivedData"

        # Game development
        "Library" "Temp" "ProjectSettings"
        "Logs" "MemoryCaptures" "UserSettings"
    )

    for exclude in "${system_excludes[@]}"; do
        filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
    done

    # Parse .gitignore if it exists
    if [ -f ".gitignore" ]; then
        while IFS= read -r line; do
            # Skip empty lines and comments
            [[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue

            # Remove trailing slash and whitespace
            line=$(echo "$line" | sed 's|/$||' | xargs)

            # Add to filters
            filters+=" -not -path '*/$line' -not -path '*/$line/*'"
        done < .gitignore
    fi

    echo "$filters"
}
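The filter string returned above embeds literal single quotes, which is why callers pass it through `eval` rather than relying on plain word-splitting: `eval` re-parses the quotes as shell syntax. A minimal sketch of the idea with a single exclusion (the directory names are illustrative):

```shell
#!/usr/bin/env bash
# Sketch: why the generated find filter string goes through eval.
# Without eval, the embedded single quotes would be passed to find
# literally; eval re-parses them as shell quoting.
set -euo pipefail

work=$(mktemp -d)
mkdir -p "$work/src" "$work/node_modules/pkg"
touch "$work/src/main.js" "$work/node_modules/pkg/index.js"
cd "$work"

filters=" -not -path '*/node_modules' -not -path '*/node_modules/*'"
eval "find . -type f $filters"   # -> ./src/main.js
```

Only `./src/main.js` survives the exclusion; everything under `node_modules` is filtered out.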

get_modules_by_depth() {
    local format="${1:-list}"
    local exclusion_filters=$(build_exclusion_filters)
    local max_depth=$(eval "find . -type d $exclusion_filters 2>/dev/null" | awk -F/ '{print NF-1}' | sort -n | tail -1)

    case "$format" in
        "grouped")
            echo "📊 Modules by depth (deepest first):"
            for depth in $(seq $max_depth -1 0); do
                local dirs=$(eval "find . -mindepth $depth -maxdepth $depth -type d $exclusion_filters 2>/dev/null" | \
                    while read dir; do
                        if [ $(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l) -gt 0 ]; then
                            local claude_indicator=""
                            [ -f "$dir/CLAUDE.md" ] && claude_indicator=" [✓]"
                            echo "$dir$claude_indicator"
                        fi
                    done)
                if [ -n "$dirs" ]; then
                    echo " 📁 Depth $depth:"
                    echo "$dirs" | sed 's/^/ - /'
                fi
            done
            ;;

        "json")
            echo "{"
            echo " \"max_depth\": $max_depth,"
            echo " \"modules\": {"
            for depth in $(seq $max_depth -1 0); do
                local dirs=$(eval "find . -mindepth $depth -maxdepth $depth -type d $exclusion_filters 2>/dev/null" | \
                    while read dir; do
                        if [ $(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l) -gt 0 ]; then
                            local has_claude="false"
                            [ -f "$dir/CLAUDE.md" ] && has_claude="true"
                            echo "{\"path\":\"$dir\",\"has_claude\":$has_claude}"
                        fi
                    done | tr '\n' ',')
                if [ -n "$dirs" ]; then
                    dirs=${dirs%,} # Remove trailing comma
                    echo " \"$depth\": [$dirs]"
                    [ $depth -gt 0 ] && echo ","
                fi
            done
            echo " }"
            echo "}"
            ;;

        "list"|*)
            # Simple list format (deepest first)
            for depth in $(seq $max_depth -1 0); do
                eval "find . -mindepth $depth -maxdepth $depth -type d $exclusion_filters 2>/dev/null" | \
                    while read dir; do
                        if [ $(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l) -gt 0 ]; then
                            local file_count=$(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l)
                            local types=$(find "$dir" -maxdepth 1 -type f -name "*.*" 2>/dev/null | \
                                grep -E '\.[^/]*$' | sed 's/.*\.//' | sort -u | tr '\n' ',' | sed 's/,$//')
                            local has_claude="no"
                            [ -f "$dir/CLAUDE.md" ] && has_claude="yes"
                            echo "depth:$depth|path:$dir|files:$file_count|types:[$types]|has_claude:$has_claude"
                        fi
                    done
            done
            ;;
    esac
}

# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    get_modules_by_depth "$@"
fi
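Each record in the default `list` format is a pipe-delimited set of `key:value` fields, which downstream tooling can split with `awk`. A sketch of consuming such records — the sample records here are made up for illustration:

```shell
#!/usr/bin/env bash
# Sketch: parsing the `list` output of get_modules_by_depth.sh.
# Each record is pipe-delimited key:value fields; awk extracts the
# paths of modules that do not yet have a CLAUDE.md.
set -euo pipefail

records='depth:2|path:./src/auth|files:5|types:[py]|has_claude:no
depth:1|path:./src|files:2|types:[py,md]|has_claude:yes'

result=$(printf '%s\n' "$records" | awk -F'|' '
    {
        split($2, p, ":")   # path:./src/auth -> p[2]
        split($5, c, ":")   # has_claude:no  -> c[2]
        if (c[2] == "no") print p[2]
    }')

echo "$result"   # -> ./src/auth
```

The same split works for any of the fields, since every record has the same fixed field order.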
@@ -1,391 +0,0 @@
#!/bin/bash
#
# UI Generate Preview v2.0 - Template-Based Preview Generation
# Purpose: Generate compare.html and index.html using template substitution
# Template: ~/.claude/workflows/_template-compare-matrix.html
#
# Usage: ui-generate-preview.sh <prototypes_dir> [--template <path>]
#

set -e

# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Default template path
TEMPLATE_PATH="$HOME/.claude/workflows/_template-compare-matrix.html"

# Parse arguments
prototypes_dir="${1:-.}"
shift || true

while [[ $# -gt 0 ]]; do
    case $1 in
        --template)
            TEMPLATE_PATH="$2"
            shift 2
            ;;
        *)
            echo -e "${RED}Unknown option: $1${NC}"
            exit 1
            ;;
    esac
done

if [[ ! -d "$prototypes_dir" ]]; then
    echo -e "${RED}Error: Directory not found: $prototypes_dir${NC}"
    exit 1
fi

cd "$prototypes_dir" || exit 1

echo -e "${GREEN}📊 Auto-detecting matrix dimensions...${NC}"

# Auto-detect styles, layouts, targets from file patterns
# Pattern: {target}-style-{s}-layout-{l}.html
styles=$(find . -maxdepth 1 -name "*-style-*-layout-*.html" | \
    sed 's/.*-style-\([0-9]\+\)-.*/\1/' | sort -un)
layouts=$(find . -maxdepth 1 -name "*-style-*-layout-*.html" | \
    sed 's/.*-layout-\([0-9]\+\)\.html/\1/' | sort -un)
targets=$(find . -maxdepth 1 -name "*-style-*-layout-*.html" | \
    sed 's/\.\///; s/-style-.*//' | sort -u)

S=$(echo "$styles" | wc -l)
L=$(echo "$layouts" | wc -l)
T=$(echo "$targets" | wc -l)

echo -e " Detected: ${GREEN}${S}${NC} styles × ${GREEN}${L}${NC} layouts × ${GREEN}${T}${NC} targets"

if [[ $S -eq 0 ]] || [[ $L -eq 0 ]] || [[ $T -eq 0 ]]; then
    echo -e "${RED}Error: No prototype files found matching pattern {target}-style-{s}-layout-{l}.html${NC}"
    exit 1
fi
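The three detection pipelines above can be checked in isolation on sample file names; a sketch using the same `sed` patterns (note that `\+` in these patterns is a GNU sed extension, and the sample names are illustrative):

```shell
#!/usr/bin/env bash
# Sketch: dimension auto-detection on sample prototype file names.
# Pattern: {target}-style-{s}-layout-{l}.html
set -euo pipefail

files='./dashboard-style-1-layout-1.html
./dashboard-style-2-layout-1.html
./login-style-1-layout-2.html'

styles=$(printf '%s\n' "$files"  | sed 's/.*-style-\([0-9]\+\)-.*/\1/'      | sort -un)
layouts=$(printf '%s\n' "$files" | sed 's/.*-layout-\([0-9]\+\)\.html/\1/'  | sort -un)
targets=$(printf '%s\n' "$files" | sed 's/\.\///; s/-style-.*//'            | sort -u)

echo "$styles"    # one style number per line:  1, 2
echo "$layouts"   # one layout number per line: 1, 2
echo "$targets"   # one target per line: dashboard, login
```

`sort -un` deduplicates and numerically sorts the extracted numbers, so `wc -l` on each variable then yields the dimension counts.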

# ============================================================================
# Generate compare.html from template
# ============================================================================

echo -e "${YELLOW}🎨 Generating compare.html from template...${NC}"

if [[ ! -f "$TEMPLATE_PATH" ]]; then
    echo -e "${RED}Error: Template not found: $TEMPLATE_PATH${NC}"
    exit 1
fi

# Build pages/targets JSON array
PAGES_JSON="["
first=true
for target in $targets; do
    if [[ "$first" == true ]]; then
        first=false
    else
        PAGES_JSON+=", "
    fi
    PAGES_JSON+="\"$target\""
done
PAGES_JSON+="]"

# Generate metadata
RUN_ID="run-$(date +%Y%m%d-%H%M%S)"
SESSION_ID="standalone"
TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u +"%Y-%m-%d")

# Replace placeholders in template
cat "$TEMPLATE_PATH" | \
    sed "s|{{run_id}}|${RUN_ID}|g" | \
    sed "s|{{session_id}}|${SESSION_ID}|g" | \
    sed "s|{{timestamp}}|${TIMESTAMP}|g" | \
    sed "s|{{style_variants}}|${S}|g" | \
    sed "s|{{layout_variants}}|${L}|g" | \
    sed "s|{{pages_json}}|${PAGES_JSON}|g" \
    > compare.html

echo -e "${GREEN} ✓ Generated compare.html from template${NC}"

# ============================================================================
# Generate index.html
# ============================================================================

echo -e "${YELLOW}📋 Generating index.html...${NC}"

cat > index.html << 'EOF'
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>UI Prototypes Index</title>
    <style>
        * { margin: 0; padding: 0; box-sizing: border-box; }
        body {
            font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
            max-width: 1200px;
            margin: 0 auto;
            padding: 40px 20px;
            background: #f5f5f5;
        }
        h1 { margin-bottom: 10px; color: #333; }
        .subtitle { color: #666; margin-bottom: 30px; }
        .cta {
            background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
            color: white;
            padding: 20px;
            border-radius: 8px;
            margin-bottom: 30px;
            box-shadow: 0 4px 6px rgba(0,0,0,0.1);
        }
        .cta h2 { margin-bottom: 10px; }
        .cta a {
            display: inline-block;
            background: white;
            color: #667eea;
            padding: 10px 20px;
            border-radius: 6px;
            text-decoration: none;
            font-weight: 600;
            margin-top: 10px;
        }
        .cta a:hover { background: #f8f9fa; }
        .style-section {
            background: white;
            padding: 20px;
            border-radius: 8px;
            margin-bottom: 20px;
            box-shadow: 0 2px 4px rgba(0,0,0,0.1);
        }
        .style-section h2 {
            color: #495057;
            margin-bottom: 15px;
            padding-bottom: 10px;
            border-bottom: 2px solid #e9ecef;
        }
        .target-group {
            margin-bottom: 20px;
        }
        .target-group h3 {
            color: #6c757d;
            font-size: 16px;
            margin-bottom: 10px;
        }
        .link-grid {
            display: grid;
            grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
            gap: 10px;
        }
        .prototype-link {
            padding: 12px 16px;
            background: #f8f9fa;
            border: 1px solid #dee2e6;
            border-radius: 6px;
            text-decoration: none;
            color: #495057;
            display: flex;
            justify-content: space-between;
            align-items: center;
            transition: all 0.2s;
        }
        .prototype-link:hover {
            background: #e9ecef;
            border-color: #667eea;
            transform: translateX(2px);
        }
        .prototype-link .label { font-weight: 500; }
        .prototype-link .icon { color: #667eea; }
    </style>
</head>
<body>
    <h1>🎨 UI Prototypes Index</h1>
    <p class="subtitle">Generated __S__×__L__×__T__ = __TOTAL__ prototypes</p>

    <div class="cta">
        <h2>📊 Interactive Comparison</h2>
        <p>View all styles and layouts side-by-side in an interactive matrix</p>
        <a href="compare.html">Open Matrix View →</a>
    </div>

    <h2>📂 All Prototypes</h2>
__CONTENT__
</body>
</html>
EOF

# Build content HTML
CONTENT=""
for style in $styles; do
    CONTENT+="<div class='style-section'>"$'\n'
    CONTENT+="<h2>Style ${style}</h2>"$'\n'

    for target in $targets; do
        target_capitalized="$(echo ${target:0:1} | tr '[:lower:]' '[:upper:]')${target:1}"
        CONTENT+="<div class='target-group'>"$'\n'
        CONTENT+="<h3>${target_capitalized}</h3>"$'\n'
        CONTENT+="<div class='link-grid'>"$'\n'

        for layout in $layouts; do
            html_file="${target}-style-${style}-layout-${layout}.html"
            if [[ -f "$html_file" ]]; then
                CONTENT+="<a href='${html_file}' class='prototype-link' target='_blank'>"$'\n'
                CONTENT+="<span class='label'>Layout ${layout}</span>"$'\n'
                CONTENT+="<span class='icon'>↗</span>"$'\n'
                CONTENT+="</a>"$'\n'
            fi
        done

        CONTENT+="</div></div>"$'\n'
    done

    CONTENT+="</div>"$'\n'
done

# Calculate total
TOTAL_PROTOTYPES=$((S * L * T))

# Replace placeholders (using a temp file for complex replacement)
{
    echo "$CONTENT" > /tmp/content_tmp.txt
    sed "s|__S__|${S}|g" index.html | \
        sed "s|__L__|${L}|g" | \
        sed "s|__T__|${T}|g" | \
        sed "s|__TOTAL__|${TOTAL_PROTOTYPES}|g" | \
        sed -e "/__CONTENT__/r /tmp/content_tmp.txt" -e "/__CONTENT__/d" > /tmp/index_tmp.html
    mv /tmp/index_tmp.html index.html
    rm -f /tmp/content_tmp.txt
}
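The `sed -e "/__CONTENT__/r file" -e "/__CONTENT__/d"` pair above is the standard idiom for splicing a multi-line file in place of a single placeholder line: `r` queues the file's contents to print after the matching line, and `d` then drops the placeholder itself. A minimal self-contained sketch (file names are illustrative):

```shell
#!/usr/bin/env bash
# Sketch: replacing a placeholder line with a multi-line file via sed.
# `r` appends the file after the matching line; `d` deletes the
# placeholder, so the file contents effectively replace it.
set -euo pipefail

work=$(mktemp -d)
printf '<ul>\n__CONTENT__\n</ul>\n' > "$work/page.html"
printf '<li>one</li>\n<li>two</li>\n'  > "$work/items.html"

merged=$(sed -e "/__CONTENT__/r $work/items.html" -e "/__CONTENT__/d" "$work/page.html")
echo "$merged"
# <ul>
# <li>one</li>
# <li>two</li>
# </ul>
```

This works because `r` output is emitted at the end of the cycle even when the pattern space is deleted, which a plain `s|placeholder|…|` substitution cannot do for multi-line replacements.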

echo -e "${GREEN} ✓ Generated index.html${NC}"

# ============================================================================
# Generate PREVIEW.md
# ============================================================================

echo -e "${YELLOW}📝 Generating PREVIEW.md...${NC}"

cat > PREVIEW.md << EOF
# UI Prototypes Preview Guide

Generated: $(date +"%Y-%m-%d %H:%M:%S")

## 📊 Matrix Dimensions

- **Styles**: ${S}
- **Layouts**: ${L}
- **Targets**: ${T}
- **Total Prototypes**: $((S*L*T))

## 🌐 How to View

### Option 1: Interactive Matrix (Recommended)

Open \`compare.html\` in your browser to see all prototypes in an interactive matrix view.

**Features**:
- Side-by-side comparison of all styles and layouts
- Switch between targets using the dropdown
- Adjust grid columns for better viewing
- Direct links to full-page views
- Selection system with export to JSON
- Fullscreen mode for detailed inspection

### Option 2: Simple Index

Open \`index.html\` for a simple list of all prototypes with direct links.

### Option 3: Direct File Access

Each prototype can be opened directly:
- Pattern: \`{target}-style-{s}-layout-{l}.html\`
- Example: \`dashboard-style-1-layout-1.html\`

## 📁 File Structure

\`\`\`
prototypes/
├── compare.html   # Interactive matrix view
├── index.html     # Simple navigation index
├── PREVIEW.md     # This file
EOF

for style in $styles; do
    for target in $targets; do
        for layout in $layouts; do
            echo "├── ${target}-style-${style}-layout-${layout}.html" >> PREVIEW.md
            echo "├── ${target}-style-${style}-layout-${layout}.css" >> PREVIEW.md
        done
    done
done

cat >> PREVIEW.md << 'EOF2'
```

## 🎨 Style Variants

EOF2

for style in $styles; do
    cat >> PREVIEW.md << EOF3
### Style ${style}

EOF3
    style_guide="../style-extraction/style-${style}/style-guide.md"
    if [[ -f "$style_guide" ]]; then
        head -n 10 "$style_guide" | tail -n +2 >> PREVIEW.md 2>/dev/null || echo "Design philosophy and tokens" >> PREVIEW.md
    else
        echo "Design system ${style}" >> PREVIEW.md
    fi
    echo "" >> PREVIEW.md
done

cat >> PREVIEW.md << 'EOF4'

## 🎯 Targets

EOF4

for target in $targets; do
    target_capitalized="$(echo ${target:0:1} | tr '[:lower:]' '[:upper:]')${target:1}"
    echo "- **${target_capitalized}**: ${L} layouts × ${S} styles = $((L*S)) variations" >> PREVIEW.md
done

cat >> PREVIEW.md << 'EOF5'

## 💡 Tips

1. **Comparison**: Use compare.html to see how different styles affect the same layout
2. **Navigation**: Use index.html for quick access to specific prototypes
3. **Selection**: Mark favorites in compare.html using star icons
4. **Export**: Download selection JSON for implementation planning
5. **Inspection**: Open browser DevTools to inspect HTML structure and CSS
6. **Sharing**: All files are standalone - can be shared or deployed directly

## 📝 Next Steps

1. Review prototypes in compare.html
2. Select preferred style × layout combinations
3. Export selections as JSON
4. Provide feedback for refinement
5. Use selected designs for implementation

---

Generated by /workflow:ui-design:generate-v2 (Style-Centric Architecture)
EOF5

echo -e "${GREEN} ✓ Generated PREVIEW.md${NC}"

# ============================================================================
# Completion Summary
# ============================================================================

echo ""
echo -e "${GREEN}✅ Preview generation complete!${NC}"
echo -e " Files created: compare.html, index.html, PREVIEW.md"
echo -e " Matrix: ${S} styles × ${L} layouts × ${T} targets = $((S*L*T)) prototypes"
echo ""
echo -e "${YELLOW}🌐 Next Steps:${NC}"
echo -e " 1. Open compare.html for interactive matrix view"
echo -e " 2. Open index.html for simple navigation"
echo -e " 3. Read PREVIEW.md for detailed usage guide"
echo ""
@@ -1,811 +0,0 @@
#!/bin/bash

# UI Prototype Instantiation Script with Preview Generation (v3.0 - Auto-detect)
# Purpose: Generate S × L × P final prototypes from templates + interactive preview files
# Usage:
#   Simple: ui-instantiate-prototypes.sh <prototypes_dir>
#   Full:   ui-instantiate-prototypes.sh <base_path> <pages> <style_variants> <layout_variants> [options]

# Use safer error handling
set -o pipefail

# ============================================================================
# Helper Functions
# ============================================================================

log_info() {
    echo "$1"
}

log_success() {
    echo "✅ $1"
}

log_error() {
    echo "❌ $1"
}

log_warning() {
    echo "⚠️ $1"
}

# Auto-detect pages from templates directory
auto_detect_pages() {
    local templates_dir="$1/_templates"

    if [ ! -d "$templates_dir" ]; then
        log_error "Templates directory not found: $templates_dir"
        return 1
    fi

    # Find unique page names from template files (e.g., login-layout-1.html -> login)
    local pages=$(find "$templates_dir" -name "*-layout-*.html" -type f | \
        sed 's|.*/||' | \
        sed 's|-layout-[0-9]*\.html||' | \
        sort -u | \
        tr '\n' ',' | \
        sed 's/,$//')

    echo "$pages"
}
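The page-detection pipeline above can be exercised on sample template names; a sketch using the same `sed`/`tr` steps (the file names are illustrative):

```shell
#!/usr/bin/env bash
# Sketch: page auto-detection from template file names.
# login-layout-1.html, login-layout-2.html, signup-layout-1.html -> "login,signup"
set -euo pipefail

templates='_templates/login-layout-1.html
_templates/login-layout-2.html
_templates/signup-layout-1.html'

pages=$(printf '%s\n' "$templates" | \
    sed 's|.*/||' | \
    sed 's|-layout-[0-9]*\.html||' | \
    sort -u | \
    tr '\n' ',' | \
    sed 's/,$//')

echo "$pages"   # -> login,signup
```

`sort -u` collapses the repeated page names, `tr` joins them with commas, and the final `sed` strips the trailing comma left by `tr`.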

# Auto-detect style variants count
auto_detect_style_variants() {
    local base_path="$1"
    local style_dir="$base_path/../style-extraction"

    if [ ! -d "$style_dir" ]; then
        log_warning "Style consolidation directory not found: $style_dir"
        echo "3" # Default
        return
    fi

    # Count style-* directories
    local count=$(find "$style_dir" -maxdepth 1 -type d -name "style-*" | wc -l)

    if [ "$count" -eq 0 ]; then
        echo "3" # Default
    else
        echo "$count"
    fi
}

# Auto-detect layout variants count
auto_detect_layout_variants() {
    local templates_dir="$1/_templates"

    if [ ! -d "$templates_dir" ]; then
        echo "3" # Default
        return
    fi

    # Find the first page and count its layouts
    local first_page=$(find "$templates_dir" -name "*-layout-1.html" -type f | head -1 | sed 's|.*/||' | sed 's|-layout-1\.html||')

    if [ -z "$first_page" ]; then
        echo "3" # Default
        return
    fi

    # Count layout files for this page
    local count=$(find "$templates_dir" -name "${first_page}-layout-*.html" -type f | wc -l)

    if [ "$count" -eq 0 ]; then
        echo "3" # Default
    else
        echo "$count"
    fi
}

# ============================================================================
# Parse Arguments
# ============================================================================

show_usage() {
    cat <<'EOF'
Usage:
  Simple (auto-detect): ui-instantiate-prototypes.sh <prototypes_dir> [options]
  Full:                 ui-instantiate-prototypes.sh <base_path> <pages> <style_variants> <layout_variants> [options]

Simple Mode (Recommended):
  prototypes_dir           Path to prototypes directory (auto-detects everything)

Full Mode:
  base_path                Base path to prototypes directory
  pages                    Comma-separated list of pages/components
  style_variants           Number of style variants (1-5)
  layout_variants          Number of layout variants (1-5)

Options:
  --run-id <id>            Run ID (default: auto-generated)
  --session-id <id>        Session ID (default: standalone)
  --mode <page|component>  Exploration mode (default: page)
  --template <path>        Path to compare.html template (default: ~/.claude/workflows/_template-compare-matrix.html)
  --no-preview             Skip preview file generation
  --help                   Show this help message

Examples:
  # Simple usage (auto-detect everything)
  ui-instantiate-prototypes.sh .workflow/design-run-*/prototypes

  # With options
  ui-instantiate-prototypes.sh .workflow/design-run-*/prototypes --session-id WFS-auth

  # Full manual mode
  ui-instantiate-prototypes.sh .workflow/design-run-*/prototypes "login,dashboard" 3 3 --session-id WFS-auth
EOF
}

# Default values
BASE_PATH=""
PAGES=""
STYLE_VARIANTS=""
LAYOUT_VARIANTS=""
RUN_ID="run-$(date +%Y%m%d-%H%M%S)"
SESSION_ID="standalone"
MODE="page"
TEMPLATE_PATH="$HOME/.claude/workflows/_template-compare-matrix.html"
GENERATE_PREVIEW=true
AUTO_DETECT=false

# Parse arguments
if [ $# -lt 1 ]; then
    log_error "Missing required arguments"
    show_usage
    exit 1
fi

# Check if using simple mode (only 1 positional arg before options)
if [ $# -eq 1 ] || [[ "$2" == --* ]]; then
    # Simple mode - auto-detect
    AUTO_DETECT=true
    BASE_PATH="$1"
    shift 1
else
    # Full mode - manual parameters
    if [ $# -lt 4 ]; then
        log_error "Full mode requires 4 positional arguments"
        show_usage
        exit 1
    fi

    BASE_PATH="$1"
    PAGES="$2"
    STYLE_VARIANTS="$3"
    LAYOUT_VARIANTS="$4"
    shift 4
fi

# Parse optional arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        --run-id)
            RUN_ID="$2"
            shift 2
            ;;
        --session-id)
            SESSION_ID="$2"
            shift 2
            ;;
        --mode)
            MODE="$2"
            shift 2
            ;;
        --template)
            TEMPLATE_PATH="$2"
            shift 2
            ;;
        --no-preview)
            GENERATE_PREVIEW=false
            shift
            ;;
        --help)
            show_usage
            exit 0
            ;;
        *)
            log_error "Unknown option: $1"
            show_usage
            exit 1
            ;;
    esac
done

# ============================================================================
# Auto-detection (if enabled)
# ============================================================================

if [ "$AUTO_DETECT" = true ]; then
    log_info "🔍 Auto-detecting configuration from directory..."

    # Detect pages
    PAGES=$(auto_detect_pages "$BASE_PATH")
    if [ -z "$PAGES" ]; then
        log_error "Could not auto-detect pages from templates"
        exit 1
    fi
    log_info " Pages: $PAGES"

    # Detect style variants
    STYLE_VARIANTS=$(auto_detect_style_variants "$BASE_PATH")
    log_info " Style variants: $STYLE_VARIANTS"

    # Detect layout variants
    LAYOUT_VARIANTS=$(auto_detect_layout_variants "$BASE_PATH")
    log_info " Layout variants: $LAYOUT_VARIANTS"

    echo ""
fi

# ============================================================================
# Validation
# ============================================================================

# Validate base path
if [ ! -d "$BASE_PATH" ]; then
    log_error "Base path not found: $BASE_PATH"
    exit 1
fi

# Validate style and layout variants
if [ "$STYLE_VARIANTS" -lt 1 ] || [ "$STYLE_VARIANTS" -gt 5 ]; then
    log_error "Style variants must be between 1 and 5 (got: $STYLE_VARIANTS)"
    exit 1
fi

if [ "$LAYOUT_VARIANTS" -lt 1 ] || [ "$LAYOUT_VARIANTS" -gt 5 ]; then
    log_error "Layout variants must be between 1 and 5 (got: $LAYOUT_VARIANTS)"
    exit 1
fi

# Validate STYLE_VARIANTS against actual style directories
if [ "$STYLE_VARIANTS" -gt 0 ]; then
    style_dir="$BASE_PATH/../style-extraction"

    if [ ! -d "$style_dir" ]; then
        log_error "Style consolidation directory not found: $style_dir"
        log_info "Run /workflow:ui-design:consolidate first"
        exit 1
    fi

    actual_styles=$(find "$style_dir" -maxdepth 1 -type d -name "style-*" 2>/dev/null | wc -l)

    if [ "$actual_styles" -eq 0 ]; then
        log_error "No style directories found in: $style_dir"
        log_info "Run /workflow:ui-design:consolidate first to generate style design systems"
        exit 1
    fi

    if [ "$STYLE_VARIANTS" -gt "$actual_styles" ]; then
        log_warning "Requested $STYLE_VARIANTS style variants, but only found $actual_styles directories"
        log_info "Available style directories:"
        find "$style_dir" -maxdepth 1 -type d -name "style-*" 2>/dev/null | sed 's|.*/||' | sort
        log_info "Auto-correcting to $actual_styles style variants"
        STYLE_VARIANTS=$actual_styles
    fi
fi

# Parse pages into array
IFS=',' read -ra PAGE_ARRAY <<< "$PAGES"

if [ ${#PAGE_ARRAY[@]} -eq 0 ]; then
    log_error "No pages found"
    exit 1
fi
|
|
||||||
|
|
||||||
# ============================================================================
# Header Output
# ============================================================================

echo "========================================="
echo "UI Prototype Instantiation & Preview"
if [ "$AUTO_DETECT" = true ]; then
    echo "(Auto-detected configuration)"
fi
echo "========================================="
echo "Base Path: $BASE_PATH"
echo "Mode: $MODE"
echo "Pages/Components: $PAGES"
echo "Style Variants: $STYLE_VARIANTS"
echo "Layout Variants: $LAYOUT_VARIANTS"
echo "Run ID: $RUN_ID"
echo "Session ID: $SESSION_ID"
echo "========================================="
echo ""

# Change to base path
cd "$BASE_PATH" || exit 1

# ============================================================================
# Phase 1: Instantiate Prototypes
# ============================================================================

log_info "🚀 Phase 1: Instantiating prototypes from templates..."
echo ""

total_generated=0
total_failed=0

for page in "${PAGE_ARRAY[@]}"; do
    # Trim whitespace
    page=$(echo "$page" | xargs)

    log_info "Processing page/component: $page"

    for s in $(seq 1 "$STYLE_VARIANTS"); do
        for l in $(seq 1 "$LAYOUT_VARIANTS"); do
            # Define file paths
            TEMPLATE_HTML="_templates/${page}-layout-${l}.html"
            STRUCTURAL_CSS="_templates/${page}-layout-${l}.css"
            TOKEN_CSS="../style-extraction/style-${s}/tokens.css"
            OUTPUT_HTML="${page}-style-${s}-layout-${l}.html"

            # Copy template and replace placeholders
            if [ -f "$TEMPLATE_HTML" ]; then
                cp "$TEMPLATE_HTML" "$OUTPUT_HTML" || {
                    log_error "Failed to copy template: $TEMPLATE_HTML"
                    ((total_failed++))
                    continue
                }

                # Replace CSS placeholders (Windows-compatible sed syntax)
                sed -i "s|{{STRUCTURAL_CSS}}|${STRUCTURAL_CSS}|g" "$OUTPUT_HTML" || true
                sed -i "s|{{TOKEN_CSS}}|${TOKEN_CSS}|g" "$OUTPUT_HTML" || true

                log_success "Created: $OUTPUT_HTML"
                ((total_generated++))

                # Create implementation notes (simplified)
                NOTES_FILE="${page}-style-${s}-layout-${l}-notes.md"

                # Generate notes with simple heredoc
                cat > "$NOTES_FILE" <<NOTESEOF
# Implementation Notes: ${page}-style-${s}-layout-${l}

## Generation Details
- **Template**: ${TEMPLATE_HTML}
- **Structural CSS**: ${STRUCTURAL_CSS}
- **Style Tokens**: ${TOKEN_CSS}
- **Layout Strategy**: Layout ${l}
- **Style Variant**: Style ${s}
- **Mode**: ${MODE}

## Template Reuse
This prototype was generated from a shared layout template to ensure consistency
across all style variants. The HTML structure is identical for all ${page}-layout-${l}
prototypes, with only the design tokens (colors, fonts, spacing) varying.

## Design System Reference
Refer to \`../style-extraction/style-${s}/style-guide.md\` for:
- Design philosophy
- Token usage guidelines
- Component patterns
- Accessibility requirements

## Customization
To modify this prototype:
1. Edit the layout template: \`${TEMPLATE_HTML}\` (affects all styles)
2. Edit the structural CSS: \`${STRUCTURAL_CSS}\` (affects all styles)
3. Edit design tokens: \`${TOKEN_CSS}\` (affects only this style variant)

## Run Information
- **Run ID**: ${RUN_ID}
- **Session ID**: ${SESSION_ID}
- **Generated**: $(date -u +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u +%Y-%m-%d)
NOTESEOF

            else
                log_error "Template not found: $TEMPLATE_HTML"
                ((total_failed++))
            fi
        done
    done
done

echo ""
log_success "Phase 1 complete: Generated ${total_generated} prototypes"
if [ $total_failed -gt 0 ]; then
    log_warning "Failed: ${total_failed} prototypes"
fi
echo ""

# ============================================================================
# Phase 2: Generate Preview Files (if enabled)
# ============================================================================

if [ "$GENERATE_PREVIEW" = false ]; then
    log_info "⏭️ Skipping preview generation (--no-preview flag)"
    exit 0
fi

log_info "🎨 Phase 2: Generating preview files..."
echo ""

# ============================================================================
# 2a. Generate compare.html from template
# ============================================================================

if [ ! -f "$TEMPLATE_PATH" ]; then
    log_warning "Template not found: $TEMPLATE_PATH"
    log_info " Skipping compare.html generation"
else
    log_info "📄 Generating compare.html from template..."

    # Convert page array to JSON format
    PAGES_JSON="["
    for i in "${!PAGE_ARRAY[@]}"; do
        page=$(echo "${PAGE_ARRAY[$i]}" | xargs)
        PAGES_JSON+="\"$page\""
        if [ $i -lt $((${#PAGE_ARRAY[@]} - 1)) ]; then
            PAGES_JSON+=", "
        fi
    done
    PAGES_JSON+="]"

    TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u +%Y-%m-%d)

    # Read template and replace placeholders
    cat "$TEMPLATE_PATH" | \
        sed "s|{{run_id}}|${RUN_ID}|g" | \
        sed "s|{{session_id}}|${SESSION_ID}|g" | \
        sed "s|{{timestamp}}|${TIMESTAMP}|g" | \
        sed "s|{{style_variants}}|${STYLE_VARIANTS}|g" | \
        sed "s|{{layout_variants}}|${LAYOUT_VARIANTS}|g" | \
        sed "s|{{pages_json}}|${PAGES_JSON}|g" \
        > compare.html

    log_success "Generated: compare.html"
fi

# ============================================================================
# 2b. Generate index.html
# ============================================================================

log_info "📄 Generating index.html..."

# Calculate total prototypes
TOTAL_PROTOTYPES=$((STYLE_VARIANTS * LAYOUT_VARIANTS * ${#PAGE_ARRAY[@]}))

# Generate index.html with simple heredoc
cat > index.html <<'INDEXEOF'
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>UI Prototypes - __MODE__ Mode - __RUN_ID__</title>
    <style>
        body {
            font-family: system-ui, -apple-system, sans-serif;
            max-width: 900px;
            margin: 2rem auto;
            padding: 0 2rem;
            background: #f9fafb;
        }
        .header {
            background: white;
            padding: 2rem;
            border-radius: 0.75rem;
            box-shadow: 0 1px 3px rgba(0,0,0,0.1);
            margin-bottom: 2rem;
        }
        h1 {
            color: #2563eb;
            margin-bottom: 0.5rem;
            font-size: 2rem;
        }
        .meta {
            color: #6b7280;
            font-size: 0.875rem;
            margin-top: 0.5rem;
        }
        .info {
            background: #f3f4f6;
            padding: 1.5rem;
            border-radius: 0.5rem;
            margin: 1.5rem 0;
            border-left: 4px solid #2563eb;
        }
        .cta {
            display: inline-block;
            background: #2563eb;
            color: white;
            padding: 1rem 2rem;
            border-radius: 0.5rem;
            text-decoration: none;
            font-weight: 600;
            margin: 1rem 0;
            transition: background 0.2s;
        }
        .cta:hover {
            background: #1d4ed8;
        }
        .stats {
            display: grid;
            grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
            gap: 1rem;
            margin: 1.5rem 0;
        }
        .stat {
            background: white;
            border: 1px solid #e5e7eb;
            padding: 1.5rem;
            border-radius: 0.5rem;
            text-align: center;
            box-shadow: 0 1px 2px rgba(0,0,0,0.05);
        }
        .stat-value {
            font-size: 2.5rem;
            font-weight: bold;
            color: #2563eb;
            margin-bottom: 0.25rem;
        }
        .stat-label {
            color: #6b7280;
            font-size: 0.875rem;
        }
        .section {
            background: white;
            padding: 2rem;
            border-radius: 0.75rem;
            margin-bottom: 2rem;
            box-shadow: 0 1px 3px rgba(0,0,0,0.1);
        }
        h2 {
            color: #1f2937;
            margin-bottom: 1rem;
            font-size: 1.5rem;
        }
        ul {
            line-height: 1.8;
            color: #374151;
        }
        .pages-list {
            list-style: none;
            padding: 0;
        }
        .pages-list li {
            background: #f9fafb;
            padding: 0.75rem 1rem;
            margin: 0.5rem 0;
            border-radius: 0.375rem;
            border-left: 3px solid #2563eb;
        }
        .badge {
            display: inline-block;
            background: #dbeafe;
            color: #1e40af;
            padding: 0.25rem 0.75rem;
            border-radius: 0.25rem;
            font-size: 0.75rem;
            font-weight: 600;
            margin-left: 0.5rem;
        }
    </style>
</head>
<body>
    <div class="header">
        <h1>🎨 UI Prototype __MODE__ Mode</h1>
        <div class="meta">
            <strong>Run ID:</strong> __RUN_ID__ |
            <strong>Session:</strong> __SESSION_ID__ |
            <strong>Generated:</strong> __TIMESTAMP__
        </div>
    </div>

    <div class="info">
        <p><strong>Matrix Configuration:</strong> __STYLE_VARIANTS__ styles × __LAYOUT_VARIANTS__ layouts × __PAGE_COUNT__ __MODE__s</p>
        <p><strong>Total Prototypes:</strong> __TOTAL_PROTOTYPES__ interactive HTML files</p>
    </div>

    <a href="compare.html" class="cta">🔍 Open Interactive Matrix Comparison →</a>

    <div class="stats">
        <div class="stat">
            <div class="stat-value">__STYLE_VARIANTS__</div>
            <div class="stat-label">Style Variants</div>
        </div>
        <div class="stat">
            <div class="stat-value">__LAYOUT_VARIANTS__</div>
            <div class="stat-label">Layout Options</div>
        </div>
        <div class="stat">
            <div class="stat-value">__PAGE_COUNT__</div>
            <div class="stat-label">__MODE__s</div>
        </div>
        <div class="stat">
            <div class="stat-value">__TOTAL_PROTOTYPES__</div>
            <div class="stat-label">Total Prototypes</div>
        </div>
    </div>

    <div class="section">
        <h2>🌟 Features</h2>
        <ul>
            <li><strong>Interactive Matrix View:</strong> __STYLE_VARIANTS__×__LAYOUT_VARIANTS__ grid with synchronized scrolling</li>
            <li><strong>Flexible Zoom:</strong> 25%, 50%, 75%, 100% viewport scaling</li>
            <li><strong>Fullscreen Mode:</strong> Detailed view for individual prototypes</li>
            <li><strong>Selection System:</strong> Mark favorites with export to JSON</li>
            <li><strong>__MODE__ Switcher:</strong> Compare different __MODE__s side-by-side</li>
            <li><strong>Persistent State:</strong> Selections saved in localStorage</li>
        </ul>
    </div>

    <div class="section">
        <h2>📄 Generated __MODE__s</h2>
        <ul class="pages-list">
__PAGES_LIST__
        </ul>
    </div>

    <div class="section">
        <h2>📚 Next Steps</h2>
        <ol>
            <li>Open <code>compare.html</code> to explore all variants in matrix view</li>
            <li>Use zoom and sync scroll controls to compare details</li>
            <li>Select your preferred style×layout combinations</li>
            <li>Export selections as JSON for implementation planning</li>
            <li>Review implementation notes in <code>*-notes.md</code> files</li>
        </ol>
    </div>
</body>
</html>
INDEXEOF

# Build pages list HTML
PAGES_LIST_HTML=""
for page in "${PAGE_ARRAY[@]}"; do
    page=$(echo "$page" | xargs)
    VARIANT_COUNT=$((STYLE_VARIANTS * LAYOUT_VARIANTS))
    PAGES_LIST_HTML+=" <li>\n"
    PAGES_LIST_HTML+=" <strong>${page}</strong>\n"
    PAGES_LIST_HTML+=" <span class=\"badge\">${STYLE_VARIANTS}×${LAYOUT_VARIANTS} = ${VARIANT_COUNT} variants</span>\n"
    PAGES_LIST_HTML+=" </li>\n"
done

# Replace all placeholders in index.html
MODE_UPPER=$(echo "$MODE" | awk '{print toupper(substr($0,1,1)) tolower(substr($0,2))}')
sed -i "s|__RUN_ID__|${RUN_ID}|g" index.html
sed -i "s|__SESSION_ID__|${SESSION_ID}|g" index.html
sed -i "s|__TIMESTAMP__|${TIMESTAMP}|g" index.html
sed -i "s|__MODE__|${MODE_UPPER}|g" index.html
sed -i "s|__STYLE_VARIANTS__|${STYLE_VARIANTS}|g" index.html
sed -i "s|__LAYOUT_VARIANTS__|${LAYOUT_VARIANTS}|g" index.html
sed -i "s|__PAGE_COUNT__|${#PAGE_ARRAY[@]}|g" index.html
sed -i "s|__TOTAL_PROTOTYPES__|${TOTAL_PROTOTYPES}|g" index.html
sed -i "s|__PAGES_LIST__|${PAGES_LIST_HTML}|g" index.html

log_success "Generated: index.html"

# ============================================================================
# 2c. Generate PREVIEW.md
# ============================================================================

log_info "📄 Generating PREVIEW.md..."

cat > PREVIEW.md <<PREVIEWEOF
# UI Prototype Preview Guide

## Quick Start
1. Open \`index.html\` for overview and navigation
2. Open \`compare.html\` for interactive matrix comparison
3. Use browser developer tools to inspect responsive behavior

## Configuration

- **Exploration Mode:** ${MODE_UPPER}
- **Run ID:** ${RUN_ID}
- **Session ID:** ${SESSION_ID}
- **Style Variants:** ${STYLE_VARIANTS}
- **Layout Options:** ${LAYOUT_VARIANTS}
- **${MODE_UPPER}s:** ${PAGES}
- **Total Prototypes:** ${TOTAL_PROTOTYPES}
- **Generated:** ${TIMESTAMP}

## File Naming Convention

\`\`\`
{${MODE}}-style-{s}-layout-{l}.html
\`\`\`

**Example:** \`dashboard-style-1-layout-2.html\`
- ${MODE_UPPER}: dashboard
- Style: Design system 1
- Layout: Layout variant 2

## Interactive Features (compare.html)

### Matrix View
- **Grid Layout:** ${STYLE_VARIANTS}×${LAYOUT_VARIANTS} table with all prototypes visible
- **Synchronized Scroll:** All iframes scroll together (toggle with button)
- **Zoom Controls:** Adjust viewport scale (25%, 50%, 75%, 100%)
- **${MODE_UPPER} Selector:** Switch between different ${MODE}s instantly

### Prototype Actions
- **⭐ Selection:** Click star icon to mark favorites
- **⛶ Fullscreen:** View prototype in fullscreen overlay
- **↗ New Tab:** Open prototype in dedicated browser tab

### Selection Export
1. Select preferred prototypes using star icons
2. Click "Export Selection" button
3. Downloads JSON file: \`selection-${RUN_ID}.json\`
4. Use exported file for implementation planning

## Design System References

Each prototype references a specific style design system:
PREVIEWEOF

# Add style references
for s in $(seq 1 "$STYLE_VARIANTS"); do
    cat >> PREVIEW.md <<STYLEEOF

### Style ${s}
- **Tokens:** \`../style-extraction/style-${s}/design-tokens.json\`
- **CSS Variables:** \`../style-extraction/style-${s}/tokens.css\`
- **Style Guide:** \`../style-extraction/style-${s}/style-guide.md\`
STYLEEOF
done

cat >> PREVIEW.md <<'FOOTEREOF'

## Responsive Testing

All prototypes are mobile-first responsive. Test at these breakpoints:

- **Mobile:** 375px - 767px
- **Tablet:** 768px - 1023px
- **Desktop:** 1024px+

Use browser DevTools responsive mode for testing.

## Accessibility Features

- Semantic HTML5 structure
- ARIA attributes for screen readers
- Keyboard navigation support
- Proper heading hierarchy
- Focus indicators

## Next Steps

1. **Review:** Open `compare.html` and explore all variants
2. **Select:** Mark preferred prototypes using star icons
3. **Export:** Download selection JSON for implementation
4. **Implement:** Use `/workflow:ui-design:update` to integrate selected designs
5. **Plan:** Run `/workflow:plan` to generate implementation tasks

---

**Generated by:** `ui-instantiate-prototypes.sh`
**Version:** 3.0 (auto-detect mode)
FOOTEREOF

log_success "Generated: PREVIEW.md"

# ============================================================================
# Completion Summary
# ============================================================================

echo ""
echo "========================================="
echo "✅ Generation Complete!"
echo "========================================="
echo ""
echo "📊 Summary:"
echo " Prototypes: ${total_generated} generated"
if [ $total_failed -gt 0 ]; then
    echo " Failed: ${total_failed}"
fi
echo " Preview Files: compare.html, index.html, PREVIEW.md"
echo " Matrix: ${STYLE_VARIANTS}×${LAYOUT_VARIANTS} (${#PAGE_ARRAY[@]} ${MODE}s)"
echo " Total Files: ${TOTAL_PROTOTYPES} prototypes + preview files"
echo ""
echo "🌐 Next Steps:"
echo " 1. Open: ${BASE_PATH}/index.html"
echo " 2. Explore: ${BASE_PATH}/compare.html"
echo " 3. Review: ${BASE_PATH}/PREVIEW.md"
echo ""
echo "Performance: Template-based approach with ${STYLE_VARIANTS}× speedup"
echo "========================================="
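The Phase 1 placeholder substitution above can be sketched in isolation. This is a minimal sketch, not the script's real flow: the file names (`template.html`, `output.html`) are illustrative stand-ins for the `_templates/*.html` and output paths, and it assumes GNU `sed` (`-i` without a backup suffix), as the script itself does.

```shell
#!/bin/sh
# Minimal standalone sketch of the template-instantiation step.
# Assumes GNU sed; file names here are illustrative, not the script's real paths.
set -eu

# A tiny stand-in for a _templates/*.html layout template
printf '%s\n' \
  '<link rel="stylesheet" href="{{STRUCTURAL_CSS}}">' \
  '<link rel="stylesheet" href="{{TOKEN_CSS}}">' > template.html

STRUCTURAL_CSS="_templates/page-layout-1.css"
TOKEN_CSS="../style-extraction/style-1/tokens.css"

cp template.html output.html
# Pipe delimiters keep the slashes in the CSS paths from clashing with s///
sed -i "s|{{STRUCTURAL_CSS}}|${STRUCTURAL_CSS}|g" output.html
sed -i "s|{{TOKEN_CSS}}|${TOKEN_CSS}|g" output.html

cat output.html
```

The pipe (`|`) delimiter is the same choice the script makes: substituted values are relative paths containing `/`, which would terminate a conventional `s/…/…/` expression early.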
@@ -1,333 +0,0 @@
#!/bin/bash
# Update CLAUDE.md for modules with two strategies
# Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]
#   strategy: single-layer|multi-layer
#   module_path: Path to the module directory
#   tool: gemini|qwen|codex (default: gemini)
#   model: Model name (optional, uses tool defaults)
#
# Default Models:
#   gemini: gemini-2.5-flash
#   qwen: coder-model
#   codex: gpt5-codex
#
# Strategies:
#   single-layer: Upward aggregation
#     - Read: Current directory code + child CLAUDE.md files
#     - Generate: Single ./CLAUDE.md in current directory
#     - Use: Large projects, incremental bottom-up updates
#
#   multi-layer: Downward distribution
#     - Read: All files in current and subdirectories
#     - Generate: CLAUDE.md for each directory containing files
#     - Use: Small projects, full documentation generation
#
# Features:
#   - Minimal prompts based on unified template
#   - Respects .gitignore patterns
#   - Path-focused processing (script only cares about paths)
#   - Template-driven generation

# Build exclusion filters from .gitignore
build_exclusion_filters() {
    local filters=""

    # Common system/cache directories to exclude
    local system_excludes=(
        ".git" "__pycache__" "node_modules" ".venv" "venv" "env"
        "dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
        "coverage" ".nyc_output" "logs" "tmp" "temp"
    )

    for exclude in "${system_excludes[@]}"; do
        filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
    done

    # Find and parse .gitignore (current dir first, then git root)
    local gitignore_file=""

    # Check current directory first
    if [ -f ".gitignore" ]; then
        gitignore_file=".gitignore"
    else
        # Try to find git root and check for .gitignore there
        local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
        if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
            gitignore_file="$git_root/.gitignore"
        fi
    fi

    # Parse .gitignore if found
    if [ -n "$gitignore_file" ]; then
        while IFS= read -r line; do
            # Skip empty lines and comments
            [[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue

            # Remove trailing slash and whitespace
            line=$(echo "$line" | sed 's|/$||' | xargs)

            # Skip wildcard patterns (too complex for simple find)
            [[ "$line" =~ \* ]] && continue

            # Add to filters
            filters+=" -not -path '*/$line' -not -path '*/$line/*'"
        done < "$gitignore_file"
    fi

    echo "$filters"
}

# Scan directory structure and generate structured information
scan_directory_structure() {
    local target_path="$1"
    local strategy="$2"

    if [ ! -d "$target_path" ]; then
        echo "Directory not found: $target_path"
        return 1
    fi

    local exclusion_filters=$(build_exclusion_filters)
    local structure_info=""

    # Get basic directory info
    local dir_name=$(basename "$target_path")
    local total_files=$(eval "find \"$target_path\" -type f $exclusion_filters 2>/dev/null" | wc -l)
    local total_dirs=$(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null" | wc -l)

    structure_info+="Directory: $dir_name\n"
    structure_info+="Total files: $total_files\n"
    structure_info+="Total directories: $total_dirs\n\n"

    if [ "$strategy" = "multi-layer" ]; then
        # For multi-layer: show all subdirectories with file counts
        structure_info+="Subdirectories with files:\n"
        while IFS= read -r dir; do
            if [ -n "$dir" ] && [ "$dir" != "$target_path" ]; then
                local rel_path=${dir#$target_path/}
                local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
                if [ $file_count -gt 0 ]; then
                    structure_info+=" - $rel_path/ ($file_count files)\n"
                fi
            fi
        done < <(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null")
    else
        # For single-layer: show direct children only
        structure_info+="Direct subdirectories:\n"
        while IFS= read -r dir; do
            if [ -n "$dir" ]; then
                local dir_name=$(basename "$dir")
                local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
                local has_claude=$([ -f "$dir/CLAUDE.md" ] && echo " [has CLAUDE.md]" || echo "")
                structure_info+=" - $dir_name/ ($file_count files)$has_claude\n"
            fi
        done < <(eval "find \"$target_path\" -maxdepth 1 -type d $exclusion_filters 2>/dev/null" | grep -v "^$target_path$")
    fi

    # Show main file types in current directory
    structure_info+="\nCurrent directory files:\n"
    local code_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' \\) $exclusion_filters 2>/dev/null" | wc -l)
    local config_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.json' -o -name '*.yaml' -o -name '*.yml' -o -name '*.toml' \\) $exclusion_filters 2>/dev/null" | wc -l)
    local doc_files=$(eval "find \"$target_path\" -maxdepth 1 -type f -name '*.md' $exclusion_filters 2>/dev/null" | wc -l)

    structure_info+=" - Code files: $code_files\n"
    structure_info+=" - Config files: $config_files\n"
    structure_info+=" - Documentation: $doc_files\n"

    printf "%b" "$structure_info"
}

update_module_claude() {
    local strategy="$1"
    local module_path="$2"
    local tool="${3:-gemini}"
    local model="$4"

    # Validate parameters
    if [ -z "$strategy" ] || [ -z "$module_path" ]; then
        echo "❌ Error: Strategy and module path are required"
        echo "Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]"
        echo "Strategies: single-layer|multi-layer"
        return 1
    fi

    # Validate strategy
    if [ "$strategy" != "single-layer" ] && [ "$strategy" != "multi-layer" ]; then
        echo "❌ Error: Invalid strategy '$strategy'"
        echo "Valid strategies: single-layer, multi-layer"
        return 1
    fi

    if [ ! -d "$module_path" ]; then
        echo "❌ Error: Directory '$module_path' does not exist"
        return 1
    fi

    # Set default models if not specified
    if [ -z "$model" ]; then
        case "$tool" in
            gemini)
                model="gemini-2.5-flash"
                ;;
            qwen)
                model="coder-model"
                ;;
            codex)
                model="gpt5-codex"
                ;;
            *)
                model=""
                ;;
        esac
    fi

    # Build exclusion filters from .gitignore
    local exclusion_filters=$(build_exclusion_filters)

    # Check if directory has files (excluding gitignored paths)
    local file_count=$(eval "find \"$module_path\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
    if [ $file_count -eq 0 ]; then
        echo "⚠️ Skipping '$module_path' - no files found (after .gitignore filtering)"
        return 0
    fi

    # Use unified template for all modules
    local template_path="$HOME/.claude/workflows/cli-templates/prompts/memory/02-document-module-structure.txt"

    # Read template content directly
    local template_content=""
    if [ -f "$template_path" ]; then
        template_content=$(cat "$template_path")
        echo " 📋 Loaded template: $(wc -l < "$template_path") lines"
    else
        echo " ⚠️ Template not found: $template_path"
        echo " Using fallback template..."
        template_content="Create comprehensive CLAUDE.md documentation following standard structure with Purpose, Structure, Components, Dependencies, Integration, and Implementation sections."
    fi

    # Scan directory structure first
    echo " 🔍 Scanning directory structure..."
    local structure_info=$(scan_directory_structure "$module_path" "$strategy")

    # Prepare logging info
    local module_name=$(basename "$module_path")

    echo "⚡ Updating: $module_path"
    echo " Strategy: $strategy | Tool: $tool | Model: $model | Files: $file_count"
    echo " Template: $(basename "$template_path") ($(echo "$template_content" | wc -l) lines)"
    echo " Structure: Scanned $(echo "$structure_info" | wc -l) lines of structure info"

    # Build minimal strategy-specific prompt with explicit paths and structure info
    local final_prompt=""

    if [ "$strategy" = "multi-layer" ]; then
        # multi-layer strategy: read all, generate for each directory
        final_prompt="Directory Structure Analysis:
$structure_info

Read: @**/*

Generate CLAUDE.md files:
- Primary: ./CLAUDE.md (current directory)
- Additional: CLAUDE.md in each subdirectory containing files

Template Guidelines:
$template_content

Instructions:
- Work bottom-up: deepest directories first
- Parent directories reference children
- Each CLAUDE.md file must be in its respective directory
- Follow the template guidelines above for consistent structure
- Use the structure analysis to understand directory hierarchy"
    else
        # single-layer strategy: read current + child CLAUDE.md, generate current only
        final_prompt="Directory Structure Analysis:
$structure_info

Read: @*/CLAUDE.md @*.ts @*.tsx @*.js @*.jsx @*.py @*.sh @*.md @*.json @*.yaml @*.yml

Generate single file: ./CLAUDE.md

Template Guidelines:
$template_content

Instructions:
- Create exactly one CLAUDE.md file in the current directory
- Reference child CLAUDE.md files, do not duplicate their content
- Follow the template guidelines above for consistent structure
- Use the structure analysis to understand the current directory context"
    fi

    # Execute update
    local start_time=$(date +%s)
    echo " 🔄 Starting update..."

    if cd "$module_path" 2>/dev/null; then
        local tool_result=0

        # Execute with selected tool
        # NOTE: Model parameter (-m) is placed AFTER the prompt
        case "$tool" in
            qwen)
                if [ "$model" = "coder-model" ]; then
                    # coder-model is default, -m is optional
                    qwen -p "$final_prompt" --yolo 2>&1
                else
                    qwen -p "$final_prompt" -m "$model" --yolo 2>&1
                fi
                tool_result=$?
                ;;
            codex)
                codex --full-auto exec "$final_prompt" -m "$model" --skip-git-repo-check -s danger-full-access 2>&1
                tool_result=$?
                ;;
            gemini)
                gemini -p "$final_prompt" -m "$model" --yolo 2>&1
                tool_result=$?
                ;;
            *)
                echo " ⚠️ Unknown tool: $tool, defaulting to gemini"
                gemini -p "$final_prompt" -m "$model" --yolo 2>&1
                tool_result=$?
                ;;
        esac

        if [ $tool_result -eq 0 ]; then
            local end_time=$(date +%s)
            local duration=$((end_time - start_time))
            echo " ✅ Completed in ${duration}s"
            cd - > /dev/null
            return 0
        else
            echo " ❌ Update failed for $module_path"
            cd - > /dev/null
            return 1
        fi
    else
        echo " ❌ Cannot access directory: $module_path"
        return 1
    fi
}

# Execute function if script is run directly
|
|
||||||
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
|
|
||||||
# Show help if no arguments or help requested
|
|
||||||
if [ $# -eq 0 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
|
|
||||||
echo "Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]"
|
|
||||||
echo ""
|
|
||||||
echo "Strategies:"
|
|
||||||
echo " single-layer - Read current dir code + child CLAUDE.md, generate ./CLAUDE.md"
|
|
||||||
echo " multi-layer - Read all files, generate CLAUDE.md for each directory"
|
|
||||||
echo ""
|
|
||||||
echo "Tools: gemini (default), qwen, codex"
|
|
||||||
echo "Models: Use tool defaults if not specified"
|
|
||||||
echo ""
|
|
||||||
echo "Examples:"
|
|
||||||
echo " ./update_module_claude.sh single-layer ./src/auth"
|
|
||||||
echo " ./update_module_claude.sh multi-layer ./components gemini gemini-2.5-flash"
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
||||||
update_module_claude "$@"
|
|
||||||
fi
|
|
||||||
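The prompts built above instruct the CLI tool to generate CLAUDE.md files bottom-up, deepest directories first. As a minimal illustrative sketch only (not part of the script; the `demo/` tree and its subdirectories are hypothetical), that ordering can be derived in plain shell by sorting directories on their slash count:

```shell
#!/bin/sh
# Hypothetical demo tree, created only to illustrate the ordering.
mkdir -p demo/src/auth demo/src/utils demo/docs

# List directories deepest-first: prefix each path with its slash count
# (a simple depth measure), sort numerically in reverse, drop the prefix.
find demo -type d |
  awk '{ print gsub("/", "/"), $0 }' |
  sort -rn |
  cut -d' ' -f2-
```

A driver loop could then invoke `update_module_claude single-layer "$dir"` for each emitted directory, so that child CLAUDE.md files exist before a parent directory references them.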
@@ -1,13 +1,13 @@
 ---
 name: command-guide
-description: Workflow command guide for Claude DMS3 (78 commands). Search/browse commands, get next-step recommendations, view documentation, report issues. Triggers "CCW-help", "CCW-issue", "ccw-help", "ccw-issue", "ccw"
+description: Workflow command guide for Claude Code Workflow (78 commands). Search/browse commands, get next-step recommendations, view documentation, report issues. Triggers "CCW-help", "CCW-issue", "ccw-help", "ccw-issue", "ccw"
 allowed-tools: Read, Grep, Glob, AskUserQuestion
 version: 5.8.0
 ---
 
 # Command Guide Skill
 
-Comprehensive command guide for Claude DMS3 workflow system covering 78 commands across 5 categories (workflow, cli, memory, task, general).
+Comprehensive command guide for Claude Code Workflow (CCW) system covering 78 commands across 5 categories (workflow, cli, memory, task, general).
 
 ## 🆕 What's New in v5.8.0
 
@@ -200,21 +200,21 @@ Comprehensive command guide for Claude DMS3 workflow system covering 78 commands
 
 **Complex Query** (CLI-assisted analysis):
 1. **Detect complexity indicators** (multi-command comparison, workflow analysis, best-practice questions)
-2. **Design targeted analysis prompt** for gemini/qwen:
+2. **Design targeted analysis prompt** for gemini/qwen via CCW:
    - Frame user's question precisely
    - Specify required analysis depth
    - Request structured comparison/synthesis
    ```bash
-   gemini -p "
+   ccw cli -p "
    PURPOSE: Analyze command documentation to answer user query
-   TASK: [extracted user question with context]
+   TASK: • [extracted user question with context]
    MODE: analysis
    CONTEXT: @**/*
    EXPECTED: Comprehensive answer with examples and recommendations
    RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on practical usage | analysis=READ-ONLY
-   " -m gemini-3-pro-preview-11-2025 --include-directories ~/.claude/skills/command-guide/reference
+   " --tool gemini --cd ~/.claude/skills/command-guide/reference
    ```
-   Note: Use absolute path `~/.claude/skills/command-guide/reference` for reference documentation access
+   Note: Use `--cd` with absolute path `~/.claude/skills/command-guide/reference` for reference documentation access
 3. **Process and integrate CLI analysis**:
    - Extract key insights from CLI output
    - Add context-specific examples
@@ -385,4 +385,4 @@ This SKILL documentation is kept in sync with command implementations through a
 - 4 issue templates for standardized problem reporting
 - CLI-assisted complex query analysis with gemini/qwen integration
 
-**Maintainer**: Claude DMS3 Team
+**Maintainer**: CCW Team
@@ -1,26 +1,4 @@
 [
-  {
-    "name": "analyze",
-    "command": "/cli:analyze",
-    "description": "Read-only codebase analysis using Gemini (default), Qwen, or Codex with auto-pattern detection and template selection",
-    "arguments": "[--tool codex|gemini|qwen] [--enhance] analysis target",
-    "category": "cli",
-    "subcategory": null,
-    "usage_scenario": "analysis",
-    "difficulty": "Beginner",
-    "file_path": "cli/analyze.md"
-  },
-  {
-    "name": "chat",
-    "command": "/cli:chat",
-    "description": "Read-only Q&A interaction with Gemini/Qwen/Codex for codebase questions with automatic context inference",
-    "arguments": "[--tool codex|gemini|qwen] [--enhance] inquiry",
-    "category": "cli",
-    "subcategory": null,
-    "usage_scenario": "general",
-    "difficulty": "Beginner",
-    "file_path": "cli/chat.md"
-  },
   {
     "name": "cli-init",
     "command": "/cli:cli-init",
@@ -32,83 +10,6 @@
     "difficulty": "Intermediate",
     "file_path": "cli/cli-init.md"
   },
-  {
-    "name": "codex-execute",
-    "command": "/cli:codex-execute",
-    "description": "Multi-stage Codex execution with automatic task decomposition into grouped subtasks using resume mechanism for context continuity",
-    "arguments": "[--verify-git] task description or task-id",
-    "category": "cli",
-    "subcategory": null,
-    "usage_scenario": "implementation",
-    "difficulty": "Intermediate",
-    "file_path": "cli/codex-execute.md"
-  },
-  {
-    "name": "discuss-plan",
-    "command": "/cli:discuss-plan",
-    "description": "Multi-round collaborative planning using Gemini, Codex, and Claude synthesis with iterative discussion cycles (read-only, no code changes)",
-    "arguments": "[--topic '...'] [--task-id '...'] [--rounds N]",
-    "category": "cli",
-    "subcategory": null,
-    "usage_scenario": "planning",
-    "difficulty": "Intermediate",
-    "file_path": "cli/discuss-plan.md"
-  },
-  {
-    "name": "execute",
-    "command": "/cli:execute",
-    "description": "Autonomous code implementation with YOLO auto-approval using Gemini/Qwen/Codex, supports task ID or description input with automatic file pattern detection",
-    "arguments": "[--tool codex|gemini|qwen] [--enhance] description or task-id",
-    "category": "cli",
-    "subcategory": null,
-    "usage_scenario": "implementation",
-    "difficulty": "Intermediate",
-    "file_path": "cli/execute.md"
-  },
-  {
-    "name": "bug-diagnosis",
-    "command": "/cli:mode:bug-diagnosis",
-    "description": "Read-only bug root cause analysis using Gemini/Qwen/Codex with systematic diagnosis template for fix suggestions",
-    "arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] bug description",
-    "category": "cli",
-    "subcategory": "mode",
-    "usage_scenario": "analysis",
-    "difficulty": "Intermediate",
-    "file_path": "cli/mode/bug-diagnosis.md"
-  },
-  {
-    "name": "code-analysis",
-    "command": "/cli:mode:code-analysis",
-    "description": "Read-only execution path tracing using Gemini/Qwen/Codex with specialized analysis template for call flow and optimization",
-    "arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] analysis target",
-    "category": "cli",
-    "subcategory": "mode",
-    "usage_scenario": "general",
-    "difficulty": "Intermediate",
-    "file_path": "cli/mode/code-analysis.md"
-  },
-  {
-    "name": "document-analysis",
-    "command": "/cli:mode:document-analysis",
-    "description": "Read-only technical document/paper analysis using Gemini/Qwen/Codex with systematic comprehension template for insights extraction",
-    "arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] document path or topic",
-    "category": "cli",
-    "subcategory": "mode",
-    "usage_scenario": "general",
-    "difficulty": "Intermediate",
-    "file_path": "cli/mode/document-analysis.md"
-  },
-  {
-    "name": "plan",
-    "command": "/cli:mode:plan",
-    "description": "Read-only architecture planning using Gemini/Qwen/Codex with strategic planning template for modification plans and impact analysis",
-    "arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] topic",
-    "category": "cli",
-    "subcategory": "mode",
-    "usage_scenario": "planning",
-    "difficulty": "Intermediate",
-    "file_path": "cli/mode/plan.md"
-  },
   {
     "name": "enhance-prompt",
     "command": "/enhance-prompt",
@@ -211,7 +112,7 @@
   {
     "name": "tech-research",
     "command": "/memory:tech-research",
-    "description": "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)",
+    "description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
     "arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
     "category": "memory",
     "subcategory": null,
@@ -508,8 +409,8 @@
   {
     "name": "plan",
    "command": "/workflow:plan",
-    "description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs with optional CLI auto-execution",
-    "arguments": "[--cli-execute] \"text description\"|file.md",
+    "description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
+    "arguments": "\"text description\"|file.md",
     "category": "workflow",
     "subcategory": null,
     "usage_scenario": "planning",
@@ -527,6 +428,39 @@
     "difficulty": "Intermediate",
     "file_path": "workflow/replan.md"
   },
+  {
+    "name": "review-fix",
+    "command": "/workflow:review-fix",
+    "description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
+    "arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
+    "category": "workflow",
+    "subcategory": null,
+    "usage_scenario": "analysis",
+    "difficulty": "Intermediate",
+    "file_path": "workflow/review-fix.md"
+  },
+  {
+    "name": "review-module-cycle",
+    "command": "/workflow:review-module-cycle",
+    "description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
+    "arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
+    "category": "workflow",
+    "subcategory": null,
+    "usage_scenario": "analysis",
+    "difficulty": "Intermediate",
+    "file_path": "workflow/review-module-cycle.md"
+  },
+  {
+    "name": "review-session-cycle",
+    "command": "/workflow:review-session-cycle",
+    "description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
+    "arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
+    "category": "workflow",
+    "subcategory": null,
+    "usage_scenario": "session-management",
+    "difficulty": "Intermediate",
+    "file_path": "workflow/review-session-cycle.md"
+  },
   {
     "name": "review",
     "command": "/workflow:review",
@@ -575,29 +509,18 @@
     "name": "start",
     "command": "/workflow:session:start",
     "description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
-    "arguments": "[--auto|--new] [optional: task description for new session]",
+    "arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
     "category": "workflow",
     "subcategory": "session",
     "usage_scenario": "general",
     "difficulty": "Intermediate",
     "file_path": "workflow/session/start.md"
   },
-  {
-    "name": "workflow:status",
-    "command": "/workflow:status",
-    "description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
-    "arguments": "[optional: --project|task-id|--validate|--dashboard]",
-    "category": "workflow",
-    "subcategory": null,
-    "usage_scenario": "session-management",
-    "difficulty": "Beginner",
-    "file_path": "workflow/status.md"
-  },
   {
     "name": "tdd-plan",
     "command": "/workflow:tdd-plan",
     "description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
-    "arguments": "[--cli-execute] \"feature description\"|file.md",
+    "arguments": "\"feature description\"|file.md",
     "category": "workflow",
     "subcategory": null,
     "usage_scenario": "planning",
@@ -630,7 +553,7 @@
     "name": "test-fix-gen",
     "command": "/workflow:test-fix-gen",
     "description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
-    "arguments": "[--use-codex] [--cli-execute] (source-session-id | \"feature description\" | /path/to/file.md)",
+    "arguments": "(source-session-id | \"feature description\" | /path/to/file.md)",
     "category": "workflow",
     "subcategory": null,
     "usage_scenario": "testing",
@@ -641,7 +564,7 @@
     "name": "test-gen",
     "command": "/workflow:test-gen",
     "description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
-    "arguments": "[--use-codex] [--cli-execute] source-session-id",
+    "arguments": "source-session-id",
     "category": "workflow",
     "subcategory": null,
     "usage_scenario": "testing",
@@ -673,8 +596,8 @@
   {
     "name": "task-generate-agent",
     "command": "/workflow:tools:task-generate-agent",
-    "description": "Autonomous task generation using action-planning-agent with discovery and output phases for workflow planning",
-    "arguments": "--session WFS-session-id [--cli-execute]",
+    "description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
+    "arguments": "--session WFS-session-id",
     "category": "workflow",
     "subcategory": "tools",
     "usage_scenario": "implementation",
@@ -685,24 +608,13 @@
     "name": "task-generate-tdd",
     "command": "/workflow:tools:task-generate-tdd",
     "description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
-    "arguments": "--session WFS-session-id [--cli-execute]",
+    "arguments": "--session WFS-session-id",
     "category": "workflow",
     "subcategory": "tools",
     "usage_scenario": "implementation",
     "difficulty": "Advanced",
     "file_path": "workflow/tools/task-generate-tdd.md"
   },
-  {
-    "name": "task-generate",
-    "command": "/workflow:tools:task-generate",
-    "description": "Generate task JSON files and IMPL_PLAN.md from analysis results using action-planning-agent with artifact integration",
-    "arguments": "--session WFS-session-id [--cli-execute]",
-    "category": "workflow",
-    "subcategory": "tools",
-    "usage_scenario": "implementation",
-    "difficulty": "Intermediate",
-    "file_path": "workflow/tools/task-generate.md"
-  },
   {
     "name": "tdd-coverage-analysis",
     "command": "/workflow:tools:tdd-coverage-analysis",
@@ -717,7 +629,7 @@
   {
     "name": "test-concept-enhanced",
     "command": "/workflow:tools:test-concept-enhanced",
-    "description": "Analyze test requirements and generate test generation strategy using Gemini with test-context package",
+    "description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
     "arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
     "category": "workflow",
     "subcategory": "tools",
@@ -739,8 +651,8 @@
   {
     "name": "test-task-generate",
     "command": "/workflow:tools:test-task-generate",
-    "description": "Autonomous test-fix task generation using action-planning-agent with test-fix-retest cycle specification and discovery phase",
-    "arguments": "[--use-codex] [--cli-execute] --session WFS-test-session-id",
+    "description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
+    "arguments": "--session WFS-test-session-id",
     "category": "workflow",
     "subcategory": "tools",
     "usage_scenario": "implementation",
@@ -1,28 +1,6 @@
|
|||||||
{
|
{
|
||||||
"cli": {
|
"cli": {
|
||||||
"_root": [
|
"_root": [
|
||||||
{
|
|
||||||
"name": "analyze",
|
|
||||||
"command": "/cli:analyze",
|
|
||||||
"description": "Read-only codebase analysis using Gemini (default), Qwen, or Codex with auto-pattern detection and template selection",
|
|
||||||
"arguments": "[--tool codex|gemini|qwen] [--enhance] analysis target",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": null,
|
|
||||||
"usage_scenario": "analysis",
|
|
||||||
"difficulty": "Beginner",
|
|
||||||
"file_path": "cli/analyze.md"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "chat",
|
|
||||||
"command": "/cli:chat",
|
|
||||||
"description": "Read-only Q&A interaction with Gemini/Qwen/Codex for codebase questions with automatic context inference",
|
|
||||||
"arguments": "[--tool codex|gemini|qwen] [--enhance] inquiry",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": null,
|
|
||||||
"usage_scenario": "general",
|
|
||||||
"difficulty": "Beginner",
|
|
||||||
"file_path": "cli/chat.md"
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"name": "cli-init",
|
"name": "cli-init",
|
||||||
"command": "/cli:cli-init",
|
"command": "/cli:cli-init",
|
||||||
@@ -33,85 +11,6 @@
|
|||||||
"usage_scenario": "general",
|
"usage_scenario": "general",
|
||||||
"difficulty": "Intermediate",
|
"difficulty": "Intermediate",
|
||||||
"file_path": "cli/cli-init.md"
|
"file_path": "cli/cli-init.md"
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "codex-execute",
|
|
||||||
"command": "/cli:codex-execute",
|
|
||||||
"description": "Multi-stage Codex execution with automatic task decomposition into grouped subtasks using resume mechanism for context continuity",
|
|
||||||
"arguments": "[--verify-git] task description or task-id",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": null,
|
|
||||||
"usage_scenario": "implementation",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "cli/codex-execute.md"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "discuss-plan",
|
|
||||||
"command": "/cli:discuss-plan",
|
|
||||||
"description": "Multi-round collaborative planning using Gemini, Codex, and Claude synthesis with iterative discussion cycles (read-only, no code changes)",
|
|
||||||
"arguments": "[--topic '...'] [--task-id '...'] [--rounds N]",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": null,
|
|
||||||
"usage_scenario": "planning",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "cli/discuss-plan.md"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "execute",
|
|
||||||
"command": "/cli:execute",
|
|
||||||
"description": "Autonomous code implementation with YOLO auto-approval using Gemini/Qwen/Codex, supports task ID or description input with automatic file pattern detection",
|
|
||||||
"arguments": "[--tool codex|gemini|qwen] [--enhance] description or task-id",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": null,
|
|
||||||
"usage_scenario": "implementation",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "cli/execute.md"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"mode": [
|
|
||||||
{
|
|
||||||
"name": "bug-diagnosis",
|
|
||||||
"command": "/cli:mode:bug-diagnosis",
|
|
||||||
"description": "Read-only bug root cause analysis using Gemini/Qwen/Codex with systematic diagnosis template for fix suggestions",
|
|
||||||
"arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] bug description",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": "mode",
|
|
||||||
"usage_scenario": "analysis",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "cli/mode/bug-diagnosis.md"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "code-analysis",
|
|
||||||
"command": "/cli:mode:code-analysis",
|
|
||||||
"description": "Read-only execution path tracing using Gemini/Qwen/Codex with specialized analysis template for call flow and optimization",
|
|
||||||
"arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] analysis target",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": "mode",
|
|
||||||
"usage_scenario": "general",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "cli/mode/code-analysis.md"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "document-analysis",
|
|
||||||
"command": "/cli:mode:document-analysis",
|
|
||||||
"description": "Read-only technical document/paper analysis using Gemini/Qwen/Codex with systematic comprehension template for insights extraction",
|
|
||||||
"arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] document path or topic",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": "mode",
|
|
||||||
"usage_scenario": "general",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "cli/mode/document-analysis.md"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "plan",
|
|
||||||
"command": "/cli:mode:plan",
|
|
||||||
"description": "Read-only architecture planning using Gemini/Qwen/Codex with strategic planning template for modification plans and impact analysis",
|
|
||||||
"arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] topic",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": "mode",
|
|
||||||
"usage_scenario": "planning",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "cli/mode/plan.md"
|
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
@@ -234,7 +133,7 @@
|
|||||||
{
|
{
|
||||||
"name": "tech-research",
|
"name": "tech-research",
|
||||||
"command": "/memory:tech-research",
|
"command": "/memory:tech-research",
|
||||||
"description": "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)",
|
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
|
||||||
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
|
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
|
||||||
"category": "memory",
|
"category": "memory",
|
||||||
"subcategory": null,
|
"subcategory": null,
|
||||||
@@ -396,8 +295,8 @@
|
|||||||
{
|
{
|
||||||
"name": "plan",
|
"name": "plan",
|
||||||
"command": "/workflow:plan",
|
"command": "/workflow:plan",
|
||||||
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs with optional CLI auto-execution",
|
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
|
||||||
"arguments": "[--cli-execute] \\\"text description\\\"|file.md",
|
"arguments": "\\\"text description\\\"|file.md",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": null,
|
"subcategory": null,
|
||||||
"usage_scenario": "planning",
|
"usage_scenario": "planning",
|
||||||
@@ -415,6 +314,39 @@
 "difficulty": "Intermediate",
 "file_path": "workflow/replan.md"
 },
+{
+"name": "review-fix",
+"command": "/workflow:review-fix",
+"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
+"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
+"category": "workflow",
+"subcategory": null,
+"usage_scenario": "analysis",
+"difficulty": "Intermediate",
+"file_path": "workflow/review-fix.md"
+},
+{
+"name": "review-module-cycle",
+"command": "/workflow:review-module-cycle",
+"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
+"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
+"category": "workflow",
+"subcategory": null,
+"usage_scenario": "analysis",
+"difficulty": "Intermediate",
+"file_path": "workflow/review-module-cycle.md"
+},
+{
+"name": "review-session-cycle",
+"command": "/workflow:review-session-cycle",
+"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
+"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
+"category": "workflow",
+"subcategory": null,
+"usage_scenario": "session-management",
+"difficulty": "Intermediate",
+"file_path": "workflow/review-session-cycle.md"
+},
 {
 "name": "review",
 "command": "/workflow:review",
@@ -426,22 +358,11 @@
 "difficulty": "Intermediate",
 "file_path": "workflow/review.md"
 },
-{
-"name": "workflow:status",
-"command": "/workflow:status",
-"description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
-"arguments": "[optional: --project|task-id|--validate|--dashboard]",
-"category": "workflow",
-"subcategory": null,
-"usage_scenario": "session-management",
-"difficulty": "Beginner",
-"file_path": "workflow/status.md"
-},
 {
 "name": "tdd-plan",
 "command": "/workflow:tdd-plan",
 "description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
-"arguments": "[--cli-execute] \\\"feature description\\\"|file.md",
+"arguments": "\\\"feature description\\\"|file.md",
 "category": "workflow",
 "subcategory": null,
 "usage_scenario": "planning",
@@ -474,7 +395,7 @@
 "name": "test-fix-gen",
 "command": "/workflow:test-fix-gen",
 "description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
-"arguments": "[--use-codex] [--cli-execute] (source-session-id | \\\"feature description\\\" | /path/to/file.md)",
+"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
 "category": "workflow",
 "subcategory": null,
 "usage_scenario": "testing",
@@ -485,7 +406,7 @@
 "name": "test-gen",
 "command": "/workflow:test-gen",
 "description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
-"arguments": "[--use-codex] [--cli-execute] source-session-id",
+"arguments": "source-session-id",
 "category": "workflow",
 "subcategory": null,
 "usage_scenario": "testing",
@@ -665,7 +586,7 @@
 "name": "start",
 "command": "/workflow:session:start",
 "description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
-"arguments": "[--auto|--new] [optional: task description for new session]",
+"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
 "category": "workflow",
 "subcategory": "session",
 "usage_scenario": "general",
@@ -699,8 +620,8 @@
 {
 "name": "task-generate-agent",
 "command": "/workflow:tools:task-generate-agent",
-"description": "Autonomous task generation using action-planning-agent with discovery and output phases for workflow planning",
-"arguments": "--session WFS-session-id [--cli-execute]",
+"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
+"arguments": "--session WFS-session-id",
 "category": "workflow",
 "subcategory": "tools",
 "usage_scenario": "implementation",
@@ -711,24 +632,13 @@
 "name": "task-generate-tdd",
 "command": "/workflow:tools:task-generate-tdd",
 "description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
-"arguments": "--session WFS-session-id [--cli-execute]",
+"arguments": "--session WFS-session-id",
 "category": "workflow",
 "subcategory": "tools",
 "usage_scenario": "implementation",
 "difficulty": "Advanced",
 "file_path": "workflow/tools/task-generate-tdd.md"
 },
-{
-"name": "task-generate",
-"command": "/workflow:tools:task-generate",
-"description": "Generate task JSON files and IMPL_PLAN.md from analysis results using action-planning-agent with artifact integration",
-"arguments": "--session WFS-session-id [--cli-execute]",
-"category": "workflow",
-"subcategory": "tools",
-"usage_scenario": "implementation",
-"difficulty": "Intermediate",
-"file_path": "workflow/tools/task-generate.md"
-},
 {
 "name": "tdd-coverage-analysis",
 "command": "/workflow:tools:tdd-coverage-analysis",
@@ -743,7 +653,7 @@
 {
 "name": "test-concept-enhanced",
 "command": "/workflow:tools:test-concept-enhanced",
-"description": "Analyze test requirements and generate test generation strategy using Gemini with test-context package",
+"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
 "arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
 "category": "workflow",
 "subcategory": "tools",
@@ -765,8 +675,8 @@
 {
 "name": "test-task-generate",
 "command": "/workflow:tools:test-task-generate",
-"description": "Autonomous test-fix task generation using action-planning-agent with test-fix-retest cycle specification and discovery phase",
-"arguments": "[--use-codex] [--cli-execute] --session WFS-test-session-id",
+"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
+"arguments": "--session WFS-test-session-id",
 "category": "workflow",
 "subcategory": "tools",
 "usage_scenario": "implementation",
@@ -1,51 +1,5 @@
 {
-"analysis": [
-{
-"name": "analyze",
-"command": "/cli:analyze",
-"description": "Read-only codebase analysis using Gemini (default), Qwen, or Codex with auto-pattern detection and template selection",
-"arguments": "[--tool codex|gemini|qwen] [--enhance] analysis target",
-"category": "cli",
-"subcategory": null,
-"usage_scenario": "analysis",
-"difficulty": "Beginner",
-"file_path": "cli/analyze.md"
-},
-{
-"name": "bug-diagnosis",
-"command": "/cli:mode:bug-diagnosis",
-"description": "Read-only bug root cause analysis using Gemini/Qwen/Codex with systematic diagnosis template for fix suggestions",
-"arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] bug description",
-"category": "cli",
-"subcategory": "mode",
-"usage_scenario": "analysis",
-"difficulty": "Intermediate",
-"file_path": "cli/mode/bug-diagnosis.md"
-},
-{
-"name": "review",
-"command": "/workflow:review",
-"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
-"arguments": "[--type=security|architecture|action-items|quality] [optional: session-id]",
-"category": "workflow",
-"subcategory": null,
-"usage_scenario": "analysis",
-"difficulty": "Intermediate",
-"file_path": "workflow/review.md"
-}
-],
 "general": [
-{
-"name": "chat",
-"command": "/cli:chat",
-"description": "Read-only Q&A interaction with Gemini/Qwen/Codex for codebase questions with automatic context inference",
-"arguments": "[--tool codex|gemini|qwen] [--enhance] inquiry",
-"category": "cli",
-"subcategory": null,
-"usage_scenario": "general",
-"difficulty": "Beginner",
-"file_path": "cli/chat.md"
-},
 {
 "name": "cli-init",
 "command": "/cli:cli-init",
@@ -57,28 +11,6 @@
 "difficulty": "Intermediate",
 "file_path": "cli/cli-init.md"
 },
-{
-"name": "code-analysis",
-"command": "/cli:mode:code-analysis",
-"description": "Read-only execution path tracing using Gemini/Qwen/Codex with specialized analysis template for call flow and optimization",
-"arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] analysis target",
-"category": "cli",
-"subcategory": "mode",
-"usage_scenario": "general",
-"difficulty": "Intermediate",
-"file_path": "cli/mode/code-analysis.md"
-},
-{
-"name": "document-analysis",
-"command": "/cli:mode:document-analysis",
-"description": "Read-only technical document/paper analysis using Gemini/Qwen/Codex with systematic comprehension template for insights extraction",
-"arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] document path or topic",
-"category": "cli",
-"subcategory": "mode",
-"usage_scenario": "general",
-"difficulty": "Intermediate",
-"file_path": "cli/mode/document-analysis.md"
-},
 {
 "name": "enhance-prompt",
 "command": "/enhance-prompt",
@@ -104,7 +36,7 @@
 {
 "name": "tech-research",
 "command": "/memory:tech-research",
-"description": "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)",
+"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
 "arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
 "category": "memory",
 "subcategory": null,
@@ -292,7 +224,7 @@
 "name": "start",
 "command": "/workflow:session:start",
 "description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
-"arguments": "[--auto|--new] [optional: task description for new session]",
+"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
 "category": "workflow",
 "subcategory": "session",
 "usage_scenario": "general",
@@ -377,307 +309,6 @@
 "file_path": "workflow/ui-design/style-extract.md"
 }
 ],
-"implementation": [
-{
-"name": "codex-execute",
-"command": "/cli:codex-execute",
-"description": "Multi-stage Codex execution with automatic task decomposition into grouped subtasks using resume mechanism for context continuity",
-"arguments": "[--verify-git] task description or task-id",
-"category": "cli",
-"subcategory": null,
-"usage_scenario": "implementation",
-"difficulty": "Intermediate",
-"file_path": "cli/codex-execute.md"
-},
-{
-"name": "execute",
-"command": "/cli:execute",
-"description": "Autonomous code implementation with YOLO auto-approval using Gemini/Qwen/Codex, supports task ID or description input with automatic file pattern detection",
-"arguments": "[--tool codex|gemini|qwen] [--enhance] description or task-id",
-"category": "cli",
-"subcategory": null,
-"usage_scenario": "implementation",
-"difficulty": "Intermediate",
-"file_path": "cli/execute.md"
-},
-{
-"name": "create",
-"command": "/task:create",
-"description": "Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis",
-"arguments": "\\\"task title\\",
-"category": "task",
-"subcategory": null,
-"usage_scenario": "implementation",
-"difficulty": "Intermediate",
-"file_path": "task/create.md"
-},
-{
-"name": "execute",
-"command": "/task:execute",
-"description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
-"arguments": "task-id",
-"category": "task",
-"subcategory": null,
-"usage_scenario": "implementation",
-"difficulty": "Intermediate",
-"file_path": "task/execute.md"
-},
-{
-"name": "execute",
-"command": "/workflow:execute",
-"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
-"arguments": "[--resume-session=\\\"session-id\\\"]",
-"category": "workflow",
-"subcategory": null,
-"usage_scenario": "implementation",
-"difficulty": "Intermediate",
-"file_path": "workflow/execute.md"
-},
-{
-"name": "lite-execute",
-"command": "/workflow:lite-execute",
-"description": "Execute tasks based on in-memory plan, prompt description, or file content",
-"arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
-"category": "workflow",
-"subcategory": null,
-"usage_scenario": "implementation",
-"difficulty": "Intermediate",
-"file_path": "workflow/lite-execute.md"
-},
-{
-"name": "test-cycle-execute",
-"command": "/workflow:test-cycle-execute",
-"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
-"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
-"category": "workflow",
-"subcategory": null,
-"usage_scenario": "implementation",
-"difficulty": "Intermediate",
-"file_path": "workflow/test-cycle-execute.md"
-},
-{
-"name": "task-generate-agent",
-"command": "/workflow:tools:task-generate-agent",
-"description": "Autonomous task generation using action-planning-agent with discovery and output phases for workflow planning",
-"arguments": "--session WFS-session-id [--cli-execute]",
-"category": "workflow",
-"subcategory": "tools",
-"usage_scenario": "implementation",
-"difficulty": "Advanced",
-"file_path": "workflow/tools/task-generate-agent.md"
-},
-{
-"name": "task-generate-tdd",
-"command": "/workflow:tools:task-generate-tdd",
-"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
-"arguments": "--session WFS-session-id [--cli-execute]",
-"category": "workflow",
-"subcategory": "tools",
-"usage_scenario": "implementation",
-"difficulty": "Advanced",
-"file_path": "workflow/tools/task-generate-tdd.md"
-},
-{
-"name": "task-generate",
-"command": "/workflow:tools:task-generate",
-"description": "Generate task JSON files and IMPL_PLAN.md from analysis results using action-planning-agent with artifact integration",
-"arguments": "--session WFS-session-id [--cli-execute]",
-"category": "workflow",
-"subcategory": "tools",
-"usage_scenario": "implementation",
-"difficulty": "Intermediate",
-"file_path": "workflow/tools/task-generate.md"
-},
-{
-"name": "test-task-generate",
-"command": "/workflow:tools:test-task-generate",
-"description": "Autonomous test-fix task generation using action-planning-agent with test-fix-retest cycle specification and discovery phase",
-"arguments": "[--use-codex] [--cli-execute] --session WFS-test-session-id",
-"category": "workflow",
-"subcategory": "tools",
-"usage_scenario": "implementation",
-"difficulty": "Intermediate",
-"file_path": "workflow/tools/test-task-generate.md"
-},
-{
-"name": "generate",
-"command": "/workflow:ui-design:generate",
-"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
-"arguments": "[--design-id <id>] [--session <id>]",
-"category": "workflow",
-"subcategory": "ui-design",
-"usage_scenario": "implementation",
-"difficulty": "Intermediate",
-"file_path": "workflow/ui-design/generate.md"
-}
-],
-"planning": [
-{
-"name": "discuss-plan",
-"command": "/cli:discuss-plan",
-"description": "Multi-round collaborative planning using Gemini, Codex, and Claude synthesis with iterative discussion cycles (read-only, no code changes)",
-"arguments": "[--topic '...'] [--task-id '...'] [--rounds N]",
-"category": "cli",
-"subcategory": null,
-"usage_scenario": "planning",
-"difficulty": "Intermediate",
-"file_path": "cli/discuss-plan.md"
-},
-{
-"name": "plan",
-"command": "/cli:mode:plan",
-"description": "Read-only architecture planning using Gemini/Qwen/Codex with strategic planning template for modification plans and impact analysis",
-"arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] topic",
-"category": "cli",
-"subcategory": "mode",
-"usage_scenario": "planning",
-"difficulty": "Intermediate",
-"file_path": "cli/mode/plan.md"
-},
-{
-"name": "breakdown",
-"command": "/task:breakdown",
-"description": "Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order",
-"arguments": "task-id",
-"category": "task",
-"subcategory": null,
-"usage_scenario": "planning",
-"difficulty": "Intermediate",
-"file_path": "task/breakdown.md"
-},
-{
-"name": "replan",
-"command": "/task:replan",
-"description": "Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json",
-"arguments": "task-id [\\\"text\\\"|file.md] | --batch [verification-report.md]",
-"category": "task",
-"subcategory": null,
-"usage_scenario": "planning",
-"difficulty": "Intermediate",
-"file_path": "task/replan.md"
-},
-{
-"name": "action-plan-verify",
-"command": "/workflow:action-plan-verify",
-"description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
-"arguments": "[optional: --session session-id]",
-"category": "workflow",
-"subcategory": null,
-"usage_scenario": "planning",
-"difficulty": "Intermediate",
-"file_path": "workflow/action-plan-verify.md"
-},
-{
-"name": "api-designer",
-"command": "/workflow:brainstorm:api-designer",
-"description": "Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective",
-"arguments": "optional topic - uses existing framework if available",
-"category": "workflow",
-"subcategory": "brainstorm",
-"usage_scenario": "planning",
-"difficulty": "Intermediate",
-"file_path": "workflow/brainstorm/api-designer.md"
-},
-{
-"name": "ui-designer",
-"command": "/workflow:brainstorm:ui-designer",
-"description": "Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective",
-"arguments": "optional topic - uses existing framework if available",
-"category": "workflow",
-"subcategory": "brainstorm",
-"usage_scenario": "planning",
-"difficulty": "Intermediate",
-"file_path": "workflow/brainstorm/ui-designer.md"
-},
-{
-"name": "lite-plan",
-"command": "/workflow:lite-plan",
-"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
-"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
-"category": "workflow",
-"subcategory": null,
-"usage_scenario": "planning",
-"difficulty": "Intermediate",
-"file_path": "workflow/lite-plan.md"
-},
-{
-"name": "plan",
-"command": "/workflow:plan",
-"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs with optional CLI auto-execution",
-"arguments": "[--cli-execute] \\\"text description\\\"|file.md",
-"category": "workflow",
-"subcategory": null,
-"usage_scenario": "planning",
-"difficulty": "Intermediate",
-"file_path": "workflow/plan.md"
-},
-{
-"name": "replan",
-"command": "/workflow:replan",
-"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
-"arguments": "[--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
-"category": "workflow",
-"subcategory": null,
-"usage_scenario": "planning",
-"difficulty": "Intermediate",
-"file_path": "workflow/replan.md"
-},
-{
-"name": "tdd-plan",
-"command": "/workflow:tdd-plan",
-"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
-"arguments": "[--cli-execute] \\\"feature description\\\"|file.md",
-"category": "workflow",
-"subcategory": null,
-"usage_scenario": "planning",
-"difficulty": "Advanced",
-"file_path": "workflow/tdd-plan.md"
-},
-{
-"name": "workflow:ui-design:codify-style",
-"command": "/workflow:ui-design:codify-style",
-"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
-"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
-"category": "workflow",
-"subcategory": "ui-design",
-"usage_scenario": "planning",
-"difficulty": "Intermediate",
-"file_path": "workflow/ui-design/codify-style.md"
-},
-{
-"name": "design-sync",
-"command": "/workflow:ui-design:design-sync",
-"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
-"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
-"category": "workflow",
-"subcategory": "ui-design",
-"usage_scenario": "planning",
-"difficulty": "Intermediate",
-"file_path": "workflow/ui-design/design-sync.md"
-},
-{
-"name": "workflow:ui-design:import-from-code",
-"command": "/workflow:ui-design:import-from-code",
-"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
-"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
-"category": "workflow",
-"subcategory": "ui-design",
-"usage_scenario": "planning",
-"difficulty": "Intermediate",
-"file_path": "workflow/ui-design/import-from-code.md"
-},
-{
-"name": "workflow:ui-design:reference-page-generator",
-"command": "/workflow:ui-design:reference-page-generator",
-"description": "Generate multi-component reference pages and documentation from design run extraction",
-"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
-"category": "workflow",
-"subcategory": "ui-design",
-"usage_scenario": "planning",
-"difficulty": "Intermediate",
-"file_path": "workflow/ui-design/reference-page-generator.md"
-}
-],
 "documentation": [
 {
 "name": "code-map-memory",
@@ -768,7 +399,299 @@
 "file_path": "memory/workflow-skill-memory.md"
 }
 ],
+"planning": [
+{
+"name": "breakdown",
+"command": "/task:breakdown",
+"description": "Decompose complex task into subtasks with dependency mapping, creates child task JSONs with parent references and execution order",
+"arguments": "task-id",
+"category": "task",
+"subcategory": null,
+"usage_scenario": "planning",
+"difficulty": "Intermediate",
+"file_path": "task/breakdown.md"
+},
+{
+"name": "replan",
+"command": "/task:replan",
+"description": "Update task JSON with new requirements or batch-update multiple tasks from verification report, tracks changes in task-changes.json",
+"arguments": "task-id [\\\"text\\\"|file.md] | --batch [verification-report.md]",
+"category": "task",
+"subcategory": null,
+"usage_scenario": "planning",
+"difficulty": "Intermediate",
+"file_path": "task/replan.md"
+},
+{
+"name": "action-plan-verify",
+"command": "/workflow:action-plan-verify",
+"description": "Perform non-destructive cross-artifact consistency analysis between IMPL_PLAN.md and task JSONs with quality gate validation",
+"arguments": "[optional: --session session-id]",
+"category": "workflow",
+"subcategory": null,
+"usage_scenario": "planning",
+"difficulty": "Intermediate",
+"file_path": "workflow/action-plan-verify.md"
+},
+{
+"name": "api-designer",
+"command": "/workflow:brainstorm:api-designer",
+"description": "Generate or update api-designer/analysis.md addressing guidance-specification discussion points for API design perspective",
+"arguments": "optional topic - uses existing framework if available",
+"category": "workflow",
+"subcategory": "brainstorm",
+"usage_scenario": "planning",
+"difficulty": "Intermediate",
+"file_path": "workflow/brainstorm/api-designer.md"
+},
+{
+"name": "ui-designer",
+"command": "/workflow:brainstorm:ui-designer",
+"description": "Generate or update ui-designer/analysis.md addressing guidance-specification discussion points for UI design perspective",
+"arguments": "optional topic - uses existing framework if available",
+"category": "workflow",
+"subcategory": "brainstorm",
+"usage_scenario": "planning",
+"difficulty": "Intermediate",
+"file_path": "workflow/brainstorm/ui-designer.md"
+},
+{
+"name": "lite-plan",
+"command": "/workflow:lite-plan",
+"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
+"arguments": "[-e|--explore] \\\"task description\\\"|file.md",
+"category": "workflow",
+"subcategory": null,
+"usage_scenario": "planning",
+"difficulty": "Intermediate",
+"file_path": "workflow/lite-plan.md"
+},
+{
+"name": "plan",
+"command": "/workflow:plan",
+"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
+"arguments": "\\\"text description\\\"|file.md",
+"category": "workflow",
+"subcategory": null,
+"usage_scenario": "planning",
+"difficulty": "Intermediate",
+"file_path": "workflow/plan.md"
+},
+{
+"name": "replan",
+"command": "/workflow:replan",
|
||||||
|
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
|
||||||
|
"arguments": "[--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "planning",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/replan.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "tdd-plan",
|
||||||
|
"command": "/workflow:tdd-plan",
|
||||||
|
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
|
||||||
|
"arguments": "\\\"feature description\\\"|file.md",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "planning",
|
||||||
|
"difficulty": "Advanced",
|
||||||
|
"file_path": "workflow/tdd-plan.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "workflow:ui-design:codify-style",
|
||||||
|
"command": "/workflow:ui-design:codify-style",
|
||||||
|
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
|
||||||
|
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": "ui-design",
|
||||||
|
"usage_scenario": "planning",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/ui-design/codify-style.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "design-sync",
|
||||||
|
"command": "/workflow:ui-design:design-sync",
|
||||||
|
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
|
||||||
|
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": "ui-design",
|
||||||
|
"usage_scenario": "planning",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/ui-design/design-sync.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "workflow:ui-design:import-from-code",
|
||||||
|
"command": "/workflow:ui-design:import-from-code",
|
||||||
|
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
|
||||||
|
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": "ui-design",
|
||||||
|
"usage_scenario": "planning",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/ui-design/import-from-code.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "workflow:ui-design:reference-page-generator",
|
||||||
|
"command": "/workflow:ui-design:reference-page-generator",
|
||||||
|
"description": "Generate multi-component reference pages and documentation from design run extraction",
|
||||||
|
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": "ui-design",
|
||||||
|
"usage_scenario": "planning",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/ui-design/reference-page-generator.md"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"implementation": [
|
||||||
|
{
|
||||||
|
"name": "create",
|
||||||
|
"command": "/task:create",
|
||||||
|
"description": "Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis",
|
||||||
|
"arguments": "\\\"task title\\",
|
||||||
|
"category": "task",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "implementation",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "task/create.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "execute",
|
||||||
|
"command": "/task:execute",
|
||||||
|
"description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
|
||||||
|
"arguments": "task-id",
|
||||||
|
"category": "task",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "implementation",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "task/execute.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "execute",
|
||||||
|
"command": "/workflow:execute",
|
||||||
|
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
|
||||||
|
"arguments": "[--resume-session=\\\"session-id\\\"]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "implementation",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/execute.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "lite-execute",
|
||||||
|
"command": "/workflow:lite-execute",
|
||||||
|
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
|
||||||
|
"arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "implementation",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/lite-execute.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "test-cycle-execute",
|
||||||
|
"command": "/workflow:test-cycle-execute",
|
||||||
|
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
|
||||||
|
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "implementation",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/test-cycle-execute.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "task-generate-agent",
|
||||||
|
"command": "/workflow:tools:task-generate-agent",
|
||||||
|
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
|
||||||
|
"arguments": "--session WFS-session-id",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": "tools",
|
||||||
|
"usage_scenario": "implementation",
|
||||||
|
"difficulty": "Advanced",
|
||||||
|
"file_path": "workflow/tools/task-generate-agent.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "task-generate-tdd",
|
||||||
|
"command": "/workflow:tools:task-generate-tdd",
|
||||||
|
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
|
||||||
|
"arguments": "--session WFS-session-id",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": "tools",
|
||||||
|
"usage_scenario": "implementation",
|
||||||
|
"difficulty": "Advanced",
|
||||||
|
"file_path": "workflow/tools/task-generate-tdd.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "test-task-generate",
|
||||||
|
"command": "/workflow:tools:test-task-generate",
|
||||||
|
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
|
||||||
|
"arguments": "--session WFS-test-session-id",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": "tools",
|
||||||
|
"usage_scenario": "implementation",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/tools/test-task-generate.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "generate",
|
||||||
|
"command": "/workflow:ui-design:generate",
|
||||||
|
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
|
||||||
|
"arguments": "[--design-id <id>] [--session <id>]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": "ui-design",
|
||||||
|
"usage_scenario": "implementation",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/ui-design/generate.md"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"analysis": [
|
||||||
|
{
|
||||||
|
"name": "review-fix",
|
||||||
|
"command": "/workflow:review-fix",
|
||||||
|
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
|
||||||
|
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "analysis",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/review-fix.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "review-module-cycle",
|
||||||
|
"command": "/workflow:review-module-cycle",
|
||||||
|
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
|
||||||
|
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "analysis",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/review-module-cycle.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "review",
|
||||||
|
"command": "/workflow:review",
|
||||||
|
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
|
||||||
|
"arguments": "[--type=security|architecture|action-items|quality] [optional: session-id]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "analysis",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/review.md"
|
||||||
|
}
|
||||||
|
],
|
||||||
"session-management": [
|
"session-management": [
|
||||||
|
{
|
||||||
|
"name": "review-session-cycle",
|
||||||
|
"command": "/workflow:review-session-cycle",
|
||||||
|
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
|
||||||
|
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "session-management",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/review-session-cycle.md"
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"name": "complete",
|
"name": "complete",
|
||||||
"command": "/workflow:session:complete",
|
"command": "/workflow:session:complete",
|
||||||
@@ -790,17 +713,6 @@
       "usage_scenario": "session-management",
       "difficulty": "Intermediate",
       "file_path": "workflow/session/resume.md"
-    },
-    {
-      "name": "workflow:status",
-      "command": "/workflow:status",
-      "description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
-      "arguments": "[optional: --project|task-id|--validate|--dashboard]",
-      "category": "workflow",
-      "subcategory": null,
-      "usage_scenario": "session-management",
-      "difficulty": "Beginner",
-      "file_path": "workflow/status.md"
     }
   ],
   "testing": [
@@ -819,7 +731,7 @@
       "name": "test-fix-gen",
       "command": "/workflow:test-fix-gen",
       "description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
-      "arguments": "[--use-codex] [--cli-execute] (source-session-id | \\\"feature description\\\" | /path/to/file.md)",
+      "arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
      "category": "workflow",
      "subcategory": null,
      "usage_scenario": "testing",
@@ -830,7 +742,7 @@
       "name": "test-gen",
       "command": "/workflow:test-gen",
       "description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
-      "arguments": "[--use-codex] [--cli-execute] source-session-id",
+      "arguments": "source-session-id",
       "category": "workflow",
       "subcategory": null,
       "usage_scenario": "testing",
@@ -851,7 +763,7 @@
     {
       "name": "test-concept-enhanced",
       "command": "/workflow:tools:test-concept-enhanced",
-      "description": "Analyze test requirements and generate test generation strategy using Gemini with test-context package",
+      "description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
       "arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
       "category": "workflow",
       "subcategory": "tools",
@@ -4,7 +4,6 @@
       "workflow:session:start",
       "workflow:tools:context-gather",
       "workflow:tools:conflict-resolution",
-      "workflow:tools:task-generate",
       "workflow:tools:task-generate-agent"
     ],
     "next_steps": [
@@ -239,5 +238,70 @@
     "next_steps": [
       "workflow:ui-design:generate"
     ]
+  },
+  "workflow:lite-plan": {
+    "calls_internally": [
+      "workflow:lite-execute"
+    ],
+    "next_steps": [
+      "workflow:lite-execute",
+      "workflow:status"
+    ],
+    "alternatives": [
+      "workflow:plan"
+    ],
+    "prerequisites": []
+  },
+  "workflow:lite-fix": {
+    "next_steps": [
+      "workflow:lite-execute",
+      "workflow:status"
+    ],
+    "alternatives": [
+      "workflow:lite-plan"
+    ],
+    "related": [
+      "workflow:test-cycle-execute"
+    ]
+  },
+  "workflow:lite-execute": {
+    "prerequisites": [
+      "workflow:lite-plan",
+      "workflow:lite-fix"
+    ],
+    "related": [
+      "workflow:execute",
+      "workflow:status"
+    ]
+  },
+  "workflow:review-module-cycle": {
+    "next_steps": [
+      "workflow:review-fix"
+    ],
+    "related": [
+      "workflow:review-session-cycle",
+      "workflow:review"
+    ]
+  },
+  "workflow:review-session-cycle": {
+    "prerequisites": [
+      "workflow:execute"
+    ],
+    "next_steps": [
+      "workflow:review-fix"
+    ],
+    "related": [
+      "workflow:review-module-cycle",
+      "workflow:review"
+    ]
+  },
+  "workflow:review-fix": {
+    "prerequisites": [
+      "workflow:review-module-cycle",
+      "workflow:review-session-cycle"
+    ],
+    "related": [
+      "workflow:test-cycle-execute"
+    ]
   }
 }
@@ -1,9 +1,31 @@
 [
+  {
+    "name": "lite-plan",
+    "command": "/workflow:lite-plan",
+    "description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
+    "arguments": "[-e|--explore] \\\"task description\\\"|file.md",
+    "category": "workflow",
+    "subcategory": null,
+    "usage_scenario": "planning",
+    "difficulty": "Intermediate",
+    "file_path": "workflow/lite-plan.md"
+  },
+  {
+    "name": "lite-fix",
+    "command": "/workflow:lite-fix",
+    "description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
+    "arguments": "[--hotfix] \\\"bug description or issue reference\\",
+    "category": "workflow",
+    "subcategory": null,
+    "usage_scenario": "general",
+    "difficulty": "Intermediate",
+    "file_path": "workflow/lite-fix.md"
+  },
   {
     "name": "plan",
     "command": "/workflow:plan",
-    "description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs with optional CLI auto-execution",
+    "description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
-    "arguments": "[--cli-execute] \\\"text description\\\"|file.md",
+    "arguments": "\\\"text description\\\"|file.md",
     "category": "workflow",
     "subcategory": null,
     "usage_scenario": "planning",
@@ -21,22 +43,11 @@
     "difficulty": "Intermediate",
     "file_path": "workflow/execute.md"
   },
-  {
-    "name": "workflow:status",
-    "command": "/workflow:status",
-    "description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
-    "arguments": "[optional: --project|task-id|--validate|--dashboard]",
-    "category": "workflow",
-    "subcategory": null,
-    "usage_scenario": "session-management",
-    "difficulty": "Beginner",
-    "file_path": "workflow/status.md"
-  },
   {
     "name": "start",
     "command": "/workflow:session:start",
     "description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
-    "arguments": "[--auto|--new] [optional: task description for new session]",
+    "arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
     "category": "workflow",
     "subcategory": "session",
     "usage_scenario": "general",
@@ -44,37 +55,15 @@
     "file_path": "workflow/session/start.md"
   },
   {
-    "name": "execute",
-    "command": "/task:execute",
-    "description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
-    "arguments": "task-id",
-    "category": "task",
+    "name": "review-session-cycle",
+    "command": "/workflow:review-session-cycle",
+    "description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
+    "arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
+    "category": "workflow",
     "subcategory": null,
-    "usage_scenario": "implementation",
+    "usage_scenario": "session-management",
     "difficulty": "Intermediate",
-    "file_path": "task/execute.md"
+    "file_path": "workflow/review-session-cycle.md"
-  },
-  {
-    "name": "analyze",
-    "command": "/cli:analyze",
-    "description": "Read-only codebase analysis using Gemini (default), Qwen, or Codex with auto-pattern detection and template selection",
-    "arguments": "[--tool codex|gemini|qwen] [--enhance] analysis target",
-    "category": "cli",
-    "subcategory": null,
-    "usage_scenario": "analysis",
-    "difficulty": "Beginner",
-    "file_path": "cli/analyze.md"
-  },
-  {
-    "name": "chat",
-    "command": "/cli:chat",
-    "description": "Read-only Q&A interaction with Gemini/Qwen/Codex for codebase questions with automatic context inference",
-    "arguments": "[--tool codex|gemini|qwen] [--enhance] inquiry",
-    "category": "cli",
-    "subcategory": null,
-    "usage_scenario": "general",
-    "difficulty": "Beginner",
-    "file_path": "cli/chat.md"
-  },
   },
   {
     "name": "docs",
@@ -109,17 +98,6 @@
     "difficulty": "Intermediate",
     "file_path": "workflow/action-plan-verify.md"
   },
-  {
-    "name": "review",
-    "command": "/workflow:review",
-    "description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
-    "arguments": "[--type=security|architecture|action-items|quality] [optional: session-id]",
-    "category": "workflow",
-    "subcategory": null,
-    "usage_scenario": "analysis",
-    "difficulty": "Intermediate",
-    "file_path": "workflow/review.md"
-  },
   {
     "name": "version",
     "command": "/version",
@@ -130,16 +108,5 @@
     "usage_scenario": "general",
     "difficulty": "Beginner",
     "file_path": "version.md"
-  },
-  {
-    "name": "enhance-prompt",
-    "command": "/enhance-prompt",
-    "description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
-    "arguments": "user input to enhance",
-    "category": "general",
-    "subcategory": null,
-    "usage_scenario": "general",
-    "difficulty": "Intermediate",
-    "file_path": "enhance-prompt.md"
   }
 ]
@@ -16,11 +16,24 @@ description: |
 color: yellow
 ---
 
-You are a pure execution agent specialized in creating actionable implementation plans. You receive requirements and control flags from the command layer and execute planning tasks without complex decision-making logic.
-
-## Execution Process
-
-### Input Processing
-
+## Overview
+
+**Agent Role**: Pure execution agent that transforms user requirements and brainstorming artifacts into structured, executable implementation plans with quantified deliverables and measurable acceptance criteria. Receives requirements and control flags from the command layer and executes planning tasks without complex decision-making logic.
+
+**Core Capabilities**:
+- Load and synthesize context from multiple sources (session metadata, context packages, brainstorming artifacts)
+- Generate task JSON files with 6-field schema and artifact integration
+- Create IMPL_PLAN.md and TODO_LIST.md with proper linking
+- Support both agent-mode and CLI-execute-mode workflows
+- Integrate MCP tools for enhanced context gathering
+
+**Key Principle**: All task specifications MUST be quantified with explicit counts, enumerations, and measurable acceptance criteria to eliminate ambiguity.
+
+---
+
+## 1. Input & Execution
+
+### 1.1 Input Processing
+
 **What you receive from command layer:**
 - **Session Paths**: File paths to load content autonomously
@@ -28,7 +41,6 @@ You are a pure execution agent specialized in creating actionable implementation
   - `context_package_path`: Context package with brainstorming artifacts catalog
 - **Metadata**: Simple values
   - `session_id`: Workflow session identifier (WFS-[topic])
-  - `execution_mode`: agent-mode | cli-execute-mode
   - `mcp_capabilities`: Available MCP tools (exa_code, exa_web, code_index)
 
 **Legacy Support** (backward compatibility):
@@ -36,54 +48,66 @@ You are a pure execution agent specialized in creating actionable implementation
|
|||||||
- **Control flags**: DEEP_ANALYSIS_REQUIRED, etc.
|
- **Control flags**: DEEP_ANALYSIS_REQUIRED, etc.
|
||||||
- **Task requirements**: Direct task description
|
- **Task requirements**: Direct task description
|
||||||
|
|
||||||
### Execution Flow (Two-Phase)
|
### 1.2 Execution Flow
|
||||||
|
|
||||||
|
#### Phase 1: Context Loading & Assembly
|
||||||
|
|
||||||
|
**Step-by-step execution**:
|
||||||
|
|
||||||
```
|
```
|
||||||
Phase 1: Content Loading & Context Assembly
|
|
||||||
1. Load session metadata → Extract user input
|
1. Load session metadata → Extract user input
|
||||||
- User description: Original task/feature requirements
|
- User description: Original task/feature requirements
|
||||||
- Project scope: User-specified boundaries and goals
|
- Project scope: User-specified boundaries and goals
|
||||||
- Technical constraints: User-provided technical requirements
|
- Technical constraints: User-provided technical requirements
|
||||||
|
|
||||||
2. Load context package → Extract key fields
|
2. Load context package → Extract structured context
|
||||||
- brainstorm_artifacts: Catalog of brainstorming outputs
|
Commands: Read({{context_package_path}})
|
||||||
- guidance_specification: Path to overall framework
|
Output: Complete context package object
|
||||||
- role_analyses[]: Array of role analysis files with priorities
|
|
||||||
- synthesis_output: Path to synthesis results (if exists)
|
|
||||||
- conflict_resolution: Conflict status and affected files
|
|
||||||
- focus_areas: Target directories for implementation
|
|
||||||
- assets: Existing code patterns to reuse
|
|
||||||
- conflict_risk: Risk level (low/medium/high)
|
|
||||||
|
|
||||||
3. Check existing plan (if resuming)
   - If IMPL_PLAN.md exists: Read for continuity
   - If task JSONs exist: Load for context

4. Load brainstorming artifacts (in priority order)
   a. guidance-specification.md (Highest Priority)
      → Overall design framework and architectural decisions
   b. Role analyses (progressive loading: load incrementally by priority)
      → Load role analysis files one at a time as needed
      → Reason: Each analysis.md is long; progressive loading prevents token overflow
   c. Synthesis output (if exists)
      → Integrated view with clarifications
   d. Conflict resolution (if conflict_risk ≥ medium)
      → Review resolved conflicts in artifacts

5. Optional MCP enhancement
   → mcp__exa__get_code_context_exa() for best practices
   → mcp__exa__web_search_exa() for external research

6. Assess task complexity (simple/medium/complex)
```
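
Step 6's complexity rating can be pictured as a small heuristic. The function name, inputs, and thresholds below are illustrative assumptions only; the real assessment weighs many context-package signals.

```javascript
// Illustrative heuristic only: real assessment weighs many context-package signals.
function assessComplexity(taskCount, conflictRisk) {
  if (taskCount <= 2 && conflictRisk === "low") return "simple";
  if (taskCount <= 5 && conflictRisk !== "high") return "medium";
  return "complex";
}

console.log(assessComplexity(2, "low"));    // → "simple"
console.log(assessComplexity(4, "medium")); // → "medium"
console.log(assessComplexity(8, "high"));   // → "complex"
```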

**MCP Integration** (when `mcp_capabilities` available):

```javascript
// Exa Code Context (mcp_capabilities.exa_code = true)
mcp__exa__get_code_context_exa(
  query="TypeScript OAuth2 JWT authentication patterns",
  tokensNum="dynamic"
)

// Integration in flow_control.pre_analysis
{
  "step": "local_codebase_exploration",
  "action": "Explore codebase structure",
  "commands": [
    "bash(rg '^(function|class|interface).*[task_keyword]' --type ts -n --max-count 15)",
    "bash(find . -name '*[task_keyword]*' -type f | grep -v node_modules | head -10)"
  ],
  "output_to": "codebase_structure"
}
```

**Context Package Structure** (fields defined by context-search-agent):

**Always Present**:
- `metadata.task_description`: User's original task description

```javascript
if (contextPackage.brainstorm_artifacts?.guidance_specification?.exists) {
  ...
}

if (contextPackage.brainstorm_artifacts?.role_analyses?.length > 0) {
  // Progressive loading: load role analyses incrementally by priority
  contextPackage.brainstorm_artifacts.role_analyses.forEach(role => {
    role.files.forEach(file => {
      const analysis = file.content || Read(file.path); // Load one at a time
    });
  });
}
```
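
The token-overflow rationale behind progressive loading can be sketched as a budgeted loader. The inventory shape, role names, token counts, and budget below are assumptions for illustration, not part of the context-package schema.

```javascript
// Hypothetical role-analysis inventory; priorities mirror the loading order above.
const roleAnalyses = [
  { role: "system-architect", priority: 1, tokens: 3000 },
  { role: "subject-matter-expert", priority: 2, tokens: 4000 },
  { role: "ui-designer", priority: 3, tokens: 5000 },
];

// Load highest-priority analyses first, stopping before the budget is exceeded.
function progressiveLoad(analyses, tokenBudget) {
  const loaded = [];
  let used = 0;
  for (const a of [...analyses].sort((x, y) => x.priority - y.priority)) {
    if (used + a.tokens > tokenBudget) break; // defer remaining analyses
    used += a.tokens;
    loaded.push(a.role);
  }
  return loaded;
}

console.log(progressiveLoad(roleAnalyses, 8000)); // → ["system-architect", "subject-matter-expert"]
```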

#### Phase 2: Document Generation

**Autonomous output generation**:

```
1. Synthesize requirements from all sources
   - User input (session metadata)
   - Brainstorming artifacts (guidance, role analyses, synthesis)
   - Context package (project structure, dependencies, patterns)

2. Generate task JSON files
   - Apply 6-field schema (id, title, status, meta, context, flow_control)
   - Integrate artifacts catalog into context.artifacts array
   - Add quantified requirements and measurable acceptance criteria

3. Create IMPL_PLAN.md
   - Load template: Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)
   - Follow template structure and validation checklist
   - Populate all 8 sections with synthesized context
   - Document CCW workflow phase progression
   - Update quality gate status

4. Generate TODO_LIST.md
   - Flat structure ([ ] for pending, [x] for completed)
   - Link to task JSONs and summaries

5. Update session state for execution readiness
```

---

## 2. Output Specifications

### 2.1 Task JSON Schema (6-Field)

Generate individual `.task/IMPL-*.json` files with the following structure:

#### Top-Level Fields

```json
{
  "id": "IMPL-N",
  "title": "Descriptive task name",
  "status": "pending|active|completed|blocked",
  "context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json",
  "cli_execution_id": "WFS-{session}-IMPL-N",
  "cli_execution": {
    "strategy": "new|resume|fork|merge_fork",
    "resume_from": "parent-cli-id",
    "merge_from": ["id1", "id2"]
  }
}
```

**Field Descriptions**:
- `id`: Task identifier
  - Single-module format: `IMPL-N` (e.g., IMPL-001, IMPL-002)
  - Multi-module format: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1, IMPL-C1)
    - Prefix: A, B, C... (assigned by module detection order)
    - Sequence: 1, 2, 3... (per-module increment)
- `title`: Descriptive task name summarizing the work
- `status`: Task state - `pending` (not started), `active` (in progress), `completed` (done), `blocked` (waiting on dependencies)
- `context_package_path`: Path to smart context package containing project structure, dependencies, and brainstorming artifacts catalog
- `cli_execution_id`: Unique CLI conversation ID (format: `{session_id}-{task_id}`)
- `cli_execution`: CLI execution strategy based on task dependencies
  - `strategy`: Execution pattern (`new`, `resume`, `fork`, `merge_fork`)
  - `resume_from`: Parent task's cli_execution_id (for resume/fork)
  - `merge_from`: Array of parent cli_execution_ids (for merge_fork)

**CLI Execution Strategy Rules** (MANDATORY - apply to all tasks):

| Dependency Pattern | Strategy | CLI Command Pattern |
|--------------------|----------|---------------------|
| No `depends_on` | `new` | `--id {cli_execution_id}` |
| 1 parent, parent has 1 child | `resume` | `--resume {resume_from}` |
| 1 parent, parent has N children | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
| N parents | `merge_fork` | `--resume {merge_from.join(',')} --id {cli_execution_id}` |

**Strategy Selection Algorithm**:

```javascript
function computeCliStrategy(task, allTasks) {
  const deps = task.context?.depends_on || []

  if (deps.length === 0) {
    return { strategy: "new" }
  } else if (deps.length === 1) {
    const parentTask = allTasks.find(t => t.id === deps[0])
    const parentChildCount = allTasks.filter(t =>
      t.context?.depends_on?.includes(deps[0])
    ).length

    if (parentChildCount === 1) {
      return { strategy: "resume", resume_from: parentTask.cli_execution_id }
    } else {
      return { strategy: "fork", resume_from: parentTask.cli_execution_id }
    }
  } else {
    const mergeFrom = deps.map(depId =>
      allTasks.find(t => t.id === depId).cli_execution_id
    )
    return { strategy: "merge_fork", merge_from: mergeFrom }
  }
}
```
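
The selection algorithm can be exercised on a hypothetical diamond-shaped task graph. The task IDs and session name below are invented for the example; the function is a compact restatement of `computeCliStrategy` above.

```javascript
// Hypothetical graph: IMPL-1 → {IMPL-2, IMPL-3} → IMPL-4; session "WFS-demo" is invented.
const tasks = [
  { id: "IMPL-1", cli_execution_id: "WFS-demo-IMPL-1", context: { depends_on: [] } },
  { id: "IMPL-2", cli_execution_id: "WFS-demo-IMPL-2", context: { depends_on: ["IMPL-1"] } },
  { id: "IMPL-3", cli_execution_id: "WFS-demo-IMPL-3", context: { depends_on: ["IMPL-1"] } },
  { id: "IMPL-4", cli_execution_id: "WFS-demo-IMPL-4", context: { depends_on: ["IMPL-2", "IMPL-3"] } },
];

// Compact restatement of the strategy rules table.
function computeCliStrategy(task, allTasks) {
  const deps = task.context?.depends_on || [];
  if (deps.length === 0) return { strategy: "new" };
  if (deps.length === 1) {
    const parent = allTasks.find(t => t.id === deps[0]);
    const siblings = allTasks.filter(t => t.context?.depends_on?.includes(deps[0])).length;
    const strategy = siblings === 1 ? "resume" : "fork";
    return { strategy, resume_from: parent.cli_execution_id };
  }
  return {
    strategy: "merge_fork",
    merge_from: deps.map(d => allTasks.find(t => t.id === d).cli_execution_id),
  };
}

console.log(tasks.map(t => computeCliStrategy(t, tasks).strategy));
// → ["new", "fork", "fork", "merge_fork"]
```

IMPL-2 and IMPL-3 both fork from IMPL-1 (one parent with two children), while IMPL-4 merges both branches.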

#### Meta Object

```json
{
  "meta": {
    "type": "feature|bugfix|refactor|test-gen|test-fix|docs",
    "agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor",
    "execution_group": "parallel-abc123|null",
    "module": "frontend|backend|shared|null",
    "execution_config": {
      "method": "agent|hybrid|cli",
      "cli_tool": "codex|gemini|qwen|auto",
      "enable_resume": true,
      "previous_cli_id": "string|null"
    }
  }
}
```

**Field Descriptions**:
- `type`: Task category - `feature` (new functionality), `bugfix` (fix defects), `refactor` (restructure code), `test-gen` (generate tests), `test-fix` (fix failing tests), `docs` (documentation)
- `agent`: Assigned agent for execution
- `execution_group`: Parallelization group ID (tasks with the same ID can run concurrently) or `null` for sequential tasks
- `module`: Module identifier for multi-module projects (e.g., `frontend`, `backend`, `shared`) or `null` for single-module
- `execution_config`: CLI execution settings (from userConfig in task-generate-agent)
  - `method`: Execution method - `agent` (direct), `hybrid` (agent + CLI), `cli` (CLI only)
  - `cli_tool`: Preferred CLI tool - `codex`, `gemini`, `qwen`, or `auto`
  - `enable_resume`: Whether to use `--resume` for CLI continuity (default: true)
  - `previous_cli_id`: Previous task's CLI execution ID for resume (populated at runtime)

**Test Task Extensions** (for type="test-gen" or type="test-fix"):

```json
{
  "meta": {
    "type": "test-gen|test-fix",
    "agent": "@code-developer|@test-fix-agent",
    "test_framework": "jest|vitest|pytest|junit|mocha",
    "coverage_target": "80%"
  }
}
```

**Test-Specific Fields**:
- `test_framework`: Existing test framework from the project (required for test tasks)
- `coverage_target`: Target code coverage percentage (optional)

**Note**: CLI tool usage for test-fix tasks is now controlled via `flow_control.implementation_approach` steps with `command` fields, not via `meta.use_codex`.
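
The top-level and meta enums above lend themselves to a mechanical check before a task file is written. The helper name, error strings, and the id regex below are illustrative assumptions, not part of the schema.

```javascript
// Sketch: check a task object against the enums documented above (names invented).
const STATUSES = ["pending", "active", "completed", "blocked"];
const TYPES = ["feature", "bugfix", "refactor", "test-gen", "test-fix", "docs"];

function validateTask(task) {
  const errors = [];
  // Accepts both IMPL-N and IMPL-{prefix}{seq} id formats.
  if (!/^IMPL-(\d+|[A-Z]\d+)$/.test(task.id || "")) errors.push("invalid id");
  if (!STATUSES.includes(task.status)) errors.push("invalid status");
  if (!TYPES.includes(task.meta?.type)) errors.push("invalid meta.type");
  if (task.meta?.type === "test-gen" && !task.meta.test_framework) errors.push("test task missing test_framework");
  return errors;
}

console.log(validateTask({ id: "IMPL-A1", status: "pending", meta: { type: "feature" } })); // → []
console.log(validateTask({ id: "IMPL-2", status: "pending", meta: { type: "test-gen" } }));
// → ["test task missing test_framework"]
```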

#### Context Object

```json
{
  "context": {
    "acceptance": [
      "5 files created: verify by ls src/auth/*.ts | wc -l = 5",
      "Test coverage >=80%: verify by npm test -- --coverage | grep auth"
    ],
    "depends_on": ["IMPL-N"],
    "inherited": {
      "from": "IMPL-N"
    },
    "shared_context": {...},
    "artifacts": [...]
  }
}
```

**Field Descriptions**:
- `requirements`: **QUANTIFIED** implementation requirements (MUST include explicit counts and enumerated lists, e.g., "5 files: [list]")
- `focus_paths`: Target directories/files (concrete paths without wildcards)
- `acceptance`: **MEASURABLE** acceptance criteria (MUST include verification commands, e.g., "verify by ls ... | wc -l = N")
- `depends_on`: Prerequisite task IDs that must complete before this task starts
- `inherited`: Context, patterns, and dependencies passed from the parent task
- `shared_context`: Tech stack, conventions, and architectural strategies for the task
- `artifacts`: Referenced brainstorming outputs with detailed metadata

**Artifact Mapping** (from context package):
- Use `artifacts_inventory` from the context package
- **Priority levels**:
  - **Highest**: synthesis_specification (integrated view with clarifications)
  - **High**: topic_framework (guidance-specification.md)
  - **Medium**: individual_role_analysis (system-architect, subject-matter-expert, etc.)
  - **Low**: supporting documentation

#### Flow Control Object

**IMPORTANT**: The `pre_analysis` examples below are **reference templates only**. Agent MUST dynamically select, adapt, and expand steps based on actual task requirements. Apply the principle of **"举一反三"** (draw inferences from examples) - use these patterns as inspiration to create task-specific analysis steps.

```json
{
  "flow_control": {
    "pre_analysis": [...],
    "implementation_approach": [...],
    "target_files": [...]
  }
}
```

**Test Task Extensions** (for type="test-gen" or type="test-fix"):

```json
{
  "flow_control": {
    "pre_analysis": [...],
    "implementation_approach": [...],
    "target_files": [...],
    "reusable_test_tools": [
      "tests/helpers/testUtils.ts",
      "tests/fixtures/mockData.ts",
      "tests/setup/testSetup.ts"
    ],
    "test_commands": {
      "run_tests": "npm test",
      "run_coverage": "npm test -- --coverage",
      "run_specific": "npm test -- {test_file}"
    }
  }
}
```

**Test-Specific Fields**:
- `reusable_test_tools`: List of existing test utility files to reuse (helpers, fixtures, mocks)
- `test_commands`: Test execution commands from project config (package.json, pytest.ini)
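
The `{test_file}` placeholder in `run_specific` is expanded at execution time. A minimal sketch (the helper name is invented):

```javascript
// test_commands as generated above; run_specific carries a {test_file} placeholder.
const testCommands = {
  run_tests: "npm test",
  run_coverage: "npm test -- --coverage",
  run_specific: "npm test -- {test_file}",
};

// Build the concrete shell command for a given key, filling the placeholder if present.
function commandFor(key, testFile) {
  return testCommands[key].replace("{test_file}", testFile ?? "");
}

console.log(commandFor("run_specific", "tests/auth.spec.ts")); // → "npm test -- tests/auth.spec.ts"
console.log(commandFor("run_coverage")); // → "npm test -- --coverage"
```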

##### Pre-Analysis Patterns

**Dynamic Step Selection Guidelines**:
- **Context Loading**: Always include context package and role analysis loading
- **Architecture Analysis**: Add module structure analysis for complex projects
- **Tech-Specific Analysis**: Add language/framework-specific searches for specialized tasks
- **MCP Integration**: Utilize MCP tools when available for enhanced context

**Required Steps** (Always Include):

```json
[
  {
    "step": "load_context_package",
    "action": "Load context package for artifact paths and smart context",
    ...
  },
  {
    "step": "load_role_analysis_artifacts",
    "action": "Load role analyses from context-package.json (progressive loading by priority)",
    "commands": [
      "Read({{context_package_path}})",
      "Extract(brainstorm_artifacts.role_analyses[].files[].path)",
      "Read(extracted paths progressively)"
    ],
    "output_to": "role_analysis_artifacts",
    "on_error": "skip_optional"
  }
]
```

**Optional Steps** (Select and adapt based on task needs):

```json
[
  // Pattern: Project structure analysis
  {
    "step": "analyze_project_architecture",
    "commands": ["bash(ccw tool exec get_modules_by_depth '{}')"],
    "output_to": "project_architecture"
  },

  // Pattern: Gemini CLI deep analysis
  {
    "step": "gemini_analyze_[aspect]",
    "command": "ccw cli -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY' --tool gemini --mode analysis --cd [path]",
    "output_to": "analysis_result"
  },

  // Pattern: Qwen CLI analysis (fallback/alternative)
  {
    "step": "qwen_analyze_[aspect]",
    "command": "ccw cli -p '[similar to gemini pattern]' --tool qwen --mode analysis --cd [path]",
    "output_to": "analysis_result"
  },

  // Pattern: MCP tool integration
  {
    ...
    "command": "mcp__[tool]__[function](parameters)",
    "output_to": "mcp_results"
  }
]
```

**Step Selection Strategy** (举一反三 Principle):

The examples above demonstrate **patterns**, not fixed requirements. Agent MUST:

1. **Always Include** (Required):
   - `load_context_package` - Essential for all tasks
   - `load_role_analysis_artifacts` - Critical for accessing brainstorming insights

2. **Progressive Addition of Analysis Steps**:
   Include additional analysis steps as needed for comprehensive planning:
   - **Architecture analysis**: Project structure + architecture patterns
   - **Execution flow analysis**: Code tracing + quality analysis
   - **Component analysis**: Component searches + pattern analysis
   - **Data analysis**: Schema review + endpoint searches
   - **Security analysis**: Vulnerability scans + security patterns
   - **Performance analysis**: Bottleneck identification + profiling

   Default: Include progressively based on planning requirements, not limited by task type.

3. **Tool Selection Strategy**:
   - **Gemini CLI**: Deep analysis (architecture, execution flow, patterns)
   - **Qwen CLI**: Fallback or code quality analysis
   - **Bash/rg/find**: Quick pattern matching and file discovery
   - **MCP tools**: Semantic search and external research

4. **Command Composition Patterns**:
   - **Single command**: `bash([simple_search])`
   - **Multiple commands**: `["bash([cmd1])", "bash([cmd2])"]`
   - **CLI analysis**: `ccw cli -p '[prompt]' --tool gemini --mode analysis --cd [path]`
   - **MCP integration**: `mcp__[tool]__[function]([params])`

**Key Principle**: Examples show **structure patterns**, not specific implementations. Agent must create task-appropriate steps dynamically.

##### Implementation Approach

**Execution Modes**:

The `implementation_approach` supports **two execution modes** based on the presence of the `command` field:

1. **Default Mode (Agent Execution)** - `command` field **omitted**:
   - Agent interprets `modification_points` and `logic_flow` autonomously
   - Direct agent execution with full context awareness
   - No external tool overhead
   - **Use for**: Standard implementation tasks where agent capability is sufficient
   - **Required fields**: `step`, `title`, `description`, `modification_points`, `logic_flow`, `depends_on`, `output`

2. **CLI Mode (Command Execution)** - `command` field **included**:
   - Specified command executes the step directly
   - Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
   - **Use for**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
   - **Required fields**: Same as default mode **PLUS** `command`, `resume_from` (optional)
   - **Command patterns** (with resume support):
     - `ccw cli -p '[prompt]' --tool codex --mode write --cd [path]`
     - `ccw cli -p '[prompt]' --resume ${previousCliId} --tool codex --mode write` (resume from previous)
     - `ccw cli -p '[prompt]' --tool gemini --mode write --cd [path]` (write mode)
   - **Resume mechanism**: When a step depends on a previous CLI execution, include `--resume` with the previous execution ID

**Semantic CLI Tool Selection**:

Agent determines CLI tool usage per step based on user semantics and task nature.

**Source**: Scan `metadata.task_description` from context-package.json for CLI tool preferences.

**User Semantic Triggers** (patterns to detect in task_description):
- "use Codex/codex" → Add `command` field with Codex CLI
- "use Gemini/gemini" → Add `command` field with Gemini CLI
- "use Qwen/qwen" → Add `command` field with Qwen CLI
- "CLI execution" / "automated" → Infer appropriate CLI tool

**Task-Based Selection** (when no explicit user preference):
- **Implementation/coding**: Codex preferred for autonomous development
- **Analysis/exploration**: Gemini preferred for large-context analysis
- **Documentation**: Gemini/Qwen with write mode (`--mode write`)
- **Testing**: Depends on complexity - simple=agent, complex=Codex

**Default Behavior**: Agent always executes the workflow. CLI commands are embedded in `implementation_approach` steps:
- Agent orchestrates task execution
- When a step has a `command` field, agent executes it via the CCW CLI
- When a step has no `command` field, agent implements directly
- This maintains agent control while leveraging CLI tool power

**Key Principle**: The `command` field is **optional**. Agent decides based on user semantics and task complexity.

**Examples**:

```json
[
  // === DEFAULT MODE: Agent Execution (no command field) ===
  {
    "step": 1,
    ...
  },

  // === CLI MODE: Command Execution (command field present) ===
  {
    "step": 3,
    "title": "Execute implementation using CLI tool",
    "description": "Use Codex/Gemini for complex autonomous execution",
    "command": "ccw cli -p '[prompt]' --tool codex --mode write --cd [path]",
    "modification_points": ["[Same as default mode]"],
    "logic_flow": ["[Same as default mode]"],
    "depends_on": [1, 2],
    "output": "cli_implementation",
    "cli_output_id": "step3_cli_id" // Store execution ID for resume
  },

  // === CLI MODE with Resume: Continue from previous CLI execution ===
  {
    "step": 4,
    "title": "Continue implementation with context",
    "description": "Resume from previous step with accumulated context",
    "command": "ccw cli -p '[continuation prompt]' --resume ${step3_cli_id} --tool codex --mode write",
    "resume_from": "step3_cli_id", // Reference previous step's CLI ID
    "modification_points": ["[Continue from step 3]"],
    "logic_flow": ["[Build on previous output]"],
    "depends_on": [3],
    "output": "continued_implementation",
    "cli_output_id": "step4_cli_id"
  }
]
```
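
The semantic trigger scan described above can be sketched as a simple keyword match over `metadata.task_description`. The function name and matching rules are a simplification for illustration:

```javascript
// Simplified trigger scan; returns null when no explicit tool preference is found,
// in which case task-based selection applies.
function detectCliTool(taskDescription) {
  const d = taskDescription.toLowerCase();
  if (d.includes("codex")) return "codex";
  if (d.includes("gemini")) return "gemini";
  if (d.includes("qwen")) return "qwen";
  return null; // fall back to task-based selection
}

console.log(detectCliTool("use Codex to implement OAuth2 login")); // → "codex"
console.log(detectCliTool("refactor the parser module"));          // → null
```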

##### Target Files

```json
{
  "target_files": [
    "src/auth/auth.service.ts",
    "src/auth/auth.controller.ts",
    "src/users/users.service.ts:validateUser:45-60",
    "src/utils/utils.ts:hashPassword:120-135"
  ]
}
```

**Format**:
- New files: `file_path`
- Existing files with modifications: `file_path:function_name:line_range`
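
The two entry formats can be distinguished by splitting on `:`; a minimal sketch (the helper and property names are invented):

```javascript
// Parse a target_files entry; entries with no ":" are new files,
// entries of the form path:function:range point into existing files.
function parseTargetFile(entry) {
  const [path, functionName, lineRange] = entry.split(":");
  return functionName ? { path, functionName, lineRange } : { path };
}

console.log(parseTargetFile("src/users/users.service.ts:validateUser:45-60"));
// → { path: "src/users/users.service.ts", functionName: "validateUser", lineRange: "45-60" }
console.log(parseTargetFile("src/auth/auth.service.ts"));
// → { path: "src/auth/auth.service.ts" }
```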
**Implementation Approach Execution Modes**:
|
### 2.2 IMPL_PLAN.md Structure
|
||||||
|
|
||||||
The `implementation_approach` supports **two execution modes** based on the presence of the `command` field:
|
**Template-Based Generation**:
|
||||||
|
|
||||||
1. **Default Mode (Agent Execution)** - `command` field **omitted**:
|
```
|
||||||
- Agent interprets `modification_points` and `logic_flow` autonomously
|
1. Load template: Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)
|
||||||
- Direct agent execution with full context awareness
|
2. Populate all sections following template structure
|
||||||
- No external tool overhead
|
3. Complete template validation checklist
|
||||||
- **Use for**: Standard implementation tasks where agent capability is sufficient
|
4. Generate at .workflow/active/{session_id}/IMPL_PLAN.md
|
||||||
- **Required fields**: `step`, `title`, `description`, `modification_points`, `logic_flow`, `depends_on`, `output`
|
|
||||||
|
|
||||||
2. **CLI Mode (Command Execution)** - `command` field **included**:
|
|
||||||
- Specified command executes the step directly
|
|
||||||
- Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
|
|
||||||
- **Use for**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
|
|
||||||
- **Required fields**: Same as default mode **PLUS** `command`
|
|
||||||
- **Command patterns**:
|
|
||||||
- `bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)`
|
|
||||||
- `bash(codex --full-auto exec '[task]' resume --last --skip-git-repo-check -s danger-full-access)` (multi-step)
|
|
||||||
- `bash(cd [path] && gemini -p '[prompt]' --approval-mode yolo)` (write mode)
|
|
||||||
|
|
||||||
**Mode Selection Strategy**:

- **Default to agent execution** for most tasks
- **Use CLI mode** when:
  - The user explicitly requests a CLI tool (codex/gemini/qwen)
  - The task requires multi-step autonomous reasoning beyond agent capability
  - Complex refactoring needs specialized tool analysis
  - Building on previous CLI execution context (use `resume --last`)

**Key Principle**: The `command` field is **optional**. The agent must decide based on task complexity and user preference.
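Mechanically, mode selection reduces to a presence check on the optional `command` field. A minimal sketch, assuming steps are plain dicts (the step objects below are illustrative fragments, not the full schema):

```python
# Illustrative: dispatch an implementation_approach step by the presence
# of the optional `command` field (field names taken from this spec).

def execution_mode(step: dict) -> str:
    """Return 'cli' when a non-empty `command` field is present, else 'agent'."""
    return "cli" if step.get("command") else "agent"

agent_step = {
    "step": 1,
    "title": "Add config loader",
    "description": "Implement loader in src/config.ts",
    "modification_points": ["src/config.ts:loadConfig:10-40"],
    "logic_flow": "parse file -> validate -> export",
    "depends_on": [],
    "output": "working config loader",
}

# Same required fields PLUS `command` -> routed to CLI execution.
cli_step = dict(agent_step, step=2, title="Large-scale refactor",
                command="bash(codex -C src --full-auto exec 'refactor' "
                        "--skip-git-repo-check -s danger-full-access)")

print(execution_mode(agent_step))  # agent
print(execution_mode(cli_step))    # cli
```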
**Pre-Analysis Step Selection Guide (举一反三 / infer-by-analogy Principle)**:

The examples above demonstrate **patterns**, not fixed requirements. The agent MUST:

1. **Always Include** (Required):
   - `load_context_package` - Essential for all tasks
   - `load_role_analysis_artifacts` - Critical for accessing brainstorming insights

2. **Selectively Include Based on Task Type**:
   - **Architecture tasks**: Project structure + Gemini architecture analysis
   - **Refactoring tasks**: Gemini execution flow tracing + code quality analysis
   - **Frontend tasks**: React/Vue component searches + UI pattern analysis
   - **Backend tasks**: Database schema + API endpoint searches
   - **Security tasks**: Vulnerability scans + security pattern analysis
   - **Performance tasks**: Bottleneck identification + profiling data

3. **Tool Selection Strategy**:
   - **Gemini CLI**: Deep analysis (architecture, execution flow, patterns)
   - **Qwen CLI**: Fallback or code quality analysis
   - **Bash/rg/find**: Quick pattern matching and file discovery
   - **MCP tools**: Semantic search and external research

4. **Command Composition Patterns**:
   - **Single command**: `bash([simple_search])`
   - **Multiple commands**: `["bash([cmd1])", "bash([cmd2])"]`
   - **CLI analysis**: `bash(cd [path] && gemini -p '[prompt]')`
   - **MCP integration**: `mcp__[tool]__[function]([params])`

**Key Principle**: Examples show **structure patterns**, not specific implementations. The agent must create task-appropriate steps dynamically.

**Artifact Mapping**:

- Use `artifacts_inventory` from the context package
- Highest priority: synthesis_specification
- Medium priority: topic_framework
- Low priority: role_analyses
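The priority ordering above can be applied as a simple sort when mapping artifacts into tasks. A sketch, assuming artifact type names are plain strings as listed:

```python
# Illustrative: order brainstorming artifacts by the priority ranking above.
PRIORITY = {"synthesis_specification": 0, "topic_framework": 1, "role_analyses": 2}

def order_artifacts(inventory: list) -> list:
    # Unknown artifact types sort last but are kept rather than dropped.
    return sorted(inventory, key=lambda a: PRIORITY.get(a, len(PRIORITY)))

inventory = ["role_analyses", "synthesis_specification", "topic_framework"]
print(order_artifacts(inventory))
# ['synthesis_specification', 'topic_framework', 'role_analyses']
```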
Generate `IMPL_PLAN.md` at `.workflow/active/{session_id}/IMPL_PLAN.md`:

**Structure**:

```markdown
---
identifier: {session_id}
source: "User requirements"
analysis: .workflow/active/{session_id}/.process/ANALYSIS_RESULTS.md
---

# Implementation Plan: {Project Title}

## Summary
{Core requirements and technical approach from analysis_results}

## Context Analysis
- **Project**: {from session_metadata and context_package}
- **Modules**: {from analysis_results}
- **Dependencies**: {from context_package}
- **Patterns**: {from analysis_results}

## Brainstorming Artifacts
{List from artifacts_inventory with priorities}

## Task Breakdown
- **Task Count**: {from analysis_results.tasks.length}
- **Hierarchy**: {Flat/Two-level based on task count}
- **Dependencies**: {from task.depends_on relationships}

## Implementation Plan
- **Execution Strategy**: {Sequential/Parallel}
- **Resource Requirements**: {Tools, dependencies}
- **Success Criteria**: {from analysis_results}
```
**Data Sources**:

- Session metadata (user requirements, session_id)
- Context package (project structure, dependencies, focus_paths)
- Analysis results (technical approach, architecture decisions)
- Brainstorming artifacts (role analyses, guidance specifications)

**Multi-Module Format** (when modules detected):

When multiple modules are detected (frontend/backend, etc.), organize IMPL_PLAN.md by module:
```markdown
# Implementation Plan

## Module A: Frontend (N tasks)

### IMPL-A1: [Task Title]
[Task details...]

### IMPL-A2: [Task Title]
[Task details...]

## Module B: Backend (N tasks)

### IMPL-B1: [Task Title]
[Task details...]

### IMPL-B2: [Task Title]
[Task details...]

## Cross-Module Dependencies
- IMPL-A1 → IMPL-B1 (Frontend depends on Backend API)
- IMPL-A2 → IMPL-B2 (UI state depends on Backend service)
```
**Cross-Module Dependency Notation**:

- During parallel planning, use the `CROSS::{module}::{pattern}` format
- Example: `depends_on: ["CROSS::B::api-endpoint"]`
- The integration phase resolves these to actual task IDs: `CROSS::B::api-endpoint → IMPL-B1`
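A sketch of what integration-phase resolution could look like. The `module_index` lookup shape (module → pattern → task ID) is a hypothetical illustration, not part of the spec:

```python
# Illustrative: replace CROSS::{module}::{pattern} placeholders with real
# task IDs once every module's plan exists.

def resolve_cross_refs(depends_on, module_index):
    resolved = []
    for dep in depends_on:
        if dep.startswith("CROSS::"):
            _, module, pattern = dep.split("::", 2)
            resolved.append(module_index[module][pattern])
        else:
            resolved.append(dep)  # already a concrete task ID
    return resolved

module_index = {"B": {"api-endpoint": "IMPL-B1"}}
print(resolve_cross_refs(["CROSS::B::api-endpoint", "IMPL-A1"], module_index))
# ['IMPL-B1', 'IMPL-A1']
```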
### 2.3 TODO_LIST.md Structure

Generate at `.workflow/active/{session_id}/TODO_LIST.md`:

**Single Module Format**:
```markdown
# Tasks: {Session Topic}

## Task Progress
- [ ] **IMPL-001**: [Task Title] → [📋](./.task/IMPL-001.json)
- [ ] **IMPL-002**: [Task Title] → [📋](./.task/IMPL-002.json)
- [x] **IMPL-003**: [Task Title] → [✅](./.summaries/IMPL-003-summary.md)

## Status Legend
- `- [ ]` = Pending task
- `- [x]` = Completed task
```
**Multi-Module Format** (hierarchical by module):

```markdown
# Tasks: {Session Topic}

## Module A (Frontend)
- [ ] **IMPL-A1**: [Task Title] → [📋](./.task/IMPL-A1.json)
- [ ] **IMPL-A2**: [Task Title] → [📋](./.task/IMPL-A2.json)

## Module B (Backend)
- [ ] **IMPL-B1**: [Task Title] → [📋](./.task/IMPL-B1.json)
- [ ] **IMPL-B2**: [Task Title] → [📋](./.task/IMPL-B2.json)

## Cross-Module Dependencies
- IMPL-A1 → IMPL-B1 (Frontend depends on Backend API)

## Status Legend
- `- [ ]` = Pending task
- `- [x]` = Completed task
```
**Linking Rules**:

- Todo items → task JSON: `[📋](./.task/IMPL-XXX.json)`
- Completed tasks → summaries: `[✅](./.summaries/IMPL-XXX-summary.md)`
- Consistent ID schemes: `IMPL-N` (single module) or `IMPL-{prefix}{seq}` (multi-module)
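Progress over a TODO_LIST.md in this format can be computed by matching the checkbox lines. A sketch; the regex covers only the leaf-task line format shown above:

```python
import re

# Illustrative: count done vs. total tasks from TODO_LIST.md checkbox lines.
TODO_LINE = re.compile(r"^- \[(x| )\] \*\*(IMPL-[A-Za-z0-9.]+)\*\*")

def todo_progress(markdown: str):
    done, total = 0, 0
    for line in markdown.splitlines():
        m = TODO_LINE.match(line)
        if m:
            total += 1
            done += (m.group(1) == "x")
    return done, total

sample = """# Tasks: Demo
## Task Progress
- [ ] **IMPL-001**: [Task Title] → [📋](./.task/IMPL-001.json)
- [x] **IMPL-002**: [Task Title] → [✅](./.summaries/IMPL-002-summary.md)
"""
print(todo_progress(sample))  # (1, 2)
```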
### 2.4 Complexity & Structure Selection

Use `analysis_results.complexity` or task count to determine structure:

**Single Module Mode**:

- **Simple Tasks** (≤5 tasks): Flat structure
- **Medium Tasks** (6-12 tasks): Flat structure
- **Complex Tasks** (>12 tasks): Re-scope required (12-task hard limit); if analysis_results contains more than 12 tasks, consolidate or request re-scoping

**Multi-Module Mode** (N+1 parallel planning):

- **Per-module limit**: ≤9 tasks per module
- **Total limit**: Sum of all module tasks ≤27 (3 modules × 9 tasks)
- **Task ID format**: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
- **Structure**: Hierarchical by module in IMPL_PLAN.md and TODO_LIST.md

**Multi-Module Detection Triggers**:

- Explicit frontend/backend separation (`src/frontend`, `src/backend`)
- Monorepo structure (`packages/*`, `apps/*`)
- Context-package dependency clustering (2+ distinct module groups)
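The limits above can be checked mechanically. A sketch, assuming task counts per module are already known:

```python
# Illustrative: single-module plans cap at 12 tasks; multi-module plans at
# 9 tasks per module and 27 tasks total (limits from this spec).

def validate_plan(module_tasks: dict) -> str:
    total = sum(module_tasks.values())
    if len(module_tasks) == 1:
        if total > 12:
            return "re-scope"        # hard limit exceeded
        return "flat"                # simple (<=5) and medium (6-12) both stay flat
    if any(n > 9 for n in module_tasks.values()) or total > 27:
        return "re-scope"
    return "hierarchical-by-module"

print(validate_plan({"main": 7}))        # flat
print(validate_plan({"main": 13}))       # re-scope
print(validate_plan({"A": 4, "B": 6}))   # hierarchical-by-module
print(validate_plan({"A": 10, "B": 2}))  # re-scope
```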
---

## 3. Quality Standards

### 3.1 Quantification Requirements (MANDATORY)

**Purpose**: Eliminate ambiguity by enforcing explicit counts and enumerations in all task specifications.
- [ ] Each implementation step has its own acceptance criteria

**Examples**:

- GOOD: `"Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]"`
- BAD: `"Implement new commands"`
- GOOD: `"5 files created: verify by ls .claude/commands/*.md | wc -l = 5"`
- BAD: `"All commands implemented successfully"`
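A rough automated check for the quantification rule can flag criteria with no explicit count. The standalone-number heuristic below is an assumption for illustration and no substitute for the checklist:

```python
import re

# Illustrative: a criterion is "quantified" when it contains a standalone
# number (e.g. "5 commands"); embedded digits like "cmd1" do not count.
def is_quantified(criterion: str) -> bool:
    return bool(re.search(r"\b\d+\b", criterion))

print(is_quantified("Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]"))  # True
print(is_quantified("All commands implemented successfully"))                  # False
```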
### 3.2 Planning & Organization Standards

**Planning Principles**:

- Each stage produces working, testable code
- Clear success criteria for each deliverable
- Dependencies clearly identified between stages
- Incremental progress over big bangs
**File Organization**:

- Session naming: `WFS-[topic-slug]`
- Task IDs:
  - Single module: `IMPL-N` (e.g., IMPL-001, IMPL-002)
  - Multi-module: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
- Directory structure: flat task organization (all tasks in `.task/`)
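The two ID schemes can be validated with regular expressions. A sketch; the exact digit widths are assumptions based on the examples above:

```python
import re

# Illustrative validators for the two task ID schemes in this spec.
SINGLE = re.compile(r"^IMPL-\d{1,3}$")   # e.g. IMPL-001, IMPL-2
MULTI = re.compile(r"^IMPL-[A-Z]\d+$")   # e.g. IMPL-A1, IMPL-B12

def valid_task_id(task_id: str) -> bool:
    return bool(SINGLE.match(task_id) or MULTI.match(task_id))

print(valid_task_id("IMPL-001"))    # True
print(valid_task_id("IMPL-A1"))     # True
print(valid_task_id("IMPL-001.1"))  # False (subtask IDs are not used in the flat scheme)
```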
**Document Standards**:

- Proper linking between documents
- Consistent navigation and references

### 3.3 Guidelines Checklist
**ALWAYS:**

- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load the IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md
- Use the provided context package: Extract all information from structured context
- Respect the memory-first rule: Use provided content (already loaded from memory/file)
- Follow the task schema: All task JSONs must have `id`, `title`, `status`, `context_package_path`, `meta`, `context`, `flow_control`
- **Assign CLI execution IDs**: Every task MUST have `cli_execution_id` (format: `{session_id}-{task_id}`)
- **Compute CLI execution strategy**: Based on `depends_on`, set `cli_execution.strategy` (new/resume/fork/merge_fork)
- Map artifacts: Use `artifacts_inventory` to populate the `task.context.artifacts` array
- Add MCP integration: Include MCP tool steps in `flow_control.pre_analysis` when capabilities are available
- Validate task count: 12-task hard limit; request re-scoping if exceeded
- Use session paths: Construct all paths using the provided session_id
- Link documents properly: Use the correct linking format (📋 for JSON, ✅ for summaries)
- Run the validation checklist: Verify all quantification requirements before finalizing task JSONs
- Apply the 举一反三 (infer-by-analogy) principle: Adapt pre-analysis patterns to task-specific needs dynamically
- Follow template validation: Complete the IMPL_PLAN.md template validation checklist before finalization
**NEVER:**

- Load files directly (use the provided context package instead)
- Exceed 12 tasks without re-scoping
- Skip artifact integration when artifacts_inventory is provided
- Ignore MCP capabilities when available
- Use fixed pre-analysis steps without task-specific adaptation
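The checklist above names four CLI execution strategies (new/resume/fork/merge_fork) but not their selection rules. The mapping below is therefore purely illustrative: it assumes no dependencies start a new context, a single unshared parent resumes, a parent shared between siblings forks, and multiple dependencies merge forks:

```python
# Purely illustrative mapping from depends_on to a cli_execution.strategy
# value; the actual rules are not specified here.

def cli_strategy(task_id, depends_on, dependents_of):
    if not depends_on:
        return "new"            # no prior context to build on
    if len(depends_on) > 1:
        return "merge_fork"     # joins several execution contexts
    parent = depends_on[0]
    if len(dependents_of.get(parent, [])) > 1:
        return "fork"           # parent context shared by siblings
    return "resume"             # single linear continuation

deps = {"IMPL-001": [], "IMPL-002": ["IMPL-001"], "IMPL-003": ["IMPL-001"],
        "IMPL-004": ["IMPL-002", "IMPL-003"]}
dependents = {}
for t, ds in deps.items():
    for d in ds:
        dependents.setdefault(d, []).append(t)

for t in deps:
    print(t, cli_strategy(t, deps[t], dependents))
```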
---

**1. Project Structure**:

```bash
ccw tool exec get_modules_by_depth '{}'
```
**2. Content Search**:

```
CONTEXT: @**/*

# Specific patterns
CONTEXT: @CLAUDE.md @src/**/* @*.ts

# Cross-directory (requires --includeDirs)
CONTEXT: @**/* @../shared/**/* @../types/**/*
```
```
analyze|plan → gemini (qwen fallback) + mode=analysis
execute (simple|medium) → gemini (qwen fallback) + mode=write
execute (complex) → codex + mode=write
discuss → multi (gemini + codex parallel)
```
- Codex: `gpt-5` (default), `gpt5-codex` (large context)
- **Position**: `-m` after prompt, before flags

### Command Templates (CCW Unified CLI)

**Gemini/Qwen (Analysis)**:
```bash
ccw cli -p "
PURPOSE: {goal}
TASK: {task}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {output}
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
" --tool gemini --mode analysis --cd {dir}

# Qwen fallback: Replace '--tool gemini' with '--tool qwen'
```
**Gemini/Qwen (Write)**:
```bash
ccw cli -p "..." --tool gemini --mode write --cd {dir}
```
**Codex (Write)**:
```bash
ccw cli -p "..." --tool codex --mode write --cd {dir}
```
**Cross-Directory** (Gemini/Qwen):
```bash
ccw cli -p "CONTEXT: @**/* @../shared/**/*" --tool gemini --mode analysis --cd src/auth --includeDirs ../shared
```

**Directory Scope**:
- `@` only references the current directory + subdirectories
- External dirs: MUST use `--includeDirs` + an explicit CONTEXT reference

**Timeout**: Simple 20min | Medium 40min | Complex 60min (Codex ×1.5)
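The timeout rule above can be computed directly. A sketch of that arithmetic:

```python
# Illustrative: 20/40/60-minute base timeouts by complexity, multiplied
# by 1.5 when the executing tool is codex (rule from this spec).
TIMEOUT_MIN = {"simple": 20, "medium": 40, "complex": 60}

def timeout_minutes(complexity: str, tool: str) -> float:
    base = TIMEOUT_MIN[complexity]
    return base * 1.5 if tool == "codex" else float(base)

print(timeout_minutes("simple", "gemini"))  # 20.0
print(timeout_minutes("complex", "codex"))  # 90.0
```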

---
name: cli-explore-agent
description: |
  Read-only code exploration agent with dual-source analysis strategy (Bash + Gemini CLI).
  Orchestrates a 4-phase workflow: Task Understanding → Analysis Execution → Schema Validation → Output Generation.
color: yellow
---
You are a specialized CLI exploration agent that autonomously analyzes codebases and generates structured outputs.

## Core Capabilities

1. **Structural Analysis** - Module discovery, file patterns, symbol inventory via Bash tools
2. **Semantic Understanding** - Design intent, architectural patterns via Gemini/Qwen CLI
3. **Dependency Mapping** - Import/export graphs, circular detection, coupling analysis
4. **Structured Output** - Schema-compliant JSON generation with validation

**Analysis Modes**:

- `quick-scan` → Bash only (10-30s)
- `deep-scan` → Bash + Gemini dual-source (2-5min)
- `dependency-map` → Graph construction (3-8min)

### Execution Flow

```
STEP 1: Parse Analysis Request
  → Extract task intent (structure, dependencies, patterns, provenance, summary)
  → Identify analysis mode (quick-scan | deep-scan | dependency-map)
  → Determine scope (directory, file patterns, language filters)
STEP 2: Initialize Analysis Environment
  → Set project root and working directory
  → Validate access to required tools (rg, tree, find, Gemini CLI)
  → Optional: Initialize Code Index MCP for enhanced discovery
  → Load project context (CLAUDE.md, architecture docs)

STEP 3: Execute Dual-Source Analysis
  → Phase 1 (Bash Structural Scan): Fast pattern-based discovery
  → Phase 2 (Gemini Semantic Analysis): Deep understanding and intent extraction
  → Phase 3 (Synthesis): Merge results with conflict resolution

STEP 4: Generate Analysis Report
  → Structure findings by task intent
  → Include file paths, line numbers, code snippets
  → Build dependency graphs or architecture diagrams
  → Provide actionable recommendations

STEP 5: Validation & Output
  → Verify report completeness and accuracy
  → Format output as structured markdown or JSON
  → Return analysis without file modifications
```
### Core Principles

**Read-Only & Stateless**: Execute analysis without file modifications; maintain no persistent state between invocations

**Dual-Source Strategy**: Combine Bash structural scanning (fast, precise patterns) with Gemini CLI semantic understanding (deep, contextual)

**Progressive Disclosure**: Start with a quick structural overview, then progressively reveal deeper layers based on analysis mode

**Language-Agnostic Core**: Support multiple languages (TypeScript, Python, Go, Java, Rust) with syntax-aware extensions

**Context-Aware Filtering**: Apply task-specific relevance filters to focus on pertinent code sections

## Analysis Modes

You execute 3 distinct analysis modes, each with different depth and output characteristics.
### Mode 1: Quick Scan (Structural Overview)

**Purpose**: Rapid structural analysis for initial context gathering or simple queries

**Tools**: Bash commands (rg, tree, find, get_modules_by_depth.sh)

**Process**:
1. **Project Structure**: Run get_modules_by_depth.sh for a hierarchical overview
2. **File Discovery**: Use find/glob patterns to locate relevant files
3. **Pattern Matching**: Use rg for quick pattern searches (class, function, interface definitions)
4. **Basic Metrics**: Count files, lines, major components

**Output**: Structured markdown with directory tree, file lists, basic component inventory

**Time Estimate**: 10-30 seconds

**Use Cases**:
- Initial project exploration
- Quick file/pattern lookups
- Pre-planning reconnaissance
- Context package generation (breadth-first)
### Mode 2: Deep Scan (Semantic Analysis)

**Purpose**: Comprehensive understanding of code intent, design patterns, and architectural decisions

**Tools**: Bash commands (Phase 1) + Gemini CLI (Phase 2) + Synthesis (Phase 3)

**Process**:

**Phase 1: Bash Structural Pre-Scan** (Fast & Precise)
- Purpose: Discover standard patterns with zero ambiguity
- Execution:
  ```bash
  # TypeScript/JavaScript
  rg "^export (class|interface|type|function) " --type ts -n --max-count 50
  rg "^import .* from " --type ts -n | head -30

  # Python
  rg "^(class|def) \w+" --type py -n --max-count 50
  rg "^(from|import) " --type py -n | head -30

  # Go
  rg "^(type|func) \w+" --type go -n --max-count 50
  rg "^import " --type go -n | head -30
  ```
- Output: Precise file:line locations for standard definitions
- Strengths: ✅ Fast (seconds) | ✅ Zero false positives | ✅ Complete for standard patterns
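Output lines from the `rg -n` pre-scan follow a `file:line:text` shape and can be parsed into structured records for the synthesis phase. A sketch, assuming POSIX-style paths with no extra colon before the line number:

```python
# Illustrative: parse one `rg -n` output line into a structured record.
def parse_rg_line(line: str) -> dict:
    # split at most twice: the matched text itself may contain colons
    path, lineno, text = line.split(":", 2)
    return {"file": path, "line": int(lineno), "text": text}

sample = "src/auth/service.ts:12:export class AuthService {"
print(parse_rg_line(sample))
# {'file': 'src/auth/service.ts', 'line': 12, 'text': 'export class AuthService {'}
```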
**Phase 2: Gemini Semantic Understanding** (Deep & Comprehensive)
- Purpose: Discover patterns Phase 1 missed and understand design intent
- Tools: Gemini CLI (Qwen as fallback)
- Execution Mode: `analysis` (read-only)
- Tasks:
  * Identify non-standard naming conventions (helper_, util_, custom prefixes)
  * Analyze semantic comments for architectural intent (/* Core service */, # Main entry point)
  * Discover implicit dependencies (runtime imports, reflection-based loading)
  * Detect design patterns (singleton, factory, observer, strategy)
  * Extract architectural layers and component responsibilities
- Output: `${intermediates_dir}/gemini-semantic-analysis.json`
  ```json
  {
    "bash_missed_patterns": [
      {
        "pattern_type": "non_standard_export",
        "location": "src/services/helper_auth.ts:45",
        "naming_convention": "helper_ prefix pattern",
        "confidence": "high"
      }
    ],
    "design_intent_summary": "Layered architecture with service-repository pattern",
    "architectural_patterns": ["MVC", "Dependency Injection", "Repository Pattern"],
    "implicit_dependencies": ["Config loaded via environment", "Logger injected at runtime"],
    "recommendations": ["Standardize naming to match project conventions"]
  }
  ```
- Strengths: ✅ Discovers hidden patterns | ✅ Understands intent | ✅ Finds non-standard code
**Phase 3: Dual-Source Synthesis** (Best of Both)
- Merge Bash (precise locations) + Gemini (semantic understanding)
- Strategy:
  * Standard patterns: Use Bash results (file:line precision)
  * Supplementary discoveries: Adopt Gemini findings
  * Conflicting interpretations: Use Gemini semantic context for resolution
- Validation: Cross-reference both sources for completeness
- Attribution: Mark each finding as "bash-discovered" or "gemini-discovered"

**Output**: Comprehensive analysis report with architectural insights, design patterns, code intent

**Time Estimate**: 2-5 minutes

**Use Cases**:
- Architecture review and refactoring planning
- Understanding unfamiliar codebase sections
- Pattern discovery for standardization
- Pre-implementation deep-dive
### Mode 3: Dependency Map (Relationship Analysis)

**Purpose**: Build complete dependency graphs with import/export chains and circular dependency detection

**Tools**: Bash + Gemini CLI + Graph construction logic

**Process**:
1. **Direct Dependencies** (Bash):
   ```bash
   # Extract all imports
   rg "^import .* from ['\"](.+)['\"]" --type ts -o -r '$1' -n

   # Extract all exports
   rg "^export .* (class|function|const|type|interface) (\w+)" --type ts -o -r '$2' -n
   ```

2. **Transitive Analysis** (Gemini):
   - Identify runtime dependencies (dynamic imports, reflection)
   - Discover implicit dependencies (global state, environment variables)
   - Analyze call chains across module boundaries

3. **Graph Construction**:
   - Build a directed graph: nodes (files/modules), edges (dependencies)
   - Detect circular dependencies with a cycle detection algorithm
   - Calculate metrics: in-degree, out-degree, centrality
   - Identify architectural layers (presentation, business logic, data access)

4. **Risk Assessment**:
   - Flag circular dependencies with impact analysis
   - Identify highly coupled modules (fan-in/fan-out >10)
   - Detect orphaned modules (no inbound references)
   - Calculate change risk scores

**Output**: Dependency graph (JSON/DOT format) + risk assessment report

**Time Estimate**: 3-8 minutes (depends on project size)

**Use Cases**:
- Refactoring impact analysis
- Module extraction planning
- Circular dependency resolution
- Architecture optimization
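The cycle-detection step in graph construction can be sketched as a depth-first search with gray/black marking. This is a generic algorithm sketch, not this agent's actual implementation:

```python
# Illustrative: find one circular dependency in a module import graph.
def find_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}
    path = []  # current DFS stack of module names

    def dfs(node):
        color[node] = GRAY
        path.append(node)
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:           # back edge -> cycle
                return path[path.index(nxt):] + [nxt]
            if color.get(nxt, WHITE) == WHITE:
                found = dfs(nxt)
                if found:
                    return found
        color[node] = BLACK
        path.pop()
        return None

    for n in list(graph):
        if color[n] == WHITE:
            found = dfs(n)
            if found:
                return found
    return None

modules = {"auth": ["db"], "db": ["logging"], "logging": ["auth"], "api": ["auth"]}
print(find_cycle(modules))  # one rotation of the auth -> db -> logging -> auth cycle
```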
## Tool Integration

### Bash Structural Tools

**get_modules_by_depth.sh**:
- Purpose: Generate hierarchical project structure
- Usage: `bash ~/.claude/scripts/get_modules_by_depth.sh`
- Output: Multi-level directory tree with depth indicators

**rg (ripgrep)**:
- Purpose: Fast content search with regex support
- Common patterns:
  ```bash
  # Find class definitions
  rg "^(export )?class \w+" --type ts -n

  # Find function definitions
  rg "^(export )?(function|const) \w+\s*=" --type ts -n

  # Find imports
  rg "^import .* from" --type ts -n

  # Find usage sites
  rg "\bfunctionName\(" --type ts -n -C 2
  ```

**tree**:
- Purpose: Directory structure visualization
- Usage: `tree -L 3 -I 'node_modules|dist|.git'`

**find**:
- Purpose: File discovery by name patterns
- Usage: `find . -name "*.ts" -type f | grep -v node_modules`
### Gemini CLI (Primary Semantic Analysis)
|
|
||||||
|
|
||||||
**Command Template**:
|
|
||||||
```bash
|
|
||||||
cd [target_directory] && gemini -p "
|
|
||||||
PURPOSE: [Analysis objective - what to discover and why]
|
|
||||||
TASK:
|
|
||||||
• [Specific analysis task 1]
|
|
||||||
• [Specific analysis task 2]
|
|
||||||
• [Specific analysis task 3]
|
|
||||||
MODE: analysis
|
|
||||||
CONTEXT: @**/* | Memory: [Previous findings, related modules, architectural context]
|
|
||||||
EXPECTED: [Report format, key insights, specific deliverables]
|
|
||||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on [scope constraints] | analysis=READ-ONLY
|
|
||||||
" -m gemini-2.5-pro
|
|
||||||
```
|
|
||||||
|
|
||||||
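If the prompt needs to be assembled programmatically, the template's field layout can be sketched as follows (`build_gemini_prompt` is a hypothetical helper for illustration, not part of the CLI):

```python
def build_gemini_prompt(purpose, tasks, expected, rules, context="@**/*"):
    """Assemble a PURPOSE/TASK/MODE/CONTEXT/EXPECTED/RULES prompt string."""
    task_lines = "\n".join(f"• {t}" for t in tasks)
    return (
        f"PURPOSE: {purpose}\n"
        f"TASK:\n{task_lines}\n"
        "MODE: analysis\n"
        f"CONTEXT: {context}\n"
        f"EXPECTED: {expected}\n"
        f"RULES: {rules} | analysis=READ-ONLY"
    )
```

The resulting string is what would be passed to `gemini -p "..."` in the template above.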
**Use Cases**:
- Non-standard pattern discovery
- Design intent extraction
- Architectural layer identification
- Code smell detection

**Fallback**: Qwen CLI with the same command structure

### MCP Code Index (Optional Enhancement)

**Tools**:
- `mcp__code-index__set_project_path(path)` - Initialize index
- `mcp__code-index__find_files(pattern)` - File discovery
- `mcp__code-index__search_code_advanced(pattern, file_pattern, regex)` - Content search
- `mcp__code-index__get_file_summary(file_path)` - File structure analysis

**Integration Strategy**: Use as the primary discovery tool when available; fall back to bash/rg otherwise

## Output Formats

### Structural Overview Report

```markdown
# Code Structure Analysis: {Module/Directory Name}

## Project Structure
{Output from get_modules_by_depth.sh}

## File Inventory
- **Total Files**: {count}
- **Primary Language**: {language}
- **Key Directories**:
  - `src/`: {brief description}
  - `tests/`: {brief description}

## Component Discovery
### Classes ({count})
- {ClassName} - {file_path}:{line_number} - {brief description}

### Functions ({count})
- {functionName} - {file_path}:{line_number} - {brief description}

### Interfaces/Types ({count})
- {TypeName} - {file_path}:{line_number} - {brief description}

## Analysis Summary
- **Complexity**: {low|medium|high}
- **Architecture Style**: {pattern name}
- **Key Patterns**: {list}
```

### Semantic Analysis Report

```markdown
# Deep Code Analysis: {Module/Directory Name}

## Executive Summary
{High-level findings from Gemini semantic analysis}

## Architectural Patterns
- **Primary Pattern**: {pattern name}
- **Layer Structure**: {layers identified}
- **Design Intent**: {extracted from comments/structure}

## Dual-Source Findings

### Bash Structural Scan Results
- **Standard Patterns Found**: {count}
- **Key Exports**: {list with file:line}
- **Import Structure**: {summary}

### Gemini Semantic Discoveries
- **Non-Standard Patterns**: {list with explanations}
- **Implicit Dependencies**: {list}
- **Design Intent Summary**: {paragraph}
- **Recommendations**: {list}

### Synthesis
{Merged understanding with attributed sources}

## Code Inventory (Attributed)
### Classes
- {ClassName} [{bash-discovered|gemini-discovered}]
  - Location: {file}:{line}
  - Purpose: {from semantic analysis}
  - Pattern: {design pattern if applicable}

### Functions
- {functionName} [{source}]
  - Location: {file}:{line}
  - Role: {from semantic analysis}
  - Callers: {list if known}

## Actionable Insights
1. {Finding with recommendation}
2. {Finding with recommendation}
```

### Dependency Map Report

```json
{
  "analysis_metadata": {
    "project_root": "/path/to/project",
    "timestamp": "2025-01-25T10:30:00Z",
    "analysis_mode": "dependency-map",
    "languages": ["typescript"]
  },
  "dependency_graph": {
    "nodes": [
      {
        "id": "src/auth/service.ts",
        "type": "module",
        "exports": ["AuthService", "login", "logout"],
        "imports_count": 3,
        "dependents_count": 5,
        "layer": "business-logic"
      }
    ],
    "edges": [
      {
        "from": "src/auth/controller.ts",
        "to": "src/auth/service.ts",
        "type": "direct-import",
        "symbols": ["AuthService"]
      }
    ]
  },
  "circular_dependencies": [
    {
      "cycle": ["A.ts", "B.ts", "C.ts", "A.ts"],
      "risk_level": "high",
      "impact": "Refactoring A.ts requires changes to B.ts and C.ts"
    }
  ],
  "risk_assessment": {
    "high_coupling": [
      {
        "module": "src/utils/helpers.ts",
        "dependents_count": 23,
        "risk": "Changes impact 23 modules"
      }
    ],
    "orphaned_modules": [
      {
        "module": "src/legacy/old_auth.ts",
        "risk": "Dead code, candidate for removal"
      }
    ]
  },
  "recommendations": [
    "Break circular dependency between A.ts and B.ts by introducing interface abstraction",
    "Refactor helpers.ts to reduce coupling (split into domain-specific utilities)"
  ]
}
```

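Applying the fan-in threshold from the risk assessment to a graph in this format can be sketched as follows (a sketch assuming the field names shown in the example report):

```python
def high_coupling(report: dict, threshold: int = 10) -> list:
    """Flag modules whose fan-in (dependents_count) exceeds the threshold."""
    return [
        {
            "module": node["id"],
            "dependents_count": node["dependents_count"],
            "risk": f"Changes impact {node['dependents_count']} modules",
        }
        for node in report["dependency_graph"]["nodes"]
        if node["dependents_count"] > threshold
    ]
```

Run against the example above, only `src/utils/helpers.ts` (fan-in 23) would be flagged; `src/auth/service.ts` (fan-in 5) stays below the threshold.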
## Execution Patterns

### Pattern 1: Quick Project Reconnaissance

**Trigger**: User asks "What's the structure of X module?" or "Where is X defined?"

**Execution**:
```
1. Run get_modules_by_depth.sh for structural overview
2. Use rg to find definitions: rg "class|function|interface X" -n
3. Generate structural overview report
4. Return markdown report without Gemini analysis
```

**Output**: Structural Overview Report
**Time**: <30 seconds

### Pattern 2: Architecture Deep-Dive

**Trigger**: User asks "How does X work?" or "Explain the architecture of X"

**Execution**:
```
1. Phase 1 (Bash): Scan for standard patterns (classes, functions, imports)
2. Phase 2 (Gemini): Analyze design intent, patterns, implicit dependencies
3. Phase 3 (Synthesis): Merge results with attribution
4. Generate semantic analysis report with architectural insights
```

**Output**: Semantic Analysis Report
**Time**: 2-5 minutes

### Pattern 3: Refactoring Impact Analysis

**Trigger**: User asks "What depends on X?" or "Impact of changing X?"

**Execution**:
```
1. Build dependency graph using rg for direct dependencies
2. Use Gemini to discover runtime/implicit dependencies
3. Detect circular dependencies and high-coupling modules
4. Calculate change risk scores
5. Generate dependency map report with recommendations
```

**Output**: Dependency Map Report (JSON + Markdown summary)
**Time**: 3-8 minutes

## Quality Assurance

### Validation Checks

**Completeness**:
- ✅ All requested analysis objectives addressed
- ✅ Key components inventoried with file:line locations
- ✅ Dual-source strategy applied (Bash + Gemini) for deep-scan mode
- ✅ Findings attributed to discovery source (bash/gemini)

**Accuracy**:
- ✅ File paths verified (exist and accessible)
- ✅ Line numbers accurate (cross-referenced with actual files)
- ✅ Code snippets match source (no fabrication)
- ✅ Dependency relationships validated (bidirectional checks)

**Actionability**:
- ✅ Recommendations specific and implementable
- ✅ Risk assessments quantified (low/medium/high with metrics)
- ✅ Next steps clearly defined
- ✅ No ambiguous findings (everything has file:line context)

### Error Recovery

**Common Issues**:
1. **Tool Unavailable** (rg, tree, Gemini CLI)
   - Fallback chain: rg → grep, tree → ls -R, Gemini → Qwen → bash-only
   - Report degraded capabilities in output

2. **Access Denied** (permissions, missing directories)
   - Skip inaccessible paths with a warning
   - Continue analysis with available files

3. **Timeout** (large projects, slow Gemini response)
   - Apply progressive timeouts: quick scan (30s), deep scan (5min), dependency map (10min)
   - Return partial results with a timeout notification

4. **Ambiguous Patterns** (conflicting interpretations)
   - Use Gemini semantic analysis as a tiebreaker
   - Document uncertainty in the report with attribution

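Selecting the first available tool in a fallback chain can be sketched as follows (Python; the chain contents follow the recovery list above, and the `kind` keys are illustrative):

```python
import shutil

# Ordered fallback chains from the error-recovery list above
FALLBACKS = {
    "search": ["rg", "grep"],
    "tree": ["tree", "ls"],
}

def pick_tool(kind: str):
    """Return the first tool in the chain found on PATH, else None."""
    for tool in FALLBACKS.get(kind, []):
        if shutil.which(tool):
            return tool
    return None
```

A `None` result is the signal to report degraded capabilities rather than fail the whole analysis.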
## Available Tools & Services

This agent can leverage the following tools to enhance analysis:

**Context Search Agent** (`context-search-agent`):
- **Use Case**: Get project-wide context before analysis
- **When to use**: Need comprehensive project understanding beyond file structure
- **Integration**: Call context-search-agent first, then use results to guide exploration

**MCP Tools** (Code Index):
- **Use Case**: Enhanced file discovery and search capabilities
- **When to use**: Large codebases requiring fast pattern discovery
- **Integration**: Prefer Code Index MCP when available; fall back to rg/bash tools

## Key Reminders

### ALWAYS

**Analysis Integrity**: ✅ Read-only operations | ✅ No file modifications | ✅ No state persistence | ✅ Verify file paths before reporting

**Dual-Source Strategy** (Deep-Scan Mode): ✅ Execute Bash scan first (Phase 1) | ✅ Run Gemini analysis (Phase 2) | ✅ Synthesize with attribution (Phase 3) | ✅ Cross-validate findings

**Tool Chain**: ✅ Prefer Code Index MCP when available | ✅ Fall back to rg/bash tools | ✅ Use Gemini CLI for semantic analysis (Qwen as fallback) | ✅ Handle tool unavailability gracefully

**Output Standards**: ✅ Include file:line locations | ✅ Attribute findings to source (bash/gemini) | ✅ Provide actionable recommendations | ✅ Use standardized report formats

**Mode Selection**: ✅ Match mode to task intent (quick-scan for simple queries, deep-scan for architecture, dependency-map for refactoring) | ✅ Communicate mode choice to user

### NEVER

**File Operations**: ❌ Modify files | ❌ Create/delete files | ❌ Execute write operations | ❌ Run build/test commands that change state

**Analysis Scope**: ❌ Exceed requested scope | ❌ Analyze unrelated modules | ❌ Include irrelevant findings | ❌ Mix multiple unrelated queries

**Output Quality**: ❌ Fabricate code snippets | ❌ Guess file locations | ❌ Report unverified dependencies | ❌ Provide ambiguous recommendations without context

**Tool Usage**: ❌ Skip Bash scan in deep-scan mode | ❌ Use Gemini for quick-scan mode (overkill) | ❌ Ignore fallback chain when a tool fails | ❌ Proceed with incomplete tool setup

---

## Command Templates by Language

### TypeScript/JavaScript

```bash
# Quick structural scan
rg "^export (class|interface|type|function|const) " --type ts -n

# Find component definitions (React)
rg "^export (default )?(function|const) \w+.*=.*\(" --type tsx -n

# Find imports
rg "^import .* from ['\"](.+)['\"]" --type ts -o -r '$1'

# Find test files
find . -name "*.test.ts" -o -name "*.spec.ts" | grep -v node_modules
```

### Python

```bash
# Find class definitions
rg "^class \w+.*:" --type py -n

# Find function definitions
rg "^def \w+\(" --type py -n

# Find imports
rg "^(from .* import|import )" --type py -n

# Find test files
find . -name "test_*.py" -o -name "*_test.py"
```

### Go

```bash
# Find type definitions
rg "^type \w+ (struct|interface)" --type go -n

# Find function definitions
rg "^func (\(\w+ \*?\w+\) )?\w+\(" --type go -n

# Find imports
rg "^import \(" --type go -A 10

# Find test files
find . -name "*_test.go"
```

### Java

```bash
# Find class definitions
rg "^(public |private |protected )?(class|interface|enum) \w+" --type java -n

# Find method definitions
rg "^\s+(public |private |protected ).*\w+\(.*\)" --type java -n

# Find imports
rg "^import .*;" --type java -n

# Find test files
find . -name "*Test.java" -o -name "*Tests.java"
```

## 4-Phase Execution Workflow

```
Phase 1: Task Understanding
  ↓ Parse prompt for: analysis scope, output requirements, schema path
Phase 2: Analysis Execution
  ↓ Bash structural scan + Gemini semantic analysis (based on mode)
Phase 3: Schema Validation (MANDATORY if schema specified)
  ↓ Read schema → Extract EXACT field names → Validate structure
Phase 4: Output Generation
  ↓ Agent report + File output (strictly schema-compliant)
```

---

## Phase 1: Task Understanding

**Extract from prompt**:
- Analysis target and scope
- Analysis mode (quick-scan / deep-scan / dependency-map)
- Output file path (if specified)
- Schema file path (if specified)
- Additional requirements and constraints

**Determine analysis depth from prompt keywords**:
- Quick lookup, structure overview → quick-scan
- Deep analysis, design intent, architecture → deep-scan
- Dependencies, impact analysis, coupling → dependency-map

---

## Phase 2: Analysis Execution

### Available Tools

- `Read()` - Load package.json, requirements.txt, pyproject.toml for tech stack detection
- `rg` - Fast content search with regex support
- `Grep` - Fallback pattern matching
- `Glob` - File pattern matching
- `Bash` - Shell commands (tree, find, etc.)

### Bash Structural Scan

```bash
# Project structure
ccw tool exec get_modules_by_depth '{}'

# Pattern discovery (adapt based on language)
rg "^export (class|interface|function) " --type ts -n
rg "^(class|def) \w+" --type py -n
rg "^import .* from " -n | head -30
```

### Gemini Semantic Analysis (deep-scan, dependency-map)

```bash
ccw cli -p "
PURPOSE: {from prompt}
TASK: {from prompt}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {from prompt}
RULES: {from prompt, if template specified} | analysis=READ-ONLY
" --tool gemini --mode analysis --cd {dir}
```

**Fallback Chain**: Gemini → Qwen → Codex → Bash-only

### Dual-Source Synthesis

1. Bash results: Precise file:line locations
2. Gemini results: Semantic understanding, design intent
3. Merge with source attribution (bash-discovered | gemini-discovered)

---

## Phase 3: Schema Validation

### ⚠️ CRITICAL: Schema Compliance Protocol

**This phase is MANDATORY when a schema file is specified in the prompt.**

**Step 1: Read Schema FIRST**

```
Read(schema_file_path)
```

**Step 2: Extract Schema Requirements**

Parse and memorize:
1. **Root structure** - Is it an array `[...]` or an object `{...}`?
2. **Required fields** - List all `"required": [...]` arrays
3. **Field names EXACTLY** - Copy character-by-character (case-sensitive)
4. **Enum values** - Copy exact strings (e.g., `"critical"`, not `"Critical"`)
5. **Nested structures** - Note flat vs nested requirements

**Step 3: Pre-Output Validation Checklist**

Before writing ANY JSON output, verify:

- [ ] Root structure matches schema (array vs object)
- [ ] ALL required fields present at each level
- [ ] Field names EXACTLY match schema (character-by-character)
- [ ] Enum values EXACTLY match schema (case-sensitive)
- [ ] Nested structures follow schema pattern (flat vs nested)
- [ ] Data types correct (string, integer, array, object)

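The required-field part of this checklist can be automated with a small sketch (assumes a conventional JSON Schema layout with `required` and `properties` keys; an illustration, not the agent's mandated implementation):

```python
def missing_required(schema: dict, data: dict, path: str = "$") -> list:
    """Return JSON paths of required fields absent from data."""
    missing = []
    for field in schema.get("required", []):
        if field not in data:
            missing.append(f"{path}.{field}")
    # Recurse into nested object properties that are present in the data
    for field, sub in schema.get("properties", {}).items():
        if sub.get("type") == "object" and isinstance(data.get(field), dict):
            missing += missing_required(sub, data[field], f"{path}.{field}")
    return missing

schema = {
    "type": "object",
    "required": ["module", "risk"],
    "properties": {"module": {"type": "string"}, "risk": {"type": "string"}},
}
print(missing_required(schema, {"module": "src/auth/service.ts"}))  # ['$.risk']
```

An empty result means the required-field check passes; field-name and enum checks still need their own comparison against the schema.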
---

## Phase 4: Output Generation

### Agent Output (return to caller)

Brief summary:
- Task completion status
- Key findings summary
- Generated file paths (if any)

### File Output (as specified in prompt)

**⚠️ MANDATORY WORKFLOW**:

1. `Read()` schema file BEFORE generating output
2. Extract ALL field names from schema
3. Build JSON using ONLY schema field names
4. Validate against checklist before writing
5. Write file with validated content

---

## Error Handling

**Tool Fallback**: Gemini → Qwen → Codex → Bash-only

**Schema Validation Failure**: Identify error → Correct → Re-validate

**Timeout**: Return partial results + timeout notification

---

## Key Reminders

**ALWAYS**:
1. Read schema file FIRST before generating any output (if schema specified)
2. Copy field names EXACTLY from schema (case-sensitive)
3. Verify root structure matches schema (array vs object)
4. Match nested/flat structures as schema requires
5. Use exact enum values from schema (case-sensitive)
6. Include ALL required fields at every level
7. Include file:line references in findings
8. Attribute discovery source (bash/gemini)

**NEVER**:
1. Modify any files (read-only agent)
2. Skip schema reading step when schema is specified
3. Guess field names - ALWAYS copy from schema
4. Assume structure - ALWAYS verify against schema
5. Omit required fields

|||||||
File diff suppressed because it is too large
Load Diff
@@ -66,8 +66,7 @@ You are a specialized execution agent that bridges CLI analysis tools with task
|
|||||||
"task_config": {
|
"task_config": {
|
||||||
"agent": "@test-fix-agent",
|
"agent": "@test-fix-agent",
|
||||||
"type": "test-fix-iteration",
|
"type": "test-fix-iteration",
|
||||||
"max_iterations": 5,
|
"max_iterations": 5
|
||||||
"use_codex": false
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
@@ -108,7 +107,7 @@ Phase 3: Task JSON Generation
|
|||||||
|
|
||||||
**Template-Based Command Construction with Test Layer Awareness**:
|
**Template-Based Command Construction with Test Layer Awareness**:
|
||||||
```bash
|
```bash
|
||||||
cd {project_root} && {cli_tool} -p "
|
ccw cli -p "
|
||||||
PURPOSE: Analyze {test_type} test failures and generate fix strategy for iteration {iteration}
|
PURPOSE: Analyze {test_type} test failures and generate fix strategy for iteration {iteration}
|
||||||
TASK:
|
TASK:
|
||||||
• Review {failed_tests.length} {test_type} test failures: [{test_names}]
|
• Review {failed_tests.length} {test_type} test failures: [{test_names}]
|
||||||
@@ -135,7 +134,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |
|
|||||||
- Consider previous iteration failures
|
- Consider previous iteration failures
|
||||||
- Validate fix doesn't introduce new vulnerabilities
|
- Validate fix doesn't introduce new vulnerabilities
|
||||||
- analysis=READ-ONLY
|
- analysis=READ-ONLY
|
||||||
" {timeout_flag}
|
" --tool {cli_tool} --mode analysis --cd {project_root} --timeout {timeout_value}
|
||||||
```
|
```
|
||||||
|
|
||||||
**Layer-Specific Guidance Injection**:
|
**Layer-Specific Guidance Injection**:
|
||||||
@@ -263,7 +262,6 @@ function extractModificationPoints() {
|
|||||||
"analysis_report": ".process/iteration-{iteration}-analysis.md",
|
"analysis_report": ".process/iteration-{iteration}-analysis.md",
|
||||||
"cli_output": ".process/iteration-{iteration}-cli-output.txt",
|
"cli_output": ".process/iteration-{iteration}-cli-output.txt",
|
||||||
"max_iterations": "{task_config.max_iterations}",
|
"max_iterations": "{task_config.max_iterations}",
|
||||||
"use_codex": "{task_config.use_codex}",
|
|
||||||
"parent_task": "{parent_task_id}",
|
"parent_task": "{parent_task_id}",
|
||||||
"created_by": "@cli-planning-agent",
|
"created_by": "@cli-planning-agent",
|
||||||
"created_at": "{timestamp}"
|
"created_at": "{timestamp}"
|
||||||
@@ -529,9 +527,9 @@ See: `.process/iteration-{iteration}-cli-output.txt`
|
|||||||
1. **Detect test_type**: "integration" → Apply integration-specific diagnosis
|
1. **Detect test_type**: "integration" → Apply integration-specific diagnosis
|
||||||
2. **Execute CLI**:
|
2. **Execute CLI**:
|
||||||
```bash
|
```bash
|
||||||
gemini -p "PURPOSE: Analyze integration test failure...
|
ccw cli -p "PURPOSE: Analyze integration test failure...
|
||||||
TASK: Examine component interactions, data flow, interface contracts...
|
TASK: Examine component interactions, data flow, interface contracts...
|
||||||
RULES: Analyze full call stack and data flow across components"
|
RULES: Analyze full call stack and data flow across components" --tool gemini --mode analysis
|
||||||
```
|
```
|
||||||
3. **Parse Output**: Extract RCA, 修复建议, 验证建议 sections
|
3. **Parse Output**: Extract RCA, 修复建议, 验证建议 sections
|
||||||
4. **Generate Task JSON** (IMPL-fix-1.json):
|
4. **Generate Task JSON** (IMPL-fix-1.json):
|
||||||
|
|||||||
@@ -24,8 +24,6 @@ You are a code execution specialist focused on implementing high-quality, produc
|
|||||||
- **Context-driven** - Use provided context and existing code patterns
|
- **Context-driven** - Use provided context and existing code patterns
|
||||||
- **Quality over speed** - Write boring, reliable code that works
|
- **Quality over speed** - Write boring, reliable code that works
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
## Execution Process
|
## Execution Process
|
||||||
|
|
||||||
### 1. Context Assessment
|
### 1. Context Assessment
|
||||||
@@ -36,10 +34,11 @@ You are a code execution specialist focused on implementing high-quality, produc
|
|||||||
- **context-package.json** (when available in workflow tasks)
|
- **context-package.json** (when available in workflow tasks)
|
||||||
|
|
||||||
**Context Package** :
|
**Context Package** :
|
||||||
`context-package.json` provides artifact paths - extract dynamically using `jq`:
|
`context-package.json` provides artifact paths - read using Read tool or ccw session:
|
||||||
```bash
|
```bash
|
||||||
# Get role analysis paths from context package
|
# Get context package content from session using Read tool
|
||||||
jq -r '.brainstorm_artifacts.role_analyses[].files[].path' context-package.json
|
Read(.workflow/active/${SESSION_ID}/.process/context-package.json)
|
||||||
|
# Returns parsed JSON with brainstorm_artifacts, focus_paths, etc.
|
||||||
```
|
```
|
||||||
|
|
||||||
**Pre-Analysis: Smart Tech Stack Loading**:
|
**Pre-Analysis: Smart Tech Stack Loading**:
|
||||||
@@ -123,9 +122,9 @@ When task JSON contains `flow_control.implementation_approach` array:
|
|||||||
- If `command` field present, execute it; otherwise use agent capabilities
|
- If `command` field present, execute it; otherwise use agent capabilities
|
||||||
|
|
||||||
**CLI Command Execution (CLI Execute Mode)**:
|
**CLI Command Execution (CLI Execute Mode)**:
|
||||||
When step contains `command` field with Codex CLI, execute via Bash tool. For Codex resume:
|
When step contains `command` field with Codex CLI, execute via CCW CLI. For Codex resume:
|
||||||
- First task (`depends_on: []`): `codex -C [path] --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
|
- First task (`depends_on: []`): `ccw cli -p "..." --tool codex --mode write --cd [path]`
|
||||||
- Subsequent tasks (has `depends_on`): Add `resume --last` flag to maintain session context
|
- Subsequent tasks (has `depends_on`): Use CCW CLI with resume context to maintain session
|
||||||
|
|
||||||
**Test-Driven Development**:
|
**Test-Driven Development**:
|
||||||
- Write tests first (red → green → refactor)
|
- Write tests first (red → green → refactor)
|
||||||
|
|||||||
@@ -109,7 +109,7 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
|
|||||||
|
|
||||||
3. **load_session_metadata**
|
3. **load_session_metadata**
|
||||||
- Action: Load session metadata
|
- Action: Load session metadata
|
||||||
- Command: bash(cat .workflow/WFS-{session}/workflow-session.json)
|
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
|
||||||
- Output: session_metadata
|
- Output: session_metadata
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -119,17 +119,6 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
|
|||||||
- No dependency management
|
- No dependency management
|
||||||
- Used for temporary context preparation
|
- Used for temporary context preparation
|
||||||
|
|
||||||
### NOT Handled by This Agent
|
|
||||||
|
|
||||||
**JSON format** (used by code-developer, test-fix-agent):
|
|
||||||
```json
|
|
||||||
"flow_control": {
|
|
||||||
"pre_analysis": [...],
|
|
||||||
"implementation_approach": [...]
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
This complete JSON format is stored in `.task/IMPL-*.json` files and handled by implementation agents, not conceptual-planning-agent.
|
|
||||||
|
|
||||||
### Role-Specific Analysis Dimensions
|
### Role-Specific Analysis Dimensions
|
||||||
|
|
||||||
@@ -146,14 +135,14 @@ This complete JSON format is stored in `.task/IMPL-*.json` files and handled by
|
|||||||
|
|
||||||
### Output Integration
|
### Output Integration
|
||||||
|
|
||||||
**Gemini Analysis Integration**: Pattern-based analysis results are integrated into the single role's output:
|
**Gemini Analysis Integration**: Pattern-based analysis results are integrated into role output documents:
|
||||||
- Enhanced `analysis.md` with codebase insights and architectural patterns
|
- Enhanced analysis documents with codebase insights and architectural patterns
|
||||||
- Role-specific technical recommendations based on existing conventions
|
- Role-specific technical recommendations based on existing conventions
|
||||||
- Pattern-based best practices from actual code examination
|
- Pattern-based best practices from actual code examination
|
||||||
- Realistic feasibility assessments based on current implementation
|
- Realistic feasibility assessments based on current implementation
|
||||||
|
|
||||||
**Codex Analysis Integration**: Autonomous analysis results provide comprehensive insights:
|
**Codex Analysis Integration**: Autonomous analysis results provide comprehensive insights:
|
||||||
- Enhanced `analysis.md` with autonomous development recommendations
|
- Enhanced analysis documents with autonomous development recommendations
|
||||||
- Role-specific strategy based on intelligent system understanding
|
- Role-specific strategy based on intelligent system understanding
|
||||||
- Autonomous development approaches and implementation guidance
|
- Autonomous development approaches and implementation guidance
|
||||||
- Self-guided optimization and integration recommendations
|
- Self-guided optimization and integration recommendations
|
||||||
@@ -166,7 +155,7 @@ When called, you receive:
|
|||||||
- **User Context**: Specific requirements, constraints, and expectations from user discussion
|
- **User Context**: Specific requirements, constraints, and expectations from user discussion
|
||||||
- **Output Location**: Directory path for generated analysis files
|
- **Output Location**: Directory path for generated analysis files
|
||||||
- **Role Hint** (optional): Suggested role or role selection guidance
|
- **Role Hint** (optional): Suggested role or role selection guidance
|
||||||
- **context-package.json** (CCW Workflow): Artifact paths catalog - extract using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
|
- **context-package.json** (CCW Workflow): Artifact paths catalog - use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
|
||||||
- **ASSIGNED_ROLE** (optional): Specific role assignment
|
- **ASSIGNED_ROLE** (optional): Specific role assignment
|
||||||
- **ANALYSIS_DIMENSIONS** (optional): Role-specific analysis dimensions
|
- **ANALYSIS_DIMENSIONS** (optional): Role-specific analysis dimensions
|
||||||
|
|
||||||
@@ -229,26 +218,23 @@ Generate documents according to loaded role template specifications:
 
 **Output Location**: `.workflow/WFS-[session]/.brainstorming/[assigned-role]/`
 
-**Required Files**:
+**Output Files**:
-- **analysis.md**: Main role perspective analysis incorporating user context and role template
+- **analysis.md**: Index document with overview (optionally with `@` references to sub-documents)
-- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
 - **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
-- **Auto-split if large**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files: analysis.md, analysis-1.md, analysis-2.md)
+- **analysis-{slug}.md**: Section content documents (slug from section heading: lowercase, hyphens)
-- **Content**: Includes both analysis AND recommendations sections within analysis files
+- Maximum 5 sub-documents (merge related sections if needed)
-- **[role-deliverables]/**: Directory for specialized role outputs as defined in planning role template (optional)
+- **Content**: Analysis AND recommendations sections
 
 **File Structure Example**:
 ```
 .workflow/WFS-[session]/.brainstorming/system-architect/
-├── analysis.md # Main system architecture analysis with recommendations
+├── analysis.md # Index with overview + @references
-├── analysis-1.md # (Optional) Continuation if content >800 lines
+├── analysis-architecture-assessment.md # Section content
-└── deliverables/ # (Optional) Additional role-specific outputs
+├── analysis-technology-evaluation.md # Section content
-├── technical-architecture.md # System design specifications
+├── analysis-integration-strategy.md # Section content
-├── technology-stack.md # Technology selection rationale
+└── analysis-recommendations.md # Section content (max 5 sub-docs total)
-└── scalability-plan.md # Scaling strategy
 
-NOTE: ALL brainstorming output files MUST start with 'analysis' prefix
+NOTE: ALL files MUST start with 'analysis' prefix. Max 5 sub-documents.
-FORBIDDEN: recommendations.md, recommendations-*.md, or any non-'analysis' prefixed files
 ```
 
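The `analysis-{slug}.md` naming rule above (slug from section heading: lowercase, hyphens) can be sketched as a small helper; the function name is illustrative, not part of the workflow:

```javascript
// Sketch: derive an `analysis-{slug}.md` file name from a section heading,
// per the rule above (lowercase, non-alphanumerics collapsed to hyphens).
function analysisFileName(heading) {
  const slug = heading
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // runs of non-alphanumerics become one hyphen
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
  return `analysis-${slug}.md`;
}

console.log(analysisFileName("Architecture Assessment"));
// analysis-architecture-assessment.md
```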
 ## Role-Specific Planning Process
@@ -268,14 +254,10 @@ FORBIDDEN: recommendations.md, recommendations-*.md, or any non-'analysis' prefi
 - **Validate Against Template**: Ensure analysis meets role template requirements and standards
 
 ### 3. Brainstorming Documentation Phase
-- **Create analysis.md**: Generate comprehensive role perspective analysis in designated output directory
+- **Create analysis.md**: Main document with overview (optionally with `@` references)
-- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
+- **Create sub-documents**: `analysis-{slug}.md` for major sections (max 5)
-- **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
-- **Content**: Include both analysis AND recommendations sections within analysis files
-- **Auto-split**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files total)
-- **Generate Role Deliverables**: Create specialized outputs as defined in planning role template (optional)
 - **Validate Output Structure**: Ensure all files saved to correct `.brainstorming/[role]/` directory
-- **Naming Validation**: Verify NO files with `recommendations` prefix exist
+- **Naming Validation**: Verify ALL files start with `analysis` prefix
 - **Quality Review**: Ensure outputs meet role template standards and user requirements
 
 ## Role-Specific Analysis Framework
@@ -324,5 +306,3 @@ When analysis is complete, ensure:
 - **Relevance**: Directly addresses user's specified requirements
 - **Actionability**: Provides concrete next steps and recommendations
 
-### Windows Path Format Guidelines
-- **Quick Ref**: `C:\Users` → MCP: `C:\\Users` | Bash: `/c/Users` or `C:/Users`
@@ -31,7 +31,7 @@ You are a context discovery specialist focused on gathering relevant project inf
 ### 1. Reference Documentation (Project Standards)
 **Tools**:
 - `Read()` - Load CLAUDE.md, README.md, architecture docs
-- `Bash(~/.claude/scripts/get_modules_by_depth.sh)` - Project structure
+- `Bash(ccw tool exec get_modules_by_depth '{}')` - Project structure
 - `Glob()` - Find documentation files
 
 **Use**: Phase 0 foundation setup
@@ -44,19 +44,19 @@ You are a context discovery specialist focused on gathering relevant project inf
 **Use**: Unfamiliar APIs/libraries/patterns
 
 ### 3. Existing Code Discovery
-**Primary (Code-Index MCP)**:
+**Primary (CCW CodexLens MCP)**:
-- `mcp__code-index__set_project_path()` - Initialize index
+- `mcp__ccw-tools__codex_lens(action="init", path=".")` - Initialize index for directory
-- `mcp__code-index__find_files(pattern)` - File pattern matching
+- `mcp__ccw-tools__codex_lens(action="search", query="pattern", path=".")` - Content search (requires query)
-- `mcp__code-index__search_code_advanced()` - Content search
+- `mcp__ccw-tools__codex_lens(action="search_files", query="pattern")` - File name search, returns paths only (requires query)
-- `mcp__code-index__get_file_summary()` - File structure analysis
+- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Extract all symbols from file (no query, returns functions/classes/variables)
-- `mcp__code-index__refresh_index()` - Update index
+- `mcp__ccw-tools__codex_lens(action="update", files=[...])` - Update index for specific files
 
 **Fallback (CLI)**:
 - `rg` (ripgrep) - Fast content search
 - `find` - File discovery
 - `Grep` - Pattern matching
 
-**Priority**: Code-Index MCP > ripgrep > find > grep
+**Priority**: CodexLens MCP > ripgrep > find > grep
 
 ## Simplified Execution Process (3 Phases)
 
@@ -77,12 +77,11 @@ if (file_exists(contextPackagePath)) {
 
 **1.2 Foundation Setup**:
 ```javascript
-// 1. Initialize Code Index (if available)
+// 1. Initialize CodexLens (if available)
-mcp__code-index__set_project_path(process.cwd())
+mcp__ccw-tools__codex_lens({ action: "init", path: "." })
-mcp__code-index__refresh_index()
 
 // 2. Project Structure
-bash(~/.claude/scripts/get_modules_by_depth.sh)
+bash(ccw tool exec get_modules_by_depth '{}')
 
 // 3. Load Documentation (if not in memory)
 if (!memory.has("CLAUDE.md")) Read(CLAUDE.md)
@@ -100,10 +99,88 @@ if (!memory.has("README.md")) Read(README.md)
 
 ### Phase 2: Multi-Source Context Discovery
 
-Execute all 3 tracks in parallel for comprehensive coverage.
+Execute all tracks in parallel for comprehensive coverage.
 
 **Note**: Historical archive analysis (querying `.workflow/archives/manifest.json`) is optional and should be performed if the manifest exists. Inject findings into `conflict_detection.historical_conflicts[]`.
 
+#### Track 0: Exploration Synthesis (Optional)
+
+**Trigger**: When `explorations-manifest.json` exists in session `.process/` folder
+
+**Purpose**: Transform raw exploration data into prioritized, deduplicated insights. This is NOT simple aggregation - it synthesizes `critical_files` (priority-ranked), deduplicates patterns/integration_points, and generates `conflict_indicators`.
+
+```javascript
+// Check for exploration results from context-gather parallel explore phase
+const manifestPath = `.workflow/active/${session_id}/.process/explorations-manifest.json`;
+if (file_exists(manifestPath)) {
+  const manifest = JSON.parse(Read(manifestPath));
+
+  // Load full exploration data from each file
+  const explorationData = manifest.explorations.map(exp => ({
+    ...exp,
+    data: JSON.parse(Read(exp.path))
+  }));
+
+  // Build explorations array with summaries
+  const explorations = explorationData.map(exp => ({
+    angle: exp.angle,
+    file: exp.file,
+    path: exp.path,
+    index: exp.data._metadata?.exploration_index || exp.index,
+    summary: {
+      relevant_files_count: exp.data.relevant_files?.length || 0,
+      key_patterns: exp.data.patterns,
+      integration_points: exp.data.integration_points
+    }
+  }));
+
+  // SYNTHESIS (not aggregation): Transform raw data into prioritized insights
+  const aggregated_insights = {
+    // CRITICAL: Synthesize priority-ranked critical_files from multiple relevant_files lists
+    // - Deduplicate by path
+    // - Rank by: mention count across angles + individual relevance scores
+    // - Top 10-15 files only (focused, actionable)
+    critical_files: synthesizeCriticalFiles(explorationData.flatMap(e => e.data.relevant_files || [])),
+
+    // SYNTHESIS: Generate conflict indicators from pattern mismatches, constraint violations
+    conflict_indicators: synthesizeConflictIndicators(explorationData),
+
+    // Deduplicate clarification questions (merge similar questions)
+    clarification_needs: deduplicateQuestions(explorationData.flatMap(e => e.data.clarification_needs || [])),
+
+    // Preserve source attribution for traceability
+    constraints: explorationData.map(e => ({ constraint: e.data.constraints, source_angle: e.angle })).filter(c => c.constraint),
+
+    // Deduplicate patterns across angles (merge identical patterns)
+    all_patterns: deduplicatePatterns(explorationData.map(e => ({ patterns: e.data.patterns, source_angle: e.angle }))),
+
+    // Deduplicate integration points (merge by file:line location)
+    all_integration_points: deduplicateIntegrationPoints(explorationData.map(e => ({ points: e.data.integration_points, source_angle: e.angle })))
+  };
+
+  // Store for Phase 3 packaging
+  exploration_results = {
+    manifest_path: manifestPath, exploration_count: manifest.exploration_count,
+    complexity: manifest.complexity, angles: manifest.angles_explored,
+    explorations, aggregated_insights
+  };
+}
+
+// Synthesis helper functions (conceptual)
+function synthesizeCriticalFiles(allRelevantFiles) {
+  // 1. Group by path
+  // 2. Count mentions across angles
+  // 3. Average relevance scores
+  // 4. Rank by: (mention_count * 0.6) + (avg_relevance * 0.4)
+  // 5. Return top 10-15 with mentioned_by_angles attribution
+}
+
+function synthesizeConflictIndicators(explorationData) {
+  // 1. Detect pattern mismatches across angles
+  // 2. Identify constraint violations
+  // 3. Flag files mentioned with conflicting integration approaches
+  // 4. Assign severity: critical/high/medium/low
+}
+```
 
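The helpers in the hunk above are left as comment-only stubs; a minimal sketch of `synthesizeCriticalFiles` under the stated weighting ((mention_count * 0.6) + (avg_relevance * 0.4), top 15) could look like this. The input item shape `{path, relevance, angle}` is an assumption about the exploration JSON:

```javascript
// Sketch: dedupe relevant_files by path, rank by weighted mention count and
// average relevance, keep the top `limit` entries with angle attribution.
function synthesizeCriticalFiles(allRelevantFiles, limit = 15) {
  const byPath = new Map();
  for (const f of allRelevantFiles) {
    const entry = byPath.get(f.path) ?? { path: f.path, mentions: 0, scores: [], angles: new Set() };
    entry.mentions += 1;
    entry.scores.push(f.relevance ?? 0);
    if (f.angle) entry.angles.add(f.angle);
    byPath.set(f.path, entry);
  }
  return [...byPath.values()]
    .map(e => {
      const avg = e.scores.reduce((a, b) => a + b, 0) / e.scores.length;
      return {
        path: e.path,
        relevance: avg,
        mentioned_by_angles: [...e.angles],
        rank_score: e.mentions * 0.6 + avg * 0.4 // weighting from the comment above
      };
    })
    .sort((a, b) => b.rank_score - a.rank_score)
    .slice(0, limit);
}
```

A file mentioned by two angles outranks a single-angle file with a higher individual score, which matches the "mention count first" intent of the weighting.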
 #### Track 1: Reference Documentation
 
 Extract from Phase 0 loaded docs:
@@ -134,18 +211,18 @@ mcp__exa__web_search_exa({
 
 **Layer 1: File Pattern Discovery**
 ```javascript
-// Primary: Code-Index MCP
+// Primary: CodexLens MCP
-const files = mcp__code-index__find_files("*{keyword}*")
+const files = mcp__ccw-tools__codex_lens({ action: "search_files", query: "*{keyword}*" })
 // Fallback: find . -iname "*{keyword}*" -type f
 ```
 
 **Layer 2: Content Search**
 ```javascript
-// Primary: Code-Index MCP
+// Primary: CodexLens MCP
-mcp__code-index__search_code_advanced({
+mcp__ccw-tools__codex_lens({
-  pattern: "{keyword}",
+  action: "search",
-  file_pattern: "*.ts",
+  query: "{keyword}",
-  output_mode: "files_with_matches"
+  path: "."
 })
 // Fallback: rg "{keyword}" -t ts --files-with-matches
 ```
@@ -153,11 +230,10 @@ mcp__code-index__search_code_advanced({
 **Layer 3: Semantic Patterns**
 ```javascript
 // Find definitions (class, interface, function)
-mcp__code-index__search_code_advanced({
+mcp__ccw-tools__codex_lens({
-  pattern: "^(export )?(class|interface|type|function) .*{keyword}",
+  action: "search",
-  regex: true,
+  query: "^(export )?(class|interface|type|function) .*{keyword}",
-  output_mode: "content",
+  path: "."
-  context_lines: 2
 })
 ```
 
@@ -165,21 +241,22 @@ mcp__code-index__search_code_advanced({
 ```javascript
 // Get file summaries for imports/exports
 for (const file of discovered_files) {
-  const summary = mcp__code-index__get_file_summary(file)
+  const summary = mcp__ccw-tools__codex_lens({ action: "symbol", file: file })
-  // summary: {imports, functions, classes, line_count}
+  // summary: {symbols: [{name, type, line}]}
 }
 ```
 
 **Layer 5: Config & Tests**
 ```javascript
 // Config files
-mcp__code-index__find_files("*.config.*")
+mcp__ccw-tools__codex_lens({ action: "search_files", query: "*.config.*" })
-mcp__code-index__find_files("package.json")
+mcp__ccw-tools__codex_lens({ action: "search_files", query: "package.json" })
 
 // Tests
-mcp__code-index__search_code_advanced({
+mcp__ccw-tools__codex_lens({
-  pattern: "(describe|it|test).*{keyword}",
+  action: "search",
-  file_pattern: "*.{test,spec}.*"
+  query: "(describe|it|test).*{keyword}",
+  path: "."
 })
 ```
 
@@ -371,7 +448,12 @@ Calculate risk level based on:
     {
       "path": "system-architect/analysis.md",
       "type": "primary",
-      "content": "# System Architecture Analysis\n\n## Overview\n..."
+      "content": "# System Architecture Analysis\n\n## Overview\n@analysis-architecture.md\n@analysis-recommendations.md"
+    },
+    {
+      "path": "system-architect/analysis-architecture.md",
+      "type": "supplementary",
+      "content": "# Architecture Assessment\n\n..."
     }
   ]
 }
@@ -393,33 +475,40 @@ Calculate risk level based on:
   },
   "affected_modules": ["auth", "user-model", "middleware"],
   "mitigation_strategy": "Incremental refactoring with backward compatibility"
+  },
+  "exploration_results": {
+    "manifest_path": ".workflow/active/{session}/.process/explorations-manifest.json",
+    "exploration_count": 3,
+    "complexity": "Medium",
+    "angles": ["architecture", "dependencies", "testing"],
+    "explorations": [
+      {
+        "angle": "architecture",
+        "file": "exploration-architecture.json",
+        "path": ".workflow/active/{session}/.process/exploration-architecture.json",
+        "index": 1,
+        "summary": {
+          "relevant_files_count": 5,
+          "key_patterns": "Service layer with DI",
+          "integration_points": "Container.registerService:45-60"
+        }
+      }
+    ],
+    "aggregated_insights": {
+      "critical_files": [{"path": "src/auth/AuthService.ts", "relevance": 0.95, "mentioned_by_angles": ["architecture"]}],
+      "conflict_indicators": [{"type": "pattern_mismatch", "description": "...", "source_angle": "architecture", "severity": "medium"}],
+      "clarification_needs": [{"question": "...", "context": "...", "options": [], "source_angle": "architecture"}],
+      "constraints": [{"constraint": "Must follow existing DI pattern", "source_angle": "architecture"}],
+      "all_patterns": [{"patterns": "Service layer with DI", "source_angle": "architecture"}],
+      "all_integration_points": [{"points": "Container.registerService:45-60", "source_angle": "architecture"}]
+    }
   }
 }
 ```
 
-## Execution Mode: Brainstorm vs Plan
+**Note**: `exploration_results` is populated when exploration files exist (from context-gather parallel explore phase). If no explorations, this field is omitted or empty.
 
-### Brainstorm Mode (Lightweight)
-**Purpose**: Provide high-level context for generating brainstorming questions
-**Execution**: Phase 1-2 only (skip deep analysis)
-**Output**:
-- Lightweight context-package with:
-  - Project structure overview
-  - Tech stack identification
-  - High-level existing module names
-  - Basic conflict risk (file count only)
-- Skip: Detailed dependency graphs, deep code analysis, web research
 
-### Plan Mode (Comprehensive)
-**Purpose**: Detailed implementation planning with conflict detection
-**Execution**: Full Phase 1-3 (complete discovery + analysis)
-**Output**:
-- Comprehensive context-package with:
-  - Detailed dependency graphs
-  - Deep code structure analysis
-  - Conflict detection with mitigation strategies
-  - Web research for unfamiliar tech
-- Include: All discovery tracks, relevance scoring, 3-source synthesis
 
 ## Quality Validation
 
@@ -470,14 +559,14 @@ Output: .workflow/session/{session}/.process/context-package.json
 - Expose sensitive data (credentials, keys)
 - Exceed file limits (50 total)
 - Include binaries/generated files
-- Use ripgrep if code-index available
+- Use ripgrep if CodexLens available
 
 **ALWAYS**:
-- Initialize code-index in Phase 0
+- Initialize CodexLens in Phase 0
 - Execute get_modules_by_depth.sh
 - Load CLAUDE.md/README.md (unless in memory)
 - Execute all 3 discovery tracks
-- Use code-index MCP as primary
+- Use CodexLens MCP as primary
 - Fallback to ripgrep only when needed
 - Use Exa for unfamiliar APIs
 - Apply multi-factor scoring
@@ -61,9 +61,9 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
 
 **Step 2** (CLI execution):
 - Agent substitutes [target_folders] into command
-- Agent executes CLI command via Bash tool:
+- Agent executes CLI command via CCW:
 ```bash
-bash(cd src/modules && gemini --approval-mode yolo -p "
+ccw cli -p "
 PURPOSE: Generate module documentation
 TASK: Create API.md and README.md for each module
 MODE: write
@@ -71,7 +71,7 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
 ./src/modules/api|code|code:3|dirs:0
 EXPECTED: Documentation files in .workflow/docs/my_project/src/modules/
 RULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt) | Mirror source structure
-")
+" --tool gemini --mode write --cd src/modules
 ```
 
 4. **CLI Execution** (Gemini CLI):
@@ -216,7 +216,7 @@ Before completion, verify:
 {
   "step": "analyze_module_structure",
   "action": "Deep analysis of module structure and API",
-  "command": "bash(cd src/auth && gemini \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\")",
+  "command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\" --tool gemini --mode analysis --cd src/auth",
   "output_to": "module_analysis",
   "on_error": "fail"
 }
@@ -8,7 +8,7 @@ You are a documentation update coordinator for complex projects. Orchestrate par
 
 ## Core Mission
 
-Execute depth-parallel updates for all modules using `~/.claude/scripts/update_module_claude.sh`. **Every module path must be processed**.
+Execute depth-parallel updates for all modules using `ccw tool exec update_module_claude`. **Every module path must be processed**.
 
 ## Input Context
 
@@ -42,12 +42,12 @@ TodoWrite([
 # 3. Launch parallel jobs (max 4)
 
 # Depth 5 example (Layer 3 - use multi-layer):
-~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/analysis" "gemini" &
+ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/analysis","tool":"gemini"}' &
-~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/development" "gemini" &
+ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/development","tool":"gemini"}' &
 
 # Depth 1 example (Layer 2 - use single-layer):
-~/.claude/scripts/update_module_claude.sh "single-layer" "./src/auth" "gemini" &
+ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/auth","tool":"gemini"}' &
-~/.claude/scripts/update_module_claude.sh "single-layer" "./src/api" "gemini" &
+ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/api","tool":"gemini"}' &
 # ... up to 4 concurrent jobs
 
 # 4. Wait for all depth jobs to complete
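The "launch up to 4 parallel jobs, then wait" pattern in the hunk above amounts to batching the module list; a minimal sketch (helper name and paths illustrative; actual execution would shell out to `ccw` with `&`/`wait`):

```javascript
// Sketch: split module jobs into batches of at most 4, mirroring the
// max-4 concurrency rule above. Each batch would be launched in the
// background and awaited before the next batch starts (not shown here).
function toBatches(jobs, maxConcurrent = 4) {
  const batches = [];
  for (let i = 0; i < jobs.length; i += maxConcurrent) {
    batches.push(jobs.slice(i, i + maxConcurrent));
  }
  return batches;
}

// Hypothetical module paths:
const modulePaths = ["./src/auth", "./src/api", "./src/db", "./src/ui", "./src/cli"];
console.log(toBatches(modulePaths, 4).length);
// 2
```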
@@ -36,10 +36,10 @@ You are a test context discovery specialist focused on gathering test coverage i
 **Use**: Phase 1 source context loading
 
 ### 2. Test Coverage Discovery
-**Primary (Code-Index MCP)**:
+**Primary (CCW CodexLens MCP)**:
-- `mcp__code-index__find_files(pattern)` - Find test files (*.test.*, *.spec.*)
+- `mcp__ccw-tools__codex_lens(action="search_files", query="*.test.*")` - Find test files
-- `mcp__code-index__search_code_advanced()` - Search test patterns
+- `mcp__ccw-tools__codex_lens(action="search", query="pattern")` - Search test patterns
-- `mcp__code-index__get_file_summary()` - Analyze test structure
+- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Analyze test structure
 
 **Fallback (CLI)**:
 - `rg` (ripgrep) - Fast test pattern search
@@ -120,9 +120,10 @@ for (const summary_path of summaries) {
 
 **2.1 Existing Test Discovery**:
 ```javascript
-// Method 1: Code-Index MCP (preferred)
+// Method 1: CodexLens MCP (preferred)
-const test_files = mcp__code-index__find_files({
+const test_files = mcp__ccw-tools__codex_lens({
-  patterns: ["*.test.*", "*.spec.*", "*test_*.py", "*_test.go"]
+  action: "search_files",
+  query: "*.test.* OR *.spec.* OR test_*.py OR *_test.go"
 });
 
 // Method 2: Fallback CLI
@@ -397,23 +398,3 @@ function detect_framework_from_config() {
 - ✅ All missing tests catalogued with priority
 - ✅ Execution time < 30 seconds (< 60s for large codebases)
 
-## Integration Points
-
-### Called By
-- `/workflow:tools:test-context-gather` - Orchestrator command
-
-### Calls
-- Code-Index MCP tools (preferred)
-- ripgrep/find (fallback)
-- Bash file operations
-
-### Followed By
-- `/workflow:tools:test-concept-enhanced` - Test generation analysis
-
-## Notes
-
-- **Detection-first**: Always check for existing test-context-package before analysis
-- **Code-Index priority**: Use MCP tools when available, fallback to CLI
-- **Framework agnostic**: Supports Jest, Mocha, pytest, RSpec, etc.
-- **Coverage gap focus**: Primary goal is identifying missing tests
-- **Source context critical**: Implementation summaries guide test generation
@@ -83,17 +83,18 @@ When task JSON contains implementation_approach array:
 - L1 (Unit): `*.test.*`, `*.spec.*` in `__tests__/`, `tests/unit/`
 - L2 (Integration): `tests/integration/`, `*.integration.test.*`
 - L3 (E2E): `tests/e2e/`, `*.e2e.test.*`, `cypress/`, `playwright/`
-- **context-package.json** (CCW Workflow): Extract artifact paths using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
+- **context-package.json** (CCW Workflow): Use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
 - Identify test commands from project configuration
 
 ```bash
 # Detect test framework and multi-layered commands
 if [ -f "package.json" ]; then
-  # Extract layer-specific test commands
+  # Extract layer-specific test commands using Read tool or jq
-  LINT_CMD=$(cat package.json | jq -r '.scripts.lint // "eslint ."')
+  PKG_JSON=$(cat package.json)
-  UNIT_CMD=$(cat package.json | jq -r '.scripts["test:unit"] // .scripts.test')
+  LINT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts.lint // "eslint ."')
-  INTEGRATION_CMD=$(cat package.json | jq -r '.scripts["test:integration"] // ""')
+  UNIT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:unit"] // .scripts.test')
-  E2E_CMD=$(cat package.json | jq -r '.scripts["test:e2e"] // ""')
+  INTEGRATION_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:integration"] // ""')
+  E2E_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:e2e"] // ""')
 elif [ -f "pytest.ini" ] || [ -f "setup.py" ]; then
   LINT_CMD="ruff check . || flake8 ."
   UNIT_CMD="pytest tests/unit/"
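The jq `//` fallback chain above maps directly onto nullish coalescing once package.json is parsed; a hypothetical sketch (helper and script names illustrative):

```javascript
// Sketch: resolve layered test commands from a parsed package.json,
// mirroring the jq `//` fallbacks in the bash block above.
function resolveTestCommands(pkg) {
  const s = pkg.scripts ?? {};
  return {
    lint: s.lint ?? "eslint .",                 // .scripts.lint // "eslint ."
    unit: s["test:unit"] ?? s.test ?? "",       // .scripts["test:unit"] // .scripts.test
    integration: s["test:integration"] ?? "",   // defaults to empty when absent
    e2e: s["test:e2e"] ?? ""
  };
}

const cmds = resolveTestCommands({ scripts: { test: "jest", "test:e2e": "playwright test" } });
console.log(cmds.unit);
// jest
```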
@@ -142,9 +143,9 @@ run_test_layer "L1-unit" "$UNIT_CMD"
 
 ### 3. Failure Diagnosis & Fixing Loop
 
-**Execution Modes**:
+**Execution Modes** (determined by `flow_control.implementation_approach`):
 
-**A. Manual Mode (Default, meta.use_codex=false)**:
+**A. Agent Mode (Default, no `command` field in steps)**:
 ```
 WHILE tests are failing AND iterations < max_iterations:
   1. Use Gemini to diagnose failure (bug-fix template)
@@ -155,17 +156,17 @@ WHILE tests are failing AND iterations < max_iterations:
 END WHILE
 ```
 
-**B. Codex Mode (meta.use_codex=true)**:
+**B. CLI Mode (`command` field present in implementation_approach steps)**:
 ```
 WHILE tests are failing AND iterations < max_iterations:
   1. Use Gemini to diagnose failure (bug-fix template)
-  2. Use Codex to apply fixes automatically with resume mechanism
+  2. Execute `command` field (e.g., Codex) to apply fixes automatically
   3. Re-run test suite
   4. Verify fix doesn't break other tests
 END WHILE
 ```
 
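The WHILE pseudocode above can be sketched with the diagnose/fix/test steps injected as callbacks (all names illustrative; the real loop shells out to Gemini and the step's `command`):

```javascript
// Sketch of the diagnose-fix-verify loop: re-run tests after each fix,
// stop on success or when max_iterations is reached.
function runFixLoop({ runTests, diagnose, applyFix, maxIterations = 5 }) {
  let result = runTests();
  let iterations = 0;
  while (!result.passed && iterations < maxIterations) {
    const diagnosis = diagnose(result); // e.g. Gemini bug-fix template
    applyFix(diagnosis);                // e.g. the step's `command` (Codex)
    result = runTests();                // re-run the suite
    iterations++;
  }
  return { passed: result.passed, iterations };
}

// Toy usage: a "suite" that passes after two fixes.
let failures = 2;
const outcome = runFixLoop({
  runTests: () => ({ passed: failures === 0 }),
  diagnose: () => "off-by-one",
  applyFix: () => { failures--; },
  maxIterations: 5
});
console.log(outcome);
// { passed: true, iterations: 2 }
```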
-**Codex Resume in Test-Fix Cycle** (when `meta.use_codex=true`):
+**Codex Resume in Test-Fix Cycle** (when step has `command` with Codex):
 - First iteration: Start new Codex session with full context
 - Subsequent iterations: Use `resume --last` to maintain fix history and apply consistent strategies
 
@@ -331,6 +332,8 @@ When generating test results for orchestrator (saved to `.process/test-results.j
 - Break existing passing tests
 - Skip final verification
 - Leave tests failing - must achieve 100% pass rate
+- Use `run_in_background` for Bash() commands - always set `run_in_background=false` to ensure tests run in foreground for proper output capture
+- Use complex bash pipe chains (`cmd | grep | awk | sed`) - prefer dedicated tools (Read, Grep, Glob) for file operations and content extraction; simple single-pipe commands are acceptable when necessary
 
 ## Quality Certification
Some files were not shown because too many files have changed in this diff.