Mirror of https://github.com/catlog22/Claude-Code-Workflow.git (synced 2026-02-08 02:14:08 +08:00)
Compare commits (177 commits):

faa86eded0, 44fa6e0a42, be9a1c76d4, fcc811d6a1, 906404f075, 1267c8d0f4, eb1093128e, 4ddeb6551e,
7252c2ff3d, 8dee45c0a3, 99ead8b165, 0c7f13d9a4, ed32b95de1, beacc2e26b, 389621c954, 2ba7756d13,
02f77c0a51, 5aa8d37cd0, a7b8ffc716, b0bc53646e, 5f31c9ad7e, 818d9f3f5d, 1c3c070db4, 91e4792aa9,
813bfa8f97, 8b29f6bb7c, 27273405f7, f4299457fb, 06983a35ad, a80953527b, 0f469e225b, 5dca69fbec,
ac626e5895, cb78758839, 844a2412b2, 650d877430, f459061ad5, a6f9701679, 26a325efff, 0a96ee16a8,
43c962b48b, 724545ebd6, a9a2004d4a, 5b14c8a832, e2c5a514cb, 296761a34e, 1d3436d51b, 60bb11c315,
72fe6195af, 04fb3b7ee3, 942fca7ad8, 39df995e37, efaa8b6620, 35bd0aa8f6, 0f9adc59f9, c43a72ef46,
7a61119c55, d620eac621, 1dbffbee2d, c67817f46e, d654419423, 1e2240dbe9, b3778ef48c, a16cf5c8d3,
d82bf5a823, 132eec900c, 09114f59c8, 72099ae951, d66064024c, 8c93848303, 57a86ab36f, e75cdf0b61,
79b13f363b, 87d5a1292d, 3e6ed5e4c3, 96dd9bef5f, 697a646fc9, cde17bd668, 98b72f086d, 196b805499,
beb839d8e2, 2aa39bd355, a62d30acb9, 8bc5b40957, 2a11d5f190, 964bbbf5bc, 75ad427862, edda988790,
a8961761ec, 2b80a02d51, 969242dbbc, ef09914f94, 2f4ecf9ae3, b000359e69, 84b428b52f, 2443c64c61,
f7593387a0, 64674803c4, 1252f4f7c6, c862ac225b, 5375c991ba, 7b692ce415, 2cf8efec74, 34a9a23d5b,
cf6a0f1bc0, 247db0d041, fec5d9a605, 97fea9f19e, 6717e2a59b, 84c180ab66, e70f086b7e, 6359a364cb,
8f2126677f, c3e87db5be, a6561a7d01, 4bd732c4db, 152303f1b8, 2d66c1b092, 93d8e79b71, 1e69539837,
cd206f275e, d99448ffd5, 217f30044a, 7e60e3e198, 783ee4b570, 7725e733b0, 2e8fe1e77a, 32c9595818,
bb427dc639, 97b2247896, ed7dfad0a5, 19acaea0f9, 481a716c09, 07775cda30, 3acf6fcba8, f798dd4172,
aabc6294f4, adbb2070bb, 3c9cf3a677, ff808ed539, 99a5c75b13, 7453987cfe, 4bb4bdc124, 3915f7cb35,
657af628fd, b649360cd6, 20aa0f3a0b, c8dd1adc69, d53e7e18db, 0207677857, 72f27fb2f8, be129f5821,
b1bb74af0d, a7a654805c, c0c894ced1, 7517f4f8ec, 0b45ff7345, 0416b23186, 948cf3fcd7, 4272ca9ebd,
73fed4893b, f09c6e2a7a, 65a204a563, ffbc440a7e, 3c28c61bea, b0b99a4217, 4f533f6fd5, 530c348e95,
a98b26b111, 9f7e33cbde, a25464ce28, 0a3f2a5b03, 1929b7f72d, b8889d99c9, a79a3221ce, 67c18d1b03,
2301f263cd
@@ -16,107 +16,86 @@ description: |
|
|||||||
color: yellow
|
color: yellow
|
||||||
---
|
---
|
||||||
|
|
||||||
You are a pure execution agent specialized in creating actionable implementation plans. You receive requirements and control flags from the command layer and execute planning tasks without complex decision-making logic.
|
## Overview
|
||||||
|
|
||||||
## Execution Process
|
**Agent Role**: Pure execution agent that transforms user requirements and brainstorming artifacts into structured, executable implementation plans with quantified deliverables and measurable acceptance criteria. Receives requirements and control flags from the command layer and executes planning tasks without complex decision-making logic.
|
||||||
|
|
||||||
### Input Processing
|
**Core Capabilities**:
|
||||||
**What you receive:**
|
- Load and synthesize context from multiple sources (session metadata, context packages, brainstorming artifacts)
|
||||||
- **Execution Context Package**: Structured context from command layer
|
- Generate task JSON files with 6-field schema and artifact integration
|
||||||
|
- Create IMPL_PLAN.md and TODO_LIST.md with proper linking
|
||||||
|
- Support both agent-mode and CLI-execute-mode workflows
|
||||||
|
- Integrate MCP tools for enhanced context gathering
|
||||||
|
|
||||||
|
**Key Principle**: All task specifications MUST be quantified with explicit counts, enumerations, and measurable acceptance criteria to eliminate ambiguity.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 1. Input & Execution
|
||||||
|
|
||||||
|
### 1.1 Input Processing
|
||||||
|
|
||||||
|
**What you receive from command layer:**
|
||||||
|
- **Session Paths**: File paths to load content autonomously
|
||||||
|
- `session_metadata_path`: Session configuration and user input
|
||||||
|
- `context_package_path`: Context package with brainstorming artifacts catalog
|
||||||
|
- **Metadata**: Simple values
|
||||||
- `session_id`: Workflow session identifier (WFS-[topic])
|
- `session_id`: Workflow session identifier (WFS-[topic])
|
||||||
- `session_metadata`: Session configuration and state
|
- `mcp_capabilities`: Available MCP tools (exa_code, exa_web, code_index)
|
||||||
- `analysis_results`: Analysis recommendations and task breakdown
|
|
||||||
- `artifacts_inventory`: Detected brainstorming outputs (role analyses, guidance-specification)
|
|
||||||
- `context_package`: Project context and assets
|
|
||||||
- `mcp_capabilities`: Available MCP tools (exa-code, exa-web)
|
|
||||||
- `mcp_analysis`: Optional pre-executed MCP analysis results
|
|
||||||
|
|
||||||
**Legacy Support** (backward compatibility):
|
**Legacy Support** (backward compatibility):
|
||||||
- **pre_analysis configuration**: Multi-step array format with action, template, method fields
|
- **pre_analysis configuration**: Multi-step array format with action, template, method fields
|
||||||
- **Control flags**: DEEP_ANALYSIS_REQUIRED, etc.
|
- **Control flags**: DEEP_ANALYSIS_REQUIRED, etc.
|
||||||
- **Task requirements**: Direct task description
|
- **Task requirements**: Direct task description
|
||||||
|
|
||||||
### Execution Flow (Two-Phase)
|
### 1.2 Execution Flow
|
||||||
|
|
||||||
|
#### Phase 1: Context Loading & Assembly
|
||||||
|
|
||||||
|
**Step-by-step execution**:
|
||||||
|
|
||||||
```
|
```
|
||||||
Phase 1: Context Validation & Enhancement (Discovery Results Provided)
|
1. Load session metadata → Extract user input
|
||||||
1. Receive and validate execution context package
|
- User description: Original task/feature requirements
|
||||||
2. Check memory-first rule compliance:
|
- Project scope: User-specified boundaries and goals
|
||||||
→ session_metadata: Use provided content (from memory or file)
|
- Technical constraints: User-provided technical requirements
|
||||||
→ analysis_results: Use provided content (from memory or file)
|
|
||||||
→ artifacts_inventory: Use provided list (from memory or scan)
|
2. Load context package → Extract structured context
|
||||||
→ mcp_analysis: Use provided results (optional)
|
Commands: Read({{context_package_path}})
|
||||||
3. Optional MCP enhancement (if not pre-executed):
|
Output: Complete context package object
|
||||||
|
|
||||||
|
3. Check existing plan (if resuming)
|
||||||
|
- If IMPL_PLAN.md exists: Read for continuity
|
||||||
|
- If task JSONs exist: Load for context
|
||||||
|
|
||||||
|
4. Load brainstorming artifacts (in priority order)
|
||||||
|
a. guidance-specification.md (Highest Priority)
|
||||||
|
→ Overall design framework and architectural decisions
|
||||||
|
b. Role analyses (progressive loading: load incrementally by priority)
|
||||||
|
→ Load role analysis files one at a time as needed
|
||||||
|
→ Reason: Each analysis.md is long; progressive loading prevents token overflow
|
||||||
|
c. Synthesis output (if exists)
|
||||||
|
→ Integrated view with clarifications
|
||||||
|
d. Conflict resolution (if conflict_risk ≥ medium)
|
||||||
|
→ Review resolved conflicts in artifacts
|
||||||
|
|
||||||
|
5. Optional MCP enhancement
|
||||||
→ mcp__exa__get_code_context_exa() for best practices
|
→ mcp__exa__get_code_context_exa() for best practices
|
||||||
→ mcp__exa__web_search_exa() for external research
|
→ mcp__exa__web_search_exa() for external research
|
||||||
4. Assess task complexity (simple/medium/complex) from analysis
|
|
||||||
|
|
||||||
Phase 2: Document Generation (Autonomous Output)
|
6. Assess task complexity (simple/medium/complex)
|
||||||
1. Extract task definitions from analysis_results
|
|
||||||
2. Generate task JSON files with 5-field schema + artifacts
|
|
||||||
3. Create IMPL_PLAN.md with context analysis and artifact references
|
|
||||||
4. Generate TODO_LIST.md with proper structure (▸, [ ], [x])
|
|
||||||
5. Update session state for execution readiness
|
|
||||||
```
|
```
|
||||||
|
|
||||||
### Context Package Usage
|
**MCP Integration** (when `mcp_capabilities` available):
|
||||||
|
|
||||||
**Standard Context Structure**:
|
|
||||||
```javascript
|
```javascript
|
||||||
{
|
// Exa Code Context (mcp_capabilities.exa_code = true)
|
||||||
"session_id": "WFS-auth-system",
|
|
||||||
"session_metadata": {
|
|
||||||
"project": "OAuth2 authentication",
|
|
||||||
"type": "medium",
|
|
||||||
"current_phase": "PLAN"
|
|
||||||
},
|
|
||||||
"analysis_results": {
|
|
||||||
"tasks": [
|
|
||||||
{"id": "IMPL-1", "title": "...", "requirements": [...]}
|
|
||||||
],
|
|
||||||
"complexity": "medium",
|
|
||||||
"dependencies": [...]
|
|
||||||
},
|
|
||||||
"artifacts_inventory": {
|
|
||||||
"synthesis_specification": ".workflow/WFS-auth/.brainstorming/role analysis documents",
|
|
||||||
"topic_framework": ".workflow/WFS-auth/.brainstorming/guidance-specification.md",
|
|
||||||
"role_analyses": [
|
|
||||||
".workflow/WFS-auth/.brainstorming/system-architect/analysis.md",
|
|
||||||
".workflow/WFS-auth/.brainstorming/subject-matter-expert/analysis.md"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"context_package": {
|
|
||||||
"assets": [...],
|
|
||||||
"focus_areas": [...]
|
|
||||||
},
|
|
||||||
"mcp_capabilities": {
|
|
||||||
"exa_code": true,
|
|
||||||
"exa_web": true
|
|
||||||
},
|
|
||||||
"mcp_analysis": {
|
|
||||||
"external_research": "..."
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Using Context in Task Generation**:
|
|
||||||
1. **Extract Tasks**: Parse `analysis_results.tasks` array
|
|
||||||
2. **Map Artifacts**: Use `artifacts_inventory` to add artifact references to task.context
|
|
||||||
3. **Assess Complexity**: Use `analysis_results.complexity` for document structure decision
|
|
||||||
4. **Session Paths**: Use `session_id` to construct output paths (.workflow/active/{session_id}/)
|
|
||||||
|
|
||||||
### MCP Integration Guidelines
|
|
||||||
|
|
||||||
**Exa Code Context** (`mcp_capabilities.exa_code = true`):
|
|
||||||
```javascript
|
|
||||||
// Get best practices and examples
|
|
||||||
mcp__exa__get_code_context_exa(
|
mcp__exa__get_code_context_exa(
|
||||||
query="TypeScript OAuth2 JWT authentication patterns",
|
query="TypeScript OAuth2 JWT authentication patterns",
|
||||||
tokensNum="dynamic"
|
tokensNum="dynamic"
|
||||||
)
|
)
|
||||||
```
|
|
||||||
|
|
||||||
**Integration in flow_control.pre_analysis**:
|
// Integration in flow_control.pre_analysis
|
||||||
```json
|
|
||||||
{
|
{
|
||||||
"step": "local_codebase_exploration",
|
"step": "local_codebase_exploration",
|
||||||
"action": "Explore codebase structure",
|
"action": "Explore codebase structure",
|
||||||
@@ -128,28 +107,158 @@ mcp__exa__get_code_context_exa(
|
|||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
## Core Functions
|
**Context Package Structure** (fields defined by context-search-agent):
|
||||||
|
|
||||||
### 1. Stage Design
|
**Always Present**:
|
||||||
Break work into 3-5 logical implementation stages with:
|
- `metadata.task_description`: User's original task description
|
||||||
- Specific, measurable deliverables
|
- `metadata.keywords`: Extracted technical keywords
|
||||||
- Clear success criteria and test cases
|
- `metadata.complexity`: Task complexity level (simple/medium/complex)
|
||||||
- Dependencies on previous stages
|
- `metadata.session_id`: Workflow session identifier
|
||||||
- Estimated complexity and time requirements
|
- `project_context.architecture_patterns`: Architecture patterns (MVC, Service layer, etc.)
|
||||||
|
- `project_context.tech_stack`: Language, frameworks, libraries
|
||||||
|
- `project_context.coding_conventions`: Naming, error handling, async patterns
|
||||||
|
- `assets.source_code[]`: Relevant existing files with paths and metadata
|
||||||
|
- `assets.documentation[]`: Reference docs (CLAUDE.md, API docs)
|
||||||
|
- `assets.config[]`: Configuration files (package.json, .env.example)
|
||||||
|
- `assets.tests[]`: Test files
|
||||||
|
- `dependencies.internal[]`: Module dependencies
|
||||||
|
- `dependencies.external[]`: Package dependencies
|
||||||
|
- `conflict_detection.risk_level`: Conflict risk (low/medium/high)
|
||||||
|
|
||||||
### 2. Task JSON Generation (5-Field Schema + Artifacts)
|
**Conditionally Present** (check existence before loading):
|
||||||
Generate individual `.task/IMPL-*.json` files with:
|
- `brainstorm_artifacts.guidance_specification`: Overall design framework (if exists)
|
||||||
|
- Check: `brainstorm_artifacts?.guidance_specification?.exists === true`
|
||||||
|
- Content: Use `content` field if present, else load from `path`
|
||||||
|
- `brainstorm_artifacts.role_analyses[]`: Role-specific analyses (if array not empty)
|
||||||
|
- Each role: `role_analyses[i].files[j]` has `path` and `content`
|
||||||
|
- `brainstorm_artifacts.synthesis_output`: Synthesis results (if exists)
|
||||||
|
- Check: `brainstorm_artifacts?.synthesis_output?.exists === true`
|
||||||
|
- Content: Use `content` field if present, else load from `path`
|
||||||
|
- `conflict_detection.affected_modules[]`: Modules with potential conflicts (if risk ≥ medium)
|
||||||
|
|
||||||
|
**Field Access Examples**:
|
||||||
|
```javascript
|
||||||
|
// Always safe - direct field access
|
||||||
|
const techStack = contextPackage.project_context.tech_stack;
|
||||||
|
const riskLevel = contextPackage.conflict_detection.risk_level;
|
||||||
|
const existingCode = contextPackage.assets.source_code; // Array of files
|
||||||
|
|
||||||
|
// Conditional - use content if available, else load from path
|
||||||
|
if (contextPackage.brainstorm_artifacts?.guidance_specification?.exists) {
|
||||||
|
const spec = contextPackage.brainstorm_artifacts.guidance_specification;
|
||||||
|
const content = spec.content || Read(spec.path);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (contextPackage.brainstorm_artifacts?.role_analyses?.length > 0) {
|
||||||
|
// Progressive loading: load role analyses incrementally by priority
|
||||||
|
contextPackage.brainstorm_artifacts.role_analyses.forEach(role => {
|
||||||
|
role.files.forEach(file => {
|
||||||
|
const analysis = file.content || Read(file.path); // Load one at a time
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Phase 2: Document Generation
|
||||||
|
|
||||||
|
**Autonomous output generation**:
|
||||||
|
|
||||||
|
```
|
||||||
|
1. Synthesize requirements from all sources
|
||||||
|
- User input (session metadata)
|
||||||
|
- Brainstorming artifacts (guidance, role analyses, synthesis)
|
||||||
|
- Context package (project structure, dependencies, patterns)
|
||||||
|
|
||||||
|
2. Generate task JSON files
|
||||||
|
- Apply 6-field schema (id, title, status, meta, context, flow_control)
|
||||||
|
- Integrate artifacts catalog into context.artifacts array
|
||||||
|
- Add quantified requirements and measurable acceptance criteria
|
||||||
|
|
||||||
|
3. Create IMPL_PLAN.md
|
||||||
|
- Load template: Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)
|
||||||
|
- Follow template structure and validation checklist
|
||||||
|
- Populate all 8 sections with synthesized context
|
||||||
|
- Document CCW workflow phase progression
|
||||||
|
- Update quality gate status
|
||||||
|
|
||||||
|
4. Generate TODO_LIST.md
|
||||||
|
- Flat structure ([ ] for pending, [x] for completed)
|
||||||
|
- Link to task JSONs and summaries
|
||||||
|
|
||||||
|
5. Update session state for execution readiness
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 2. Output Specifications
|
||||||
|
|
||||||
|
### 2.1 Task JSON Schema (6-Field)
|
||||||
|
|
||||||
|
Generate individual `.task/IMPL-*.json` files with the following structure:
|
||||||
|
|
||||||
|
#### Top-Level Fields
|
||||||
|
|
||||||
**Required Fields**:
|
|
||||||
```json
|
```json
|
||||||
{
|
{
|
||||||
"id": "IMPL-N[.M]",
|
"id": "IMPL-N",
|
||||||
"title": "Descriptive task name",
|
"title": "Descriptive task name",
|
||||||
"status": "pending",
|
"status": "pending|active|completed|blocked",
|
||||||
|
"context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Field Descriptions**:
|
||||||
|
- `id`: Task identifier
|
||||||
|
- Single module format: `IMPL-N` (e.g., IMPL-001, IMPL-002)
|
||||||
|
- Multi-module format: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1, IMPL-C1)
|
||||||
|
- Prefix: A, B, C... (assigned by module detection order)
|
||||||
|
- Sequence: 1, 2, 3... (per-module increment)
|
||||||
|
- `title`: Descriptive task name summarizing the work
|
||||||
|
- `status`: Task state - `pending` (not started), `active` (in progress), `completed` (done), `blocked` (waiting on dependencies)
|
||||||
|
- `context_package_path`: Path to smart context package containing project structure, dependencies, and brainstorming artifacts catalog
|
||||||
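
For illustration, a minimal sketch (not part of the agent specification; the helper names are hypothetical) of how the ID formats described above could be generated and validated:

```javascript
// Illustrative helpers only — the spec defines the formats, not these functions.
function makeSingleModuleId(seq) {
  // Single-module format: IMPL-N, zero-padded as in the examples (IMPL-001, IMPL-002).
  return `IMPL-${String(seq).padStart(3, "0")}`;
}

function makeMultiModuleId(moduleIndex, seq) {
  // Multi-module format: IMPL-{prefix}{seq}; prefix A, B, C... by module detection order.
  const prefix = String.fromCharCode(65 + moduleIndex); // 0 -> A, 1 -> B
  return `IMPL-${prefix}${seq}`;
}

function isValidTaskId(id) {
  // Accepts both documented forms (subtask suffixes like IMPL-N.M are not covered here).
  return /^IMPL-(\d{1,3}|[A-Z]\d+)$/.test(id);
}

console.log(makeSingleModuleId(1));    // "IMPL-001"
console.log(makeMultiModuleId(1, 3));  // "IMPL-B3"
console.log(isValidTaskId("IMPL-A1")); // true
```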
|
|
||||||
|
#### Meta Object
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
"meta": {
|
"meta": {
|
||||||
"type": "feature|bugfix|refactor|test|docs",
|
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
|
||||||
"agent": "@code-developer"
|
"agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor",
|
||||||
},
|
"execution_group": "parallel-abc123|null",
|
||||||
|
"module": "frontend|backend|shared|null"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Field Descriptions**:
|
||||||
|
- `type`: Task category - `feature` (new functionality), `bugfix` (fix defects), `refactor` (restructure code), `test-gen` (generate tests), `test-fix` (fix failing tests), `docs` (documentation)
|
||||||
|
- `agent`: Assigned agent for execution
|
||||||
|
- `execution_group`: Parallelization group ID (tasks with same ID can run concurrently) or `null` for sequential tasks
|
||||||
|
- `module`: Module identifier for multi-module projects (e.g., `frontend`, `backend`, `shared`) or `null` for single-module
|
||||||
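
The `execution_group` rule above can be read as a simple scheduling contract. A hedged sketch, assuming tasks sharing a non-null group id may run concurrently while everything else runs in order (`runTask` is a stand-in for the real executor, not a documented API):

```javascript
// Assumption: same non-null execution_group -> run concurrently; null -> run alone.
async function runTasks(tasks, runTask) {
  const groups = new Map();
  for (const task of tasks) {
    const key = task.meta.execution_group ?? `sequential-${task.id}`; // null -> its own group
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(task);
  }
  for (const group of groups.values()) {
    await Promise.all(group.map(runTask)); // parallel inside a group, sequential across groups
  }
}
```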
|
|
||||||
|
**Test Task Extensions** (for type="test-gen" or type="test-fix"):
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"meta": {
|
||||||
|
"type": "test-gen|test-fix",
|
||||||
|
"agent": "@code-developer|@test-fix-agent",
|
||||||
|
"test_framework": "jest|vitest|pytest|junit|mocha",
|
||||||
|
"coverage_target": "80%"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Test-Specific Fields**:
|
||||||
|
- `test_framework`: Existing test framework from project (required for test tasks)
|
||||||
|
- `coverage_target`: Target code coverage percentage (optional)
|
||||||
|
|
||||||
|
**Note**: CLI tool usage for test-fix tasks is now controlled via `flow_control.implementation_approach` steps with `command` fields, not via `meta.use_codex`.
|
||||||
|
|
||||||
|
#### Context Object
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
"context": {
|
"context": {
|
||||||
"requirements": [
|
"requirements": [
|
||||||
"Implement 3 features: [authentication, authorization, session management]",
|
"Implement 3 features: [authentication, authorization, session management]",
|
||||||
@@ -163,164 +272,438 @@ Generate individual `.task/IMPL-*.json` files with:
|
|||||||
"Test coverage >=80%: verify by npm test -- --coverage | grep auth"
|
"Test coverage >=80%: verify by npm test -- --coverage | grep auth"
|
||||||
],
|
],
|
||||||
"depends_on": ["IMPL-N"],
|
"depends_on": ["IMPL-N"],
|
||||||
|
"inherited": {
|
||||||
|
"from": "IMPL-N",
|
||||||
|
"context": ["Authentication system design completed", "JWT strategy defined"]
|
||||||
|
},
|
||||||
|
"shared_context": {
|
||||||
|
"tech_stack": ["Node.js", "TypeScript", "Express"],
|
||||||
|
"auth_strategy": "JWT with refresh tokens",
|
||||||
|
"conventions": ["Follow existing auth patterns in src/auth/legacy/"]
|
||||||
|
},
|
||||||
"artifacts": [
|
"artifacts": [
|
||||||
{
|
{
|
||||||
"type": "synthesis_specification",
|
"type": "synthesis_specification|topic_framework|individual_role_analysis",
|
||||||
|
"source": "brainstorm_clarification|brainstorm_framework|brainstorm_roles",
|
||||||
"path": "{from artifacts_inventory}",
|
"path": "{from artifacts_inventory}",
|
||||||
"priority": "highest"
|
"priority": "highest|high|medium|low",
|
||||||
|
"usage": "Architecture decisions and API specifications",
|
||||||
|
"contains": "role_specific_requirements_and_design"
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
},
|
|
||||||
"flow_control": {
|
|
||||||
"pre_analysis": [
|
|
||||||
{
|
|
||||||
"step": "load_synthesis_specification",
|
|
||||||
"commands": ["bash(ls {path} 2>/dev/null)", "Read({path})"],
|
|
||||||
"output_to": "synthesis_specification",
|
|
||||||
"on_error": "skip_optional"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"step": "mcp_codebase_exploration",
|
|
||||||
"command": "mcp__code-index__find_files() && mcp__code-index__search_code_advanced()",
|
|
||||||
"output_to": "codebase_structure"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"implementation_approach": [
|
|
||||||
{
|
|
||||||
"step": 1,
|
|
||||||
"title": "Load and analyze role analyses",
|
|
||||||
"description": "Load 3 role analysis files and extract quantified requirements",
|
|
||||||
"modification_points": [
|
|
||||||
"Load 3 role analysis files: [system-architect/analysis.md, product-manager/analysis.md, ui-designer/analysis.md]",
|
|
||||||
"Extract 15 requirements from role analyses",
|
|
||||||
"Parse 8 architecture decisions from system-architect analysis"
|
|
||||||
],
|
|
||||||
"logic_flow": [
|
|
||||||
"Read 3 role analyses from artifacts inventory",
|
|
||||||
"Parse architecture decisions (8 total)",
|
|
||||||
"Extract implementation requirements (15 total)",
|
|
||||||
"Build consolidated requirements list"
|
|
||||||
],
|
|
||||||
"depends_on": [],
|
|
||||||
"output": "synthesis_requirements"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"step": 2,
|
|
||||||
"title": "Implement following specification",
|
|
||||||
"description": "Implement 3 features across 5 files following consolidated role analyses",
|
|
||||||
"modification_points": [
|
|
||||||
"Create 5 new files in src/auth/: [auth.service.ts (180 lines), auth.controller.ts (120 lines), auth.middleware.ts (60 lines), auth.types.ts (40 lines), auth.test.ts (200 lines)]",
|
|
||||||
"Modify 2 functions: [validateUser() in users.service.ts lines 45-60, hashPassword() in utils.ts lines 120-135]",
|
|
||||||
"Implement 3 core features: [JWT authentication, role-based authorization, session management]"
|
|
||||||
],
|
|
||||||
"logic_flow": [
|
|
||||||
"Apply 15 requirements from [synthesis_requirements]",
|
|
||||||
"Implement 3 features across 5 new files (600 total lines)",
|
|
||||||
"Modify 2 existing functions (30 lines total)",
|
|
||||||
"Write 25 test cases covering all features",
|
|
||||||
"Validate against 3 acceptance criteria"
|
|
||||||
],
|
|
||||||
"depends_on": [1],
|
|
||||||
"output": "implementation"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"target_files": [
|
|
||||||
"src/auth/auth.service.ts",
|
|
||||||
"src/auth/auth.controller.ts",
|
|
||||||
"src/auth/auth.middleware.ts",
|
|
||||||
"src/auth/auth.types.ts",
|
|
||||||
"tests/auth/auth.test.ts",
|
|
||||||
"src/users/users.service.ts:validateUser:45-60",
|
|
||||||
"src/utils/utils.ts:hashPassword:120-135"
|
|
||||||
]
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
**Artifact Mapping**:
|
**Field Descriptions**:
|
||||||
|
- `requirements`: **QUANTIFIED** implementation requirements (MUST include explicit counts and enumerated lists, e.g., "5 files: [list]")
|
||||||
|
- `focus_paths`: Target directories/files (concrete paths without wildcards)
|
||||||
|
- `acceptance`: **MEASURABLE** acceptance criteria (MUST include verification commands, e.g., "verify by ls ... | wc -l = N")
|
||||||
|
- `depends_on`: Prerequisite task IDs that must complete before this task starts
|
||||||
|
- `inherited`: Context, patterns, and dependencies passed from parent task
|
||||||
|
- `shared_context`: Tech stack, conventions, and architectural strategies for the task
|
||||||
|
- `artifacts`: Referenced brainstorming outputs with detailed metadata
|
||||||
|
|
||||||
|
**Artifact Mapping** (from context package):
|
||||||
- Use `artifacts_inventory` from context package
|
- Use `artifacts_inventory` from context package
|
||||||
- Highest priority: synthesis_specification
|
- **Priority levels**:
|
||||||
- Medium priority: topic_framework
|
- **Highest**: synthesis_specification (integrated view with clarifications)
|
||||||
- Low priority: role_analyses
|
- **High**: topic_framework (guidance-specification.md)
|
||||||
|
- **Medium**: individual_role_analysis (system-architect, subject-matter-expert, etc.)
|
||||||
|
- **Low**: supporting documentation
|
||||||
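
As a rough illustration of the priority mapping above, the sketch below (hypothetical helper, filling only the `type`, `path`, and `priority` fields) turns an `artifacts_inventory` object into `task.context.artifacts` entries:

```javascript
// Maps an artifacts_inventory object (shape as in the context example earlier)
// onto task.context.artifacts entries using the priority levels listed above.
const ARTIFACT_PRIORITY = {
  synthesis_specification: "highest",
  topic_framework: "high",
  individual_role_analysis: "medium",
};

function buildArtifactRefs(inventory) {
  const refs = [];
  if (inventory.synthesis_specification) {
    refs.push({ type: "synthesis_specification", path: inventory.synthesis_specification, priority: ARTIFACT_PRIORITY.synthesis_specification });
  }
  if (inventory.topic_framework) {
    refs.push({ type: "topic_framework", path: inventory.topic_framework, priority: ARTIFACT_PRIORITY.topic_framework });
  }
  for (const path of inventory.role_analyses || []) {
    refs.push({ type: "individual_role_analysis", path, priority: ARTIFACT_PRIORITY.individual_role_analysis });
  }
  return refs; // `source`, `usage`, and `contains` would still be filled per task
}
```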
|
|
||||||
### 3. Implementation Plan Creation
|
#### Flow Control Object
|
||||||
Generate `IMPL_PLAN.md` at `.workflow/active/{session_id}/IMPL_PLAN.md`:
|
|
||||||
|
|
||||||
**Structure**:
|
**IMPORTANT**: The `pre_analysis` examples below are **reference templates only**. Agent MUST dynamically select, adapt, and expand steps based on actual task requirements. Apply the principle of **"举一反三"** (draw inferences from examples) - use these patterns as inspiration to create task-specific analysis steps.
|
||||||
```markdown
|
|
||||||
---
|
|
||||||
identifier: {session_id}
|
|
||||||
source: "User requirements"
|
|
||||||
analysis: .workflow/active/{session_id}/.process/ANALYSIS_RESULTS.md
|
|
||||||
---
|
|
||||||
|
|
||||||
# Implementation Plan: {Project Title}
|
```json
|
||||||
|
{
|
||||||
## Summary
|
"flow_control": {
|
||||||
{Core requirements and technical approach from analysis_results}
|
"pre_analysis": [...],
|
||||||
|
"implementation_approach": [...],
|
||||||
## Context Analysis
|
"target_files": [...]
|
||||||
- **Project**: {from session_metadata and context_package}
|
}
|
||||||
- **Modules**: {from analysis_results}
|
}
|
||||||
- **Dependencies**: {from context_package}
|
|
||||||
- **Patterns**: {from analysis_results}
|
|
||||||
|
|
||||||
## Brainstorming Artifacts
|
|
||||||
{List from artifacts_inventory with priorities}
|
|
||||||
|
|
||||||
## Task Breakdown
|
|
||||||
- **Task Count**: {from analysis_results.tasks.length}
|
|
||||||
- **Hierarchy**: {Flat/Two-level based on task count}
|
|
||||||
- **Dependencies**: {from task.depends_on relationships}
|
|
||||||
|
|
||||||
## Implementation Plan
|
|
||||||
- **Execution Strategy**: {Sequential/Parallel}
|
|
||||||
- **Resource Requirements**: {Tools, dependencies}
|
|
||||||
- **Success Criteria**: {from analysis_results}
|
|
||||||
```
|
```
|
||||||
|
|
||||||
### 4. TODO List Generation
|
**Test Task Extensions** (for type="test-gen" or type="test-fix"):
|
||||||
Generate `TODO_LIST.md` at `.workflow/active/{session_id}/TODO_LIST.md`:
|
|
||||||
|
|
||||||
**Structure**:
|
```json
|
||||||
|
{
|
||||||
|
"flow_control": {
|
||||||
|
"pre_analysis": [...],
|
||||||
|
"implementation_approach": [...],
|
||||||
|
"target_files": [...],
|
||||||
|
"reusable_test_tools": [
|
||||||
|
"tests/helpers/testUtils.ts",
|
||||||
|
"tests/fixtures/mockData.ts",
|
||||||
|
"tests/setup/testSetup.ts"
|
||||||
|
],
|
||||||
|
"test_commands": {
|
||||||
|
"run_tests": "npm test",
|
||||||
|
"run_coverage": "npm test -- --coverage",
|
||||||
|
"run_specific": "npm test -- {test_file}"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Test-Specific Fields**:
|
||||||
|
- `reusable_test_tools`: List of existing test utility files to reuse (helpers, fixtures, mocks)
|
||||||
|
- `test_commands`: Test execution commands from project config (package.json, pytest.ini)
|
||||||
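
A small sketch of how the `test_commands` fields above might be consumed; the field names follow the example, while the helper itself is illustrative:

```javascript
// Picks a test command, substituting the {test_file} placeholder used by run_specific.
function buildTestCommand(testCommands, { coverage = false, testFile } = {}) {
  if (testFile) {
    return testCommands.run_specific.replace("{test_file}", testFile);
  }
  return coverage ? testCommands.run_coverage : testCommands.run_tests;
}

const cmds = {
  run_tests: "npm test",
  run_coverage: "npm test -- --coverage",
  run_specific: "npm test -- {test_file}",
};
console.log(buildTestCommand(cmds, { testFile: "tests/auth/auth.test.ts" })); // "npm test -- tests/auth/auth.test.ts"
console.log(buildTestCommand(cmds, { coverage: true }));                      // "npm test -- --coverage"
```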
|
|
||||||
|
##### Pre-Analysis Patterns
|
||||||
|
|
||||||
|
**Dynamic Step Selection Guidelines**:
|
||||||
|
- **Context Loading**: Always include context package and role analysis loading
|
||||||
|
- **Architecture Analysis**: Add module structure analysis for complex projects
|
||||||
|
- **Pattern Discovery**: Use CLI tools (gemini/qwen/bash) based on task complexity and available tools
|
||||||
|
- **Tech-Specific Analysis**: Add language/framework-specific searches for specialized tasks
|
||||||
|
- **MCP Integration**: Utilize MCP tools when available for enhanced context
|
||||||
|
|
||||||
|
**Required Steps** (Always Include):
|
||||||
|
```json
|
||||||
|
[
|
||||||
|
{
|
||||||
|
"step": "load_context_package",
|
||||||
|
"action": "Load context package for artifact paths and smart context",
|
||||||
|
"commands": ["Read({{context_package_path}})"],
|
||||||
|
"output_to": "context_package",
|
||||||
|
"on_error": "fail"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"step": "load_role_analysis_artifacts",
|
||||||
|
"action": "Load role analyses from context-package.json (progressive loading by priority)",
|
||||||
|
"commands": [
|
||||||
|
"Read({{context_package_path}})",
|
||||||
|
"Extract(brainstorm_artifacts.role_analyses[].files[].path)",
|
||||||
|
"Read(extracted paths progressively)"
|
||||||
|
],
|
||||||
|
"output_to": "role_analysis_artifacts",
|
||||||
|
"on_error": "skip_optional"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
```
|
||||||
|
|
||||||
|
**Optional Steps** (Select and adapt based on task needs):
|
||||||
|
|
||||||
|
```json
|
||||||
|
[
|
||||||
|
// Pattern: Project structure analysis
|
||||||
|
{
|
||||||
|
"step": "analyze_project_architecture",
|
||||||
|
"commands": ["bash(ccw tool exec get_modules_by_depth '{}')"],
|
||||||
|
"output_to": "project_architecture"
|
||||||
|
},
|
||||||
|
|
||||||
|
// Pattern: Local search (bash/rg/find)
|
||||||
|
{
|
||||||
|
"step": "search_existing_patterns",
|
||||||
|
"commands": [
|
||||||
|
"bash(rg '[pattern]' --type [lang] -n --max-count [N])",
|
||||||
|
"bash(find . -name '[pattern]' -type f | head -[N])"
|
||||||
|
],
|
||||||
|
"output_to": "search_results"
|
||||||
|
},
|
||||||
|
|
||||||
|
// Pattern: Gemini CLI deep analysis
|
||||||
|
{
|
||||||
|
"step": "gemini_analyze_[aspect]",
|
||||||
|
"command": "bash(cd [path] && gemini -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY')",
|
||||||
|
"output_to": "analysis_result"
|
||||||
|
},
|
||||||
|
|
||||||
|
// Pattern: Qwen CLI analysis (fallback/alternative)
|
||||||
|
{
|
||||||
|
"step": "qwen_analyze_[aspect]",
|
||||||
|
"command": "bash(cd [path] && qwen -p '[similar to gemini pattern]')",
|
||||||
|
"output_to": "analysis_result"
|
||||||
|
},
|
||||||
|
|
||||||
|
// Pattern: MCP tools
|
||||||
|
{
|
||||||
|
"step": "mcp_search_[target]",
|
||||||
|
"command": "mcp__[tool]__[function](parameters)",
|
||||||
|
"output_to": "mcp_results"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
```
|
||||||
|
|
||||||
|
**Step Selection Strategy** (举一反三 Principle):
|
||||||
|
|
||||||
|
The examples above demonstrate **patterns**, not fixed requirements. Agent MUST:
|
||||||
|
|
||||||
|
1. **Always Include** (Required):
|
||||||
|
- `load_context_package` - Essential for all tasks
|
||||||
|
- `load_role_analysis_artifacts` - Critical for accessing brainstorming insights
|
||||||
|
|
||||||
|
2. **Progressive Addition of Analysis Steps**:
|
||||||
|
Include additional analysis steps as needed for comprehensive planning:
|
||||||
|
- **Architecture analysis**: Project structure + architecture patterns
|
||||||
|
- **Execution flow analysis**: Code tracing + quality analysis
|
||||||
|
- **Component analysis**: Component searches + pattern analysis
|
||||||
|
- **Data analysis**: Schema review + endpoint searches
|
||||||
|
- **Security analysis**: Vulnerability scans + security patterns
|
||||||
|
- **Performance analysis**: Bottleneck identification + profiling
|
||||||
|
|
||||||
|
Default: Include progressively based on planning requirements, not limited by task type.
|
||||||
|
|
||||||
|
3. **Tool Selection Strategy**:
|
||||||
|
- **Gemini CLI**: Deep analysis (architecture, execution flow, patterns)
|
||||||
|
- **Qwen CLI**: Fallback or code quality analysis
|
||||||
|
- **Bash/rg/find**: Quick pattern matching and file discovery
|
||||||
|
- **MCP tools**: Semantic search and external research
|
||||||
|
|
||||||
|
4. **Command Composition Patterns**:
|
||||||
|
- **Single command**: `bash([simple_search])`
|
||||||
|
- **Multiple commands**: `["bash([cmd1])", "bash([cmd2])"]`
|
||||||
|
- **CLI analysis**: `bash(cd [path] && gemini -p '[prompt]')`
|
||||||
|
- **MCP integration**: `mcp__[tool]__[function]([params])`
|
||||||
|
|
||||||
|
**Key Principle**: Examples show **structure patterns**, not specific implementations. Agent must create task-appropriate steps dynamically.
|
||||||
|
|
||||||
|
##### Implementation Approach
|
||||||
|
|
||||||
|
**Execution Modes**:
|
||||||
|
|
||||||
|
The `implementation_approach` supports **two execution modes** based on the presence of the `command` field:
|
||||||
|
|
||||||
|
1. **Default Mode (Agent Execution)** - `command` field **omitted**:
|
||||||
|
- Agent interprets `modification_points` and `logic_flow` autonomously
|
||||||
|
- Direct agent execution with full context awareness
|
||||||
|
- No external tool overhead
|
||||||
|
- **Use for**: Standard implementation tasks where agent capability is sufficient
|
||||||
|
- **Required fields**: `step`, `title`, `description`, `modification_points`, `logic_flow`, `depends_on`, `output`
|
||||||
|
|
||||||
|
2. **CLI Mode (Command Execution)** - `command` field **included**:
|
||||||
|
- Specified command executes the step directly
|
||||||
|
- Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
|
||||||
|
- **Use for**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
|
||||||
|
- **Required fields**: Same as default mode **PLUS** `command`
|
||||||
|
- **Command patterns**:
|
||||||
|
- `bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)`
|
||||||
|
- `bash(codex --full-auto exec '[task]' resume --last --skip-git-repo-check -s danger-full-access)` (multi-step)
|
||||||
|
- `bash(cd [path] && gemini -p '[prompt]' --approval-mode yolo)` (write mode)
|
||||||
|
|
||||||
|
**Semantic CLI Tool Selection**:
|
||||||
|
|
||||||
|
Agent determines CLI tool usage per-step based on user semantics and task nature.
|
||||||
|
|
||||||
|
**Source**: Scan `metadata.task_description` from context-package.json for CLI tool preferences.
|
||||||
|
|
||||||
|
**User Semantic Triggers** (patterns to detect in task_description):
|
||||||
|
- "use Codex/codex" → Add `command` field with Codex CLI
|
||||||
|
- "use Gemini/gemini" → Add `command` field with Gemini CLI
|
||||||
|
- "use Qwen/qwen" → Add `command` field with Qwen CLI
|
||||||
|
- "CLI execution" / "automated" → Infer appropriate CLI tool
|
||||||
|
|
||||||
|
**Task-Based Selection** (when no explicit user preference):
|
||||||
|
- **Implementation/coding**: Codex preferred for autonomous development
|
||||||
|
- **Analysis/exploration**: Gemini preferred for large context analysis
|
||||||
|
- **Documentation**: Gemini/Qwen with write mode (`--approval-mode yolo`)
|
||||||
|
- **Testing**: Depends on complexity - simple=agent, complex=Codex
|
||||||
|
|
||||||
|
**Default Behavior**: Agent always executes the workflow. CLI commands are embedded in `implementation_approach` steps:
|
||||||
|
- Agent orchestrates task execution
|
||||||
|
- When step has `command` field, agent executes it via Bash
|
||||||
|
- When step has no `command` field, agent implements directly
|
||||||
|
- This maintains agent control while leveraging CLI tool power
|
||||||
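
A minimal sketch of the dispatch rule just described, assuming `runBash` and `implementDirectly` stand in for the agent's own capabilities rather than real APIs:

```javascript
// A step with a `command` runs via Bash (CLI mode); otherwise the agent
// implements it directly from modification_points and logic_flow (default mode).
async function executeStep(step, { runBash, implementDirectly }) {
  if (step.command) {
    return runBash(step.command); // e.g. the codex/gemini/qwen invocation embedded in the step
  }
  return implementDirectly(step.modification_points, step.logic_flow);
}
```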
|
|
||||||
|
**Key Principle**: The `command` field is **optional**. Agent decides based on user semantics and task complexity.
|
||||||
|
|
||||||
|
**Examples**:
|
||||||
|
|
||||||
|
```json
|
||||||
|
[
|
||||||
|
// === DEFAULT MODE: Agent Execution (no command field) ===
|
||||||
|
{
|
||||||
|
"step": 1,
|
||||||
|
"title": "Load and analyze role analyses",
|
||||||
|
"description": "Load role analysis files and extract quantified requirements",
|
||||||
|
"modification_points": [
|
||||||
|
"Load N role analysis files: [list]",
|
||||||
|
"Extract M requirements from role analyses",
|
||||||
|
"Parse K architecture decisions"
|
||||||
|
],
|
||||||
|
"logic_flow": [
|
||||||
|
"Read role analyses from artifacts inventory",
|
||||||
|
"Parse architecture decisions",
|
||||||
|
"Extract implementation requirements",
|
||||||
|
"Build consolidated requirements list"
|
||||||
|
],
|
||||||
|
"depends_on": [],
|
||||||
|
"output": "synthesis_requirements"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"step": 2,
|
||||||
|
"title": "Implement following specification",
|
||||||
|
"description": "Implement features following consolidated role analyses",
|
||||||
|
"modification_points": [
|
||||||
|
"Create N new files: [list with line counts]",
|
||||||
|
"Modify M functions: [func() in file lines X-Y]",
|
||||||
|
"Implement K core features: [list]"
|
||||||
|
],
|
||||||
|
"logic_flow": [
|
||||||
|
"Apply requirements from [synthesis_requirements]",
|
||||||
|
"Implement features across new files",
|
||||||
|
"Modify existing functions",
|
||||||
|
"Write test cases covering all features",
|
||||||
|
"Validate against acceptance criteria"
|
||||||
|
],
|
||||||
|
"depends_on": [1],
|
||||||
|
"output": "implementation"
|
||||||
|
},
|
||||||
|
|
||||||
|
// === CLI MODE: Command Execution (optional command field) ===
|
||||||
|
{
|
||||||
|
"step": 3,
|
||||||
|
"title": "Execute implementation using CLI tool",
|
||||||
|
"description": "Use Codex/Gemini for complex autonomous execution",
|
||||||
|
"command": "bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)",
|
||||||
|
"modification_points": ["[Same as default mode]"],
|
||||||
|
"logic_flow": ["[Same as default mode]"],
|
||||||
|
"depends_on": [1, 2],
|
||||||
|
"output": "cli_implementation"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
```
|
||||||
|
|
||||||
|
##### Target Files
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"target_files": [
|
||||||
|
"src/auth/auth.service.ts",
|
||||||
|
"src/auth/auth.controller.ts",
|
||||||
|
"src/auth/auth.middleware.ts",
|
||||||
|
"src/auth/auth.types.ts",
|
||||||
|
"tests/auth/auth.test.ts",
|
||||||
|
"src/users/users.service.ts:validateUser:45-60",
|
||||||
|
"src/utils/utils.ts:hashPassword:120-135"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Format**:
|
||||||
|
- New files: `file_path`
|
||||||
|
- Existing files with modifications: `file_path:function_name:line_range`
|
||||||
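
For clarity, a small sketch (hypothetical helper) that splits a `target_files` entry into the two documented forms:

```javascript
// "path" for new files, "path:function:start-end" for modifications.
function parseTargetFile(entry) {
  const parts = entry.split(":");
  if (parts.length === 1) {
    return { path: entry, kind: "new" };
  }
  const [path, functionName, range] = parts;
  const [start, end] = range.split("-").map(Number);
  return { path, kind: "modify", functionName, lines: { start, end } };
}

console.log(parseTargetFile("src/auth/auth.service.ts"));
// { path: "src/auth/auth.service.ts", kind: "new" }
console.log(parseTargetFile("src/users/users.service.ts:validateUser:45-60"));
// { path: "src/users/users.service.ts", kind: "modify", functionName: "validateUser", lines: { start: 45, end: 60 } }
```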
|
|
||||||
|
### 2.2 IMPL_PLAN.md Structure
|
||||||
|
|
||||||
|
**Template-Based Generation**:
|
||||||
|
|
||||||
|
```
|
||||||
|
1. Load template: Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)
|
||||||
|
2. Populate all sections following template structure
|
||||||
|
3. Complete template validation checklist
|
||||||
|
4. Generate at .workflow/active/{session_id}/IMPL_PLAN.md
|
||||||
|
```
|
||||||
|
|
||||||
|
**Data Sources**:
|
||||||
|
- Session metadata (user requirements, session_id)
|
||||||
|
- Context package (project structure, dependencies, focus_paths)
|
||||||
|
- Analysis results (technical approach, architecture decisions)
|
||||||
|
- Brainstorming artifacts (role analyses, guidance specifications)
|
||||||
|
|
||||||
|
**Multi-Module Format** (when modules detected):
|
||||||
|
|
||||||
|
When multiple modules are detected (frontend/backend, etc.), organize IMPL_PLAN.md by module:
|
||||||
|
|
||||||
|
```markdown
|
||||||
|
# Implementation Plan
|
||||||
|
|
||||||
|
## Module A: Frontend (N tasks)
|
||||||
|
### IMPL-A1: [Task Title]
|
||||||
|
[Task details...]
|
||||||
|
|
||||||
|
### IMPL-A2: [Task Title]
|
||||||
|
[Task details...]
|
||||||
|
|
||||||
|
## Module B: Backend (N tasks)
|
||||||
|
### IMPL-B1: [Task Title]
|
||||||
|
[Task details...]
|
||||||
|
|
||||||
|
### IMPL-B2: [Task Title]
|
||||||
|
[Task details...]
|
||||||
|
|
||||||
|
## Cross-Module Dependencies
|
||||||
|
- IMPL-A1 → IMPL-B1 (Frontend depends on Backend API)
|
||||||
|
- IMPL-A2 → IMPL-B2 (UI state depends on Backend service)
|
||||||
|
```
|
||||||
|
|
||||||
|
**Cross-Module Dependency Notation**:
|
||||||
|
- During parallel planning, use `CROSS::{module}::{pattern}` format
|
||||||
|
- Example: `depends_on: ["CROSS::B::api-endpoint"]`
|
||||||
|
- Integration phase resolves to actual task IDs: `CROSS::B::api → IMPL-B1`
|
||||||
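
A hedged sketch of how integration-phase resolution might work; only the `CROSS::{module}::{pattern}` notation comes from the rules above, while the lookup table and tag-based matching are assumptions for illustration:

```javascript
// Resolves CROSS::{module}::{pattern} placeholders to concrete task IDs when possible.
function resolveCrossDependency(dep, tasksByModule) {
  const match = /^CROSS::([A-Z])::(.+)$/.exec(dep);
  if (!match) return dep; // already a concrete task ID
  const [, modulePrefix, pattern] = match;
  const candidates = tasksByModule[modulePrefix] || [];
  const hit = candidates.find((t) => (t.tags || []).includes(pattern));
  return hit ? hit.id : dep; // unresolved placeholders are left for manual review
}

const tasksByModule = { B: [{ id: "IMPL-B1", title: "Expose API endpoint", tags: ["api-endpoint"] }] };
console.log(resolveCrossDependency("CROSS::B::api-endpoint", tasksByModule)); // "IMPL-B1"
```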
|
|
||||||
|
### 2.3 TODO_LIST.md Structure
|
||||||
|
|
||||||
|
Generate at `.workflow/active/{session_id}/TODO_LIST.md`:
|
||||||
|
|
||||||
|
**Single Module Format**:
|
||||||
```markdown
|
```markdown
|
||||||
# Tasks: {Session Topic}
|
# Tasks: {Session Topic}
|
||||||
|
|
||||||
## Task Progress
|
## Task Progress
|
||||||
▸ **IMPL-001**: [Main Task] → [📋](./.task/IMPL-001.json)
|
- [ ] **IMPL-001**: [Task Title] → [📋](./.task/IMPL-001.json)
|
||||||
- [ ] **IMPL-001.1**: [Subtask] → [📋](./.task/IMPL-001.1.json)
|
- [ ] **IMPL-002**: [Task Title] → [📋](./.task/IMPL-002.json)
|
||||||
|
- [x] **IMPL-003**: [Task Title] → [✅](./.summaries/IMPL-003-summary.md)
|
||||||
- [ ] **IMPL-002**: [Simple Task] → [📋](./.task/IMPL-002.json)
|
|
||||||
|
|
||||||
## Status Legend
|
## Status Legend
|
||||||
- `▸` = Container task (has subtasks)
|
- `- [ ]` = Pending task
|
||||||
- `- [ ]` = Pending leaf task
|
- `- [x]` = Completed task
|
||||||
- `- [x]` = Completed leaf task
|
```
|
||||||
|
|
||||||
|
**Multi-Module Format** (hierarchical by module):
|
||||||
|
```markdown
|
||||||
|
# Tasks: {Session Topic}
|
||||||
|
|
||||||
|
## Module A (Frontend)
|
||||||
|
- [ ] **IMPL-A1**: [Task Title] → [📋](./.task/IMPL-A1.json)
|
||||||
|
- [ ] **IMPL-A2**: [Task Title] → [📋](./.task/IMPL-A2.json)
|
||||||
|
|
||||||
|
## Module B (Backend)
|
||||||
|
- [ ] **IMPL-B1**: [Task Title] → [📋](./.task/IMPL-B1.json)
|
||||||
|
- [ ] **IMPL-B2**: [Task Title] → [📋](./.task/IMPL-B2.json)
|
||||||
|
|
||||||
|
## Cross-Module Dependencies
|
||||||
|
- IMPL-A1 → IMPL-B1 (Frontend depends on Backend API)
|
||||||
|
|
||||||
|
## Status Legend
|
||||||
|
- `- [ ]` = Pending task
|
||||||
|
- `- [x]` = Completed task
|
||||||
```
|
```
|
||||||
|
|
||||||
**Linking Rules**:
|
**Linking Rules**:
|
||||||
- Todo items → task JSON: `[📋](./.task/IMPL-XXX.json)`
|
- Todo items → task JSON: `[📋](./.task/IMPL-XXX.json)`
|
||||||
- Completed tasks → summaries: `[✅](./.summaries/IMPL-XXX-summary.md)`
|
- Completed tasks → summaries: `[✅](./.summaries/IMPL-XXX-summary.md)`
|
||||||
- Consistent ID schemes: IMPL-XXX, IMPL-XXX.Y (max 2 levels)
|
- Consistent ID schemes: `IMPL-N` (single) or `IMPL-{prefix}{seq}` (multi-module)
|
||||||
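
To make the linking rules concrete, a small sketch (illustrative only) that renders one TODO_LIST.md line per task:

```javascript
// Pending tasks link to their task JSON (📋), completed tasks to their summary (✅).
function renderTodoLine(task) {
  if (task.status === "completed") {
    return `- [x] **${task.id}**: ${task.title} → [✅](./.summaries/${task.id}-summary.md)`;
  }
  return `- [ ] **${task.id}**: ${task.title} → [📋](./.task/${task.id}.json)`;
}

console.log(renderTodoLine({ id: "IMPL-001", title: "JWT authentication", status: "pending" }));
console.log(renderTodoLine({ id: "IMPL-003", title: "Session management", status: "completed" }));
```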
|
|
||||||
|
### 2.4 Complexity & Structure Selection
|
||||||
|
|
||||||
|
|
||||||
### 5. Complexity Assessment & Document Structure
|
|
||||||
Use `analysis_results.complexity` or task count to determine structure:
|
Use `analysis_results.complexity` or task count to determine structure:
|
||||||
|
|
||||||
**Simple Tasks** (≤5 tasks):
|
**Single Module Mode**:
|
||||||
- Flat structure: IMPL_PLAN.md + TODO_LIST.md + task JSONs
|
- **Simple Tasks** (≤5 tasks): Flat structure
|
||||||
- No container tasks, all leaf tasks
|
- **Medium Tasks** (6-12 tasks): Flat structure
|
||||||
|
- **Complex Tasks** (>12 tasks): Re-scope required (maximum 12 tasks hard limit)
|
||||||
|
|
||||||
**Medium Tasks** (6-10 tasks):
|
**Multi-Module Mode** (N+1 parallel planning):
|
||||||
- Two-level hierarchy: IMPL_PLAN.md + TODO_LIST.md + task JSONs
|
- **Per-module limit**: ≤9 tasks per module
|
||||||
- Optional container tasks for grouping
|
- **Total limit**: Sum of all module tasks ≤27 (3 modules × 9 tasks)
|
||||||
|
- **Task ID format**: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
|
||||||
|
- **Structure**: Hierarchical by module in IMPL_PLAN.md and TODO_LIST.md
|
||||||
|
|
||||||
**Complex Tasks** (>10 tasks):
|
**Multi-Module Detection Triggers**:
|
||||||
- **Re-scope required**: Maximum 10 tasks hard limit
|
- Explicit frontend/backend separation (`src/frontend`, `src/backend`)
|
||||||
- If analysis_results contains >10 tasks, consolidate or request re-scoping
|
- Monorepo structure (`packages/*`, `apps/*`)
|
||||||
|
- Context-package dependency clustering (2+ distinct module groups)
|
||||||
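
A sketch of the task-count limits in this subsection expressed as a single check; the thresholds come from the text above, while the function and return shape are illustrative:

```javascript
// Single module: <=5 simple, 6-12 medium, >12 requires re-scoping.
// Multi-module: <=9 tasks per module and <=27 tasks in total.
function checkTaskBudget(tasksByModule) {
  const modules = Object.keys(tasksByModule);
  if (modules.length <= 1) {
    const count = (tasksByModule[modules[0]] || []).length;
    const tier = count <= 5 ? "simple" : count <= 12 ? "medium" : "complex";
    return { mode: "single", tier, ok: count <= 12, count };
  }
  const perModuleOk = modules.every((m) => tasksByModule[m].length <= 9);
  const total = modules.reduce((sum, m) => sum + tasksByModule[m].length, 0);
  return { mode: "multi", ok: perModuleOk && total <= 27, total };
}

console.log(checkTaskBudget({ main: new Array(7).fill({}) }));                          // medium, ok
console.log(checkTaskBudget({ A: new Array(4).fill({}), B: new Array(10).fill({}) }));  // multi, not ok
```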
|
|
||||||
## Quantification Requirements (MANDATORY)
|
---
|
||||||
|
|
||||||
|
## 3. Quality Standards
|
||||||
|
|
||||||
|
### 3.1 Quantification Requirements (MANDATORY)
|
||||||
|
|
||||||
**Purpose**: Eliminate ambiguity by enforcing explicit counts and enumerations in all task specifications.
|
**Purpose**: Eliminate ambiguity by enforcing explicit counts and enumerations in all task specifications.
|
||||||
|
|
||||||
@@ -344,46 +727,52 @@ Use `analysis_results.complexity` or task count to determine structure:
|
|||||||
- [ ] Each implementation step has its own acceptance criteria
|
- [ ] Each implementation step has its own acceptance criteria
|
||||||
|
|
||||||
**Examples**:
|
**Examples**:
|
||||||
- ✅ GOOD: `"Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]"`
|
- GOOD: `"Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]"`
|
||||||
- ❌ BAD: `"Implement new commands"`
|
- BAD: `"Implement new commands"`
|
||||||
- ✅ GOOD: `"5 files created: verify by ls .claude/commands/*.md | wc -l = 5"`
|
- GOOD: `"5 files created: verify by ls .claude/commands/*.md | wc -l = 5"`
|
||||||
- ❌ BAD: `"All commands implemented successfully"`
|
- BAD: `"All commands implemented successfully"`
|
||||||
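
As a rough aid for the rule above, a heuristic lint (not part of the spec) that flags requirement strings lacking an explicit count or an enumerated list:

```javascript
// Heuristic only: a quantified requirement should contain a number and a bracketed enumeration.
function looksQuantified(requirement) {
  const hasCount = /\b\d+\b/.test(requirement);           // e.g. "5 files", "15 requirements"
  const hasEnumeration = /\[[^\]]+\]/.test(requirement);  // e.g. "[cmd1, cmd2, cmd3]"
  return hasCount && hasEnumeration;
}

console.log(looksQuantified("Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]")); // true
console.log(looksQuantified("Implement new commands"));                               // false
```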
|
|
||||||
## Quality Standards
|
### 3.2 Planning & Organization Standards
|
||||||
|
|
||||||
**Planning Principles:**
|
**Planning Principles**:
|
||||||
- Each stage produces working, testable code
|
- Each stage produces working, testable code
|
||||||
- Clear success criteria for each deliverable
|
- Clear success criteria for each deliverable
|
||||||
- Dependencies clearly identified between stages
|
- Dependencies clearly identified between stages
|
||||||
- Incremental progress over big bangs
|
- Incremental progress over big bangs
|
||||||
|
|
||||||
**File Organization:**
|
**File Organization**:
|
||||||
- Session naming: `WFS-[topic-slug]`
|
- Session naming: `WFS-[topic-slug]`
|
||||||
- Task IDs: IMPL-XXX, IMPL-XXX.Y, IMPL-XXX.Y.Z
|
- Task IDs:
|
||||||
- Directory structure follows complexity (Level 0/1/2)
|
- Single module: `IMPL-N` (e.g., IMPL-001, IMPL-002)
|
||||||
|
- Multi-module: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
|
||||||
|
- Directory structure: flat task organization (all tasks in `.task/`)
|
||||||
|
|
||||||
**Document Standards:**
|
**Document Standards**:
|
||||||
- Proper linking between documents
|
- Proper linking between documents
|
||||||
- Consistent navigation and references
|
- Consistent navigation and references
|
||||||
|
|
||||||
## Key Reminders
|
### 3.3 Guidelines Checklist
|
||||||
|
|
||||||
**ALWAYS:**
|
**ALWAYS:**
|
||||||
- **Apply Quantification Requirements**: All requirements, acceptance criteria, and modification points MUST include explicit counts and enumerations
|
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
|
||||||
- **Use provided context package**: Extract all information from structured context
|
- Load IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md
|
||||||
- **Respect memory-first rule**: Use provided content (already loaded from memory/file)
|
- Use provided context package: Extract all information from structured context
|
||||||
- **Follow 5-field schema**: All task JSONs must have id, title, status, meta, context, flow_control
|
- Respect memory-first rule: Use provided content (already loaded from memory/file)
|
||||||
- **Map artifacts**: Use artifacts_inventory to populate task.context.artifacts array
|
- Follow 6-field schema: All task JSONs must have id, title, status, context_package_path, meta, context, flow_control
|
||||||
- **Add MCP integration**: Include MCP tool steps in flow_control.pre_analysis when capabilities available
|
- Map artifacts: Use artifacts_inventory to populate task.context.artifacts array
|
||||||
- **Validate task count**: Maximum 10 tasks hard limit, request re-scope if exceeded
|
- Add MCP integration: Include MCP tool steps in flow_control.pre_analysis when capabilities available
|
||||||
- **Use session paths**: Construct all paths using provided session_id
|
- Validate task count: Maximum 12 tasks hard limit, request re-scope if exceeded
|
||||||
- **Link documents properly**: Use correct linking format (📋 for JSON, ✅ for summaries)
|
- Use session paths: Construct all paths using provided session_id
|
||||||
- **Run validation checklist**: Verify all quantification requirements before finalizing task JSONs
|
- Link documents properly: Use correct linking format (📋 for JSON, ✅ for summaries)
|
||||||
|
- Run validation checklist: Verify all quantification requirements before finalizing task JSONs
|
||||||
|
- Apply 举一反三 principle: Adapt pre-analysis patterns to task-specific needs dynamically
|
||||||
|
- Follow template validation: Complete IMPL_PLAN.md template validation checklist before finalization
|
||||||
|
|
||||||
**NEVER:**
|
**NEVER:**
|
||||||
- Load files directly (use provided context package instead)
|
- Load files directly (use provided context package instead)
|
||||||
- Assume default locations (always use session_id in paths)
|
- Assume default locations (always use session_id in paths)
|
||||||
- Create circular dependencies in task.depends_on
|
- Create circular dependencies in task.depends_on
|
||||||
- Exceed 10 tasks without re-scoping
|
- Exceed 12 tasks without re-scoping
|
||||||
- Skip artifact integration when artifacts_inventory is provided
|
- Skip artifact integration when artifacts_inventory is provided
|
||||||
- Ignore MCP capabilities when available
|
- Ignore MCP capabilities when available
|
||||||
|
- Use fixed pre-analysis steps without task-specific adaptation
|
||||||
|
|||||||
@@ -67,7 +67,7 @@ Score = 0
|
|||||||
|
|
||||||
**1. Project Structure**:
|
**1. Project Structure**:
|
||||||
```bash
|
```bash
|
||||||
~/.claude/scripts/get_modules_by_depth.sh
|
ccw tool exec get_modules_by_depth '{}'
|
||||||
```
|
```
|
||||||
|
|
||||||
**2. Content Search**:
|
**2. Content Search**:
|
||||||
|
|||||||
@@ -1,620 +1,182 @@
|
|||||||
---
|
---
|
||||||
name: cli-explore-agent
|
name: cli-explore-agent
|
||||||
description: |
|
description: |
|
||||||
Read-only code exploration and structural analysis agent specialized in module discovery, dependency mapping, and architecture comprehension using dual-source strategy (Bash rapid scan + Gemini CLI semantic analysis).
|
Read-only code exploration agent with dual-source analysis strategy (Bash + Gemini CLI).
|
||||||
|
Orchestrates 4-phase workflow: Task Understanding → Analysis Execution → Schema Validation → Output Generation
|
||||||
Core capabilities:
|
|
||||||
- Multi-layer module structure analysis (directory tree, file patterns, symbol discovery)
|
|
||||||
- Dependency graph construction (imports, exports, call chains, circular detection)
|
|
||||||
- Pattern discovery (design patterns, architectural styles, naming conventions)
|
|
||||||
- Code provenance tracing (definition lookup, usage sites, call hierarchies)
|
|
||||||
- Architecture summarization (component relationships, integration points, data flows)
|
|
||||||
|
|
||||||
Integration points:
|
|
||||||
- Gemini CLI: Deep semantic understanding, design intent analysis, non-standard pattern discovery
|
|
||||||
- Qwen CLI: Fallback for Gemini, specialized for code analysis tasks
|
|
||||||
- Bash tools: rg, tree, find, get_modules_by_depth.sh for rapid structural scanning
|
|
||||||
- MCP Code Index: Optional integration for enhanced file discovery and search
|
|
||||||
|
|
||||||
Key optimizations:
|
|
||||||
- Dual-source strategy: Bash structural scan (speed) + Gemini semantic analysis (depth)
|
|
||||||
- Language-agnostic analysis with syntax-aware extensions
|
|
||||||
- Progressive disclosure: Quick overview → detailed analysis → dependency deep-dive
|
|
||||||
- Context-aware filtering based on task requirements
|
|
||||||
|
|
||||||
color: yellow
|
color: yellow
|
||||||
---
|
---
|
||||||
|
|
||||||
You are a specialized **CLI Exploration Agent** that executes read-only code analysis tasks autonomously to discover module structures, map dependencies, and understand architectural patterns.
|
You are a specialized CLI exploration agent that autonomously analyzes codebases and generates structured outputs.
|
||||||
|
|
||||||
## Agent Operation
|
## Core Capabilities
|
||||||
|
|
||||||
### Execution Flow
|
1. **Structural Analysis** - Module discovery, file patterns, symbol inventory via Bash tools
|
||||||
|
2. **Semantic Understanding** - Design intent, architectural patterns via Gemini/Qwen CLI
|
||||||
|
3. **Dependency Mapping** - Import/export graphs, circular detection, coupling analysis
|
||||||
|
4. **Structured Output** - Schema-compliant JSON generation with validation
|
||||||
|
|
||||||
```
|
**Analysis Modes**:
|
||||||
STEP 1: Parse Analysis Request
|
- `quick-scan` → Bash only (10-30s)
|
||||||
→ Extract task intent (structure, dependencies, patterns, provenance, summary)
|
- `deep-scan` → Bash + Gemini dual-source (2-5min)
|
||||||
→ Identify analysis mode (quick-scan | deep-scan | dependency-map)
|
- `dependency-map` → Graph construction (3-8min)
|
||||||
→ Determine scope (directory, file patterns, language filters)
|
|
||||||
|
|
||||||
STEP 2: Initialize Analysis Environment
|
|
||||||
→ Set project root and working directory
|
|
||||||
→ Validate access to required tools (rg, tree, find, Gemini CLI)
|
|
||||||
→ Optional: Initialize Code Index MCP for enhanced discovery
|
|
||||||
→ Load project context (CLAUDE.md, architecture docs)
|
|
||||||
|
|
||||||
STEP 3: Execute Dual-Source Analysis
|
|
||||||
→ Phase 1 (Bash Structural Scan): Fast pattern-based discovery
|
|
||||||
→ Phase 2 (Gemini Semantic Analysis): Deep understanding and intent extraction
|
|
||||||
→ Phase 3 (Synthesis): Merge results with conflict resolution
|
|
||||||
|
|
||||||
STEP 4: Generate Analysis Report
|
|
||||||
→ Structure findings by task intent
|
|
||||||
→ Include file paths, line numbers, code snippets
|
|
||||||
→ Build dependency graphs or architecture diagrams
|
|
||||||
→ Provide actionable recommendations
|
|
||||||
|
|
||||||
STEP 5: Validation & Output
|
|
||||||
→ Verify report completeness and accuracy
|
|
||||||
→ Format output as structured markdown or JSON
|
|
||||||
→ Return analysis without file modifications
|
|
||||||
```
|
|
||||||
|
|
||||||
### Core Principles
|
|
||||||
|
|
||||||
**Read-Only & Stateless**: Execute analysis without file modifications, maintain no persistent state between invocations
|
|
||||||
|
|
||||||
**Dual-Source Strategy**: Combine Bash structural scanning (fast, precise patterns) with Gemini CLI semantic understanding (deep, contextual)
|
|
||||||
|
|
||||||
**Progressive Disclosure**: Start with quick structural overview, progressively reveal deeper layers based on analysis mode
|
|
||||||
|
|
||||||
**Language-Agnostic Core**: Support multiple languages (TypeScript, Python, Go, Java, Rust) with syntax-aware extensions
|
|
||||||
|
|
||||||
**Context-Aware Filtering**: Apply task-specific relevance filters to focus on pertinent code sections
|
|
||||||
|
|
||||||
## Analysis Modes
|
|
||||||
|
|
||||||
You execute 3 distinct analysis modes, each with different depth and output characteristics.
|
|
||||||
|
|
||||||
### Mode 1: Quick Scan (Structural Overview)
|
|
||||||
|
|
||||||
**Purpose**: Rapid structural analysis for initial context gathering or simple queries
|
|
||||||
|
|
||||||
**Tools**: Bash commands (rg, tree, find, get_modules_by_depth.sh)
|
|
||||||
|
|
||||||
**Process**:
|
|
||||||
1. **Project Structure**: Run get_modules_by_depth.sh for hierarchical overview
|
|
||||||
2. **File Discovery**: Use find/glob patterns to locate relevant files
|
|
||||||
3. **Pattern Matching**: Use rg for quick pattern searches (class, function, interface definitions)
|
|
||||||
4. **Basic Metrics**: Count files, lines, major components
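
A minimal end-to-end sketch of the four steps above for a TypeScript project; the script path follows the convention used elsewhere in this document, and the `src` directory is an assumption:

```bash
# 1. Project structure overview
bash ~/.claude/scripts/get_modules_by_depth.sh

# 2. File discovery (skip vendored code)
find src -name "*.ts" -type f | grep -v node_modules | head -50

# 3. Pattern matching: exported classes/functions/interfaces
rg "^export (class|interface|type|function) " --type ts -n --max-count 50

# 4. Basic metrics: file count and total line count
find src -name "*.ts" -type f | wc -l
rg --files --type ts src | xargs wc -l | tail -1
```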
|
|
||||||
|
|
||||||
**Output**: Structured markdown with directory tree, file lists, basic component inventory
|
|
||||||
|
|
||||||
**Time Estimate**: 10-30 seconds
|
|
||||||
|
|
||||||
**Use Cases**:
|
|
||||||
- Initial project exploration
|
|
||||||
- Quick file/pattern lookups
|
|
||||||
- Pre-planning reconnaissance
|
|
||||||
- Context package generation (breadth-first)
|
|
||||||
|
|
||||||
### Mode 2: Deep Scan (Semantic Analysis)
|
|
||||||
|
|
||||||
**Purpose**: Comprehensive understanding of code intent, design patterns, and architectural decisions
|
|
||||||
|
|
||||||
**Tools**: Bash commands (Phase 1) + Gemini CLI (Phase 2) + Synthesis (Phase 3)
|
|
||||||
|
|
||||||
**Process**:
|
|
||||||
|
|
||||||
**Phase 1: Bash Structural Pre-Scan** (Fast & Precise)
|
|
||||||
- Purpose: Discover standard patterns with zero ambiguity
|
|
||||||
- Execution:
|
|
||||||
```bash
|
|
||||||
# TypeScript/JavaScript
|
|
||||||
rg "^export (class|interface|type|function) " --type ts -n --max-count 50
|
|
||||||
rg "^import .* from " --type ts -n | head -30
|
|
||||||
|
|
||||||
# Python
|
|
||||||
rg "^(class|def) \w+" --type py -n --max-count 50
|
|
||||||
rg "^(from|import) " --type py -n | head -30
|
|
||||||
|
|
||||||
# Go
|
|
||||||
rg "^(type|func) \w+" --type go -n --max-count 50
|
|
||||||
rg "^import " --type go -n | head -30
|
|
||||||
```
|
|
||||||
- Output: Precise file:line locations for standard definitions
|
|
||||||
- Strengths: ✅ Fast (seconds) | ✅ Zero false positives | ✅ Complete for standard patterns
|
|
||||||
|
|
||||||
**Phase 2: Gemini Semantic Understanding** (Deep & Comprehensive)
|
|
||||||
- Purpose: Discover patterns that Phase 1 missed and understand design intent
|
|
||||||
- Tools: Gemini CLI (Qwen as fallback)
|
|
||||||
- Execution Mode: `analysis` (read-only)
|
|
||||||
- Tasks:
|
|
||||||
* Identify non-standard naming conventions (helper_, util_, custom prefixes)
|
|
||||||
* Analyze semantic comments for architectural intent (/* Core service */, # Main entry point)
|
|
||||||
* Discover implicit dependencies (runtime imports, reflection-based loading)
|
|
||||||
* Detect design patterns (singleton, factory, observer, strategy)
|
|
||||||
* Extract architectural layers and component responsibilities
|
|
||||||
- Output: `${intermediates_dir}/gemini-semantic-analysis.json`
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"bash_missed_patterns": [
|
|
||||||
{
|
|
||||||
"pattern_type": "non_standard_export",
|
|
||||||
"location": "src/services/helper_auth.ts:45",
|
|
||||||
"naming_convention": "helper_ prefix pattern",
|
|
||||||
"confidence": "high"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"design_intent_summary": "Layered architecture with service-repository pattern",
|
|
||||||
"architectural_patterns": ["MVC", "Dependency Injection", "Repository Pattern"],
|
|
||||||
"implicit_dependencies": ["Config loaded via environment", "Logger injected at runtime"],
|
|
||||||
"recommendations": ["Standardize naming to match project conventions"]
|
|
||||||
}
|
|
||||||
```
|
|
||||||
- Strengths: ✅ Discovers hidden patterns | ✅ Understands intent | ✅ Finds non-standard code
|
|
||||||
|
|
||||||
**Phase 3: Dual-Source Synthesis** (Best of Both)
|
|
||||||
- Merge Bash (precise locations) + Gemini (semantic understanding)
|
|
||||||
- Strategy:
|
|
||||||
* Standard patterns: Use Bash results (file:line precision)
|
|
||||||
* Supplementary discoveries: Adopt Gemini findings
|
|
||||||
* Conflicting interpretations: Use Gemini semantic context for resolution
|
|
||||||
- Validation: Cross-reference both sources for completeness
|
|
||||||
- Attribution: Mark each finding as "bash-discovered" or "gemini-discovered"
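
To make the merge step concrete, here is a rough `jq` sketch; it assumes the Phase 1 scan was saved as a JSON file named `bash-structural-scan.json` with a top-level `findings` array (a hypothetical layout, since only `gemini-semantic-analysis.json` is defined above):

```bash
# Merge Bash and Gemini findings, tagging each entry with its discovery source
jq -s '{
  findings: (
    (.[0].findings // [] | map(. + {source: "bash-discovered"})) +
    (.[1].bash_missed_patterns // [] | map(. + {source: "gemini-discovered"}))
  )
}' "${intermediates_dir}/bash-structural-scan.json" \
   "${intermediates_dir}/gemini-semantic-analysis.json" \
   > "${intermediates_dir}/synthesis.json"
```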
|
|
||||||
|
|
||||||
**Output**: Comprehensive analysis report with architectural insights, design patterns, code intent
|
|
||||||
|
|
||||||
**Time Estimate**: 2-5 minutes
|
|
||||||
|
|
||||||
**Use Cases**:
|
|
||||||
- Architecture review and refactoring planning
|
|
||||||
- Understanding unfamiliar codebase sections
|
|
||||||
- Pattern discovery for standardization
|
|
||||||
- Pre-implementation deep-dive
|
|
||||||
|
|
||||||
### Mode 3: Dependency Map (Relationship Analysis)
|
|
||||||
|
|
||||||
**Purpose**: Build complete dependency graphs with import/export chains and circular dependency detection
|
|
||||||
|
|
||||||
**Tools**: Bash + Gemini CLI + Graph construction logic
|
|
||||||
|
|
||||||
**Process**:
|
|
||||||
1. **Direct Dependencies** (Bash):
|
|
||||||
```bash
|
|
||||||
# Extract all imports
|
|
||||||
rg "^import .* from ['\"](.+)['\"]" --type ts -o -r '$1' -n
|
|
||||||
|
|
||||||
# Extract all exports
|
|
||||||
rg "^export .* (class|function|const|type|interface) (\w+)" --type ts -o -r '$2' -n
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Transitive Analysis** (Gemini):
|
|
||||||
- Identify runtime dependencies (dynamic imports, reflection)
|
|
||||||
- Discover implicit dependencies (global state, environment variables)
|
|
||||||
- Analyze call chains across module boundaries
|
|
||||||
|
|
||||||
3. **Graph Construction**:
|
|
||||||
- Build directed graph: nodes (files/modules), edges (dependencies)
|
|
||||||
- Detect circular dependencies with a cycle-detection algorithm (see the sketch after this list)
|
|
||||||
- Calculate metrics: in-degree, out-degree, centrality
|
|
||||||
- Identify architectural layers (presentation, business logic, data access)
|
|
||||||
|
|
||||||
4. **Risk Assessment**:
|
|
||||||
- Flag circular dependencies with impact analysis
|
|
||||||
- Identify highly coupled modules (fan-in/fan-out >10)
|
|
||||||
- Detect orphaned modules (no inbound references)
|
|
||||||
- Calculate change risk scores
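
As referenced in the graph-construction step above, a lightweight sketch of cycle detection, assuming GNU `tsort` is available; the edge extraction only handles relative TypeScript imports and does not resolve aliases or index files, so treat it as a rough first pass:

```bash
# Build a crude "importer imported-path" edge list from relative TS imports
rg "^import .* from ['\"](\./[^'\"]+)['\"]" --type ts -o -r '$1' -n --no-heading \
  | awk -F: '{ printf "%s %s\n", $1, $3 }' > edges.txt

# tsort prints any cycle members to stderr ("input contains a loop")
tsort edges.txt > /dev/null 2> cycles.txt
if [ -s cycles.txt ]; then
  echo "Circular dependencies detected:"
  cat cycles.txt
fi
```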
|
|
||||||
|
|
||||||
**Output**: Dependency graph (JSON/DOT format) + risk assessment report
|
|
||||||
|
|
||||||
**Time Estimate**: 3-8 minutes (depends on project size)
|
|
||||||
|
|
||||||
**Use Cases**:
|
|
||||||
- Refactoring impact analysis
|
|
||||||
- Module extraction planning
|
|
||||||
- Circular dependency resolution
|
|
||||||
- Architecture optimization
|
|
||||||
|
|
||||||
## Tool Integration
|
|
||||||
|
|
||||||
### Bash Structural Tools
|
|
||||||
|
|
||||||
**get_modules_by_depth.sh**:
|
|
||||||
- Purpose: Generate hierarchical project structure
|
|
||||||
- Usage: `bash ~/.claude/scripts/get_modules_by_depth.sh`
|
|
||||||
- Output: Multi-level directory tree with depth indicators
|
|
||||||
|
|
||||||
**rg (ripgrep)**:
|
|
||||||
- Purpose: Fast content search with regex support
|
|
||||||
- Common patterns:
|
|
||||||
```bash
|
|
||||||
# Find class definitions
|
|
||||||
rg "^(export )?class \w+" --type ts -n
|
|
||||||
|
|
||||||
# Find function definitions
|
|
||||||
rg "^(export )?(function|const) \w+\s*=" --type ts -n
|
|
||||||
|
|
||||||
# Find imports
|
|
||||||
rg "^import .* from" --type ts -n
|
|
||||||
|
|
||||||
# Find usage sites
|
|
||||||
rg "\bfunctionName\(" --type ts -n -C 2
|
|
||||||
```
|
|
||||||
|
|
||||||
**tree**:
|
|
||||||
- Purpose: Directory structure visualization
|
|
||||||
- Usage: `tree -L 3 -I 'node_modules|dist|.git'`
|
|
||||||
|
|
||||||
**find**:
|
|
||||||
- Purpose: File discovery by name patterns
|
|
||||||
- Usage: `find . -name "*.ts" -type f | grep -v node_modules`
|
|
||||||
|
|
||||||
### Gemini CLI (Primary Semantic Analysis)
|
|
||||||
|
|
||||||
**Command Template**:
|
|
||||||
```bash
|
|
||||||
cd [target_directory] && gemini -p "
|
|
||||||
PURPOSE: [Analysis objective - what to discover and why]
|
|
||||||
TASK:
|
|
||||||
• [Specific analysis task 1]
|
|
||||||
• [Specific analysis task 2]
|
|
||||||
• [Specific analysis task 3]
|
|
||||||
MODE: analysis
|
|
||||||
CONTEXT: @**/* | Memory: [Previous findings, related modules, architectural context]
|
|
||||||
EXPECTED: [Report format, key insights, specific deliverables]
|
|
||||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on [scope constraints] | analysis=READ-ONLY
|
|
||||||
" -m gemini-2.5-pro
|
|
||||||
```
|
|
||||||
|
|
||||||
**Use Cases**:
|
|
||||||
- Non-standard pattern discovery
|
|
||||||
- Design intent extraction
|
|
||||||
- Architectural layer identification
|
|
||||||
- Code smell detection
|
|
||||||
|
|
||||||
**Fallback**: Qwen CLI with same command structure
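
For instance, a minimal fallback invocation might look like the following; the prompt structure mirrors the Gemini template above, while the `-m` model name is an assumption rather than something this document specifies:

```bash
cd [target_directory] && qwen -p "
PURPOSE: [Analysis objective]
TASK:
• [Specific analysis task]
MODE: analysis
CONTEXT: @**/* | Memory: [Previous findings]
EXPECTED: [Report format, key insights]
RULES: Focus on [scope constraints] | analysis=READ-ONLY
" -m qwen3-coder-plus   # model identifier is an assumption
```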
|
|
||||||
|
|
||||||
### MCP Code Index (Optional Enhancement)
|
|
||||||
|
|
||||||
**Tools**:
|
|
||||||
- `mcp__code-index__set_project_path(path)` - Initialize index
|
|
||||||
- `mcp__code-index__find_files(pattern)` - File discovery
|
|
||||||
- `mcp__code-index__search_code_advanced(pattern, file_pattern, regex)` - Content search
|
|
||||||
- `mcp__code-index__get_file_summary(file_path)` - File structure analysis
|
|
||||||
|
|
||||||
**Integration Strategy**: Use as primary discovery tool when available, fallback to bash/rg otherwise
|
|
||||||
|
|
||||||
## Output Formats
|
|
||||||
|
|
||||||
### Structural Overview Report
|
|
||||||
|
|
||||||
```markdown
|
|
||||||
# Code Structure Analysis: {Module/Directory Name}
|
|
||||||
|
|
||||||
## Project Structure
|
|
||||||
{Output from get_modules_by_depth.sh}
|
|
||||||
|
|
||||||
## File Inventory
|
|
||||||
- **Total Files**: {count}
|
|
||||||
- **Primary Language**: {language}
|
|
||||||
- **Key Directories**:
|
|
||||||
- `src/`: {brief description}
|
|
||||||
- `tests/`: {brief description}
|
|
||||||
|
|
||||||
## Component Discovery
|
|
||||||
### Classes ({count})
|
|
||||||
- {ClassName} - {file_path}:{line_number} - {brief description}
|
|
||||||
|
|
||||||
### Functions ({count})
|
|
||||||
- {functionName} - {file_path}:{line_number} - {brief description}
|
|
||||||
|
|
||||||
### Interfaces/Types ({count})
|
|
||||||
- {TypeName} - {file_path}:{line_number} - {brief description}
|
|
||||||
|
|
||||||
## Analysis Summary
|
|
||||||
- **Complexity**: {low|medium|high}
|
|
||||||
- **Architecture Style**: {pattern name}
|
|
||||||
- **Key Patterns**: {list}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Semantic Analysis Report
|
|
||||||
|
|
||||||
```markdown
|
|
||||||
# Deep Code Analysis: {Module/Directory Name}
|
|
||||||
|
|
||||||
## Executive Summary
|
|
||||||
{High-level findings from Gemini semantic analysis}
|
|
||||||
|
|
||||||
## Architectural Patterns
|
|
||||||
- **Primary Pattern**: {pattern name}
|
|
||||||
- **Layer Structure**: {layers identified}
|
|
||||||
- **Design Intent**: {extracted from comments/structure}
|
|
||||||
|
|
||||||
## Dual-Source Findings
|
|
||||||
|
|
||||||
### Bash Structural Scan Results
|
|
||||||
- **Standard Patterns Found**: {count}
|
|
||||||
- **Key Exports**: {list with file:line}
|
|
||||||
- **Import Structure**: {summary}
|
|
||||||
|
|
||||||
### Gemini Semantic Discoveries
|
|
||||||
- **Non-Standard Patterns**: {list with explanations}
|
|
||||||
- **Implicit Dependencies**: {list}
|
|
||||||
- **Design Intent Summary**: {paragraph}
|
|
||||||
- **Recommendations**: {list}
|
|
||||||
|
|
||||||
### Synthesis
|
|
||||||
{Merged understanding with attributed sources}
|
|
||||||
|
|
||||||
## Code Inventory (Attributed)
|
|
||||||
### Classes
|
|
||||||
- {ClassName} [{bash-discovered|gemini-discovered}]
|
|
||||||
- Location: {file}:{line}
|
|
||||||
- Purpose: {from semantic analysis}
|
|
||||||
- Pattern: {design pattern if applicable}
|
|
||||||
|
|
||||||
### Functions
|
|
||||||
- {functionName} [{source}]
|
|
||||||
- Location: {file}:{line}
|
|
||||||
- Role: {from semantic analysis}
|
|
||||||
- Callers: {list if known}
|
|
||||||
|
|
||||||
## Actionable Insights
|
|
||||||
1. {Finding with recommendation}
|
|
||||||
2. {Finding with recommendation}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Dependency Map Report
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"analysis_metadata": {
|
|
||||||
"project_root": "/path/to/project",
|
|
||||||
"timestamp": "2025-01-25T10:30:00Z",
|
|
||||||
"analysis_mode": "dependency-map",
|
|
||||||
"languages": ["typescript"]
|
|
||||||
},
|
|
||||||
"dependency_graph": {
|
|
||||||
"nodes": [
|
|
||||||
{
|
|
||||||
"id": "src/auth/service.ts",
|
|
||||||
"type": "module",
|
|
||||||
"exports": ["AuthService", "login", "logout"],
|
|
||||||
"imports_count": 3,
|
|
||||||
"dependents_count": 5,
|
|
||||||
"layer": "business-logic"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"edges": [
|
|
||||||
{
|
|
||||||
"from": "src/auth/controller.ts",
|
|
||||||
"to": "src/auth/service.ts",
|
|
||||||
"type": "direct-import",
|
|
||||||
"symbols": ["AuthService"]
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"circular_dependencies": [
|
|
||||||
{
|
|
||||||
"cycle": ["A.ts", "B.ts", "C.ts", "A.ts"],
|
|
||||||
"risk_level": "high",
|
|
||||||
"impact": "Refactoring A.ts requires changes to B.ts and C.ts"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"risk_assessment": {
|
|
||||||
"high_coupling": [
|
|
||||||
{
|
|
||||||
"module": "src/utils/helpers.ts",
|
|
||||||
"dependents_count": 23,
|
|
||||||
"risk": "Changes impact 23 modules"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"orphaned_modules": [
|
|
||||||
{
|
|
||||||
"module": "src/legacy/old_auth.ts",
|
|
||||||
"risk": "Dead code, candidate for removal"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"recommendations": [
|
|
||||||
"Break circular dependency between A.ts and B.ts by introducing interface abstraction",
|
|
||||||
"Refactor helpers.ts to reduce coupling (split into domain-specific utilities)"
|
|
||||||
]
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
## Execution Patterns
|
|
||||||
|
|
||||||
### Pattern 1: Quick Project Reconnaissance
|
|
||||||
|
|
||||||
**Trigger**: User asks "What's the structure of X module?" or "Where is X defined?"
|
|
||||||
|
|
||||||
**Execution**:
|
|
||||||
```
|
|
||||||
1. Run get_modules_by_depth.sh for structural overview
|
|
||||||
2. Use rg to find definitions: rg "(class|function|interface) X" -n
|
|
||||||
3. Generate structural overview report
|
|
||||||
4. Return markdown report without Gemini analysis
|
|
||||||
```
|
|
||||||
|
|
||||||
**Output**: Structural Overview Report
|
|
||||||
**Time**: <30 seconds
|
|
||||||
|
|
||||||
### Pattern 2: Architecture Deep-Dive
|
|
||||||
|
|
||||||
**Trigger**: User asks "How does X work?" or "Explain the architecture of X"
|
|
||||||
|
|
||||||
**Execution**:
|
|
||||||
```
|
|
||||||
1. Phase 1 (Bash): Scan for standard patterns (classes, functions, imports)
|
|
||||||
2. Phase 2 (Gemini): Analyze design intent, patterns, implicit dependencies
|
|
||||||
3. Phase 3 (Synthesis): Merge results with attribution
|
|
||||||
4. Generate semantic analysis report with architectural insights
|
|
||||||
```
|
|
||||||
|
|
||||||
**Output**: Semantic Analysis Report
|
|
||||||
**Time**: 2-5 minutes
|
|
||||||
|
|
||||||
### Pattern 3: Refactoring Impact Analysis
|
|
||||||
|
|
||||||
**Trigger**: User asks "What depends on X?" or "Impact of changing X?"
|
|
||||||
|
|
||||||
**Execution**:
|
|
||||||
```
|
|
||||||
1. Build dependency graph using rg for direct dependencies
|
|
||||||
2. Use Gemini to discover runtime/implicit dependencies
|
|
||||||
3. Detect circular dependencies and high-coupling modules
|
|
||||||
4. Calculate change risk scores
|
|
||||||
5. Generate dependency map report with recommendations
|
|
||||||
```
|
|
||||||
|
|
||||||
**Output**: Dependency Map Report (JSON + Markdown summary)
|
|
||||||
**Time**: 3-8 minutes
|
|
||||||
|
|
||||||
## Quality Assurance
|
|
||||||
|
|
||||||
### Validation Checks
|
|
||||||
|
|
||||||
**Completeness**:
|
|
||||||
- ✅ All requested analysis objectives addressed
|
|
||||||
- ✅ Key components inventoried with file:line locations
|
|
||||||
- ✅ Dual-source strategy applied (Bash + Gemini) for deep-scan mode
|
|
||||||
- ✅ Findings attributed to discovery source (bash/gemini)
|
|
||||||
|
|
||||||
**Accuracy**:
|
|
||||||
- ✅ File paths verified (exist and accessible)
|
|
||||||
- ✅ Line numbers accurate (cross-referenced with actual files)
|
|
||||||
- ✅ Code snippets match source (no fabrication)
|
|
||||||
- ✅ Dependency relationships validated (bidirectional checks)
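
One way to spot-check the first two accuracy items, assuming findings have been collected into a hypothetical `findings.txt` with one `file:line:description` entry per line:

```bash
# Verify each reported file exists and actually contains the cited line number
while IFS=: read -r file line _desc; do
  if [ ! -f "$file" ]; then
    echo "MISSING FILE: $file"
  elif [ "$(wc -l < "$file")" -lt "$line" ]; then
    echo "STALE LINE REF: $file:$line"
  fi
done < findings.txt
```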
|
|
||||||
|
|
||||||
**Actionability**:
|
|
||||||
- ✅ Recommendations specific and implementable
|
|
||||||
- ✅ Risk assessments quantified (low/medium/high with metrics)
|
|
||||||
- ✅ Next steps clearly defined
|
|
||||||
- ✅ No ambiguous findings (everything has file:line context)
|
|
||||||
|
|
||||||
### Error Recovery
|
|
||||||
|
|
||||||
**Common Issues**:
|
|
||||||
1. **Tool Unavailable** (rg, tree, Gemini CLI)
|
|
||||||
- Fallback chain: rg → grep, tree → ls -R, Gemini → Qwen → bash-only (see the sketch after this list)
|
|
||||||
- Report degraded capabilities in output
|
|
||||||
|
|
||||||
2. **Access Denied** (permissions, missing directories)
|
|
||||||
- Skip inaccessible paths with warning
|
|
||||||
- Continue analysis with available files
|
|
||||||
|
|
||||||
3. **Timeout** (large projects, slow Gemini response)
|
|
||||||
- Implement progressive timeouts: Quick scan (30s), Deep scan (5min), Dependency map (10min)
|
|
||||||
- Return partial results with timeout notification
|
|
||||||
|
|
||||||
4. **Ambiguous Patterns** (conflicting interpretations)
|
|
||||||
- Use Gemini semantic analysis as tiebreaker
|
|
||||||
- Document uncertainty in report with attribution
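
A minimal sketch of how the fallback chain (issue 1) and progressive timeouts (issue 3) could be wired together; the timeout values mirror the ones listed above, while the variable names and `run_analysis.sh` entry point are illustrative:

```bash
# Issue 1: pick the best available tools, degrade gracefully
if command -v rg >/dev/null 2>&1; then SEARCH="rg -n"; else SEARCH="grep -rn"; fi
if command -v tree >/dev/null 2>&1; then LIST="tree -L 3"; else LIST="ls -R"; fi

# Issue 3: progressive timeouts per analysis mode
case "$MODE" in
  quick-scan)     LIMIT=30s ;;
  deep-scan)      LIMIT=5m ;;
  dependency-map) LIMIT=10m ;;
esac

timeout "$LIMIT" bash run_analysis.sh   # hypothetical entry point
if [ $? -eq 124 ]; then
  echo "Analysis timed out after $LIMIT; returning partial results" >&2
fi
```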
|
|
||||||
|
|
||||||
## Available Tools & Services
|
|
||||||
|
|
||||||
This agent can leverage the following tools to enhance analysis:
|
|
||||||
|
|
||||||
**Context Search Agent** (`context-search-agent`):
|
|
||||||
- **Use Case**: Get project-wide context before analysis
|
|
||||||
- **When to use**: Need comprehensive project understanding beyond file structure
|
|
||||||
- **Integration**: Call context-search-agent first, then use results to guide exploration
|
|
||||||
|
|
||||||
**MCP Tools** (Code Index):
|
|
||||||
- **Use Case**: Enhanced file discovery and search capabilities
|
|
||||||
- **When to use**: Large codebases requiring fast pattern discovery
|
|
||||||
- **Integration**: Prefer Code Index MCP when available, fallback to rg/bash tools
|
|
||||||
|
|
||||||
## Key Reminders
|
|
||||||
|
|
||||||
### ALWAYS
|
|
||||||
|
|
||||||
**Analysis Integrity**: ✅ Read-only operations | ✅ No file modifications | ✅ No state persistence | ✅ Verify file paths before reporting
|
|
||||||
|
|
||||||
**Dual-Source Strategy** (Deep-Scan Mode): ✅ Execute Bash scan first (Phase 1) | ✅ Run Gemini analysis (Phase 2) | ✅ Synthesize with attribution (Phase 3) | ✅ Cross-validate findings
|
|
||||||
|
|
||||||
**Tool Chain**: ✅ Prefer Code Index MCP when available | ✅ Fallback to rg/bash tools | ✅ Use Gemini CLI for semantic analysis (Qwen as fallback) | ✅ Handle tool unavailability gracefully
|
|
||||||
|
|
||||||
**Output Standards**: ✅ Include file:line locations | ✅ Attribute findings to source (bash/gemini) | ✅ Provide actionable recommendations | ✅ Use standardized report formats
|
|
||||||
|
|
||||||
**Mode Selection**: ✅ Match mode to task intent (quick-scan for simple queries, deep-scan for architecture, dependency-map for refactoring) | ✅ Communicate mode choice to user
|
|
||||||
|
|
||||||
### NEVER
|
|
||||||
|
|
||||||
**File Operations**: ❌ Modify files | ❌ Create/delete files | ❌ Execute write operations | ❌ Run build/test commands that change state
|
|
||||||
|
|
||||||
**Analysis Scope**: ❌ Exceed requested scope | ❌ Analyze unrelated modules | ❌ Include irrelevant findings | ❌ Mix multiple unrelated queries
|
|
||||||
|
|
||||||
**Output Quality**: ❌ Fabricate code snippets | ❌ Guess file locations | ❌ Report unverified dependencies | ❌ Provide ambiguous recommendations without context
|
|
||||||
|
|
||||||
**Tool Usage**: ❌ Skip Bash scan in deep-scan mode | ❌ Use Gemini for quick-scan mode (overkill) | ❌ Ignore fallback chain when tool fails | ❌ Proceed with incomplete tool setup
|
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## Command Templates by Language

### TypeScript/JavaScript

```bash
# Quick structural scan
rg "^export (class|interface|type|function|const) " --type ts -n

# Find component definitions (React)
rg "^export (default )?(function|const) \w+.*=.*\(" --type tsx -n

# Find imports
rg "^import .* from ['\"](.+)['\"]" --type ts -o -r '$1'

# Find test files
find . -name "*.test.ts" -o -name "*.spec.ts" | grep -v node_modules
```

### Python

```bash
# Find class definitions
rg "^class \w+.*:" --type py -n

# Find function definitions
rg "^def \w+\(" --type py -n

# Find imports
rg "^(from .* import|import )" --type py -n

# Find test files
find . -name "test_*.py" -o -name "*_test.py"
```

### Go

```bash
# Find type definitions
rg "^type \w+ (struct|interface)" --type go -n

# Find function definitions
rg "^func (\(\w+ \*?\w+\) )?\w+\(" --type go -n

# Find imports
rg "^import \(" --type go -A 10

# Find test files
find . -name "*_test.go"
```

### Java

```bash
# Find class definitions
rg "^(public |private |protected )?(class|interface|enum) \w+" --type java -n

# Find method definitions
rg "^\s+(public |private |protected ).*\w+\(.*\)" --type java -n

# Find imports
rg "^import .*;" --type java -n

# Find test files
find . -name "*Test.java" -o -name "*Tests.java"
```

## 4-Phase Execution Workflow

```
Phase 1: Task Understanding
  ↓ Parse prompt for: analysis scope, output requirements, schema path
Phase 2: Analysis Execution
  ↓ Bash structural scan + Gemini semantic analysis (based on mode)
Phase 3: Schema Validation (MANDATORY if schema specified)
  ↓ Read schema → Extract EXACT field names → Validate structure
Phase 4: Output Generation
  ↓ Agent report + File output (strictly schema-compliant)
```

---

## Phase 1: Task Understanding

**Extract from prompt**:
- Analysis target and scope
- Analysis mode (quick-scan / deep-scan / dependency-map)
- Output file path (if specified)
- Schema file path (if specified)
- Additional requirements and constraints

**Determine analysis depth from prompt keywords**:
- Quick lookup, structure overview → quick-scan
- Deep analysis, design intent, architecture → deep-scan
- Dependencies, impact analysis, coupling → dependency-map

---

## Phase 2: Analysis Execution

### Available Tools

- `Read()` - Load package.json, requirements.txt, pyproject.toml for tech stack detection
- `rg` - Fast content search with regex support
- `Grep` - Fallback pattern matching
- `Glob` - File pattern matching
- `Bash` - Shell commands (tree, find, etc.)

### Bash Structural Scan

```bash
# Project structure
ccw tool exec get_modules_by_depth '{}'

# Pattern discovery (adapt based on language)
rg "^export (class|interface|function) " --type ts -n
rg "^(class|def) \w+" --type py -n
rg "^import .* from " -n | head -30
```

### Gemini Semantic Analysis (deep-scan, dependency-map)

```bash
cd {dir} && gemini -p "
PURPOSE: {from prompt}
TASK: {from prompt}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {from prompt}
RULES: {from prompt, if template specified} | analysis=READ-ONLY
"
```

**Fallback Chain**: Gemini → Qwen → Codex → Bash-only

### Dual-Source Synthesis

1. Bash results: Precise file:line locations
2. Gemini results: Semantic understanding, design intent
3. Merge with source attribution (bash-discovered | gemini-discovered)

---

## Phase 3: Schema Validation

### ⚠️ CRITICAL: Schema Compliance Protocol

**This phase is MANDATORY when schema file is specified in prompt.**

**Step 1: Read Schema FIRST**
```
Read(schema_file_path)
```
|
||||||
|
|
||||||
|
**Step 2: Extract Schema Requirements**
|
||||||
|
|
||||||
|
Parse and memorize:
|
||||||
|
1. **Root structure** - Is it array `[...]` or object `{...}`?
|
||||||
|
2. **Required fields** - List all `"required": [...]` arrays
|
||||||
|
3. **Field names EXACTLY** - Copy character-by-character (case-sensitive)
|
||||||
|
4. **Enum values** - Copy exact strings (e.g., `"critical"` not `"Critical"`)
|
||||||
|
5. **Nested structures** - Note flat vs nested requirements
|
||||||
|
|
||||||
|
**Step 3: Pre-Output Validation Checklist**
|
||||||
|
|
||||||
|
Before writing ANY JSON output, verify:
|
||||||
|
|
||||||
|
- [ ] Root structure matches schema (array vs object)
|
||||||
|
- [ ] ALL required fields present at each level
|
||||||
|
- [ ] Field names EXACTLY match schema (character-by-character)
|
||||||
|
- [ ] Enum values EXACTLY match schema (case-sensitive)
|
||||||
|
- [ ] Nested structures follow schema pattern (flat vs nested)
|
||||||
|
- [ ] Data types correct (string, integer, array, object)
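
A hedged sketch of how parts of this checklist could be automated with `jq`, assuming the schema is ordinary JSON Schema and the generated output has an object root (adapt `keys` for array roots); the `severity` field is just an example name:

```bash
SCHEMA="$schema_file_path"   # as given in the prompt
OUTPUT="$output_file_path"   # the generated JSON

# Root structure: array vs object
jq -r '.type // "object"' "$SCHEMA"
jq -r 'type' "$OUTPUT"

# Top-level required fields missing from the output
jq -r '.required[]?' "$SCHEMA" | sort > /tmp/required.txt
jq -r 'keys[]' "$OUTPUT" | sort > /tmp/present.txt
comm -23 /tmp/required.txt /tmp/present.txt   # any line printed = missing required field

# Allowed enum values for an example field (must match case-sensitively)
jq -r '.properties.severity.enum[]?' "$SCHEMA"
```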
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Phase 4: Output Generation
|
||||||
|
|
||||||
|
### Agent Output (return to caller)
|
||||||
|
|
||||||
|
Brief summary:
|
||||||
|
- Task completion status
|
||||||
|
- Key findings summary
|
||||||
|
- Generated file paths (if any)
|
||||||
|
|
||||||
|
### File Output (as specified in prompt)
|
||||||
|
|
||||||
|
**⚠️ MANDATORY WORKFLOW**:
|
||||||
|
|
||||||
|
1. `Read()` schema file BEFORE generating output
|
||||||
|
2. Extract ALL field names from schema
|
||||||
|
3. Build JSON using ONLY schema field names
|
||||||
|
4. Validate against checklist before writing
|
||||||
|
5. Write file with validated content
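
A small sketch of steps 4-5, assuming the validated output is JSON; `build_output` and the path variable are placeholders, not part of this workflow's tooling:

```bash
# Build, syntax-check, then atomically publish the output file
TMP=$(mktemp)
build_output > "$TMP"   # placeholder for the schema-compliant generation step
jq empty "$TMP" || { echo "Invalid JSON, not writing output" >&2; exit 1; }
mv "$TMP" "$output_file_path"   # path as specified in the prompt
```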
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Error Handling
|
||||||
|
|
||||||
|
**Tool Fallback**: Gemini → Qwen → Codex → Bash-only
|
||||||
|
|
||||||
|
**Schema Validation Failure**: Identify error → Correct → Re-validate
|
||||||
|
|
||||||
|
**Timeout**: Return partial results + timeout notification
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Key Reminders
|
||||||
|
|
||||||
|
**ALWAYS**:
|
||||||
|
1. Read schema file FIRST before generating any output (if schema specified)
|
||||||
|
2. Copy field names EXACTLY from schema (case-sensitive)
|
||||||
|
3. Verify root structure matches schema (array vs object)
|
||||||
|
4. Match nested/flat structures as schema requires
|
||||||
|
5. Use exact enum values from schema (case-sensitive)
|
||||||
|
6. Include ALL required fields at every level
|
||||||
|
7. Include file:line references in findings
|
||||||
|
8. Attribute discovery source (bash/gemini)
|
||||||
|
|
||||||
|
**NEVER**:
|
||||||
|
1. Modify any files (read-only agent)
|
||||||
|
2. Skip schema reading step when schema is specified
|
||||||
|
3. Guess field names - ALWAYS copy from schema
|
||||||
|
4. Assume structure - ALWAYS verify against schema
|
||||||
|
5. Omit required fields
|
||||||
|
|||||||
File diff suppressed because it is too large
@@ -66,8 +66,7 @@ You are a specialized execution agent that bridges CLI analysis tools with task
|
|||||||
"task_config": {
|
"task_config": {
|
||||||
"agent": "@test-fix-agent",
|
"agent": "@test-fix-agent",
|
||||||
"type": "test-fix-iteration",
|
"type": "test-fix-iteration",
|
||||||
"max_iterations": 5,
|
"max_iterations": 5
|
||||||
"use_codex": false
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
@@ -263,7 +262,6 @@ function extractModificationPoints() {
|
|||||||
"analysis_report": ".process/iteration-{iteration}-analysis.md",
|
"analysis_report": ".process/iteration-{iteration}-analysis.md",
|
||||||
"cli_output": ".process/iteration-{iteration}-cli-output.txt",
|
"cli_output": ".process/iteration-{iteration}-cli-output.txt",
|
||||||
"max_iterations": "{task_config.max_iterations}",
|
"max_iterations": "{task_config.max_iterations}",
|
||||||
"use_codex": "{task_config.use_codex}",
|
|
||||||
"parent_task": "{parent_task_id}",
|
"parent_task": "{parent_task_id}",
|
||||||
"created_by": "@cli-planning-agent",
|
"created_by": "@cli-planning-agent",
|
||||||
"created_at": "{timestamp}"
|
"created_at": "{timestamp}"
|
||||||
|
|||||||
@@ -24,8 +24,6 @@ You are a code execution specialist focused on implementing high-quality, produc
|
|||||||
- **Context-driven** - Use provided context and existing code patterns
|
- **Context-driven** - Use provided context and existing code patterns
|
||||||
- **Quality over speed** - Write boring, reliable code that works
|
- **Quality over speed** - Write boring, reliable code that works
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
## Execution Process
|
## Execution Process
|
||||||
|
|
||||||
### 1. Context Assessment
|
### 1. Context Assessment
|
||||||
@@ -35,7 +33,7 @@ You are a code execution specialist focused on implementing high-quality, produc
|
|||||||
- Project CLAUDE.md standards
|
- Project CLAUDE.md standards
|
||||||
- **context-package.json** (when available in workflow tasks)
|
- **context-package.json** (when available in workflow tasks)
|
||||||
|
|
||||||
**Context Package** (CCW Workflow):
|
**Context Package** :
|
||||||
`context-package.json` provides artifact paths - extract dynamically using `jq`:
|
`context-package.json` provides artifact paths - extract dynamically using `jq`:
|
||||||
```bash
|
```bash
|
||||||
# Get role analysis paths from context package
|
# Get role analysis paths from context package
|
||||||
|
|||||||
@@ -119,17 +119,6 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
|
|||||||
- No dependency management
|
- No dependency management
|
||||||
- Used for temporary context preparation
|
- Used for temporary context preparation
|
||||||
|
|
||||||
### NOT Handled by This Agent
|
|
||||||
|
|
||||||
**JSON format** (used by code-developer, test-fix-agent):
|
|
||||||
```json
|
|
||||||
"flow_control": {
|
|
||||||
"pre_analysis": [...],
|
|
||||||
"implementation_approach": [...]
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
This complete JSON format is stored in `.task/IMPL-*.json` files and handled by implementation agents, not conceptual-planning-agent.
|
|
||||||
|
|
||||||
### Role-Specific Analysis Dimensions
|
### Role-Specific Analysis Dimensions
|
||||||
|
|
||||||
@@ -146,14 +135,14 @@ This complete JSON format is stored in `.task/IMPL-*.json` files and handled by
|
|||||||
|
|
||||||
### Output Integration
|
### Output Integration
|
||||||
|
|
||||||
**Gemini Analysis Integration**: Pattern-based analysis results are integrated into the single role's output:
|
**Gemini Analysis Integration**: Pattern-based analysis results are integrated into role output documents:
|
||||||
- Enhanced `analysis.md` with codebase insights and architectural patterns
|
- Enhanced analysis documents with codebase insights and architectural patterns
|
||||||
- Role-specific technical recommendations based on existing conventions
|
- Role-specific technical recommendations based on existing conventions
|
||||||
- Pattern-based best practices from actual code examination
|
- Pattern-based best practices from actual code examination
|
||||||
- Realistic feasibility assessments based on current implementation
|
- Realistic feasibility assessments based on current implementation
|
||||||
|
|
||||||
**Codex Analysis Integration**: Autonomous analysis results provide comprehensive insights:
|
**Codex Analysis Integration**: Autonomous analysis results provide comprehensive insights:
|
||||||
- Enhanced `analysis.md` with autonomous development recommendations
|
- Enhanced analysis documents with autonomous development recommendations
|
||||||
- Role-specific strategy based on intelligent system understanding
|
- Role-specific strategy based on intelligent system understanding
|
||||||
- Autonomous development approaches and implementation guidance
|
- Autonomous development approaches and implementation guidance
|
||||||
- Self-guided optimization and integration recommendations
|
- Self-guided optimization and integration recommendations
|
||||||
@@ -229,26 +218,23 @@ Generate documents according to loaded role template specifications:
|
|||||||
|
|
||||||
**Output Location**: `.workflow/WFS-[session]/.brainstorming/[assigned-role]/`
|
**Output Location**: `.workflow/WFS-[session]/.brainstorming/[assigned-role]/`
|
||||||
|
|
||||||
**Required Files**:
|
**Output Files**:
|
||||||
- **analysis.md**: Main role perspective analysis incorporating user context and role template
|
- **analysis.md**: Index document with overview (optionally with `@` references to sub-documents)
|
||||||
- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
|
|
||||||
- **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
|
- **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
|
||||||
- **Auto-split if large**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files: analysis.md, analysis-1.md, analysis-2.md)
|
- **analysis-{slug}.md**: Section content documents (slug from section heading: lowercase, hyphens)
|
||||||
- **Content**: Includes both analysis AND recommendations sections within analysis files
|
- Maximum 5 sub-documents (merge related sections if needed)
|
||||||
- **[role-deliverables]/**: Directory for specialized role outputs as defined in planning role template (optional)
|
- **Content**: Analysis AND recommendations sections
|
||||||
|
|
||||||
**File Structure Example**:
|
**File Structure Example**:
|
||||||
```
|
```
|
||||||
.workflow/WFS-[session]/.brainstorming/system-architect/
|
.workflow/WFS-[session]/.brainstorming/system-architect/
|
||||||
├── analysis.md # Main system architecture analysis with recommendations
|
├── analysis.md # Index with overview + @references
|
||||||
├── analysis-1.md # (Optional) Continuation if content >800 lines
|
├── analysis-architecture-assessment.md # Section content
|
||||||
└── deliverables/ # (Optional) Additional role-specific outputs
|
├── analysis-technology-evaluation.md # Section content
|
||||||
├── technical-architecture.md # System design specifications
|
├── analysis-integration-strategy.md # Section content
|
||||||
├── technology-stack.md # Technology selection rationale
|
└── analysis-recommendations.md # Section content (max 5 sub-docs total)
|
||||||
└── scalability-plan.md # Scaling strategy
|
|
||||||
|
|
||||||
NOTE: ALL brainstorming output files MUST start with 'analysis' prefix
|
NOTE: ALL files MUST start with 'analysis' prefix. Max 5 sub-documents.
|
||||||
FORBIDDEN: recommendations.md, recommendations-*.md, or any non-'analysis' prefixed files
|
|
||||||
```
|
```
|
||||||
|
|
||||||
## Role-Specific Planning Process
|
## Role-Specific Planning Process
|
||||||
@@ -268,14 +254,10 @@ FORBIDDEN: recommendations.md, recommendations-*.md, or any non-'analysis' prefi
|
|||||||
- **Validate Against Template**: Ensure analysis meets role template requirements and standards
|
- **Validate Against Template**: Ensure analysis meets role template requirements and standards
|
||||||
|
|
||||||
### 3. Brainstorming Documentation Phase
|
### 3. Brainstorming Documentation Phase
|
||||||
- **Create analysis.md**: Generate comprehensive role perspective analysis in designated output directory
|
- **Create analysis.md**: Main document with overview (optionally with `@` references)
|
||||||
- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
|
- **Create sub-documents**: `analysis-{slug}.md` for major sections (max 5)
|
||||||
- **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
|
|
||||||
- **Content**: Include both analysis AND recommendations sections within analysis files
|
|
||||||
- **Auto-split**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files total)
|
|
||||||
- **Generate Role Deliverables**: Create specialized outputs as defined in planning role template (optional)
|
|
||||||
- **Validate Output Structure**: Ensure all files saved to correct `.brainstorming/[role]/` directory
|
- **Validate Output Structure**: Ensure all files saved to correct `.brainstorming/[role]/` directory
|
||||||
- **Naming Validation**: Verify NO files with `recommendations` prefix exist
|
- **Naming Validation**: Verify ALL files start with `analysis` prefix
|
||||||
- **Quality Review**: Ensure outputs meet role template standards and user requirements
|
- **Quality Review**: Ensure outputs meet role template standards and user requirements
|
||||||
|
|
||||||
## Role-Specific Analysis Framework
|
## Role-Specific Analysis Framework
|
||||||
@@ -324,5 +306,3 @@ When analysis is complete, ensure:
|
|||||||
- **Relevance**: Directly addresses user's specified requirements
|
- **Relevance**: Directly addresses user's specified requirements
|
||||||
- **Actionability**: Provides concrete next steps and recommendations
|
- **Actionability**: Provides concrete next steps and recommendations
|
||||||
|
|
||||||
### Windows Path Format Guidelines
|
|
||||||
- **Quick Ref**: `C:\Users` → MCP: `C:\\Users` | Bash: `/c/Users` or `C:/Users`
|
|
||||||
|
|||||||
@@ -31,7 +31,7 @@ You are a context discovery specialist focused on gathering relevant project inf
|
|||||||
### 1. Reference Documentation (Project Standards)
|
### 1. Reference Documentation (Project Standards)
|
||||||
**Tools**:
|
**Tools**:
|
||||||
- `Read()` - Load CLAUDE.md, README.md, architecture docs
|
- `Read()` - Load CLAUDE.md, README.md, architecture docs
|
||||||
- `Bash(~/.claude/scripts/get_modules_by_depth.sh)` - Project structure
|
- `Bash(ccw tool exec get_modules_by_depth '{}')` - Project structure
|
||||||
- `Glob()` - Find documentation files
|
- `Glob()` - Find documentation files
|
||||||
|
|
||||||
**Use**: Phase 0 foundation setup
|
**Use**: Phase 0 foundation setup
|
||||||
@@ -82,7 +82,7 @@ mcp__code-index__set_project_path(process.cwd())
|
|||||||
mcp__code-index__refresh_index()
|
mcp__code-index__refresh_index()
|
||||||
|
|
||||||
// 2. Project Structure
|
// 2. Project Structure
|
||||||
bash(~/.claude/scripts/get_modules_by_depth.sh)
|
bash(ccw tool exec get_modules_by_depth '{}')
|
||||||
|
|
||||||
// 3. Load Documentation (if not in memory)
|
// 3. Load Documentation (if not in memory)
|
||||||
if (!memory.has("CLAUDE.md")) Read(CLAUDE.md)
|
if (!memory.has("CLAUDE.md")) Read(CLAUDE.md)
|
||||||
@@ -100,7 +100,87 @@ if (!memory.has("README.md")) Read(README.md)
|
|||||||
|
|
||||||
### Phase 2: Multi-Source Context Discovery
|
### Phase 2: Multi-Source Context Discovery
|
||||||
|
|
||||||
Execute all 3 tracks in parallel for comprehensive coverage.
|
Execute all tracks in parallel for comprehensive coverage.
|
||||||
|
|
||||||
|
**Note**: Historical archive analysis (querying `.workflow/archives/manifest.json`) is optional and should be performed if the manifest exists. Inject findings into `conflict_detection.historical_conflicts[]`.
|
||||||
|
|
||||||
|
#### Track 0: Exploration Synthesis (Optional)
|
||||||
|
|
||||||
|
**Trigger**: When `explorations-manifest.json` exists in session `.process/` folder
|
||||||
|
|
||||||
|
**Purpose**: Transform raw exploration data into prioritized, deduplicated insights. This is NOT simple aggregation - it synthesizes `critical_files` (priority-ranked), deduplicates patterns/integration_points, and generates `conflict_indicators`.
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
// Check for exploration results from context-gather parallel explore phase
|
||||||
|
const manifestPath = `.workflow/active/${session_id}/.process/explorations-manifest.json`;
|
||||||
|
if (file_exists(manifestPath)) {
|
||||||
|
const manifest = JSON.parse(Read(manifestPath));
|
||||||
|
|
||||||
|
// Load full exploration data from each file
|
||||||
|
const explorationData = manifest.explorations.map(exp => ({
|
||||||
|
...exp,
|
||||||
|
data: JSON.parse(Read(exp.path))
|
||||||
|
}));
|
||||||
|
|
||||||
|
// Build explorations array with summaries
|
||||||
|
const explorations = explorationData.map(exp => ({
|
||||||
|
angle: exp.angle,
|
||||||
|
file: exp.file,
|
||||||
|
path: exp.path,
|
||||||
|
index: exp.data._metadata?.exploration_index || exp.index,
|
||||||
|
summary: {
|
||||||
|
relevant_files_count: exp.data.relevant_files?.length || 0,
|
||||||
|
key_patterns: exp.data.patterns,
|
||||||
|
integration_points: exp.data.integration_points
|
||||||
|
}
|
||||||
|
}));
|
||||||
|
|
||||||
|
// SYNTHESIS (not aggregation): Transform raw data into prioritized insights
|
||||||
|
const aggregated_insights = {
|
||||||
|
// CRITICAL: Synthesize priority-ranked critical_files from multiple relevant_files lists
|
||||||
|
// - Deduplicate by path
|
||||||
|
// - Rank by: mention count across angles + individual relevance scores
|
||||||
|
// - Top 10-15 files only (focused, actionable)
|
||||||
|
critical_files: synthesizeCriticalFiles(explorationData.flatMap(e => e.data.relevant_files || [])),
|
||||||
|
|
||||||
|
// SYNTHESIS: Generate conflict indicators from pattern mismatches, constraint violations
|
||||||
|
conflict_indicators: synthesizeConflictIndicators(explorationData),
|
||||||
|
|
||||||
|
// Deduplicate clarification questions (merge similar questions)
|
||||||
|
clarification_needs: deduplicateQuestions(explorationData.flatMap(e => e.data.clarification_needs || [])),
|
||||||
|
|
||||||
|
// Preserve source attribution for traceability
|
||||||
|
constraints: explorationData.map(e => ({ constraint: e.data.constraints, source_angle: e.angle })).filter(c => c.constraint),
|
||||||
|
|
||||||
|
// Deduplicate patterns across angles (merge identical patterns)
|
||||||
|
all_patterns: deduplicatePatterns(explorationData.map(e => ({ patterns: e.data.patterns, source_angle: e.angle }))),
|
||||||
|
|
||||||
|
// Deduplicate integration points (merge by file:line location)
|
||||||
|
all_integration_points: deduplicateIntegrationPoints(explorationData.map(e => ({ points: e.data.integration_points, source_angle: e.angle })))
|
||||||
|
};
|
||||||
|
|
||||||
|
// Store for Phase 3 packaging
|
||||||
|
exploration_results = { manifest_path: manifestPath, exploration_count: manifest.exploration_count,
|
||||||
|
complexity: manifest.complexity, angles: manifest.angles_explored,
|
||||||
|
explorations, aggregated_insights };
|
||||||
|
}
|
||||||
|
|
||||||
|
// Synthesis helper functions (conceptual)
|
||||||
|
function synthesizeCriticalFiles(allRelevantFiles) {
|
||||||
|
// 1. Group by path
|
||||||
|
// 2. Count mentions across angles
|
||||||
|
// 3. Average relevance scores
|
||||||
|
// 4. Rank by: (mention_count * 0.6) + (avg_relevance * 0.4)
|
||||||
|
// 5. Return top 10-15 with mentioned_by_angles attribution
|
||||||
|
}
|
||||||
|
|
||||||
|
function synthesizeConflictIndicators(explorationData) {
|
||||||
|
// 1. Detect pattern mismatches across angles
|
||||||
|
// 2. Identify constraint violations
|
||||||
|
// 3. Flag files mentioned with conflicting integration approaches
|
||||||
|
// 4. Assign severity: critical/high/medium/low
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
#### Track 1: Reference Documentation
|
#### Track 1: Reference Documentation
|
||||||
|
|
||||||
@@ -369,7 +449,12 @@ Calculate risk level based on:
|
|||||||
{
|
{
|
||||||
"path": "system-architect/analysis.md",
|
"path": "system-architect/analysis.md",
|
||||||
"type": "primary",
|
"type": "primary",
|
||||||
"content": "# System Architecture Analysis\n\n## Overview\n..."
|
"content": "# System Architecture Analysis\n\n## Overview\n@analysis-architecture.md\n@analysis-recommendations.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"path": "system-architect/analysis-architecture.md",
|
||||||
|
"type": "supplementary",
|
||||||
|
"content": "# Architecture Assessment\n\n..."
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
}
|
}
|
||||||
@@ -391,33 +476,40 @@ Calculate risk level based on:
|
|||||||
},
|
},
|
||||||
"affected_modules": ["auth", "user-model", "middleware"],
|
"affected_modules": ["auth", "user-model", "middleware"],
|
||||||
"mitigation_strategy": "Incremental refactoring with backward compatibility"
|
"mitigation_strategy": "Incremental refactoring with backward compatibility"
|
||||||
|
},
|
||||||
|
"exploration_results": {
|
||||||
|
"manifest_path": ".workflow/active/{session}/.process/explorations-manifest.json",
|
||||||
|
"exploration_count": 3,
|
||||||
|
"complexity": "Medium",
|
||||||
|
"angles": ["architecture", "dependencies", "testing"],
|
||||||
|
"explorations": [
|
||||||
|
{
|
||||||
|
"angle": "architecture",
|
||||||
|
"file": "exploration-architecture.json",
|
||||||
|
"path": ".workflow/active/{session}/.process/exploration-architecture.json",
|
||||||
|
"index": 1,
|
||||||
|
"summary": {
|
||||||
|
"relevant_files_count": 5,
|
||||||
|
"key_patterns": "Service layer with DI",
|
||||||
|
"integration_points": "Container.registerService:45-60"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"aggregated_insights": {
|
||||||
|
"critical_files": [{"path": "src/auth/AuthService.ts", "relevance": 0.95, "mentioned_by_angles": ["architecture"]}],
|
||||||
|
"conflict_indicators": [{"type": "pattern_mismatch", "description": "...", "source_angle": "architecture", "severity": "medium"}],
|
||||||
|
"clarification_needs": [{"question": "...", "context": "...", "options": [], "source_angle": "architecture"}],
|
||||||
|
"constraints": [{"constraint": "Must follow existing DI pattern", "source_angle": "architecture"}],
|
||||||
|
"all_patterns": [{"patterns": "Service layer with DI", "source_angle": "architecture"}],
|
||||||
|
"all_integration_points": [{"points": "Container.registerService:45-60", "source_angle": "architecture"}]
|
||||||
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
## Execution Mode: Brainstorm vs Plan
|
**Note**: `exploration_results` is populated when exploration files exist (from context-gather parallel explore phase). If no explorations, this field is omitted or empty.
|
||||||
|
|
||||||
### Brainstorm Mode (Lightweight)
|
|
||||||
**Purpose**: Provide high-level context for generating brainstorming questions
|
|
||||||
**Execution**: Phase 1-2 only (skip deep analysis)
|
|
||||||
**Output**:
|
|
||||||
- Lightweight context-package with:
|
|
||||||
- Project structure overview
|
|
||||||
- Tech stack identification
|
|
||||||
- High-level existing module names
|
|
||||||
- Basic conflict risk (file count only)
|
|
||||||
- Skip: Detailed dependency graphs, deep code analysis, web research
|
|
||||||
|
|
||||||
### Plan Mode (Comprehensive)
|
|
||||||
**Purpose**: Detailed implementation planning with conflict detection
|
|
||||||
**Execution**: Full Phase 1-3 (complete discovery + analysis)
|
|
||||||
**Output**:
|
|
||||||
- Comprehensive context-package with:
|
|
||||||
- Detailed dependency graphs
|
|
||||||
- Deep code structure analysis
|
|
||||||
- Conflict detection with mitigation strategies
|
|
||||||
- Web research for unfamiliar tech
|
|
||||||
- Include: All discovery tracks, relevance scoring, 3-source synthesis
|
|
||||||
|
|
||||||
## Quality Validation
|
## Quality Validation
|
||||||
|
|
||||||
|
|||||||
@@ -8,7 +8,7 @@ You are a documentation update coordinator for complex projects. Orchestrate par
|
|||||||
|
|
||||||
## Core Mission
|
## Core Mission
|
||||||
|
|
||||||
Execute depth-parallel updates for all modules using `~/.claude/scripts/update_module_claude.sh`. **Every module path must be processed**.
|
Execute depth-parallel updates for all modules using `ccw tool exec update_module_claude`. **Every module path must be processed**.
|
||||||
|
|
||||||
## Input Context
|
## Input Context
|
||||||
|
|
||||||
@@ -42,12 +42,12 @@ TodoWrite([
|
|||||||
# 3. Launch parallel jobs (max 4)
|
# 3. Launch parallel jobs (max 4)
|
||||||
|
|
||||||
# Depth 5 example (Layer 3 - use multi-layer):
|
# Depth 5 example (Layer 3 - use multi-layer):
|
||||||
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/analysis" "gemini" &
|
ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/analysis","tool":"gemini"}' &
|
||||||
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/development" "gemini" &
|
ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/development","tool":"gemini"}' &
|
||||||
|
|
||||||
# Depth 1 example (Layer 2 - use single-layer):
|
# Depth 1 example (Layer 2 - use single-layer):
|
||||||
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/auth" "gemini" &
|
ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/auth","tool":"gemini"}' &
|
||||||
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/api" "gemini" &
|
ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/api","tool":"gemini"}' &
|
||||||
# ... up to 4 concurrent jobs
|
# ... up to 4 concurrent jobs
|
||||||
|
|
||||||
# 4. Wait for all depth jobs to complete
|
# 4. Wait for all depth jobs to complete
|
||||||
|
|||||||
@@ -397,23 +397,3 @@ function detect_framework_from_config() {
|
|||||||
- ✅ All missing tests catalogued with priority
|
- ✅ All missing tests catalogued with priority
|
||||||
- ✅ Execution time < 30 seconds (< 60s for large codebases)
|
- ✅ Execution time < 30 seconds (< 60s for large codebases)
|
||||||
|
|
||||||
## Integration Points
|
|
||||||
|
|
||||||
### Called By
|
|
||||||
- `/workflow:tools:test-context-gather` - Orchestrator command
|
|
||||||
|
|
||||||
### Calls
|
|
||||||
- Code-Index MCP tools (preferred)
|
|
||||||
- ripgrep/find (fallback)
|
|
||||||
- Bash file operations
|
|
||||||
|
|
||||||
### Followed By
|
|
||||||
- `/workflow:tools:test-concept-enhanced` - Test generation analysis
|
|
||||||
|
|
||||||
## Notes
|
|
||||||
|
|
||||||
- **Detection-first**: Always check for existing test-context-package before analysis
|
|
||||||
- **Code-Index priority**: Use MCP tools when available, fallback to CLI
|
|
||||||
- **Framework agnostic**: Supports Jest, Mocha, pytest, RSpec, etc.
|
|
||||||
- **Coverage gap focus**: Primary goal is identifying missing tests
|
|
||||||
- **Source context critical**: Implementation summaries guide test generation
|
|
||||||
|
|||||||
@@ -142,9 +142,9 @@ run_test_layer "L1-unit" "$UNIT_CMD"
|
|||||||
|
|
||||||
### 3. Failure Diagnosis & Fixing Loop
|
### 3. Failure Diagnosis & Fixing Loop
|
||||||
|
|
||||||
**Execution Modes**:
|
**Execution Modes** (determined by `flow_control.implementation_approach`):
|
||||||
|
|
||||||
**A. Manual Mode (Default, meta.use_codex=false)**:
|
**A. Agent Mode (Default, no `command` field in steps)**:
|
||||||
```
|
```
|
||||||
WHILE tests are failing AND iterations < max_iterations:
|
WHILE tests are failing AND iterations < max_iterations:
|
||||||
1. Use Gemini to diagnose failure (bug-fix template)
|
1. Use Gemini to diagnose failure (bug-fix template)
|
||||||
@@ -155,17 +155,17 @@ WHILE tests are failing AND iterations < max_iterations:
|
|||||||
END WHILE
|
END WHILE
|
||||||
```
|
```
|
||||||
|
|
||||||
**B. Codex Mode (meta.use_codex=true)**:
|
**B. CLI Mode (`command` field present in implementation_approach steps)**:
|
||||||
```
|
```
|
||||||
WHILE tests are failing AND iterations < max_iterations:
|
WHILE tests are failing AND iterations < max_iterations:
|
||||||
1. Use Gemini to diagnose failure (bug-fix template)
|
1. Use Gemini to diagnose failure (bug-fix template)
|
||||||
2. Use Codex to apply fixes automatically with resume mechanism
|
2. Execute `command` field (e.g., Codex) to apply fixes automatically
|
||||||
3. Re-run test suite
|
3. Re-run test suite
|
||||||
4. Verify fix doesn't break other tests
|
4. Verify fix doesn't break other tests
|
||||||
END WHILE
|
END WHILE
|
||||||
```
|
```
|
||||||
|
|
||||||
**Codex Resume in Test-Fix Cycle** (when `meta.use_codex=true`):
|
**Codex Resume in Test-Fix Cycle** (when step has `command` with Codex):
|
||||||
- First iteration: Start new Codex session with full context
|
- First iteration: Start new Codex session with full context
|
||||||
- Subsequent iterations: Use `resume --last` to maintain fix history and apply consistent strategies
|
- Subsequent iterations: Use `resume --last` to maintain fix history and apply consistent strategies
|
||||||
|
|
||||||
@@ -331,6 +331,8 @@ When generating test results for orchestrator (saved to `.process/test-results.j
|
|||||||
- Break existing passing tests
|
- Break existing passing tests
|
||||||
- Skip final verification
|
- Skip final verification
|
||||||
- Leave tests failing - must achieve 100% pass rate
|
- Leave tests failing - must achieve 100% pass rate
|
||||||
|
- Use `run_in_background` for Bash() commands - always set `run_in_background=false` to ensure tests run in foreground for proper output capture
|
||||||
|
- Use complex bash pipe chains (`cmd | grep | awk | sed`) - prefer dedicated tools (Read, Grep, Glob) for file operations and content extraction; simple single-pipe commands are acceptable when necessary
|
||||||
|
|
||||||
## Quality Certification
|
## Quality Certification
|
||||||
|
|
||||||
|
|||||||
@@ -217,11 +217,6 @@ You execute 6 distinct task types organized into 3 patterns. Each task includes
|
|||||||
|
|
||||||
### Structure Optimization
|
### Structure Optimization
|
||||||
|
|
||||||
**Layout Structure Benefits**:
|
|
||||||
- Eliminates redundancy between structure and styling
|
|
||||||
- Layout properties co-located with DOM elements
|
|
||||||
- Responsive overrides apply directly to affected elements
|
|
||||||
- Single source of truth for each element
|
|
||||||
|
|
||||||
**Component State Coverage**:
|
**Component State Coverage**:
|
||||||
- Interactive components (button, input, dropdown) MUST define: default, hover, focus, active, disabled
|
- Interactive components (button, input, dropdown) MUST define: default, hover, focus, active, disabled
|
||||||
@@ -323,270 +318,21 @@ You execute 6 distinct task types organized into 3 patterns. Each task includes

### design-tokens.json

**Template Reference**: `~/.claude/workflows/cli-templates/ui-design/systems/design-tokens.json`

**Format**: W3C Design Tokens Community Group Specification

**Structure Overview**:
- **color**: Base colors, interactive states (primary, secondary, accent, destructive), muted, chart, sidebar
- **typography**: Font families, sizes, line heights, letter spacing, combinations
- **spacing**: Systematic scale (0-64, multiples of 0.25rem)
- **opacity**: disabled, hover, active
- **shadows**: 2xs to 2xl (8-tier system)
- **border_radius**: sm to xl + DEFAULT
- **breakpoints**: sm to 2xl
- **component**: 12+ components with base, size, variant, state structures
- **elevation**: z-index values for layered components
- **_metadata**: version, created, source, theme_colors_guide, conflicts, code_snippets, usage_recommendations

**Required Components** (12+ components, use pattern above):
- **button**: 5 variants (primary, secondary, destructive, outline, ghost) + 3 sizes + states (default, hover, active, disabled, focus)
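To make the overview concrete, a minimal token-set excerpt in this W3C-style format could look like the sketch below (the color values and the template's exact contents are assumptions; only the field shape follows the structure described above):

```json
{
  "$schema": "https://tr.designtokens.org/format/",
  "name": "example-tokens",
  "color": {
    "background": { "$type": "color", "$value": { "light": "oklch(0.98 0.01 250)", "dark": "oklch(0.15 0.02 250)" } },
    "interactive": {
      "primary": {
        "default": { "$type": "color", "$value": { "light": "oklch(0.55 0.20 250)", "dark": "oklch(0.70 0.18 250)" } },
        "hover":   { "$type": "color", "$value": { "light": "oklch(0.50 0.20 250)", "dark": "oklch(0.75 0.18 250)" } }
      }
    }
  },
  "spacing": { "0": "0", "1": "0.25rem", "2": "0.5rem", "4": "1rem" },
  "_metadata": { "version": "1.0.0", "created": "2024-01-01T00:00:00Z", "source": "text" }
}
```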
@@ -637,136 +383,26 @@ You execute 6 distinct task types organized into 3 patterns. Each task includes

### layout-templates.json

**Template Reference**: `~/.claude/workflows/cli-templates/ui-design/systems/layout-templates.json`

**Optimization**: Unified structure combining DOM and styling into single hierarchy

**Structure Overview**:
- **templates[]**: Array of layout templates
- **target**: page/component name (hero-section, product-card)
- **component_type**: universal | specialized
- **device_type**: mobile | tablet | desktop | responsive
- **layout_strategy**: grid-3col, flex-row, stack, sidebar, etc.
- **structure**: Unified DOM + layout hierarchy
- **tag**: HTML5 semantic tags
- **attributes**: class, role, aria-*, data-state
- **layout**: Layout properties only (display, grid, flex, position, spacing) using {token.path}
- **responsive**: Breakpoint-specific overrides (ONLY changed properties)
- **children**: Recursive structure
- **content**: Text or {{placeholder}}
- **accessibility**: patterns, keyboard_navigation, focus_management, screen_reader_notes
- **usage_guide**: common_sizes, variant_recommendations, usage_context, accessibility_tips
- **extraction_metadata**: source, created, code_snippets

**Field Rules**:
- $schema MUST reference W3C Design Tokens format specification
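A minimal template entry following this structure might look like the sketch below (the class names, token references, and breakpoint overrides are illustrative assumptions, not the actual template contents):

```json
{
  "$schema": "https://tr.designtokens.org/format/",
  "templates": [
    {
      "target": "product-card",
      "component_type": "universal",
      "device_type": "responsive",
      "layout_strategy": "grid-3col",
      "structure": {
        "tag": "section",
        "attributes": { "class": "product-grid", "role": "list" },
        "layout": { "display": "grid", "grid-template-columns": "repeat(3, 1fr)", "gap": "{spacing.6}" },
        "responsive": {
          "sm": { "grid-template-columns": "1fr" },
          "md": { "grid-template-columns": "repeat(2, 1fr)" }
        },
        "children": [],
        "content": "{{cards}}"
      }
    }
  ]
}
```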
@@ -784,149 +420,25 @@ You execute 6 distinct task types organized into 3 patterns. Each task includes
- usage_guide OPTIONAL for specialized components (can be simplified or omitted)
- extraction_metadata.code_snippets ONLY present in Code Import mode

**Structure Optimization Benefits**:
- Eliminates redundancy between dom_structure and css_layout_rules
- Layout properties are co-located with corresponding DOM elements
- Responsive overrides apply directly to the element they affect
- Single source of truth for each element's structure and layout
- Easier to maintain and understand hierarchy

### animation-tokens.json

**Template Reference**: `~/.claude/workflows/cli-templates/ui-design/systems/animation-tokens.json`

**Structure Overview**:
- **duration**: instant (0ms), fast (150ms), normal (300ms), slow (500ms), slower (1000ms)
- **easing**: linear, ease-in, ease-out, ease-in-out, spring, bounce
- **keyframes**: Animation definitions in pairs (in/out, open/close, enter/exit)
- Required: fade-in/out, slide-up/down, scale-in/out, accordion-down/up, dialog-open/close, dropdown-open/close, toast-enter/exit, spin, pulse
- **interactions**: Component interaction animations with property, duration, easing
- button-hover/active, card-hover, input-focus, dropdown-toggle, accordion-toggle, dialog-toggle, tabs-switch
- **transitions**: default, colors, transform, opacity, all-smooth
- **component_animations**: Maps components to animations (MUST match design-tokens.json components)
- State-based: dialog, dropdown, toast, accordion (use keyframes)
- Interaction: button, card, input, tabs (use transitions)
- **accessibility**: prefers_reduced_motion with CSS rule
- **_metadata**: version, created, source, code_snippets

**Field Rules**:
- $schema MUST reference W3C Design Tokens format specification
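As an illustration of this shape, a minimal animation token set could look like the following sketch (durations, easing curves, and keyframe values are placeholders drawn from the conventions listed above, not the actual template contents):

```json
{
  "$schema": "https://tr.designtokens.org/format/",
  "duration": { "$type": "duration", "fast": { "$value": "150ms" }, "normal": { "$value": "300ms" } },
  "easing": { "$type": "cubicBezier", "ease-out": { "$value": "cubic-bezier(0, 0, 0.2, 1)" } },
  "keyframes": {
    "fade-in": { "0%": { "opacity": "0" }, "100%": { "opacity": "1" } }
  },
  "interactions": {
    "button-hover": { "property": "background-color, transform", "duration": "{duration.fast}", "easing": "{easing.ease-out}" }
  },
  "component_animations": {
    "button": { "hover": { "animation": "none", "transition": "{interactions.button-hover}" } }
  },
  "accessibility": {
    "prefers_reduced_motion": { "css_rule": "@media (prefers-reduced-motion: reduce) { * { animation-duration: 0.01ms !important; } }" }
  }
}
```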
@@ -1,87 +0,0 @@
---
name: analyze
description: Read-only codebase analysis using Gemini (default), Qwen, or Codex with auto-pattern detection and template selection
argument-hint: "[--tool codex|gemini|qwen] [--enhance] analysis target"
allowed-tools: SlashCommand(*), Bash(*), TodoWrite(*), Read(*), Glob(*), Task(*)
---

# CLI Analyze Command (/cli:analyze)

## Purpose

Quick codebase analysis using CLI tools. **Read-only - does NOT modify code**.

**Tool Selection**:
- **gemini** (default) - Best for code analysis
- **qwen** - Fallback when Gemini unavailable
- **codex** - Alternative for deep analysis

## Parameters

- `--tool <gemini|qwen|codex>` - Tool selection (default: gemini)
- `--enhance` - Use `/enhance-prompt` for context-aware enhancement
- `<analysis-target>` - Description of what to analyze

## Tool Usage

**Gemini** (Primary):
```bash
--tool gemini # or omit (default)
```

**Qwen** (Fallback):
```bash
--tool qwen
```

**Codex** (Alternative):
```bash
--tool codex
```

## Execution Flow

Uses **cli-execution-agent** (default) for automated analysis:

```javascript
Task(
  subagent_type="cli-execution-agent",
  description="Codebase analysis with pattern detection",
  prompt=`
    Task: ${analysis_target}
    Mode: analyze
    Tool: ${tool_flag || 'gemini'}
    Enhance: ${enhance_flag}

    Execute codebase analysis with auto-pattern detection:

    1. Context Discovery:
       - Extract keywords from analysis target
       - Auto-detect file patterns (auth→auth files, component→components, etc.)
       - Discover additional relevant files using MCP
       - Build comprehensive file context

    2. Template Selection:
       - Auto-select analysis template based on keywords
       - Apply appropriate analysis methodology
       - Include @CLAUDE.md for project context

    3. CLI Command Construction:
       - Tool: ${tool_flag || 'gemini'} (qwen fallback, codex for deep analysis)
       - Context: @CLAUDE.md + auto-detected patterns + discovered files
       - Mode: analysis (read-only)
       - Expected: Insights, recommendations, pattern analysis

    4. Execution & Output:
       - Execute CLI tool with assembled context
       - Generate comprehensive analysis report
       - Save to .workflow/active/WFS-[id]/.chat/analyze-[timestamp].md (or .scratchpad/)
  `
)
```

## Core Rules

- **Read-only**: Analyzes code, does NOT modify files
- **Auto-pattern**: Detects file patterns from keywords (auth→auth files, component→components, API→api/routes, test→test files)
- **Output**: `.workflow/active/WFS-[id]/.chat/analyze-[timestamp].md` (or `.scratchpad/` if no session)
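Representative invocations (the analysis targets are hypothetical):

```bash
# Default tool (Gemini), plain analysis target
/cli:analyze "trace the authentication flow and list its entry points"

# Explicit tool selection with prompt enhancement
/cli:analyze --tool qwen --enhance "identify unused exports in the component layer"
```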
@@ -1,82 +0,0 @@
---
name: chat
description: Read-only Q&A interaction with Gemini/Qwen/Codex for codebase questions with automatic context inference
argument-hint: "[--tool codex|gemini|qwen] [--enhance] inquiry"
allowed-tools: SlashCommand(*), Bash(*), Task(*)
---

# CLI Chat Command (/cli:chat)

## Purpose

Direct Q&A interaction with CLI tools for codebase analysis. **Read-only - does NOT modify code**.

**Tool Selection**:
- **gemini** (default) - Best for Q&A and explanations
- **qwen** - Fallback when Gemini unavailable
- **codex** - Alternative for technical deep-dives

## Parameters

- `--tool <gemini|qwen|codex>` - Tool selection (default: gemini)
- `--enhance` - Enhance inquiry with `/enhance-prompt`
- `<inquiry>` (Required) - Question or analysis request

## Tool Usage

**Gemini** (Primary):
```bash
--tool gemini # or omit (default)
```

**Qwen** (Fallback):
```bash
--tool qwen
```

**Codex** (Alternative):
```bash
--tool codex
```

## Execution Flow

Uses **cli-execution-agent** (default) for automated Q&A:

```javascript
Task(
  subagent_type="cli-execution-agent",
  description="Codebase Q&A with intelligent context discovery",
  prompt=`
    Task: ${inquiry}
    Mode: chat
    Tool: ${tool_flag || 'gemini'}
    Enhance: ${enhance_flag}

    Execute codebase Q&A with intelligent context discovery:

    1. Context Discovery:
       - Parse inquiry to identify relevant topics/keywords
       - Discover related files using MCP/ripgrep (prioritize precision)
       - Include @CLAUDE.md + discovered files
       - Validate context relevance to question

    2. CLI Command Construction:
       - Tool: ${tool_flag || 'gemini'} (qwen fallback, codex for deep dives)
       - Context: @CLAUDE.md + discovered file patterns
       - Mode: analysis (read-only)
       - Expected: Clear, accurate answer with code references

    3. Execution & Output:
       - Execute CLI tool with assembled context
       - Validate answer completeness
       - Save to .workflow/active/WFS-[id]/.chat/chat-[timestamp].md (or .scratchpad/)
  `
)
```

## Core Rules

- **Read-only**: Provides answers, does NOT modify code
- **Context**: `@CLAUDE.md` + inferred or all files (`@**/*`)
- **Output**: `.workflow/active/WFS-[id]/.chat/chat-[timestamp].md` (or `.scratchpad/` if no session)
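Representative invocations (the inquiries are hypothetical):

```bash
# Default tool (Gemini)
/cli:chat "where is the session state persisted, and which module owns it?"

# Codex for a technical deep-dive, with prompt enhancement
/cli:chat --tool codex --enhance "explain the trade-offs in the current caching strategy"
```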
@@ -191,7 +191,7 @@ target/
### Step 2: Workspace Analysis (MANDATORY FIRST)
```bash
# Analyze workspace structure
bash(ccw tool exec get_modules_by_depth '{"format":"json"}')
```

### Step 3: Technology Detection
@@ -428,15 +428,6 @@ docker-compose.override.yml
/cli:cli-init --tool all --preview
```

## Key Benefits

- **Automatic Detection**: No manual configuration needed
- **Multi-Tool Support**: Configure Gemini and Qwen simultaneously
- **Technology Aware**: Rules adapted to actual project stack
- **Maintainable**: Clear sections for easy customization
- **Consistent**: Follows gitignore syntax standards
- **Safe**: Creates backups of existing files
- **Flexible**: Initialize specific tools or all at once

## Tool Selection Guide
@@ -1,519 +0,0 @@
---
name: codex-execute
description: Multi-stage Codex execution with automatic task decomposition into grouped subtasks using resume mechanism for context continuity
argument-hint: "[--verify-git] task description or task-id"
allowed-tools: SlashCommand(*), Bash(*), TodoWrite(*), Read(*), Glob(*)
---

# CLI Codex Execute Command (/cli:codex-execute)

## Purpose

Automated task decomposition and sequential execution with Codex, using `codex exec "..." resume --last` mechanism for continuity between subtasks.

**Input**: User description or task ID (automatically loads from `.task/[ID].json` if applicable)

## Core Workflow

```
Task Input → Analyze Dependencies → Create Task Flow Diagram →
Decompose into Subtask Groups → TodoWrite Tracking →
For Each Subtask Group:
  For First Subtask in Group:
    0. Stage existing changes (git add -A) if valid git repo
    1. Execute with Codex (new session)
    2. [Optional] Git verification
    3. Mark complete in TodoWrite
  For Related Subtasks in Same Group:
    0. Stage changes from previous subtask
    1. Execute with `codex exec "..." resume --last` (continue session)
    2. [Optional] Git verification
    3. Mark complete in TodoWrite
→ Final Summary
```

## Parameters

- `<input>` (Required): Task description or task ID (e.g., "implement auth" or "IMPL-001")
  - If input matches task ID format, loads from `.task/[ID].json`
  - Otherwise, uses input as task description
- `--verify-git` (Optional): Verify git status after each subtask completion

## Execution Flow

### Phase 1: Input Processing & Task Flow Analysis

1. **Parse Input**:
   - Check if input matches task ID pattern (e.g., `IMPL-001`, `TASK-123`)
   - If yes: Load from `.task/[ID].json` and extract requirements
   - If no: Use input as task description directly

2. **Analyze Dependencies & Create Task Flow Diagram**:
   - Analyze task complexity and scope
   - Identify dependencies and relationships between subtasks
   - Create visual task flow diagram showing:
     - Independent task groups (parallel execution possible)
     - Sequential dependencies (must use resume)
     - Branching logic (conditional paths)
   - Display flow diagram for user review

**Task Flow Diagram Format**:
```
[Group A: Auth Core]
A1: Create user model ──┐
A2: Add validation ─┤─► [resume] ─► A3: Database schema
                    │
[Group B: API Layer] │
B1: Auth endpoints ─────┘─► [new session]
B2: Middleware ────────────► [resume] ─► B3: Error handling

[Group C: Testing]
C1: Unit tests ─────────────► [new session]
C2: Integration tests ──────► [resume]
```

**Diagram Symbols**:
- `──►` Sequential dependency (must resume previous session)
- `─┐` Branch point (multiple paths)
- `─┘` Merge point (wait for completion)
- `[resume]` Use `codex exec "..." resume --last`
- `[new session]` Start fresh Codex session

3. **Decompose into Subtask Groups**:
   - Group related subtasks that share context
   - Break down into 3-8 subtasks total
   - Assign each subtask to a group
   - Create TodoWrite tracker with groups
   - Display decomposition for user review

**Decomposition Criteria**:
- Each subtask: 5-15 minutes completable
- Clear, testable outcomes
- Explicit dependencies
- Focused file scope (1-5 files per subtask)
- **Group coherence**: Subtasks in same group share context/files

### File Discovery for Task Decomposition

Use `rg` or MCP tools to discover relevant files, then group by domain:

**Workflow**: Discover → Analyze scope → Group by files → Create task flow

**Example**:
```bash
# Discover files
rg "authentication" --files-with-matches --type ts

# Group by domain
# Group A: src/auth/model.ts, src/auth/schema.ts
# Group B: src/api/auth.ts, src/middleware/auth.ts
# Group C: tests/auth/*.test.ts

# Each group becomes a session with related subtasks
```

File patterns: see intelligent-tools-strategy.md (loaded in memory)

### Phase 2: Group-Based Execution
|
|
||||||
|
|
||||||
**Pre-Execution Git Staging** (if valid git repository):
|
|
||||||
```bash
|
|
||||||
# Stage all current changes before codex execution
|
|
||||||
# This makes codex changes clearly visible in git diff
|
|
||||||
git add -A
|
|
||||||
git status --short
|
|
||||||
```
|
|
||||||
|
|
||||||
**For First Subtask in Each Group** (New Session):
|
|
||||||
```bash
|
|
||||||
# Start new Codex session for independent task group
|
|
||||||
codex -C [dir] --full-auto exec "
|
|
||||||
PURPOSE: [group goal]
|
|
||||||
TASK: [subtask description - first in group]
|
|
||||||
CONTEXT: @{relevant_files} @CLAUDE.md
|
|
||||||
EXPECTED: [specific deliverables]
|
|
||||||
RULES: [constraints]
|
|
||||||
Group [X]: [group name] - Subtask 1 of N in this group
|
|
||||||
" --skip-git-repo-check -s danger-full-access
|
|
||||||
```
|
|
||||||
|
|
||||||
**For Related Subtasks in Same Group** (Resume Session):
|
|
||||||
```bash
|
|
||||||
# Stage changes from previous subtask (if valid git repository)
|
|
||||||
git add -A
|
|
||||||
|
|
||||||
# Resume session ONLY for subtasks in same group
|
|
||||||
codex exec "
|
|
||||||
CONTINUE IN SAME GROUP:
|
|
||||||
Group [X]: [group name] - Subtask N of M
|
|
||||||
|
|
||||||
PURPOSE: [continuation goal within group]
|
|
||||||
TASK: [subtask N description]
|
|
||||||
CONTEXT: Previous work in this group completed, now focus on @{new_relevant_files}
|
|
||||||
EXPECTED: [specific deliverables]
|
|
||||||
RULES: Build on previous subtask in group, maintain consistency
|
|
||||||
" resume --last --skip-git-repo-check -s danger-full-access
|
|
||||||
```
|
|
||||||
|
|
||||||
**For First Subtask in Different Group** (New Session):
|
|
||||||
```bash
|
|
||||||
# Stage changes from previous group
|
|
||||||
git add -A
|
|
||||||
|
|
||||||
# Start NEW session for different group (no resume)
|
|
||||||
codex -C [dir] --full-auto exec "
|
|
||||||
PURPOSE: [new group goal]
|
|
||||||
TASK: [subtask description - first in new group]
|
|
||||||
CONTEXT: @{different_files} @CLAUDE.md
|
|
||||||
EXPECTED: [specific deliverables]
|
|
||||||
RULES: [constraints]
|
|
||||||
Group [Y]: [new group name] - Subtask 1 of N in this group
|
|
||||||
" --skip-git-repo-check -s danger-full-access
|
|
||||||
```
|
|
||||||
|
|
||||||
**Resume Decision Logic**:
|
|
||||||
```
|
|
||||||
if (subtask.group == previous_subtask.group):
|
|
||||||
use `codex exec "..." resume --last` # Continue session
|
|
||||||
else:
|
|
||||||
use `codex -C [dir] exec "..."` # New session
|
|
||||||
```
|
|
||||||
|
|
||||||
### Phase 3: Verification (if --verify-git enabled)
|
|
||||||
|
|
||||||
After each subtask completion:
|
|
||||||
```bash
|
|
||||||
# Check git status
|
|
||||||
git status --short
|
|
||||||
|
|
||||||
# Verify expected changes
|
|
||||||
git diff --stat
|
|
||||||
|
|
||||||
# Optional: Check for untracked files that should be committed
|
|
||||||
git ls-files --others --exclude-standard
|
|
||||||
```
|
|
||||||
|
|
||||||
**Verification Checks**:
|
|
||||||
- Files modified match subtask scope
|
|
||||||
- No unexpected changes in unrelated files
|
|
||||||
- No merge conflicts or errors
|
|
||||||
- Code compiles/runs (if applicable)
|
|
||||||
|
|
||||||
### Phase 4: TodoWrite Tracking with Groups
|
|
||||||
|
|
||||||
**Initial Setup with Task Flow**:
|
|
||||||
```javascript
|
|
||||||
TodoWrite({
|
|
||||||
todos: [
|
|
||||||
// Display task flow diagram first
|
|
||||||
{ content: "Task Flow Analysis Complete - See diagram above", status: "completed", activeForm: "Analyzing task flow" },
|
|
||||||
|
|
||||||
// Group A subtasks (will use resume within group)
|
|
||||||
{ content: "[Group A] Subtask 1: [description]", status: "in_progress", activeForm: "Executing Group A subtask 1" },
|
|
||||||
{ content: "[Group A] Subtask 2: [description] [resume]", status: "pending", activeForm: "Executing Group A subtask 2" },
|
|
||||||
|
|
||||||
// Group B subtasks (new session, then resume within group)
|
|
||||||
{ content: "[Group B] Subtask 1: [description] [new session]", status: "pending", activeForm: "Executing Group B subtask 1" },
|
|
||||||
{ content: "[Group B] Subtask 2: [description] [resume]", status: "pending", activeForm: "Executing Group B subtask 2" },
|
|
||||||
|
|
||||||
// Group C subtasks (new session)
|
|
||||||
{ content: "[Group C] Subtask 1: [description] [new session]", status: "pending", activeForm: "Executing Group C subtask 1" },
|
|
||||||
|
|
||||||
{ content: "Final verification and summary", status: "pending", activeForm: "Verifying and summarizing" }
|
|
||||||
]
|
|
||||||
})
|
|
||||||
```
|
|
||||||
|
|
||||||
**After Each Subtask**:
|
|
||||||
```javascript
|
|
||||||
TodoWrite({
|
|
||||||
todos: [
|
|
||||||
{ content: "Task Flow Analysis Complete - See diagram above", status: "completed", activeForm: "Analyzing task flow" },
|
|
||||||
{ content: "[Group A] Subtask 1: [description]", status: "completed", activeForm: "Executing Group A subtask 1" },
|
|
||||||
{ content: "[Group A] Subtask 2: [description] [resume]", status: "in_progress", activeForm: "Executing Group A subtask 2" },
|
|
||||||
// ... update status
|
|
||||||
]
|
|
||||||
})
|
|
||||||
```
|
|
||||||
|
|
||||||
## Codex Resume Mechanism
|
|
||||||
|
|
||||||
**Why Group-Based Resume?**
|
|
||||||
- **Within Group**: Maintains conversation context for related subtasks
|
|
||||||
- Codex remembers previous decisions and patterns
|
|
||||||
- Reduces context repetition
|
|
||||||
- Ensures consistency in implementation style
|
|
||||||
- **Between Groups**: Fresh session for independent tasks
|
|
||||||
- Avoids context pollution from unrelated work
|
|
||||||
- Prevents confusion when switching domains
|
|
||||||
- Maintains focused attention on current group
|
|
||||||
|
|
||||||
**How It Works**:
|
|
||||||
1. **First subtask in Group A**: Creates new Codex session
|
|
||||||
2. **Subsequent subtasks in Group A**: Use `codex resume --last` to continue session
|
|
||||||
3. **First subtask in Group B**: Creates NEW Codex session (no resume)
|
|
||||||
4. **Subsequent subtasks in Group B**: Use `codex resume --last` within Group B
|
|
||||||
5. Each group builds on its own context, isolated from other groups
|
|
||||||
|
|
||||||
**When to Resume vs New Session**:
|
|
||||||
```
|
|
||||||
RESUME (same group):
|
|
||||||
- Subtasks share files/modules
|
|
||||||
- Logical continuation of previous work
|
|
||||||
- Same architectural domain
|
|
||||||
|
|
||||||
NEW SESSION (different group):
|
|
||||||
- Independent task area
|
|
||||||
- Different files/modules
|
|
||||||
- Switching architectural domains
|
|
||||||
- Testing after implementation
|
|
||||||
```
|
|
||||||
|
|
||||||
**Image Support**:
|
|
||||||
```bash
|
|
||||||
# First subtask with design reference
|
|
||||||
codex -C [dir] -i design.png --full-auto exec "..." --skip-git-repo-check -s danger-full-access
|
|
||||||
|
|
||||||
# Resume for next subtask (image context preserved)
|
|
||||||
codex exec "CONTINUE TO NEXT SUBTASK: ..." resume --last --skip-git-repo-check -s danger-full-access
|
|
||||||
```
|
|
||||||
|
|
||||||
## Error Handling
|
|
||||||
|
|
||||||
**Subtask Failure**:
|
|
||||||
1. Mark subtask as blocked in TodoWrite
|
|
||||||
2. Report error details to user
|
|
||||||
3. Pause execution for manual intervention
|
|
||||||
4. Use AskUserQuestion for recovery decision:
|
|
||||||
|
|
||||||
```typescript
|
|
||||||
AskUserQuestion({
|
|
||||||
questions: [{
|
|
||||||
question: "Codex execution failed for the subtask. How should the workflow proceed?",
|
|
||||||
header: "Recovery",
|
|
||||||
options: [
|
|
||||||
{ label: "Retry Subtask", description: "Attempt to execute the same subtask again." },
|
|
||||||
{ label: "Skip Subtask", description: "Continue to the next subtask in the plan." },
|
|
||||||
{ label: "Abort Workflow", description: "Stop the entire execution." }
|
|
||||||
],
|
|
||||||
multiSelect: false
|
|
||||||
}]
|
|
||||||
})
|
|
||||||
```
|
|
||||||
|
|
||||||
**Git Verification Failure** (if --verify-git):
|
|
||||||
1. Show unexpected changes
|
|
||||||
2. Pause execution
|
|
||||||
3. Request user decision:
|
|
||||||
- Continue anyway
|
|
||||||
- Rollback and retry
|
|
||||||
- Manual fix
|
|
||||||
|
|
||||||
**Codex Session Lost**:
|
|
||||||
1. Detect if `codex exec "..." resume --last` fails
|
|
||||||
2. Attempt retry with fresh session
|
|
||||||
3. Report to user if manual intervention needed
|
|
||||||
|
|
||||||
## Output Format
|
|
||||||
|
|
||||||
**During Execution**:
|
|
||||||
```
|
|
||||||
Task Flow Diagram:
|
|
||||||
[Group A: Auth Core]
|
|
||||||
A1: Create user model ──┐
|
|
||||||
A2: Add validation ─┤─► [resume] ─► A3: Database schema
|
|
||||||
│
|
|
||||||
[Group B: API Layer] │
|
|
||||||
B1: Auth endpoints ─────┘─► [new session]
|
|
||||||
B2: Middleware ────────────► [resume] ─► B3: Error handling
|
|
||||||
|
|
||||||
[Group C: Testing]
|
|
||||||
C1: Unit tests ─────────────► [new session]
|
|
||||||
C2: Integration tests ──────► [resume]
|
|
||||||
|
|
||||||
Task Decomposition:
|
|
||||||
[Group A] 1. Create user model
|
|
||||||
[Group A] 2. Add validation logic [resume]
|
|
||||||
[Group A] 3. Implement database schema [resume]
|
|
||||||
[Group B] 4. Create auth endpoints [new session]
|
|
||||||
[Group B] 5. Add middleware [resume]
|
|
||||||
[Group B] 6. Error handling [resume]
|
|
||||||
[Group C] 7. Unit tests [new session]
|
|
||||||
[Group C] 8. Integration tests [resume]
|
|
||||||
|
|
||||||
[Group A] Executing Subtask 1/8: Create user model
|
|
||||||
Starting new Codex session for Group A...
|
|
||||||
[Codex output]
|
|
||||||
Subtask 1 completed
|
|
||||||
|
|
||||||
Git Verification:
|
|
||||||
M src/models/user.ts
|
|
||||||
Changes verified
|
|
||||||
|
|
||||||
[Group A] Executing Subtask 2/8: Add validation logic
|
|
||||||
Resuming Codex session (same group)...
|
|
||||||
[Codex output]
|
|
||||||
Subtask 2 completed
|
|
||||||
|
|
||||||
[Group B] Executing Subtask 4/8: Create auth endpoints
|
|
||||||
Starting NEW Codex session for Group B...
|
|
||||||
[Codex output]
|
|
||||||
Subtask 4 completed
|
|
||||||
...
|
|
||||||
|
|
||||||
All Subtasks Completed
|
|
||||||
Summary: [file references, changes, next steps]
|
|
||||||
```
|
|
||||||
|
|
||||||
**Final Summary**:
|
|
||||||
```markdown
|
|
||||||
# Task Execution Summary: [Task Description]
|
|
||||||
|
|
||||||
## Subtasks Completed
|
|
||||||
1. [Subtask 1]: [files modified]
|
|
||||||
2. [Subtask 2]: [files modified]
|
|
||||||
...
|
|
||||||
|
|
||||||
## Files Modified
|
|
||||||
- src/file1.ts:10-50 - [changes]
|
|
||||||
- src/file2.ts - [changes]
|
|
||||||
|
|
||||||
## Git Status
|
|
||||||
- N files modified
|
|
||||||
- M files added
|
|
||||||
- No conflicts
|
|
||||||
|
|
||||||
## Next Steps
|
|
||||||
- [Suggested follow-up actions]
|
|
||||||
```
|
|
||||||
|
|
||||||
## Examples
|
|
||||||
|
|
||||||
**Example 1: Simple Task with Groups**
|
|
||||||
```bash
|
|
||||||
/cli:codex-execute "implement user authentication system"
|
|
||||||
|
|
||||||
# Task Flow Diagram:
|
|
||||||
# [Group A: Data Layer]
|
|
||||||
# A1: Create user model ──► [resume] ──► A2: Database schema
|
|
||||||
#
|
|
||||||
# [Group B: Auth Logic]
|
|
||||||
# B1: JWT token generation ──► [new session]
|
|
||||||
# B2: Authentication middleware ──► [resume]
|
|
||||||
#
|
|
||||||
# [Group C: API Endpoints]
|
|
||||||
# C1: Login/logout endpoints ──► [new session]
|
|
||||||
#
|
|
||||||
# [Group D: Testing]
|
|
||||||
# D1: Unit tests ──► [new session]
|
|
||||||
# D2: Integration tests ──► [resume]
|
|
||||||
|
|
||||||
# Execution:
|
|
||||||
# Group A: A1 (new) → A2 (resume)
|
|
||||||
# Group B: B1 (new) → B2 (resume)
|
|
||||||
# Group C: C1 (new)
|
|
||||||
# Group D: D1 (new) → D2 (resume)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Example 2: With Git Verification**
|
|
||||||
```bash
|
|
||||||
/cli:codex-execute --verify-git "refactor API layer to use dependency injection"
|
|
||||||
|
|
||||||
# After each subtask, verifies:
|
|
||||||
# - Only expected files modified
|
|
||||||
# - No breaking changes in unrelated code
|
|
||||||
# - Tests still pass
|
|
||||||
```
|
|
||||||
|
|
||||||
**Example 3: With Task ID**
|
|
||||||
```bash
|
|
||||||
/cli:codex-execute IMPL-001
|
|
||||||
|
|
||||||
# Loads task from .task/IMPL-001.json
|
|
||||||
# Decomposes based on task requirements
|
|
||||||
```
|
|
||||||
|
|
||||||
## Best Practices
|
|
||||||
|
|
||||||
1. **Task Flow First**: Always create visual flow diagram before execution
|
|
||||||
2. **Group Related Work**: Cluster subtasks by domain/files for efficient resume
|
|
||||||
3. **Subtask Granularity**: Keep subtasks small and focused (5-15 min each)
|
|
||||||
4. **Clear Boundaries**: Each subtask should have well-defined input/output
|
|
||||||
5. **Git Hygiene**: Use `--verify-git` for critical refactoring
|
|
||||||
6. **Pre-Execution Staging**: Stage changes before each subtask to clearly see codex modifications
|
|
||||||
7. **Smart Resume**: Use `resume --last` ONLY within same group
|
|
||||||
8. **Fresh Sessions**: Start new session when switching to different group/domain
|
|
||||||
9. **Recovery Points**: TodoWrite with group labels provides clear progress tracking
|
|
||||||
10. **Image References**: Attach design files for UI tasks (first subtask in group)
|
|
||||||
|
|
||||||
## Input Processing
|
|
||||||
|
|
||||||
**Automatic Detection**:
|
|
||||||
- Input matches task ID pattern → Load from `.task/[ID].json`
|
|
||||||
- Otherwise → Use as task description

**Task JSON Structure** (when loading from file):
```json
{
  "task_id": "IMPL-001",
  "title": "Implement user authentication",
  "description": "Create JWT-based auth system",
  "acceptance_criteria": [...],
  "scope": {...},
  "brainstorming_refs": [...]
}
```

## Output Routing

**Execution Log Destination**:
- **IF** active workflow session exists:
  - Execution log: `.workflow/active/WFS-[id]/.chat/codex-execute-[timestamp].md`
  - Task summaries: `.workflow/active/WFS-[id]/.summaries/[TASK-ID]-summary.md` (if task ID)
  - Task updates: `.workflow/active/WFS-[id]/.task/[TASK-ID].json` status updates
  - TodoWrite tracking: Embedded in execution log
- **ELSE** (no active session):
  - **Recommended**: Create workflow session first (`/workflow:session:start`)
  - **Alternative**: Save to `.workflow/.scratchpad/codex-execute-[description]-[timestamp].md`

**Output Files** (during execution):
```
.workflow/active/WFS-[session-id]/
├── .chat/
│   └── codex-execute-20250105-143022.md   # Full execution log with task flow
├── .summaries/
│   ├── IMPL-001.1-summary.md              # Subtask summaries
│   ├── IMPL-001.2-summary.md
│   └── IMPL-001-summary.md                # Final task summary
└── .task/
    ├── IMPL-001.json                      # Updated task status
    └── [subtask JSONs if decomposed]
```

**Examples**:
- During session `WFS-auth-system`, executing multi-stage auth implementation:
  - Log: `.workflow/active/WFS-auth-system/.chat/codex-execute-20250105-143022.md`
  - Summaries: `.workflow/active/WFS-auth-system/.summaries/IMPL-001.{1,2,3}-summary.md`
  - Task status: `.workflow/active/WFS-auth-system/.task/IMPL-001.json` (status: completed)
- No session, ad-hoc multi-stage task:
  - Log: `.workflow/.scratchpad/codex-execute-auth-refactor-20250105-143045.md`

**Save Results**:
- Execution log with task flow diagram and TodoWrite tracking
- Individual summaries for each completed subtask
- Final consolidated summary when all subtasks complete
- Modified code files throughout project

## Notes

**vs. `/cli:execute`**:
- `/cli:execute`: Single-shot execution with Gemini/Qwen/Codex
- `/cli:codex-execute`: Multi-stage Codex execution with automatic task decomposition and resume mechanism

**Input Flexibility**: Accepts both freeform descriptions and task IDs (auto-detects and loads JSON)

**Context Window**: `codex exec "..." resume --last` maintains conversation history, ensuring consistency across subtasks without redundant context injection.

**Output Details**:
- Session management: see intelligent-tools-strategy.md
- **⚠️ Code Modification**: This command performs multi-stage code modifications - execution log tracks all changes
---
name: discuss-plan
description: Multi-round collaborative planning using Gemini, Codex, and Claude synthesis with iterative discussion cycles (read-only, no code changes)
argument-hint: "[--topic '...'] [--task-id '...'] [--rounds N]"
allowed-tools: SlashCommand(*), Bash(*), TodoWrite(*), Read(*), Glob(*)
---

# CLI Discuss-Plan Command (/cli:discuss-plan)

## Purpose

Orchestrates a multi-model collaborative discussion for in-depth planning and problem analysis. This command facilitates an iterative dialogue between Gemini, Codex, and Claude (the orchestrating AI) to explore a topic from multiple perspectives, refine ideas, and build a robust plan.

**This command is for discussion and planning ONLY. It does NOT modify any code.**

## Core Workflow: The Discussion Loop

The command operates in iterative rounds, allowing the plan to evolve with each cycle. The user can choose to continue for more rounds or conclude when consensus is reached.

```
Topic Input → [Round 1: Gemini → Codex → Claude] → [User Review] →
[Round 2: Gemini → Codex → Claude] → ... → Final Plan
```

### Model Roles & Priority

**Priority Order**: Gemini > Codex > Claude

1. **Gemini (The Analyst)** - Priority 1
   - Kicks off each round with deep analysis
   - Provides foundational ideas and draft plans
   - Analyzes current context or previous synthesis

2. **Codex (The Architect/Critic)** - Priority 2
   - Reviews Gemini's output critically
   - Uses deep reasoning for technical trade-offs
   - Proposes alternative strategies
   - **Participates purely in conversational/reasoning capacity**
   - Uses resume mechanism to maintain discussion context

3. **Claude (The Synthesizer/Moderator)** - Priority 3
   - Synthesizes discussion from Gemini and Codex
   - Highlights agreements and contentions
   - Structures refined plan
   - Poses key questions for next round

## Parameters

- `<input>` (Required): Topic description or task ID (e.g., "Design a new caching layer" or `PLAN-002`)
- `--rounds <N>` (Optional): Maximum number of discussion rounds (default: prompts after each round)
- `--task-id <id>` (Optional): Associates discussion with workflow task ID
- `--topic <description>` (Optional): High-level topic for discussion
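
Typical invocations combine these flags; the topics below are illustrative:

```bash
/cli:discuss-plan --topic "Design a new caching layer" --rounds 2
/cli:discuss-plan --task-id PLAN-002
```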

## Execution Flow

### Phase 1: Initial Setup

1. **Input Processing**: Parse topic or task ID
2. **Context Gathering**: Identify relevant files based on topic

### Phase 2: Discussion Round

Each round consists of three sequential steps, tracked via `TodoWrite`.

**Step 1: Gemini's Analysis (Priority 1)**

Gemini analyzes the topic and proposes preliminary plan.

```bash
# Round 1: CONTEXT_INPUT is the initial topic
# Subsequent rounds: CONTEXT_INPUT is the synthesis from previous round
gemini -p "
PURPOSE: Analyze and propose a plan for '[topic]'
TASK: Provide initial analysis, identify key modules, and draft implementation plan
MODE: analysis
CONTEXT: @CLAUDE.md [auto-detected files]
INPUT: [CONTEXT_INPUT]
EXPECTED: Structured analysis and draft plan for discussion
RULES: Focus on technical depth and practical considerations
"
```

**Step 2: Codex's Critique (Priority 2)**

Codex reviews Gemini's output using conversational reasoning. Uses `resume --last` to maintain context across rounds.

```bash
# First round (new session)
codex --full-auto exec "
PURPOSE: Critically review technical plan
TASK: Review the provided plan, identify weaknesses, suggest alternatives, reason about trade-offs
MODE: analysis
CONTEXT: @CLAUDE.md [relevant files]
INPUT_PLAN: [Output from Gemini's analysis]
EXPECTED: Critical review with alternative ideas and risk analysis
RULES: Focus on architectural soundness and implementation feasibility
" --skip-git-repo-check

# Subsequent rounds (resume discussion)
codex --full-auto exec "
PURPOSE: Re-evaluate plan based on latest synthesis
TASK: Review updated plan and discussion points, provide further critique or refined ideas
MODE: analysis
CONTEXT: Previous discussion context (maintained via resume)
INPUT_PLAN: [Output from Gemini's analysis for current round]
EXPECTED: Updated critique building on previous discussion
RULES: Build on previous insights, avoid repeating points
" resume --last --skip-git-repo-check
```

**Step 3: Claude's Synthesis (Priority 3)**

Claude (orchestrating AI) synthesizes both outputs:

- Summarizes Gemini's proposal and Codex's critique
- Highlights agreements and disagreements
- Structures consolidated plan
- Presents open questions for next round
- This synthesis becomes input for next round

### Phase 3: User Review and Iteration

1. **Present Synthesis**: Show synthesized plan and key discussion points
2. **Continue or Conclude**: Use AskUserQuestion to prompt user:

```typescript
AskUserQuestion({
  questions: [{
    question: "Round of discussion complete. What is the next step?",
    header: "Next Round",
    options: [
      { label: "Start another round", description: "Continue the discussion to refine the plan further." },
      { label: "Conclude and finalize", description: "End the discussion and save the final plan." }
    ],
    multiSelect: false
  }]
})
```

3. **Loop or Finalize**:
   - Continue → New round with Gemini analyzing latest synthesis
   - Conclude → Save final synthesized document

## TodoWrite Tracking

Progress tracked for each round and model.

```javascript
// Example for 2-round discussion
TodoWrite({
  todos: [
    // Round 1
    { content: "[Round 1] Gemini: Analyzing topic", status: "completed", activeForm: "Analyzing with Gemini" },
    { content: "[Round 1] Codex: Critiquing plan", status: "completed", activeForm: "Critiquing with Codex" },
    { content: "[Round 1] Claude: Synthesizing discussion", status: "completed", activeForm: "Synthesizing discussion" },
    { content: "[User Action] Review Round 1 and decide next step", status: "in_progress", activeForm: "Awaiting user decision" },

    // Round 2
    { content: "[Round 2] Gemini: Analyzing refined plan", status: "pending", activeForm: "Analyzing refined plan" },
    { content: "[Round 2] Codex: Re-evaluating plan [resume]", status: "pending", activeForm: "Re-evaluating with Codex" },
    { content: "[Round 2] Claude: Finalizing plan", status: "pending", activeForm: "Finalizing plan" },
    { content: "Discussion complete - Final plan generated", status: "pending", activeForm: "Generating final document" }
  ]
})
```

## Output Routing

- **Primary Log**: Entire multi-round discussion logged to single file:
  - `.workflow/active/WFS-[id]/.chat/discuss-plan-[topic]-[timestamp].md`
- **Final Plan**: Clean final version saved upon conclusion:
  - `.workflow/active/WFS-[id]/.summaries/plan-[topic].md`
- **Scratchpad**: If no session active:
  - `.workflow/.scratchpad/discuss-plan-[topic]-[timestamp].md`

## Discussion Structure

Each round's output is structured as:

```markdown
## Round N: [Topic]

### Gemini's Analysis (Priority 1)
[Gemini's full analysis and proposal]

### Codex's Critique (Priority 2)
[Codex's critical review and alternatives]

### Claude's Synthesis (Priority 3)
**Points of Agreement:**
- [Agreement 1]
- [Agreement 2]

**Points of Contention:**
- [Issue 1]: Gemini suggests X, Codex suggests Y
- [Issue 2]: Trade-off between A and B

**Consolidated Plan:**
[Structured plan incorporating both perspectives]

**Open Questions for Next Round:**
1. [Question 1]
2. [Question 2]
```

## Examples

### Example 1: Multi-Round Architecture Discussion

**Command**: `/cli:discuss-plan --topic "Design a real-time notification system"`

**Round 1**:
1. **Gemini**: Proposes WebSocket-based architecture with RabbitMQ message queue
2. **Codex**: Critiques as overly complex for MVP. Suggests Server-Sent Events (SSE) for simplicity (one-way notifications). Questions RabbitMQ necessity, proposes simpler Redis Pub/Sub
3. **Claude**: Synthesizes views:
   - **Plan A (Gemini)**: WebSockets + RabbitMQ (highly scalable, complex)
   - **Plan B (Codex)**: SSE + Redis (simpler, less overhead)
   - **Open Question**: Is bi-directional communication critical, or is simplicity priority?
4. **User Action**: Opts for another round to explore trade-offs

**Round 2**:
1. **Gemini**: Analyzes synthesized document. Notes that if features like "user is typing" indicators are roadmapped, WebSockets better long-term. Drafts plan starting with SSE/Redis but designing for easy migration
2. **Codex**: Reviews migration plan. Reasons that migration itself could be complex. If feature set likely to expand, starting with WebSockets using managed service might be best cost/benefit
3. **Claude**: Synthesizes new discussion:
   - **Consensus**: Simple SSE/Redis too short-sighted
   - **Refined Options**:
     1. Phased approach (SSE → WebSocket) with clear migration plan
     2. Direct WebSocket with managed service (Pusher, Ably) to reduce ops overhead
   - **Recommendation**: Option 2 most robust and future-proof
4. **User Action**: Agrees with recommendation, concludes discussion

**Final Output**: Planning document saved with:
- Chosen architecture (Managed WebSocket service)
- Multi-round reasoning
- High-level implementation steps

### Example 2: Feature Design Discussion

**Command**: `/cli:discuss-plan --topic "Design user permission system" --rounds 2`

**Round 1**:
1. **Gemini**: Proposes RBAC (Role-Based Access Control) with predefined roles
2. **Codex**: Suggests ABAC (Attribute-Based Access Control) for more flexibility
3. **Claude**: Synthesizes trade-offs between simplicity (RBAC) vs flexibility (ABAC)

**Round 2**:
1. **Gemini**: Analyzes hybrid approach - RBAC for core permissions, attributes for fine-grained control
2. **Codex**: Reviews hybrid model, identifies implementation challenges
3. **Claude**: Final plan with phased rollout strategy

**Automatic Conclusion**: Command concludes after 2 rounds as specified

### Example 3: Problem-Solving Discussion

**Command**: `/cli:discuss-plan --topic "Debug memory leak in data pipeline" --task-id ISSUE-042`

**Round 1**:
1. **Gemini**: Identifies potential leak sources (unclosed handles, growing cache, event listeners)
2. **Codex**: Adds profiling tool recommendations, suggests memory monitoring
3. **Claude**: Structures debugging plan with phased approach

**User Decision**: Single round sufficient, concludes with debugging strategy

## Consensus Mechanisms

**When to Continue:**
- Significant disagreement between models
- Open questions requiring deeper analysis
- Trade-offs need more exploration
- User wants additional perspectives

**When to Conclude:**
- Models converge on solution
- All key questions addressed
- User satisfied with plan depth
- Maximum rounds reached (if specified)

## Comparison with Other Commands

| Command | Models | Rounds | Discussion | Implementation | Use Case |
|---------|--------|--------|------------|----------------|----------|
| `/cli:mode:plan` | Gemini | 1 | NO | NO | Single-model planning |
| `/cli:analyze` | Gemini/Qwen | 1 | NO | NO | Code analysis |
| `/cli:execute` | Any | 1 | NO | YES | Direct implementation |
| `/cli:codex-execute` | Codex | 1 | NO | YES | Multi-stage implementation |
| `/cli:discuss-plan` | **Gemini+Codex+Claude** | **Multiple** | **YES** | **NO** | **Multi-perspective planning** |

## Best Practices

1. **Use for Complex Decisions**: Ideal for architectural decisions, design trade-offs, problem-solving
2. **Start with Broad Topic**: Let first round establish scope, subsequent rounds refine
3. **Review Each Synthesis**: Claude's synthesis is key decision point - review carefully
4. **Know When to Stop**: Don't over-iterate - 2-3 rounds usually sufficient
5. **Task Association**: Use `--task-id` for traceability in workflow
6. **Save Intermediate Results**: Each round's synthesis saved automatically
7. **Let Models Disagree**: Divergent views often reveal important trade-offs
8. **Focus Questions**: Use Claude's open questions to guide next round

## Breaking Discussion Loops

**Detecting Loops:**
- Models repeating same arguments
- No new insights emerging
- Trade-offs well understood

**Breaking Strategies:**
1. **User Decision**: Make executive decision when enough info gathered
2. **Timeboxing**: Set max rounds upfront with `--rounds`
3. **Criteria-Based**: Define decision criteria before starting
4. **Hybrid Approach**: Accept multiple valid solutions in final plan

## Notes

- **Pure Discussion**: This command NEVER modifies code - only produces planning documents
- **Codex Role**: Codex participates as reasoning/critique tool, not executor
- **Resume Context**: Codex maintains discussion context via `resume --last`
- **Priority System**: Ensures Gemini leads analysis, Codex provides critique, Claude synthesizes
- **Output Quality**: Multi-perspective discussion produces more robust plans than single-model analysis
- Command patterns and session management: see intelligent-tools-strategy.md (loaded in memory)
- For implementation after discussion, use `/cli:execute` or `/cli:codex-execute` separately
---
name: execute
description: Autonomous code implementation with YOLO auto-approval using Gemini/Qwen/Codex, supports task ID or description input with automatic file pattern detection
argument-hint: "[--tool codex|gemini|qwen] [--enhance] description or task-id"
allowed-tools: SlashCommand(*), Bash(*), Task(*)
---

# CLI Execute Command (/cli:execute)

## Purpose

Execute implementation tasks with **YOLO permissions** (auto-approves all confirmations). **MODIFIES CODE**.

**Intent**: Autonomous code implementation, modification, and generation
**Supported Tools**: codex, gemini (default), qwen
**Key Feature**: Automatic context inference and file pattern detection

## Core Behavior

1. **Code Modification**: This command MODIFIES, CREATES, and DELETES code files
2. **Auto-Approval**: YOLO mode bypasses confirmation prompts for all operations
3. **Implementation Focus**: Executes actual code changes, not just recommendations
4. **Requires Explicit Intent**: Use only when implementation is intended

## Core Concepts

### YOLO Permissions
Auto-approves: file pattern inference, execution, **file modifications**, summary generation

**WARNING**: This command will make actual code changes without manual confirmation

### Execution Modes

**1. Description Mode** (supports `--enhance`):
- Input: Natural language description
- Process: [Optional: Enhance] → Keyword analysis → Pattern inference → Execute

**2. Task ID Mode** (no `--enhance`):
- Input: Workflow task identifier (e.g., `IMPL-001`)
- Process: Task JSON parsing → Scope analysis → Execute

**3. Agent Mode** (default):
- Input: Description or task-id
- Process: 5-Phase Workflow → Context Discovery → Optimal Tool Selection → Execute

### Context Inference

Auto-selects files based on keywords and technology (each @ references one pattern):
- "auth" → `@**/*auth* @**/*user*`
- "React" → `@src/**/*.jsx @src/**/*.tsx`
- "api" → `@**/api/**/* @**/routes/**/*`
- Always includes: `@CLAUDE.md @**/*CLAUDE.md`

For precise file targeting, use `rg` or MCP tools to discover files first.
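
As a hedged illustration, a description mentioning "auth" might expand into a write-mode Gemini call along these lines; the exact prompt is assembled by the agent, and the flags mirror the write-mode settings referenced later in this document:

```bash
gemini --approval-mode yolo -p "
PURPOSE: Implement JWT authentication middleware
TASK: Add token issuing and verification middleware
MODE: write
CONTEXT: @CLAUDE.md @**/*CLAUDE.md @**/*auth* @**/*user*
EXPECTED: Working implementation following existing conventions
"
```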

### Codex Session Continuity

**Resume Pattern** for related tasks:
```bash
# First task - establish session
codex -C [dir] --full-auto exec "[task]" --skip-git-repo-check -s danger-full-access

# Related task - continue session
codex --full-auto exec "[related-task]" resume --last --skip-git-repo-check -s danger-full-access
```

Use `resume --last` when current task extends/relates to previous execution. See intelligent-tools-strategy.md for auto-resume rules.

## Parameters

- `--tool <codex|gemini|qwen>` - Select CLI tool (default: auto-select by agent based on complexity)
- `--enhance` - Enhance input with `/enhance-prompt` first (Description Mode only)
- `<description|task-id>` - Natural language description or task identifier
- `--debug` - Verbose logging
- `--save-session` - Save execution to workflow session

## Workflow Integration

**Session Management**: Auto-detects active session from `.workflow/active/` directory
- Active session: Save to `.workflow/active/WFS-[id]/.chat/execute-[timestamp].md`
- No session: Create new session or save to scratchpad
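
A minimal sketch of the auto-detection, assuming the directory layout described above (not the agent's actual code):

```bash
SESSION_DIR=$(ls -d .workflow/active/WFS-* 2>/dev/null | head -n 1)
TS=$(date +%Y%m%d-%H%M%S)
if [[ -n "$SESSION_DIR" ]]; then
  LOG="$SESSION_DIR/.chat/execute-$TS.md"        # active session found
else
  LOG=".workflow/.scratchpad/execute-$TS.md"     # fall back to scratchpad
fi
```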

**Task Integration**: Load from `.task/[TASK-ID].json`, update status, generate summary

## Execution Flow

Uses **cli-execution-agent** (default) for automated implementation:

```javascript
Task(
  subagent_type="cli-execution-agent",
  description="Autonomous code implementation with YOLO auto-approval",
  prompt=`
    Task: ${description_or_task_id}
    Mode: execute
    Tool: ${tool_flag || 'auto-select'}
    Enhance: ${enhance_flag}
    Task-ID: ${task_id}

    Execute autonomous code implementation with full modification permissions:

    1. Task Analysis:
       ${task_id ? '- Load task spec from .task/' + task_id + '.json' : ''}
       - Parse requirements and implementation scope
       - Classify complexity (simple/medium/complex)
       - Extract keywords for context discovery

    2. Context Discovery:
       - Discover implementation files using MCP/ripgrep
       - Identify existing patterns and conventions (CLAUDE.md)
       - Map dependencies and integration points
       - Gather related tests and documentation
       - Auto-detect file patterns from keywords

    3. Tool Selection & Execution:
       - Complexity assessment:
         * Simple/Medium → Gemini/Qwen (MODE=write, --approval-mode yolo)
         * Complex → Codex (MODE=auto, --skip-git-repo-check -s danger-full-access)
       - Tool preference: ${tool_flag || 'auto-select based on complexity'}
       - Apply appropriate implementation template
       - Execute with YOLO auto-approval (bypasses all confirmations)

    4. Implementation:
       - Modify/create/delete code files per requirements
       - Follow existing code patterns and conventions
       - Include comprehensive context in CLI command
       - Ensure working implementation with proper error handling

    5. Output & Documentation:
       - Save execution log: .workflow/active/WFS-[id]/.chat/execute-[timestamp].md
       ${task_id ? '- Generate task summary: .workflow/active/WFS-[id]/.summaries/' + task_id + '-summary.md' : ''}
       ${task_id ? '- Update task status in .task/' + task_id + '.json' : ''}
       - Document all code changes made

    ⚠️ YOLO Mode: All file operations auto-approved without confirmation
  `
)
```

**Output**: `.workflow/active/WFS-[id]/.chat/execute-[timestamp].md` + `.workflow/active/WFS-[id]/.summaries/[TASK-ID]-summary.md` (or `.scratchpad/` if no session)

## Examples

**Basic Implementation** (modifies code):
```bash
/cli:execute "implement JWT authentication with middleware"
# Agent Phase 1: Classifies intent=execute, complexity=medium, keywords=['jwt', 'auth', 'middleware']
# Agent Phase 2: Discovers auth patterns, existing middleware structure
# Agent Phase 3: Selects Gemini (medium complexity)
# Agent Phase 4: Executes with auto-approval
# Result: NEW/MODIFIED code files with JWT implementation
```

**Complex Implementation** (modifies code):
```bash
/cli:execute "implement OAuth2 authentication with token refresh"
# Agent Phase 1: Classifies intent=execute, complexity=complex, keywords=['oauth2', 'auth', 'token', 'refresh']
# Agent Phase 2: MCP discovers auth patterns, existing middleware, JWT dependencies
# Agent Phase 3: Enhances prompt with discovered patterns and best practices
# Agent Phase 4: Selects Codex (complex task), executes with comprehensive context
# Agent Phase 5: Saves execution log + generates implementation summary
# Result: Complete OAuth2 implementation + detailed execution log
```

**Enhanced Implementation** (modifies code):
```bash
/cli:execute --enhance "implement JWT authentication"
# Step 1: Enhance to expand requirements
# Step 2: Execute implementation with auto-approval
# Result: Complete auth system with MODIFIED code files
```

**Task Execution** (modifies code):
```bash
/cli:execute IMPL-001
# Reads: .task/IMPL-001.json for requirements
# Executes: Implementation based on task spec
# Result: Code changes per task definition
```

**Codex Implementation** (modifies code):
```bash
/cli:execute --tool codex "optimize database queries"
# Executes: Codex with full file access
# Result: MODIFIED query code, new indexes, updated tests
```

**Qwen Code Generation** (modifies code):
```bash
/cli:execute --tool qwen --enhance "refactor auth module"
# Step 1: Enhanced refactoring plan
# Step 2: Execute with MODE=write
# Result: REFACTORED auth code with structural changes
```

## Comparison with Analysis Commands

| Command | Intent | Code Changes | Auto-Approve |
|---------|--------|--------------|--------------|
| `/cli:analyze` | Understand code | NO | N/A |
| `/cli:chat` | Ask questions | NO | N/A |
| `/cli:execute` | **Implement** | **YES** | **YES** |
---
name: bug-diagnosis
description: Read-only bug root cause analysis using Gemini/Qwen/Codex with systematic diagnosis template for fix suggestions
argument-hint: "[--tool codex|gemini|qwen] [--enhance] [--cd path] bug description"
allowed-tools: SlashCommand(*), Bash(*), Task(*)
---

# CLI Mode: Bug Diagnosis (/cli:mode:bug-diagnosis)

## Purpose

Systematic bug diagnosis with root cause analysis template (`~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt`).

**Tool Selection**:
- **gemini** (default) - Best for bug diagnosis
- **qwen** - Fallback when Gemini unavailable
- **codex** - Alternative for complex bug analysis

## Parameters

- `--tool <gemini|qwen|codex>` - Tool selection (default: gemini)
- `--enhance` - Enhance bug description with `/enhance-prompt`
- `--cd "path"` - Target directory for focused diagnosis
- `<bug-description>` (Required) - Bug description or error details
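
For example (the bug description and path below are illustrative):

```bash
/cli:mode:bug-diagnosis --cd "src/payments" "checkout returns 500 after applying a coupon"
/cli:mode:bug-diagnosis --tool codex --enhance "intermittent session loss on login"
```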

## Tool Usage

**Gemini** (Primary):
```bash
# Uses gemini by default, or specify explicitly
--tool gemini
```

**Qwen** (Fallback):
```bash
--tool qwen
```

**Codex** (Alternative):
```bash
--tool codex
```

## Execution Flow

Uses **cli-execution-agent** (default) for automated bug diagnosis:

```javascript
Task(
  subagent_type="cli-execution-agent",
  description="Bug root cause diagnosis with fix suggestions",
  prompt=`
    Task: ${bug_description}
    Mode: bug-diagnosis
    Tool: ${tool_flag || 'gemini'}
    Directory: ${cd_path || '.'}
    Enhance: ${enhance_flag}
    Template: ~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt

    Execute systematic bug diagnosis and root cause analysis:

    1. Context Discovery:
       - Locate error traces, stack traces, and log messages
       - Find related code sections and affected modules
       - Identify data flow paths leading to the bug
       - Discover test cases related to bug area
       - Use MCP/ripgrep for comprehensive context gathering

    2. Root Cause Analysis:
       - Apply diagnostic template methodology
       - Trace execution to identify failure point
       - Analyze state, data, and logic causing issue
       - Document potential root causes with evidence
       - Assess bug severity and impact scope

    3. CLI Command Construction:
       - Tool: ${tool_flag || 'gemini'} (qwen fallback, codex for complex bugs)
       - Directory: cd ${cd_path || '.'} &&
       - Context: @**/* + error traces + affected code
       - Mode: analysis (read-only)
       - Template: analysis/01-diagnose-bug-root-cause.txt

    4. Output Generation:
       - Root cause diagnosis with evidence
       - Fix suggestions and recommendations
       - Prevention strategies
       - Save to .workflow/active/WFS-[id]/.chat/bug-diagnosis-[timestamp].md (or .scratchpad/)
  `
)
```

## Core Rules

- **Read-only**: Diagnoses bugs, does NOT modify code
- **Template**: `~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt`
- **Output**: `.workflow/active/WFS-[id]/.chat/bug-diagnosis-[timestamp].md` (or `.scratchpad/` if no session)
---
name: code-analysis
description: Read-only execution path tracing using Gemini/Qwen/Codex with specialized analysis template for call flow and optimization
argument-hint: "[--tool codex|gemini|qwen] [--enhance] [--cd path] analysis target"
allowed-tools: SlashCommand(*), Bash(*), Task(*)
---

# CLI Mode: Code Analysis (/cli:mode:code-analysis)

## Purpose

Systematic code analysis with execution path tracing template (`~/.claude/workflows/cli-templates/prompts/analysis/01-trace-code-execution.txt`).

**Tool Selection**:
- **gemini** (default) - Best for code analysis and tracing
- **qwen** - Fallback when Gemini unavailable
- **codex** - Alternative for complex analysis tasks

**Key Feature**: `--cd` flag for directory-scoped analysis

## Parameters

- `--tool <gemini|qwen|codex>` - Tool selection (default: gemini)
- `--enhance` - Enhance analysis target with `/enhance-prompt` first
- `--cd "path"` - Target directory for focused analysis
- `<analysis-target>` (Required) - Code analysis target or question

## Tool Usage

**Gemini** (Primary):
```bash
/cli:mode:code-analysis --tool gemini "trace auth flow"
# OR (default)
/cli:mode:code-analysis "trace auth flow"
```

**Qwen** (Fallback):
```bash
/cli:mode:code-analysis --tool qwen "trace auth flow"
```

**Codex** (Alternative):
```bash
/cli:mode:code-analysis --tool codex "trace auth flow"
```

## Execution Flow

Uses **cli-execution-agent** (default) for automated code analysis:

```javascript
Task(
  subagent_type="cli-execution-agent",
  description="Execution path tracing and call flow analysis",
  prompt=`
    Task: ${analysis_target}
    Mode: code-analysis
    Tool: ${tool_flag || 'gemini'}
    Directory: ${cd_path || '.'}
    Enhance: ${enhance_flag}
    Template: ~/.claude/workflows/cli-templates/prompts/analysis/01-trace-code-execution.txt

    Execute systematic code analysis with execution path tracing:

    1. Context Discovery:
       - Identify entry points and function signatures
       - Trace call chains and execution flows
       - Discover related files (implementations, dependencies, tests)
       - Map data flow and state transformations
       - Use MCP/ripgrep for comprehensive file discovery

    2. Analysis Execution:
       - Apply execution tracing template
       - Generate call flow diagrams (textual)
       - Document execution paths and branching logic
       - Identify optimization opportunities

    3. CLI Command Construction:
       - Tool: ${tool_flag || 'gemini'} (qwen fallback, codex for complex analysis)
       - Directory: cd ${cd_path || '.'} &&
       - Context: @**/* + discovered execution context
       - Mode: analysis (read-only)
       - Template: analysis/01-trace-code-execution.txt

    4. Output Generation:
       - Execution trace documentation
       - Call flow analysis with diagrams
       - Performance and optimization insights
       - Save to .workflow/active/WFS-[id]/.chat/code-analysis-[timestamp].md (or .scratchpad/)
  `
)
```

## Core Rules

- **Read-only**: Analyzes code, does NOT modify files
- **Template**: `~/.claude/workflows/cli-templates/prompts/analysis/01-trace-code-execution.txt`
- **Output**: `.workflow/active/WFS-[id]/.chat/code-analysis-[timestamp].md` (or `.scratchpad/` if no session)
---
name: document-analysis
description: Read-only technical document/paper analysis using Gemini/Qwen/Codex with systematic comprehension template for insights extraction
argument-hint: "[--tool codex|gemini|qwen] [--enhance] [--cd path] document path or topic"
allowed-tools: SlashCommand(*), Bash(*), Task(*), Read(*)
---

# CLI Mode: Document Analysis (/cli:mode:document-analysis)

## Purpose

Systematic analysis of technical documents, research papers, API documentation, and technical specifications.

**Tool Selection**:
- **gemini** (default) - Best for document comprehension and structure analysis
- **qwen** - Fallback when Gemini unavailable
- **codex** - Alternative for complex technical documents

**Key Feature**: `--cd` flag for directory-scoped document discovery

## Parameters

- `--tool <gemini|qwen|codex>` - Tool selection (default: gemini)
- `--enhance` - Enhance analysis target with `/enhance-prompt`
- `--cd "path"` - Target directory for document search
- `<document-path-or-topic>` (Required) - File path or topic description

## Tool Usage

**Gemini** (Primary):
```bash
/cli:mode:document-analysis "README.md"
/cli:mode:document-analysis --tool gemini "analyze API documentation"
```

**Qwen** (Fallback):
```bash
/cli:mode:document-analysis --tool qwen "docs/architecture.md"
```

**Codex** (Alternative):
```bash
/cli:mode:document-analysis --tool codex "research paper in docs/"
```

## Execution Flow

Uses **cli-execution-agent** for automated document analysis:

```javascript
Task(
  subagent_type="cli-execution-agent",
  description="Systematic document comprehension and insights extraction",
  prompt=`
    Task: ${document_path_or_topic}
    Mode: document-analysis
    Tool: ${tool_flag || 'gemini'}
    Directory: ${cd_path || '.'}
    Enhance: ${enhance_flag}
    Template: ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-technical-document.txt

    Execute systematic document analysis:

    1. Document Discovery:
       - Locate target document(s) via path or topic keywords
       - Identify document type (README, API docs, research paper, spec, tutorial)
       - Detect document format (Markdown, PDF, plain text, reStructuredText)
       - Discover related documents (references, appendices, examples)
       - Use MCP/ripgrep for comprehensive file discovery

    2. Pre-Analysis Planning (Required):
       - Determine document structure (sections, hierarchy, flow)
       - Identify key components (abstract, methodology, implementation details)
       - Map dependencies and cross-references
       - Assess document scope and complexity
       - Plan analysis approach based on document type

    3. CLI Command Construction:
       - Tool: ${tool_flag || 'gemini'} (qwen fallback, codex for complex docs)
       - Directory: cd ${cd_path || '.'} &&
       - Context: @{document_paths} + @CLAUDE.md + related files
       - Mode: analysis (read-only)
       - Template: analysis/02-analyze-technical-document.txt

    4. Analysis Execution:
       - Apply 6-field template structure (PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES)
       - Execute multi-phase analysis protocol with pre-planning
       - Perform self-critique before final output
       - Generate structured report with evidence-based insights

    5. Output Generation:
       - Comprehensive document analysis report
       - Structured insights with section references
       - Critical assessment with evidence
       - Actionable recommendations
       - Save to .workflow/active/WFS-[id]/.chat/doc-analysis-[timestamp].md (or .scratchpad/)
  `
)
```

## Core Rules

- **Read-only**: Analyzes documents, does NOT modify files
- **Evidence-based**: All claims must reference specific sections/pages
- **Pre-planning**: Requires analysis approach planning before execution
- **Precise language**: Direct, accurate wording - no persuasive embellishment
- **Output**: `.workflow/active/WFS-[id]/.chat/doc-analysis-[timestamp].md` (or `.scratchpad/` if no session)

## Document Types Supported

| Type | Focus Areas | Key Outputs |
|------|-------------|-------------|
| README | Purpose, setup, usage | Integration steps, quick-start guide |
| API Documentation | Endpoints, parameters, responses | API usage patterns, integration points |
| Research Paper | Methodology, findings, validity | Applicable techniques, implementation feasibility |
| Specification | Requirements, standards, constraints | Compliance checklist, implementation requirements |
| Tutorial | Learning path, examples, exercises | Key concepts, practical applications |
| Architecture Docs | System design, components, patterns | Design decisions, integration points, trade-offs |

## Best Practices

1. **Scope Definition**: Clearly define what aspects to analyze before starting
2. **Layered Reading**: Structure/Overview → Details → Critical Analysis → Synthesis
3. **Evidence Trail**: Track section references for all extracted information
4. **Gap Identification**: Note missing information or unclear sections explicitly
5. **Actionable Output**: Focus on insights that inform decisions or actions
---
name: plan
description: Read-only architecture planning using Gemini/Qwen/Codex with strategic planning template for modification plans and impact analysis
argument-hint: "[--tool codex|gemini|qwen] [--enhance] [--cd path] topic"
allowed-tools: SlashCommand(*), Bash(*), Task(*)
---

# CLI Mode: Plan (/cli:mode:plan)

## Purpose

Strategic software architecture planning template (`~/.claude/workflows/cli-templates/prompts/planning/01-plan-architecture-design.txt`).

**Tool Selection**:
- **gemini** (default) - Best for architecture planning
- **qwen** - Fallback when Gemini unavailable
- **codex** - Alternative for implementation planning

## Parameters

- `--tool <gemini|qwen|codex>` - Tool selection (default: gemini)
- `--enhance` - Enhance task with `/enhance-prompt`
- `--cd "path"` - Target directory for focused planning
- `<planning-task>` (Required) - Architecture planning task or modification requirements
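
For example (the planning task below is illustrative):

```bash
/cli:mode:plan --cd "src/api" "plan migration of session handling from cookies to JWT"
```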

## Tool Usage

**Gemini** (Primary):
```bash
--tool gemini  # or omit (default)
```

**Qwen** (Fallback):
```bash
--tool qwen
```

**Codex** (Alternative):
```bash
--tool codex
```

## Execution Flow

Uses **cli-execution-agent** (default) for automated planning:

```javascript
Task(
  subagent_type="cli-execution-agent",
  description="Architecture planning with impact analysis",
  prompt=`
    Task: ${planning_task}
    Mode: plan
    Tool: ${tool_flag || 'gemini'}
    Directory: ${cd_path || '.'}
    Enhance: ${enhance_flag}
    Template: ~/.claude/workflows/cli-templates/prompts/planning/01-plan-architecture-design.txt

    Execute strategic architecture planning:

    1. Context Discovery:
       - Analyze current architecture structure
       - Identify affected components and modules
       - Map dependencies and integration points
       - Assess modification impacts (scope, complexity, risks)

    2. Planning Analysis:
       - Apply strategic planning template
       - Generate modification plan with phases
       - Document architectural decisions and rationale
       - Identify potential conflicts and mitigation strategies

    3. CLI Command Construction:
       - Tool: ${tool_flag || 'gemini'} (qwen fallback, codex for implementation guidance)
       - Directory: cd ${cd_path || '.'} &&
       - Context: @**/* (full architecture context)
       - Mode: analysis (read-only, no code generation)
       - Template: planning/01-plan-architecture-design.txt

    4. Output Generation:
       - Strategic modification plan
       - Impact analysis and risk assessment
       - Implementation roadmap
       - Save to .workflow/active/WFS-[id]/.chat/plan-[timestamp].md (or .scratchpad/)
  `
)
```

## Core Rules

- **Read-only**: Creates modification plans, does NOT generate code
- **Template**: `~/.claude/workflows/cli-templates/prompts/planning/01-plan-architecture-design.txt`
- **Output**: `.workflow/active/WFS-[id]/.chat/plan-[timestamp].md` (or `.scratchpad/` if no session)
@@ -143,68 +143,10 @@ Perform Deep Scan analysis for feature: {FEATURE_KEYWORD}
- File Discovery: MCP Code Index (preferred) + rg fallback
- Target: 5-15 most relevant files

**Expected Output Format**:
Return comprehensive analysis as structured JSON:
{
\"feature\": \"{FEATURE_KEYWORD}\",
\"analysis_metadata\": {
\"tool_used\": \"gemini|qwen\",
\"timestamp\": \"ISO_TIMESTAMP\",
\"analysis_mode\": \"deep-scan\"
},
\"files_analyzed\": [
{\"file\": \"path/to/file.ts\", \"relevance\": \"high|medium|low\", \"role\": \"brief description\"}
],
\"architecture\": {
\"overview\": \"High-level description\",
\"modules\": [
{\"name\": \"ModuleName\", \"file\": \"file:line\", \"responsibility\": \"description\", \"dependencies\": [...]}
],
\"interactions\": [
{\"from\": \"ModuleA\", \"to\": \"ModuleB\", \"type\": \"import|call|data-flow\", \"description\": \"...\"}
],
\"entry_points\": [
{\"function\": \"main\", \"file\": \"file:line\", \"description\": \"...\"}
]
},
\"function_calls\": {
\"call_chains\": [
{
\"chain_id\": 1,
\"description\": \"User authentication flow\",
\"sequence\": [
{\"function\": \"login\", \"file\": \"file:line\", \"calls\": [\"validateCredentials\", \"createSession\"]}
]
}
],
\"sequences\": [
{\"from\": \"Client\", \"to\": \"AuthService\", \"method\": \"login(username, password)\", \"returns\": \"Session\"}
]
},
\"data_flow\": {
\"structures\": [
{\"name\": \"UserData\", \"stage\": \"input\", \"shape\": {\"username\": \"string\", \"password\": \"string\"}}
],
\"transformations\": [
{\"from\": \"RawInput\", \"to\": \"ValidatedData\", \"transformer\": \"validateUser\", \"file\": \"file:line\"}
]
},
\"conditional_logic\": {
\"branches\": [
{\"condition\": \"isAuthenticated\", \"file\": \"file:line\", \"true_path\": \"...\", \"false_path\": \"...\"}
],
\"error_handling\": [
{\"error_type\": \"AuthenticationError\", \"handler\": \"handleAuthError\", \"file\": \"file:line\", \"recovery\": \"retry|fail\"}
]
},
\"design_patterns\": [
{\"pattern\": \"Repository Pattern\", \"location\": \"src/repositories\", \"description\": \"...\"}
],
\"recommendations\": [
\"Consider extracting authentication logic into separate module\",
\"Add error recovery for network failures\"
]
}

**MANDATORY FIRST STEP**:
Read: ~/.claude/workflows/cli-templates/schemas/codemap-json-schema.json

**Output**: Return JSON following schema exactly. NO FILE WRITING - return JSON analysis only.

**Critical Requirements**:
- Use Deep Scan mode: Bash (Phase 1 - precise locations) + Gemini CLI (Phase 2 - semantic understanding) + Synthesis (Phase 3 - merge with attribution)

@@ -728,18 +670,6 @@ User → TodoWrite Init → Phase 1 (detect existing) → Phase 3 (update index)

---

## Benefits

- **Per-Feature SKILL**: Independent packages for each analyzed feature
- **Specialized Agent**: cli-explore-agent with Deep Scan mode (Bash + Gemini dual-source)
- **Professional Analysis**: Pre-defined workflow for code exploration and structure analysis
- **Clear Separation**: Agent analyzes (JSON) → Orchestrator documents (Mermaid markdown)
- **Multi-Level Detail**: 4 levels (architecture → function → data → conditional)
- **Visual Flow**: Embedded Mermaid diagrams for all flow types
- **Progressive Loading**: Token-efficient context loading (2K → 30K)
- **Auto-Continue**: Fully autonomous 3-phase execution
- **Smart Skip**: Detects existing codemap, 10x faster index updates
- **CLI Integration**: Gemini/Qwen for deep semantic understanding

## Architecture

@@ -753,12 +683,5 @@ code-map-memory (orchestrator)
│   └─ Transform JSON → 5 Mermaid markdown files + metadata.json
└─ Phase 3: Write SKILL.md (index generation, always runs)

Benefits:
✅ Specialized agent: cli-explore-agent with dual-source strategy (Bash + Gemini)
✅ Professional analysis: Pre-defined Deep Scan workflow
✅ Clear separation: Agent analyzes (JSON) → Orchestrator documents (Mermaid)
✅ Smart skip logic: 10x faster when codemap exists
✅ Multi-level detail: Architecture → Functions → Data → Conditionals

Output: .claude/skills/codemap-{feature}/
```
.claude/commands/memory/docs-full-cli.md (new file, 471 lines)

---
name: docs-full-cli
description: Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel
argument-hint: "[path] [--tool <gemini|qwen|codex>]"
---

# Full Documentation Generation - CLI Mode (/memory:docs-full-cli)

## Overview

Orchestrates project-wide documentation generation using CLI-based execution with batched agents and automatic tool fallback.

**Parameters**:
- `path`: Target directory (default: current directory)
- `--tool <gemini|qwen|codex>`: Primary tool (default: gemini)

**Execution Flow**: Discovery → Plan Presentation → Execution → Verification

## 3-Layer Architecture & Auto-Strategy Selection

### Layer Definition & Strategy Assignment

| Layer | Depth | Strategy | Purpose | Context Pattern |
|-------|-------|----------|---------|----------------|
| **Layer 3** (Deepest) | ≥3 | `full` | Generate docs for all subdirectories with code | `@**/*` (all files) |
| **Layer 2** (Middle) | 1-2 | `single` | Current dir + child docs | `@*/API.md @*/README.md @*.{ts,tsx,js,...}` |
| **Layer 1** (Top) | 0 | `single` | Current dir + child docs | `@*/API.md @*/README.md @*.{ts,tsx,js,...}` |

**Generation Direction**: Layer 3 → Layer 2 → Layer 1 (bottom-up dependency flow)

**Strategy Auto-Selection**: Strategies are automatically determined by directory depth - no user configuration needed.
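
The mapping implied by the table above is purely depth-based; an illustrative helper, not part of the command itself:

```bash
strategy_for_depth() {
  local depth="$1"
  # depth >= 3 → full strategy, otherwise single strategy
  if (( depth >= 3 )); then echo "full"; else echo "single"; fi
}
```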

### Strategy Details

#### Full Strategy (Layer 3 Only)
- **Use Case**: Deepest directories with comprehensive file coverage
- **Behavior**: Generates API.md + README.md for current directory AND subdirectories containing code
- **Context**: All files in current directory tree (`@**/*`)
- **Output**: `.workflow/docs/{project_name}/{path}/API.md` + `README.md`

#### Single Strategy (Layers 1-2)
- **Use Case**: Upper layers that aggregate from existing documentation
- **Behavior**: Generates API.md + README.md only in current directory
- **Context**: Direct children docs + current directory code files
- **Output**: `.workflow/docs/{project_name}/{path}/API.md` + `README.md`

### Example Flow
```
src/auth/handlers/ (depth 3) → FULL STRATEGY
  CONTEXT: @**/* (all files in handlers/ and subdirs)
  GENERATES: .workflow/docs/project/src/auth/handlers/{API.md,README.md} + subdirs
  ↓
src/auth/ (depth 2) → SINGLE STRATEGY
  CONTEXT: @*/API.md @*/README.md @*.ts (handlers docs + current code)
  GENERATES: .workflow/docs/project/src/auth/{API.md,README.md} only
  ↓
src/ (depth 1) → SINGLE STRATEGY
  CONTEXT: @*/API.md @*/README.md (auth docs, utils docs)
  GENERATES: .workflow/docs/project/src/{API.md,README.md} only
  ↓
./ (depth 0) → SINGLE STRATEGY
  CONTEXT: @*/API.md @*/README.md (src docs, tests docs)
  GENERATES: .workflow/docs/project/{API.md,README.md} only
```

## Core Execution Rules

1. **Analyze First**: Module discovery + folder classification before generation
2. **Wait for Approval**: Present plan, no execution without user confirmation
3. **Execution Strategy**:
   - **<20 modules**: Direct parallel execution (max 4 concurrent per layer)
   - **≥20 modules**: Agent batch processing (4 modules/agent, 73% overhead reduction)
4. **Tool Fallback**: Auto-retry with fallback tools on failure
5. **Layer Sequential**: Process layers 3→2→1 (bottom-up), parallel batches within layer
6. **Safety Check**: Verify only docs files modified in .workflow/docs/
7. **Layer-based Grouping**: Group modules by LAYER (not depth) for execution

## Tool Fallback Hierarchy

```javascript
--tool gemini → [gemini, qwen, codex]  // default
--tool qwen   → [qwen, gemini, codex]
--tool codex  → [codex, gemini, qwen]
```

**Trigger**: Non-zero exit code from generation script

| Tool | Best For | Fallback To |
|--------|--------------------------------|----------------|
| gemini | Documentation, patterns | qwen → codex |
| qwen | Architecture, system design | gemini → codex |
| codex | Implementation, code quality | gemini → qwen |

## Execution Phases

### Phase 1: Discovery & Analysis

```javascript
// Get project metadata
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});

// Get module structure with classification
Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});

// OR with path parameter
Bash({command: "cd <target-path> && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});
```

**Parse output** `depth:N|path:<PATH>|type:<code|navigation>|...` to extract module paths, types, and count.

**Smart filter**: Auto-detect and skip tests/build/config/vendor based on project tech stack.
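
The `depth:N|path:...` records from Phase 1 can be split on `|`; a hedged sketch, with field order assumed from the format shown above:

```bash
ccw tool exec get_modules_by_depth '{"format":"list"}' | ccw tool exec classify_folders '{}' |
while IFS='|' read -r depth path type _rest; do
  # strip the "key:" prefixes to get raw values
  echo "module=${path#path:} depth=${depth#depth:} type=${type#type:}"
done
```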
|
||||||
|
|
||||||
|
### Phase 2: Plan Presentation
|
||||||
|
|
||||||
|
**For <20 modules**:
|
||||||
|
```
|
||||||
|
Documentation Generation Plan:
|
||||||
|
Tool: gemini (fallback: qwen → codex)
|
||||||
|
Total: 7 modules
|
||||||
|
Execution: Direct parallel (< 20 modules threshold)
|
||||||
|
Project: myproject
|
||||||
|
Output: .workflow/docs/myproject/
|
||||||
|
|
||||||
|
Will generate docs for:
|
||||||
|
- ./core/interfaces (12 files, type: code) - depth 2 [Layer 2] - single strategy
|
||||||
|
- ./core (22 files, type: code) - depth 1 [Layer 2] - single strategy
|
||||||
|
- ./models (9 files, type: code) - depth 1 [Layer 2] - single strategy
|
||||||
|
- ./utils (12 files, type: navigation) - depth 1 [Layer 2] - single strategy
|
||||||
|
- . (5 files, type: code) - depth 0 [Layer 1] - single strategy
|
||||||
|
|
||||||
|
Documentation Strategy (Auto-Selected):
|
||||||
|
- Layer 2 (depth 1-2): API.md + README.md (current dir only, reference child docs)
|
||||||
|
- Layer 1 (depth 0): API.md + README.md (current dir only, reference child docs)
|
||||||
|
|
||||||
|
Output Structure:
|
||||||
|
- Code folders: API.md + README.md
|
||||||
|
- Navigation folders: README.md only
|
||||||
|
|
||||||
|
Auto-skipped: ./tests, __pycache__, node_modules (15 paths)
|
||||||
|
Execution order: Layer 2 → Layer 1
|
||||||
|
Estimated time: ~5-10 minutes
|
||||||
|
|
||||||
|
Confirm execution? (y/n)
|
||||||
|
```
|
||||||
|
|
||||||
|
**For ≥20 modules**:
|
||||||
|
```
|
||||||
|
Documentation Generation Plan:
|
||||||
|
Tool: gemini (fallback: qwen → codex)
|
||||||
|
Total: 31 modules
|
||||||
|
Execution: Agent batch processing (4 modules/agent)
|
||||||
|
Project: myproject
|
||||||
|
Output: .workflow/docs/myproject/
|
||||||
|
|
||||||
|
Will generate docs for:
|
||||||
|
- ./src/features/auth (12 files, type: code) - depth 3 [Layer 3] - full strategy
|
||||||
|
- ./.claude/commands/cli (6 files, type: code) - depth 3 [Layer 3] - full strategy
|
||||||
|
- ./src/utils (8 files, type: code) - depth 2 [Layer 2] - single strategy
|
||||||
|
...
|
||||||
|
|
||||||
|
Documentation Strategy (Auto-Selected):
|
||||||
|
- Layer 3 (depth ≥3): API.md + README.md (all subdirs with code)
|
||||||
|
- Layer 2 (depth 1-2): API.md + README.md (current dir only)
|
||||||
|
- Layer 1 (depth 0): API.md + README.md (current dir only)
|
||||||
|
|
||||||
|
Output Structure:
|
||||||
|
- Code folders: API.md + README.md
|
||||||
|
- Navigation folders: README.md only
|
||||||
|
|
||||||
|
Auto-skipped: ./tests, __pycache__, node_modules (15 paths)
|
||||||
|
Execution order: Layer 3 → Layer 2 → Layer 1
|
||||||
|
|
||||||
|
Agent allocation (by LAYER):
|
||||||
|
- Layer 3 (14 modules, depth ≥3): 4 agents [4, 4, 4, 2]
|
||||||
|
- Layer 2 (15 modules, depth 1-2): 4 agents [4, 4, 4, 3]
|
||||||
|
- Layer 1 (2 modules, depth 0): 1 agent [2]
|
||||||
|
|
||||||
|
Estimated time: ~15-25 minutes
|
||||||
|
|
||||||
|
Confirm execution? (y/n)
|
||||||
|
```
|
||||||
|
|
||||||
|
### Phase 3A: Direct Execution (<20 modules)
|
||||||
|
|
||||||
|
**Strategy**: Parallel execution within layer (max 4 concurrent), no agent overhead.
|
||||||
|
|
||||||
|
**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
let modules_by_layer = group_by_layer(module_list);
let tool_order = construct_tool_order(primary_tool);
let project_name = detect_project_name();
|
||||||
|
|
||||||
|
for (let layer of [3, 2, 1]) {
|
||||||
|
if (modules_by_layer[layer].length === 0) continue;
|
||||||
|
let batches = batch_modules(modules_by_layer[layer], 4);
|
||||||
|
|
||||||
|
for (let batch of batches) {
|
||||||
|
let parallel_tasks = batch.map(module => {
|
||||||
|
return async () => {
|
||||||
|
let strategy = module.depth >= 3 ? "full" : "single";
|
||||||
|
for (let tool of tool_order) {
|
||||||
|
Bash({
|
||||||
|
command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"${strategy}","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
|
||||||
|
run_in_background: false
|
||||||
|
});
|
||||||
|
if (bash_result.exit_code === 0) {
|
||||||
|
report(`✅ ${module.path} (Layer ${layer}) docs generated with ${tool}`);
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
report(`❌ FAILED: ${module.path} (Layer ${layer}) failed all tools`);
|
||||||
|
return false;
|
||||||
|
};
|
||||||
|
});
|
||||||
|
await Promise.all(parallel_tasks.map(task => task()));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Phase 3B: Agent Batch Execution (≥20 modules)
|
||||||
|
|
||||||
|
**Strategy**: Batch modules into groups of 4, spawn memory-bridge agents per batch.
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
// Group modules by LAYER and batch within each layer
|
||||||
|
let modules_by_layer = group_by_layer(module_list);
|
||||||
|
let tool_order = construct_tool_order(primary_tool);
|
||||||
|
let project_name = detect_project_name();
|
||||||
|
|
||||||
|
for (let layer of [3, 2, 1]) {
|
||||||
|
if (modules_by_layer[layer].length === 0) continue;
|
||||||
|
|
||||||
|
let batches = batch_modules(modules_by_layer[layer], 4);
|
||||||
|
let worker_tasks = [];
|
||||||
|
|
||||||
|
for (let batch of batches) {
|
||||||
|
worker_tasks.push(
|
||||||
|
Task(
|
||||||
|
subagent_type="memory-bridge",
|
||||||
|
description=`Generate docs for ${batch.length} modules in Layer ${layer}`,
|
||||||
|
prompt=generate_batch_worker_prompt(batch, tool_order, layer, project_name)
|
||||||
|
)
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
await parallel_execute(worker_tasks);
|
||||||
|
}
|
||||||
|
```
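The `batch_modules` helper used above is a plain fixed-size splitter (same definition as in the related-docs command); shown here for reference:

```javascript
// Split a module list into groups of at most `batch_size` (4 by default).
function batch_modules(modules, batch_size = 4) {
  let batches = [];
  for (let i = 0; i < modules.length; i += batch_size) {
    batches.push(modules.slice(i, i + batch_size));
  }
  return batches;
}
// Examples: 14 → [4, 4, 4, 2] | 8 → [4, 4]
```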
|
||||||
|
|
||||||
|
**Batch Worker Prompt Template**:
|
||||||
|
```
|
||||||
|
PURPOSE: Generate documentation for assigned modules with tool fallback
|
||||||
|
|
||||||
|
TASK: Generate API.md + README.md for assigned modules using specified strategies.
|
||||||
|
|
||||||
|
PROJECT: {{project_name}}
|
||||||
|
OUTPUT: .workflow/docs/{{project_name}}/
|
||||||
|
|
||||||
|
MODULES:
|
||||||
|
{{module_path_1}} (strategy: {{strategy_1}}, type: {{folder_type_1}})
|
||||||
|
{{module_path_2}} (strategy: {{strategy_2}}, type: {{folder_type_2}})
|
||||||
|
...
|
||||||
|
|
||||||
|
TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}
|
||||||
|
|
||||||
|
EXECUTION SCRIPT: ccw tool exec generate_module_docs
|
||||||
|
- Accepts strategy parameter: full | single
|
||||||
|
- Accepts folder type detection: code | navigation
|
||||||
|
- Tool execution via direct CLI commands (gemini/qwen/codex)
|
||||||
|
- Output path: .workflow/docs/{{project_name}}/{module_path}/
|
||||||
|
|
||||||
|
EXECUTION FLOW (for each module):
|
||||||
|
1. Tool fallback loop (exit on first success):
|
||||||
|
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
|
||||||
|
Bash({
|
||||||
|
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"{{strategy}}","sourcePath":".","projectName":"{{project_name}}","tool":"${tool}"}'`,
|
||||||
|
run_in_background: false
|
||||||
|
})
|
||||||
|
exit_code=$?
|
||||||
|
|
||||||
|
if [ $exit_code -eq 0 ]; then
|
||||||
|
report "✅ {{module_path}} docs generated with $tool"
|
||||||
|
break
|
||||||
|
else
|
||||||
|
report "⚠️ {{module_path}} failed with $tool, trying next..."
|
||||||
|
continue
|
||||||
|
fi
|
||||||
|
done
|
||||||
|
|
||||||
|
2. Handle complete failure (all tools failed):
|
||||||
|
if [ $exit_code -ne 0 ]; then
|
||||||
|
report "❌ FAILED: {{module_path}} - all tools exhausted"
|
||||||
|
# Continue to next module (do not abort batch)
|
||||||
|
fi
|
||||||
|
|
||||||
|
FOLDER TYPE HANDLING:
|
||||||
|
- code: Generate API.md + README.md
|
||||||
|
- navigation: Generate README.md only
|
||||||
|
|
||||||
|
FAILURE HANDLING:
|
||||||
|
- Module-level isolation: One module's failure does not affect others
|
||||||
|
- Exit code detection: Non-zero exit code triggers next tool
|
||||||
|
- Exhaustion reporting: Log modules where all tools failed
|
||||||
|
- Batch continuation: Always process remaining modules
|
||||||
|
|
||||||
|
REPORTING FORMAT:
|
||||||
|
Per-module status:
|
||||||
|
✅ path/to/module docs generated with {tool}
|
||||||
|
⚠️ path/to/module failed with {tool}, trying next...
|
||||||
|
❌ FAILED: path/to/module - all tools exhausted
|
||||||
|
```
|
||||||
|
|
||||||
|
### Phase 4: Project-Level Documentation
|
||||||
|
|
||||||
|
**After all module documentation is generated, create project-level documentation files.**
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
let project_name = detect_project_name();
|
||||||
|
let project_root = get_project_root();
|
||||||
|
|
||||||
|
// Step 1: Generate Project README
|
||||||
|
report("Generating project README.md...");
|
||||||
|
for (let tool of tool_order) {
|
||||||
|
Bash({
|
||||||
|
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"project-readme","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
|
||||||
|
run_in_background: false
|
||||||
|
});
|
||||||
|
if (bash_result.exit_code === 0) {
|
||||||
|
report(`✅ Project README generated with ${tool}`);
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Step 2: Generate Architecture & Examples
|
||||||
|
report("Generating ARCHITECTURE.md and EXAMPLES.md...");
|
||||||
|
for (let tool of tool_order) {
|
||||||
|
Bash({
|
||||||
|
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"project-architecture","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
|
||||||
|
run_in_background: false
|
||||||
|
});
|
||||||
|
if (bash_result.exit_code === 0) {
|
||||||
|
report(`✅ Architecture docs generated with ${tool}`);
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Step 3: Generate HTTP API documentation (if API routes detected)
|
||||||
|
Bash({command: 'rg "router\\.|@Get|@Post" -g "*.{ts,js,py}" 2>/dev/null && echo "API_FOUND" || echo "NO_API"', run_in_background: false});
|
||||||
|
if (bash_result.stdout.includes("API_FOUND")) {
|
||||||
|
report("Generating HTTP API documentation...");
|
||||||
|
for (let tool of tool_order) {
|
||||||
|
Bash({
|
||||||
|
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"http-api","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
|
||||||
|
run_in_background: false
|
||||||
|
});
|
||||||
|
if (bash_result.exit_code === 0) {
|
||||||
|
report(`✅ HTTP API docs generated with ${tool}`);
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Expected Output**:
|
||||||
|
```
|
||||||
|
Project-Level Documentation:
|
||||||
|
✅ README.md (project root overview)
|
||||||
|
✅ ARCHITECTURE.md (system design)
|
||||||
|
✅ EXAMPLES.md (usage examples)
|
||||||
|
✅ api/README.md (HTTP API reference) [optional]
|
||||||
|
```
|
||||||
|
|
||||||
|
### Phase 5: Verification
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
// Check documentation files created
|
||||||
|
Bash({command: 'find .workflow/docs -type f -name "*.md" 2>/dev/null | wc -l', run_in_background: false});
|
||||||
|
|
||||||
|
// Display structure
|
||||||
|
Bash({command: 'tree -L 3 .workflow/docs/', run_in_background: false});
|
||||||
|
```
|
||||||
|
|
||||||
|
**Result Summary**:
|
||||||
|
```
|
||||||
|
Documentation Generation Summary:
|
||||||
|
Total: 31 | Success: 29 | Failed: 2
|
||||||
|
Tool usage: gemini: 25, qwen: 4, codex: 0
|
||||||
|
Failed: path1, path2
|
||||||
|
|
||||||
|
Generated documentation:
|
||||||
|
.workflow/docs/myproject/
|
||||||
|
├── src/
|
||||||
|
│ ├── auth/
|
||||||
|
│ │ ├── API.md
|
||||||
|
│ │ └── README.md
|
||||||
|
│ └── utils/
|
||||||
|
│ └── README.md
|
||||||
|
└── README.md
|
||||||
|
```
|
||||||
|
|
||||||
|
## Error Handling
|
||||||
|
|
||||||
|
**Batch Worker**: Tool fallback per module, batch isolation, clear status reporting
|
||||||
|
**Coordinator**: Invalid path abort, user decline handling, verification with cleanup
|
||||||
|
**Fallback Triggers**: Non-zero exit code, script timeout, unexpected output
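A hedged sketch of how one generation attempt might be classified against these triggers (the timeout limit and the output check are assumptions, not fixed by this command):

```javascript
// Any trigger falls through to the next tool in the fallback order.
function attempt_failed(bash_result, timeout_ms = 10 * 60 * 1000 /* assumed limit */) {
  if (bash_result.exit_code !== 0) return true;                   // non-zero exit code
  if (bash_result.duration_ms > timeout_ms) return true;          // script timeout (assumed field)
  if (!/(API|README)\.md/.test(bash_result.stdout)) return true;  // unexpected output (assumed check)
  return false;
}
```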
|
||||||
|
|
||||||
|
## Output Structure
|
||||||
|
|
||||||
|
```
|
||||||
|
.workflow/docs/{project_name}/
|
||||||
|
├── src/ # Mirrors source structure
|
||||||
|
│ ├── modules/
|
||||||
|
│ │ ├── README.md # Navigation
|
||||||
|
│ │ ├── auth/
|
||||||
|
│ │ │ ├── API.md # API signatures
|
||||||
|
│ │ │ ├── README.md # Module docs
|
||||||
|
│ │ │ └── middleware/
|
||||||
|
│ │ │ ├── API.md
|
||||||
|
│ │ │ └── README.md
|
||||||
|
│ │ └── api/
|
||||||
|
│ │ ├── API.md
|
||||||
|
│ │ └── README.md
|
||||||
|
│ └── utils/
|
||||||
|
│ └── README.md
|
||||||
|
├── lib/
|
||||||
|
│ └── core/
|
||||||
|
│ ├── API.md
|
||||||
|
│ └── README.md
|
||||||
|
├── README.md # ✨ Project root overview (auto-generated)
|
||||||
|
├── ARCHITECTURE.md # ✨ System design (auto-generated)
|
||||||
|
├── EXAMPLES.md # ✨ Usage examples (auto-generated)
|
||||||
|
└── api/ # ✨ Optional (auto-generated if HTTP API detected)
|
||||||
|
└── README.md # HTTP API reference
|
||||||
|
```
|
||||||
|
|
||||||
|
## Usage Examples
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Full project documentation generation
|
||||||
|
/memory:docs-full-cli
|
||||||
|
|
||||||
|
# Target specific directory
|
||||||
|
/memory:docs-full-cli src/features/auth
|
||||||
|
/memory:docs-full-cli .claude
|
||||||
|
|
||||||
|
# Use specific tool
|
||||||
|
/memory:docs-full-cli --tool qwen
|
||||||
|
/memory:docs-full-cli src --tool qwen
|
||||||
|
```
|
||||||
|
|
||||||
|
## Key Advantages
|
||||||
|
|
||||||
|
- **Efficiency**: 30 modules → 8 agents (73% reduction from sequential)
|
||||||
|
- **Resilience**: 3-tier tool fallback per module
|
||||||
|
- **Performance**: Parallel agent batches within each layer; direct mode capped at 4 concurrent
|
||||||
|
- **Observability**: Per-module tool usage, batch-level metrics
|
||||||
|
- **Automation**: Zero configuration - strategy auto-selected by directory depth
|
||||||
|
- **Path Mirroring**: Clear 1:1 mapping between source and documentation structure
|
||||||
|
|
||||||
|
## Template Reference
|
||||||
|
|
||||||
|
Templates used from `~/.claude/workflows/cli-templates/prompts/documentation/`:
|
||||||
|
- `api.txt`: Code API documentation (Part A: Code API, Part B: HTTP API)
|
||||||
|
- `module-readme.txt`: Module purpose, usage, dependencies
|
||||||
|
- `folder-navigation.txt`: Navigation README for folders with subdirectories
|
||||||
|
|
||||||
|
## Related Commands
|
||||||
|
|
||||||
|
- `/memory:docs` - Agent-based documentation planning workflow
|
||||||
|
- `/memory:docs-related-cli` - Update docs for changed modules only
|
||||||
|
- `/workflow:execute` - Execute documentation tasks (when using agent mode)
|
||||||
**New file**: `.claude/commands/memory/docs-related-cli.md` (386 lines)
|
|||||||
|
---
|
||||||
|
name: docs-related-cli
|
||||||
|
description: Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback; fewer than 15 modules run with direct parallel execution
|
||||||
|
argument-hint: "[--tool <gemini|qwen|codex>]"
|
||||||
|
---
|
||||||
|
|
||||||
|
# Related Documentation Generation - CLI Mode (/memory:docs-related-cli)
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
|
||||||
|
Orchestrates context-aware documentation generation/update for changed modules using CLI-based execution with batched agents and automatic tool fallback (gemini→qwen→codex).
|
||||||
|
|
||||||
|
**Parameters**:
|
||||||
|
- `--tool <gemini|qwen|codex>`: Primary tool (default: gemini)
|
||||||
|
|
||||||
|
**Execution Flow**:
|
||||||
|
1. Change Detection → 2. Plan Presentation → 3. Batched Execution → 4. Verification
|
||||||
|
|
||||||
|
## Core Rules
|
||||||
|
|
||||||
|
1. **Detect Changes First**: Use git diff to identify affected modules
|
||||||
|
2. **Wait for Approval**: Present plan, no execution without user confirmation
|
||||||
|
3. **Execution Strategy**:
|
||||||
|
- **<15 modules**: Direct parallel execution (max 4 concurrent per depth, no agent overhead)
|
||||||
|
- **≥15 modules**: Agent batch processing (4 modules/agent, 73% overhead reduction)
|
||||||
|
4. **Tool Fallback**: Auto-retry with fallback tools on failure
|
||||||
|
5. **Depth Sequential**: Process depths N→0, parallel batches within depth (both modes)
|
||||||
|
6. **Related Mode**: Generate/update only changed modules and their parent contexts (see the sketch after this list)
|
||||||
|
7. **Single Strategy**: Always use `single` strategy (incremental update)
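The parent-context expansion from rule 6 can be sketched as follows (illustrative only; it assumes relative module paths like `./src/api/auth` and treats every ancestor directory plus the root as an update target):

```javascript
// Expand changed modules to include their parent directories up to the project root.
function expand_with_parents(changed_paths) {
  let result = new Set(["."]); // root level is always refreshed last
  for (let path of changed_paths) {
    if (path === ".") continue;
    let parts = path.replace(/^\.\//, "").split("/").filter(Boolean);
    for (let i = parts.length; i >= 1; i--) {
      result.add("./" + parts.slice(0, i).join("/"));
    }
  }
  return [...result];
}
// expand_with_parents(["./src/api/auth"]) → [".", "./src/api/auth", "./src/api", "./src"]
```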
|
||||||
|
|
||||||
|
## Tool Fallback Hierarchy
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
--tool gemini → [gemini, qwen, codex] // default
|
||||||
|
--tool qwen → [qwen, gemini, codex]
|
||||||
|
--tool codex → [codex, gemini, qwen]
|
||||||
|
```
|
||||||
|
|
||||||
|
**Trigger**: Non-zero exit code from generation script
|
||||||
|
|
||||||
|
| Tool | Best For | Fallback To |
|
||||||
|
|--------|--------------------------------|----------------|
|
||||||
|
| gemini | Documentation, patterns | qwen → codex |
|
||||||
|
| qwen | Architecture, system design | gemini → codex |
|
||||||
|
| codex | Implementation, code quality | gemini → qwen |
|
||||||
|
|
||||||
|
## Phase 1: Change Detection & Analysis
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
// Get project metadata
|
||||||
|
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});
|
||||||
|
|
||||||
|
// Detect changed modules
|
||||||
|
Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});
|
||||||
|
|
||||||
|
// Cache git changes
|
||||||
|
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
|
||||||
|
```
|
||||||
|
|
||||||
|
**Parse output** `depth:N|path:<PATH>|change:<TYPE>|type:<code|navigation>` to extract affected modules.
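The same pipe-delimited parsing applies here, extended with the `change` field (sketch; field names follow the format string above):

```javascript
// Parse lines like: depth:3|path:./src/api/auth|change:modified|type:code
function parse_changed_module(line) {
  let fields = Object.fromEntries(
    line.split("|").map(part => {
      let idx = part.indexOf(":");
      return [part.slice(0, idx), part.slice(idx + 1)];
    })
  );
  return { depth: parseInt(fields.depth, 10), path: fields.path, change: fields.change, type: fields.type };
}
```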
|
||||||
|
|
||||||
|
**Smart filter**: Auto-detect and skip tests/build/config/vendor based on project tech stack (Node.js/Python/Go/Rust/etc).
|
||||||
|
|
||||||
|
**Fallback**: If no changes detected, use recent modules (first 10 by depth).
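A possible reading of that fallback, assuming "first 10 by depth" means the ten deepest modules from the Phase 1 listing:

```javascript
// Fallback sketch — used only when git reports no changed modules.
function fallback_modules(all_modules) {
  return [...all_modules]
    .sort((a, b) => b.depth - a.depth) // deepest first (assumption)
    .slice(0, 10);
}
```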
|
||||||
|
|
||||||
|
## Phase 2: Plan Presentation
|
||||||
|
|
||||||
|
**Present filtered plan**:
|
||||||
|
```
|
||||||
|
Related Documentation Generation Plan:
|
||||||
|
Tool: gemini (fallback: qwen → codex)
|
||||||
|
Changed: 4 modules | Batching: 4 modules/agent
|
||||||
|
Project: myproject
|
||||||
|
Output: .workflow/docs/myproject/
|
||||||
|
|
||||||
|
Will generate/update docs for:
|
||||||
|
- ./src/api/auth (5 files, type: code) [new module]
|
||||||
|
- ./src/api (12 files, type: code) [parent of changed auth/]
|
||||||
|
- ./src (8 files, type: code) [parent context]
|
||||||
|
- . (14 files, type: code) [root level]
|
||||||
|
|
||||||
|
Documentation Strategy:
|
||||||
|
- Strategy: single (all modules - incremental update)
|
||||||
|
- Output: API.md + README.md (code folders), README.md only (navigation folders)
|
||||||
|
- Context: Current dir code + child docs
|
||||||
|
|
||||||
|
Auto-skipped (12 paths):
|
||||||
|
- Tests: ./src/api/auth.test.ts (8 paths)
|
||||||
|
- Config: tsconfig.json (3 paths)
|
||||||
|
- Other: node_modules (1 path)
|
||||||
|
|
||||||
|
Agent allocation:
|
||||||
|
- Depth 3 (1 module): 1 agent [1]
|
||||||
|
- Depth 2 (1 module): 1 agent [1]
|
||||||
|
- Depth 1 (1 module): 1 agent [1]
|
||||||
|
- Depth 0 (1 module): 1 agent [1]
|
||||||
|
|
||||||
|
Estimated time: ~5-10 minutes
|
||||||
|
|
||||||
|
Confirm execution? (y/n)
|
||||||
|
```
|
||||||
|
|
||||||
|
**Decision logic**:
|
||||||
|
- User confirms "y": Proceed with execution
|
||||||
|
- User declines "n": Abort, no changes
|
||||||
|
- <15 modules: Direct execution
|
||||||
|
- ≥15 modules: Agent batch execution
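That decision logic maps onto a small dispatcher (sketch; `direct_execute` and `agent_batch_execute` stand in for Phases 3A and 3B below):

```javascript
// Dispatcher sketch for the confirmation + threshold decision.
async function run_plan(changed_modules, user_confirmed) {
  if (!user_confirmed) {
    report("Aborted: no changes made.");
    return;
  }
  if (changed_modules.length < 15) {
    await direct_execute(changed_modules);      // Phase 3A
  } else {
    await agent_batch_execute(changed_modules); // Phase 3B
  }
}
```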
|
||||||
|
|
||||||
|
## Phase 3A: Direct Execution (<15 modules)
|
||||||
|
|
||||||
|
**Strategy**: Parallel execution within depth (max 4 concurrent), no agent overhead.
|
||||||
|
|
||||||
|
**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
let modules_by_depth = group_by_depth(changed_modules);
let tool_order = construct_tool_order(primary_tool);
let project_name = detect_project_name();
|
||||||
|
|
||||||
|
for (let depth of sorted_depths.reverse()) { // N → 0
|
||||||
|
let batches = batch_modules(modules_by_depth[depth], 4);
|
||||||
|
|
||||||
|
for (let batch of batches) {
|
||||||
|
let parallel_tasks = batch.map(module => {
|
||||||
|
return async () => {
|
||||||
|
for (let tool of tool_order) {
|
||||||
|
Bash({
|
||||||
|
command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
|
||||||
|
run_in_background: false
|
||||||
|
});
|
||||||
|
if (bash_result.exit_code === 0) {
|
||||||
|
report(`✅ ${module.path} docs generated with ${tool}`);
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
report(`❌ FAILED: ${module.path} failed all tools`);
|
||||||
|
return false;
|
||||||
|
};
|
||||||
|
});
|
||||||
|
await Promise.all(parallel_tasks.map(task => task()));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Phase 3B: Agent Batch Execution (≥15 modules)
|
||||||
|
|
||||||
|
### Batching Strategy
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
// Batch modules into groups of 4
|
||||||
|
function batch_modules(modules, batch_size = 4) {
|
||||||
|
let batches = [];
|
||||||
|
for (let i = 0; i < modules.length; i += batch_size) {
|
||||||
|
batches.push(modules.slice(i, i + batch_size));
|
||||||
|
}
|
||||||
|
return batches;
|
||||||
|
}
|
||||||
|
// Examples: 10→[4,4,2] | 8→[4,4] | 3→[3]
|
||||||
|
```
|
||||||
|
|
||||||
|
### Coordinator Orchestration
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
let modules_by_depth = group_by_depth(changed_modules);
|
||||||
|
let tool_order = construct_tool_order(primary_tool);
|
||||||
|
let project_name = detect_project_name();
|
||||||
|
|
||||||
|
for (let depth of sorted_depths.reverse()) { // N → 0
|
||||||
|
let batches = batch_modules(modules_by_depth[depth], 4);
|
||||||
|
let worker_tasks = [];
|
||||||
|
|
||||||
|
for (let batch of batches) {
|
||||||
|
worker_tasks.push(
|
||||||
|
Task(
|
||||||
|
subagent_type="memory-bridge",
|
||||||
|
description=`Generate docs for ${batch.length} modules at depth ${depth}`,
|
||||||
|
prompt=generate_batch_worker_prompt(batch, tool_order, depth, project_name, "related")
|
||||||
|
)
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
await parallel_execute(worker_tasks); // Batches run in parallel
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Batch Worker Prompt Template
|
||||||
|
|
||||||
|
```
|
||||||
|
PURPOSE: Generate/update documentation for assigned modules with tool fallback (related mode)
|
||||||
|
|
||||||
|
TASK:
|
||||||
|
Generate documentation for the following modules based on recent changes. For each module, try tools in order until success.
|
||||||
|
|
||||||
|
PROJECT: {{project_name}}
|
||||||
|
OUTPUT: .workflow/docs/{{project_name}}/
|
||||||
|
|
||||||
|
MODULES:
|
||||||
|
{{module_path_1}} (type: {{folder_type_1}})
|
||||||
|
{{module_path_2}} (type: {{folder_type_2}})
|
||||||
|
{{module_path_3}} (type: {{folder_type_3}})
|
||||||
|
{{module_path_4}} (type: {{folder_type_4}})
|
||||||
|
|
||||||
|
TOOLS (try in order):
|
||||||
|
1. {{tool_1}}
|
||||||
|
2. {{tool_2}}
|
||||||
|
3. {{tool_3}}
|
||||||
|
|
||||||
|
EXECUTION:
|
||||||
|
For each module above:
|
||||||
|
1. Try tool 1:
|
||||||
|
Bash({
|
||||||
|
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_1}}"}'`,
|
||||||
|
run_in_background: false
|
||||||
|
})
|
||||||
|
→ Success: Report "✅ {{module_path}} docs generated with {{tool_1}}", proceed to next module
|
||||||
|
→ Failure: Try tool 2
|
||||||
|
2. Try tool 2:
|
||||||
|
Bash({
|
||||||
|
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_2}}"}'`,
|
||||||
|
run_in_background: false
|
||||||
|
})
|
||||||
|
→ Success: Report "✅ {{module_path}} docs generated with {{tool_2}}", proceed to next module
|
||||||
|
→ Failure: Try tool 3
|
||||||
|
3. Try tool 3:
|
||||||
|
Bash({
|
||||||
|
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_3}}"}'`,
|
||||||
|
run_in_background: false
|
||||||
|
})
|
||||||
|
→ Success: Report "✅ {{module_path}} docs generated with {{tool_3}}", proceed to next module
|
||||||
|
→ Failure: Report "❌ FAILED: {{module_path}} failed all tools", proceed to next module
|
||||||
|
|
||||||
|
FOLDER TYPE HANDLING:
|
||||||
|
- code: Generate API.md + README.md
|
||||||
|
- navigation: Generate README.md only
|
||||||
|
|
||||||
|
REPORTING:
|
||||||
|
Report final summary with:
|
||||||
|
- Total processed: X modules
|
||||||
|
- Successful: Y modules
|
||||||
|
- Failed: Z modules
|
||||||
|
- Tool usage: {{tool_1}}:X, {{tool_2}}:Y, {{tool_3}}:Z
|
||||||
|
```
|
||||||
|
|
||||||
|
## Phase 4: Verification
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
// Check documentation files created/updated
|
||||||
|
Bash({command: 'find .workflow/docs -type f -name "*.md" 2>/dev/null | wc -l', run_in_background: false});
|
||||||
|
|
||||||
|
// Display recent changes
|
||||||
|
Bash({command: 'find .workflow/docs -type f -name "*.md" -mmin -60 2>/dev/null', run_in_background: false});
|
||||||
|
```
|
||||||
|
|
||||||
|
**Aggregate results**:
|
||||||
|
```
|
||||||
|
Documentation Generation Summary:
|
||||||
|
Total: 4 | Success: 4 | Failed: 0
|
||||||
|
|
||||||
|
Tool usage:
|
||||||
|
- gemini: 4 modules
|
||||||
|
- qwen: 0 modules (fallback)
|
||||||
|
- codex: 0 modules
|
||||||
|
|
||||||
|
Changes:
|
||||||
|
.workflow/docs/myproject/src/api/auth/API.md (new)
|
||||||
|
.workflow/docs/myproject/src/api/auth/README.md (new)
|
||||||
|
.workflow/docs/myproject/src/api/API.md (updated)
|
||||||
|
.workflow/docs/myproject/src/api/README.md (updated)
|
||||||
|
.workflow/docs/myproject/src/API.md (updated)
|
||||||
|
.workflow/docs/myproject/src/README.md (updated)
|
||||||
|
.workflow/docs/myproject/API.md (updated)
|
||||||
|
.workflow/docs/myproject/README.md (updated)
|
||||||
|
```
|
||||||
|
|
||||||
|
## Execution Summary
|
||||||
|
|
||||||
|
**Module Count Threshold**:
|
||||||
|
- **<15 modules**: Coordinator executes Phase 3A (Direct Execution)
|
||||||
|
- **≥15 modules**: Coordinator executes Phase 3B (Agent Batch Execution)
|
||||||
|
|
||||||
|
**Agent Hierarchy** (for ≥15 modules):
|
||||||
|
- **Coordinator**: Handles batch division, spawns worker agents per depth
|
||||||
|
- **Worker Agents**: Each processes 4 modules with tool fallback (related mode)
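The per-depth agent allocation lines shown in the plan (e.g. `1 agent [1]`, `4 agents [4, 4, 4, 2]`) follow directly from `batch_modules`:

```javascript
// Derive the agent allocation summary for all modules at one depth.
function allocation_summary(modules, batch_size = 4) {
  let sizes = batch_modules(modules, batch_size).map(batch => batch.length);
  return `${sizes.length} agent${sizes.length === 1 ? "" : "s"} [${sizes.join(", ")}]`;
}
// 14 modules → "4 agents [4, 4, 4, 2]"
```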
|
||||||
|
|
||||||
|
## Error Handling
|
||||||
|
|
||||||
|
**Batch Worker**:
|
||||||
|
- Tool fallback per module (auto-retry)
|
||||||
|
- Batch isolation (failures don't propagate)
|
||||||
|
- Clear per-module status reporting
|
||||||
|
|
||||||
|
**Coordinator**:
|
||||||
|
- No changes: Use fallback (recent 10 modules)
|
||||||
|
- User decline: No execution
|
||||||
|
- Verification fail: Report incomplete modules
|
||||||
|
- Partial failures: Continue execution, report failed modules
|
||||||
|
|
||||||
|
**Fallback Triggers**:
|
||||||
|
- Non-zero exit code
|
||||||
|
- Script timeout
|
||||||
|
- Unexpected output
|
||||||
|
|
||||||
|
## Output Structure
|
||||||
|
|
||||||
|
```
|
||||||
|
.workflow/docs/{project_name}/
|
||||||
|
├── src/ # Mirrors source structure
|
||||||
|
│ ├── modules/
|
||||||
|
│ │ ├── README.md
|
||||||
|
│ │ ├── auth/
|
||||||
|
│ │ │ ├── API.md # Updated based on code changes
|
||||||
|
│ │ │ └── README.md # Updated based on code changes
|
||||||
|
│ │ └── api/
|
||||||
|
│ │ ├── API.md
|
||||||
|
│ │ └── README.md
|
||||||
|
│ └── utils/
|
||||||
|
│ └── README.md
|
||||||
|
└── README.md
|
||||||
|
```
|
||||||
|
|
||||||
|
## Usage Examples
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Daily development documentation update
|
||||||
|
/memory:docs-related-cli
|
||||||
|
|
||||||
|
# After feature work with specific tool
|
||||||
|
/memory:docs-related-cli --tool qwen
|
||||||
|
|
||||||
|
# Code quality documentation review after implementation
|
||||||
|
/memory:docs-related-cli --tool codex
|
||||||
|
```
|
||||||
|
|
||||||
|
## Key Advantages
|
||||||
|
|
||||||
|
**Efficiency**: 30 modules → 8 agents (73% reduction)
|
||||||
|
**Resilience**: 3-tier fallback per module
|
||||||
|
**Performance**: Parallel agent batches within each depth; direct mode capped at 4 concurrent
|
||||||
|
**Context-aware**: Updates based on actual git changes
|
||||||
|
**Fast**: Only affected modules, not entire project
|
||||||
|
**Incremental**: Single strategy for focused updates
|
||||||
|
|
||||||
|
## Coordinator Checklist
|
||||||
|
|
||||||
|
- Parse `--tool` (default: gemini)
|
||||||
|
- Get project metadata (name, root)
|
||||||
|
- Detect changed modules via `ccw tool exec detect_changed_modules`
|
||||||
|
- **Smart filter modules** (auto-detect tech stack, skip tests/build/config/vendor)
|
||||||
|
- Cache git changes
|
||||||
|
- Apply fallback if no changes (recent 10 modules)
|
||||||
|
- Construct tool fallback order
|
||||||
|
- **Present filtered plan** with skip reasons and change types
|
||||||
|
- **Wait for y/n confirmation**
|
||||||
|
- Determine execution mode:
|
||||||
|
- **<15 modules**: Direct execution (Phase 3A)
|
||||||
|
- For each depth (N→0): Parallel module updates (max 4 concurrent) with tool fallback
|
||||||
|
- **≥15 modules**: Agent batch execution (Phase 3B)
|
||||||
|
- For each depth (N→0): Batch modules (4 per batch), spawn batch workers in parallel
|
||||||
|
- Wait for depth/batch completion
|
||||||
|
- Aggregate results
|
||||||
|
- Verification check (documentation files created/updated)
|
||||||
|
- Display summary + recent changes
|
||||||
|
|
||||||
|
## Comparison with Full Documentation Generation
|
||||||
|
|
||||||
|
| Aspect | Related Generation | Full Generation |
|
||||||
|
|--------|-------------------|-----------------|
|
||||||
|
| **Scope** | Changed modules only | All project modules |
|
||||||
|
| **Speed** | Fast (minutes) | Slower (10-30 min) |
|
||||||
|
| **Use case** | Daily development | Initial setup, major refactoring |
|
||||||
|
| **Strategy** | `single` (all) | `full` (L3) + `single` (L1-2) |
|
||||||
|
| **Trigger** | After commits | After setup or major changes |
|
||||||
|
| **Batching** | 4 modules/agent | 4 modules/agent |
|
||||||
|
| **Fallback** | gemini→qwen→codex | gemini→qwen→codex |
|
||||||
|
| **Complexity threshold** | ≤15 modules | ≤20 modules |
|
||||||
|
|
||||||
|
## Template Reference
|
||||||
|
|
||||||
|
Templates used from `~/.claude/workflows/cli-templates/prompts/documentation/`:
|
||||||
|
- `api.txt`: Code API documentation
|
||||||
|
- `module-readme.txt`: Module purpose, usage, dependencies
|
||||||
|
- `folder-navigation.txt`: Navigation README for folders
|
||||||
|
|
||||||
|
## Related Commands
|
||||||
|
|
||||||
|
- `/memory:docs-full-cli` - Full project documentation generation
|
||||||
|
- `/memory:docs` - Agent-based documentation planning workflow
|
||||||
|
- `/memory:update-related` - Update CLAUDE.md for changed modules
|
||||||
@@ -36,7 +36,6 @@ Lightweight planner that analyzes project structure, decomposes documentation wo
|
|||||||
| `my_app/src/modules/auth/` | `my_app` | `.workflow/docs/my_app/src/modules/auth/API.md` |
|
| `my_app/src/modules/auth/` | `my_app` | `.workflow/docs/my_app/src/modules/auth/API.md` |
|
||||||
| `another_project/lib/utils/` | `another_project` | `.workflow/docs/another_project/lib/utils/API.md` |
|
| `another_project/lib/utils/` | `another_project` | `.workflow/docs/another_project/lib/utils/API.md` |
|
||||||
|
|
||||||
**Benefits**: Easy to locate documentation, maintains logical organization, clear 1:1 mapping, supports any project structure.
|
|
||||||
|
|
||||||
## Parameters
|
## Parameters
|
||||||
|
|
||||||
@@ -65,12 +64,17 @@ Lightweight planner that analyzes project structure, decomposes documentation wo
|
|||||||
```bash
|
```bash
|
||||||
# Get target path, project name, and root
|
# Get target path, project name, and root
|
||||||
bash(pwd && basename "$(pwd)" && git rev-parse --show-toplevel 2>/dev/null || pwd && date +%Y%m%d-%H%M%S)
|
bash(pwd && basename "$(pwd)" && git rev-parse --show-toplevel 2>/dev/null || pwd && date +%Y%m%d-%H%M%S)
|
||||||
|
```
|
||||||
|
|
||||||
# Create session directories (replace timestamp)
|
```javascript
|
||||||
bash(mkdir -p .workflow/active/WFS-docs-{timestamp}/.{task,process,summaries})
|
// Create docs session (type: docs)
|
||||||
|
SlashCommand(command="/workflow:session:start --type docs --new \"{project_name}-docs-{timestamp}\"")
|
||||||
|
// Parse output to get sessionId
|
||||||
|
```
|
||||||
|
|
||||||
# Create workflow-session.json (replace values)
|
```bash
|
||||||
bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentation","status":"planning","timestamp":"2024-01-20T14:30:22+08:00","path":".","target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' | jq '.' > .workflow/active/WFS-docs-{timestamp}/workflow-session.json)
|
# Update workflow-session.json with docs-specific fields
|
||||||
|
bash(jq '. + {"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' .workflow/active/{sessionId}/workflow-session.json > tmp.json && mv tmp.json .workflow/active/{sessionId}/workflow-session.json)
|
||||||
```
|
```
|
||||||
|
|
||||||
### Phase 2: Analyze Structure
|
### Phase 2: Analyze Structure
|
||||||
@@ -81,10 +85,10 @@ bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentati
|
|||||||
|
|
||||||
```bash
|
```bash
|
||||||
# 1. Run folder analysis
|
# 1. Run folder analysis
|
||||||
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh)
|
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}')
|
||||||
|
|
||||||
# 2. Get top-level directories (first 2 path levels)
|
# 2. Get top-level directories (first 2 path levels)
|
||||||
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)
|
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}' | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)
|
||||||
|
|
||||||
# 3. Find existing docs (if directory exists)
|
# 3. Find existing docs (if directory exists)
|
||||||
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null; fi)
|
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null; fi)
|
||||||
@@ -181,7 +185,6 @@ Large Projects (single dir >10 docs):
|
|||||||
4. If single dir exceeds 10 docs, split by subdirectories
|
4. If single dir exceeds 10 docs, split by subdirectories
|
||||||
5. Create parallel Level 1 tasks with ≤10 docs each
|
5. Create parallel Level 1 tasks with ≤10 docs each
|
||||||
|
|
||||||
**Benefits**: Parallel execution, failure isolation, progress visibility, context sharing, document count control.
|
|
||||||
|
|
||||||
**Commands**:
|
**Commands**:
|
||||||
|
|
||||||
|
|||||||
@@ -109,7 +109,7 @@ Task(
|
|||||||
|
|
||||||
1. **Project Structure**
|
1. **Project Structure**
|
||||||
\`\`\`bash
|
\`\`\`bash
|
||||||
bash(~/.claude/scripts/get_modules_by_depth.sh)
|
bash(ccw tool exec get_modules_by_depth '{}')
|
||||||
\`\`\`
|
\`\`\`
|
||||||
|
|
||||||
2. **Core Documentation**
|
2. **Core Documentation**
|
||||||
|
|||||||
@@ -508,16 +508,7 @@ User triggers command
|
|||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## Benefits
|
|
||||||
|
|
||||||
- **Pure Orchestrator**: No task JSON generation, delegates to /memory:docs
|
|
||||||
- **Auto-Continue**: Autonomous 4-phase execution without user interaction
|
|
||||||
- **Intelligent Skip**: Detects existing docs and skips regeneration for fast SKILL updates
|
|
||||||
- **Always Fresh Index**: Phase 4 always executes to ensure SKILL.md stays synchronized
|
|
||||||
- **Simplified**: ~70% less code than previous version
|
|
||||||
- **Maintainable**: Changes to /memory:docs automatically apply
|
|
||||||
- **Direct Generation**: Phase 4 directly writes SKILL.md
|
|
||||||
- **Flexible**: Supports all /memory:docs options (tool, mode, cli-execute)
|
|
||||||
|
|
||||||
## Architecture
|
## Architecture
|
||||||
|
|
||||||
|
|||||||
@@ -36,13 +36,12 @@ Orchestrates project-wide CLAUDE.md updates using batched agent execution with a
|
|||||||
- **Use Case**: Deepest directories with unstructured file layouts
|
- **Use Case**: Deepest directories with unstructured file layouts
|
||||||
- **Behavior**: Generates CLAUDE.md for current directory AND each subdirectory containing files
|
- **Behavior**: Generates CLAUDE.md for current directory AND each subdirectory containing files
|
||||||
- **Context**: All files in current directory tree (`@**/*`)
|
- **Context**: All files in current directory tree (`@**/*`)
|
||||||
- **Benefits**: Creates foundation documentation for upper layers to reference
|
|
||||||
|
|
||||||
#### Single-Layer Strategy (Layers 1-2)
|
#### Single-Layer Strategy (Layers 1-2)
|
||||||
- **Use Case**: Upper layers that aggregate from existing documentation
|
- **Use Case**: Upper layers that aggregate from existing documentation
|
||||||
- **Behavior**: Generates CLAUDE.md only for current directory
|
- **Behavior**: Generates CLAUDE.md only for current directory
|
||||||
- **Context**: Direct children CLAUDE.md files + current directory code files
|
- **Context**: Direct children CLAUDE.md files + current directory code files
|
||||||
- **Benefits**: Minimal context consumption, clear layer separation
|
|
||||||
|
|
||||||
### Example Flow
|
### Example Flow
|
||||||
```
|
```
|
||||||
@@ -95,14 +94,15 @@ src/ (depth 1) → SINGLE-LAYER STRATEGY
|
|||||||
|
|
||||||
### Phase 1: Discovery & Analysis
|
### Phase 1: Discovery & Analysis
|
||||||
|
|
||||||
```bash
|
```javascript
|
||||||
# Cache git changes
|
// Cache git changes
|
||||||
bash(git add -A 2>/dev/null || true)
|
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
|
||||||
|
|
||||||
# Get module structure
|
// Get module structure
|
||||||
bash(~/.claude/scripts/get_modules_by_depth.sh list)
|
Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});
|
||||||
# OR with --path
|
|
||||||
bash(cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list)
|
// OR with --path
|
||||||
|
Bash({command: "cd <target-path> && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});
|
||||||
```
|
```
|
||||||
|
|
||||||
**Parse output** `depth:N|path:<PATH>|...` to extract module paths and count.
|
**Parse output** `depth:N|path:<PATH>|...` to extract module paths and count.
|
||||||
@@ -172,26 +172,23 @@ Update Plan:
|
|||||||
|
|
||||||
**Strategy**: Parallel execution within layer (max 4 concurrent), no agent overhead.
|
**Strategy**: Parallel execution within layer (max 4 concurrent), no agent overhead.
|
||||||
|
|
||||||
```javascript
|
**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.
|
||||||
// Group modules by LAYER (not depth)
|
|
||||||
let modules_by_layer = group_by_layer(module_list);
|
|
||||||
let tool_order = construct_tool_order(primary_tool);
|
|
||||||
|
|
||||||
// Process by LAYER (3 → 2 → 1), not by depth
|
```javascript
|
||||||
for (let layer of [3, 2, 1]) {
|
for (let layer of [3, 2, 1]) {
|
||||||
if (modules_by_layer[layer].length === 0) continue;
|
if (modules_by_layer[layer].length === 0) continue;
|
||||||
|
|
||||||
let batches = batch_modules(modules_by_layer[layer], 4);
|
let batches = batch_modules(modules_by_layer[layer], 4);
|
||||||
|
|
||||||
for (let batch of batches) {
|
for (let batch of batches) {
|
||||||
let parallel_tasks = batch.map(module => {
|
let parallel_tasks = batch.map(module => {
|
||||||
return async () => {
|
return async () => {
|
||||||
// Auto-determine strategy based on depth
|
|
||||||
let strategy = module.depth >= 3 ? "multi-layer" : "single-layer";
|
let strategy = module.depth >= 3 ? "multi-layer" : "single-layer";
|
||||||
|
|
||||||
for (let tool of tool_order) {
|
for (let tool of tool_order) {
|
||||||
let exit_code = bash(`cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "${strategy}" "." "${tool}"`);
|
Bash({
|
||||||
if (exit_code === 0) {
|
command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"${strategy}","path":".","tool":"${tool}"}'`,
|
||||||
|
run_in_background: false
|
||||||
|
});
|
||||||
|
if (bash_result.exit_code === 0) {
|
||||||
report(`✅ ${module.path} (Layer ${layer}) updated with ${tool}`);
|
report(`✅ ${module.path} (Layer ${layer}) updated with ${tool}`);
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
@@ -200,7 +197,6 @@ for (let layer of [3, 2, 1]) {
|
|||||||
return false;
|
return false;
|
||||||
};
|
};
|
||||||
});
|
});
|
||||||
|
|
||||||
await Promise.all(parallel_tasks.map(task => task()));
|
await Promise.all(parallel_tasks.map(task => task()));
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -248,14 +244,17 @@ MODULES:
|
|||||||
|
|
||||||
TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}
|
TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}
|
||||||
|
|
||||||
EXECUTION SCRIPT: ~/.claude/scripts/update_module_claude.sh
|
EXECUTION SCRIPT: ccw tool exec update_module_claude
|
||||||
- Accepts strategy parameter: multi-layer | single-layer
|
- Accepts strategy parameter: multi-layer | single-layer
|
||||||
- Tool execution via direct CLI commands (gemini/qwen/codex)
|
- Tool execution via direct CLI commands (gemini/qwen/codex)
|
||||||
|
|
||||||
EXECUTION FLOW (for each module):
|
EXECUTION FLOW (for each module):
|
||||||
1. Tool fallback loop (exit on first success):
|
1. Tool fallback loop (exit on first success):
|
||||||
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
|
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
|
||||||
bash(cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "{{strategy}}" "." "${tool}")
|
Bash({
|
||||||
|
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"{{strategy}}","path":".","tool":"${tool}"}'`,
|
||||||
|
run_in_background: false
|
||||||
|
})
|
||||||
exit_code=$?
|
exit_code=$?
|
||||||
|
|
||||||
if [ $exit_code -eq 0 ]; then
|
if [ $exit_code -eq 0 ]; then
|
||||||
@@ -287,12 +286,12 @@ REPORTING FORMAT:
|
|||||||
```
|
```
|
||||||
### Phase 4: Safety Verification
|
### Phase 4: Safety Verification
|
||||||
|
|
||||||
```bash
|
```javascript
|
||||||
# Check only CLAUDE.md modified
|
// Check only CLAUDE.md files modified
|
||||||
bash(git diff --cached --name-only | grep -v "CLAUDE.md" || echo "Only CLAUDE.md files modified")
|
Bash({command: 'git diff --cached --name-only | grep -v "CLAUDE.md" || echo "Only CLAUDE.md files modified"', run_in_background: false});
|
||||||
|
|
||||||
# Display status
|
// Display status
|
||||||
bash(git status --short)
|
Bash({command: "git status --short", run_in_background: false});
|
||||||
```
|
```
|
||||||
|
|
||||||
**Result Summary**:
|
**Result Summary**:
|
||||||
|
|||||||
@@ -39,12 +39,12 @@ Orchestrates context-aware CLAUDE.md updates for changed modules using batched a
|
|||||||
|
|
||||||
## Phase 1: Change Detection & Analysis
|
## Phase 1: Change Detection & Analysis
|
||||||
|
|
||||||
```bash
|
```javascript
|
||||||
# Detect changed modules (no index refresh needed)
|
// Detect changed modules
|
||||||
bash(~/.claude/scripts/detect_changed_modules.sh list)
|
Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});
|
||||||
|
|
||||||
# Cache git changes
|
// Cache git changes
|
||||||
bash(git add -A 2>/dev/null || true)
|
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
|
||||||
```
|
```
|
||||||
|
|
||||||
**Parse output** `depth:N|path:<PATH>|change:<TYPE>` to extract affected modules.
|
**Parse output** `depth:N|path:<PATH>|change:<TYPE>` to extract affected modules.
|
||||||
@@ -89,47 +89,36 @@ Related Update Plan:
|
|||||||
|
|
||||||
## Phase 3A: Direct Execution (<15 modules)
|
## Phase 3A: Direct Execution (<15 modules)
|
||||||
|
|
||||||
**Strategy**: Parallel execution within depth (max 4 concurrent), no agent overhead, tool fallback per module.
|
**Strategy**: Parallel execution within depth (max 4 concurrent), no agent overhead.
|
||||||
|
|
||||||
|
**CRITICAL**: All Bash commands use `run_in_background: false` for synchronous execution.
|
||||||
|
|
||||||
```javascript
|
```javascript
|
||||||
let modules_by_depth = group_by_depth(changed_modules);
|
|
||||||
let tool_order = construct_tool_order(primary_tool);
|
|
||||||
|
|
||||||
for (let depth of sorted_depths.reverse()) { // N → 0
|
for (let depth of sorted_depths.reverse()) { // N → 0
|
||||||
let modules = modules_by_depth[depth];
|
let batches = batch_modules(modules_by_depth[depth], 4);
|
||||||
let batches = batch_modules(modules, 4); // Split into groups of 4
|
|
||||||
|
|
||||||
for (let batch of batches) {
|
for (let batch of batches) {
|
||||||
// Execute batch in parallel (max 4 concurrent)
|
|
||||||
let parallel_tasks = batch.map(module => {
|
let parallel_tasks = batch.map(module => {
|
||||||
return async () => {
|
return async () => {
|
||||||
let success = false;
|
|
||||||
for (let tool of tool_order) {
|
for (let tool of tool_order) {
|
||||||
let exit_code = bash(cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "${tool}");
|
Bash({
|
||||||
if (exit_code === 0) {
|
command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"${tool}"}'`,
|
||||||
report("${module.path} updated with ${tool}");
|
run_in_background: false
|
||||||
success = true;
|
});
|
||||||
break;
|
if (bash_result.exit_code === 0) {
|
||||||
|
report(`✅ ${module.path} updated with ${tool}`);
|
||||||
|
return true;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
if (!success) {
|
report(`❌ FAILED: ${module.path} failed all tools`);
|
||||||
report("FAILED: ${module.path} failed all tools");
|
return false;
|
||||||
}
|
|
||||||
};
|
};
|
||||||
});
|
});
|
||||||
|
await Promise.all(parallel_tasks.map(task => task()));
|
||||||
await Promise.all(parallel_tasks.map(task => task())); // Run batch in parallel
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
**Benefits**:
|
|
||||||
- No agent startup overhead
|
|
||||||
- Parallel execution within depth (max 4 concurrent)
|
|
||||||
- Tool fallback still applies per module
|
|
||||||
- Faster for small changesets (<15 modules)
|
|
||||||
- Same batching strategy as Phase 3B but without agent layer
|
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## Phase 3B: Agent Batch Execution (≥15 modules)
|
## Phase 3B: Agent Batch Execution (≥15 modules)
|
||||||
@@ -193,19 +182,27 @@ TOOLS (try in order):
|
|||||||
|
|
||||||
EXECUTION:
|
EXECUTION:
|
||||||
For each module above:
|
For each module above:
|
||||||
1. cd "{{module_path}}"
|
1. Try tool 1:
|
||||||
2. Try tool 1:
|
Bash({
|
||||||
bash(cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_1}}")
|
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_1}}"}'`,
|
||||||
→ Success: Report "{{module_path}} updated with {{tool_1}}", proceed to next module
|
run_in_background: false
|
||||||
|
})
|
||||||
|
→ Success: Report "✅ {{module_path}} updated with {{tool_1}}", proceed to next module
|
||||||
→ Failure: Try tool 2
|
→ Failure: Try tool 2
|
||||||
3. Try tool 2:
|
2. Try tool 2:
|
||||||
bash(cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_2}}")
|
Bash({
|
||||||
→ Success: Report "{{module_path}} updated with {{tool_2}}", proceed to next module
|
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_2}}"}'`,
|
||||||
|
run_in_background: false
|
||||||
|
})
|
||||||
|
→ Success: Report "✅ {{module_path}} updated with {{tool_2}}", proceed to next module
|
||||||
→ Failure: Try tool 3
|
→ Failure: Try tool 3
|
||||||
4. Try tool 3:
|
3. Try tool 3:
|
||||||
bash(cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_3}}")
|
Bash({
|
||||||
→ Success: Report "{{module_path}} updated with {{tool_3}}", proceed to next module
|
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_3}}"}'`,
|
||||||
→ Failure: Report "FAILED: {{module_path}} failed all tools", proceed to next module
|
run_in_background: false
|
||||||
|
})
|
||||||
|
→ Success: Report "✅ {{module_path}} updated with {{tool_3}}", proceed to next module
|
||||||
|
→ Failure: Report "❌ FAILED: {{module_path}} failed all tools", proceed to next module
|
||||||
|
|
||||||
REPORTING:
|
REPORTING:
|
||||||
Report final summary with:
|
Report final summary with:
|
||||||
@@ -213,30 +210,16 @@ Report final summary with:
|
|||||||
- Successful: Y modules
|
- Successful: Y modules
|
||||||
- Failed: Z modules
|
- Failed: Z modules
|
||||||
- Tool usage: {{tool_1}}:X, {{tool_2}}:Y, {{tool_3}}:Z
|
- Tool usage: {{tool_1}}:X, {{tool_2}}:Y, {{tool_3}}:Z
|
||||||
- Detailed results for each module
|
|
||||||
```
|
```
|
||||||
|
|
||||||
### Example Execution
|
|
||||||
|
|
||||||
**Depth 3 (new module)**:
|
|
||||||
```javascript
|
|
||||||
Task(subagent_type="memory-bridge", batch=[./src/api/auth], mode="related")
|
|
||||||
```
|
|
||||||
|
|
||||||
**Benefits**:
|
|
||||||
- 4 modules → 1 agent (75% reduction)
|
|
||||||
- Parallel batches, sequential within batch
|
|
||||||
- Each module gets full fallback chain
|
|
||||||
- Context-aware updates based on git changes
|
|
||||||
|
|
||||||
## Phase 4: Safety Verification
|
## Phase 4: Safety Verification
|
||||||
|
|
||||||
```bash
|
```javascript
|
||||||
# Check only CLAUDE.md modified
|
// Check only CLAUDE.md modified
|
||||||
bash(git diff --cached --name-only | grep -v "CLAUDE.md" || echo "Only CLAUDE.md files modified")
|
Bash({command: 'git diff --cached --name-only | grep -v "CLAUDE.md" || echo "Only CLAUDE.md files modified"', run_in_background: false});
|
||||||
|
|
||||||
# Display statistics
|
// Display statistics
|
||||||
bash(git diff --stat)
|
Bash({command: "git diff --stat", run_in_background: false});
|
||||||
```
|
```
|
||||||
|
|
||||||
**Aggregate results**:
|
**Aggregate results**:
|
||||||
|
|||||||
@@ -32,12 +32,16 @@ Identify inconsistencies, duplications, ambiguities, and underspecified items be
|
|||||||
IF --session parameter provided:
|
IF --session parameter provided:
|
||||||
session_id = provided session
|
session_id = provided session
|
||||||
ELSE:
|
ELSE:
|
||||||
CHECK: find .workflow/active/ -name "WFS-*" -type d
|
# Auto-detect active session
|
||||||
IF active_session EXISTS:
|
active_sessions = bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null)
|
||||||
session_id = get_active_session()
|
IF active_sessions is empty:
|
||||||
ELSE:
|
|
||||||
ERROR: "No active workflow session found. Use --session <session-id>"
|
ERROR: "No active workflow session found. Use --session <session-id>"
|
||||||
EXIT
|
EXIT
|
||||||
|
ELSE IF active_sessions has multiple entries:
|
||||||
|
# Use most recently modified session
|
||||||
|
session_id = bash(ls -td .workflow/active/WFS-*/ 2>/dev/null | head -1 | xargs basename)
|
||||||
|
ELSE:
|
||||||
|
session_id = basename(active_sessions[0])
|
||||||
|
|
||||||
# Derive absolute paths
|
# Derive absolute paths
|
||||||
session_dir = .workflow/active/WFS-{session}
|
session_dir = .workflow/active/WFS-{session}
|
||||||
@@ -45,13 +49,15 @@ brainstorm_dir = session_dir/.brainstorming
|
|||||||
task_dir = session_dir/.task
|
task_dir = session_dir/.task
|
||||||
|
|
||||||
# Validate required artifacts
|
# Validate required artifacts
|
||||||
SYNTHESIS = brainstorm_dir/role analysis documents
|
# Note: "role analysis documents" refers to [role]/analysis.md files (e.g., product-manager/analysis.md)
|
||||||
|
SYNTHESIS_DIR = brainstorm_dir # Contains role analysis files: */analysis.md
|
||||||
IMPL_PLAN = session_dir/IMPL_PLAN.md
|
IMPL_PLAN = session_dir/IMPL_PLAN.md
|
||||||
TASK_FILES = Glob(task_dir/*.json)
|
TASK_FILES = Glob(task_dir/*.json)
|
||||||
|
|
||||||
# Abort if missing
|
# Abort if missing
|
||||||
IF NOT EXISTS(SYNTHESIS):
|
SYNTHESIS_FILES = Glob(brainstorm_dir/*/analysis.md)
|
||||||
ERROR: "role analysis documents not found. Run /workflow:brainstorm:synthesis first"
|
IF SYNTHESIS_FILES.count == 0:
|
||||||
|
ERROR: "No role analysis documents found in .brainstorming/*/analysis.md. Run /workflow:brainstorm:synthesis first"
|
||||||
EXIT
|
EXIT
|
||||||
|
|
||||||
IF NOT EXISTS(IMPL_PLAN):
|
IF NOT EXISTS(IMPL_PLAN):
|
||||||
@@ -95,7 +101,7 @@ Load only minimal necessary context from each artifact:
|
|||||||
- Dependencies (depends_on, blocks)
|
- Dependencies (depends_on, blocks)
|
||||||
- Context (requirements, focus_paths, acceptance, artifacts)
|
- Context (requirements, focus_paths, acceptance, artifacts)
|
||||||
- Flow control (pre_analysis, implementation_approach)
|
- Flow control (pre_analysis, implementation_approach)
|
||||||
- Meta (complexity, priority, use_codex)
|
- Meta (complexity, priority)
|
||||||
|
|
||||||
### 3. Build Semantic Models
|
### 3. Build Semantic Models
|
||||||
|
|
||||||
@@ -135,27 +141,27 @@ Focus on high-signal findings. Limit to 50 findings total; aggregate remainder i
 - **Unmapped Tasks**: Tasks with no clear requirement linkage
 - **NFR Coverage Gaps**: Non-functional requirements (performance, security, scalability) not reflected in tasks

-#### B. Consistency Validation
+#### C. Consistency Validation

 - **Requirement Conflicts**: Tasks contradicting synthesis requirements
 - **Architecture Drift**: IMPL_PLAN architecture not matching synthesis ADRs
 - **Terminology Drift**: Same concept named differently across IMPL_PLAN and tasks
 - **Data Model Inconsistency**: Tasks referencing entities/fields not in synthesis data model

-#### C. Dependency Integrity
+#### D. Dependency Integrity

 - **Circular Dependencies**: Task A depends on B, B depends on C, C depends on A
 - **Missing Dependencies**: Task requires outputs from another task but no explicit dependency
 - **Broken Dependencies**: Task depends on non-existent task ID
 - **Logical Ordering Issues**: Implementation tasks before foundational setup without dependency note

-#### D. Synthesis Alignment
+#### E. Synthesis Alignment

 - **Priority Conflicts**: High-priority synthesis requirements mapped to low-priority tasks
 - **Success Criteria Mismatch**: IMPL_PLAN success criteria not covering synthesis acceptance criteria
 - **Risk Mitigation Gaps**: Critical risks in synthesis without corresponding mitigation tasks

-#### E. Task Specification Quality
+#### F. Task Specification Quality

 - **Ambiguous Focus Paths**: Tasks with vague or missing focus_paths
 - **Underspecified Acceptance**: Tasks without clear acceptance criteria
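A minimal sketch of how the circular and broken dependency checks above could be implemented over an in-memory task list. The task shape is assumed for illustration and is not taken from the repository.

```javascript
// tasks: [{ id: "T1", dependsOn: ["T2"] }, ...] — assumed shape.
function checkDependencies(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]));
  const findings = [];

  // Broken dependencies: reference to a non-existent task ID.
  for (const t of tasks) {
    for (const dep of t.dependsOn) {
      if (!byId.has(dep)) {
        findings.push({ type: "broken_dependency", task: t.id, missing: dep });
      }
    }
  }

  // Circular dependencies: depth-first search with a recursion stack.
  const visiting = new Set();
  const done = new Set();
  function visit(id, path) {
    if (done.has(id)) return;
    if (visiting.has(id)) {
      findings.push({ type: "circular_dependency", cycle: [...path, id] });
      return;
    }
    visiting.add(id);
    for (const dep of byId.get(id)?.dependsOn ?? []) visit(dep, [...path, id]);
    visiting.delete(id);
    done.add(id);
  }
  for (const t of tasks) visit(t.id, []);

  return findings;
}
```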
@@ -163,12 +169,12 @@ Focus on high-signal findings. Limit to 50 findings total; aggregate remainder i
 - **Weak Flow Control**: Tasks without clear implementation_approach or pre_analysis steps
 - **Missing Target Files**: Tasks without flow_control.target_files specification

-#### F. Duplication Detection
+#### G. Duplication Detection

 - **Overlapping Task Scope**: Multiple tasks with nearly identical descriptions
 - **Redundant Requirements Coverage**: Same requirement covered by multiple tasks without clear partitioning

-#### G. Feasibility Assessment
+#### H. Feasibility Assessment

 - **Complexity Misalignment**: Task marked "simple" but requires multiple file modifications
 - **Resource Conflicts**: Parallel tasks requiring same resources/files
@@ -203,7 +209,9 @@ Use this heuristic to prioritize findings:

 ### 6. Produce Compact Analysis Report

-Output a Markdown report (no file writes) with the following structure:
+**Report Generation**: Generate report content and save to file.
+
+Output a Markdown report with the following structure:

 ```markdown
 ## Action Plan Verification Report
@@ -217,7 +225,11 @@ Output a Markdown report (no file writes) with the following structure:
 ### Executive Summary

 - **Overall Risk Level**: CRITICAL | HIGH | MEDIUM | LOW
-- **Recommendation**: BLOCK_EXECUTION | PROCEED_WITH_FIXES | PROCEED_WITH_CAUTION | PROCEED
+- **Recommendation**: (See decision matrix below)
+  - BLOCK_EXECUTION: Critical issues exist (must fix before proceeding)
+  - PROCEED_WITH_FIXES: High issues exist, no critical (fix recommended before execution)
+  - PROCEED_WITH_CAUTION: Medium issues only (proceed with awareness)
+  - PROCEED: Low issues only or no issues (safe to execute)
 - **Critical Issues**: {count}
 - **High Issues**: {count}
 - **Medium Issues**: {count}
@@ -322,14 +334,27 @@ Output a Markdown report (no file writes) with the following structure:

 #### Action Recommendations

-**If CRITICAL Issues Exist**:
-- **BLOCK EXECUTION** - Resolve critical issues before proceeding
-- Use TodoWrite to track all required fixes
-- Fix broken dependencies and circular references
-
-**If Only HIGH/MEDIUM/LOW Issues**:
-- **PROCEED WITH CAUTION** - Fix high-priority issues first
-- Use TodoWrite to systematically track and complete all improvements
+**Recommendation Decision Matrix**:
+
+| Condition | Recommendation | Action |
+|-----------|----------------|--------|
+| Critical > 0 | BLOCK_EXECUTION | Must resolve all critical issues before proceeding |
+| Critical = 0, High > 0 | PROCEED_WITH_FIXES | Fix high-priority issues before execution |
+| Critical = 0, High = 0, Medium > 0 | PROCEED_WITH_CAUTION | Proceed with awareness of medium issues |
+| Only Low or None | PROCEED | Safe to execute workflow |
+
+**If CRITICAL Issues Exist** (BLOCK_EXECUTION):
+- Resolve all critical issues before proceeding
+- Use TodoWrite to track required fixes
+- Fix broken dependencies and circular references first
+
+**If HIGH Issues Exist** (PROCEED_WITH_FIXES):
+- Fix high-priority issues before execution
+- Use TodoWrite to systematically track and complete improvements
+
+**If Only MEDIUM/LOW Issues** (PROCEED_WITH_CAUTION / PROCEED):
+- Can proceed with execution
+- Address issues during or after implementation

 #### TodoWrite-Based Remediation Workflow

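The decision matrix introduced in this hunk maps directly onto a small function. The sketch below is illustrative only; the counts object is an assumed shape, not part of the repository.

```javascript
// counts: { critical, high, medium, low } — assumed severity tally.
function recommend(counts) {
  if (counts.critical > 0) return "BLOCK_EXECUTION";
  if (counts.high > 0) return "PROCEED_WITH_FIXES";
  if (counts.medium > 0) return "PROCEED_WITH_CAUTION";
  return "PROCEED";
}

// Example: recommend({ critical: 0, high: 2, medium: 5, low: 1 }) → "PROCEED_WITH_FIXES"
```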
@@ -359,13 +384,18 @@ Priority Order:

 ### 7. Save Report and Execute TodoWrite-Based Remediation

-**Save Analysis Report**:
+**Step 7.1: Save Analysis Report**:
 ```bash
 report_path = ".workflow/active/WFS-{session}/.process/ACTION_PLAN_VERIFICATION.md"
 Write(report_path, full_report_content)
 ```

-**After Report Generation**:
+**Step 7.2: Display Report Summary to User**:
+- Show executive summary with counts
+- Display recommendation (BLOCK/PROCEED_WITH_FIXES/PROCEED_WITH_CAUTION/PROCEED)
+- List critical and high issues if any
+
+**Step 7.3: After Report Generation**:

 1. **Extract Findings**: Parse all issues by severity
 2. **Create TodoWrite Task List**: Convert findings to actionable todos
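For illustration, the "convert findings to actionable todos" step could look roughly like the sketch below. The finding shape is an assumption; TodoWrite is the tool named in the document, and only the entry shape used elsewhere in these files is mirrored here.

```javascript
// findings: [{ severity: "critical" | "high" | "medium" | "low", summary: "..." }]
const order = { critical: 0, high: 1, medium: 2, low: 3 };

function findingsToTodos(findings) {
  return [...findings]
    .sort((a, b) => order[a.severity] - order[b.severity])
    .map(f => ({
      content: `[${f.severity.toUpperCase()}] ${f.summary}`,
      status: "pending",
      activeForm: `Fixing: ${f.summary}`
    }));
}
```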
@@ -2,452 +2,359 @@
 name: artifacts
 description: Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis
 argument-hint: "topic or challenge description [--count N]"
-allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*)
+allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*), AskUserQuestion(*)
 ---

 ## Overview

-Six-phase workflow: **Automatic project context collection** → Extract topic challenges → Select roles → Generate task-specific questions → Detect conflicts → Generate confirmed guidance (declarative statements only).
+Seven-phase workflow: **Context collection** → **Topic analysis** → **Role selection** → **Role questions** → **Conflict resolution** → **Final check** → **Generate specification**
+
+All user interactions use AskUserQuestion tool (max 4 questions per call, multi-round).

 **Input**: `"GOAL: [objective] SCOPE: [boundaries] CONTEXT: [background]" [--count N]`
-**Output**: `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md` (CONFIRMED/SELECTED format)
-**Core Principle**: Questions dynamically generated from project context + topic keywords/challenges, NOT from generic templates
+**Output**: `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md`
+**Core Principle**: Questions dynamically generated from project context + topic keywords, NOT generic templates

 **Parameters**:
 - `topic` (required): Topic or challenge description (structured format recommended)
-- `--count N` (optional): Number of roles user WANTS to select (system will recommend N+2 options for user to choose from, default: 3)
+- `--count N` (optional): Number of roles to select (system recommends N+2 options, default: 3)

+---
+
+## Quick Reference
+
+### Phase Summary
+
+| Phase | Goal | AskUserQuestion | Storage |
+|-------|------|-----------------|---------|
+| 0 | Context collection | - | context-package.json |
+| 1 | Topic analysis | 2-4 questions | intent_context |
+| 2 | Role selection | 1 multi-select | selected_roles |
+| 3 | Role questions | 3-4 per role | role_decisions[role] |
+| 4 | Conflict resolution | max 4 per round | cross_role_decisions |
+| 4.5 | Final check | progressive rounds | additional_decisions |
+| 5 | Generate spec | - | guidance-specification.md |
+
+### AskUserQuestion Pattern
+
+```javascript
+// Single-select (Phase 1, 3, 4)
+AskUserQuestion({
+  questions: [
+    {
+      question: "{问题文本}",
+      header: "{短标签}", // max 12 chars
+      multiSelect: false,
+      options: [
+        { label: "{选项}", description: "{说明和影响}" },
+        { label: "{选项}", description: "{说明和影响}" },
+        { label: "{选项}", description: "{说明和影响}" }
+      ]
+    }
+    // ... max 4 questions per call
+  ]
+})
+
+// Multi-select (Phase 2)
+AskUserQuestion({
+  questions: [{
+    question: "请选择 {count} 个角色",
+    header: "角色选择",
+    multiSelect: true,
+    options: [/* max 4 options per call */]
+  }]
+})
+```
+
+### Multi-Round Execution
+
+```javascript
+const BATCH_SIZE = 4;
+for (let i = 0; i < allQuestions.length; i += BATCH_SIZE) {
+  const batch = allQuestions.slice(i, i + BATCH_SIZE);
+  AskUserQuestion({ questions: batch });
+  // Store responses before next round
+}
+```
+
+---
+
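Building on the multi-round pattern in the new Quick Reference, the sketch below accumulates answers between rounds. The return shape of AskUserQuestion is an assumption made for this illustration and is not documented in this diff.

```javascript
// Assumes AskUserQuestion resolves to an array of { header, answer } objects (assumption).
async function askInBatches(allQuestions, batchSize = 4) {
  const answers = [];
  for (let i = 0; i < allQuestions.length; i += batchSize) {
    const batch = allQuestions.slice(i, i + batchSize);
    const result = await AskUserQuestion({ questions: batch });
    answers.push(...result); // store responses before the next round
  }
  return answers;
}
```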
 ## Task Tracking

-**⚠️ TodoWrite Rule**: EXTEND auto-parallel's task list (NOT replace/overwrite)
+**TodoWrite Rule**: EXTEND auto-parallel's task list (NOT replace/overwrite)

 **When called from auto-parallel**:
-- Find the artifacts parent task: "Execute artifacts command for interactive framework generation"
-- Mark parent task as "in_progress"
-- APPEND artifacts sub-tasks AFTER the parent task (Phase 0-5)
-- Mark each sub-task as it completes
-- When Phase 5 completes, mark parent task as "completed"
-- **PRESERVE all other auto-parallel tasks** (role agents, synthesis)
+- Find artifacts parent task → Mark "in_progress"
+- APPEND sub-tasks (Phase 0-5) → Mark each as it completes
+- When Phase 5 completes → Mark parent "completed"
+- PRESERVE all other auto-parallel tasks

 **Standalone Mode**:
 ```json
 [
-  {"content": "Initialize session (.workflow/active/ session check, parse --count parameter)", "status": "pending", "activeForm": "Initializing"},
-  {"content": "Phase 0: Automatic project context collection (call context-gather)", "status": "pending", "activeForm": "Phase 0 context collection"},
-  {"content": "Phase 1: Extract challenges, output 2-4 task-specific questions, wait for user input", "status": "pending", "activeForm": "Phase 1 topic analysis"},
-  {"content": "Phase 2: Recommend count+2 roles, output role selection, wait for user input", "status": "pending", "activeForm": "Phase 2 role selection"},
-  {"content": "Phase 3: Generate 3-4 questions per role, output and wait for answers (max 10 per round)", "status": "pending", "activeForm": "Phase 3 role questions"},
-  {"content": "Phase 4: Detect conflicts, output clarifications, wait for answers (max 10 per round)", "status": "pending", "activeForm": "Phase 4 conflict resolution"},
-  {"content": "Phase 5: Transform Q&A to declarative statements, write guidance-specification.md", "status": "pending", "activeForm": "Phase 5 document generation"}
+  {"content": "Initialize session", "status": "pending", "activeForm": "Initializing"},
+  {"content": "Phase 0: Context collection", "status": "pending", "activeForm": "Phase 0"},
+  {"content": "Phase 1: Topic analysis (2-4 questions)", "status": "pending", "activeForm": "Phase 1"},
+  {"content": "Phase 2: Role selection", "status": "pending", "activeForm": "Phase 2"},
+  {"content": "Phase 3: Role questions (per role)", "status": "pending", "activeForm": "Phase 3"},
+  {"content": "Phase 4: Conflict resolution", "status": "pending", "activeForm": "Phase 4"},
+  {"content": "Phase 4.5: Final clarification", "status": "pending", "activeForm": "Phase 4.5"},
+  {"content": "Phase 5: Generate specification", "status": "pending", "activeForm": "Phase 5"}
 ]
 ```

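A sketch of the EXTEND-not-replace rule above, treating the todo list as a plain array. The helper name and parent-matching rule are illustrative assumptions, not code from the repository.

```javascript
// existing: the auto-parallel todo list; subTasks: the artifacts Phase 0-5 entries.
function appendAfterParent(existing, parentContent, subTasks) {
  const idx = existing.findIndex(t => t.content === parentContent);
  if (idx === -1) return [...existing, ...subTasks]; // fallback: append at end
  const updated = [...existing];
  updated[idx] = { ...updated[idx], status: "in_progress" };
  updated.splice(idx + 1, 0, ...subTasks); // insert sub-tasks, never overwrite other tasks
  return updated;
}
```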
-## User Interaction Protocol
-
-### Question Output Format
-
-All questions output as structured text (detailed format with descriptions):
-
-```markdown
-【问题{N} - {短标签}】{问题文本}
-a) {选项标签}
-   说明:{选项说明和影响}
-b) {选项标签}
-   说明:{选项说明和影响}
-c) {选项标签}
-   说明:{选项说明和影响}
-
-请回答:{N}a 或 {N}b 或 {N}c
-```
-
-**Multi-select format** (Phase 2 role selection):
-```markdown
-【角色选择】请选择 {count} 个角色参与头脑风暴分析
-
-a) {role-name} ({中文名})
-   推荐理由:{基于topic的相关性说明}
-b) {role-name} ({中文名})
-   推荐理由:{基于topic的相关性说明}
-...
-
-支持格式:
-- 分别选择:2a 2c 2d (选择第2题的a、c、d选项)
-- 合并语法:2acd (选择a、c、d)
-- 逗号分隔:2a,c,d
-
-请输入选择:
-```
-
-### Input Parsing Rules
-
-**Supported formats** (intelligent parsing):
-
-1. **Space-separated**: `1a 2b 3c` → Q1:a, Q2:b, Q3:c
-2. **Comma-separated**: `1a,2b,3c` → Q1:a, Q2:b, Q3:c
-3. **Multi-select combined**: `2abc` → Q2: options a,b,c
-4. **Multi-select spaces**: `2 a b c` → Q2: options a,b,c
-5. **Multi-select comma**: `2a,b,c` → Q2: options a,b,c
-6. **Natural language**: `问题1选a` → 1a (fallback parsing)
-
-**Parsing algorithm**:
-- Extract question numbers and option letters
-- Validate question numbers match output
-- Validate option letters exist for each question
-- If ambiguous/invalid, output example format and request re-input
-
-**Error handling** (lenient):
-- Recognize common variations automatically
-- If parsing fails, show example and wait for clarification
-- Support re-input without penalty
-
-### Batching Strategy
-
-**Batch limits**:
-- **Default**: Maximum 10 questions per round
-- **Phase 2 (role selection)**: Display all recommended roles at once (count+2 roles)
-- **Auto-split**: If questions > 10, split into multiple rounds with clear round indicators
-
-**Round indicators**:
-```markdown
-===== 第 1 轮问题 (共2轮) =====
-【问题1 - ...】...
-【问题2 - ...】...
-...
-【问题10 - ...】...
-
-请回答 (格式: 1a 2b ... 10c):
-```
-
-### Interaction Flow
-
-**Standard flow**:
-1. Output questions in formatted text
-2. Output expected input format example
-3. Wait for user input
-4. Parse input with intelligent matching
-5. If parsing succeeds → Store answers and continue
-6. If parsing fails → Show error, example, and wait for re-input
-
-**No question/option limits**: Text-based interaction removes previous 4-question and 4-option restrictions
+---

 ## Execution Phases

 ### Session Management

 - Check `.workflow/active/` for existing sessions
-- Multiple sessions → Prompt selection | Single → Use it | None → Create `WFS-[topic-slug]`
-- Parse `--count N` parameter from user input (default: 3 if not specified)
-- Store decisions in `workflow-session.json` including count parameter
+- Multiple → Prompt selection | Single → Use it | None → Create `WFS-[topic-slug]`
+- Parse `--count N` parameter (default: 3)
+- Store decisions in `workflow-session.json`

-### Phase 0: Automatic Project Context Collection
+### Phase 0: Context Collection

-**Goal**: Gather project architecture, documentation, and relevant code context BEFORE user interaction
+**Goal**: Gather project context BEFORE user interaction

-**Detection Mechanism** (execute first):
-```javascript
-// Check if context-package already exists
-const contextPackagePath = `.workflow/active/WFS-{session-id}/.process/context-package.json`;
-
-if (file_exists(contextPackagePath)) {
-  // Validate package
-  const package = Read(contextPackagePath);
-  if (package.metadata.session_id === session_id) {
-    console.log("✅ Valid context-package found, skipping Phase 0");
-    return; // Skip to Phase 1
-  }
-}
-```
-
-**Implementation**: Invoke `context-search-agent` only if package doesn't exist
+**Steps**:
+1. Check if `context-package.json` exists → Skip if valid
+2. Invoke `context-search-agent` (BRAINSTORM MODE - lightweight)
+3. Output: `.workflow/active/WFS-{session-id}/.process/context-package.json`
+
+**Graceful Degradation**: If agent fails, continue to Phase 1 without context

 ```javascript
 Task(
   subagent_type="context-search-agent",
   description="Gather project context for brainstorm",
   prompt=`
-You are executing as context-search-agent (.claude/agents/context-search-agent.md).
-
-## Execution Mode
-**BRAINSTORM MODE** (Lightweight) - Phase 1-2 only (skip deep analysis)
-
-## Session Information
-- **Session ID**: ${session_id}
-- **Task Description**: ${task_description}
-- **Output Path**: .workflow/${session_id}/.process/context-package.json
-
-## Mission
-Execute complete context-search-agent workflow for implementation planning:
-
-### Phase 1: Initialization & Pre-Analysis
-1. **Detection**: Check for existing context-package (early exit if valid)
-2. **Foundation**: Initialize code-index, get project structure, load docs
-3. **Analysis**: Extract keywords, determine scope, classify complexity
-
-### Phase 2: Multi-Source Context Discovery
-Execute all 3 discovery tracks:
-- **Track 1**: Reference documentation (CLAUDE.md, architecture docs)
-- **Track 2**: Web examples (use Exa MCP for unfamiliar tech/APIs)
-- **Track 3**: Codebase analysis (5-layer discovery: files, content, patterns, deps, config/tests)
-
-### Phase 3: Synthesis, Assessment & Packaging
-1. Apply relevance scoring and build dependency graph
-2. Synthesize 3-source data (docs > code > web)
-3. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
-4. Perform conflict detection with risk assessment
-5. Generate and validate context-package.json
-
-## Output Requirements
-Complete context-package.json with:
-- **metadata**: task_description, keywords, complexity, tech_stack, session_id
-- **project_context**: architecture_patterns, coding_conventions, tech_stack
-- **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
-- **dependencies**: {internal[], external[]} with dependency graph
-- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
-- **conflict_detection**: {risk_level, risk_factors, affected_modules[], mitigation_strategy}
-
-## Quality Validation
-Before completion verify:
-- [ ] Valid JSON format with all required fields
-- [ ] File relevance accuracy >80%
-- [ ] Dependency graph complete (max 2 transitive levels)
-- [ ] Conflict risk level calculated correctly
-- [ ] No sensitive data exposed
-- [ ] Total files ≤50 (prioritize high-relevance)
-
-Execute autonomously following agent documentation.
-Report completion with statistics.
+Execute context-search-agent in BRAINSTORM MODE (Phase 1-2 only).
+
+Session: ${session_id}
+Task: ${task_description}
+Output: .workflow/${session_id}/.process/context-package.json
+
+Required fields: metadata, project_context, assets, dependencies, conflict_detection
 `
 )
 ```

-**Graceful Degradation**:
-- If agent fails: Log warning, continue to Phase 1 without project context
-- If package invalid: Re-run context-search-agent
-
-### Phase 1: Topic Analysis & Intent Classification
-
-**Goal**: Extract keywords/challenges to drive all subsequent question generation, **enriched by Phase 0 project context**
+### Phase 1: Topic Analysis
+
+**Goal**: Extract keywords/challenges enriched by Phase 0 context

 **Steps**:
-1. **Load Phase 0 context** (if available):
-   - Read `.workflow/active/WFS-{session-id}/.process/context-package.json`
-   - Extract: tech_stack, existing modules, conflict_risk, relevant files
-
-2. **Deep topic analysis** (context-aware):
-   - Extract technical entities from topic + existing codebase
-   - Identify core challenges considering existing architecture
-   - Consider constraints (timeline/budget/compliance)
-   - Define success metrics based on current project state
-
-3. **Generate 2-4 context-aware probing questions**:
-   - Reference existing tech stack in questions
-   - Consider integration with existing modules
-   - Address identified conflict risks from Phase 0
-   - Target root challenges and trade-off priorities
-
-4. **User interaction**: Output questions using text format (see User Interaction Protocol), wait for user input
-
-5. **Parse user answers**: Use intelligent parsing to extract answers from user input (support multiple formats)
-
-6. **Storage**: Store answers to `session.intent_context` with `{extracted_keywords, identified_challenges, user_answers, project_context_used}`
-
-**Example Output**:
-```markdown
-===== Phase 1: 项目意图分析 =====
-
-【问题1 - 核心挑战】实时协作平台的主要技术挑战?
-a) 实时数据同步
-   说明:100+用户同时在线,状态同步复杂度高
-b) 可扩展性架构
-   说明:用户规模增长时的系统扩展能力
-c) 冲突解决机制
-   说明:多用户同时编辑的冲突处理策略
-
-【问题2 - 优先级】MVP阶段最关注的指标?
-a) 功能完整性
-   说明:实现所有核心功能
-b) 用户体验
-   说明:流畅的交互体验和响应速度
-c) 系统稳定性
-   说明:高可用性和数据一致性
-
-请回答 (格式: 1a 2b):
+1. Load Phase 0 context (tech_stack, modules, conflict_risk)
+2. Deep topic analysis (entities, challenges, constraints, metrics)
+3. Generate 2-4 context-aware probing questions
+4. AskUserQuestion → Store to `session.intent_context`
+
+**Example**:
+```javascript
+AskUserQuestion({
+  questions: [
+    {
+      question: "实时协作平台的主要技术挑战?",
+      header: "核心挑战",
+      multiSelect: false,
+      options: [
+        { label: "实时数据同步", description: "100+用户同时在线,状态同步复杂度高" },
+        { label: "可扩展性架构", description: "用户规模增长时的系统扩展能力" },
+        { label: "冲突解决机制", description: "多用户同时编辑的冲突处理策略" }
+      ]
+    },
+    {
+      question: "MVP阶段最关注的指标?",
+      header: "优先级",
+      multiSelect: false,
+      options: [
+        { label: "功能完整性", description: "实现所有核心功能" },
+        { label: "用户体验", description: "流畅的交互体验和响应速度" },
+        { label: "系统稳定性", description: "高可用性和数据一致性" }
+      ]
+    }
+  ]
+})
 ```

-**User input examples**:
-- `1a 2c` → Q1:a, Q2:c
-- `1a,2c` → Q1:a, Q2:c
-
 **⚠️ CRITICAL**: Questions MUST reference topic keywords. Generic "Project type?" violates dynamic generation.

 ### Phase 2: Role Selection

-**⚠️ CRITICAL**: User MUST interact to select roles. NEVER auto-select without user confirmation.
+**Goal**: User selects roles from intelligent recommendations

-**Available Roles**:
-- data-architect (数据架构师)
-- product-manager (产品经理)
-- product-owner (产品负责人)
-- scrum-master (敏捷教练)
-- subject-matter-expert (领域专家)
-- system-architect (系统架构师)
-- test-strategist (测试策略师)
-- ui-designer (UI 设计师)
-- ux-expert (UX 专家)
+**Available Roles**: data-architect, product-manager, product-owner, scrum-master, subject-matter-expert, system-architect, test-strategist, ui-designer, ux-expert

 **Steps**:
-1. **Intelligent role recommendation** (AI analysis):
-   - Analyze Phase 1 extracted keywords and challenges
-   - Use AI reasoning to determine most relevant roles for the specific topic
-   - Recommend count+2 roles (e.g., if user wants 3 roles, recommend 5 options)
-   - Provide clear rationale for each recommended role based on topic context
-
-2. **User selection** (text interaction):
-   - Output all recommended roles at once (no batching needed for count+2 roles)
-   - Display roles with labels and relevance rationale
-   - Wait for user input in multi-select format
-   - Parse user input (support multiple formats)
-   - **Storage**: Store selections to `session.selected_roles`
-
-**Example Output**:
-```markdown
-===== Phase 2: 角色选择 =====
-
-【角色选择】请选择 3 个角色参与头脑风暴分析
-
-a) system-architect (系统架构师)
-   推荐理由:实时同步架构设计和技术选型的核心角色
-b) ui-designer (UI设计师)
-   推荐理由:协作界面用户体验和实时状态展示
-c) product-manager (产品经理)
-   推荐理由:功能优先级和MVP范围决策
-d) data-architect (数据架构师)
-   推荐理由:数据同步模型和存储方案设计
-e) ux-expert (UX专家)
-   推荐理由:多用户协作交互流程优化
-
-支持格式:
-- 分别选择:2a 2c 2d (选择a、c、d)
-- 合并语法:2acd (选择a、c、d)
-- 逗号分隔:2a,c,d (选择a、c、d)
-
-请输入选择:
+1. Analyze Phase 1 keywords → Recommend count+2 roles with rationale
+2. AskUserQuestion (multiSelect=true) → Store to `session.selected_roles`
+3. If count+2 > 4, split into multiple rounds
+
+**Example**:
+```javascript
+AskUserQuestion({
+  questions: [{
+    question: "请选择 3 个角色参与头脑风暴分析",
+    header: "角色选择",
+    multiSelect: true,
+    options: [
+      { label: "system-architect", description: "实时同步架构设计和技术选型" },
+      { label: "ui-designer", description: "协作界面用户体验和状态展示" },
+      { label: "product-manager", description: "功能优先级和MVP范围决策" },
+      { label: "data-architect", description: "数据同步模型和存储方案设计" }
+    ]
+  }]
+})
 ```

-**User input examples**:
-- `2acd` → Roles: a, c, d (system-architect, product-manager, data-architect)
-- `2a 2c 2d` → Same result
-- `2a,c,d` → Same result
-
-**Role Recommendation Rules**:
-- NO hardcoded keyword-to-role mappings
-- Use intelligent analysis of topic, challenges, and requirements
-- Consider role synergies and coverage gaps
-- Explain WHY each role is relevant to THIS specific topic
-- Default recommendation: count+2 roles for user to choose from
-
-### Phase 3: Role-Specific Questions (Dynamic Generation)
+**⚠️ CRITICAL**: User MUST interact. NEVER auto-select without confirmation.
+
+### Phase 3: Role-Specific Questions

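As a sketch of enforcing the `--count` contract after the multi-select call, the check below re-asks when the selection size does not match. Shapes and the helper name are illustrative assumptions.

```javascript
// selected: labels returned from the multi-select; count: from --count (default 3).
function validateRoleSelection(selected, count) {
  const unique = [...new Set(selected)];
  if (unique.length !== count) {
    return { ok: false, reason: `Expected ${count} roles, got ${unique.length} — re-ask` };
  }
  return { ok: true, roles: unique };
}
```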
 **Goal**: Generate deep questions mapping role expertise to Phase 1 challenges

 **Algorithm**:
-```
-FOR each selected role:
-  1. Map Phase 1 challenges to role domain:
-     - "real-time sync" + system-architect → State management pattern
-     - "100 users" + system-architect → Communication protocol
-     - "low latency" + system-architect → Conflict resolution
-
-  2. Generate 3-4 questions per role probing implementation depth, trade-offs, edge cases:
-     Q: "How handle real-time state sync for 100+ users?" (explores approach)
-     Q: "How resolve conflicts when 2 users edit simultaneously?" (explores edge case)
-     Options: [Event Sourcing/Centralized/CRDT] (concrete, explain trade-offs for THIS use case)
-
-  3. Output questions in text format per role:
-     - Display all questions for current role (3-4 questions, no 10-question limit)
-     - Questions in Chinese (用中文提问)
-     - Wait for user input
-     - Parse answers using intelligent parsing
-     - Store answers to session.role_decisions[role]
+1. FOR each selected role:
+   - Map Phase 1 challenges to role domain
+   - Generate 3-4 questions (implementation depth, trade-offs, edge cases)
+   - AskUserQuestion per role → Store to `session.role_decisions[role]`
+2. Process roles sequentially (one at a time for clarity)
+3. If role needs > 4 questions, split into multiple rounds
+
+**Example** (system-architect):
+```javascript
+AskUserQuestion({
+  questions: [
+    {
+      question: "100+ 用户实时状态同步方案?",
+      header: "状态同步",
+      multiSelect: false,
+      options: [
+        { label: "Event Sourcing", description: "完整事件历史,支持回溯,存储成本高" },
+        { label: "集中式状态管理", description: "实现简单,单点瓶颈风险" },
+        { label: "CRDT", description: "去中心化,自动合并,学习曲线陡" }
+      ]
+    },
+    {
+      question: "两个用户同时编辑冲突如何解决?",
+      header: "冲突解决",
+      multiSelect: false,
+      options: [
+        { label: "自动合并", description: "用户无感知,可能产生意外结果" },
+        { label: "手动解决", description: "用户控制,增加交互复杂度" },
+        { label: "版本控制", description: "保留历史,需要分支管理" }
+      ]
+    }
+  ]
+})
 ```

-**Batching Strategy**:
-- Each role outputs all its questions at once (typically 3-4 questions)
-- No need to split per role (within 10-question batch limit)
-- Multiple roles processed sequentially (one role at a time for clarity)
-
-**Output Format**: Follow standard format from "User Interaction Protocol" section (single-choice question format)
-
-**Example Topic-Specific Questions** (system-architect role for "real-time collaboration platform"):
-- "100+ 用户实时状态同步方案?" → Options: Event Sourcing / 集中式状态管理 / CRDT
-- "两个用户同时编辑冲突如何解决?" → Options: 自动合并 / 手动解决 / 版本控制
-- "低延迟通信协议选择?" → Options: WebSocket / SSE / 轮询
-- "系统扩展性架构方案?" → Options: 微服务 / 单体+缓存 / Serverless
-
-**Quality Requirements**: See "Question Generation Guidelines" section for detailed rules
-
-### Phase 4: Cross-Role Clarification (Conflict Detection)
-
-**Goal**: Resolve ACTUAL conflicts from Phase 3 answers, not pre-defined relationships
+### Phase 4: Conflict Resolution
+
+**Goal**: Resolve ACTUAL conflicts from Phase 3 answers

 **Algorithm**:
-```
 1. Analyze Phase 3 answers for conflicts:
-   - Contradictory choices: product-manager "fast iteration" vs system-architect "complex Event Sourcing"
-   - Missing integration: ui-designer "Optimistic updates" but system-architect didn't address conflict handling
-   - Implicit dependencies: ui-designer "Live cursors" but no auth approach defined
-
-2. FOR each detected conflict:
-   Generate clarification questions referencing SPECIFIC Phase 3 choices
-
-3. Output clarification questions in text format:
-   - Batch conflicts into rounds (max 10 questions per round)
-   - Display questions with context from Phase 3 answers
-   - Questions in Chinese (用中文提问)
-   - Wait for user input
-   - Parse answers using intelligent parsing
-   - Store answers to session.cross_role_decisions
-
+   - Contradictory choices (e.g., "fast iteration" vs "complex Event Sourcing")
+   - Missing integration (e.g., "Optimistic updates" but no conflict handling)
+   - Implicit dependencies (e.g., "Live cursors" but no auth defined)
+2. Generate clarification questions referencing SPECIFIC Phase 3 choices
+3. AskUserQuestion (max 4 per call, multi-round) → Store to `session.cross_role_decisions`
 4. If NO conflicts: Skip Phase 4 (inform user: "未检测到跨角色冲突,跳过Phase 4")

+**Example**:
+```javascript
+AskUserQuestion({
+  questions: [{
+    question: "CRDT 与 UI 回滚期望冲突,如何解决?\n背景:system-architect选择CRDT,ui-designer期望回滚UI",
+    header: "架构冲突",
+    multiSelect: false,
+    options: [
+      { label: "采用 CRDT", description: "保持去中心化,调整UI期望" },
+      { label: "显示合并界面", description: "增加用户交互,展示冲突详情" },
+      { label: "切换到 OT", description: "支持回滚,增加服务器复杂度" }
+    ]
+  }]
+})
 ```

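A rough sketch of the Phase 4 conflict scan over the stored role decisions. The pair-checking rule here is deliberately simplistic, and both the data shape and the rule table are assumptions made for illustration.

```javascript
// roleDecisions: { "system-architect": [{ header, answer }], "ui-designer": [...] } — assumed shape.
// incompatiblePairs: [["CRDT", "回滚"], ...] — illustrative rule table, not from the repository.
function detectConflicts(roleDecisions, incompatiblePairs) {
  const conflicts = [];
  const entries = Object.entries(roleDecisions);
  for (let i = 0; i < entries.length; i++) {
    for (let j = i + 1; j < entries.length; j++) {
      const [roleA, decisionsA] = entries[i];
      const [roleB, decisionsB] = entries[j];
      for (const [a, b] of incompatiblePairs) {
        const hasA = decisionsA.some(d => d.answer.includes(a));
        const hasB = decisionsB.some(d => d.answer.includes(b));
        if (hasA && hasB) conflicts.push({ roles: [roleA, roleB], choices: [a, b] });
      }
    }
  }
  return conflicts;
}
```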
-**Batching Strategy**:
-- Maximum 10 clarification questions per round
-- If conflicts > 10, split into multiple rounds
-- Prioritize most critical conflicts first
-
-**Output Format**: Follow standard format from "User Interaction Protocol" section (single-choice question format with background context)
-
-**Example Conflict Detection** (from Phase 3 answers):
-- **Architecture Conflict**: "CRDT 与 UI 回滚期望冲突,如何解决?"
-  - Background: system-architect chose CRDT, ui-designer expects rollback UI
-  - Options: 采用 CRDT / 显示合并界面 / 切换到 OT
-- **Integration Gap**: "实时光标功能缺少身份认证方案"
-  - Background: ui-designer chose live cursors, no auth defined
-  - Options: OAuth 2.0 / JWT Token / Session-based
-
-**Quality Requirements**: See "Question Generation Guidelines" section for conflict-specific rules
-
-### Phase 5: Generate Guidance Specification
+### Phase 4.5: Final Clarification
+
+**Purpose**: Ensure no important points missed before generating specification

 **Steps**:
-1. Load all decisions: `intent_context` + `selected_roles` + `role_decisions` + `cross_role_decisions`
-2. Transform Q&A pairs to declarative: Questions → Headers, Answers → CONFIRMED/SELECTED statements
-3. Generate guidance-specification.md (template below) - **PRIMARY OUTPUT FILE**
-4. Update workflow-session.json with **METADATA ONLY**:
-   - session_id (e.g., "WFS-topic-slug")
-   - selected_roles[] (array of role names, e.g., ["system-architect", "ui-designer", "product-manager"])
-   - topic (original user input string)
-   - timestamp (ISO-8601 format)
-   - phase_completed: "artifacts"
-   - count_parameter (number from --count flag)
-5. Validate: No interrogative sentences in .md file, all decisions traceable, no content duplication in .json
-
-**⚠️ CRITICAL OUTPUT SEPARATION**:
-- **guidance-specification.md**: Full guidance content (decisions, rationale, integration points)
-- **workflow-session.json**: Session metadata ONLY (no guidance content, no decisions, no Q&A pairs)
-- **NO content duplication**: Guidance stays in .md, metadata stays in .json
-
-## Output Document Template
+1. Ask initial check:
+```javascript
+AskUserQuestion({
+  questions: [{
+    question: "在生成最终规范之前,是否有前面未澄清的重点需要补充?",
+    header: "补充确认",
+    multiSelect: false,
+    options: [
+      { label: "无需补充", description: "前面的讨论已经足够完整" },
+      { label: "需要补充", description: "还有重要内容需要澄清" }
+    ]
+  }]
+})
+```
+2. If "需要补充":
+   - Analyze user's additional points
+   - Generate progressive questions (not role-bound, interconnected)
+   - AskUserQuestion (max 4 per round) → Store to `session.additional_decisions`
+   - Repeat until user confirms completion
+3. If "无需补充": Proceed to Phase 5
+
+**Progressive Pattern**: Questions interconnected, each round informs next, continue until resolved.
+
+### Phase 5: Generate Specification
+
+**Steps**:
+1. Load all decisions: `intent_context` + `selected_roles` + `role_decisions` + `cross_role_decisions` + `additional_decisions`
+2. Transform Q&A to declarative: Questions → Headers, Answers → CONFIRMED/SELECTED statements
+3. Generate `guidance-specification.md`
+4. Update `workflow-session.json` (metadata only)
+5. Validate: No interrogative sentences, all decisions traceable
+
+---
+
+## Question Guidelines
+
+### Core Principle
+
+**Target**: 开发者(理解技术但需要从用户需求出发)
+
+**Question Structure**: `[业务场景/需求前提] + [技术关注点]`
+**Option Structure**: `标签:[技术方案] + 说明:[业务影响] + [技术权衡]`
+
+### Quality Rules
+
+**MUST Include**:
+- ✅ All questions in Chinese (用中文提问)
+- ✅ 业务场景作为问题前提
+- ✅ 技术选项的业务影响说明
+- ✅ 量化指标和约束条件
+
+**MUST Avoid**:
+- ❌ 纯技术选型无业务上下文
+- ❌ 过度抽象的用户体验问题
+- ❌ 脱离话题的通用架构问题
+
+### Phase-Specific Requirements
+
+| Phase | Focus | Key Requirements |
+|-------|-------|------------------|
+| 1 | 意图理解 | Reference topic keywords, 用户场景、业务约束、优先级 |
+| 2 | 角色推荐 | Intelligent analysis (NOT keyword mapping), explain relevance |
+| 3 | 角色问题 | Reference Phase 1 keywords, concrete options with trade-offs |
+| 4 | 冲突解决 | Reference SPECIFIC Phase 3 choices, explain impact on both roles |
+
+---
+
+## Output & Governance
+
+### Output Template

 **File**: `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md`

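A sketch of the Phase 5 Q&A-to-declarative transformation described above (question header becomes a section heading, the chosen answer becomes a CONFIRMED statement). The decision shape is an assumption made for illustration.

```javascript
// decisions: [{ question, header, answer, phase }] collected from Phases 1-4.5 — assumed shape.
function toGuidanceSections(decisions) {
  return decisions
    .map(d => `### ${d.header}\n**CONFIRMED**: ${d.answer} (Phase ${d.phase})`)
    .join("\n\n");
}
```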
@@ -478,9 +385,9 @@ FOR each selected role:

 ## Next Steps
 **⚠️ Automatic Continuation** (when called from auto-parallel):
-- auto-parallel will assign agents to generate role-specific analysis documents
-- Each selected role gets dedicated conceptual-planning-agent
-- Agents read this guidance-specification.md for framework context
+- auto-parallel assigns agents for role-specific analysis
+- Each selected role gets conceptual-planning-agent
+- Agents read this guidance-specification.md for context

 ## Appendix: Decision Tracking
 | Decision ID | Category | Question | Selected | Phase | Rationale |
@@ -490,95 +397,19 @@
 | D-003+ | [Role] | [Q] | [A] | 3 | [Why] |
 ```

-## Question Generation Guidelines
-
-### Core Principle: Developer-Facing Questions with User Context
-
-**Target Audience**: 开发者(理解技术但需要从用户需求出发)
-
-**Generation Philosophy**:
-1. **Phase 1**: 用户场景、业务约束、优先级(建立上下文)
-2. **Phase 2**: 基于话题分析的智能角色推荐(非关键词映射)
-3. **Phase 3**: 业务需求 + 技术选型(需求驱动的技术决策)
-4. **Phase 4**: 技术冲突的业务权衡(帮助开发者理解影响)
-
-### Universal Quality Rules
-
-**Question Structure** (all phases):
-```
-[业务场景/需求前提] + [技术关注点]
-```
-
-**Option Structure** (all phases):
-```
-标签:[技术方案简称] + (业务特征)
-说明:[业务影响] + [技术权衡]
-```
-
-**MUST Include** (all phases):
-- ✅ All questions in Chinese (用中文提问)
-- ✅ 业务场景作为问题前提
-- ✅ 技术选项的业务影响说明
-- ✅ 量化指标和约束条件
-
-**MUST Avoid** (all phases):
-- ❌ 纯技术选型无业务上下文
-- ❌ 过度抽象的用户体验问题
-- ❌ 脱离话题的通用架构问题
-
-### Phase-Specific Requirements
-
-**Phase 1 Requirements**:
-- Questions MUST reference topic keywords (NOT generic "Project type?")
-- Focus: 用户使用场景(谁用?怎么用?多频繁?)、业务约束(预算、时间、团队、合规)
-- Success metrics: 性能指标、用户体验目标
-- Priority ranking: MVP vs 长期规划
-
-**Phase 3 Requirements**:
-- Questions MUST reference Phase 1 keywords (e.g., "real-time", "100 users")
-- Options MUST be concrete approaches with relevance to topic
-- Each option includes trade-offs specific to this use case
-- Include 业务需求驱动的技术问题、量化指标(并发数、延迟、可用性)
-
-**Phase 4 Requirements**:
-- Questions MUST reference SPECIFIC Phase 3 choices in background context
-- Options address the detected conflict directly
-- Each option explains impact on both conflicting roles
-- NEVER use static "Cross-Role Matrix" - ALWAYS analyze actual Phase 3 answers
-- Focus: 技术冲突的业务权衡、帮助开发者理解不同选择的影响
-
-## Validation Checklist
-
-Generated guidance-specification.md MUST:
-- ✅ No interrogative sentences (use CONFIRMED/SELECTED)
-- ✅ Every decision traceable to user answer
-- ✅ Cross-role conflicts resolved or documented
-- ✅ Next steps concrete and specific
-- ✅ All Phase 1-4 decisions in session metadata
-
-## Update Mechanism
+### File Structure

 ```
-IF guidance-specification.md EXISTS:
-  Prompt: "Regenerate completely / Update sections / Cancel"
-ELSE:
-  Run full Phase 1-5 flow
+.workflow/active/WFS-[topic]/
+├── workflow-session.json          # Metadata ONLY
+├── .process/
+│   └── context-package.json       # Phase 0 output
+└── .brainstorming/
+    └── guidance-specification.md  # Full guidance content
 ```

-## Governance Rules
-
-**Output Requirements**:
-- All decisions MUST use CONFIRMED/SELECTED (NO "?" in decision sections)
-- Every decision MUST trace to user answer
-- Conflicts MUST be resolved (not marked "TBD")
-- Next steps MUST be actionable
-- Topic preserved as authoritative reference in session
-
-**CRITICAL**: Guidance is single source of truth for downstream phases. Ambiguity violates governance.
-
-## Storage Validation
-
-**workflow-session.json** (metadata only):
+### Session Metadata
+
 ```json
 {
   "session_id": "WFS-{topic-slug}",
@@ -591,14 +422,31 @@ ELSE:
 }
 ```

-**⚠️ Rule**: Session JSON stores ONLY metadata (session_id, selected_roles[], topic, timestamps). All guidance content goes to guidance-specification.md.
+**⚠️ Rule**: Session JSON stores ONLY metadata. All guidance content goes to guidance-specification.md.

-## File Structure
+### Validation Checklist

-```
-.workflow/active/WFS-[topic]/
-├── workflow-session.json          # Session metadata ONLY
-└── .brainstorming/
-    └── guidance-specification.md  # Full guidance content
-```
+- ✅ No interrogative sentences (use CONFIRMED/SELECTED)
+- ✅ Every decision traceable to user answer
+- ✅ Cross-role conflicts resolved or documented
+- ✅ Next steps concrete and specific
+- ✅ No content duplication between .json and .md
+
+### Update Mechanism
+
+```
+IF guidance-specification.md EXISTS:
+  Prompt: "Regenerate completely / Update sections / Cancel"
+ELSE:
+  Run full Phase 0-5 flow
+```
+
+### Governance Rules
+
+- All decisions MUST use CONFIRMED/SELECTED (NO "?" in decision sections)
+- Every decision MUST trace to user answer
+- Conflicts MUST be resolved (not marked "TBD")
+- Next steps MUST be actionable
+- Topic preserved as authoritative reference
+
+**CRITICAL**: Guidance is single source of truth for downstream phases. Ambiguity violates governance.

@@ -9,11 +9,11 @@ allowed-tools: SlashCommand(*), Task(*), TodoWrite(*), Read(*), Write(*), Bash(*

 ## Coordinator Role

-**This command is a pure orchestrator**: Execute 3 phases in sequence (interactive framework → parallel role analysis → synthesis), coordinating specialized commands/agents through task attachment model.
+**This command is a pure orchestrator**: Dispatches 3 phases in sequence (interactive framework → parallel role analysis → synthesis), coordinating specialized commands/agents through task attachment model.

 **Task Attachment Model**:
-- SlashCommand invocation **expands workflow** by attaching sub-tasks to current TodoWrite
-- Task agent execution **attaches analysis tasks** to orchestrator's TodoWrite
+- SlashCommand dispatch **expands workflow** by attaching sub-tasks to current TodoWrite
+- Task agent dispatch **attaches analysis tasks** to orchestrator's TodoWrite
 - Phase 1: artifacts command attaches its internal tasks (Phase 1-5)
 - Phase 2: N conceptual-planning-agent tasks attached in parallel
 - Phase 3: synthesis command attaches its internal tasks
@@ -26,9 +26,9 @@ allowed-tools: SlashCommand(*), Task(*), TodoWrite(*), Read(*), Write(*), Bash(*
 This workflow runs **fully autonomously** once triggered. Phase 1 (artifacts) handles user interaction, Phase 2 (role agents) runs in parallel.

 1. **User triggers**: `/workflow:brainstorm:auto-parallel "topic" [--count N]`
-2. **Phase 1 executes** → artifacts command (tasks ATTACHED) → Auto-continues
-3. **Phase 2 executes** → Parallel role agents (N tasks ATTACHED concurrently) → Auto-continues
-4. **Phase 3 executes** → Synthesis command (tasks ATTACHED) → Reports final summary
+2. **Dispatch Phase 1** → artifacts command (tasks ATTACHED) → Auto-continues
+3. **Dispatch Phase 2** → Parallel role agents (N tasks ATTACHED concurrently) → Auto-continues
+4. **Dispatch Phase 3** → Synthesis command (tasks ATTACHED) → Reports final summary

 **Auto-Continue Mechanism**:
 - TodoList tracks current phase status and dynamically manages task attachment/collapse
@@ -38,13 +38,13 @@ This workflow runs **fully autonomously** once triggered. Phase 1 (artifacts) ha

 ## Core Rules

-1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 command execution
+1. **Start Immediately**: First action is TodoWrite initialization, second action is dispatch Phase 1 command
 2. **No Preliminary Analysis**: Do not analyze topic before Phase 1 - artifacts handles all analysis
 3. **Parse Every Output**: Extract selected_roles from workflow-session.json after Phase 1
-4. **Auto-Continue via TodoList**: Check TodoList status to execute next pending phase automatically
+4. **Auto-Continue via TodoList**: Check TodoList status to dispatch next pending phase automatically
 5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
-6. **Task Attachment Model**: SlashCommand and Task invocations **attach** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
-7. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
+6. **Task Attachment Model**: SlashCommand and Task dispatches **attach** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
+7. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and dispatch next phase
 8. **Parallel Execution**: Phase 2 attaches multiple agent tasks simultaneously for concurrent execution

 ## Usage
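A sketch of the auto-continue rule above: look at the todo list, find the first pending phase, and dispatch it. The phase-name matching and dispatcher mapping are illustrative assumptions, not code from the repository.

```javascript
function nextPendingPhase(todos) {
  return todos.find(t => t.status === "pending" && t.content.startsWith("Phase"));
}

async function autoContinue(todos, dispatchers) {
  // dispatchers: { "Phase 2: Parallel Role Analysis": fn, ... } — assumed mapping.
  let next;
  while ((next = nextPendingPhase(todos))) {
    next.status = "in_progress";
    await dispatchers[next.content]?.();
    next.status = "completed";
  }
}
```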
@@ -67,7 +67,11 @@ This workflow runs **fully autonomously** once triggered. Phase 1 (artifacts) ha

 ### Phase 1: Interactive Framework Generation

-**Command**: `SlashCommand(command="/workflow:brainstorm:artifacts \"{topic}\" --count {N}")`
+**Step 1: Dispatch** - Interactive framework generation via artifacts command
+
+```javascript
+SlashCommand(command="/workflow:brainstorm:artifacts \"{topic}\" --count {N}")
+```

 **What It Does**:
 - Topic analysis: Extract challenges, generate task-specific questions
@@ -87,31 +91,32 @@ This workflow runs **fully autonomously** once triggered. Phase 1 (artifacts) ha
 - workflow-session.json contains selected_roles[] (metadata only, no content duplication)
 - Session directory `.workflow/active/WFS-{topic}/.brainstorming/` exists

-**TodoWrite Update (Phase 1 SlashCommand invoked - tasks attached)**:
+**TodoWrite Update (Phase 1 SlashCommand dispatched - tasks attached)**:
 ```json
 [
-  {"content": "Parse --count parameter from user input", "status": "completed", "activeForm": "Parsing count parameter"},
-  {"content": "Phase 1.1: Topic analysis and question generation (artifacts)", "status": "in_progress", "activeForm": "Analyzing topic"},
-  {"content": "Phase 1.2: Role selection and user confirmation (artifacts)", "status": "pending", "activeForm": "Selecting roles"},
-  {"content": "Phase 1.3: Role questions and user decisions (artifacts)", "status": "pending", "activeForm": "Collecting role questions"},
-  {"content": "Phase 1.4: Conflict detection and resolution (artifacts)", "status": "pending", "activeForm": "Resolving conflicts"},
-  {"content": "Phase 1.5: Guidance specification generation (artifacts)", "status": "pending", "activeForm": "Generating guidance"},
-  {"content": "Execute parallel role analysis", "status": "pending", "activeForm": "Executing parallel role analysis"},
-  {"content": "Execute synthesis integration", "status": "pending", "activeForm": "Executing synthesis integration"}
+  {"content": "Phase 0: Parameter Parsing", "status": "completed", "activeForm": "Parsing count parameter"},
+  {"content": "Phase 1: Interactive Framework Generation", "status": "in_progress", "activeForm": "Executing artifacts interactive framework"},
+  {"content": " → Topic analysis and question generation", "status": "in_progress", "activeForm": "Analyzing topic"},
+  {"content": " → Role selection and user confirmation", "status": "pending", "activeForm": "Selecting roles"},
+  {"content": " → Role questions and user decisions", "status": "pending", "activeForm": "Collecting role questions"},
+  {"content": " → Conflict detection and resolution", "status": "pending", "activeForm": "Resolving conflicts"},
+  {"content": " → Guidance specification generation", "status": "pending", "activeForm": "Generating guidance"},
+  {"content": "Phase 2: Parallel Role Analysis", "status": "pending", "activeForm": "Executing parallel role analysis"},
+  {"content": "Phase 3: Synthesis Integration", "status": "pending", "activeForm": "Executing synthesis integration"}
 ]
 ```

-**Note**: SlashCommand invocation **attaches** artifacts' 5 internal tasks. Orchestrator **executes** these tasks sequentially.
+**Note**: SlashCommand dispatch **attaches** artifacts' 5 internal tasks. Orchestrator **executes** these tasks sequentially.

 **Next Action**: Tasks attached → **Execute Phase 1.1-1.5** sequentially

 **TodoWrite Update (Phase 1 completed - tasks collapsed)**:
 ```json
 [
-  {"content": "Parse --count parameter from user input", "status": "completed", "activeForm": "Parsing count parameter"},
-  {"content": "Execute artifacts interactive framework generation", "status": "completed", "activeForm": "Executing artifacts interactive framework"},
-  {"content": "Execute parallel role analysis", "status": "pending", "activeForm": "Executing parallel role analysis"},
-  {"content": "Execute synthesis integration", "status": "pending", "activeForm": "Executing synthesis integration"}
+  {"content": "Phase 0: Parameter Parsing", "status": "completed", "activeForm": "Parsing count parameter"},
+  {"content": "Phase 1: Interactive Framework Generation", "status": "completed", "activeForm": "Executing artifacts interactive framework"},
+  {"content": "Phase 2: Parallel Role Analysis", "status": "pending", "activeForm": "Executing parallel role analysis"},
+  {"content": "Phase 3: Synthesis Integration", "status": "pending", "activeForm": "Executing synthesis integration"}
 ]
 ```

@@ -136,26 +141,10 @@ OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/{role}/
|
|||||||
TOPIC: {user-provided-topic}
|
TOPIC: {user-provided-topic}
|
||||||
|
|
||||||
## Flow Control Steps
|
## Flow Control Steps
|
||||||
1. **load_topic_framework**
|
1. load_topic_framework → .workflow/active/WFS-{session}/.brainstorming/guidance-specification.md
|
||||||
- Action: Load structured topic discussion framework
|
2. load_role_template → ~/.claude/workflows/cli-templates/planning-roles/{role}.md
|
||||||
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
|
3. load_session_metadata → .workflow/active/WFS-{session}/workflow-session.json
|
||||||
- Output: topic_framework_content
|
4. load_style_skill (ui-designer only, if style_skill_package) → .claude/skills/style-{style_skill_package}/
|
||||||
|
|
||||||
2. **load_role_template**
|
|
||||||
- Action: Load {role-name} planning template
|
|
||||||
- Command: Read(~/.claude/workflows/cli-templates/planning-roles/{role}.md)
|
|
||||||
- Output: role_template_guidelines
|
|
||||||
|
|
||||||
3. **load_session_metadata**
|
|
||||||
- Action: Load session metadata and original user intent
|
|
||||||
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
|
|
||||||
- Output: session_context (contains original user prompt as PRIMARY reference)
|
|
||||||
|
|
||||||
4. **load_style_skill** (ONLY for ui-designer role when style_skill_package exists)
|
|
||||||
- Action: Load style SKILL package for design system reference
|
|
||||||
- Command: Read(.claude/skills/style-{style_skill_package}/SKILL.md) AND Read(.workflow/reference_style/{style_skill_package}/design-tokens.json)
|
|
||||||
- Output: style_skill_content, design_tokens
|
|
||||||
- Usage: Apply design tokens in ui-designer analysis and artifacts
|
|
||||||
|
|
||||||
## Analysis Requirements
|
## Analysis Requirements
|
||||||
**Primary Reference**: Original user prompt from workflow-session.json is authoritative
|
**Primary Reference**: Original user prompt from workflow-session.json is authoritative
|
||||||
@@ -165,13 +154,9 @@ TOPIC: {user-provided-topic}
|
|||||||
**Template Integration**: Apply role template guidelines within framework structure
|
**Template Integration**: Apply role template guidelines within framework structure
|
||||||
|
|
||||||
## Expected Deliverables
|
## Expected Deliverables
|
||||||
1. **analysis.md**: Comprehensive {role-name} analysis addressing all framework discussion points
|
1. **analysis.md** (optionally with analysis-{slug}.md sub-documents)
|
||||||
- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
|
2. **Framework Reference**: @../guidance-specification.md
|
||||||
- **FORBIDDEN**: Never use `recommendations.md` or any filename not starting with `analysis`
|
3. **User Intent Alignment**: Validate against session_context
|
||||||
- **Auto-split if large**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files: analysis.md, analysis-1.md, analysis-2.md)
|
|
||||||
- **Content**: Includes both analysis AND recommendations sections within analysis files
|
|
||||||
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
|
|
||||||
3. **User Intent Alignment**: Validate analysis aligns with original user objectives from session_context
|
|
||||||
|
|
||||||
## Completion Criteria
|
## Completion Criteria
|
||||||
- Address each discussion point from guidance-specification.md with {role-name} expertise
|
- Address each discussion point from guidance-specification.md with {role-name} expertise
|
||||||
@@ -182,7 +167,7 @@ TOPIC: {user-provided-topic}
|
|||||||
"
|
"
|
||||||
```
|
```
|
||||||
|
|
||||||
**Parallel Execution**:
|
**Parallel Dispatch**:
|
||||||
- Launch N agents simultaneously (one message with multiple Task calls)
|
- Launch N agents simultaneously (one message with multiple Task calls)
|
||||||
- Each agent task **attached** to orchestrator's TodoWrite
|
- Each agent task **attached** to orchestrator's TodoWrite
|
||||||
- All agents execute concurrently, each attaching their own analysis sub-tasks
|
- All agents execute concurrently, each attaching their own analysis sub-tasks
|
||||||
@@ -194,35 +179,36 @@ TOPIC: {user-provided-topic}
|
|||||||
- guidance-specification.md path
|
- guidance-specification.md path
|
||||||
|
|
||||||
**Validation**:
|
**Validation**:
|
||||||
- Each role creates `.workflow/active/WFS-{topic}/.brainstorming/{role}/analysis.md` (primary file)
|
- Each role creates `.workflow/active/WFS-{topic}/.brainstorming/{role}/analysis.md`
|
||||||
- If content is large (>800 lines), may split to `analysis-1.md`, `analysis-2.md` (max 3 files total)
|
- Optionally with `analysis-{slug}.md` sub-documents (max 5)
|
||||||
- **File naming pattern**: ALL files MUST start with `analysis` prefix (use `analysis*.md` for globbing)
|
- **File pattern**: `analysis*.md` for globbing
|
||||||
- **FORBIDDEN naming**: No `recommendations.md`, `recommendations-*.md`, or any non-`analysis` prefixed files
|
- **FORBIDDEN**: `recommendations.md` or any non-`analysis` prefixed files
|
||||||
- All N role analyses completed
|
- All N role analyses completed
|
||||||
|
|
||||||
**TodoWrite Update (Phase 2 agents invoked - tasks attached in parallel)**:
|
**TodoWrite Update (Phase 2 agents dispatched - tasks attached in parallel)**:
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Parse --count parameter from user input", "status": "completed", "activeForm": "Parsing count parameter"},
|
{"content": "Phase 0: Parameter Parsing", "status": "completed", "activeForm": "Parsing count parameter"},
|
||||||
{"content": "Execute artifacts interactive framework generation", "status": "completed", "activeForm": "Executing artifacts interactive framework"},
|
{"content": "Phase 1: Interactive Framework Generation", "status": "completed", "activeForm": "Executing artifacts interactive framework"},
|
||||||
{"content": "Phase 2.1: Execute system-architect analysis [conceptual-planning-agent]", "status": "in_progress", "activeForm": "Executing system-architect analysis"},
|
{"content": "Phase 2: Parallel Role Analysis", "status": "in_progress", "activeForm": "Executing parallel role analysis"},
|
||||||
{"content": "Phase 2.2: Execute ui-designer analysis [conceptual-planning-agent]", "status": "in_progress", "activeForm": "Executing ui-designer analysis"},
|
{"content": " → Execute system-architect analysis", "status": "in_progress", "activeForm": "Executing system-architect analysis"},
|
||||||
{"content": "Phase 2.3: Execute product-manager analysis [conceptual-planning-agent]", "status": "in_progress", "activeForm": "Executing product-manager analysis"},
|
{"content": " → Execute ui-designer analysis", "status": "in_progress", "activeForm": "Executing ui-designer analysis"},
|
||||||
{"content": "Execute synthesis integration", "status": "pending", "activeForm": "Executing synthesis integration"}
|
{"content": " → Execute product-manager analysis", "status": "in_progress", "activeForm": "Executing product-manager analysis"},
|
||||||
|
{"content": "Phase 3: Synthesis Integration", "status": "pending", "activeForm": "Executing synthesis integration"}
|
||||||
]
|
]
|
||||||
```
|
```
|
||||||
|
|
||||||
**Note**: Multiple Task invocations **attach** N role analysis tasks simultaneously. Orchestrator **executes** these tasks in parallel.
|
**Note**: Multiple Task dispatches **attach** N role analysis tasks simultaneously. Orchestrator **executes** these tasks in parallel.
|
||||||
|
|
||||||
**Next Action**: Tasks attached → **Execute Phase 2.1-2.N** concurrently
|
**Next Action**: Tasks attached → **Execute Phase 2.1-2.N** concurrently
|
||||||
|
|
||||||
**TodoWrite Update (Phase 2 completed - tasks collapsed)**:
|
**TodoWrite Update (Phase 2 completed - tasks collapsed)**:
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Parse --count parameter from user input", "status": "completed", "activeForm": "Parsing count parameter"},
|
{"content": "Phase 0: Parameter Parsing", "status": "completed", "activeForm": "Parsing count parameter"},
|
||||||
{"content": "Execute artifacts interactive framework generation", "status": "completed", "activeForm": "Executing artifacts interactive framework"},
|
{"content": "Phase 1: Interactive Framework Generation", "status": "completed", "activeForm": "Executing artifacts interactive framework"},
|
||||||
{"content": "Execute parallel role analysis", "status": "completed", "activeForm": "Executing parallel role analysis"},
|
{"content": "Phase 2: Parallel Role Analysis", "status": "completed", "activeForm": "Executing parallel role analysis"},
|
||||||
{"content": "Execute synthesis integration", "status": "pending", "activeForm": "Executing synthesis integration"}
|
{"content": "Phase 3: Synthesis Integration", "status": "pending", "activeForm": "Executing synthesis integration"}
|
||||||
]
|
]
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -234,7 +220,11 @@ TOPIC: {user-provided-topic}
|
|||||||
|
|
||||||
### Phase 3: Synthesis Generation
|
### Phase 3: Synthesis Generation
|
||||||
|
|
||||||
**Command**: `SlashCommand(command="/workflow:brainstorm:synthesis --session {sessionId}")`
|
**Step 3: Dispatch** - Synthesis integration via synthesis command
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/workflow:brainstorm:synthesis --session {sessionId}")
|
||||||
|
```
|
||||||
|
|
||||||
**What It Does**:
|
**What It Does**:
|
||||||
- Load original user intent from workflow-session.json
|
- Load original user intent from workflow-session.json
|
||||||
@@ -248,29 +238,30 @@ TOPIC: {user-provided-topic}
|
|||||||
- `.workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.md` exists
|
- `.workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.md` exists
|
||||||
- Synthesis references all role analyses
|
- Synthesis references all role analyses
|
||||||
|
|
||||||
**TodoWrite Update (Phase 3 SlashCommand invoked - tasks attached)**:
|
**TodoWrite Update (Phase 3 SlashCommand dispatched - tasks attached)**:
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Parse --count parameter from user input", "status": "completed", "activeForm": "Parsing count parameter"},
|
{"content": "Phase 0: Parameter Parsing", "status": "completed", "activeForm": "Parsing count parameter"},
|
||||||
{"content": "Execute artifacts interactive framework generation", "status": "completed", "activeForm": "Executing artifacts interactive framework"},
|
{"content": "Phase 1: Interactive Framework Generation", "status": "completed", "activeForm": "Executing artifacts interactive framework"},
|
||||||
{"content": "Execute parallel role analysis", "status": "completed", "activeForm": "Executing parallel role analysis"},
|
{"content": "Phase 2: Parallel Role Analysis", "status": "completed", "activeForm": "Executing parallel role analysis"},
|
||||||
{"content": "Phase 3.1: Load role analysis files (synthesis)", "status": "in_progress", "activeForm": "Loading role analyses"},
|
{"content": "Phase 3: Synthesis Integration", "status": "in_progress", "activeForm": "Executing synthesis integration"},
|
||||||
{"content": "Phase 3.2: Integrate insights across roles (synthesis)", "status": "pending", "activeForm": "Integrating insights"},
|
{"content": " → Load role analysis files", "status": "in_progress", "activeForm": "Loading role analyses"},
|
||||||
{"content": "Phase 3.3: Generate synthesis specification (synthesis)", "status": "pending", "activeForm": "Generating synthesis"}
|
{"content": " → Integrate insights across roles", "status": "pending", "activeForm": "Integrating insights"},
|
||||||
|
{"content": " → Generate synthesis specification", "status": "pending", "activeForm": "Generating synthesis"}
|
||||||
]
|
]
|
||||||
```
|
```
|
||||||
|
|
||||||
**Note**: SlashCommand invocation **attaches** synthesis' internal tasks. Orchestrator **executes** these tasks sequentially.
|
**Note**: SlashCommand dispatch **attaches** synthesis' internal tasks. Orchestrator **executes** these tasks sequentially.
|
||||||
|
|
||||||
**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
|
**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
|
||||||
|
|
||||||
**TodoWrite Update (Phase 3 completed - tasks collapsed)**:
|
**TodoWrite Update (Phase 3 completed - tasks collapsed)**:
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Parse --count parameter from user input", "status": "completed", "activeForm": "Parsing count parameter"},
|
{"content": "Phase 0: Parameter Parsing", "status": "completed", "activeForm": "Parsing count parameter"},
|
||||||
{"content": "Execute artifacts interactive framework generation", "status": "completed", "activeForm": "Executing artifacts interactive framework"},
|
{"content": "Phase 1: Interactive Framework Generation", "status": "completed", "activeForm": "Executing artifacts interactive framework"},
|
||||||
{"content": "Execute parallel role analysis", "status": "completed", "activeForm": "Executing parallel role analysis"},
|
{"content": "Phase 2: Parallel Role Analysis", "status": "completed", "activeForm": "Executing parallel role analysis"},
|
||||||
{"content": "Execute synthesis integration", "status": "completed", "activeForm": "Executing synthesis integration"}
|
{"content": "Phase 3: Synthesis Integration", "status": "completed", "activeForm": "Executing synthesis integration"}
|
||||||
]
|
]
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -293,7 +284,7 @@ Synthesis: .workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.m
|
|||||||
|
|
||||||
### Key Principles
|
### Key Principles
|
||||||
|
|
||||||
1. **Task Attachment** (when SlashCommand/Task invoked):
|
1. **Task Attachment** (when SlashCommand/Task dispatched):
|
||||||
- Sub-command's or agent's internal tasks are **attached** to orchestrator's TodoWrite
|
- Sub-command's or agent's internal tasks are **attached** to orchestrator's TodoWrite
|
||||||
- Phase 1: `/workflow:brainstorm:artifacts` attaches 5 internal tasks (Phase 1.1-1.5)
|
- Phase 1: `/workflow:brainstorm:artifacts` attaches 5 internal tasks (Phase 1.1-1.5)
|
||||||
- Phase 2: Multiple `Task(conceptual-planning-agent)` calls attach N role analysis tasks simultaneously
|
- Phase 2: Multiple `Task(conceptual-planning-agent)` calls attach N role analysis tasks simultaneously
|
||||||
@@ -314,7 +305,7 @@ Synthesis: .workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.m
|
|||||||
- No user intervention required between phases
|
- No user intervention required between phases
|
||||||
- TodoWrite dynamically reflects current execution state
|
- TodoWrite dynamically reflects current execution state
|
||||||
|
|
||||||
**Lifecycle Summary**: Initial pending tasks → Phase 1 invoked (artifacts tasks ATTACHED) → Artifacts sub-tasks executed → Phase 1 completed (tasks COLLAPSED) → Phase 2 invoked (N role tasks ATTACHED in parallel) → Role analyses executed concurrently → Phase 2 completed (tasks COLLAPSED) → Phase 3 invoked (synthesis tasks ATTACHED) → Synthesis sub-tasks executed → Phase 3 completed (tasks COLLAPSED) → Workflow complete.
|
**Lifecycle Summary**: Initial pending tasks → Phase 1 dispatched (artifacts tasks ATTACHED) → Artifacts sub-tasks executed → Phase 1 completed (tasks COLLAPSED) → Phase 2 dispatched (N role tasks ATTACHED in parallel) → Role analyses executed concurrently → Phase 2 completed (tasks COLLAPSED) → Phase 3 dispatched (synthesis tasks ATTACHED) → Synthesis sub-tasks executed → Phase 3 completed (tasks COLLAPSED) → Workflow complete.
|
||||||
|
|
||||||
### Brainstorming Workflow Specific Features
|
### Brainstorming Workflow Specific Features
|
||||||
|
|
||||||
@@ -324,14 +315,6 @@ Synthesis: .workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.m
|
|||||||
- **Dynamic Role Count**: `--count N` parameter determines number of Phase 2 parallel tasks (default: 3, max: 9)
|
- **Dynamic Role Count**: `--count N` parameter determines number of Phase 2 parallel tasks (default: 3, max: 9)
|
||||||
- **Mixed Execution**: Sequential (Phase 1, 3) and Parallel (Phase 2) task execution
|
- **Mixed Execution**: Sequential (Phase 1, 3) and Parallel (Phase 2) task execution
|
||||||
|
|
||||||
**Benefits**:
|
|
||||||
- Real-time visibility into attached tasks during execution
|
|
||||||
- Clean orchestrator-level summary after tasks complete
|
|
||||||
- Clear mental model: SlashCommand/Task = attach tasks, not delegate work
|
|
||||||
- Parallel execution support for concurrent role analysis
|
|
||||||
- Dynamic attachment/collapse maintains clarity
|
|
||||||
|
|
||||||
**Note**: See individual Phase descriptions (Phase 1, 2, 3) for detailed TodoWrite Update examples with full JSON structures.
|
|
||||||
|
|
||||||
## Input Processing
|
## Input Processing
|
||||||
|
|
||||||
@@ -450,12 +433,9 @@ CONTEXT_VARS:
|
|||||||
├── workflow-session.json # Session metadata ONLY
|
├── workflow-session.json # Session metadata ONLY
|
||||||
└── .brainstorming/
|
└── .brainstorming/
|
||||||
├── guidance-specification.md # Framework (Phase 1)
|
├── guidance-specification.md # Framework (Phase 1)
|
||||||
├── {role-1}/
|
├── {role}/
|
||||||
│ └── analysis.md # Role analysis (Phase 2)
|
│ ├── analysis.md # Main document (with optional @references)
|
||||||
├── {role-2}/
|
│ └── analysis-{slug}.md # Section documents (max 5)
|
||||||
│ └── analysis.md
|
|
||||||
├── {role-N}/
|
|
||||||
│ └── analysis.md
|
|
||||||
└── synthesis-specification.md # Integration (Phase 3)
|
└── synthesis-specification.md # Integration (Phase 3)
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|||||||
@@ -2,325 +2,318 @@
|
|||||||
name: synthesis
|
name: synthesis
|
||||||
description: Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent
|
description: Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent
|
||||||
argument-hint: "[optional: --session session-id]"
|
argument-hint: "[optional: --session session-id]"
|
||||||
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*), Edit(*), Glob(*)
|
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*), Edit(*), Glob(*), AskUserQuestion(*)
|
||||||
---
|
---
|
||||||
|
|
||||||
## Overview
|
## Overview
|
||||||
|
|
||||||
Three-phase workflow to eliminate ambiguities and enhance conceptual depth in role analyses:
|
Six-phase workflow to eliminate ambiguities and enhance conceptual depth in role analyses:
|
||||||
|
|
||||||
**Phase 1-2 (Main Flow)**: Session detection → File discovery → Path preparation
|
**Phase 1-2**: Session detection → File discovery → Path preparation
|
||||||
|
**Phase 3A**: Cross-role analysis agent → Generate recommendations
|
||||||
|
**Phase 4**: User selects enhancements → User answers clarifications (via AskUserQuestion)
|
||||||
|
**Phase 5**: Parallel update agents (one per role)
|
||||||
|
**Phase 6**: Context package update → Metadata update → Completion report
|
||||||
|
|
||||||
**Phase 3A (Analysis Agent)**: Cross-role analysis → Generate recommendations
|
All user interactions use AskUserQuestion tool (max 4 questions per call, multi-round).
|
||||||
|
|
||||||
**Phase 4 (Main Flow)**: User selects enhancements → User answers clarifications → Build update plan
|
|
||||||
|
|
||||||
**Phase 5 (Parallel Update Agents)**: Each agent updates ONE role document → Parallel execution
|
|
||||||
|
|
||||||
**Phase 6 (Main Flow)**: Metadata update → Completion report
|
|
||||||
|
|
||||||
**Key Features**:
|
|
||||||
- Multi-agent architecture (analysis agent + parallel update agents)
|
|
||||||
- Clear separation: Agent analysis vs Main flow interaction
|
|
||||||
- Parallel document updates (one agent per role)
|
|
||||||
- User intent alignment validation
|
|
||||||
|
|
||||||
**Document Flow**:
|
**Document Flow**:
|
||||||
- Input: `[role]/analysis*.md`, `guidance-specification.md`, session metadata
|
- Input: `[role]/analysis*.md`, `guidance-specification.md`, session metadata
|
||||||
- Output: Updated `[role]/analysis*.md` with Enhancements + Clarifications sections
|
- Output: Updated `[role]/analysis*.md` with Enhancements + Clarifications sections
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Quick Reference
|
||||||
|
|
||||||
|
### Phase Summary
|
||||||
|
|
||||||
|
| Phase | Goal | Executor | Output |
|
||||||
|
|-------|------|----------|--------|
|
||||||
|
| 1 | Session detection | Main flow | session_id, brainstorm_dir |
|
||||||
|
| 2 | File discovery | Main flow | role_analysis_paths |
|
||||||
|
| 3A | Cross-role analysis | Agent | enhancement_recommendations |
|
||||||
|
| 4 | User interaction | Main flow + AskUserQuestion | update_plan |
|
||||||
|
| 5 | Document updates | Parallel agents | Updated analysis*.md |
|
||||||
|
| 6 | Finalization | Main flow | context-package.json, report |
|
||||||
|
|
||||||
|
### AskUserQuestion Pattern
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
// Enhancement selection (multi-select)
|
||||||
|
AskUserQuestion({
|
||||||
|
questions: [{
|
||||||
|
question: "请选择要应用的改进建议",
|
||||||
|
header: "改进选择",
|
||||||
|
multiSelect: true,
|
||||||
|
options: [
|
||||||
|
{ label: "EP-001: API Contract", description: "添加详细的请求/响应 schema 定义" },
|
||||||
|
{ label: "EP-002: User Intent", description: "明确用户需求优先级和验收标准" }
|
||||||
|
]
|
||||||
|
}]
|
||||||
|
})
|
||||||
|
|
||||||
|
// Clarification questions (single-select, multi-round)
|
||||||
|
AskUserQuestion({
|
||||||
|
questions: [
|
||||||
|
{
|
||||||
|
question: "MVP 阶段的核心目标是什么?",
|
||||||
|
header: "用户意图",
|
||||||
|
multiSelect: false,
|
||||||
|
options: [
|
||||||
|
{ label: "快速验证", description: "最小功能集,快速上线获取反馈" },
|
||||||
|
{ label: "技术壁垒", description: "完善架构,为长期发展打基础" },
|
||||||
|
{ label: "功能完整", description: "覆盖所有规划功能,延迟上线" }
|
||||||
|
]
|
||||||
|
}
|
||||||
|
]
|
||||||
|
})
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
## Task Tracking
|
## Task Tracking
|
||||||
|
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Detect session and validate analyses", "status": "in_progress", "activeForm": "Detecting session"},
|
{"content": "Detect session and validate analyses", "status": "pending", "activeForm": "Detecting session"},
|
||||||
{"content": "Discover role analysis file paths", "status": "pending", "activeForm": "Discovering paths"},
|
{"content": "Discover role analysis file paths", "status": "pending", "activeForm": "Discovering paths"},
|
||||||
{"content": "Execute analysis agent (cross-role analysis)", "status": "pending", "activeForm": "Executing analysis agent"},
|
{"content": "Execute analysis agent (cross-role analysis)", "status": "pending", "activeForm": "Executing analysis"},
|
||||||
{"content": "Present enhancements for user selection", "status": "pending", "activeForm": "Presenting enhancements"},
|
{"content": "Present enhancements via AskUserQuestion", "status": "pending", "activeForm": "Selecting enhancements"},
|
||||||
{"content": "Generate and present clarification questions", "status": "pending", "activeForm": "Clarifying with user"},
|
{"content": "Clarification questions via AskUserQuestion", "status": "pending", "activeForm": "Clarifying"},
|
||||||
{"content": "Build update plan from user input", "status": "pending", "activeForm": "Building update plan"},
|
{"content": "Execute parallel update agents", "status": "pending", "activeForm": "Updating documents"},
|
||||||
{"content": "Execute parallel update agents (one per role)", "status": "pending", "activeForm": "Updating documents in parallel"},
|
{"content": "Update context package and metadata", "status": "pending", "activeForm": "Finalizing"}
|
||||||
{"content": "Update session metadata and generate report", "status": "pending", "activeForm": "Finalizing session"}
|
|
||||||
]
|
]
|
||||||
```
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
## Execution Phases
|
## Execution Phases
|
||||||
|
|
||||||
### Phase 1: Discovery & Validation
|
### Phase 1: Discovery & Validation
|
||||||
|
|
||||||
1. **Detect Session**: Use `--session` parameter or find `.workflow/active/WFS-*` directories
|
1. **Detect Session**: Use `--session` parameter or find `.workflow/active/WFS-*`
|
||||||
2. **Validate Files**:
|
2. **Validate Files**:
|
||||||
- `guidance-specification.md` (optional, warn if missing)
|
- `guidance-specification.md` (optional, warn if missing)
|
||||||
- `*/analysis*.md` (required, error if empty)
|
- `*/analysis*.md` (required, error if empty)
|
||||||
3. **Load User Intent**: Extract from `workflow-session.json` (project/description field)
|
3. **Load User Intent**: Extract from `workflow-session.json`
|
||||||
|
|
||||||
### Phase 2: Role Discovery & Path Preparation
|
### Phase 2: Role Discovery & Path Preparation
|
||||||
|
|
||||||
**Main flow prepares file paths for Agent**:
|
**Main flow prepares file paths for Agent**:
|
||||||
|
|
||||||
1. **Discover Analysis Files**:
|
1. **Discover Analysis Files**:
|
||||||
- Glob(.workflow/active/WFS-{session}/.brainstorming/*/analysis*.md)
|
- Glob: `.workflow/active/WFS-{session}/.brainstorming/*/analysis*.md`
|
||||||
- Supports: analysis.md, analysis-1.md, analysis-2.md, analysis-3.md
|
- Supports: analysis.md + analysis-{slug}.md (max 5)
|
||||||
- Validate: At least one file exists (error if empty)
|
|
||||||
|
|
||||||
2. **Extract Role Information**:
|
2. **Extract Role Information**:
|
||||||
- `role_analysis_paths`: Relative paths from brainstorm_dir
|
- `role_analysis_paths`: Relative paths
|
||||||
- `participating_roles`: Role names extracted from directory paths
|
- `participating_roles`: Role names from directories
|
||||||
|
|
||||||
3. **Pass to Agent** (Phase 3):
|
3. **Pass to Agent**: session_id, brainstorm_dir, role_analysis_paths, participating_roles
|
||||||
- `session_id`
|
|
||||||
- `brainstorm_dir`: .workflow/active/WFS-{session}/.brainstorming/
|
|
||||||
- `role_analysis_paths`: ["product-manager/analysis.md", "system-architect/analysis-1.md", ...]
|
|
||||||
- `participating_roles`: ["product-manager", "system-architect", ...]
|
|
||||||
|
|
||||||
**Main Flow Responsibility**: File discovery and path preparation only (NO file content reading)
|
|
||||||
|
|
||||||
### Phase 3A: Analysis & Enhancement Agent
|
### Phase 3A: Analysis & Enhancement Agent
|
||||||
|
|
||||||
**First agent call**: Cross-role analysis and generate enhancement recommendations
|
**Agent executes cross-role analysis**:
|
||||||
|
|
||||||
```bash
|
```javascript
|
||||||
Task(conceptual-planning-agent): "
|
Task(conceptual-planning-agent, `
|
||||||
## Agent Mission
|
## Agent Mission
|
||||||
Analyze role documents, identify conflicts/gaps, and generate enhancement recommendations
|
Analyze role documents, identify conflicts/gaps, generate enhancement recommendations
|
||||||
|
|
||||||
## Input from Main Flow
|
## Input
|
||||||
- brainstorm_dir: {brainstorm_dir}
|
- brainstorm_dir: ${brainstorm_dir}
|
||||||
- role_analysis_paths: {role_analysis_paths}
|
- role_analysis_paths: ${role_analysis_paths}
|
||||||
- participating_roles: {participating_roles}
|
- participating_roles: ${participating_roles}
|
||||||
|
|
||||||
## Execution Instructions
|
## Flow Control Steps
|
||||||
[FLOW_CONTROL]
|
1. load_session_metadata → Read workflow-session.json
|
||||||
|
2. load_role_analyses → Read all analysis files
|
||||||
|
3. cross_role_analysis → Identify consensus, conflicts, gaps, ambiguities
|
||||||
|
4. generate_recommendations → Format as EP-001, EP-002, ...
|
||||||
|
|
||||||
### Flow Control Steps
|
## Output Format
|
||||||
**AGENT RESPONSIBILITY**: Execute these analysis steps sequentially with context accumulation:
|
|
||||||
|
|
||||||
1. **load_session_metadata**
|
|
||||||
- Action: Load original user intent as primary reference
|
|
||||||
- Command: Read({brainstorm_dir}/../workflow-session.json)
|
|
||||||
- Output: original_user_intent (from project/description field)
|
|
||||||
|
|
||||||
2. **load_role_analyses**
|
|
||||||
- Action: Load all role analysis documents
|
|
||||||
- Command: For each path in role_analysis_paths: Read({brainstorm_dir}/{path})
|
|
||||||
- Output: role_analyses_content_map = {role_name: content}
|
|
||||||
|
|
||||||
3. **cross_role_analysis**
|
|
||||||
- Action: Identify consensus themes, conflicts, gaps, underspecified areas
|
|
||||||
- Output: consensus_themes, conflicting_views, gaps_list, ambiguities
|
|
||||||
|
|
||||||
4. **generate_recommendations**
|
|
||||||
- Action: Convert cross-role analysis findings into structured enhancement recommendations
|
|
||||||
- Format: EP-001, EP-002, ... (sequential numbering)
|
|
||||||
- Fields: id, title, affected_roles, category, current_state, enhancement, rationale, priority
|
|
||||||
- Taxonomy: Map to 9 categories (User Intent, Requirements, Architecture, UX, Feasibility, Risk, Process, Decisions, Terminology)
|
|
||||||
- Output: enhancement_recommendations (JSON array)
|
|
||||||
|
|
||||||
### Output to Main Flow
|
|
||||||
Return JSON array:
|
|
||||||
[
|
[
|
||||||
{
|
{
|
||||||
\"id\": \"EP-001\",
|
"id": "EP-001",
|
||||||
\"title\": \"API Contract Specification\",
|
"title": "API Contract Specification",
|
||||||
\"affected_roles\": [\"system-architect\", \"api-designer\"],
|
"affected_roles": ["system-architect", "api-designer"],
|
||||||
\"category\": \"Architecture\",
|
"category": "Architecture",
|
||||||
\"current_state\": \"High-level API descriptions\",
|
"current_state": "High-level API descriptions",
|
||||||
\"enhancement\": \"Add detailed contract definitions with request/response schemas\",
|
"enhancement": "Add detailed contract definitions",
|
||||||
\"rationale\": \"Enables precise implementation and testing\",
|
"rationale": "Enables precise implementation",
|
||||||
\"priority\": \"High\"
|
"priority": "High"
|
||||||
},
|
}
|
||||||
...
|
|
||||||
]
|
]
|
||||||
|
`)
|
||||||
"
|
|
||||||
```
|
```
|
||||||
|
|
||||||
### Phase 4: Main Flow User Interaction
|
### Phase 4: User Interaction
|
||||||
|
|
||||||
**Main flow handles all user interaction via text output**:
|
**All interactions via AskUserQuestion (Chinese questions)**
|
||||||
|
|
||||||
**⚠️ CRITICAL**: ALL questions MUST use Chinese (所有问题必须用中文) for better user understanding
|
#### Step 1: Enhancement Selection
|
||||||
|
|
||||||
1. **Present Enhancement Options** (multi-select):
|
```javascript
|
||||||
```markdown
|
// If enhancements > 4, split into multiple rounds
|
||||||
===== Enhancement 选择 =====
|
const enhancements = [...]; // from Phase 3A
|
||||||
|
const BATCH_SIZE = 4;
|
||||||
|
|
||||||
请选择要应用的改进建议(可多选):
|
for (let i = 0; i < enhancements.length; i += BATCH_SIZE) {
|
||||||
|
const batch = enhancements.slice(i, i + BATCH_SIZE);
|
||||||
|
|
||||||
a) EP-001: API Contract Specification
|
AskUserQuestion({
|
||||||
影响角色:system-architect, api-designer
|
questions: [{
|
||||||
说明:添加详细的请求/响应 schema 定义
|
question: `请选择要应用的改进建议 (第${Math.floor(i/BATCH_SIZE)+1}轮)`,
|
||||||
|
header: "改进选择",
|
||||||
|
multiSelect: true,
|
||||||
|
options: batch.map(ep => ({
|
||||||
|
label: `${ep.id}: ${ep.title}`,
|
||||||
|
description: `影响: ${ep.affected_roles.join(', ')} | ${ep.enhancement}`
|
||||||
|
}))
|
||||||
|
}]
|
||||||
|
})
|
||||||
|
|
||||||
b) EP-002: User Intent Validation
|
// Store selections before next round
|
||||||
影响角色:product-manager, ux-expert
|
}
|
||||||
说明:明确用户需求优先级和验收标准
|
|
||||||
|
|
||||||
c) EP-003: Error Handling Strategy
|
// User can also skip: provide "跳过" option
|
||||||
影响角色:system-architect
|
|
||||||
说明:统一异常处理和降级方案
|
|
||||||
|
|
||||||
支持格式:1abc 或 1a 1b 1c 或 1a,b,c
|
|
||||||
请输入选择(可跳过输入 skip):
|
|
||||||
```
|
```
|
||||||
|
|
||||||
2. **Generate Clarification Questions** (based on analysis agent output):
|
#### Step 2: Clarification Questions
|
||||||
- ✅ **ALL questions in Chinese (所有问题必须用中文)**
|
|
||||||
- Use 9-category taxonomy scan results
|
|
||||||
- Prioritize most critical questions (no hard limit)
|
|
||||||
- Each with 2-4 options + descriptions
|
|
||||||
|
|
||||||
3. **Interactive Clarification Loop** (max 10 questions per round):
|
```javascript
|
||||||
```markdown
|
// Generate questions based on 9-category taxonomy scan
|
||||||
===== Clarification 问题 (第 1/2 轮) =====
|
// Categories: User Intent, Requirements, Architecture, UX, Feasibility, Risk, Process, Decisions, Terminology
|
||||||
|
|
||||||
【问题1 - 用户意图】MVP 阶段的核心目标是什么?
|
const clarifications = [...]; // from analysis
|
||||||
a) 快速验证市场需求
|
const BATCH_SIZE = 4;
|
||||||
说明:最小功能集,快速上线获取反馈
|
|
||||||
b) 建立技术壁垒
|
|
||||||
说明:完善架构,为长期发展打基础
|
|
||||||
c) 实现功能完整性
|
|
||||||
说明:覆盖所有规划功能,延迟上线
|
|
||||||
|
|
||||||
【问题2 - 架构决策】技术栈选择的优先考虑因素?
|
for (let i = 0; i < clarifications.length; i += BATCH_SIZE) {
|
||||||
a) 团队熟悉度
|
const batch = clarifications.slice(i, i + BATCH_SIZE);
|
||||||
说明:使用现有技术栈,降低学习成本
|
const currentRound = Math.floor(i / BATCH_SIZE) + 1;
|
||||||
b) 技术先进性
|
const totalRounds = Math.ceil(clarifications.length / BATCH_SIZE);
|
||||||
说明:采用新技术,提升竞争力
|
|
||||||
c) 生态成熟度
|
|
||||||
说明:选择成熟方案,保证稳定性
|
|
||||||
|
|
||||||
...(最多10个问题)
|
AskUserQuestion({
|
||||||
|
questions: batch.map(q => ({
|
||||||
|
question: q.question,
|
||||||
|
header: q.category.substring(0, 12),
|
||||||
|
multiSelect: false,
|
||||||
|
options: q.options.map(opt => ({
|
||||||
|
label: opt.label,
|
||||||
|
description: opt.description
|
||||||
|
}))
|
||||||
|
}))
|
||||||
|
})
|
||||||
|
|
||||||
请回答 (格式: 1a 2b 3c...):
|
// Store answers before next round
|
||||||
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
Wait for user input → Parse all answers in batch → Continue to next round if needed
|
### Question Guidelines
|
||||||
|
|
||||||
4. **Build Update Plan**:
|
**Target**: 开发者(理解技术但需要从用户需求出发)
|
||||||
```
|
|
||||||
|
**Question Structure**: `[跨角色分析发现] + [需要澄清的决策点]`
|
||||||
|
**Option Structure**: `标签:[具体方案] + 说明:[业务影响] + [技术权衡]`
|
||||||
|
|
||||||
|
**9-Category Taxonomy**:
|
||||||
|
|
||||||
|
| Category | Focus | Example Question Pattern |
|
||||||
|
|----------|-------|--------------------------|
|
||||||
|
| User Intent | 用户目标 | "MVP阶段核心目标?" + 验证/壁垒/完整性 |
|
||||||
|
| Requirements | 需求细化 | "功能优先级如何排序?" + 核心/增强/可选 |
|
||||||
|
| Architecture | 架构决策 | "技术栈选择考量?" + 熟悉度/先进性/成熟度 |
|
||||||
|
| UX | 用户体验 | "交互复杂度取舍?" + 简洁/丰富/渐进 |
|
||||||
|
| Feasibility | 可行性 | "资源约束下的范围?" + 最小/标准/完整 |
|
||||||
|
| Risk | 风险管理 | "风险容忍度?" + 保守/平衡/激进 |
|
||||||
|
| Process | 流程规范 | "迭代节奏?" + 快速/稳定/灵活 |
|
||||||
|
| Decisions | 决策确认 | "冲突解决方案?" + 方案A/方案B/折中 |
|
||||||
|
| Terminology | 术语统一 | "统一使用哪个术语?" + 术语A/术语B |
|
||||||
|
|
||||||
|
**Quality Rules**:
|
||||||
|
|
||||||
|
**MUST Include**:
|
||||||
|
- ✅ All questions in Chinese (用中文提问)
|
||||||
|
- ✅ 基于跨角色分析的具体发现
|
||||||
|
- ✅ 选项包含业务影响说明
|
||||||
|
- ✅ 解决实际的模糊点或冲突
|
||||||
|
|
||||||
|
**MUST Avoid**:
|
||||||
|
- ❌ 与角色分析无关的通用问题
|
||||||
|
- ❌ 重复已在 artifacts 阶段确认的内容
|
||||||
|
- ❌ 过于细节的实现级问题
|
||||||
|
|
||||||
|
#### Step 3: Build Update Plan
|
||||||
|
|
||||||
|
```javascript
|
||||||
update_plan = {
|
update_plan = {
|
||||||
"role1": {
|
"role1": {
|
||||||
"enhancements": [EP-001, EP-003],
|
"enhancements": ["EP-001", "EP-003"],
|
||||||
"clarifications": [
|
"clarifications": [
|
||||||
{"question": "...", "answer": "...", "category": "..."},
|
{"question": "...", "answer": "...", "category": "..."}
|
||||||
...
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
"role2": {
|
"role2": {
|
||||||
"enhancements": [EP-002],
|
"enhancements": ["EP-002"],
|
||||||
"clarifications": [...]
|
"clarifications": [...]
|
||||||
},
|
}
|
||||||
...
|
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
### Phase 5: Parallel Document Update Agents
|
### Phase 5: Parallel Document Update Agents
|
||||||
|
|
||||||
**Parallel agent calls** (one per role needing updates):
|
**Execute in parallel** (one agent per role):
|
||||||
|
|
||||||
```bash
|
```javascript
|
||||||
# Execute in parallel using single message with multiple Task calls
|
// Single message with multiple Task calls for parallelism
|
||||||
|
Task(conceptual-planning-agent, `
|
||||||
Task(conceptual-planning-agent): "
|
|
||||||
## Agent Mission
|
## Agent Mission
|
||||||
Apply user-confirmed enhancements and clarifications to {role1} analysis document
|
Apply enhancements and clarifications to ${role} analysis
|
||||||
|
|
||||||
## Agent Intent
|
## Input
|
||||||
- **Goal**: Integrate synthesis results into role-specific analysis
|
- role: ${role}
|
||||||
- **Scope**: Update ONLY {role1}/analysis.md (isolated, no cross-role dependencies)
|
- analysis_path: ${brainstorm_dir}/${role}/analysis.md
|
||||||
- **Constraints**: Preserve original insights, add refinements without deletion
|
- enhancements: ${role_enhancements}
|
||||||
|
- clarifications: ${role_clarifications}
|
||||||
|
- original_user_intent: ${intent}
|
||||||
|
|
||||||
## Input from Main Flow
|
## Flow Control Steps
|
||||||
- role: {role1}
|
1. load_current_analysis → Read analysis file
|
||||||
- analysis_path: {brainstorm_dir}/{role1}/analysis.md
|
2. add_clarifications_section → Insert Q&A section
|
||||||
- enhancements: [EP-001, EP-003] (user-selected improvements)
|
3. apply_enhancements → Integrate into relevant sections
|
||||||
- clarifications: [{question, answer, category}, ...] (user-confirmed answers)
|
4. resolve_contradictions → Remove conflicts
|
||||||
- original_user_intent: {from session metadata}
|
5. enforce_terminology → Align terminology
|
||||||
|
6. validate_intent → Verify alignment with user intent
|
||||||
|
7. write_updated_file → Save changes
|
||||||
|
|
||||||
## Execution Instructions
|
## Output
|
||||||
[FLOW_CONTROL]
|
Updated ${role}/analysis.md
|
||||||
|
`)
|
||||||
### Flow Control Steps
|
|
||||||
**AGENT RESPONSIBILITY**: Execute these update steps sequentially:
|
|
||||||
|
|
||||||
1. **load_current_analysis**
|
|
||||||
- Action: Load existing role analysis document
|
|
||||||
- Command: Read({brainstorm_dir}/{role1}/analysis.md)
|
|
||||||
- Output: current_analysis_content
|
|
||||||
|
|
||||||
2. **add_clarifications_section**
|
|
||||||
- Action: Insert Clarifications section with Q&A
|
|
||||||
- Format: \"## Clarifications\\n### Session {date}\\n- **Q**: {question} (Category: {category})\\n **A**: {answer}\"
|
|
||||||
- Output: analysis_with_clarifications
|
|
||||||
|
|
||||||
3. **apply_enhancements**
|
|
||||||
- Action: Integrate EP-001, EP-003 into relevant sections
|
|
||||||
- Strategy: Locate section by category (Architecture → Architecture section, UX → User Experience section)
|
|
||||||
- Output: analysis_with_enhancements
|
|
||||||
|
|
||||||
4. **resolve_contradictions**
|
|
||||||
- Action: Remove conflicts between original content and clarifications/enhancements
|
|
||||||
- Output: contradiction_free_analysis
|
|
||||||
|
|
||||||
5. **enforce_terminology_consistency**
|
|
||||||
- Action: Align all terminology with user-confirmed choices from clarifications
|
|
||||||
- Output: terminology_consistent_analysis
|
|
||||||
|
|
||||||
6. **validate_user_intent_alignment**
|
|
||||||
- Action: Verify all updates support original_user_intent
|
|
||||||
- Output: validated_analysis
|
|
||||||
|
|
||||||
7. **write_updated_file**
|
|
||||||
- Action: Save final analysis document
|
|
||||||
- Command: Write({brainstorm_dir}/{role1}/analysis.md, validated_analysis)
|
|
||||||
- Output: File update confirmation
|
|
||||||
|
|
||||||
### Output
|
|
||||||
Updated {role1}/analysis.md with Clarifications section + enhanced content
|
|
||||||
")
|
|
||||||
|
|
||||||
Task(conceptual-planning-agent): "
|
|
||||||
## Agent Mission
|
|
||||||
Apply user-confirmed enhancements and clarifications to {role2} analysis document
|
|
||||||
|
|
||||||
## Agent Intent
|
|
||||||
- **Goal**: Integrate synthesis results into role-specific analysis
|
|
||||||
- **Scope**: Update ONLY {role2}/analysis.md (isolated, no cross-role dependencies)
|
|
||||||
- **Constraints**: Preserve original insights, add refinements without deletion
|
|
||||||
|
|
||||||
## Input from Main Flow
|
|
||||||
- role: {role2}
|
|
||||||
- analysis_path: {brainstorm_dir}/{role2}/analysis.md
|
|
||||||
- enhancements: [EP-002] (user-selected improvements)
|
|
||||||
- clarifications: [{question, answer, category}, ...] (user-confirmed answers)
|
|
||||||
- original_user_intent: {from session metadata}
|
|
||||||
|
|
||||||
## Execution Instructions
|
|
||||||
[FLOW_CONTROL]
|
|
||||||
|
|
||||||
### Flow Control Steps
|
|
||||||
**AGENT RESPONSIBILITY**: Execute same 7 update steps as {role1} agent (load → clarifications → enhancements → contradictions → terminology → validation → write)
|
|
||||||
|
|
||||||
### Output
|
|
||||||
Updated {role2}/analysis.md with Clarifications section + enhanced content
|
|
||||||
")
|
|
||||||
|
|
||||||
# ... repeat for each role in update_plan
|
|
||||||
```
|
```
|
||||||
|
|
||||||
**Agent Characteristics**:
|
**Agent Characteristics**:
|
||||||
- **Intent**: Integrate user-confirmed synthesis results (NOT generate new analysis)
|
- **Isolation**: Each agent updates exactly ONE role (parallel safe)
|
||||||
- **Isolation**: Each agent updates exactly ONE role (parallel execution safe)
|
- **Dependencies**: Zero cross-agent dependencies
|
||||||
- **Context**: Minimal - receives only role-specific enhancements + clarifications
|
|
||||||
- **Dependencies**: Zero cross-agent dependencies (full parallelism)
|
|
||||||
- **Validation**: All updates must align with original_user_intent
|
- **Validation**: All updates must align with original_user_intent
|
||||||
|
|
||||||
### Phase 6: Completion & Metadata Update
|
### Phase 6: Finalization
|
||||||
|
|
||||||
**Main flow finalizes**:
|
#### Step 1: Update Context Package
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
// Sync updated analyses to context-package.json
|
||||||
|
const context_pkg = Read(".workflow/active/WFS-{session}/.process/context-package.json")
|
||||||
|
|
||||||
|
// Update guidance-specification if exists
|
||||||
|
// Update synthesis-specification if exists
|
||||||
|
// Re-read all role analysis files
|
||||||
|
// Update metadata timestamps
|
||||||
|
|
||||||
|
Write(context_pkg_path, JSON.stringify(context_pkg))
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Step 2: Update Session Metadata
|
||||||
|
|
||||||
1. Wait for all parallel agents to complete
|
|
||||||
2. Update workflow-session.json:
|
|
||||||
```json
|
```json
|
||||||
{
|
{
|
||||||
"phases": {
|
"phases": {
|
||||||
@@ -330,15 +323,13 @@ Updated {role2}/analysis.md with Clarifications section + enhanced content
|
|||||||
"completed_at": "timestamp",
|
"completed_at": "timestamp",
|
||||||
"participating_roles": [...],
|
"participating_roles": [...],
|
||||||
"clarification_results": {
|
"clarification_results": {
|
||||||
"enhancements_applied": ["EP-001", "EP-002", ...],
|
"enhancements_applied": ["EP-001", "EP-002"],
|
||||||
"questions_asked": 3,
|
"questions_asked": 3,
|
||||||
"categories_clarified": ["Architecture", "UX", ...],
|
"categories_clarified": ["Architecture", "UX"],
|
||||||
"roles_updated": ["role1", "role2", ...],
|
"roles_updated": ["role1", "role2"]
|
||||||
"outstanding_items": []
|
|
||||||
},
|
},
|
||||||
"quality_metrics": {
|
"quality_metrics": {
|
||||||
"user_intent_alignment": "validated",
|
"user_intent_alignment": "validated",
|
||||||
"requirement_coverage": "comprehensive",
|
|
||||||
"ambiguity_resolution": "complete",
|
"ambiguity_resolution": "complete",
|
||||||
"terminology_consistency": "enforced"
|
"terminology_consistency": "enforced"
|
||||||
}
|
}
|
||||||
@@ -347,7 +338,8 @@ Updated {role2}/analysis.md with Clarifications section + enhanced content
|
|||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
3. Generate completion report (show to user):
|
#### Step 3: Completion Report
|
||||||
|
|
||||||
```markdown
|
```markdown
|
||||||
## ✅ Clarification Complete
|
## ✅ Clarification Complete
|
||||||
|
|
||||||
@@ -359,9 +351,11 @@ Updated {role2}/analysis.md with Clarifications section + enhanced content
|
|||||||
✅ PROCEED: `/workflow:plan --session WFS-{session-id}`
|
✅ PROCEED: `/workflow:plan --session WFS-{session-id}`
|
||||||
```
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
## Output
|
## Output
|
||||||
|
|
||||||
**Location**: `.workflow/active/WFS-{session}/.brainstorming/[role]/analysis*.md` (in-place updates)
|
**Location**: `.workflow/active/WFS-{session}/.brainstorming/[role]/analysis*.md`
|
||||||
|
|
||||||
**Updated Structure**:
|
**Updated Structure**:
|
||||||
```markdown
|
```markdown
|
||||||
@@ -381,58 +375,24 @@ Updated {role2}/analysis.md with Clarifications section + enhanced content
|
|||||||
- Ambiguities resolved, placeholders removed
|
- Ambiguities resolved, placeholders removed
|
||||||
- Consistent terminology
|
- Consistent terminology
|
||||||
|
|
||||||
## Session Metadata
|
---
|
||||||
|
|
||||||
Update `workflow-session.json`:
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"phases": {
|
|
||||||
"BRAINSTORM": {
|
|
||||||
"status": "clarification_completed",
|
|
||||||
"clarification_completed": true,
|
|
||||||
"completed_at": "timestamp",
|
|
||||||
"participating_roles": ["product-manager", "system-architect", ...],
|
|
||||||
"clarification_results": {
|
|
||||||
"questions_asked": 3,
|
|
||||||
"categories_clarified": ["Architecture & Design", ...],
|
|
||||||
"roles_updated": ["system-architect", "ui-designer", ...],
|
|
||||||
"outstanding_items": []
|
|
||||||
},
|
|
||||||
"quality_metrics": {
|
|
||||||
"user_intent_alignment": "validated",
|
|
||||||
"requirement_coverage": "comprehensive",
|
|
||||||
"ambiguity_resolution": "complete",
|
|
||||||
"terminology_consistency": "enforced",
|
|
||||||
"decision_transparency": "documented"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
## Quality Checklist
|
## Quality Checklist
|
||||||
|
|
||||||
**Content**:
|
**Content**:
|
||||||
- All role analyses loaded/analyzed
|
- ✅ All role analyses loaded/analyzed
|
||||||
- Cross-role analysis (consensus, conflicts, gaps)
|
- ✅ Cross-role analysis (consensus, conflicts, gaps)
|
||||||
- 9-category ambiguity scan
|
- ✅ 9-category ambiguity scan
|
||||||
- Questions prioritized
|
- ✅ Questions prioritized
|
||||||
- Clarifications documented
|
|
||||||
|
|
||||||
**Analysis**:
|
**Analysis**:
|
||||||
- User intent validated
|
- ✅ User intent validated
|
||||||
- Cross-role synthesis complete
|
- ✅ Cross-role synthesis complete
|
||||||
- Ambiguities resolved
|
- ✅ Ambiguities resolved
|
||||||
- Correct roles updated
|
- ✅ Terminology consistent
|
||||||
- Terminology consistent
|
|
||||||
- Contradictions removed
|
|
||||||
|
|
||||||
**Documents**:
|
**Documents**:
|
||||||
- Clarifications section formatted
|
- ✅ Clarifications section formatted
|
||||||
- Sections reflect answers
|
- ✅ Sections reflect answers
|
||||||
- No placeholders (TODO/TBD)
|
- ✅ No placeholders (TODO/TBD)
|
||||||
- Valid Markdown
|
- ✅ Valid Markdown
|
||||||
- Cross-references maintained
|
|
||||||
|
|
||||||
|
|
||||||
|
|||||||
@@ -15,22 +15,16 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
|
|||||||
|
|
||||||
**Lazy Loading**: Task JSONs read **on-demand** during execution, not upfront. TODO_LIST.md + IMPL_PLAN.md provide metadata for planning.
|
**Lazy Loading**: Task JSONs read **on-demand** during execution, not upfront. TODO_LIST.md + IMPL_PLAN.md provide metadata for planning.
|
||||||
|
|
||||||
| Metric | Before | After | Improvement |
|
|
||||||
|--------|--------|-------|-------------|
|
|
||||||
| **Initial Load** | All task JSONs (~2,300 lines) | TODO_LIST.md only (~650 lines) | **72% reduction** |
|
|
||||||
| **Startup Time** | Seconds | Milliseconds | **~90% faster** |
|
|
||||||
| **Memory** | All tasks | 1-2 tasks | **90% less** |
|
|
||||||
| **Scalability** | 10-20 tasks | 100+ tasks | **5-10x** |
|
|
||||||
|
|
||||||
**Loading Strategy**:
|
**Loading Strategy**:
|
||||||
- **TODO_LIST.md**: Read in Phase 2 (task metadata, status, dependencies)
|
- **TODO_LIST.md**: Read in Phase 3 (task metadata, status, dependencies for TodoWrite generation)
|
||||||
- **IMPL_PLAN.md**: Read existence in Phase 2, parse execution strategy when needed
|
- **IMPL_PLAN.md**: Check existence in Phase 2 (normal mode), parse execution strategy in Phase 4A
|
||||||
- **Task JSONs**: Complete lazy loading (read only during execution)
|
- **Task JSONs**: Lazy loading - read only when task is about to execute (Phase 4B)
|
||||||
|
|
||||||
## Core Rules
|
## Core Rules
|
||||||
**Complete entire workflow autonomously without user interruption, using TodoWrite for comprehensive progress tracking.**
|
**Complete entire workflow autonomously without user interruption, using TodoWrite for comprehensive progress tracking.**
|
||||||
**Execute all discovered pending tasks until workflow completion or blocking dependency.**
|
**Execute all discovered pending tasks until workflow completion or blocking dependency.**
|
||||||
**Auto-complete session when all tasks finished: Call `/workflow:session:complete` upon workflow completion.**
|
**Auto-complete session when all tasks finished: Call `/workflow:session:complete` upon workflow completion.**
|
||||||
|
**ONE AGENT = ONE TASK JSON: Each agent instance executes exactly one task JSON file - never batch multiple tasks into single agent execution.**
|
||||||
|
|
||||||
## Core Responsibilities
|
## Core Responsibilities
|
||||||
- **Session Discovery**: Identify and select active workflow sessions
|
- **Session Discovery**: Identify and select active workflow sessions
|
||||||
@@ -42,40 +36,143 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
|
|||||||
- **Session Auto-Complete**: Call `/workflow:session:complete` when all workflow tasks finished
|
- **Session Auto-Complete**: Call `/workflow:session:complete` when all workflow tasks finished
|
||||||
|
|
||||||
## Execution Philosophy
|
## Execution Philosophy
|
||||||
- **IMPL_PLAN-driven**: Follow execution strategy from IMPL_PLAN.md Section 4
|
|
||||||
- **Discovery-first**: Auto-discover existing plans and tasks
|
|
||||||
- **Status-aware**: Execute only ready tasks with resolved dependencies
|
|
||||||
- **Context-rich**: Provide complete task JSON and accumulated context to agents
|
|
||||||
- **Progress tracking**: Continuous TodoWrite updates throughout entire workflow execution
|
- **Progress tracking**: Continuous TodoWrite updates throughout entire workflow execution
|
||||||
- **Autonomous completion**: Execute all tasks without user interruption until workflow complete
|
- **Autonomous completion**: Execute all tasks without user interruption until workflow complete
|
||||||
|
|
||||||
|
## Execution Process
|
||||||
|
|
||||||
|
```
|
||||||
|
Normal Mode:
|
||||||
|
Phase 1: Discovery
|
||||||
|
├─ Count active sessions
|
||||||
|
└─ Decision:
|
||||||
|
├─ count=0 → ERROR: No active sessions
|
||||||
|
├─ count=1 → Auto-select session → Phase 2
|
||||||
|
└─ count>1 → AskUserQuestion (max 4 options) → Phase 2
|
||||||
|
|
||||||
|
Phase 2: Planning Document Validation
|
||||||
|
├─ Check IMPL_PLAN.md exists
|
||||||
|
├─ Check TODO_LIST.md exists
|
||||||
|
└─ Validate .task/ contains IMPL-*.json files
|
||||||
|
|
||||||
|
Phase 3: TodoWrite Generation
|
||||||
|
├─ Parse TODO_LIST.md for task statuses
|
||||||
|
├─ Generate TodoWrite for entire workflow
|
||||||
|
└─ Prepare session context paths
|
||||||
|
|
||||||
|
Phase 4: Execution Strategy & Task Execution
|
||||||
|
├─ Step 4A: Parse execution strategy from IMPL_PLAN.md
|
||||||
|
└─ Step 4B: Execute tasks with lazy loading
|
||||||
|
└─ Loop:
|
||||||
|
├─ Get next in_progress task from TodoWrite
|
||||||
|
├─ Lazy load task JSON
|
||||||
|
├─ Launch agent with task context
|
||||||
|
├─ Mark task completed
|
||||||
|
└─ Advance to next task
|
||||||
|
|
||||||
|
Phase 5: Completion
|
||||||
|
├─ Update task statuses in JSON files
|
||||||
|
├─ Generate summaries
|
||||||
|
└─ Auto-call /workflow:session:complete
|
||||||
|
|
||||||
|
Resume Mode (--resume-session):
|
||||||
|
├─ Skip Phase 1 & Phase 2
|
||||||
|
└─ Entry Point: Phase 3 (TodoWrite Generation)
|
||||||
|
└─ Continue: Phase 4 → Phase 5
|
||||||
|
```
|
||||||
|
|
||||||
## Execution Lifecycle
|
## Execution Lifecycle
|
||||||
|
|
||||||
### Phase 1: Discovery
|
### Phase 1: Discovery
|
||||||
**Applies to**: Normal mode only (skipped in resume mode)
|
**Applies to**: Normal mode only (skipped in resume mode)
|
||||||
|
|
||||||
|
**Purpose**: Find and select active workflow session with user confirmation when multiple sessions exist
|
||||||
|
|
||||||
**Process**:
|
**Process**:
|
||||||
1. **Check Active Sessions**: Find sessions in `.workflow/active/` directory
|
|
||||||
2. **Select Session**: If multiple found, prompt user selection
|
|
||||||
3. **Load Session Metadata**: Read `workflow-session.json` ONLY (minimal context)
|
|
||||||
4. **DO NOT read task JSONs yet** - defer until execution phase
|
|
||||||
|
|
||||||
**Resume Mode**: This phase is completely skipped when `--resume-session="session-id"` flag is provided.
|
#### Step 1.1: Count Active Sessions
|
||||||
|
```bash
|
||||||
|
bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | wc -l)
|
||||||
|
```
|
||||||
|
|
||||||
### Phase 2: Planning Document Analysis
|
#### Step 1.2: Handle Session Selection
|
||||||
|
|
||||||
|
**Case A: No Sessions** (count = 0)
|
||||||
|
```
|
||||||
|
ERROR: No active workflow sessions found
|
||||||
|
Run /workflow:plan "task description" to create a session
|
||||||
|
```
|
||||||
|
|
||||||
|
**Case B: Single Session** (count = 1)
|
||||||
|
```bash
|
||||||
|
bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1 | xargs basename)
|
||||||
|
```
|
||||||
|
Auto-select and continue to Phase 2.
|
||||||
|
|
||||||
|
**Case C: Multiple Sessions** (count > 1)
|
||||||
|
|
||||||
|
List sessions with metadata and prompt user selection:
|
||||||
|
```bash
|
||||||
|
bash(for dir in .workflow/active/WFS-*/; do
|
||||||
|
session=$(basename "$dir")
|
||||||
|
project=$(jq -r '.project // "Unknown"' "$dir/workflow-session.json" 2>/dev/null)
|
||||||
|
total=$(grep -c "^- \[" "$dir/TODO_LIST.md" 2>/dev/null || echo "0")
|
||||||
|
completed=$(grep -c "^- \[x\]" "$dir/TODO_LIST.md" 2>/dev/null || echo "0")
|
||||||
|
[ "$total" -gt 0 ] && progress=$((completed * 100 / total)) || progress=0
|
||||||
|
echo "${session} | ${project} | ${completed}/${total} tasks (${progress}%)"
|
||||||
|
done)
|
||||||
|
```
|
||||||
|
|
||||||
|
Use AskUserQuestion to present formatted options (max 4 options shown):
|
||||||
|
```javascript
|
||||||
|
// If more than 4 sessions, show most recent 4 with "Other" option for manual input
|
||||||
|
const sessions = getActiveSessions() // sorted by last modified
|
||||||
|
const displaySessions = sessions.slice(0, 4)
|
||||||
|
|
||||||
|
AskUserQuestion({
|
||||||
|
questions: [{
|
||||||
|
question: "Multiple active sessions detected. Select one:",
|
||||||
|
header: "Session",
|
||||||
|
multiSelect: false,
|
||||||
|
options: displaySessions.map(s => ({
|
||||||
|
label: s.id,
|
||||||
|
description: `${s.project} | ${s.progress}`
|
||||||
|
}))
|
||||||
|
// Note: User can select "Other" to manually enter session ID
|
||||||
|
}]
|
||||||
|
})
|
||||||
|
```
|
||||||
|
|
||||||
|
**Input Validation**:
|
||||||
|
- If user selects from options: Use selected session ID
|
||||||
|
- If user selects "Other" and provides input: Validate session exists
|
||||||
|
- If validation fails: Show error and re-prompt or suggest available sessions
|
||||||
|
|
||||||
|
Parse user input (supports: number "1", full ID "WFS-auth-system", or partial "auth"), validate selection, and continue to Phase 2.
|
||||||
|
|
||||||
|
#### Step 1.3: Load Session Metadata

```bash
bash(cat .workflow/active/${sessionId}/workflow-session.json)
```

**Output**: Store session metadata in memory

**DO NOT read task JSONs yet** - defer until execution phase (lazy loading)

**Resume Mode**: This entire phase is skipped when `--resume-session="session-id"` flag is provided.
### Phase 2: Planning Document Validation

**Applies to**: Normal mode only (skipped in resume mode)

**Purpose**: Validate planning artifacts exist before execution

**Process**:

1. **Check IMPL_PLAN.md**: Verify file exists (defer detailed parsing to Phase 4A)
2. **Check TODO_LIST.md**: Verify file exists (defer reading to Phase 3)
3. **Validate Task Directory**: Ensure `.task/` contains at least one IMPL-*.json file

**Key Optimization**: Only existence checks here. Actual file reading happens in later phases.

**Resume Mode**: This phase is skipped when `--resume-session` flag is provided. Resume mode entry point is Phase 3.
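The Phase 2 checks above can be expressed as a single pre-flight function. A minimal sketch, using Node's `fs`/`path` modules rather than the workflow's `bash()` tool calls, purely for illustration; the function name and return shape are assumptions.

```javascript
const fs = require('fs')
const path = require('path')

// Existence checks only - no file contents are read in this phase
function validatePlanningArtifacts(sessionDir) {
  const errors = []

  if (!fs.existsSync(path.join(sessionDir, 'IMPL_PLAN.md'))) errors.push('IMPL_PLAN.md missing')
  if (!fs.existsSync(path.join(sessionDir, 'TODO_LIST.md'))) errors.push('TODO_LIST.md missing')

  const taskDir = path.join(sessionDir, '.task')
  const taskJsons = fs.existsSync(taskDir)
    ? fs.readdirSync(taskDir).filter(f => f.startsWith('IMPL-') && f.endsWith('.json'))
    : []
  if (taskJsons.length === 0) errors.push('.task/ contains no IMPL-*.json files')

  return { valid: errors.length === 0, errors }
}
```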
### Phase 3: TodoWrite Generation

**Applies to**: Both normal and resume modes (resume mode entry point)

   - Parse TODO_LIST.md to extract all tasks with current statuses
   - Identify first pending task with met dependencies
   - Generate comprehensive TodoWrite covering entire workflow
2. **Prepare Session Context**: Inject workflow paths for agent use (using provided session-id)
3. **Validate Prerequisites**: Ensure IMPL_PLAN.md and TODO_LIST.md exist and are valid

**Resume Mode Behavior**:
- Load existing TODO_LIST.md directly from `.workflow/active/{session-id}/`
- Extract current progress from TODO_LIST.md
- Generate TodoWrite from TODO_LIST.md state
- Proceed immediately to agent execution (Phase 4)
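Parsing TODO_LIST.md into task entries can be sketched as follows. This assumes the checkbox convention already used by the bash listing earlier (`- [ ]` / `- [x]`); the `IMPL-x.y: Title` line shape and the function names are assumptions for illustration.

```javascript
// Turn TODO_LIST.md markdown into { id, title, completed } records
function parseTodoList(markdown) {
  return markdown
    .split('\n')
    .map(line => line.match(/^- \[( |x)\] (IMPL-[\d.]+):?\s*(.*)$/))
    .filter(Boolean)
    .map(m => ({ id: m[2], title: m[3], completed: m[1] === 'x' }))
}

// First task that is not yet completed; dependency checks happen later,
// when the task JSON itself is lazily loaded.
function firstPendingTask(tasks) {
  return tasks.find(t => !t.completed) || null
}
```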
```
while (TODO_LIST.md has pending tasks) {
  next_task_id = getTodoWriteInProgressTask()
  task_json = Read(.workflow/active/{session}/.task/{next_task_id}.json) // Lazy load
  executeTaskWithAgent(task_json)
  updateTodoListMarkCompleted(next_task_id)
  advanceTodoWriteToNextTask()
}
```
4. **Launch Agent**: Invoke specialized agent with complete context including flow control steps
5. **Monitor Progress**: Track agent execution and handle errors without user interruption
6. **Collect Results**: Gather implementation results and outputs
7. **Continue Workflow**: Identify next pending task from TODO_LIST.md and repeat

**Note**: TODO_LIST.md updates are handled by agents (e.g., code-developer.md), not by the orchestrator.

**Benefits**:
- Reduces initial context loading by ~90%
- Only reads task JSON when actually executing
- Scales better for workflows with many tasks
- Faster startup time for workflow execution
### Phase 5: Completion

**Applies to**: Both normal and resume modes

#### 2. Parallel Execution
**When**: IMPL_PLAN specifies "Parallel" with clear parallelization opportunities
**Pattern**: Execute independent task groups concurrently by launching multiple agent instances
**TodoWrite**: MULTIPLE tasks (in same batch) marked as `in_progress` simultaneously
**Agent Instantiation**: Launch one agent instance per task (respects ONE AGENT = ONE TASK JSON rule)

#### 3. Phased Execution
**When**: IMPL_PLAN specifies "Phased" with phase breakdown
### TODO_LIST.md Update Timing
**Single source of truth for task status** - enables lazy loading by providing task metadata without reading JSONs

- **Before Agent Launch**: Mark task as `in_progress`
- **After Task Complete**: Mark as `completed`, advance to next
- **On Error**: Keep as `in_progress`, add error note
- **Workflow Complete**: Call `/workflow:session:complete`
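The status transitions above operate on the `- [ ]` / `- [x]` checkbox lines in TODO_LIST.md. A minimal sketch of the completion transition, written by agents rather than the orchestrator; the function name is illustrative.

```javascript
// Flip "- [ ] IMPL-x.y ..." to "- [x] IMPL-x.y ..." for the given task ID
function markTaskCompleted(markdown, taskId) {
  return markdown
    .split('\n')
    .map(line =>
      line.includes(taskId) && line.startsWith('- [ ]')
        ? line.replace('- [ ]', '- [x]')
        : line
    )
    .join('\n')
}
```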
## Agent Context Management

### Context Sources (Priority Order)
1. **Complete Task JSON**: Full task definition including all fields and artifacts
2. **Artifacts Context**: Brainstorming outputs and role analyses from task.context.artifacts
3. **Flow Control Context**: Accumulated outputs from pre_analysis steps (including artifact loading)
4. **Dependency Summaries**: Previous task completion summaries
5. **Session Context**: Workflow paths and session metadata
6. **Inherited Context**: Parent task context and shared variables

### Context Assembly Process
```
1. Load Task JSON → Base context (including artifacts array)
2. Load Artifacts → Synthesis specifications and brainstorming outputs
3. Execute Flow Control → Accumulated context (with artifact loading steps)
4. Load Dependencies → Dependency context
5. Prepare Session Paths → Session context
6. Combine All → Complete agent context with artifact integration
```
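The final "Combine All" step can be sketched as a pure function over the outputs of the earlier steps. A minimal sketch; field names follow the package structure shown in the next section, and the parameter names are assumptions.

```javascript
// Combine already-loaded pieces into the agent context package
function combineAgentContext({ taskJson, artifacts, stepOutputs, dependencySummaries, session }) {
  return {
    task: taskJson,                                  // 1. base context (including artifacts array)
    artifacts,                                       // 2. synthesis specs and brainstorming outputs
    flow_context: { step_outputs: stepOutputs },     // 3. accumulated pre_analysis outputs
    dependencies: dependencySummaries,               // 4. depends_on completion summaries
    session,                                         // 5. workflow paths and session metadata
    inherited: taskJson.context?.inherited || {}     // 6. parent task context
  }
}
```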
### Agent Context Package Structure
```json
{
  "task": { /* Complete task JSON with artifacts array */ },
  "artifacts": {
    "synthesis_specification": { "path": "{{from context-package.json → brainstorm_artifacts.synthesis_output.path}}", "priority": "highest" },
    "guidance_specification": { "path": "{{from context-package.json → brainstorm_artifacts.guidance_specification.path}}", "priority": "medium" },
    "role_analyses": [ /* From context-package.json → brainstorm_artifacts.role_analyses[] */ ],
    "conflict_resolution": { "path": "{{from context-package.json → brainstorm_artifacts.conflict_resolution.path}}", "conditional": true }
  },
  "flow_context": {
    "step_outputs": {
      "synthesis_specification": "...",
      "individual_artifacts": "...",
      "pattern_analysis": "...",
      "dependency_context": "..."
    }
  },
  "session": {
    "workflow_dir": ".workflow/active/WFS-session/",
    "context_package_path": ".workflow/active/WFS-session/.process/context-package.json",
    "todo_list_path": ".workflow/active/WFS-session/TODO_LIST.md",
    "summaries_dir": ".workflow/active/WFS-session/.summaries/",
    "task_json_path": ".workflow/active/WFS-session/.task/IMPL-1.1.json"
  },
  "dependencies": [ /* Task summaries from depends_on */ ],
  "inherited": { /* Parent task context */ }
}
```
### Context Validation Rules
- **Task JSON Complete**: All required fields (id, title, status, meta, context, flow_control) present and valid, including artifacts array in context
- **Artifacts Available**: All artifacts loaded from context-package.json
- **Flow Control Ready**: All pre_analysis steps completed including artifact loading steps
- **Dependencies Loaded**: All depends_on summaries available
- **Session Paths Valid**: All workflow paths exist and accessible (verified via context-package.json)
- **Agent Assignment**: Valid agent type specified in meta.agent
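These rules can be collected into a single pre-flight check on the assembled context package. A minimal sketch over the package structure shown above; the function name and failure messages are illustrative, and dependency summaries are assumed to carry an `id` field.

```javascript
// Return the list of failed validation checks (empty array means the context is ready)
function validateAgentContext(ctx) {
  const failures = []
  const requiredFields = ['id', 'title', 'status', 'meta', 'context', 'flow_control']

  for (const field of requiredFields) {
    if (!(field in ctx.task)) failures.push(`task.${field} missing`)
  }
  if (!Array.isArray(ctx.task.context?.artifacts)) failures.push('context.artifacts missing')
  if (!ctx.task.meta?.agent) failures.push('meta.agent not specified')
  for (const dep of ctx.task.context?.depends_on || []) {
    if (!ctx.dependencies.some(d => d.id === dep)) failures.push(`dependency summary for ${dep} missing`)
  }

  return failures
}
```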
## Agent Execution Pattern

### Flow Control Execution
**[FLOW_CONTROL]** marker indicates task JSON contains `flow_control.pre_analysis` steps for context preparation.

**Note**: Orchestrator does NOT execute flow control steps - Agent interprets and executes them autonomously.
### Agent Prompt Template

**Dynamic Generation**: Before agent invocation, read the task JSON and extract key requirements.

```bash
Task(subagent_type="{meta.agent}",
     prompt="Execute task: {task.title}

{[FLOW_CONTROL]}

**Task Objectives** (from task JSON):
{task.context.objective}

**Expected Deliverables** (from task JSON):
{task.context.deliverables}

**Quality Standards** (from task JSON):
{task.context.acceptance_criteria}

**MANDATORY FIRST STEPS**:
1. Read complete task JSON: {session.task_json_path}
2. Load context package: {session.context_package_path}

Follow complete execution guidelines in @.claude/agents/{meta.agent}.md

**Session Paths**:
- Workflow Dir: {session.workflow_dir}
- TODO List: {session.todo_list_path}
- Summaries Dir: {session.summaries_dir}
- Context Package: {session.context_package_path}

**Success Criteria**: Complete all objectives, meet all quality standards, deliver all outputs as specified above.",
     description="Executing: {task.title}")
```
### Agent JSON Loading Specification
**MANDATORY AGENT PROTOCOL**: All agents must follow this exact loading sequence:

1. **JSON Loading**: First action must be `cat {session.task_json_path}`
2. **Field Validation**: Verify all six required fields exist: `id`, `title`, `status`, `meta`, `context`, `flow_control`
3. **Structure Parsing**: Parse nested fields correctly:
   - `meta.type` and `meta.agent` (NOT flat `task_type`)
   - `context.requirements`, `context.focus_paths`, `context.acceptance`
   - `context.depends_on`, `context.inherited`
   - `flow_control.pre_analysis` array, `flow_control.target_files`
4. **Flow Control Execution**: If `flow_control.pre_analysis` exists, execute steps sequentially
5. **Status Management**: Update JSON status upon completion
**JSON Field Reference**:
```json
{
  "id": "IMPL-1.2",
  "title": "Task title",
  "status": "pending|active|completed|blocked",
  "meta": {
    "type": "feature|bugfix|refactor|test-gen|test-fix|docs",
    "agent": "@code-developer|@test-fix-agent|@universal-executor"
  },
  "context": {
    "requirements": ["req1", "req2"],
    "focus_paths": ["src/path1", "src/path2"],
    "acceptance": ["criteria1", "criteria2"],
    "depends_on": ["IMPL-1.1"],
    "inherited": { "from": "parent", "context": ["info"] },
    "artifacts": [
      {
        "type": "synthesis_specification",
        "source": "context-package.json → brainstorm_artifacts.synthesis_output",
        "path": "{{loaded dynamically from context-package.json}}",
        "priority": "highest",
        "contains": "complete_integrated_specification"
      },
      {
        "type": "individual_role_analysis",
        "source": "context-package.json → brainstorm_artifacts.role_analyses[]",
        "path": "{{loaded dynamically from context-package.json}}",
        "note": "Supports analysis*.md pattern (analysis.md, analysis-01.md, analysis-api.md, etc.)",
        "priority": "low",
        "contains": "role_specific_analysis_fallback"
      }
    ]
  },
  "flow_control": {
    "pre_analysis": [
      {
        "step": "load_synthesis_specification",
        "action": "Load synthesis specification from context-package.json",
        "commands": [
          "Read(.workflow/active/WFS-[session]/.process/context-package.json)",
          "Extract(brainstorm_artifacts.synthesis_output.path)",
          "Read(extracted path)"
        ],
        "output_to": "synthesis_specification",
        "on_error": "skip_optional"
      },
      {
        "step": "step_name",
        "command": "bash_command",
        "output_to": "variable",
        "on_error": "skip_optional|fail|retry_once"
      }
    ],
    "implementation_approach": [
      {
        "step": 1,
        "title": "Implement task following role analyses",
        "description": "Implement '[title]' following role analyses. PRIORITY: Use role analysis documents as primary requirement source. When implementation needs technical details (e.g., API schemas, caching configs, design tokens), refer to artifacts[] for detailed specifications from original role analyses.",
        "modification_points": [
          "Apply consolidated requirements from role analysis documents",
          "Follow technical guidelines from synthesis",
          "Consult artifacts for implementation details when needed",
          "Integrate with existing patterns"
        ],
        "logic_flow": [
          "Load role analyses",
          "Parse architecture and requirements",
          "Implement following specification",
          "Consult artifacts for technical details when needed",
          "Validate against acceptance criteria"
        ],
        "depends_on": [],
        "output": "implementation"
      }
    ],
    "target_files": ["file:function:lines", "path/to/NewFile.ts"]
  }
}
```
### Execution Flow
1. **Load Task JSON**: Agent reads and validates complete JSON structure
2. **Execute Flow Control**: Agent runs pre_analysis steps if present
3. **Prepare Implementation**: Agent uses implementation_approach from JSON
4. **Launch Implementation**: Agent follows focus_paths and target_files
5. **Update Status**: Agent marks JSON status as completed
6. **Generate Summary**: Agent creates completion summary
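Step 2 of this flow is the part agents interpret from the JSON itself. A minimal sketch of walking `flow_control.pre_analysis` while honoring `on_error` (`skip_optional | fail | retry_once`); the `runStep` callback that actually executes a step's command(s) is an assumption supplied by the caller, not a documented API.

```javascript
// Execute pre_analysis steps sequentially, accumulating outputs into `context`
async function executePreAnalysis(steps, runStep, context = {}) {
  for (const step of steps) {
    try {
      const output = await runStep(step, context)
      if (step.output_to) context[step.output_to] = output
    } catch (err) {
      if (step.on_error === 'skip_optional') continue
      if (step.on_error === 'retry_once') {
        const output = await runStep(step, context)   // one retry, then let errors propagate
        if (step.output_to) context[step.output_to] = output
        continue
      }
      throw err                                        // 'fail' or unspecified: abort the task
    }
  }
  return context                                       // accumulated step outputs for implementation
}
```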
### Agent Assignment Rules
```
meta.agent specified → Use specified agent
meta.agent missing   → Infer from meta.type:
```

```
.workflow/active/WFS-[topic-slug]/
├── workflow-session.json          # Session state and metadata
├── IMPL_PLAN.md                   # Planning document and requirements
├── TODO_LIST.md                   # Progress tracking (updated by agents)
├── .task/                         # Task definitions (JSON only)
│   ├── IMPL-1.json                # Main task definitions
│   └── IMPL-1.1.json              # Subtask definitions
```

- **Dependency Validation**: Check all depends_on references exist
- **Context Verification**: Ensure all required context is available
### Recovery Procedures

**Session Recovery**:
```bash
# Check session integrity
find .workflow/active/ -name "WFS-*" -type d | while read session_dir; do
  session=$(basename "$session_dir")
  [ ! -f "$session_dir/workflow-session.json" ] && \
    echo '{"session_id":"'$session'","status":"active"}' > "$session_dir/workflow-session.json"
done
```

**Task Recovery**:
```bash
# Validate task JSON integrity
for task_file in .workflow/active/$session/.task/*.json; do
  jq empty "$task_file" 2>/dev/null || echo "Corrupted: $task_file"
done

# Fix missing dependencies
missing_deps=$(jq -r '.context.depends_on[]?' .workflow/active/$session/.task/*.json | sort -u)
for dep in $missing_deps; do
  [ ! -f ".workflow/active/$session/.task/$dep.json" ] && echo "Missing dependency: $dep"
done
```
# Workflow Init Command (/workflow:init)

## Overview
Initialize `.workflow/project.json` with comprehensive project understanding by delegating analysis to **cli-explore-agent**.

**Note**: This command may be called by other workflow commands. Upon completion, return immediately to continue the calling workflow without interrupting the task flow.

## Usage
```bash
/workflow:init                  # Initialize (skip if exists)
/workflow:init --regenerate     # Force regeneration
```

## Execution Process

```
Input Parsing:
  └─ Parse --regenerate flag → regenerate = true | false

Decision:
  ├─ EXISTS + no --regenerate → Exit: "Already initialized"
  ├─ EXISTS + --regenerate    → Backup existing → Continue analysis
  └─ NOT_FOUND                → Continue analysis

Analysis Flow:
  ├─ Get project metadata (name, root)
  ├─ Invoke cli-explore-agent
  │   ├─ Structural scan (get_modules_by_depth.sh, find, wc)
  │   ├─ Semantic analysis (Gemini CLI)
  │   ├─ Synthesis and merge
  │   └─ Write .workflow/project.json
  └─ Display summary

Output:
  └─ .workflow/project.json (+ .backup if regenerate)
```

## Implementation
### Step 1: Parse Input and Check Existing State

**Parse --regenerate flag**:
```javascript
const regenerate = $ARGUMENTS.includes('--regenerate')
```

**Check existing state**:
```bash
bash(test -f .workflow/project.json && echo "EXISTS" || echo "NOT_FOUND")
```

**If EXISTS and no --regenerate**: Exit early
```
Project already initialized at .workflow/project.json
Use /workflow:init --regenerate to rebuild
Use /workflow:status --project to view state
```

### Step 2: Get Project Metadata

```bash
bash(basename "$(git rev-parse --show-toplevel 2>/dev/null || pwd)")
bash(git rev-parse --show-toplevel 2>/dev/null || pwd)
bash(mkdir -p .workflow)
```
### Step 3: Invoke cli-explore-agent

**For --regenerate**: Backup and preserve existing data
```bash
bash(cp .workflow/project.json .workflow/project.json.backup)
```

**Delegate analysis to agent**:

```javascript
Task(
  subagent_type="cli-explore-agent",
  description="Deep project analysis",
  prompt=`
Analyze project for workflow initialization and generate .workflow/project.json.

## MANDATORY FIRST STEPS
1. Execute: cat ~/.claude/workflows/cli-templates/schemas/project-json-schema.json (get schema reference)
2. Execute: ccw tool exec get_modules_by_depth '{}' (get project structure)

## Task
Generate complete project.json with:
- project_name: ${projectName}
- initialized_at: current ISO timestamp
- overview: {description, technology_stack, architecture, key_components}
- features: ${regenerate ? 'preserve from backup' : '[] (empty)'}
- development_index: ${regenerate ? 'preserve from backup' : '{feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}'}
- statistics: ${regenerate ? 'preserve from backup' : '{total_features: 0, total_sessions: 0, last_updated}'}
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp, analysis_mode}

## Analysis Requirements

**Technology Stack**:
- Languages: File counts, mark primary
- Frameworks: From package.json, requirements.txt, go.mod, etc.
- Build tools: npm, cargo, maven, webpack, vite
- Test frameworks: jest, pytest, go test, junit

**Architecture**:
- Style: MVC, microservices, layered (from structure & imports)
- Layers: presentation, business-logic, data-access
- Patterns: singleton, factory, repository
- Key components: 5-10 modules {name, path, description, importance}

## Execution
1. Structural scan: get_modules_by_depth.sh, find, wc -l
2. Semantic analysis: Gemini for patterns/architecture
3. Synthesis: Merge findings
4. ${regenerate ? 'Merge with preserved features/development_index/statistics from .workflow/project.json.backup' : ''}
5. Write JSON: Write('.workflow/project.json', jsonContent)
6. Report: Return brief completion summary

Project root: ${projectRoot}
`
)
```
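The preserve-on-regenerate behavior that the prompt asks the agent to follow can be sketched as a simple merge. A minimal sketch; the function name is illustrative, and the `backup` argument is assumed to be the parsed content of `.workflow/project.json.backup`.

```javascript
// Carry features, development_index, and statistics over from the backup
// while replacing the freshly analyzed overview and metadata
function mergePreservedData(freshAnalysis, backup, regenerate) {
  if (!regenerate || !backup) return freshAnalysis
  return {
    ...freshAnalysis,
    features: backup.features || [],
    development_index: backup.development_index || { feature: [], enhancement: [], bugfix: [], refactor: [], docs: [] },
    statistics: backup.statistics || { total_features: 0, total_sessions: 0, last_updated: new Date().toISOString() }
  }
}
```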
### Step 4: Display Summary

```javascript
const projectJson = JSON.parse(Read('.workflow/project.json'));

console.log(`
✓ Project initialized successfully

## Project Overview
Name: ${projectJson.project_name}
Description: ${projectJson.overview.description}

### Technology Stack
Languages: ${projectJson.overview.technology_stack.languages.map(l => l.name).join(', ')}
Frameworks: ${projectJson.overview.technology_stack.frameworks.join(', ')}

### Architecture
Style: ${projectJson.overview.architecture.style}
Components: ${projectJson.overview.key_components.length} core modules

---
Project state: .workflow/project.json
${regenerate ? 'Backup: .workflow/project.json.backup' : ''}
`);
```
## Error Handling

**Agent Failure**: Fall back to basic initialization with placeholder overview
**Missing Tools**: Agent uses Qwen fallback or bash-only
**Empty Project**: Create minimal JSON with all gaps identified
**Trigger**: User calls with file path

**Input**: Path to file containing task description or plan.json

**Step 1: Read and Detect Format**
```javascript
fileContent = Read(filePath)

try {
  jsonData = JSON.parse(fileContent)

  // Check if plan.json from lite-plan session
  if (jsonData.summary && jsonData.approach && jsonData.tasks) {
    planObject = jsonData
    originalUserInput = jsonData.summary
    isPlanJson = true
  } else {
    // Valid JSON but not plan.json - treat as plain text
    originalUserInput = fileContent
    isPlanJson = false
  }
} catch {
  // Not valid JSON - treat as plain text prompt
  originalUserInput = fileContent
  isPlanJson = false
}
```
**Step 2: Create Execution Plan**

If `isPlanJson === true`:
- Use `planObject` directly
- User selects execution method and code review

If `isPlanJson === false`:
- Treat file content as prompt (same behavior as Mode 2)
- Create simple execution plan from content
## Execution Process

```
Input Parsing:
  └─ Decision (mode detection):
      ├─ --in-memory flag → Mode 1: Load executionContext → Skip user selection
      ├─ Ends with .md/.json/.txt → Mode 3: Read file → Detect format
      │   ├─ Valid plan.json → Use planObject → User selects method + review
      │   └─ Not plan.json → Treat as prompt → User selects method + review
      └─ Other → Mode 2: Prompt description → User selects method + review

Execution:
  ├─ Step 1: Initialize result tracking (previousExecutionResults = [])
  ├─ Step 2: Task grouping & batch creation
  │   ├─ Extract explicit depends_on (no file/keyword inference)
  │   ├─ Group: independent tasks → single parallel batch (maximize utilization)
  │   ├─ Group: dependent tasks → sequential phases (respect dependencies)
  │   └─ Create TodoWrite list for batches
  ├─ Step 3: Launch execution
  │   ├─ Phase 1: All independent tasks (⚡ single batch, concurrent)
  │   └─ Phase 2+: Dependent tasks by dependency order
  ├─ Step 4: Track progress (TodoWrite updates per batch)
  └─ Step 5: Code review (if codeReviewTool ≠ "Skip")

Output:
  └─ Execution complete with results in previousExecutionResults[]
```
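The mode-detection branch at the top of this diagram can be sketched as a small dispatcher. A minimal sketch, assuming the command arguments arrive as a string array; the function name and return shape are illustrative.

```javascript
// Decide which execution mode applies to the incoming invocation
function detectMode(args) {
  if (args.includes('--in-memory')) return { mode: 1 }                 // Mode 1: reuse executionContext
  const fileArg = args.find(a => /\.(md|json|txt)$/.test(a))
  if (fileArg) return { mode: 3, filePath: fileArg }                   // Mode 3: read file, detect format
  return { mode: 2, prompt: args.join(' ') }                           // Mode 2: plain prompt description
}
```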
## Detailed Execution Steps

```javascript
previousExecutionResults = []
```
### Step 2: Create TodoWrite Execution List
|
### Step 2: Task Grouping & Batch Creation

**Dependency Analysis & Grouping Algorithm**:

```javascript
// Use explicit depends_on from plan.json (no inference from file/keywords)
function extractDependencies(tasks) {
  const taskIdToIndex = {}
  tasks.forEach((t, i) => { taskIdToIndex[t.id] = i })

  return tasks.map((task, i) => {
    // Only use explicit depends_on from plan.json
    const deps = (task.depends_on || [])
      .map(depId => taskIdToIndex[depId])
      .filter(idx => idx !== undefined && idx < i)
    return { ...task, taskIndex: i, dependencies: deps }
  })
}

// Group into batches: maximize parallel execution
function createExecutionCalls(tasks, executionMethod) {
  const tasksWithDeps = extractDependencies(tasks)
  const processed = new Set()
  const calls = []

  // Phase 1: All independent tasks → single parallel batch (maximize utilization)
  const independentTasks = tasksWithDeps.filter(t => t.dependencies.length === 0)
  if (independentTasks.length > 0) {
    independentTasks.forEach(t => processed.add(t.taskIndex))
    calls.push({
      method: executionMethod,
      executionType: "parallel",
      groupId: "P1",
      taskSummary: independentTasks.map(t => t.title).join(' | '),
      tasks: independentTasks
    })
  }

  // Phase 2: Dependent tasks → sequential batches (respect dependencies)
  let sequentialIndex = 1
  let remaining = tasksWithDeps.filter(t => !processed.has(t.taskIndex))

  while (remaining.length > 0) {
    // Find tasks whose dependencies are all satisfied
    const ready = remaining.filter(t =>
      t.dependencies.every(d => processed.has(d))
    )

    if (ready.length === 0) {
      console.warn('Circular dependency detected, forcing remaining tasks')
      ready.push(...remaining)
    }

    // Group ready tasks (can run in parallel within this phase)
    ready.forEach(t => processed.add(t.taskIndex))
    calls.push({
      method: executionMethod,
      executionType: ready.length > 1 ? "parallel" : "sequential",
      groupId: ready.length > 1 ? `P${calls.length + 1}` : `S${sequentialIndex++}`,
      taskSummary: ready.map(t => t.title).join(ready.length > 1 ? ' | ' : ' → '),
      tasks: ready
    })

    remaining = remaining.filter(t => !processed.has(t.taskIndex))
  }

  return calls
}

// Create execution calls with IDs
executionCalls = createExecutionCalls(planObject.tasks, executionMethod).map(c => ({ ...c, id: `[${c.groupId}]` }))

// Create TodoWrite list
TodoWrite({
  todos: executionCalls.map(c => ({
    content: `${c.executionType === "parallel" ? "⚡" : "→"} ${c.id} (${c.tasks.length} tasks)`,
    status: "pending",
    activeForm: `Executing ${c.id}`
  }))
})
```
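
A worked example may make the grouping concrete (hypothetical plan; task titles reused from earlier examples in this document):

```javascript
// Worked example (hypothetical plan) - three independent tasks plus one dependent task.
const exampleTasks = [
  { id: "T1", title: "Create AuthService", depends_on: [] },
  { id: "T2", title: "Add JWT utilities", depends_on: [] },
  { id: "T3", title: "Implement middleware", depends_on: [] },
  { id: "T4", title: "Wire routes to middleware", depends_on: ["T3"] }
]
const batches = createExecutionCalls(exampleTasks, "Agent")
// Expected grouping:
//   [P1] parallel   → Create AuthService | Add JWT utilities | Implement middleware
//   [S1] sequential → Wire routes to middleware (runs after P1 because it depends on T3)
```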

### Step 3: Launch Execution

**Execution Flow**: Parallel batches concurrently → Sequential batches in order

```javascript
const parallel = executionCalls.filter(c => c.executionType === "parallel")
const sequential = executionCalls.filter(c => c.executionType === "sequential")

// Phase 1: Launch all parallel batches (single message with multiple tool calls)
if (parallel.length > 0) {
  TodoWrite({ todos: executionCalls.map(c => ({ status: c.executionType === "parallel" ? "in_progress" : "pending" })) })
  parallelResults = await Promise.all(parallel.map(c => executeBatch(c)))
  previousExecutionResults.push(...parallelResults)
  TodoWrite({ todos: executionCalls.map(c => ({ status: parallel.includes(c) ? "completed" : "pending" })) })
}

// Phase 2: Execute sequential batches one by one
for (const call of sequential) {
  TodoWrite({ todos: executionCalls.map(c => ({ status: c === call ? "in_progress" : "..." })) })
  result = await executeBatch(call)
  previousExecutionResults.push(result)
  TodoWrite({ todos: executionCalls.map(c => ({ status: "completed" or "pending" })) })
}
```
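
The loop above calls `executeBatch`, which this document does not define. A minimal sketch of one possible shape, written in the same pseudocode style as the surrounding blocks and assuming it simply routes each batch to the Agent or Codex option described below (`collectExecutionResult` is a hypothetical helper that maps raw output to the `executionResult` structure):

```javascript
// Sketch only - routes a batch to the matching execution option described below.
async function executeBatch(batch) {
  if (batch.method === "Codex") {
    // Codex CLI option: foreground Bash call, prompt built by buildCLIPrompt (defined below)
    const output = Bash(
      command=`codex --full-auto exec "${buildCLIPrompt(batch)}" --skip-git-repo-check -s danger-full-access`,
      timeout=timeoutByComplexity[planObject.complexity] || 3600000
    )
    return collectExecutionResult(batch, output)   // hypothetical helper
  }
  // Agent option: code-developer subagent, prompt built by formatBatchPrompt (defined below)
  const output = Task(
    subagent_type="code-developer",
    description=batch.taskSummary,
    prompt=formatBatchPrompt({ tasks: batch.tasks, context: buildRelevantContext(batch.tasks) })
  )
  return collectExecutionResult(batch, output)     // hypothetical helper
}
```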
@@ -274,63 +286,112 @@ When to use:
- `executionMethod = "Agent"`
- `executionMethod = "Auto" AND complexity = "Low"`

**Task Formatting Principle**: Each task is a self-contained checklist. The agent only needs to know what THIS task requires, not its position or relation to other tasks.

Agent call format:
```javascript
// Format single task as self-contained checklist
function formatTaskChecklist(task) {
  return `
## ${task.title}
**Target**: \`${task.file}\`
**Action**: ${task.action}

### What to do
${task.description}

### How to do it
${task.implementation.map(step => `- ${step}`).join('\n')}

### Reference
- Pattern: ${task.reference.pattern}
- Examples: ${task.reference.files.join(', ')}
- Notes: ${task.reference.examples}

### Done when
${task.acceptance.map(c => `- [ ] ${c}`).join('\n')}
`
}

// For batch execution: aggregate tasks without numbering
function formatBatchPrompt(batch) {
  const tasksSection = batch.tasks.map(t => formatTaskChecklist(t)).join('\n---\n')

  return `
${originalUserInput ? `## Goal\n${originalUserInput}\n` : ''}

## Tasks

${tasksSection}

${batch.context ? `## Context\n${batch.context}` : ''}

Complete each task according to its "Done when" checklist.
`
}

Task(
  subagent_type="code-developer",
  description=batch.taskSummary,
  prompt=formatBatchPrompt({
    tasks: batch.tasks,
    context: buildRelevantContext(batch.tasks)
  })
)

// Helper: Build relevant context for batch
// Context serves as REFERENCE ONLY - helps agent understand existing state
function buildRelevantContext(tasks) {
  const sections = []

  // 1. Previous work completion - what's already done (reference for continuity)
  if (previousExecutionResults.length > 0) {
    sections.push(`### Previous Work (Reference)
Use this to understand what's already completed. Avoid duplicating work.

${previousExecutionResults.map(r => `**${r.tasksSummary}**
- Status: ${r.status}
- Outputs: ${r.keyOutputs || 'See git diff'}
${r.notes ? `- Notes: ${r.notes}` : ''}`
    ).join('\n\n')}`)
  }

  // 2. Related files - files that may need to be read/referenced
  const relatedFiles = extractRelatedFiles(tasks)
  if (relatedFiles.length > 0) {
    sections.push(`### Related Files (Reference)
These files may contain patterns, types, or utilities relevant to your tasks:

${relatedFiles.map(f => `- \`${f}\``).join('\n')}`)
  }

  // 3. Clarifications from user
  if (clarificationContext) {
    sections.push(`### User Clarifications
${Object.entries(clarificationContext).map(([q, a]) => `- **${q}**: ${a}`).join('\n')}`)
  }

  // 4. Artifact files (for deeper context if needed)
  if (executionContext?.session?.artifacts?.plan) {
    sections.push(`### Artifacts
For detailed planning context, read: ${executionContext.session.artifacts.plan}`)
  }

  return sections.join('\n\n')
}

// Extract related files from task references
function extractRelatedFiles(tasks) {
  const files = new Set()
  tasks.forEach(task => {
    // Add reference example files
    if (task.reference?.files) {
      task.reference.files.forEach(f => files.add(f))
    }
  })
  return [...files]
}
```

**Result Collection**: After completion, collect result following `executionResult` structure (see Data Structures section)
@@ -341,72 +402,93 @@ When to use:
- `executionMethod = "Codex"`
- `executionMethod = "Auto" AND complexity = "Medium" or "High"`

**Task Formatting Principle**: Same as Agent - each task is a self-contained checklist. No task numbering or position awareness.

Command format:
```bash
// Format single task as compact checklist for CLI
function formatTaskForCLI(task) {
  return `
## ${task.title}
File: ${task.file}
Action: ${task.action}

What: ${task.description}

How:
${task.implementation.map(step => `- ${step}`).join('\n')}

Reference: ${task.reference.pattern} (see ${task.reference.files.join(', ')})
Notes: ${task.reference.examples}

Done when:
${task.acceptance.map(c => `- [ ] ${c}`).join('\n')}
`
}

// Build CLI prompt for batch
// Context provides REFERENCE information - not requirements to fulfill
function buildCLIPrompt(batch) {
  const tasksSection = batch.tasks.map(t => formatTaskForCLI(t)).join('\n---\n')

  let prompt = `${originalUserInput ? `## Goal\n${originalUserInput}\n\n` : ''}`
  prompt += `## Tasks\n\n${tasksSection}\n`

  // Context section - reference information only
  const contextSections = []

  // 1. Previous work - what's already completed
  if (previousExecutionResults.length > 0) {
    contextSections.push(`### Previous Work (Reference)
Already completed - avoid duplicating:
${previousExecutionResults.map(r => `- ${r.tasksSummary}: ${r.status}${r.keyOutputs ? ` (${r.keyOutputs})` : ''}`).join('\n')}`)
  }

  // 2. Related files from task references
  const relatedFiles = [...new Set(batch.tasks.flatMap(t => t.reference?.files || []))]
  if (relatedFiles.length > 0) {
    contextSections.push(`### Related Files (Reference)
Patterns and examples to follow:
${relatedFiles.map(f => `- ${f}`).join('\n')}`)
  }

  // 3. User clarifications
  if (clarificationContext) {
    contextSections.push(`### Clarifications
${Object.entries(clarificationContext).map(([q, a]) => `- ${q}: ${a}`).join('\n')}`)
  }

  // 4. Plan artifact for deeper context
  if (executionContext?.session?.artifacts?.plan) {
    contextSections.push(`### Artifacts
Detailed plan: ${executionContext.session.artifacts.plan}`)
  }

  if (contextSections.length > 0) {
    prompt += `\n## Context\n${contextSections.join('\n\n')}\n`
  }

  prompt += `\nComplete each task according to its "Done when" checklist.`

  return prompt
}

codex --full-auto exec "${buildCLIPrompt(batch)}" --skip-git-repo-check -s danger-full-access
```

**Execution with tracking**:
```javascript
// Launch CLI in foreground (NOT background)
// Timeout based on complexity: Low=40min, Medium=60min, High=100min
const timeoutByComplexity = {
  "Low": 2400000,     // 40 minutes
  "Medium": 3600000,  // 60 minutes
  "High": 6000000     // 100 minutes
}

bash_result = Bash(
  command=cli_command,
  timeout=timeoutByComplexity[planObject.complexity] || 3600000
)

// Update TodoWrite when execution completes
@@ -414,105 +496,123 @@ bash_result = Bash(

**Result Collection**: After completion, analyze output and collect result following `executionResult` structure

### Step 4: Progress Tracking

Progress is tracked at the batch level (not per individual task). Icons: ⚡ (parallel, concurrent), → (sequential, one-by-one).
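
An illustrative snapshot (hypothetical batches) of how the batch-level todos from Step 2 might look mid-run:

```javascript
// Illustrative snapshot only - "(1 tasks)" is the literal output of the Step 2 content template.
TodoWrite({
  todos: [
    { content: "⚡ [P1] (3 tasks)", status: "completed",   activeForm: "Executing [P1]" },
    { content: "→ [S1] (1 tasks)", status: "in_progress", activeForm: "Executing [S1]" },
    { content: "→ [S2] (1 tasks)", status: "pending",     activeForm: "Executing [S2]" }
  ]
})
```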

### Step 5: Code Review (Optional)

**Skip Condition**: Only run if `codeReviewTool ≠ "Skip"`

**Review Focus**: Verify implementation against plan acceptance criteria
- Read plan.json for task acceptance criteria
- Check each acceptance criterion is fulfilled
- Validate code quality and identify issues
- Ensure alignment with planned approach

**Operations**:
- Agent Review: Current agent performs direct review
- Gemini Review: Execute gemini CLI with review prompt
- Custom tool: Execute specified CLI tool (qwen, codex, etc.)

**Unified Review Template** (All tools use same standard):

**Review Criteria**:
- **Acceptance Criteria**: Verify each criterion from plan.tasks[].acceptance
- **Code Quality**: Analyze quality, identify issues, suggest improvements
- **Plan Alignment**: Validate implementation matches planned approach

**Shared Prompt Template** (used by all CLI tools):
```
PURPOSE: Code review for implemented changes against plan acceptance criteria
TASK: • Verify plan acceptance criteria fulfillment • Analyze code quality • Identify issues • Suggest improvements • Validate plan adherence
MODE: analysis
CONTEXT: @**/* @{plan.json} [@{exploration.json}] | Memory: Review lite-execute changes against plan requirements
EXPECTED: Quality assessment with acceptance criteria verification, issue identification, and recommendations. Explicitly check each acceptance criterion from plan.json tasks.
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on plan acceptance criteria and plan adherence | analysis=READ-ONLY
```

**Tool-Specific Execution** (apply the shared prompt template above):

```bash
# Method 1: Agent Review (current agent)
# - Read plan.json: ${executionContext.session.artifacts.plan}
# - Apply unified review criteria (see Shared Prompt Template)
# - Report findings directly

# Method 2: Gemini Review (recommended)
gemini -p "[Shared Prompt Template with artifacts]"
# CONTEXT includes: @**/* @${plan.json} [@${exploration.json}]

# Method 3: Qwen Review (alternative)
qwen -p "[Shared Prompt Template with artifacts]"
# Same prompt as Gemini, different execution engine

# Method 4: Codex Review (autonomous)
codex --full-auto exec "[Verify plan acceptance criteria at ${plan.json}]" --skip-git-repo-check -s danger-full-access
```

**Implementation Note**: Replace the `[Shared Prompt Template with artifacts]` placeholder with the actual template content, substituting:
- `@{plan.json}` → `@${executionContext.session.artifacts.plan}`
- `[@{exploration.json}]` → exploration files from artifacts (if present)
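
A minimal sketch of that substitution, assuming the artifact paths come from `executionContext.session.artifacts` as defined in the Data Structures section:

```javascript
// Sketch (assumption): assemble the CONTEXT line of the Shared Prompt Template
// from session artifacts before invoking gemini/qwen/codex.
const planPath = executionContext.session.artifacts.plan
const explorationRefs = (executionContext.session.artifacts.explorations || [])
  .map(e => `@${e.path}`)
  .join(' ')
const contextLine = `CONTEXT: @**/* @${planPath} ${explorationRefs}`.trim()
// contextLine replaces the CONTEXT placeholder line; the substituted template
// is then passed as the -p argument (Gemini/Qwen) or the exec prompt (Codex).
```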

### Step 6: Update Development Index

**Trigger**: After all executions complete (regardless of code review)

**Skip Condition**: Skip if `.workflow/project.json` does not exist

**Operations**:
```javascript
const projectJsonPath = '.workflow/project.json'
if (!fileExists(projectJsonPath)) return  // Silent skip

const projectJson = JSON.parse(Read(projectJsonPath))

// Initialize if needed
if (!projectJson.development_index) {
  projectJson.development_index = { feature: [], enhancement: [], bugfix: [], refactor: [], docs: [] }
}

// Detect category from keywords
function detectCategory(text) {
  text = text.toLowerCase()
  if (/\b(fix|bug|error|issue|crash)\b/.test(text)) return 'bugfix'
  if (/\b(refactor|cleanup|reorganize)\b/.test(text)) return 'refactor'
  if (/\b(doc|readme|comment)\b/.test(text)) return 'docs'
  if (/\b(add|new|create|implement)\b/.test(text)) return 'feature'
  return 'enhancement'
}

// Detect sub_feature from task file paths
function detectSubFeature(tasks) {
  const dirs = tasks.map(t => t.file?.split('/').slice(-2, -1)[0]).filter(Boolean)
  const counts = dirs.reduce((a, d) => { a[d] = (a[d] || 0) + 1; return a }, {})
  return Object.entries(counts).sort((a, b) => b[1] - a[1])[0]?.[0] || 'general'
}

const category = detectCategory(`${planObject.summary} ${planObject.approach}`)
const entry = {
  title: planObject.summary.slice(0, 60),
  sub_feature: detectSubFeature(planObject.tasks),
  date: new Date().toISOString().split('T')[0],
  description: planObject.approach.slice(0, 100),
  status: previousExecutionResults.every(r => r.status === 'completed') ? 'completed' : 'partial',
  session_id: executionContext?.session?.id || null
}

projectJson.development_index[category].push(entry)
projectJson.statistics.last_updated = new Date().toISOString()
Write(projectJsonPath, JSON.stringify(projectJson, null, 2))

console.log(`✓ Development index: [${category}] ${entry.title}`)
```
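
For illustration only (hypothetical session, date, and slug), the entry appended to `development_index` might look like:

```javascript
// Illustrative only - values are hypothetical; field names come from the code above.
const exampleEntry = {
  title: "Add JWT-based authentication to the API",
  sub_feature: "auth",
  date: "2025-01-15",
  description: "Introduce AuthService, JWT utilities and route middleware following existing service patterns",
  status: "completed",
  session_id: "jwt-auth-0115"
}
// projectJson.development_index.feature would receive this entry via push(entry)
```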

## Best Practices

**Input Modes**: In-memory (lite-plan), prompt (standalone), file (JSON/text)

**Task Grouping**: Based on explicit depends_on only; independent tasks run in a single parallel batch

**Execution**: All independent tasks launch concurrently via a single Claude message with multiple tool calls

## Error Handling
@@ -542,14 +642,32 @@ Passed from lite-plan via global variable:
    recommended_execution: string,
    complexity: string
  },
  explorationsContext: {...} | null,   // Multi-angle explorations
  explorationAngles: string[],         // List of exploration angles
  explorationManifest: {...} | null,   // Exploration manifest
  clarificationContext: {...} | null,
  executionMethod: "Agent" | "Codex" | "Auto",
  codeReviewTool: "Skip" | "Gemini Review" | "Agent Review" | string,
  originalUserInput: string,

  // Session artifacts location (saved by lite-plan)
  session: {
    id: string,        // Session identifier: {taskSlug}-{shortTimestamp}
    folder: string,    // Session folder path: .workflow/.lite-plan/{session-id}
    artifacts: {
      explorations: [{angle, path}],   // exploration-{angle}.json paths
      explorations_manifest: string,   // explorations-manifest.json path
      plan: string                     // plan.json path (always present)
    }
  }
}
```

**Artifact Usage**:
- Artifact files contain detailed planning context
- Pass artifact paths to CLI tools and agents for enhanced context (see the sketch below)
- See execution options below for usage examples
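
A minimal sketch of resolving and loading the plan artifact (same `Read` tool used in Step 6):

```javascript
// Minimal sketch: resolve the plan artifact saved by lite-plan and load it
// into planObject for the execution steps above.
const planPath = executionContext.session.artifacts.plan   // .workflow/.lite-plan/{session-id}/plan.json
const planObject = JSON.parse(Read(planPath))
```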

### executionResult (Output)

Collected after each execution call completes:
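
An illustrative instance (values hypothetical; field names mirror how results are consumed in the execution steps above):

```javascript
// Illustrative instance only - all values are hypothetical.
const exampleExecutionResult = {
  executionId: "[P1]",
  status: "completed",
  tasksSummary: "Create AuthService | Add JWT utilities | Implement middleware",
  completionSummary: "All three tasks implemented; middleware wiring left to the dependent batch",
  keyOutputs: "src/services/auth.ts, src/utils/jwt.ts, src/middleware/auth.ts",
  notes: "JWT secret read from existing config module"
}
```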
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -1,7 +1,7 @@

---
name: plan
description: 5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs
argument-hint: "\"text description\"|file.md"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
---
@@ -9,7 +9,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)

## Coordinator Role

**This command is a pure orchestrator**: Dispatch 5 slash commands in sequence (including a quality gate), parse their outputs, pass context between them, and ensure complete execution through **automatic continuation**.

**Execution Model - Auto-Continue Workflow with Quality Gate**:
@@ -17,14 +17,14 @@ This workflow runs **fully autonomously** once triggered. Phase 3 (conflict reso

1. **User triggers**: `/workflow:plan "task"`
2. **Phase 1 dispatches** → Session discovery → Auto-continues
3. **Phase 2 dispatches** → Context gathering → Auto-continues
4. **Phase 3 dispatches** (optional, if conflict_risk ≥ medium) → Conflict resolution → Auto-continues
5. **Phase 4 dispatches** → Task generation (task-generate-agent) → Reports final summary

**Task Attachment Model**:
- SlashCommand dispatch **expands workflow** by attaching sub-tasks to current TodoWrite
- When a sub-command is dispatched (e.g., `/workflow:tools:context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
- Orchestrator **executes these attached tasks** sequentially
- After completion, attached tasks are **collapsed** back to high-level phase summary
- This is **task expansion**, not external delegation
@@ -43,13 +43,48 @@ This workflow runs **fully autonomously** once triggered. Phase 3 (conflict reso
3. **Parse Every Output**: Extract required data from each command/agent output for next phase
4. **Auto-Continue via TodoList**: Check TodoList status to execute next pending phase automatically
5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
6. **Task Attachment Model**: SlashCommand dispatch **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
7. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase

## Execution Process

```
Input Parsing:
└─ Convert user input to structured format (GOAL/SCOPE/CONTEXT; see the sketch after this diagram)

Phase 1: Session Discovery
└─ /workflow:session:start --auto "structured-description"
   └─ Output: sessionId (WFS-xxx)

Phase 2: Context Gathering
└─ /workflow:tools:context-gather --session sessionId "structured-description"
   ├─ Tasks attached: Analyze structure → Identify integration → Generate package
   └─ Output: contextPath + conflict_risk

Phase 3: Conflict Resolution (conditional)
└─ Decision (conflict_risk check):
   ├─ conflict_risk ≥ medium → Execute /workflow:tools:conflict-resolution
   │  ├─ Tasks attached: Detect conflicts → Present to user → Apply strategies
   │  └─ Output: Modified brainstorm artifacts
   └─ conflict_risk < medium → Skip to Phase 4

Phase 4: Task Generation
└─ /workflow:tools:task-generate-agent --session sessionId
   └─ Output: IMPL_PLAN.md, task JSONs, TODO_LIST.md

Return:
└─ Summary with recommended next steps
```
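
A minimal sketch of the input-parsing step referenced in the diagram above (assumed helper, not part of the command's actual implementation):

```javascript
// Sketch (assumed helper): wrap the raw user request into the GOAL/SCOPE/CONTEXT
// structure that Phase 1 and Phase 2 expect as the "structured-description".
function toStructuredDescription(userInput, scope = "", context = "") {
  return [
    `GOAL: ${userInput}`,
    `SCOPE: ${scope || "to be refined during context gathering"}`,
    `CONTEXT: ${context || "existing codebase conventions"}`
  ].join('\n')
}
```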

## 5-Phase Execution

### Phase 1: Session Discovery

**Step 1.1: Dispatch** - Create or discover workflow session

```javascript
SlashCommand(command="/workflow:session:start --auto \"[structured-task-description]\"")
```

**Task Description Structure**:
```
@@ -72,6 +107,8 @@ CONTEXT: Existing user database schema, REST API endpoints
- Session ID successfully extracted
- Session directory `.workflow/active/[sessionId]/` exists

**Note**: Session directory contains `workflow-session.json` (metadata). Do NOT look for `manifest.json` here - it only exists in `.workflow/archives/` for archived sessions.

**TodoWrite**: Mark phase 1 completed, phase 2 in_progress

**After Phase 1**: Return to user showing Phase 1 results, then auto-continue to Phase 2
@@ -79,7 +116,12 @@ CONTEXT: Existing user database schema, REST API endpoints
---

### Phase 2: Context Gathering

**Step 2.1: Dispatch** - Gather project context and analyze codebase

```javascript
SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[structured-task-description]\"")
```

**Use Same Structured Description**: Pass the same structured format from Phase 1
@@ -93,29 +135,30 @@ CONTEXT: Existing user database schema, REST API endpoints
- Context package path extracted
- File exists and is valid JSON

<!-- TodoWrite: When context-gather dispatched, INSERT 3 context-gather tasks, mark first as in_progress -->

**TodoWrite Update (Phase 2 SlashCommand dispatched - tasks attached)**:
```json
[
  {"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
  {"content": "Phase 2: Context Gathering", "status": "in_progress", "activeForm": "Executing context gathering"},
  {"content": " → Analyze codebase structure", "status": "in_progress", "activeForm": "Analyzing codebase structure"},
  {"content": " → Identify integration points", "status": "pending", "activeForm": "Identifying integration points"},
  {"content": " → Generate context package", "status": "pending", "activeForm": "Generating context package"},
  {"content": "Phase 4: Task Generation", "status": "pending", "activeForm": "Executing task generation"}
]
```

**Note**: SlashCommand dispatch **attaches** context-gather's 3 tasks. Orchestrator **executes** these tasks sequentially.

<!-- TodoWrite: After Phase 2 tasks complete, REMOVE the three attached sub-tasks, restore to orchestrator view -->

**TodoWrite Update (Phase 2 completed - tasks collapsed)**:
```json
[
  {"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
  {"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
  {"content": "Phase 4: Task Generation", "status": "pending", "activeForm": "Executing task generation"}
]
```
@@ -129,7 +172,11 @@ CONTEXT: Existing user database schema, REST API endpoints

**Trigger**: Only execute when context-package.json indicates conflict_risk is "medium" or "high"

**Step 3.1: Dispatch** - Detect and resolve conflicts with CLI analysis

```javascript
SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId] --context [contextPath]")
```
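
A sketch of the conditional dispatch, assuming `conflict_risk` is read from the context package produced in Phase 2:

```javascript
// Sketch (assumption): orchestrator-side branch around Phase 3.
const contextPackage = JSON.parse(Read(contextPath))
if (contextPackage.conflict_risk === "medium" || contextPackage.conflict_risk === "high") {
  SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId] --context [contextPath]")
} else {
  // conflict_risk none/low → skip directly to Phase 4
}
```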

**Input**:
- sessionId from Phase 1
@@ -149,29 +196,30 @@ CONTEXT: Existing user database schema, REST API endpoints

<!-- TodoWrite: If conflict_risk ≥ medium, INSERT 3 conflict-resolution tasks -->

**TodoWrite Update (Phase 3 SlashCommand dispatched - tasks attached, if conflict_risk ≥ medium)**:
```json
[
  {"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
  {"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
  {"content": "Phase 3: Conflict Resolution", "status": "in_progress", "activeForm": "Resolving conflicts"},
  {"content": " → Detect conflicts with CLI analysis", "status": "in_progress", "activeForm": "Detecting conflicts"},
  {"content": " → Present conflicts to user", "status": "pending", "activeForm": "Presenting conflicts"},
  {"content": " → Apply resolution strategies", "status": "pending", "activeForm": "Applying resolution strategies"},
  {"content": "Phase 4: Task Generation", "status": "pending", "activeForm": "Executing task generation"}
]
```

**Note**: SlashCommand dispatch **attaches** conflict-resolution's 3 tasks. Orchestrator **executes** these tasks sequentially.

<!-- TodoWrite: After Phase 3 tasks complete, REMOVE the three attached sub-tasks, restore to orchestrator view -->

**TodoWrite Update (Phase 3 completed - tasks collapsed)**:
```json
[
  {"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
  {"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
  {"content": "Phase 3: Conflict Resolution", "status": "completed", "activeForm": "Resolving conflicts"},
  {"content": "Phase 4: Task Generation", "status": "pending", "activeForm": "Executing task generation"}
]
```
@@ -181,9 +229,14 @@ CONTEXT: Existing user database schema, REST API endpoints

**Memory State Check**:
- Evaluate current context window usage and memory state
- If memory usage is high (>120K tokens or approaching context limits):

**Step 3.2: Dispatch** - Optimize memory before proceeding

```javascript
SlashCommand(command="/compact")
```

- Memory compaction is particularly important after analysis phase which may generate extensive documentation
- Ensures optimal performance and prevents context overflow
@@ -217,17 +270,13 @@ CONTEXT: Existing user database schema, REST API endpoints

- Task generation translates high-level role analyses into concrete, actionable work items
- **Intent priority**: Current user prompt > role analysis.md files > guidance-specification.md

**Step 4.1: Dispatch** - Generate implementation plan and task JSONs

```javascript
SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]")
```

**CLI Execution Note**: CLI tool usage is now determined semantically by action-planning-agent based on the user's task description. If the user specifies "use Codex/Gemini/Qwen for X", the agent embeds `command` fields in the relevant `implementation_approach` steps, as illustrated below.
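
A hypothetical illustration (the real task JSON schema is produced by task-generate-agent; only the `command` and `implementation_approach` names come from the note above):

```javascript
// Hypothetical example only - shows a step with an embedded CLI command because
// the user asked to "use Codex" for that piece of work.
const exampleTask = {
  id: "IMPL-2",
  implementation_approach: [
    { step: "Refactor data-access layer to repository pattern" },
    {
      step: "Generate repository unit tests",
      command: "codex --full-auto exec \"Add unit tests for src/db repositories\" --skip-git-repo-check -s danger-full-access"
    }
  ]
}
```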

**Input**: `sessionId` from Phase 1
@@ -236,14 +285,14 @@ SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]
- `.workflow/active/[sessionId]/.task/IMPL-*.json` exists (at least one)
- `.workflow/active/[sessionId]/TODO_LIST.md` exists

<!-- TodoWrite: When task-generate-agent dispatched, ATTACH 1 agent task -->

**TodoWrite Update (Phase 4 SlashCommand dispatched - agent task attached)**:
```json
[
  {"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
  {"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
  {"content": "Phase 4: Task Generation", "status": "in_progress", "activeForm": "Executing task generation"}
]
```
@@ -254,9 +303,9 @@ SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]

**TodoWrite Update (Phase 4 completed)**:
```json
[
  {"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
  {"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
  {"content": "Phase 4: Task Generation", "status": "completed", "activeForm": "Executing task generation"}
]
```
@@ -282,7 +331,7 @@ Quality Gate: Consider running /workflow:action-plan-verify to catch issues earl

### Key Principles

1. **Task Attachment** (when SlashCommand dispatched):
   - Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
   - **Phase 2, 3**: Multiple sub-tasks attached (e.g., the three context-gather sub-tasks)
   - **Phase 4**: Single agent task attached (e.g., "Execute task-generate-agent")
@@ -301,14 +350,9 @@ Quality Gate: Consider running /workflow:action-plan-verify to catch issues earl
- No user intervention required between phases
- TodoWrite dynamically reflects current execution state

**Lifecycle Summary**: Initial pending tasks → Phase dispatched (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary for Phase 2/3, or marked completed for Phase 4) → Next phase begins → Repeat until all phases complete.

**Note**: See individual Phase descriptions for detailed TodoWrite Update examples:
- **Phase 2, 3**: Multiple sub-tasks with attach/collapse pattern
@@ -374,7 +418,7 @@ Phase 3: conflict-resolution [AUTO-TRIGGERED if conflict_risk ≥ medium]
  ↓ Output: Modified brainstorm artifacts (NO report file)
  ↓ Skip if conflict_risk is none/low → proceed directly to Phase 4
  ↓
Phase 4: task-generate-agent --session sessionId
  ↓ Input: sessionId + resolved brainstorm artifacts + session memory
  ↓ Output: IMPL_PLAN.md, task JSONs, TODO_LIST.md
  ↓
@@ -387,11 +431,6 @@ Return summary to user
- Brainstorming artifacts (potentially modified by Phase 3)
- Session-specific configuration

## Execution Flow Diagram

@@ -403,30 +442,29 @@ User triggers: /workflow:plan "Build authentication system"
Phase 1: Session Discovery
  → sessionId extracted
  ↓
Phase 2: Context Gathering (SlashCommand dispatched)
  → ATTACH 3 sub-tasks:            ← ATTACHED
    - → Analyze codebase structure
    - → Identify integration points
    - → Generate context package
  → Execute sub-tasks sequentially
  → COLLAPSE tasks                 ← COLLAPSED
  → contextPath + conflict_risk extracted
  ↓
Conditional Branch: Check conflict_risk
  ├─ IF conflict_risk ≥ medium:
  │    Phase 3: Conflict Resolution (SlashCommand dispatched)
  │    → ATTACH 3 sub-tasks:       ← ATTACHED
  │      - → Detect conflicts with CLI analysis
  │      - → Present conflicts to user
  │      - → Apply resolution strategies
  │    → Execute sub-tasks sequentially
  │    → COLLAPSE tasks            ← COLLAPSED
  │
  └─ ELSE: Skip Phase 3, proceed to Phase 4
  ↓
Phase 4: Task Generation (SlashCommand dispatched)
  → Single agent task (no sub-tasks)
  → Agent autonomously completes internally:
    (discovery → planning → output)
  → Outputs: IMPL_PLAN.md, IMPL-*.json, TODO_LIST.md
@@ -435,12 +473,12 @@ Return summary to user
 ```

 **Key Points**:
-- **← ATTACHED**: Tasks attached to TodoWrite when SlashCommand invoked
+- **← ATTACHED**: Tasks attached to TodoWrite when SlashCommand dispatched
 - Phase 2, 3: Multiple sub-tasks
 - Phase 4: Single agent task
 - **← COLLAPSED**: Sub-tasks collapsed to summary after completion (Phase 2, 3 only)
 - **Phase 4**: Single agent task, no collapse (just mark completed)
-- **Conditional Branch**: Phase 3 only executes if conflict_risk ≥ medium
+- **Conditional Branch**: Phase 3 only dispatches if conflict_risk ≥ medium
 - **Continuous Flow**: No user intervention between phases

 ## Error Handling
@@ -461,9 +499,7 @@ Return summary to user
 - **If conflict_risk ≥ medium**: Launch Phase 3 conflict-resolution with sessionId and contextPath
 - Wait for Phase 3 to finish executing (if executed), verify CONFLICT_RESOLUTION.md created
 - **If conflict_risk is none/low**: Skip Phase 3, proceed directly to Phase 4
-- **Build Phase 4 command**:
-  - Base command: `/workflow:tools:task-generate-agent --session [sessionId]`
-  - Add `--cli-execute` if flag present
+- **Build Phase 4 command**: `/workflow:tools:task-generate-agent --session [sessionId]`
 - Pass session ID to Phase 4 command
 - Verify all Phase 4 outputs
 - Update TodoWrite after each phase (dynamically adjust for Phase 3 presence)
@@ -48,8 +48,54 @@ Intelligently replans workflow sessions or individual tasks with interactive bou
 /workflow:replan IMPL-1 --interactive
 ```

+## Execution Process
+
+```
+Input Parsing:
+├─ Parse flags: --session, --interactive
+└─ Detect mode: task-id present → Task mode | Otherwise → Session mode
+
+Phase 1: Mode Detection & Session Discovery
+├─ Detect operation mode (Task vs Session)
+├─ Discover/validate session (--session flag or auto-detect)
+└─ Load session context (workflow-session.json, IMPL_PLAN.md, TODO_LIST.md)
+
+Phase 2: Interactive Requirement Clarification
+└─ Decision (by mode):
+   ├─ Session mode → 3-4 questions (scope, modules, changes, dependencies)
+   └─ Task mode → 2 questions (update type, ripple effect)
+
+Phase 3: Impact Analysis & Planning
+├─ Analyze required changes
+├─ Generate modification plan
+└─ User confirmation (Execute / Adjust / Cancel)
+
+Phase 4: Backup Creation
+└─ Backup all affected files with manifest
+
+Phase 5: Apply Modifications
+├─ Update IMPL_PLAN.md (if needed)
+├─ Update TODO_LIST.md (if needed)
+├─ Update/Create/Delete task JSONs
+└─ Update session metadata
+
+Phase 6: Verification & Summary
+├─ Validate consistency (JSON validity, task limits, acyclic dependencies)
+└─ Generate change summary
+```
+
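Phase 6's consistency check requires the task dependency graph to remain acyclic. A minimal sketch of such a check, assuming each task JSON exposes an `id` and a `depends_on` array (field names assumed for illustration):

```javascript
// Depth-first search over task dependencies; returns the first cycle found, or null.
function findDependencyCycle(tasks) {
  const deps = new Map(tasks.map(t => [t.id, t.depends_on ?? []]));
  const state = new Map(); // undefined = unvisited, 1 = on current path, 2 = finished

  function visit(id, path) {
    if (state.get(id) === 2) return null;
    if (state.get(id) === 1) return [...path, id]; // back-edge closes a cycle
    state.set(id, 1);
    for (const dep of deps.get(id) ?? []) {
      const cycle = visit(dep, [...path, id]);
      if (cycle) return cycle;
    }
    state.set(id, 2);
    return null;
  }

  for (const id of deps.keys()) {
    const cycle = visit(id, []);
    if (cycle) return cycle; // e.g. ["IMPL-1", "IMPL-3", "IMPL-1"]
  }
  return null; // acyclic
}
```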
 ## Execution Lifecycle

+### Input Parsing
+
+**Parse flags**:
+```javascript
+const sessionFlag = $ARGUMENTS.match(/--session\s+(\S+)/)?.[1]
+const interactive = $ARGUMENTS.includes('--interactive')
+const taskIdMatch = $ARGUMENTS.match(/\b(IMPL-\d+(?:\.\d+)?)\b/)
+const taskId = taskIdMatch?.[1]
+```
+
 ### Phase 1: Mode Detection & Session Discovery

 **Process**:
@@ -97,11 +143,10 @@ Options: Dynamically generated from existing tasks' focus_paths
 **Q3: Task Changes** (if scope >= task_restructure)
 ```javascript
 Options:
-- Add new task
-- Delete existing task
-- Merge tasks
-- Split tasks
-- Update content only
+- Add/remove tasks (add_remove)
+- Merge/split tasks (merge_split)
+- Update content only (update_only)
+// Note: Max 4 options for AskUserQuestion
 ```

 **Q4: Dependency Changes**
.claude/commands/workflow/review-fix.md (new file, 606 lines)

@@ -0,0 +1,606 @@
---
name: review-fix
description: Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.
argument-hint: "<export-file|review-dir> [--resume] [--max-iterations=N]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*), Edit(*), Write(*)
---

# Workflow Review-Fix Command

## Quick Start

```bash
# Fix from exported findings file (session-based path)
/workflow:review-fix .workflow/active/WFS-123/.review/fix-export-1706184622000.json

# Fix from review directory (auto-discovers latest export)
/workflow:review-fix .workflow/active/WFS-123/.review/

# Resume interrupted fix session
/workflow:review-fix --resume

# Custom max retry attempts per finding
/workflow:review-fix .workflow/active/WFS-123/.review/ --max-iterations=5
```

**Fix Source**: Exported findings from review cycle dashboard
**Output Directory**: `{review-dir}/fixes/{fix-session-id}/` (within session .review/)
**Default Max Iterations**: 3 (per finding, adjustable)
**CLI Tools**: @cli-planning-agent (planning), @cli-execute-agent (fixing)

## What & Why

### Core Concept
Automated fix orchestrator with **two-phase architecture**: AI-powered planning followed by coordinated parallel/serial execution. Generates fix timeline with intelligent grouping and dependency analysis, then executes fixes with conservative test verification.

**Fix Process**:
- **Planning Phase**: AI analyzes findings, generates fix plan with grouping and execution strategy
- **Execution Phase**: Main orchestrator coordinates agents per timeline stages
- **No rigid structure**: Adapts to task requirements, not bound to fixed JSON format

**vs Manual Fixing**:
- **Manual**: Developer reviews findings one-by-one, fixes sequentially
- **Automated**: AI groups related issues, executes in optimal parallel/serial order with automatic test verification

### Value Proposition
1. **Intelligent Planning**: AI-powered analysis identifies optimal grouping and execution strategy
2. **Multi-stage Coordination**: Supports complex parallel + serial execution with dependency management
3. **Conservative Safety**: Mandatory test verification with automatic rollback on failure
4. **Resume Support**: Checkpoint-based recovery for interrupted sessions

### Orchestrator Boundary (CRITICAL)
- **ONLY command** for automated review finding fixes
- Manages: Planning phase coordination, stage-based execution, agent scheduling, progress tracking
- Delegates: Fix planning to @cli-planning-agent, fix execution to @cli-execute-agent

### Execution Flow

```
Phase 1: Discovery & Initialization
└─ Validate export file, create fix session structure, initialize state files

Phase 2: Planning Coordination (@cli-planning-agent)
├─ Analyze findings for patterns and dependencies
├─ Group by file + dimension + root cause similarity
├─ Determine execution strategy (parallel/serial/hybrid)
├─ Generate fix timeline with stages
└─ Output: fix-plan.json

Phase 3: Execution Orchestration (Stage-based)
For each timeline stage:
├─ Load groups for this stage
├─ If parallel: Launch all group agents simultaneously
├─ If serial: Execute groups sequentially
├─ Each agent:
│  ├─ Analyze code context
│  ├─ Apply fix per strategy
│  ├─ Run affected tests
│  ├─ On test failure: Rollback, retry up to max_iterations
│  └─ On success: Commit, update fix-progress-{N}.json
└─ Advance to next stage

Phase 4: Completion & Aggregation
└─ Aggregate results → Generate fix-summary.md → Update history → Output summary

Phase 5: Session Completion (Optional)
└─ If all fixes successful → Prompt to complete workflow session
```

### Agent Roles

| Agent | Responsibility |
|-------|---------------|
| **Orchestrator** | Input validation, session management, planning coordination, stage-based execution scheduling, progress tracking, aggregation |
| **@cli-planning-agent** | Findings analysis, intelligent grouping (file+dimension+root cause), execution strategy determination (parallel/serial/hybrid), timeline generation with dependency mapping |
| **@cli-execute-agent** | Fix execution per group, code context analysis, Edit tool operations, test verification, git rollback on failure, completion JSON generation |

## Enhanced Features

### 1. Two-Phase Architecture

**Phase Separation**:

| Phase | Agent | Output | Purpose |
|-------|-------|--------|---------|
| **Planning** | @cli-planning-agent | fix-plan.json | Analyze findings, group intelligently, determine optimal execution strategy |
| **Execution** | @cli-execute-agent | completions/*.json | Execute fixes per plan with test verification and rollback |

**Benefits**:
- Clear separation of concerns (analysis vs execution)
- Reusable plans (can re-execute without re-planning)
- Better error isolation (planning failures vs execution failures)

### 2. Intelligent Grouping Strategy

**Three-Level Grouping**:

```javascript
// Level 1: Primary grouping by file + dimension
{file: "auth.ts", dimension: "security"}          → Group A
{file: "auth.ts", dimension: "quality"}           → Group B
{file: "query-builder.ts", dimension: "security"} → Group C

// Level 2: Secondary grouping by root cause similarity
Group A findings → Semantic similarity analysis (threshold 0.7)
  → Sub-group A1: "missing-input-validation" (findings 1, 2)
  → Sub-group A2: "insecure-crypto" (finding 3)

// Level 3: Dependency analysis
Sub-group A1 creates validation utilities
Sub-group C4 depends on those utilities
  → A1 must execute before C4 (serial stage dependency)
```

**Similarity Computation**:
- Combine: `description + recommendation + category`
- Vectorize: TF-IDF or LLM embedding
- Cluster: Greedy algorithm with cosine similarity > 0.7
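A minimal sketch of the greedy clustering step described above, assuming each finding has already been vectorized over `description + recommendation + category` (the `vector` field name is an assumption):

```javascript
const SIMILARITY_THRESHOLD = 0.7;

// Plain cosine similarity over equal-length numeric vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Greedy clustering: a finding joins the first cluster whose seed is similar
// enough, otherwise it starts a new cluster (sub-groups like A1/A2 above).
function clusterByRootCause(findings) {
  const clusters = [];
  for (const finding of findings) {
    const match = clusters.find(c => cosine(c.seed.vector, finding.vector) > SIMILARITY_THRESHOLD);
    if (match) match.members.push(finding);
    else clusters.push({ seed: finding, members: [finding] });
  }
  return clusters.map(c => c.members);
}
```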
### 3. Execution Strategy Determination

**Strategy Types**:

| Strategy | When to Use | Stage Structure |
|----------|-------------|-----------------|
| **Parallel** | All groups independent, different files | Single stage, all groups in parallel |
| **Serial** | Strong dependencies, shared resources | Multiple stages, one group per stage |
| **Hybrid** | Mixed dependencies | Multiple stages, parallel within stages |

**Dependency Detection**:
- Shared file modifications
- Utility creation + usage patterns
- Test dependency chains
- Risk level clustering (high-risk groups isolated)
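As a rough illustration only, the strategy choice can be derived from file overlap between groups; the heuristic below (parallel when nothing overlaps, serial when everything does, hybrid otherwise) and the `files` field are assumptions, not the agent's actual algorithm:

```javascript
// groups: [{ group_id, files: ["src/auth/service.ts", ...] }, ...]
function chooseExecutionStrategy(groups) {
  const overlaps = (a, b) => a.files.some(f => b.files.includes(f));
  const overlappingCount = groups.filter(
    (g, i) => groups.some((other, j) => j !== i && overlaps(g, other))
  ).length;

  if (overlappingCount === 0) return "parallel"; // independent groups, single stage
  if (overlappingCount === groups.length) return "serial"; // everything shares resources
  return "hybrid"; // stage the overlapping groups, parallelize the rest
}
```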
### 4. Conservative Test Verification

**Test Strategy** (per fix):

```javascript
// 1. Identify affected tests
const testPattern = identifyTestPattern(finding.file);
// e.g., "tests/auth/**/*.test.*" for src/auth/service.ts

// 2. Run tests
const result = await runTests(testPattern);

// 3. Evaluate (passRate is a percentage, so 100 means every test passed)
if (result.passRate < 100) {
  // Rollback
  await gitCheckout(finding.file);

  // Retry with failure context
  if (attempts < maxIterations) {
    const fixContext = analyzeFailure(result.stderr);
    regenerateFix(finding, fixContext);
    retry();
  } else {
    markFailed(finding.id);
  }
} else {
  // Commit
  await gitCommit(`Fix: ${finding.title} [${finding.id}]`);
  markFixed(finding.id);
}
```

**Pass Criteria**: 100% test pass rate (no partial fixes)
## Core Responsibilities

### Orchestrator

**Phase 1: Discovery & Initialization**
- Input validation: Check export file exists and is valid JSON
- Auto-discovery: If review-dir provided, find latest `*-fix-export.json`
- Session creation: Generate fix-session-id (`fix-{timestamp}`)
- Directory structure: Create `{review-dir}/fixes/{fix-session-id}/` with subdirectories
- State files: Initialize active-fix-session.json (session marker)
- TodoWrite initialization: Set up 4-phase tracking
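A minimal sketch of the auto-discovery step above, assuming the `fix-export-{timestamp}.json` naming shown in the output structure below and Node's built-in `fs`/`path` modules:

```javascript
const fs = require('fs');
const path = require('path');

// Pick the most recent fix-export-*.json in a .review/ directory.
// Equal-length millisecond timestamps sort correctly as strings.
function findLatestExport(reviewDir) {
  const candidates = fs.readdirSync(reviewDir)
    .filter(name => /^fix-export-\d+\.json$/.test(name))
    .sort();
  if (candidates.length === 0) {
    throw new Error(`No fix-export-*.json found in ${reviewDir}`);
  }
  return path.join(reviewDir, candidates[candidates.length - 1]);
}
```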
**Phase 2: Planning Coordination**
- Launch @cli-planning-agent with findings data and project context
- Validate fix-plan.json output (schema conformance, includes metadata with session status)
- Load plan into memory for execution phase
- TodoWrite update: Mark planning complete, start execution

**Phase 3: Execution Orchestration**
- Load fix-plan.json timeline stages
- For each stage:
  - If parallel mode: Launch all group agents via `Promise.all()`
  - If serial mode: Execute groups sequentially with `await`
- Assign agent IDs (agents update their fix-progress-{N}.json)
- Handle agent failures gracefully (mark group as failed, continue)
- Advance to next stage only when current stage complete
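A simplified sketch of that stage loop, assuming `fix-plan.json` exposes `timeline.stages` with `execution_mode` and `groups`, and with `runGroupAgent` standing in for the actual @cli-execute-agent dispatch:

```javascript
// Parallel stages fan out with Promise.all; serial stages await each group in
// order. A failed group is recorded and skipped, not fatal to the run.
async function executeTimeline(fixPlan, runGroupAgent) {
  const results = [];
  for (const stage of fixPlan.timeline.stages) {
    if (stage.execution_mode === 'parallel') {
      const settled = await Promise.all(
        stage.groups.map(group =>
          runGroupAgent(group).catch(error => ({ group, failed: true, error }))
        )
      );
      results.push(...settled);
    } else {
      for (const group of stage.groups) {
        try {
          results.push(await runGroupAgent(group));
        } catch (error) {
          results.push({ group, failed: true, error }); // mark failed, continue
        }
      }
    }
  }
  return results;
}
```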
**Phase 4: Completion & Aggregation**
- Collect final status from all fix-progress-{N}.json files
- Generate fix-summary.md with timeline and results
- Update fix-history.json with new session entry
- Remove active-fix-session.json
- TodoWrite completion: Mark all phases done
- Output summary to user

**Phase 5: Session Completion (Optional)**
- If all findings fixed successfully (no failures):
  - Prompt user: "All fixes complete. Complete workflow session? [Y/n]"
  - If confirmed: Execute `/workflow:session:complete` to archive session with lessons learned
- If partial success (some failures):
  - Output: "Some findings failed. Review fix-summary.md before completing session."
  - Do NOT auto-complete session
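A minimal sketch of the Phase 4 roll-up over `fix-progress-{N}.json` files; the per-finding `result` values follow the execution flow described later, and the exact field names are assumptions:

```javascript
const fs = require('fs');
const path = require('path');

// Tally fixed/failed/pending findings across all group progress files.
function aggregateProgress(fixSessionDir) {
  const totals = { fixed: 0, failed: 0, pending: 0 };
  for (const name of fs.readdirSync(fixSessionDir)) {
    if (!/^fix-progress-\d+\.json$/.test(name)) continue;
    const progress = JSON.parse(fs.readFileSync(path.join(fixSessionDir, name), 'utf8'));
    for (const finding of progress.findings ?? []) {
      if (finding.result === 'fixed') totals.fixed += 1;
      else if (finding.result === 'failed') totals.failed += 1;
      else totals.pending += 1;
    }
  }
  return totals; // drives fix-summary.md and the Phase 5 prompt (failed === 0)
}
```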
### Output File Structure

```
.workflow/active/WFS-{session-id}/.review/
├── fix-export-{timestamp}.json        # Exported findings (input)
└── fixes/{fix-session-id}/
    ├── fix-plan.json                  # Planning agent output (execution plan with metadata)
    ├── fix-progress-1.json            # Group 1 progress (planning agent init → agent updates)
    ├── fix-progress-2.json            # Group 2 progress (planning agent init → agent updates)
    ├── fix-progress-3.json            # Group 3 progress (planning agent init → agent updates)
    ├── fix-summary.md                 # Final report (orchestrator generates)
    ├── active-fix-session.json        # Active session marker
    └── fix-history.json               # All sessions history
```

**File Producers**:
- **Planning Agent**: `fix-plan.json` (with metadata), all `fix-progress-*.json` (initial state)
- **Execution Agents**: Update assigned `fix-progress-{N}.json` in real-time

### Agent Invocation Template

**Planning Agent**:
```javascript
Task({
  subagent_type: "cli-planning-agent",
  description: `Generate fix plan and initialize progress files for ${findings.length} findings`,
  prompt: `
## Task Objective
Analyze ${findings.length} code review findings and generate execution plan with intelligent grouping and timeline coordination.

## Input Data
Review Session: ${reviewId}
Fix Session ID: ${fixSessionId}
Total Findings: ${findings.length}

Findings:
${JSON.stringify(findings, null, 2)}

Project Context:
- Structure: ${projectStructure}
- Test Framework: ${testFramework}
- Git Status: ${gitStatus}

## Output Requirements

### 1. fix-plan.json
Execute: cat ~/.claude/workflows/cli-templates/fix-plan-template.json

Generate execution plan following template structure:

**Key Generation Rules**:
- **Metadata**: Populate fix_session_id, review_session_id, status ("planning"), created_at, started_at timestamps
- **Execution Strategy**: Choose approach (parallel/serial/hybrid) based on dependency analysis, set parallel_limit and stages count
- **Groups**: Create groups (G1, G2, ...) with intelligent grouping (see Analysis Requirements below), assign progress files (fix-progress-1.json, ...), populate fix_strategy with approach/complexity/test_pattern, assess risks, identify dependencies
- **Timeline**: Define stages respecting dependencies, set execution_mode per stage, map groups to stages, calculate critical path

### 2. fix-progress-{N}.json (one per group)
Execute: cat ~/.claude/workflows/cli-templates/fix-progress-template.json

For each group (G1, G2, G3, ...), generate fix-progress-{N}.json following template structure:

**Initial State Requirements**:
- Status: "pending", phase: "waiting"
- Timestamps: Set last_update to now, others null
- Findings: Populate from review findings with status "pending", all operation fields null
- Summary: Initialize all counters to zero
- Flow control: Empty implementation_approach array
- Errors: Empty array

**CRITICAL**: Ensure complete template structure - all fields must be present.

## Analysis Requirements

### Intelligent Grouping Strategy
Group findings using these criteria (in priority order):

1. **File Proximity**: Findings in same file or related files
2. **Dimension Affinity**: Same dimension (security, performance, etc.)
3. **Root Cause Similarity**: Similar underlying issues
4. **Fix Approach Commonality**: Can be fixed with similar approach

**Grouping Guidelines**:
- Optimal group size: 2-5 findings per group
- Avoid cross-cutting concerns in same group
- Consider test isolation (different test suites → different groups)
- Balance workload across groups for parallel execution

### Execution Strategy Determination

**Parallel Mode**: Use when groups are independent, no shared files
**Serial Mode**: Use when groups have dependencies or shared resources
**Hybrid Mode**: Use for mixed dependency graphs (recommended for most cases)

**Dependency Analysis**:
- Identify shared files between groups
- Detect test dependency chains
- Evaluate risk of concurrent modifications

### Risk Assessment

For each group, evaluate:
- **Complexity**: Based on code structure, file size, existing tests
- **Impact Scope**: Number of files affected, API surface changes
- **Rollback Feasibility**: Ease of reverting changes if tests fail

### Test Strategy

For each group, determine:
- **Test Pattern**: Glob pattern matching affected tests
- **Pass Criteria**: All tests must pass (100% pass rate)
- **Test Command**: Infer from project (package.json, pytest.ini, etc.)

## Output Files

Write to ${sessionDir}:
- ./fix-plan.json
- ./fix-progress-1.json
- ./fix-progress-2.json
- ./fix-progress-{N}.json (as many as groups created)

## Quality Checklist

Before finalizing outputs:
- ✅ All findings assigned to exactly one group
- ✅ Group dependencies correctly identified
- ✅ Timeline stages respect dependencies
- ✅ All progress files have complete initial structure
- ✅ Test patterns are valid and specific
- ✅ Risk assessments are realistic
- ✅ Estimated times are reasonable
`
})
```

**Execution Agent** (per group):
```javascript
Task({
  subagent_type: "cli-execute-agent",
  description: `Fix ${group.findings.length} issues: ${group.group_name}`,
  prompt: `
## Task Objective
Execute fixes for code review findings in group ${group.group_id}. Update progress file in real-time with flow control tracking.

## Assignment
- Group ID: ${group.group_id}
- Group Name: ${group.group_name}
- Progress File: ${sessionDir}/${group.progress_file}
- Findings Count: ${group.findings.length}
- Max Iterations: ${maxIterations} (per finding)

## Fix Strategy
${JSON.stringify(group.fix_strategy, null, 2)}

## Risk Assessment
${JSON.stringify(group.risk_assessment, null, 2)}

## Execution Flow

### Initialization (Before Starting)

1. Read ${group.progress_file} to load initial state
2. Update progress file:
   - assigned_agent: "${agentId}"
   - status: "in-progress"
   - started_at: Current ISO 8601 timestamp
   - last_update: Current ISO 8601 timestamp
3. Write updated state back to ${group.progress_file}

### Main Execution Loop

For EACH finding in ${group.progress_file}.findings:

#### Step 1: Analyze Context

**Before Step**:
- Update finding: status→"in-progress", started_at→now()
- Update current_finding: Populate with finding details, status→"analyzing", action→"Reading file and understanding code structure"
- Update phase→"analyzing"
- Update flow_control: Add "analyze_context" step to implementation_approach (status→"in-progress"), set current_step→"analyze_context"
- Update last_update→now(), write to ${group.progress_file}

**Action**:
- Read file: finding.file
- Understand code structure around line: finding.line
- Analyze surrounding context (imports, dependencies, related functions)
- Review recommendations: finding.recommendations

**After Step**:
- Update flow_control: Mark "analyze_context" step as "completed" with completed_at→now()
- Update last_update→now(), write to ${group.progress_file}

#### Step 2: Apply Fix

**Before Step**:
- Update current_finding: status→"fixing", action→"Applying code changes per recommendations"
- Update phase→"fixing"
- Update flow_control: Add "apply_fix" step to implementation_approach (status→"in-progress"), set current_step→"apply_fix"
- Update last_update→now(), write to ${group.progress_file}

**Action**:
- Use Edit tool to implement code changes per finding.recommendations
- Follow fix_strategy.approach
- Maintain code style and existing patterns

**After Step**:
- Update flow_control: Mark "apply_fix" step as "completed" with completed_at→now()
- Update last_update→now(), write to ${group.progress_file}

#### Step 3: Test Verification

**Before Step**:
- Update current_finding: status→"testing", action→"Running test suite to verify fix"
- Update phase→"testing"
- Update flow_control: Add "run_tests" step to implementation_approach (status→"in-progress"), set current_step→"run_tests"
- Update last_update→now(), write to ${group.progress_file}

**Action**:
- Run tests using fix_strategy.test_pattern
- Require 100% pass rate
- Capture test output

**On Test Failure**:
- Git rollback: \`git checkout -- \${finding.file}\`
- Increment finding.attempts
- Update flow_control: Mark "run_tests" step as "failed" with completed_at→now()
- Update errors: Add entry (finding_id, error_type→"test_failure", message, timestamp)
- If finding.attempts < ${maxIterations}:
  - Reset flow_control: implementation_approach→[], current_step→null
  - Retry from Step 1
- Else:
  - Update finding: status→"completed", result→"failed", error_message→"Max iterations reached", completed_at→now()
  - Update summary counts, move to next finding

**On Test Success**:
- Update flow_control: Mark "run_tests" step as "completed" with completed_at→now()
- Update last_update→now(), write to ${group.progress_file}
- Proceed to Step 4

#### Step 4: Commit Changes

**Before Step**:
- Update current_finding: status→"committing", action→"Creating git commit for successful fix"
- Update phase→"committing"
- Update flow_control: Add "commit_changes" step to implementation_approach (status→"in-progress"), set current_step→"commit_changes"
- Update last_update→now(), write to ${group.progress_file}

**Action**:
- Git commit: \`git commit -m "fix(${finding.dimension}): ${finding.title} [${finding.id}]"\`
- Capture commit hash

**After Step**:
- Update finding: status→"completed", result→"fixed", commit_hash→<captured>, test_passed→true, completed_at→now()
- Update flow_control: Mark "commit_changes" step as "completed" with completed_at→now()
- Update last_update→now(), write to ${group.progress_file}

#### After Each Finding

- Update summary: Recalculate counts (pending/in_progress/fixed/failed) and percent_complete
- If all findings completed: Clear current_finding, reset flow_control
- Update last_update→now(), write to ${group.progress_file}

### Final Completion

When all findings processed:
- Update status→"completed", phase→"done", summary.percent_complete→100.0
- Update last_update→now(), write final state to ${group.progress_file}

## Critical Requirements

### Progress File Updates
- **MUST update after every significant action** (before/after each step)
- **Always maintain complete structure** - never write partial updates
- **Use ISO 8601 timestamps** - e.g., "2025-01-25T14:36:00Z"

### Flow Control Format
Follow action-planning-agent flow_control.implementation_approach format:
- step: Identifier (e.g., "analyze_context", "apply_fix")
- action: Human-readable description
- status: "pending" | "in-progress" | "completed" | "failed"
- started_at: ISO 8601 timestamp or null
- completed_at: ISO 8601 timestamp or null

### Error Handling
- Capture all errors in errors[] array
- Never leave progress file in invalid state
- Always write complete updates, never partial
- On unrecoverable error: Mark group as failed, preserve state

## Test Patterns
Use fix_strategy.test_pattern to run affected tests:
- Pattern: ${group.fix_strategy.test_pattern}
- Command: Infer from project (npm test, pytest, etc.)
- Pass Criteria: 100% pass rate required
`
})
```
### Error Handling

**Planning Failures**:
- Invalid template → Abort with error message
- Insufficient findings data → Request complete export
- Planning timeout → Retry once, then fail gracefully

**Execution Failures**:
- Agent crash → Mark group as failed, continue with other groups
- Test command not found → Skip test verification, warn user
- Git operations fail → Abort with error, preserve state

**Rollback Scenarios**:
- Test failure after fix → Automatic `git checkout` rollback
- Max iterations reached → Leave file unchanged, mark as failed
- Unrecoverable error → Rollback entire group, save checkpoint

### TodoWrite Structure

**Initialization**:
```javascript
TodoWrite({
  todos: [
    {content: "Phase 1: Discovery & Initialization", status: "completed"},
    {content: "Phase 2: Planning", status: "in_progress"},
    {content: "Phase 3: Execution", status: "pending"},
    {content: "Phase 4: Completion", status: "pending"}
  ]
});
```

**During Execution**:
```javascript
TodoWrite({
  todos: [
    {content: "Phase 1: Discovery & Initialization", status: "completed"},
    {content: "Phase 2: Planning", status: "completed"},
    {content: "Phase 3: Execution", status: "in_progress"},
    {content: "  → Stage 1: Parallel execution (3 groups)", status: "completed"},
    {content: "    • Group G1: Auth validation (2 findings)", status: "completed"},
    {content: "    • Group G2: Query security (3 findings)", status: "completed"},
    {content: "    • Group G3: Config quality (1 finding)", status: "completed"},
    {content: "  → Stage 2: Serial execution (1 group)", status: "in_progress"},
    {content: "    • Group G4: Dependent fixes (2 findings)", status: "in_progress"},
    {content: "Phase 4: Completion", status: "pending"}
  ]
});
```

**Update Rules**:
- Add stage items dynamically based on fix-plan.json timeline
- Add group items per stage
- Mark completed immediately after each group finishes
- Update parent phase status when all child items complete

## Best Practices

1. **Trust AI Planning**: The planning agent's grouping and execution strategy are based on dependency analysis
2. **Conservative Approach**: Test verification is mandatory - no fixes are kept without passing tests
3. **Parallel Efficiency**: A default of 3 concurrent agents balances speed and resource usage
4. **Resume Support**: Fix sessions can resume from checkpoints after interruption
5. **Manual Review**: Always review failed fixes manually - they may require architectural changes
6. **Incremental Fixing**: Start with small batches (5-10 findings) before large-scale fixes

## Related Commands

### View Fix Progress
Use `ccw view` to open the workflow dashboard in browser:

```bash
ccw view
```
.claude/commands/workflow/review-module-cycle.md (new file, 765 lines)

@@ -0,0 +1,765 @@
---
name: review-module-cycle
description: Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.
argument-hint: "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*)
---

# Workflow Review-Module-Cycle Command

## Quick Start

```bash
# Review specific module (all 7 dimensions)
/workflow:review-module-cycle src/auth/**

# Review multiple modules
/workflow:review-module-cycle src/auth/**,src/payment/**

# Review with custom dimensions
/workflow:review-module-cycle src/payment/** --dimensions=security,architecture,quality

# Review specific files
/workflow:review-module-cycle src/payment/processor.ts,src/payment/validator.ts
```

**Review Scope**: Specified modules/files only (independent of git history)
**Session Requirement**: Auto-creates workflow session via `/workflow:session:start`
**Output Directory**: `.workflow/active/WFS-{session-id}/.review/` (session-based)
**Default Dimensions**: Security, Architecture, Quality, Action-Items, Performance, Maintainability, Best-Practices
**Max Iterations**: 3 (adjustable via --max-iterations)
**Default Iterations**: 1 (deep-dive runs once; use --max-iterations=0 to skip)
**CLI Tools**: Gemini → Qwen → Codex (fallback chain)

## What & Why

### Core Concept
Independent multi-dimensional code review orchestrator with **hybrid parallel-iterative execution** for comprehensive quality assessment of **specific modules or files**.

**Review Scope**:
- **Module-based**: Reviews specified file patterns (e.g., `src/auth/**`, `*.ts`)
- **Session-integrated**: Runs within workflow session context for unified tracking
- **Output location**: `.review/` subdirectory within active session

**vs Session Review**:
- **Session Review** (`review-session-cycle`): Reviews git changes within a workflow session
- **Module Review** (`review-module-cycle`): Reviews any specified code paths, regardless of git history
- **Common output**: Both use same `.review/` directory structure within session

### Value Proposition
1. **Module-Focused Review**: Target specific code areas independent of git history
2. **Session-Integrated**: Review results tracked within workflow session for unified management
3. **Comprehensive Coverage**: Same 7 specialized dimensions as session review
4. **Intelligent Prioritization**: Automatic identification of critical issues and cross-cutting concerns
5. **Unified Archive**: Review results archived with session for historical reference

### Orchestrator Boundary (CRITICAL)
- **ONLY command** for independent multi-dimensional module review
- Manages: dimension coordination, aggregation, iteration control, progress tracking
- Delegates: Code exploration and analysis to @cli-explore-agent, dimension-specific reviews via Deep Scan mode

## How It Works

### Execution Flow

```
Phase 1: Discovery & Initialization
└─ Resolve file patterns, validate paths, initialize state, create output structure

Phase 2: Parallel Reviews (for each dimension)
├─ Launch 7 review agents simultaneously
├─ Each executes CLI analysis via Gemini/Qwen on specified files
├─ Generate dimension JSON + markdown reports
└─ Update review-progress.json

Phase 3: Aggregation
├─ Load all dimension JSON files
├─ Calculate severity distribution (critical/high/medium/low)
├─ Identify cross-cutting concerns (files in 3+ dimensions)
└─ Decision:
   ├─ Critical findings OR high > 5 OR critical files → Phase 4 (Iterate)
   └─ Else → Phase 5 (Complete)

Phase 4: Iterative Deep-Dive (optional)
├─ Select critical findings (max 5 per iteration)
├─ Launch deep-dive agents for root cause analysis
├─ Generate remediation plans with impact assessment
├─ Re-assess severity based on analysis
└─ Loop until no critical findings OR max iterations

Phase 5: Completion
└─ Finalize review-progress.json
```

### Agent Roles

| Agent | Responsibility |
|-------|---------------|
| **Orchestrator** | Phase control, path resolution, state management, aggregation logic, iteration control |
| **@cli-explore-agent** (Review) | Execute dimension-specific code analysis via Deep Scan mode, generate findings JSON with dual-source strategy (Bash + Gemini), create structured analysis reports |
| **@cli-explore-agent** (Deep-dive) | Focused root cause analysis using dependency mapping, remediation planning with architectural insights, impact assessment, severity re-assessment |

## Enhanced Features

### 1. Review Dimensions Configuration

**7 Specialized Dimensions** with priority-based allocation:

| Dimension | Template | Priority | Timeout |
|-----------|----------|----------|---------|
| **Security** | 03-assess-security-risks.txt | 1 (Critical) | 60min |
| **Architecture** | 02-review-architecture.txt | 2 (High) | 60min |
| **Quality** | 02-review-code-quality.txt | 3 (Medium) | 40min |
| **Action-Items** | 02-analyze-code-patterns.txt | 2 (High) | 40min |
| **Performance** | 03-analyze-performance.txt | 3 (Medium) | 60min |
| **Maintainability** | 02-review-code-quality.txt* | 3 (Medium) | 40min |
| **Best-Practices** | 03-review-quality-standards.txt | 3 (Medium) | 40min |

*Custom focus: "Assess technical debt and maintainability"

**Category Definitions by Dimension**:

```javascript
const CATEGORIES = {
  security: ['injection', 'authentication', 'authorization', 'encryption', 'input-validation', 'access-control', 'data-exposure'],
  architecture: ['coupling', 'cohesion', 'layering', 'dependency', 'pattern-violation', 'scalability', 'separation-of-concerns'],
  quality: ['code-smell', 'duplication', 'complexity', 'naming', 'error-handling', 'testability', 'readability'],
  'action-items': ['requirement-coverage', 'acceptance-criteria', 'documentation', 'deployment-readiness', 'missing-functionality'],
  performance: ['n-plus-one', 'inefficient-query', 'memory-leak', 'blocking-operation', 'caching', 'resource-usage'],
  maintainability: ['technical-debt', 'magic-number', 'long-method', 'large-class', 'dead-code', 'commented-code'],
  'best-practices': ['convention-violation', 'anti-pattern', 'deprecated-api', 'missing-validation', 'inconsistent-style']
};
```

### 2. Path Pattern Resolution

**Syntax Rules**:
- All paths are **relative** from project root (e.g., `src/auth/**` not `/src/auth/**`)
- Multiple patterns: comma-separated, **no spaces** (e.g., `src/auth/**,src/payment/**`)
- Glob and specific files can be mixed (e.g., `src/auth/**,src/config.ts`)

**Supported Patterns**:

| Pattern Type | Example | Description |
|--------------|---------|-------------|
| Glob directory | `src/auth/**` | All files under src/auth/ |
| Glob with extension | `src/**/*.ts` | All .ts files under src/ |
| Specific file | `src/payment/processor.ts` | Single file |
| Multiple patterns | `src/auth/**,src/payment/**` | Comma-separated (no spaces) |

**Resolution Process**:
1. Parse input pattern (split by comma, trim whitespace)
2. Expand glob patterns to file list via `find` command
3. Validate all files exist and are readable
4. Error if pattern matches 0 files
5. Store resolved file list in review-state.json
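A minimal sketch of that resolution step, mirroring the `find`-based expansion used in Step 2 of the orchestrator below (the helper name is an assumption):

```javascript
const { execSync } = require('child_process');

// Expand comma-separated patterns into a deduplicated list of relative paths.
function resolvePatterns(patternArg) {
  const patterns = patternArg.split(',').map(p => p.trim()).filter(Boolean);
  const files = new Set();
  for (const pattern of patterns) {
    const matched = pattern.includes('*')
      ? execSync(`find . -path "./${pattern}" -type f`, { encoding: 'utf8' })
          .split('\n')
          .filter(Boolean)
          .map(f => f.replace(/^\.\//, ''))
      : [pattern]; // specific file, taken as-is
    if (matched.length === 0) throw new Error(`Pattern matched 0 files: ${pattern}`);
    matched.forEach(f => files.add(f));
  }
  return [...files]; // stored in review-state.json as resolved_files
}
```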
### 3. Aggregation Logic

**Cross-Cutting Concern Detection**:
1. Files appearing in 3+ dimensions = **Critical Files**
2. Same issue pattern across dimensions = **Systemic Issue**
3. Severity clustering in specific files = **Hotspots**

**Deep-Dive Selection Criteria**:
- All critical severity findings (priority 1)
- Top 3 high-severity findings in critical files (priority 2)
- Max 5 findings per iteration (keeps each deep-dive pass focused)
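A sketch of the critical-file detection and deep-dive selection described above, assuming each dimension result exposes a `findings` array with `file` and `severity` fields (as in the dimension schema referenced later):

```javascript
// Files flagged in 3+ dimensions become critical files; deep-dive takes all
// critical findings plus up to 3 high findings in those files, capped at 5.
function selectDeepDiveFindings(dimensionResults) {
  const dimensionsPerFile = new Map();
  for (const dim of dimensionResults) {
    for (const file of new Set(dim.findings.map(f => f.file))) {
      dimensionsPerFile.set(file, (dimensionsPerFile.get(file) ?? 0) + 1);
    }
  }
  const criticalFiles = [...dimensionsPerFile]
    .filter(([, count]) => count >= 3)
    .map(([file]) => file);

  const all = dimensionResults.flatMap(d => d.findings);
  const critical = all.filter(f => f.severity === 'critical');
  const highInCriticalFiles = all
    .filter(f => f.severity === 'high' && criticalFiles.includes(f.file))
    .slice(0, 3);

  return { criticalFiles, selected: [...critical, ...highInCriticalFiles].slice(0, 5) };
}
```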
### 4. Severity Assessment

**Severity Levels**:
- **Critical**: Security vulnerabilities, data corruption risks, system-wide failures, authentication/authorization bypass
- **High**: Feature degradation, performance bottlenecks, architecture violations, significant technical debt
- **Medium**: Code smells, minor performance issues, style inconsistencies, maintainability concerns
- **Low**: Documentation gaps, minor refactoring opportunities, cosmetic issues

**Iteration Trigger**:
- Critical findings > 0 OR
- High findings > 5 OR
- Critical files count > 0
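The trigger reduces to one predicate over the aggregated counts; a sketch, reusing the severity distribution and critical-file list from the aggregation step:

```javascript
// Phase 4 runs only while findings warrant it and the iteration budget allows.
function shouldIterate(severity, criticalFiles, iteration, maxIterations) {
  const warranted =
    severity.critical > 0 || severity.high > 5 || criticalFiles.length > 0;
  return warranted && iteration < maxIterations;
}
```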
## Core Responsibilities

### Orchestrator

**Phase 1: Discovery & Initialization**

**Step 1: Session Creation**
```javascript
// Create workflow session for this review (type: review)
SlashCommand(command="/workflow:session:start --type review \"Code review for [target_pattern]\"")

// Parse output
const sessionId = output.match(/SESSION_ID: (WFS-[^\s]+)/)[1];
```

**Step 2: Path Resolution & Validation**
```bash
# Expand glob pattern to file list (relative paths from project root)
find . -path "./src/auth/**" -type f | sed 's|^\./||'

# Validate files exist and are readable
for file in ${resolvedFiles[@]}; do
  test -r "$file" || error "File not readable: $file"
done
```
- Parse and expand file patterns (glob support): `src/auth/**` → actual file list
- Validation: Ensure all specified files exist and are readable
- Store as **relative paths** from project root (e.g., `src/auth/service.ts`)
- Agents construct absolute paths dynamically during execution

**Step 3: Output Directory Setup**
- Output directory: `.workflow/active/${sessionId}/.review/`
- Create directory structure:
```bash
mkdir -p ${sessionDir}/.review/{dimensions,iterations,reports}
```

**Step 4: Initialize Review State**
- State initialization: Create `review-state.json` with metadata, dimensions, max_iterations, resolved_files (merged metadata + state)
- Progress tracking: Create `review-progress.json` for progress tracking

**Step 5: TodoWrite Initialization**
- Set up progress tracking with hierarchical structure
- Mark Phase 1 completed, Phase 2 in_progress

**Phase 2: Parallel Review Coordination**
- Launch 7 @cli-explore-agent instances simultaneously (Deep Scan mode)
- Pass dimension-specific context (template, timeout, custom focus, **target files**)
- Monitor completion via review-progress.json updates
- TodoWrite updates: Mark dimensions as completed
- CLI tool fallback: Gemini → Qwen → Codex (on error/timeout)
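A simplified sketch of the parallel fan-out, with `launchReviewAgent` standing in for the Task() invocation of @cli-explore-agent shown in the template further below:

```javascript
// One agent per dimension, launched together; a failed dimension is isolated
// so the others still complete and aggregation can proceed.
async function runParallelReviews(dimensions, launchReviewAgent) {
  const settled = await Promise.allSettled(
    dimensions.map(dimension => launchReviewAgent(dimension))
  );
  return settled.map((result, i) => ({
    dimension: dimensions[i],
    status: result.status === 'fulfilled' ? 'completed' : 'failed',
  }));
}
```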
**Phase 3: Aggregation**
- Load all dimension JSON files from dimensions/
- Calculate severity distribution: Count by critical/high/medium/low
- Identify cross-cutting concerns: Files in 3+ dimensions
- Select deep-dive findings: Critical + high in critical files (max 5)
- Decision logic: Iterate if critical > 0 OR high > 5 OR critical files exist
- Update review-state.json with aggregation results

**Phase 4: Iteration Control**
- Check iteration count < max_iterations (default 3)
- Launch deep-dive agents for selected findings
- Collect remediation plans and re-assessed severities
- Update severity distribution based on re-assessments
- Record iteration in review-state.json
- Loop back to aggregation if critical/high findings remain

**Phase 5: Completion**
- Finalize review-progress.json with completion statistics
- Update review-state.json with completion_time and phase=complete
- TodoWrite completion: Mark all tasks done

### Output File Structure

```
.workflow/active/WFS-{session-id}/.review/
├── review-state.json                  # Orchestrator state machine (includes metadata)
├── review-progress.json               # Real-time progress for dashboard
├── dimensions/                        # Per-dimension results
│   ├── security.json
│   ├── architecture.json
│   ├── quality.json
│   ├── action-items.json
│   ├── performance.json
│   ├── maintainability.json
│   └── best-practices.json
├── iterations/                        # Deep-dive results
│   ├── iteration-1-finding-{uuid}.json
│   └── iteration-2-finding-{uuid}.json
└── reports/                           # Human-readable reports
    ├── security-analysis.md
    ├── security-cli-output.txt
    ├── deep-dive-1-{uuid}.md
    └── ...
```

**Session Context**:
```
.workflow/active/WFS-{session-id}/
├── workflow-session.json
├── IMPL_PLAN.md
├── TODO_LIST.md
├── .task/
├── .summaries/
└── .review/                           # Review results (this command)
    └── (structure above)
```

### Review State JSON

**Purpose**: Unified state machine and metadata (merged from metadata + state)

```json
{
  "review_id": "review-20250125-143022",
  "review_type": "module",
  "session_id": "WFS-auth-system",
  "metadata": {
    "created_at": "2025-01-25T14:30:22Z",
    "target_pattern": "src/auth/**",
    "resolved_files": [
      "src/auth/service.ts",
      "src/auth/validator.ts",
      "src/auth/middleware.ts"
    ],
    "dimensions": ["security", "architecture", "quality", "action-items", "performance", "maintainability", "best-practices"],
    "max_iterations": 3
  },
  "phase": "parallel|aggregate|iterate|complete",
  "current_iteration": 1,
  "dimensions_reviewed": ["security", "architecture", "quality", "action-items", "performance", "maintainability", "best-practices"],
  "selected_strategy": "comprehensive",
  "next_action": "execute_parallel_reviews|aggregate_findings|execute_deep_dive|generate_final_report|complete",
  "severity_distribution": {
    "critical": 2,
    "high": 5,
    "medium": 12,
    "low": 8
  },
  "critical_files": [...],
  "iterations": [...],
  "completion_criteria": {...}
}
```

### Review Progress JSON

**Purpose**: Real-time dashboard updates via polling

```json
{
  "review_id": "review-20250125-143022",
  "last_update": "2025-01-25T14:35:10Z",
  "phase": "parallel|aggregate|iterate|complete",
  "current_iteration": 1,
  "progress": {
    "parallel_review": {
      "total_dimensions": 7,
      "completed": 5,
      "in_progress": 2,
      "percent_complete": 71
    },
    "deep_dive": {
      "total_findings": 6,
      "analyzed": 2,
      "in_progress": 1,
      "percent_complete": 33
    }
  },
  "agent_status": [
    {
      "agent_type": "review-agent",
      "dimension": "security",
      "status": "completed",
      "started_at": "2025-01-25T14:30:00Z",
      "completed_at": "2025-01-25T15:15:00Z",
      "duration_ms": 2700000
    },
    {
      "agent_type": "deep-dive-agent",
      "finding_id": "sec-001-uuid",
      "status": "in_progress",
      "started_at": "2025-01-25T14:32:00Z"
    }
  ],
  "estimated_completion": "2025-01-25T16:00:00Z"
}
```
### Agent Output Schemas

**Agent-produced JSON files follow standardized schemas**:

1. **Dimension Results** (cli-explore-agent output from parallel reviews)
   - Schema: `~/.claude/workflows/cli-templates/schemas/review-dimension-results-schema.json`
   - Output: `{output-dir}/dimensions/{dimension}.json`
   - Contains: findings array, summary statistics, cross_references

2. **Deep-Dive Results** (cli-explore-agent output from iterations)
   - Schema: `~/.claude/workflows/cli-templates/schemas/review-deep-dive-results-schema.json`
   - Output: `{output-dir}/iterations/iteration-{N}-finding-{uuid}.json`
   - Contains: root_cause, remediation_plan, impact_assessment, reassessed_severity

### Agent Invocation Template

**Review Agent** (parallel execution, 7 instances):

```javascript
Task(
  subagent_type="cli-explore-agent",
  description=`Execute ${dimension} review analysis via Deep Scan`,
  prompt=`
## Task Objective
Conduct comprehensive ${dimension} code exploration and analysis using Deep Scan mode (Bash + Gemini dual-source strategy) for specified module files

## Analysis Mode Selection
Use **Deep Scan mode** for this review:
- Phase 1: Bash structural scan for standard patterns (classes, functions, imports)
- Phase 2: Gemini semantic analysis for design intent, non-standard patterns, ${dimension}-specific concerns
- Phase 3: Synthesis with attribution (bash-discovered vs gemini-discovered findings)

## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
1. Read review state: ${reviewStateJsonPath}
2. Get target files: Read resolved_files from review-state.json
3. Validate file access: bash(ls -la ${targetFiles.join(' ')})
4. Execute: cat ~/.claude/workflows/cli-templates/schemas/review-dimension-results-schema.json (get output schema reference)

## Review Context
- Review Type: module (independent)
- Review Dimension: ${dimension}
- Review ID: ${reviewId}
- Target Pattern: ${targetPattern}
- Resolved Files: ${resolvedFiles.length} files
- Output Directory: ${outputDir}

## CLI Configuration
- Tool Priority: gemini → qwen → codex (fallback chain)
- Custom Focus: ${customFocus || 'Standard dimension analysis'}
- Mode: analysis (READ-ONLY)
- Context Pattern: ${targetFiles.map(f => `@${f}`).join(' ')}

## Expected Deliverables

**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 4, follow schema exactly

1. Dimension Results JSON: ${outputDir}/dimensions/${dimension}.json

**⚠️ CRITICAL JSON STRUCTURE REQUIREMENTS**:

Root structure MUST be array: \`[{ ... }]\` NOT \`{ ... }\`

Required top-level fields:
- dimension, review_id, analysis_timestamp (NOT timestamp/analyzed_at)
- cli_tool_used (gemini|qwen|codex), model, analysis_duration_ms
- summary (FLAT structure), findings, cross_references

Summary MUST be FLAT (NOT nested by_severity):
\`{ "total_findings": N, "critical": N, "high": N, "medium": N, "low": N, "files_analyzed": N, "lines_reviewed": N }\`

Finding required fields:
- id: format \`{dim}-{seq}-{uuid8}\` e.g., \`sec-001-a1b2c3d4\` (lowercase)
- severity: lowercase only (critical|high|medium|low)
- snippet (NOT code_snippet), impact (NOT exploit_scenario)
- metadata, iteration (0), status (pending_remediation), cross_references

2. Analysis Report: ${outputDir}/reports/${dimension}-analysis.md
   - Human-readable summary with recommendations
   - Grouped by severity: critical → high → medium → low
   - Include file:line references for all findings

3. CLI Output Log: ${outputDir}/reports/${dimension}-cli-output.txt
   - Raw CLI tool output for debugging
   - Include full analysis text

## Dimension-Specific Guidance
${getDimensionGuidance(dimension)}

## Success Criteria
- [ ] Schema obtained via cat review-dimension-results-schema.json
- [ ] All target files analyzed for ${dimension} concerns
- [ ] All findings include file:line references with code snippets
- [ ] Severity assessment follows established criteria (see reference)
- [ ] Recommendations are actionable with code examples
- [ ] JSON output follows schema exactly
- [ ] Report is comprehensive and well-organized
`
)
```

**Deep-Dive Agent** (iteration execution):

```javascript
Task(
  subagent_type="cli-explore-agent",
  description=`Deep-dive analysis for critical finding: ${findingTitle} via Dependency Map + Deep Scan`,
  prompt=`
## Task Objective
Perform focused root cause analysis using Dependency Map mode (for impact analysis) + Deep Scan mode (for semantic understanding) to generate comprehensive remediation plan for critical ${dimension} issue

## Analysis Mode Selection
Use **Dependency Map mode** first to understand dependencies:
- Build dependency graph around ${file} to identify affected components
- Detect circular dependencies or tight coupling related to this finding
- Calculate change risk scores for remediation impact

Then apply **Deep Scan mode** for semantic analysis:
- Understand design intent and architectural context
- Identify non-standard patterns or implicit dependencies
- Extract remediation insights from code structure

## Finding Context
- Finding ID: ${findingId}
- Original Dimension: ${dimension}
- Title: ${findingTitle}
- File: ${file}:${line}
- Severity: ${severity}
- Category: ${category}
- Original Description: ${description}
- Iteration: ${iteration}

## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
1. Read original finding: ${dimensionJsonPath}
2. Read affected file: ${file}
3. Identify related code: bash(grep -r "import.*${basename(file)}" ${projectDir}/src --include="*.ts")
4. Read test files: bash(find ${projectDir}/tests -name "*${basename(file, '.ts')}*" -type f)
5. Execute: cat ~/.claude/workflows/cli-templates/schemas/review-deep-dive-results-schema.json (get output schema reference)

## CLI Configuration
- Tool Priority: gemini → qwen → codex
- Template: ~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt
|
||||||
|
- Mode: analysis (READ-ONLY)
|
||||||
|
|
||||||
|
## Expected Deliverables
|
||||||
|
|
||||||
|
**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 5, follow schema exactly
|
||||||
|
|
||||||
|
1. Deep-Dive Results JSON: ${outputDir}/iterations/iteration-${iteration}-finding-${findingId}.json
|
||||||
|
|
||||||
|
**⚠️ CRITICAL JSON STRUCTURE REQUIREMENTS**:
|
||||||
|
|
||||||
|
Root structure MUST be array: \`[{ ... }]\` NOT \`{ ... }\`
|
||||||
|
|
||||||
|
Required top-level fields:
|
||||||
|
- finding_id, dimension, iteration, analysis_timestamp
|
||||||
|
- cli_tool_used, model, analysis_duration_ms
|
||||||
|
- original_finding, root_cause, remediation_plan
|
||||||
|
- impact_assessment, reassessed_severity, confidence_score, cross_references
|
||||||
|
|
||||||
|
All nested objects must follow schema exactly - read schema for field names
|
||||||
|
|
||||||
|
2. Analysis Report: ${outputDir}/reports/deep-dive-${iteration}-${findingId}.md
|
||||||
|
- Detailed root cause analysis
|
||||||
|
- Step-by-step remediation plan
|
||||||
|
- Impact assessment and rollback strategy
|
||||||
|
|
||||||
|
## Success Criteria
|
||||||
|
- [ ] Schema obtained via cat review-deep-dive-results-schema.json
|
||||||
|
- [ ] Root cause clearly identified with supporting evidence
|
||||||
|
- [ ] Remediation plan is step-by-step actionable with exact file:line references
|
||||||
|
- [ ] Each step includes specific commands and validation tests
|
||||||
|
- [ ] Impact fully assessed (files, tests, breaking changes, dependencies)
|
||||||
|
- [ ] Severity re-evaluation justified with evidence
|
||||||
|
- [ ] Confidence score accurately reflects certainty of analysis
|
||||||
|
- [ ] JSON output follows schema exactly
|
||||||
|
- [ ] References include project-specific and external documentation
|
||||||
|
`
|
||||||
|
)
|
||||||
|
```
|
||||||
|
|
||||||
|
### Dimension Guidance Reference
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
function getDimensionGuidance(dimension) {
|
||||||
|
const guidance = {
|
||||||
|
security: `
|
||||||
|
Focus Areas:
|
||||||
|
- Input validation and sanitization
|
||||||
|
- Authentication and authorization mechanisms
|
||||||
|
- Data encryption (at-rest and in-transit)
|
||||||
|
- SQL/NoSQL injection vulnerabilities
|
||||||
|
- XSS, CSRF, and other web vulnerabilities
|
||||||
|
- Sensitive data exposure
|
||||||
|
- Access control and privilege escalation
|
||||||
|
|
||||||
|
Severity Criteria:
|
||||||
|
- Critical: Authentication bypass, SQL injection, RCE, sensitive data exposure
|
||||||
|
- High: Missing authorization checks, weak encryption, exposed secrets
|
||||||
|
- Medium: Missing input validation, insecure defaults, weak password policies
|
||||||
|
- Low: Security headers missing, verbose error messages, outdated dependencies
|
||||||
|
`,
|
||||||
|
architecture: `
|
||||||
|
Focus Areas:
|
||||||
|
- Layering and separation of concerns
|
||||||
|
- Coupling and cohesion
|
||||||
|
- Design pattern adherence
|
||||||
|
- Dependency management
|
||||||
|
- Scalability and extensibility
|
||||||
|
- Module boundaries
|
||||||
|
- API design consistency
|
||||||
|
|
||||||
|
Severity Criteria:
|
||||||
|
- Critical: Circular dependencies, god objects, tight coupling across layers
|
||||||
|
- High: Violated architectural principles, scalability bottlenecks
|
||||||
|
- Medium: Missing abstractions, inconsistent patterns, suboptimal design
|
||||||
|
- Low: Minor coupling issues, documentation gaps, naming inconsistencies
|
||||||
|
`,
|
||||||
|
quality: `
|
||||||
|
Focus Areas:
|
||||||
|
- Code duplication
|
||||||
|
- Complexity (cyclomatic, cognitive)
|
||||||
|
- Naming conventions
|
||||||
|
- Error handling patterns
|
||||||
|
- Code readability
|
||||||
|
- Comment quality
|
||||||
|
- Dead code
|
||||||
|
|
||||||
|
Severity Criteria:
|
||||||
|
- Critical: Severe complexity (CC > 20), massive duplication (>50 lines)
|
||||||
|
- High: High complexity (CC > 10), significant duplication, poor error handling
|
||||||
|
- Medium: Moderate complexity (CC > 5), naming issues, code smells
|
||||||
|
- Low: Minor duplication, documentation gaps, cosmetic issues
|
||||||
|
`,
|
||||||
|
'action-items': `
|
||||||
|
Focus Areas:
|
||||||
|
- Requirements coverage verification
|
||||||
|
- Acceptance criteria met
|
||||||
|
- Documentation completeness
|
||||||
|
- Deployment readiness
|
||||||
|
- Missing functionality
|
||||||
|
- Test coverage gaps
|
||||||
|
- Configuration management
|
||||||
|
|
||||||
|
Severity Criteria:
|
||||||
|
- Critical: Core requirements not met, deployment blockers
|
||||||
|
- High: Significant functionality missing, acceptance criteria not met
|
||||||
|
- Medium: Minor requirements gaps, documentation incomplete
|
||||||
|
- Low: Nice-to-have features missing, minor documentation gaps
|
||||||
|
`,
|
||||||
|
performance: `
|
||||||
|
Focus Areas:
|
||||||
|
- N+1 query problems
|
||||||
|
- Inefficient algorithms (O(n²) where O(n log n) possible)
|
||||||
|
- Memory leaks
|
||||||
|
- Blocking operations on main thread
|
||||||
|
- Missing caching opportunities
|
||||||
|
- Resource usage (CPU, memory, network)
|
||||||
|
- Database query optimization
|
||||||
|
|
||||||
|
Severity Criteria:
|
||||||
|
- Critical: Memory leaks, O(n²) in hot path, blocking main thread
|
||||||
|
- High: N+1 queries, missing indexes, inefficient algorithms
|
||||||
|
- Medium: Suboptimal caching, unnecessary computations, lazy loading issues
|
||||||
|
- Low: Minor optimization opportunities, redundant operations
|
||||||
|
`,
|
||||||
|
maintainability: `
|
||||||
|
Focus Areas:
|
||||||
|
- Technical debt indicators
|
||||||
|
- Magic numbers and hardcoded values
|
||||||
|
- Long methods (>50 lines)
|
||||||
|
- Large classes (>500 lines)
|
||||||
|
- Dead code and commented code
|
||||||
|
- Code documentation
|
||||||
|
- Test coverage
|
||||||
|
|
||||||
|
Severity Criteria:
|
||||||
|
- Critical: Massive methods (>200 lines), severe technical debt blocking changes
|
||||||
|
- High: Large methods (>100 lines), significant dead code, undocumented complex logic
|
||||||
|
- Medium: Magic numbers, moderate technical debt, missing tests
|
||||||
|
- Low: Minor refactoring opportunities, cosmetic improvements
|
||||||
|
`,
|
||||||
|
'best-practices': `
|
||||||
|
Focus Areas:
|
||||||
|
- Framework conventions adherence
|
||||||
|
- Language idioms
|
||||||
|
- Anti-patterns
|
||||||
|
- Deprecated API usage
|
||||||
|
- Coding standards compliance
|
||||||
|
- Error handling patterns
|
||||||
|
- Logging and monitoring
|
||||||
|
|
||||||
|
Severity Criteria:
|
||||||
|
- Critical: Severe anti-patterns, deprecated APIs with security risks
|
||||||
|
- High: Major convention violations, poor error handling, missing logging
|
||||||
|
- Medium: Minor anti-patterns, style inconsistencies, suboptimal patterns
|
||||||
|
- Low: Cosmetic style issues, minor convention deviations
|
||||||
|
`
|
||||||
|
};
|
||||||
|
|
||||||
|
return guidance[dimension] || 'Standard code review analysis';
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Completion Conditions
|
||||||
|
|
||||||
|
**Full Success**:
|
||||||
|
- All dimensions reviewed
|
||||||
|
- Critical findings = 0
|
||||||
|
- High findings ≤ 5
|
||||||
|
- Action: Generate final report, mark phase=complete
|
||||||
|
|
||||||
|
**Partial Success**:
|
||||||
|
- All dimensions reviewed
|
||||||
|
- Max iterations reached
|
||||||
|
- Still have critical/high findings
|
||||||
|
- Action: Generate report with warnings, recommend follow-up
|
||||||
|
|
||||||
|
### Error Handling
|
||||||
|
|
||||||
|
**Phase-Level Error Matrix**:
|
||||||
|
|
||||||
|
| Phase | Error | Blocking? | Action |
|
||||||
|
|-------|-------|-----------|--------|
|
||||||
|
| Phase 1 | Invalid path pattern | Yes | Error and exit |
|
||||||
|
| Phase 1 | No files matched | Yes | Error and exit |
|
||||||
|
| Phase 1 | Files not readable | Yes | Error and exit |
|
||||||
|
| Phase 2 | Single dimension fails | No | Log warning, continue other dimensions |
|
||||||
|
| Phase 2 | All dimensions fail | Yes | Error and exit |
|
||||||
|
| Phase 3 | Missing dimension JSON | No | Skip in aggregation, log warning |
|
||||||
|
| Phase 4 | Deep-dive agent fails | No | Skip finding, continue others |
|
||||||
|
| Phase 4 | Max iterations reached | No | Generate partial report |
|
||||||
|
|
||||||
|
**CLI Fallback Chain**: Gemini → Qwen → Codex → degraded mode
|
||||||
|
|
||||||
|
**Fallback Triggers**:
|
||||||
|
1. HTTP 429, 5xx errors, connection timeout
|
||||||
|
2. Invalid JSON output (parse error, missing required fields)
|
||||||
|
3. Low confidence score < 0.4
|
||||||
|
4. Analysis too brief (< 100 words in report)
|
||||||
|
|
||||||
|
**Fallback Behavior**:
|
||||||
|
- On trigger: Retry with next tool in chain
|
||||||
|
- After Codex fails: Enter degraded mode (skip analysis, log error)
|
||||||
|
- Degraded mode: Continue workflow with available results
|
||||||
|
|
||||||
|
### TodoWrite Structure
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
TodoWrite({
|
||||||
|
todos: [
|
||||||
|
{ content: "Phase 1: Discovery & Initialization", status: "completed", activeForm: "Initializing" },
|
||||||
|
{ content: "Phase 2: Parallel Reviews (7 dimensions)", status: "in_progress", activeForm: "Reviewing" },
|
||||||
|
{ content: " → Security review", status: "in_progress", activeForm: "Analyzing security" },
|
||||||
|
// ... other dimensions as sub-items
|
||||||
|
{ content: "Phase 3: Aggregation", status: "pending", activeForm: "Aggregating" },
|
||||||
|
{ content: "Phase 4: Deep-dive", status: "pending", activeForm: "Deep-diving" },
|
||||||
|
{ content: "Phase 5: Completion", status: "pending", activeForm: "Completing" }
|
||||||
|
]
|
||||||
|
});
|
||||||
|
```
|
||||||
|
|
||||||
|
## Best Practices
|
||||||
|
|
||||||
|
1. **Start Specific**: Begin with focused module patterns for faster results
|
||||||
|
2. **Expand Gradually**: Add more modules based on initial findings
|
||||||
|
3. **Use Glob Wisely**: `src/auth/**` is more efficient than `src/**` with lots of irrelevant files
|
||||||
|
4. **Trust Aggregation Logic**: Auto-selection based on proven heuristics
|
||||||
|
5. **Monitor Logs**: Check reports/ directory for CLI analysis insights
|
||||||
|
|
||||||
|
## Related Commands
|
||||||
|
|
||||||
|
### View Review Progress
|
||||||
|
Use `ccw view` to open the review dashboard in browser:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
ccw view
|
||||||
|
```
|
||||||
|
|
||||||
|
### Automated Fix Workflow
|
||||||
|
After completing a module review, use the generated findings JSON for automated fixing:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Step 1: Complete review (this command)
|
||||||
|
/workflow:review-module-cycle src/auth/**
|
||||||
|
|
||||||
|
# Step 2: Run automated fixes using dimension findings
|
||||||
|
/workflow:review-fix .workflow/active/WFS-{session-id}/.review/
|
||||||
|
```
|
||||||
|
|
||||||
|
See `/workflow:review-fix` for automated fixing with smart grouping, parallel execution, and test verification.
|
||||||
|
|
||||||
776
.claude/commands/workflow/review-session-cycle.md
Normal file
776
.claude/commands/workflow/review-session-cycle.md
Normal file
@@ -0,0 +1,776 @@
|
|||||||
|
---
|
||||||
|
name: review-session-cycle
|
||||||
|
description: Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.
|
||||||
|
argument-hint: "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]"
|
||||||
|
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*)
|
||||||
|
---
|
||||||
|
|
||||||
|
# Workflow Review-Session-Cycle Command
|
||||||
|
|
||||||
|
## Quick Start
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Execute comprehensive session review (all 7 dimensions)
|
||||||
|
/workflow:review-session-cycle
|
||||||
|
|
||||||
|
# Review specific session with custom dimensions
|
||||||
|
/workflow:review-session-cycle WFS-payment-integration --dimensions=security,architecture,quality
|
||||||
|
|
||||||
|
# Specify session and iteration limit
|
||||||
|
/workflow:review-session-cycle WFS-payment-integration --max-iterations=5
|
||||||
|
```
|
||||||
|
|
||||||
|
**Review Scope**: Git changes from session creation to present (via `git log --since`)
|
||||||
|
**Session Requirement**: Requires active or completed workflow session
|
||||||
|
**Output Directory**: `.workflow/active/WFS-{session-id}/.review/` (session-based)
|
||||||
|
**Default Dimensions**: Security, Architecture, Quality, Action-Items, Performance, Maintainability, Best-Practices
|
||||||
|
**Max Iterations**: 3 (adjustable via --max-iterations)
|
||||||
|
**Default Iterations**: 1 (deep-dive runs once; use --max-iterations=0 to skip)
|
||||||
|
**CLI Tools**: Gemini → Qwen → Codex (fallback chain)
|
||||||
|
|
||||||
|
## What & Why
|
||||||
|
|
||||||
|
### Core Concept
|
||||||
|
Session-based multi-dimensional code review orchestrator with **hybrid parallel-iterative execution** for comprehensive quality assessment of **git changes within a workflow session**.
|
||||||
|
|
||||||
|
**Review Scope**:
|
||||||
|
- **Session-based**: Reviews only files changed during the workflow session (via `git log --since="${sessionCreatedAt}"`)
|
||||||
|
- **For independent module review**: Use `/workflow:review-module-cycle` command instead
|
||||||
|
|
||||||
|
**vs Standard Review**:
|
||||||
|
- **Standard**: Sequential manual reviews → Inconsistent coverage → Missed cross-cutting concerns
|
||||||
|
- **Review-Session-Cycle**: **Parallel automated analysis → Aggregate findings → Deep-dive critical issues** → Comprehensive coverage
|
||||||
|
|
||||||
|
### Value Proposition
|
||||||
|
1. **Comprehensive Coverage**: 7 specialized dimensions analyze all quality aspects simultaneously
|
||||||
|
2. **Intelligent Prioritization**: Automatic identification of critical issues and cross-cutting concerns
|
||||||
|
3. **Actionable Insights**: Deep-dive iterations provide step-by-step remediation plans
|
||||||
|
|
||||||
|
### Orchestrator Boundary (CRITICAL)
|
||||||
|
- **ONLY command** for comprehensive multi-dimensional review
|
||||||
|
- Manages: dimension coordination, aggregation, iteration control, progress tracking
|
||||||
|
- Delegates: Code exploration and analysis to @cli-explore-agent, dimension-specific reviews via Deep Scan mode
|
||||||
|
|
||||||
|
## How It Works
|
||||||
|
|
||||||
|
### Execution Flow (Simplified)
|
||||||
|
|
||||||
|
```
|
||||||
|
Phase 1: Discovery & Initialization
|
||||||
|
└─ Validate session, initialize state, create output structure
|
||||||
|
|
||||||
|
Phase 2: Parallel Reviews (for each dimension)
|
||||||
|
├─ Launch 7 review agents simultaneously
|
||||||
|
├─ Each executes CLI analysis via Gemini/Qwen
|
||||||
|
├─ Generate dimension JSON + markdown reports
|
||||||
|
└─ Update review-progress.json
|
||||||
|
|
||||||
|
Phase 3: Aggregation
|
||||||
|
├─ Load all dimension JSON files
|
||||||
|
├─ Calculate severity distribution (critical/high/medium/low)
|
||||||
|
├─ Identify cross-cutting concerns (files in 3+ dimensions)
|
||||||
|
└─ Decision:
|
||||||
|
├─ Critical findings OR high > 5 OR critical files → Phase 4 (Iterate)
|
||||||
|
└─ Else → Phase 5 (Complete)
|
||||||
|
|
||||||
|
Phase 4: Iterative Deep-Dive (optional)
|
||||||
|
├─ Select critical findings (max 5 per iteration)
|
||||||
|
├─ Launch deep-dive agents for root cause analysis
|
||||||
|
├─ Generate remediation plans with impact assessment
|
||||||
|
├─ Re-assess severity based on analysis
|
||||||
|
└─ Loop until no critical findings OR max iterations
|
||||||
|
|
||||||
|
Phase 5: Completion
|
||||||
|
└─ Finalize review-progress.json
|
||||||
|
```
|
||||||
|
|
||||||
|
### Agent Roles
|
||||||
|
|
||||||
|
| Agent | Responsibility |
|
||||||
|
|-------|---------------|
|
||||||
|
| **Orchestrator** | Phase control, session discovery, state management, aggregation logic, iteration control |
|
||||||
|
| **@cli-explore-agent** (Review) | Execute dimension-specific code analysis via Deep Scan mode, generate findings JSON with dual-source strategy (Bash + Gemini), create structured analysis reports |
|
||||||
|
| **@cli-explore-agent** (Deep-dive) | Focused root cause analysis using dependency mapping, remediation planning with architectural insights, impact assessment, severity re-assessment |
|
||||||
|
|
||||||
|
## Enhanced Features
|
||||||
|
|
||||||
|
### 1. Review Dimensions Configuration
|
||||||
|
|
||||||
|
**7 Specialized Dimensions** with priority-based allocation:
|
||||||
|
|
||||||
|
| Dimension | Template | Priority | Timeout |
|
||||||
|
|-----------|----------|----------|---------|
|
||||||
|
| **Security** | 03-assess-security-risks.txt | 1 (Critical) | 60min |
|
||||||
|
| **Architecture** | 02-review-architecture.txt | 2 (High) | 60min |
|
||||||
|
| **Quality** | 02-review-code-quality.txt | 3 (Medium) | 40min |
|
||||||
|
| **Action-Items** | 02-analyze-code-patterns.txt | 2 (High) | 40min |
|
||||||
|
| **Performance** | 03-analyze-performance.txt | 3 (Medium) | 60min |
|
||||||
|
| **Maintainability** | 02-review-code-quality.txt* | 3 (Medium) | 40min |
|
||||||
|
| **Best-Practices** | 03-review-quality-standards.txt | 3 (Medium) | 40min |
|
||||||
|
|
||||||
|
*Custom focus: "Assess technical debt and maintainability"
|
||||||
|
|
||||||
|
**Category Definitions by Dimension**:
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
const CATEGORIES = {
|
||||||
|
security: ['injection', 'authentication', 'authorization', 'encryption', 'input-validation', 'access-control', 'data-exposure'],
|
||||||
|
architecture: ['coupling', 'cohesion', 'layering', 'dependency', 'pattern-violation', 'scalability', 'separation-of-concerns'],
|
||||||
|
quality: ['code-smell', 'duplication', 'complexity', 'naming', 'error-handling', 'testability', 'readability'],
|
||||||
|
'action-items': ['requirement-coverage', 'acceptance-criteria', 'documentation', 'deployment-readiness', 'missing-functionality'],
|
||||||
|
performance: ['n-plus-one', 'inefficient-query', 'memory-leak', 'blocking-operation', 'caching', 'resource-usage'],
|
||||||
|
maintainability: ['technical-debt', 'magic-number', 'long-method', 'large-class', 'dead-code', 'commented-code'],
|
||||||
|
'best-practices': ['convention-violation', 'anti-pattern', 'deprecated-api', 'missing-validation', 'inconsistent-style']
|
||||||
|
};
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Aggregation Logic
|
||||||
|
|
||||||
|
**Cross-Cutting Concern Detection**:
|
||||||
|
1. Files appearing in 3+ dimensions = **Critical Files**
|
||||||
|
2. Same issue pattern across dimensions = **Systemic Issue**
|
||||||
|
3. Severity clustering in specific files = **Hotspots**
|
||||||
|
|
||||||
|
**Deep-Dive Selection Criteria**:
|
||||||
|
- All critical severity findings (priority 1)
|
||||||
|
- Top 3 high-severity findings in critical files (priority 2)
|
||||||
|
- Max 5 findings per iteration (prevent overwhelm)
|
||||||
|
|
||||||
|
### 3. Severity Assessment
|
||||||
|
|
||||||
|
**Severity Levels**:
|
||||||
|
- **Critical**: Security vulnerabilities, data corruption risks, system-wide failures, authentication/authorization bypass
|
||||||
|
- **High**: Feature degradation, performance bottlenecks, architecture violations, significant technical debt
|
||||||
|
- **Medium**: Code smells, minor performance issues, style inconsistencies, maintainability concerns
|
||||||
|
- **Low**: Documentation gaps, minor refactoring opportunities, cosmetic issues
|
||||||
|
|
||||||
|
**Iteration Trigger**:
|
||||||
|
- Critical findings > 0 OR
|
||||||
|
- High findings > 5 OR
|
||||||
|
- Critical files count > 0
|
||||||
|
|
||||||
|
## Core Responsibilities
|
||||||
|
|
||||||
|
### Orchestrator
|
||||||
|
|
||||||
|
**Phase 1: Discovery & Initialization**
|
||||||
|
|
||||||
|
**Step 1: Session Discovery**
|
||||||
|
```javascript
|
||||||
|
// If session ID not provided, auto-detect
|
||||||
|
if (!providedSessionId) {
|
||||||
|
// Check for active sessions
|
||||||
|
const activeSessions = Glob('.workflow/active/WFS-*');
|
||||||
|
if (activeSessions.length === 1) {
|
||||||
|
sessionId = activeSessions[0].match(/WFS-[^/]+/)[0];
|
||||||
|
} else if (activeSessions.length > 1) {
|
||||||
|
// List sessions and prompt user
|
||||||
|
error("Multiple active sessions found. Please specify session ID.");
|
||||||
|
} else {
|
||||||
|
error("No active session found. Create session first with /workflow:session:start");
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
sessionId = providedSessionId;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Validate session exists
|
||||||
|
Bash(`test -d .workflow/active/${sessionId} && echo "EXISTS"`);
|
||||||
|
```
|
||||||
|
|
||||||
|
**Step 2: Session Validation**
|
||||||
|
- Ensure session has implementation artifacts (check `.summaries/` or `.task/` directory)
|
||||||
|
- Extract session creation timestamp from `workflow-session.json`
|
||||||
|
- Use timestamp for git log filtering: `git log --since="${sessionCreatedAt}"`
|
||||||
|
|
||||||
|
**Step 3: Changed Files Detection**
|
||||||
|
```bash
|
||||||
|
# Get files changed since session creation
|
||||||
|
git log --since="${sessionCreatedAt}" --name-only --pretty=format: | sort -u
|
||||||
|
```
|
||||||
|
|
||||||
|
**Step 4: Output Directory Setup**
|
||||||
|
- Output directory: `.workflow/active/${sessionId}/.review/`
|
||||||
|
- Create directory structure:
|
||||||
|
```bash
|
||||||
|
mkdir -p ${sessionDir}/.review/{dimensions,iterations,reports}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Step 5: Initialize Review State**
|
||||||
|
- State initialization: Create `review-state.json` with metadata, dimensions, max_iterations (merged metadata + state)
|
||||||
|
- Progress tracking: Create `review-progress.json` for progress tracking
|
||||||
|
|
||||||
|
**Step 6: TodoWrite Initialization**
|
||||||
|
- Set up progress tracking with hierarchical structure
|
||||||
|
- Mark Phase 1 completed, Phase 2 in_progress
|
||||||
|
|
||||||
|
**Phase 2: Parallel Review Coordination**
|
||||||
|
- Launch 7 @cli-explore-agent instances simultaneously (Deep Scan mode)
|
||||||
|
- Pass dimension-specific context (template, timeout, custom focus)
|
||||||
|
- Monitor completion via review-progress.json updates
|
||||||
|
- TodoWrite updates: Mark dimensions as completed
|
||||||
|
- CLI tool fallback: Gemini → Qwen → Codex (on error/timeout)
|
||||||
|
|
||||||
|
**Phase 3: Aggregation**
|
||||||
|
- Load all dimension JSON files from dimensions/
|
||||||
|
- Calculate severity distribution: Count by critical/high/medium/low
|
||||||
|
- Identify cross-cutting concerns: Files in 3+ dimensions
|
||||||
|
- Select deep-dive findings: Critical + high in critical files (max 5)
|
||||||
|
- Decision logic: Iterate if critical > 0 OR high > 5 OR critical files exist
|
||||||
|
- Update review-state.json with aggregation results
|
||||||
|
|
||||||
|
**Phase 4: Iteration Control**
|
||||||
|
- Check iteration count < max_iterations (default 3)
|
||||||
|
- Launch deep-dive agents for selected findings
|
||||||
|
- Collect remediation plans and re-assessed severities
|
||||||
|
- Update severity distribution based on re-assessments
|
||||||
|
- Record iteration in review-state.json
|
||||||
|
- Loop back to aggregation if still have critical/high findings
|
||||||
|
|
||||||
|
**Phase 5: Completion**
|
||||||
|
- Finalize review-progress.json with completion statistics
|
||||||
|
- Update review-state.json with completion_time and phase=complete
|
||||||
|
- TodoWrite completion: Mark all tasks done
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
### Session File Structure
|
||||||
|
|
||||||
|
```
|
||||||
|
.workflow/active/WFS-{session-id}/.review/
|
||||||
|
├── review-state.json # Orchestrator state machine (includes metadata)
|
||||||
|
├── review-progress.json # Real-time progress for dashboard
|
||||||
|
├── dimensions/ # Per-dimension results
|
||||||
|
│ ├── security.json
|
||||||
|
│ ├── architecture.json
|
||||||
|
│ ├── quality.json
|
||||||
|
│ ├── action-items.json
|
||||||
|
│ ├── performance.json
|
||||||
|
│ ├── maintainability.json
|
||||||
|
│ └── best-practices.json
|
||||||
|
├── iterations/ # Deep-dive results
|
||||||
|
│ ├── iteration-1-finding-{uuid}.json
|
||||||
|
│ └── iteration-2-finding-{uuid}.json
|
||||||
|
└── reports/ # Human-readable reports
|
||||||
|
├── security-analysis.md
|
||||||
|
├── security-cli-output.txt
|
||||||
|
├── deep-dive-1-{uuid}.md
|
||||||
|
└── ...
|
||||||
|
```
|
||||||
|
|
||||||
|
**Session Context**:
|
||||||
|
```
|
||||||
|
.workflow/active/WFS-{session-id}/
|
||||||
|
├── workflow-session.json
|
||||||
|
├── IMPL_PLAN.md
|
||||||
|
├── TODO_LIST.md
|
||||||
|
├── .task/
|
||||||
|
├── .summaries/
|
||||||
|
└── .review/ # Review results (this command)
|
||||||
|
└── (structure above)
|
||||||
|
```
|
||||||
|
|
||||||
|
### Review State JSON
|
||||||
|
|
||||||
|
**Purpose**: Unified state machine and metadata (merged from metadata + state)
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"session_id": "WFS-payment-integration",
|
||||||
|
"review_id": "review-20250125-143022",
|
||||||
|
"review_type": "session",
|
||||||
|
"metadata": {
|
||||||
|
"created_at": "2025-01-25T14:30:22Z",
|
||||||
|
"git_changes": {
|
||||||
|
"commit_range": "abc123..def456",
|
||||||
|
"files_changed": 15,
|
||||||
|
"insertions": 342,
|
||||||
|
"deletions": 128
|
||||||
|
},
|
||||||
|
"dimensions": ["security", "architecture", "quality", "action-items", "performance", "maintainability", "best-practices"],
|
||||||
|
"max_iterations": 3
|
||||||
|
},
|
||||||
|
"phase": "parallel|aggregate|iterate|complete",
|
||||||
|
"current_iteration": 1,
|
||||||
|
"dimensions_reviewed": ["security", "architecture", "quality", "action-items", "performance", "maintainability", "best-practices"],
|
||||||
|
"selected_strategy": "comprehensive",
|
||||||
|
"next_action": "execute_parallel_reviews|aggregate_findings|execute_deep_dive|generate_final_report|complete",
|
||||||
|
"severity_distribution": {
|
||||||
|
"critical": 2,
|
||||||
|
"high": 5,
|
||||||
|
"medium": 12,
|
||||||
|
"low": 8
|
||||||
|
},
|
||||||
|
"critical_files": [
|
||||||
|
{
|
||||||
|
"file": "src/payment/processor.ts",
|
||||||
|
"finding_count": 5,
|
||||||
|
"dimensions": ["security", "architecture", "quality"]
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"iterations": [
|
||||||
|
{
|
||||||
|
"iteration": 1,
|
||||||
|
"findings_analyzed": ["uuid-1", "uuid-2"],
|
||||||
|
"findings_resolved": 1,
|
||||||
|
"findings_escalated": 1,
|
||||||
|
"severity_change": {
|
||||||
|
"before": {"critical": 2, "high": 5, "medium": 12, "low": 8},
|
||||||
|
"after": {"critical": 1, "high": 6, "medium": 12, "low": 8}
|
||||||
|
},
|
||||||
|
"timestamp": "2025-01-25T14:30:00Z"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"completion_criteria": {
|
||||||
|
"target": "no_critical_findings_and_high_under_5",
|
||||||
|
"current_status": "in_progress",
|
||||||
|
"estimated_completion": "2 iterations remaining"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Field Descriptions**:
|
||||||
|
- `phase`: Current execution phase (state machine pointer)
|
||||||
|
- `current_iteration`: Iteration counter (used for max check)
|
||||||
|
- `next_action`: Next step orchestrator should execute
|
||||||
|
- `severity_distribution`: Aggregated counts across all dimensions
|
||||||
|
- `critical_files`: Files appearing in 3+ dimensions with metadata
|
||||||
|
- `iterations[]`: Historical log for trend analysis
|
||||||
|
|
||||||
|
### Review Progress JSON
|
||||||
|
|
||||||
|
**Purpose**: Real-time dashboard updates via polling
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"review_id": "review-20250125-143022",
|
||||||
|
"last_update": "2025-01-25T14:35:10Z",
|
||||||
|
"phase": "parallel|aggregate|iterate|complete",
|
||||||
|
"current_iteration": 1,
|
||||||
|
"progress": {
|
||||||
|
"parallel_review": {
|
||||||
|
"total_dimensions": 7,
|
||||||
|
"completed": 5,
|
||||||
|
"in_progress": 2,
|
||||||
|
"percent_complete": 71
|
||||||
|
},
|
||||||
|
"deep_dive": {
|
||||||
|
"total_findings": 6,
|
||||||
|
"analyzed": 2,
|
||||||
|
"in_progress": 1,
|
||||||
|
"percent_complete": 33
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"agent_status": [
|
||||||
|
{
|
||||||
|
"agent_type": "review-agent",
|
||||||
|
"dimension": "security",
|
||||||
|
"status": "completed",
|
||||||
|
"started_at": "2025-01-25T14:30:00Z",
|
||||||
|
"completed_at": "2025-01-25T15:15:00Z",
|
||||||
|
"duration_ms": 2700000
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"agent_type": "deep-dive-agent",
|
||||||
|
"finding_id": "sec-001-uuid",
|
||||||
|
"status": "in_progress",
|
||||||
|
"started_at": "2025-01-25T14:32:00Z"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"estimated_completion": "2025-01-25T16:00:00Z"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Agent Output Schemas
|
||||||
|
|
||||||
|
**Agent-produced JSON files follow standardized schemas**:
|
||||||
|
|
||||||
|
1. **Dimension Results** (cli-explore-agent output from parallel reviews)
|
||||||
|
- Schema: `~/.claude/workflows/cli-templates/schemas/review-dimension-results-schema.json`
|
||||||
|
- Output: `.review-cycle/dimensions/{dimension}.json`
|
||||||
|
- Contains: findings array, summary statistics, cross_references
|
||||||
|
|
||||||
|
2. **Deep-Dive Results** (cli-explore-agent output from iterations)
|
||||||
|
- Schema: `~/.claude/workflows/cli-templates/schemas/review-deep-dive-results-schema.json`
|
||||||
|
- Output: `.review-cycle/iterations/iteration-{N}-finding-{uuid}.json`
|
||||||
|
- Contains: root_cause, remediation_plan, impact_assessment, reassessed_severity
|
||||||
|
|
||||||
|
### Agent Invocation Template
|
||||||
|
|
||||||
|
**Review Agent** (parallel execution, 7 instances):
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
Task(
|
||||||
|
subagent_type="cli-explore-agent",
|
||||||
|
description=`Execute ${dimension} review analysis via Deep Scan`,
|
||||||
|
prompt=`
|
||||||
|
## Task Objective
|
||||||
|
Conduct comprehensive ${dimension} code exploration and analysis using Deep Scan mode (Bash + Gemini dual-source strategy) for completed implementation in session ${sessionId}
|
||||||
|
|
||||||
|
## Analysis Mode Selection
|
||||||
|
Use **Deep Scan mode** for this review:
|
||||||
|
- Phase 1: Bash structural scan for standard patterns (classes, functions, imports)
|
||||||
|
- Phase 2: Gemini semantic analysis for design intent, non-standard patterns, ${dimension}-specific concerns
|
||||||
|
- Phase 3: Synthesis with attribution (bash-discovered vs gemini-discovered findings)
|
||||||
|
|
||||||
|
## MANDATORY FIRST STEPS (Execute by Agent)
|
||||||
|
**You (cli-explore-agent) MUST execute these steps in order:**
|
||||||
|
1. Read session metadata: ${sessionMetadataPath}
|
||||||
|
2. Read completed task summaries: bash(find ${summariesDir} -name "IMPL-*.md" -type f)
|
||||||
|
3. Get changed files: bash(cd ${workflowDir} && git log --since="${sessionCreatedAt}" --name-only --pretty=format: | sort -u)
|
||||||
|
4. Read review state: ${reviewStateJsonPath}
|
||||||
|
5. Execute: cat ~/.claude/workflows/cli-templates/schemas/review-dimension-results-schema.json (get output schema reference)
|
||||||
|
|
||||||
|
## Session Context
|
||||||
|
- Session ID: ${sessionId}
|
||||||
|
- Review Dimension: ${dimension}
|
||||||
|
- Review ID: ${reviewId}
|
||||||
|
- Implementation Phase: Complete (all tests passing)
|
||||||
|
- Output Directory: ${outputDir}
|
||||||
|
|
||||||
|
## CLI Configuration
|
||||||
|
- Tool Priority: gemini → qwen → codex (fallback chain)
|
||||||
|
- Template: ~/.claude/workflows/cli-templates/prompts/analysis/${dimensionTemplate}
|
||||||
|
- Custom Focus: ${customFocus || 'Standard dimension analysis'}
|
||||||
|
- Timeout: ${timeout}ms
|
||||||
|
- Mode: analysis (READ-ONLY)
|
||||||
|
|
||||||
|
## Expected Deliverables
|
||||||
|
|
||||||
|
**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 5, follow schema exactly
|
||||||
|
|
||||||
|
1. Dimension Results JSON: ${outputDir}/dimensions/${dimension}.json
|
||||||
|
|
||||||
|
**⚠️ CRITICAL JSON STRUCTURE REQUIREMENTS**:
|
||||||
|
|
||||||
|
Root structure MUST be array: \`[{ ... }]\` NOT \`{ ... }\`
|
||||||
|
|
||||||
|
Required top-level fields:
|
||||||
|
- dimension, review_id, analysis_timestamp (NOT timestamp/analyzed_at)
|
||||||
|
- cli_tool_used (gemini|qwen|codex), model, analysis_duration_ms
|
||||||
|
- summary (FLAT structure), findings, cross_references
|
||||||
|
|
||||||
|
Summary MUST be FLAT (NOT nested by_severity):
|
||||||
|
\`{ "total_findings": N, "critical": N, "high": N, "medium": N, "low": N, "files_analyzed": N, "lines_reviewed": N }\`
|
||||||
|
|
||||||
|
Finding required fields:
|
||||||
|
- id: format \`{dim}-{seq}-{uuid8}\` e.g., \`sec-001-a1b2c3d4\` (lowercase)
|
||||||
|
- severity: lowercase only (critical|high|medium|low)
|
||||||
|
- snippet (NOT code_snippet), impact (NOT exploit_scenario)
|
||||||
|
- metadata, iteration (0), status (pending_remediation), cross_references
|
||||||
|
|
||||||
|
2. Analysis Report: ${outputDir}/reports/${dimension}-analysis.md
|
||||||
|
- Human-readable summary with recommendations
|
||||||
|
- Grouped by severity: critical → high → medium → low
|
||||||
|
- Include file:line references for all findings
|
||||||
|
|
||||||
|
3. CLI Output Log: ${outputDir}/reports/${dimension}-cli-output.txt
|
||||||
|
- Raw CLI tool output for debugging
|
||||||
|
- Include full analysis text
|
||||||
|
|
||||||
|
## Dimension-Specific Guidance
|
||||||
|
${getDimensionGuidance(dimension)}
|
||||||
|
|
||||||
|
## Success Criteria
|
||||||
|
- [ ] Schema obtained via cat review-dimension-results-schema.json
|
||||||
|
- [ ] All changed files analyzed for ${dimension} concerns
|
||||||
|
- [ ] All findings include file:line references with code snippets
|
||||||
|
- [ ] Severity assessment follows established criteria (see reference)
|
||||||
|
- [ ] Recommendations are actionable with code examples
|
||||||
|
- [ ] JSON output follows schema exactly
|
||||||
|
- [ ] Report is comprehensive and well-organized
|
||||||
|
`
|
||||||
|
)
|
||||||
|
```
|
||||||
|
|
||||||
|
**Deep-Dive Agent** (iteration execution):
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
Task(
|
||||||
|
subagent_type="cli-explore-agent",
|
||||||
|
description=`Deep-dive analysis for critical finding: ${findingTitle} via Dependency Map + Deep Scan`,
|
||||||
|
prompt=`
|
||||||
|
## Task Objective
|
||||||
|
Perform focused root cause analysis using Dependency Map mode (for impact analysis) + Deep Scan mode (for semantic understanding) to generate comprehensive remediation plan for critical ${dimension} issue
|
||||||
|
|
||||||
|
## Analysis Mode Selection
|
||||||
|
Use **Dependency Map mode** first to understand dependencies:
|
||||||
|
- Build dependency graph around ${file} to identify affected components
|
||||||
|
- Detect circular dependencies or tight coupling related to this finding
|
||||||
|
- Calculate change risk scores for remediation impact
|
||||||
|
|
||||||
|
Then apply **Deep Scan mode** for semantic analysis:
|
||||||
|
- Understand design intent and architectural context
|
||||||
|
- Identify non-standard patterns or implicit dependencies
|
||||||
|
- Extract remediation insights from code structure
|
||||||
|
|
||||||
|
## Finding Context
|
||||||
|
- Finding ID: ${findingId}
|
||||||
|
- Original Dimension: ${dimension}
|
||||||
|
- Title: ${findingTitle}
|
||||||
|
- File: ${file}:${line}
|
||||||
|
- Severity: ${severity}
|
||||||
|
- Category: ${category}
|
||||||
|
- Original Description: ${description}
|
||||||
|
- Iteration: ${iteration}
|
||||||
|
|
||||||
|
## MANDATORY FIRST STEPS (Execute by Agent)
|
||||||
|
**You (cli-explore-agent) MUST execute these steps in order:**
|
||||||
|
1. Read original finding: ${dimensionJsonPath}
|
||||||
|
2. Read affected file: ${file}
|
||||||
|
3. Identify related code: bash(grep -r "import.*${basename(file)}" ${workflowDir}/src --include="*.ts")
|
||||||
|
4. Read test files: bash(find ${workflowDir}/tests -name "*${basename(file, '.ts')}*" -type f)
|
||||||
|
5. Execute: cat ~/.claude/workflows/cli-templates/schemas/review-deep-dive-results-schema.json (get output schema reference)
|
||||||
|
|
||||||
|
## CLI Configuration
|
||||||
|
- Tool Priority: gemini → qwen → codex
|
||||||
|
- Template: ~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt
|
||||||
|
- Timeout: 2400000ms (40 minutes)
|
||||||
|
- Mode: analysis (READ-ONLY)
|
||||||
|
|
||||||
|
## Expected Deliverables
|
||||||
|
|
||||||
|
**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 5, follow schema exactly
|
||||||
|
|
||||||
|
1. Deep-Dive Results JSON: ${outputDir}/iterations/iteration-${iteration}-finding-${findingId}.json
|
||||||
|
|
||||||
|
**⚠️ CRITICAL JSON STRUCTURE REQUIREMENTS**:
|
||||||
|
|
||||||
|
Root structure MUST be array: \`[{ ... }]\` NOT \`{ ... }\`
|
||||||
|
|
||||||
|
Required top-level fields:
|
||||||
|
- finding_id, dimension, iteration, analysis_timestamp
|
||||||
|
- cli_tool_used, model, analysis_duration_ms
|
||||||
|
- original_finding, root_cause, remediation_plan
|
||||||
|
- impact_assessment, reassessed_severity, confidence_score, cross_references
|
||||||
|
|
||||||
|
All nested objects must follow schema exactly - read schema for field names
|
||||||
|
|
||||||
|
2. Analysis Report: ${outputDir}/reports/deep-dive-${iteration}-${findingId}.md
|
||||||
|
- Detailed root cause analysis
|
||||||
|
- Step-by-step remediation plan
|
||||||
|
- Impact assessment and rollback strategy
|
||||||
|
|
||||||
|
## Success Criteria
|
||||||
|
- [ ] Schema obtained via cat review-deep-dive-results-schema.json
|
||||||
|
- [ ] Root cause clearly identified with supporting evidence
|
||||||
|
- [ ] Remediation plan is step-by-step actionable with exact file:line references
|
||||||
|
- [ ] Each step includes specific commands and validation tests
|
||||||
|
- [ ] Impact fully assessed (files, tests, breaking changes, dependencies)
|
||||||
|
- [ ] Severity re-evaluation justified with evidence
|
||||||
|
- [ ] Confidence score accurately reflects certainty of analysis
|
||||||
|
- [ ] JSON output follows schema exactly
|
||||||
|
- [ ] References include project-specific and external documentation
|
||||||
|
`
|
||||||
|
)
|
||||||
|
```
|
||||||
|
|
||||||
|
### Dimension Guidance Reference
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
function getDimensionGuidance(dimension) {
|
||||||
|
const guidance = {
|
||||||
|
security: `
|
||||||
|
Focus Areas:
|
||||||
|
- Input validation and sanitization
|
||||||
|
- Authentication and authorization mechanisms
|
||||||
|
- Data encryption (at-rest and in-transit)
|
||||||
|
- SQL/NoSQL injection vulnerabilities
|
||||||
|
- XSS, CSRF, and other web vulnerabilities
|
||||||
|
- Sensitive data exposure
|
||||||
|
- Access control and privilege escalation
|
||||||
|
|
||||||
|
Severity Criteria:
|
||||||
|
- Critical: Authentication bypass, SQL injection, RCE, sensitive data exposure
|
||||||
|
- High: Missing authorization checks, weak encryption, exposed secrets
|
||||||
|
- Medium: Missing input validation, insecure defaults, weak password policies
|
||||||
|
- Low: Security headers missing, verbose error messages, outdated dependencies
|
||||||
|
`,
|
||||||
|
architecture: `
|
||||||
|
Focus Areas:
|
||||||
|
- Layering and separation of concerns
|
||||||
|
- Coupling and cohesion
|
||||||
|
- Design pattern adherence
|
||||||
|
- Dependency management
|
||||||
|
- Scalability and extensibility
|
||||||
|
- Module boundaries
|
||||||
|
- API design consistency
|
||||||
|
|
||||||
|
Severity Criteria:
|
||||||
|
- Critical: Circular dependencies, god objects, tight coupling across layers
|
||||||
|
- High: Violated architectural principles, scalability bottlenecks
|
||||||
|
- Medium: Missing abstractions, inconsistent patterns, suboptimal design
|
||||||
|
- Low: Minor coupling issues, documentation gaps, naming inconsistencies
|
||||||
|
`,
|
||||||
|
quality: `
|
||||||
|
Focus Areas:
|
||||||
|
- Code duplication
|
||||||
|
- Complexity (cyclomatic, cognitive)
|
||||||
|
- Naming conventions
|
||||||
|
- Error handling patterns
|
||||||
|
- Code readability
|
||||||
|
- Comment quality
|
||||||
|
- Dead code
|
||||||
|
|
||||||
|
Severity Criteria:
|
||||||
|
- Critical: Severe complexity (CC > 20), massive duplication (>50 lines)
|
||||||
|
- High: High complexity (CC > 10), significant duplication, poor error handling
|
||||||
|
- Medium: Moderate complexity (CC > 5), naming issues, code smells
|
||||||
|
- Low: Minor duplication, documentation gaps, cosmetic issues
|
||||||
|
`,
|
||||||
|
'action-items': `
|
||||||
|
Focus Areas:
|
||||||
|
- Requirements coverage verification
|
||||||
|
- Acceptance criteria met
|
||||||
|
- Documentation completeness
|
||||||
|
- Deployment readiness
|
||||||
|
- Missing functionality
|
||||||
|
- Test coverage gaps
|
||||||
|
- Configuration management
|
||||||
|
|
||||||
|
Severity Criteria:
|
||||||
|
- Critical: Core requirements not met, deployment blockers
|
||||||
|
- High: Significant functionality missing, acceptance criteria not met
|
||||||
|
- Medium: Minor requirements gaps, documentation incomplete
|
||||||
|
- Low: Nice-to-have features missing, minor documentation gaps
|
||||||
|
`,
|
||||||
|
performance: `
|
||||||
|
Focus Areas:
|
||||||
|
- N+1 query problems
|
||||||
|
- Inefficient algorithms (O(n²) where O(n log n) possible)
|
||||||
|
- Memory leaks
|
||||||
|
- Blocking operations on main thread
|
||||||
|
- Missing caching opportunities
|
||||||
|
- Resource usage (CPU, memory, network)
|
||||||
|
- Database query optimization
|
||||||
|
|
||||||
|
Severity Criteria:
|
||||||
|
- Critical: Memory leaks, O(n²) in hot path, blocking main thread
|
||||||
|
- High: N+1 queries, missing indexes, inefficient algorithms
|
||||||
|
- Medium: Suboptimal caching, unnecessary computations, lazy loading issues
|
||||||
|
- Low: Minor optimization opportunities, redundant operations
|
||||||
|
`,
|
||||||
|
maintainability: `
|
||||||
|
Focus Areas:
|
||||||
|
- Technical debt indicators
|
||||||
|
- Magic numbers and hardcoded values
|
||||||
|
- Long methods (>50 lines)
|
||||||
|
- Large classes (>500 lines)
|
||||||
|
- Dead code and commented code
|
||||||
|
- Code documentation
|
||||||
|
- Test coverage
|
||||||
|
|
||||||
|
Severity Criteria:
|
||||||
|
- Critical: Massive methods (>200 lines), severe technical debt blocking changes
|
||||||
|
- High: Large methods (>100 lines), significant dead code, undocumented complex logic
|
||||||
|
- Medium: Magic numbers, moderate technical debt, missing tests
|
||||||
|
- Low: Minor refactoring opportunities, cosmetic improvements
|
||||||
|
`,
|
||||||
|
'best-practices': `
|
||||||
|
Focus Areas:
|
||||||
|
- Framework conventions adherence
|
||||||
|
- Language idioms
|
||||||
|
- Anti-patterns
|
||||||
|
- Deprecated API usage
|
||||||
|
- Coding standards compliance
|
||||||
|
- Error handling patterns
|
||||||
|
- Logging and monitoring
|
||||||
|
|
||||||
|
Severity Criteria:
|
||||||
|
- Critical: Severe anti-patterns, deprecated APIs with security risks
|
||||||
|
- High: Major convention violations, poor error handling, missing logging
|
||||||
|
- Medium: Minor anti-patterns, style inconsistencies, suboptimal patterns
|
||||||
|
- Low: Cosmetic style issues, minor convention deviations
|
||||||
|
`
|
||||||
|
};
|
||||||
|
|
||||||
|
return guidance[dimension] || 'Standard code review analysis';
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Completion Conditions
|
||||||
|
|
||||||
|
**Full Success**:
|
||||||
|
- All dimensions reviewed
|
||||||
|
- Critical findings = 0
|
||||||
|
- High findings ≤ 5
|
||||||
|
- Action: Generate final report, mark phase=complete
|
||||||
|
|
||||||
|
**Partial Success**:
|
||||||
|
- All dimensions reviewed
|
||||||
|
- Max iterations reached
|
||||||
|
- Still have critical/high findings
|
||||||
|
- Action: Generate report with warnings, recommend follow-up
|
||||||
|
|
||||||
|
### Error Handling
|
||||||
|
|
||||||
|
**Phase-Level Error Matrix**:
|
||||||
|
|
||||||
|
| Phase | Error | Blocking? | Action |
|
||||||
|
|-------|-------|-----------|--------|
|
||||||
|
| Phase 1 | Session not found | Yes | Error and exit |
|
||||||
|
| Phase 1 | No completed tasks | Yes | Error and exit |
|
||||||
|
| Phase 1 | No changed files | Yes | Error and exit |
|
||||||
|
| Phase 2 | Single dimension fails | No | Log warning, continue other dimensions |
|
||||||
|
| Phase 2 | All dimensions fail | Yes | Error and exit |
|
||||||
|
| Phase 3 | Missing dimension JSON | No | Skip in aggregation, log warning |
|
||||||
|
| Phase 4 | Deep-dive agent fails | No | Skip finding, continue others |
|
||||||
|
| Phase 4 | Max iterations reached | No | Generate partial report |
|
||||||
|
|
||||||
|
**CLI Fallback Chain**: Gemini → Qwen → Codex → degraded mode
|
||||||
|
|
||||||
|
**Fallback Triggers**:
|
||||||
|
1. HTTP 429, 5xx errors, connection timeout
|
||||||
|
2. Invalid JSON output (parse error, missing required fields)
|
||||||
|
3. Low confidence score < 0.4
|
||||||
|
4. Analysis too brief (< 100 words in report)
|
||||||
|
|
||||||
|
**Fallback Behavior**:
|
||||||
|
- On trigger: Retry with next tool in chain
|
||||||
|
- After Codex fails: Enter degraded mode (skip analysis, log error)
|
||||||
|
- Degraded mode: Continue workflow with available results
|
||||||
|
|
||||||
|
### TodoWrite Structure
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
TodoWrite({
|
||||||
|
todos: [
|
||||||
|
{ content: "Phase 1: Discovery & Initialization", status: "completed", activeForm: "Initializing" },
|
||||||
|
{ content: "Phase 2: Parallel Reviews (7 dimensions)", status: "in_progress", activeForm: "Reviewing" },
|
||||||
|
{ content: " → Security review", status: "in_progress", activeForm: "Analyzing security" },
|
||||||
|
// ... other dimensions as sub-items
|
||||||
|
{ content: "Phase 3: Aggregation", status: "pending", activeForm: "Aggregating" },
|
||||||
|
{ content: "Phase 4: Deep-dive", status: "pending", activeForm: "Deep-diving" },
|
||||||
|
{ content: "Phase 5: Completion", status: "pending", activeForm: "Completing" }
|
||||||
|
]
|
||||||
|
});
|
||||||
|
```
|
||||||
|
|
||||||
|
## Best Practices
|
||||||
|
|
||||||
|
1. **Default Settings Work**: 7 dimensions + 3 iterations sufficient for most cases
|
||||||
|
2. **Parallel Execution**: ~60 minutes for full initial review (7 dimensions)
|
||||||
|
3. **Trust Aggregation Logic**: Auto-selection based on proven heuristics
|
||||||
|
4. **Monitor Logs**: Check reports/ directory for CLI analysis insights
|
||||||
|
|
||||||
|
## Related Commands
|
||||||
|
|
||||||
|
### View Review Progress
|
||||||
|
Use `ccw view` to open the review dashboard in browser:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
ccw view
|
||||||
|
```
|
||||||
|
|
||||||
|
### Automated Fix Workflow
|
||||||
|
After completing a review, use the generated findings JSON for automated fixing:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Step 1: Complete review (this command)
|
||||||
|
/workflow:review-session-cycle
|
||||||
|
|
||||||
|
# Step 2: Run automated fixes using dimension findings
|
||||||
|
/workflow:review-fix .workflow/active/WFS-{session-id}/.review/
|
||||||
|
```
|
||||||
|
|
||||||
|
See `/workflow:review-fix` for automated fixing with smart grouping, parallel execution, and test verification.
|
||||||
|
|
||||||
@@ -29,6 +29,39 @@ argument-hint: "[--type=security|architecture|action-items|quality] [optional: s
|
|||||||
- For documentation generation, use `/workflow:tools:docs`
|
- For documentation generation, use `/workflow:tools:docs`
|
||||||
- For CLAUDE.md updates, use `/update-memory-related`
|
- For CLAUDE.md updates, use `/update-memory-related`
|
||||||
|
|
||||||
|
## Execution Process
|
||||||
|
|
||||||
|
```
|
||||||
|
Input Parsing:
|
||||||
|
├─ Parse --type flag (default: quality)
|
||||||
|
└─ Parse session-id argument (optional)
|
||||||
|
|
||||||
|
Step 1: Session Resolution
|
||||||
|
└─ Decision:
|
||||||
|
├─ session-id provided → Use provided session
|
||||||
|
└─ Not provided → Auto-detect from .workflow/active/
|
||||||
|
|
||||||
|
Step 2: Validation
|
||||||
|
├─ Check session directory exists
|
||||||
|
└─ Check for completed implementation (.summaries/IMPL-*.md exists)
|
||||||
|
|
||||||
|
Step 3: Type Check
|
||||||
|
└─ Decision:
|
||||||
|
├─ type=docs → Redirect to /workflow:tools:docs
|
||||||
|
└─ Other types → Continue to analysis
|
||||||
|
|
||||||
|
Step 4: Model Analysis Phase
|
||||||
|
├─ Load context (summaries, test results, changed files)
|
||||||
|
└─ Perform specialized review by type:
|
||||||
|
├─ security → Security patterns + Gemini analysis
|
||||||
|
├─ architecture → Qwen architecture analysis
|
||||||
|
├─ quality → Gemini code quality analysis
|
||||||
|
└─ action-items → Requirements verification
|
||||||
|
|
||||||
|
Step 5: Generate Report
|
||||||
|
└─ Output: REVIEW-{type}.md
|
||||||
|
```
|
||||||
|
|
||||||
## Execution Template
|
## Execution Template
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
|
|||||||
@@ -87,20 +87,29 @@ Analyze workflow session for archival preparation. Session is STILL in active lo
|
|||||||
|
|
||||||
2. **Count files**: tasks (.task/*.json) and summaries (.summaries/*.md)
|
2. **Count files**: tasks (.task/*.json) and summaries (.summaries/*.md)
|
||||||
|
|
||||||
3. **Generate lessons**: Use gemini with ~/.claude/workflows/cli-templates/prompts/archive/analysis-simple.txt
|
3. **Extract review data** (if .review/ exists):
|
||||||
- Return: {successes, challenges, watch_patterns}
|
- Count dimension results: .review/dimensions/*.json
|
||||||
|
- Count deep-dive results: .review/iterations/*.json
|
||||||
|
- Extract findings summary from dimension JSONs (total, critical, high, medium, low)
|
||||||
|
- Check fix results if .review/fixes/ exists (fixed_count, failed_count)
|
||||||
|
- Build review_metrics: {dimensions_analyzed, total_findings, severity_distribution, fix_success_rate}
|
||||||
|
|
||||||
4. **Build archive entry**:
|
4. **Generate lessons**: Use gemini with ~/.claude/workflows/cli-templates/prompts/archive/analysis-simple.txt
|
||||||
|
- Return: {successes, challenges, watch_patterns}
|
||||||
|
- If review data exists, include review-specific lessons (common issue patterns, effective fixes)
|
||||||
|
|
||||||
|
5. **Build archive entry**:
|
||||||
- Calculate: duration_hours, success_rate, tags (3-5 keywords)
|
- Calculate: duration_hours, success_rate, tags (3-5 keywords)
|
||||||
- Construct complete JSON with session_id, description, archived_at, metrics, tags, lessons
|
- Construct complete JSON with session_id, description, archived_at, metrics, tags, lessons
|
||||||
- Include archive_path: ".workflow/archives/WFS-session-name" (future location)
|
- Include archive_path: ".workflow/archives/WFS-session-name" (future location)
|
||||||
|
- If review data exists, include review_metrics in metrics object
|
||||||
|
|
||||||
5. **Extract feature metadata** (for Phase 4):
|
6. **Extract feature metadata** (for Phase 4):
|
||||||
- Parse IMPL_PLAN.md for title (first # heading)
|
- Parse IMPL_PLAN.md for title (first # heading)
|
||||||
- Extract description (first paragraph, max 200 chars)
|
- Extract description (first paragraph, max 200 chars)
|
||||||
- Generate feature tags (3-5 keywords from content)
|
- Generate feature tags (3-5 keywords from content)
|
||||||
|
|
||||||
6. **Return result**: Complete metadata package for atomic commit
|
7. **Return result**: Complete metadata package for atomic commit
|
||||||
{
|
{
|
||||||
"status": "success",
|
"status": "success",
|
||||||
"session_id": "WFS-session-name",
|
"session_id": "WFS-session-name",
|
||||||
@@ -109,7 +118,17 @@ Analyze workflow session for archival preparation. Session is STILL in active lo
|
|||||||
"description": "...",
|
"description": "...",
|
||||||
"archived_at": "...",
|
"archived_at": "...",
|
||||||
"archive_path": ".workflow/archives/WFS-session-name",
|
"archive_path": ".workflow/archives/WFS-session-name",
|
||||||
"metrics": {...},
|
"metrics": {
|
||||||
|
"duration_hours": 2.5,
|
||||||
|
"tasks_completed": 5,
|
||||||
|
"summaries_generated": 3,
|
||||||
|
"review_metrics": { // Optional, only if .review/ exists
|
||||||
|
"dimensions_analyzed": 4,
|
||||||
|
"total_findings": 15,
|
||||||
|
"severity_distribution": {"critical": 1, "high": 3, "medium": 8, "low": 3},
|
||||||
|
"fix_success_rate": 0.87 // Optional, only if .review/fixes/ exists
|
||||||
|
}
|
||||||
|
},
|
||||||
"tags": [...],
|
"tags": [...],
|
||||||
"lessons": {...}
|
"lessons": {...}
|
||||||
},
|
},
|
||||||
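A minimal bash sketch of steps 2-3 above, assuming the session is still under `.workflow/active/` and that dimension JSONs expose a `findings` array with a `severity` field while fix results expose `fixed_count`/`failed_count` (these field names are assumptions, not part of this spec):

```bash
# Sketch: count session artifacts and derive review_metrics with find + jq
SESSION_DIR=".workflow/active/${SESSION_ID}"

TASK_COUNT=$(find "$SESSION_DIR/.task" -name "*.json" -type f 2>/dev/null | wc -l)
SUMMARY_COUNT=$(find "$SESSION_DIR/.summaries" -name "*.md" -type f 2>/dev/null | wc -l)

if [ -d "$SESSION_DIR/.review" ]; then
  DIMENSIONS_ANALYZED=$(find "$SESSION_DIR/.review/dimensions" -name "*.json" -type f 2>/dev/null | wc -l)
  TOTAL_FINDINGS=$(jq -s '[.[].findings[]?] | length' "$SESSION_DIR"/.review/dimensions/*.json)
  SEVERITY_DISTRIBUTION=$(jq -s '[.[].findings[]?.severity] | group_by(.) | map({(.[0]): length}) | add' \
    "$SESSION_DIR"/.review/dimensions/*.json)
  if [ -d "$SESSION_DIR/.review/fixes" ]; then
    FIX_SUCCESS_RATE=$(jq -s '(map(.fixed_count // 0) | add) / (map((.fixed_count // 0) + (.failed_count // 0)) | add)' \
      "$SESSION_DIR"/.review/fixes/*.json)
  fi
fi
```

The agent would then fold these values into the `metrics` and optional `review_metrics` objects shown in step 7.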
@@ -193,6 +212,7 @@ bash(rm .workflow/archives/WFS-session-name/.archiving)
|
|||||||
Location: .workflow/archives/WFS-session-name/
|
Location: .workflow/archives/WFS-session-name/
|
||||||
Lessons: ${archiveEntry.lessons.successes.length} successes, ${archiveEntry.lessons.challenges.length} challenges
|
Lessons: ${archiveEntry.lessons.successes.length} successes, ${archiveEntry.lessons.challenges.length} challenges
|
||||||
Manifest: Updated with ${manifest.length} total sessions
|
Manifest: Updated with ${manifest.length} total sessions
|
||||||
|
${reviewMetrics ? `Review: ${reviewMetrics.total_findings} findings across ${reviewMetrics.dimensions_analyzed} dimensions${reviewMetrics.fix_success_rate != null ? `, ${Math.round(reviewMetrics.fix_success_rate * 100)}% fixed` : ''}` : ''}
|
||||||
```
|
```
|
||||||
|
|
||||||
### Phase 4: Update Project Feature Registry
|
### Phase 4: Update Project Feature Registry
|
||||||
@@ -435,9 +455,10 @@ Session state: PARTIALLY COMPLETE (session archived, manifest needs update)
|
|||||||
**Phase 2: Agent Analysis** (Read-only data processing)
|
**Phase 2: Agent Analysis** (Read-only data processing)
|
||||||
- Extract all session data from active location
|
- Extract all session data from active location
|
||||||
- Count tasks and summaries
|
- Count tasks and summaries
|
||||||
- Generate lessons learned analysis
|
- Extract review data if .review/ exists (dimension results, findings, fix results)
|
||||||
|
- Generate lessons learned analysis (including review-specific lessons if applicable)
|
||||||
- Extract feature metadata from IMPL_PLAN.md
|
- Extract feature metadata from IMPL_PLAN.md
|
||||||
- Build complete archive + feature metadata package
|
- Build complete archive + feature metadata package (with review_metrics if applicable)
|
||||||
- **No file modifications** - pure analysis
|
- **No file modifications** - pure analysis
|
||||||
- **Total**: 1 agent invocation
|
- **Total**: 1 agent invocation
|
||||||
|
|
||||||
@@ -476,15 +497,4 @@ Session state: PARTIALLY COMPLETE (session archived, manifest needs update)
|
|||||||
- Resume from Phase 2 (skip marker creation)
|
- Resume from Phase 2 (skip marker creation)
|
||||||
- Idempotent operations (safe to retry)
|
- Idempotent operations (safe to retry)
|
||||||
|
|
||||||
### Benefits Over Previous Design
|
|
||||||
|
|
||||||
**Old Design Weakness**:
|
|
||||||
- Move first → agent second
|
|
||||||
- Agent failure → session moved but metadata incomplete
|
|
||||||
- Inconsistent state requires manual cleanup
|
|
||||||
|
|
||||||
**New Design Strengths**:
|
|
||||||
- Agent first → move second
|
|
||||||
- Agent failure → session still active, safe to retry
|
|
||||||
- Transactional commit → all-or-nothing file operations
|
|
||||||
- Marker-based state → resume capability
|
|
||||||
|
|||||||
@@ -1,11 +1,13 @@
|
|||||||
---
|
---
|
||||||
name: start
|
name: start
|
||||||
description: Discover existing sessions or start new workflow session with intelligent session management and conflict detection
|
description: Discover existing sessions or start new workflow session with intelligent session management and conflict detection
|
||||||
argument-hint: [--auto|--new] [optional: task description for new session]
|
argument-hint: [--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]
|
||||||
examples:
|
examples:
|
||||||
- /workflow:session:start
|
- /workflow:session:start
|
||||||
- /workflow:session:start --auto "implement OAuth2 authentication"
|
- /workflow:session:start --auto "implement OAuth2 authentication"
|
||||||
- /workflow:session:start --new "fix login bug"
|
- /workflow:session:start --type review "Code review for auth module"
|
||||||
|
- /workflow:session:start --type tdd --auto "implement user authentication"
|
||||||
|
- /workflow:session:start --type test --new "test payment flow"
|
||||||
---
|
---
|
||||||
|
|
||||||
# Start Workflow Session (/workflow:session:start)
|
# Start Workflow Session (/workflow:session:start)
|
||||||
@@ -17,6 +19,23 @@ Manages workflow sessions with three operation modes: discovery (manual), auto (
|
|||||||
1. **Project-level initialization** (first-time only): Creates `.workflow/project.json` for feature registry
|
1. **Project-level initialization** (first-time only): Creates `.workflow/project.json` for feature registry
|
||||||
2. **Session-level initialization** (always): Creates session directory structure
|
2. **Session-level initialization** (always): Creates session directory structure
|
||||||
|
|
||||||
|
## Session Types
|
||||||
|
|
||||||
|
The `--type` parameter classifies sessions for CCW dashboard organization:
|
||||||
|
|
||||||
|
| Type | Description | Default For |
|
||||||
|
|------|-------------|-------------|
|
||||||
|
| `workflow` | Standard implementation (default) | `/workflow:plan` |
|
||||||
|
| `review` | Code review sessions | `/workflow:review-module-cycle` |
|
||||||
|
| `tdd` | TDD-based development | `/workflow:tdd-plan` |
|
||||||
|
| `test` | Test generation/fix sessions | `/workflow:test-fix-gen` |
|
||||||
|
| `docs` | Documentation sessions | `/memory:docs` |
|
||||||
|
|
||||||
|
**Validation**: If `--type` is provided with an invalid value, return this error:
|
||||||
|
```
|
||||||
|
ERROR: Invalid session type. Valid types: workflow, review, tdd, test, docs
|
||||||
|
```
|
||||||
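A minimal sketch of this check, assuming the parsed `--type` value is held in a `$TYPE` variable:

```bash
# Sketch: validate the --type value, defaulting to "workflow" when the flag is omitted
TYPE="${TYPE:-workflow}"
case "$TYPE" in
  workflow|review|tdd|test|docs) ;;  # valid, continue with session creation
  *)
    echo "ERROR: Invalid session type. Valid types: workflow, review, tdd, test, docs"
    exit 1
    ;;
esac
```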
|
|
||||||
## Step 0: Initialize Project State (First-time Only)
|
## Step 0: Initialize Project State (First-time Only)
|
||||||
|
|
||||||
**Executed before all modes** - Ensures project-level state file exists by calling `/workflow:init`.
|
**Executed before all modes** - Ensures project-level state file exists by calling `/workflow:init`.
|
||||||
@@ -86,8 +105,8 @@ bash(mkdir -p .workflow/active/WFS-implement-oauth2-auth/.process)
|
|||||||
bash(mkdir -p .workflow/active/WFS-implement-oauth2-auth/.task)
|
bash(mkdir -p .workflow/active/WFS-implement-oauth2-auth/.task)
|
||||||
bash(mkdir -p .workflow/active/WFS-implement-oauth2-auth/.summaries)
|
bash(mkdir -p .workflow/active/WFS-implement-oauth2-auth/.summaries)
|
||||||
|
|
||||||
# Create metadata
|
# Create metadata (include type field, default to "workflow" if not specified)
|
||||||
bash(echo '{"session_id":"WFS-implement-oauth2-auth","project":"implement OAuth2 auth","status":"planning"}' > .workflow/active/WFS-implement-oauth2-auth/workflow-session.json)
|
bash(echo '{"session_id":"WFS-implement-oauth2-auth","project":"implement OAuth2 auth","status":"planning","type":"workflow","created_at":"2024-12-04T08:00:00Z"}' > .workflow/active/WFS-implement-oauth2-auth/workflow-session.json)
|
||||||
```
|
```
|
||||||
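When the type comes from the parsed `--type` flag rather than a hard-coded string, the metadata could be assembled with `jq` along these lines (a sketch; the `$TYPE` variable and the `date` invocation are assumptions):

```bash
# Sketch: build workflow-session.json with the parsed --type value and a real timestamp
jq -n --arg id "WFS-implement-oauth2-auth" \
      --arg project "implement OAuth2 auth" \
      --arg type "${TYPE:-workflow}" \
      --arg ts "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
      '{session_id: $id, project: $project, status: "planning", type: $type, created_at: $ts}' \
      > .workflow/active/WFS-implement-oauth2-auth/workflow-session.json
```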
|
|
||||||
**Output**: `SESSION_ID: WFS-implement-oauth2-auth`
|
**Output**: `SESSION_ID: WFS-implement-oauth2-auth`
|
||||||
@@ -143,11 +162,16 @@ bash(mkdir -p .workflow/active/WFS-fix-login-bug/.summaries)
|
|||||||
|
|
||||||
### Step 3: Create Metadata
|
### Step 3: Create Metadata
|
||||||
```bash
|
```bash
|
||||||
bash(echo '{"session_id":"WFS-fix-login-bug","project":"fix login bug","status":"planning"}' > .workflow/active/WFS-fix-login-bug/workflow-session.json)
|
# Include type field from --type parameter (default: "workflow")
|
||||||
|
bash(echo '{"session_id":"WFS-fix-login-bug","project":"fix login bug","status":"planning","type":"workflow","created_at":"2024-12-04T08:00:00Z"}' > .workflow/active/WFS-fix-login-bug/workflow-session.json)
|
||||||
```
|
```
|
||||||
|
|
||||||
**Output**: `SESSION_ID: WFS-fix-login-bug`
|
**Output**: `SESSION_ID: WFS-fix-login-bug`
|
||||||
|
|
||||||
|
## Execution Guideline
|
||||||
|
|
||||||
|
- **Non-interrupting**: When called from other commands, this command completes and returns control to the caller without interrupting subsequent tasks.
|
||||||
|
|
||||||
## Output Format Specification
|
## Output Format Specification
|
||||||
|
|
||||||
### Success
|
### Success
|
||||||
@@ -168,25 +192,6 @@ DECISION: Reusing existing session
|
|||||||
SESSION_ID: WFS-promptmaster-platform
|
SESSION_ID: WFS-promptmaster-platform
|
||||||
```
|
```
|
||||||
|
|
||||||
## Command Integration
|
|
||||||
|
|
||||||
### For /workflow:plan (Use Auto Mode)
|
|
||||||
```bash
|
|
||||||
SlashCommand(command="/workflow:session:start --auto \"implement OAuth2 authentication\"")
|
|
||||||
|
|
||||||
# Parse session ID from output
|
|
||||||
grep "^SESSION_ID:" | awk '{print $2}'
|
|
||||||
```
|
|
||||||
|
|
||||||
### For Interactive Workflows (Use Discovery Mode)
|
|
||||||
```bash
|
|
||||||
SlashCommand(command="/workflow:session:start")
|
|
||||||
```
|
|
||||||
|
|
||||||
### For New Isolated Work (Use Force New Mode)
|
|
||||||
```bash
|
|
||||||
SlashCommand(command="/workflow:session:start --new \"experimental feature\"")
|
|
||||||
```
|
|
||||||
|
|
||||||
## Session ID Format
|
## Session ID Format
|
||||||
- Pattern: `WFS-[lowercase-slug]`
|
- Pattern: `WFS-[lowercase-slug]`
|
||||||
|
|||||||
@@ -1,328 +0,0 @@
|
|||||||
---
|
|
||||||
name: workflow:status
|
|
||||||
description: Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view
|
|
||||||
argument-hint: "[optional: --project|task-id|--validate|--dashboard]"
|
|
||||||
---
|
|
||||||
|
|
||||||
# Workflow Status Command (/workflow:status)
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
Generates on-demand views from project and session data. Supports multiple modes:
|
|
||||||
1. **Project Overview** (`--project`): Shows completed features and project statistics
|
|
||||||
2. **Workflow Tasks** (default): Shows current session task progress
|
|
||||||
3. **HTML Dashboard** (`--dashboard`): Generates interactive HTML task board with active and archived sessions
|
|
||||||
|
|
||||||
No synchronization needed - all views are calculated from current JSON state.
|
|
||||||
|
|
||||||
## Usage
|
|
||||||
```bash
|
|
||||||
/workflow:status # Show current workflow session overview
|
|
||||||
/workflow:status --project # Show project-level feature registry
|
|
||||||
/workflow:status impl-1 # Show specific task details
|
|
||||||
/workflow:status --validate # Validate workflow integrity
|
|
||||||
/workflow:status --dashboard # Generate HTML dashboard board
|
|
||||||
```
|
|
||||||
|
|
||||||
## Implementation Flow
|
|
||||||
|
|
||||||
### Mode Selection
|
|
||||||
|
|
||||||
**Check for --project flag**:
|
|
||||||
- If `--project` flag present → Execute **Project Overview Mode**
|
|
||||||
- Otherwise → Execute **Workflow Session Mode** (default)
|
|
||||||
|
|
||||||
## Project Overview Mode
|
|
||||||
|
|
||||||
### Step 1: Check Project State
|
|
||||||
```bash
|
|
||||||
bash(test -f .workflow/project.json && echo "EXISTS" || echo "NOT_FOUND")
|
|
||||||
```
|
|
||||||
|
|
||||||
**If NOT_FOUND**:
|
|
||||||
```
|
|
||||||
No project state found.
|
|
||||||
Run /workflow:session:start to initialize project.
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 2: Read Project Data
|
|
||||||
```bash
|
|
||||||
bash(cat .workflow/project.json)
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 3: Parse and Display
|
|
||||||
|
|
||||||
**Data Processing**:
|
|
||||||
```javascript
|
|
||||||
const projectData = JSON.parse(Read('.workflow/project.json'));
|
|
||||||
const features = projectData.features || [];
|
|
||||||
const stats = projectData.statistics || {};
|
|
||||||
const overview = projectData.overview || null;
|
|
||||||
|
|
||||||
// Sort features by implementation date (newest first)
|
|
||||||
const sortedFeatures = features.sort((a, b) =>
|
|
||||||
new Date(b.implemented_at) - new Date(a.implemented_at)
|
|
||||||
);
|
|
||||||
```
|
|
||||||
|
|
||||||
**Output Format** (with extended overview):
|
|
||||||
```
|
|
||||||
## Project: ${projectData.project_name}
|
|
||||||
Initialized: ${projectData.initialized_at}
|
|
||||||
|
|
||||||
${overview ? `
|
|
||||||
### Overview
|
|
||||||
${overview.description}
|
|
||||||
|
|
||||||
**Technology Stack**:
|
|
||||||
${overview.technology_stack.languages.map(l => `- ${l.name}${l.primary ? ' (primary)' : ''}: ${l.file_count} files`).join('\n')}
|
|
||||||
Frameworks: ${overview.technology_stack.frameworks.join(', ')}
|
|
||||||
|
|
||||||
**Architecture**:
|
|
||||||
Style: ${overview.architecture.style}
|
|
||||||
Patterns: ${overview.architecture.patterns.join(', ')}
|
|
||||||
|
|
||||||
**Key Components** (${overview.key_components.length}):
|
|
||||||
${overview.key_components.map(c => `- ${c.name} (${c.path})\n ${c.description}`).join('\n')}
|
|
||||||
|
|
||||||
**Metrics**:
|
|
||||||
- Files: ${overview.metrics.total_files}
|
|
||||||
- Lines of Code: ${overview.metrics.lines_of_code}
|
|
||||||
- Complexity: ${overview.metrics.complexity}
|
|
||||||
|
|
||||||
---
|
|
||||||
` : ''}
|
|
||||||
|
|
||||||
### Completed Features (${stats.total_features})
|
|
||||||
|
|
||||||
${sortedFeatures.map(f => `
|
|
||||||
- ${f.title} (${f.timeline?.implemented_at || f.implemented_at})
|
|
||||||
${f.description}
|
|
||||||
Tags: ${f.tags?.join(', ') || 'none'}
|
|
||||||
Session: ${f.traceability?.session_id || f.session_id}
|
|
||||||
Archive: ${f.traceability?.archive_path || 'unknown'}
|
|
||||||
${f.traceability?.commit_hash ? `Commit: ${f.traceability.commit_hash}` : ''}
|
|
||||||
`).join('\n')}
|
|
||||||
|
|
||||||
### Project Statistics
|
|
||||||
- Total Features: ${stats.total_features}
|
|
||||||
- Total Sessions: ${stats.total_sessions}
|
|
||||||
- Last Updated: ${stats.last_updated}
|
|
||||||
|
|
||||||
### Quick Access
|
|
||||||
- View session details: /workflow:status
|
|
||||||
- Archive query: jq '.archives[] | select(.session_id == "SESSION_ID")' .workflow/archives/manifest.json
|
|
||||||
- Documentation: .workflow/docs/${projectData.project_name}/
|
|
||||||
|
|
||||||
### Query Commands
|
|
||||||
# Find by tag
|
|
||||||
cat .workflow/project.json | jq '.features[] | select(.tags[] == "auth")'
|
|
||||||
|
|
||||||
# View archive
|
|
||||||
cat ${feature.traceability.archive_path}/IMPL_PLAN.md
|
|
||||||
|
|
||||||
# List all tags
|
|
||||||
cat .workflow/project.json | jq -r '.features[].tags[]' | sort -u
|
|
||||||
```
|
|
||||||
|
|
||||||
**Empty State**:
|
|
||||||
```
|
|
||||||
## Project: ${projectData.project_name}
|
|
||||||
Initialized: ${projectData.initialized_at}
|
|
||||||
|
|
||||||
No features completed yet.
|
|
||||||
|
|
||||||
Complete your first workflow session to add features:
|
|
||||||
1. /workflow:plan "feature description"
|
|
||||||
2. /workflow:execute
|
|
||||||
3. /workflow:session:complete
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 4: Show Recent Sessions (Optional)
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# List 5 most recent archived sessions
|
|
||||||
bash(ls -1t .workflow/archives/WFS-* 2>/dev/null | head -5 | xargs -I {} basename {})
|
|
||||||
```
|
|
||||||
|
|
||||||
**Output**:
|
|
||||||
```
|
|
||||||
### Recent Sessions
|
|
||||||
- WFS-auth-system (archived)
|
|
||||||
- WFS-payment-flow (archived)
|
|
||||||
- WFS-user-dashboard (archived)
|
|
||||||
|
|
||||||
Use /workflow:session:complete to archive current session.
|
|
||||||
```
|
|
||||||
|
|
||||||
## Workflow Session Mode (Default)
|
|
||||||
|
|
||||||
### Step 1: Find Active Session
|
|
||||||
```bash
|
|
||||||
find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 2: Load Session Data
|
|
||||||
```bash
|
|
||||||
cat .workflow/active/WFS-session/workflow-session.json
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 3: Scan Task Files
|
|
||||||
```bash
|
|
||||||
find .workflow/active/WFS-session/.task/ -name "*.json" -type f 2>/dev/null
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 4: Generate Task Status
|
|
||||||
```bash
|
|
||||||
cat .workflow/active/WFS-session/.task/impl-1.json | jq -r '.status'
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 5: Count Task Progress
|
|
||||||
```bash
|
|
||||||
find .workflow/active/WFS-session/.task/ -name "*.json" -type f | wc -l
|
|
||||||
find .workflow/active/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 6: Display Overview
|
|
||||||
```markdown
|
|
||||||
# Workflow Overview
|
|
||||||
**Session**: WFS-session-name
|
|
||||||
**Progress**: 3/8 tasks completed
|
|
||||||
|
|
||||||
## Active Tasks
|
|
||||||
- [IN PROGRESS] impl-1: Current task in progress
|
|
||||||
- [ ] impl-2: Next pending task
|
|
||||||
|
|
||||||
## Completed Tasks
|
|
||||||
- [COMPLETED] impl-0: Setup completed
|
|
||||||
```
|
|
||||||
|
|
||||||
## Dashboard Mode (HTML Board)
|
|
||||||
|
|
||||||
### Step 1: Check for --dashboard flag
|
|
||||||
```bash
|
|
||||||
# If --dashboard flag present → Execute Dashboard Mode
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 2: Collect Workflow Data
|
|
||||||
|
|
||||||
**Collect Active Sessions**:
|
|
||||||
```bash
|
|
||||||
# Find all active sessions
|
|
||||||
find .workflow/active/ -name "WFS-*" -type d 2>/dev/null
|
|
||||||
|
|
||||||
# For each active session, read metadata and tasks
|
|
||||||
for session in $(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null); do
|
|
||||||
cat "$session/workflow-session.json"
|
|
||||||
find "$session/.task/" -name "*.json" -type f 2>/dev/null
|
|
||||||
done
|
|
||||||
```
|
|
||||||
|
|
||||||
**Collect Archived Sessions**:
|
|
||||||
```bash
|
|
||||||
# Find all archived sessions
|
|
||||||
find .workflow/archives/ -name "WFS-*" -type d 2>/dev/null
|
|
||||||
|
|
||||||
# Read manifest if exists
|
|
||||||
cat .workflow/archives/manifest.json 2>/dev/null
|
|
||||||
|
|
||||||
# For each archived session, read metadata
|
|
||||||
for archive in $(find .workflow/archives/ -name "WFS-*" -type d 2>/dev/null); do
|
|
||||||
cat "$archive/workflow-session.json" 2>/dev/null
|
|
||||||
# Count completed tasks
|
|
||||||
find "$archive/.task/" -name "*.json" -type f 2>/dev/null | wc -l
|
|
||||||
done
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 3: Process and Structure Data
|
|
||||||
|
|
||||||
**Build data structure for dashboard**:
|
|
||||||
```javascript
|
|
||||||
const dashboardData = {
  activeSessions: [],
  archivedSessions: [],
  generatedAt: new Date().toISOString()
};

// Process active sessions (each entry is a session directory path)
for (const activeSession of activeSessions) {
  const sessionData = JSON.parse(Read(`${activeSession}/workflow-session.json`));
  const tasks = [];

  // Load all tasks for this session
  for (const taskFile of find(`${activeSession}/.task/*.json`)) {
    const taskData = JSON.parse(Read(taskFile));
    tasks.push({
      task_id: taskData.task_id,
      title: taskData.title,
      status: taskData.status,
      type: taskData.type
    });
  }

  dashboardData.activeSessions.push({
    session_id: sessionData.session_id,
    project: sessionData.project,
    status: sessionData.status,
    created_at: sessionData.created_at || sessionData.initialized_at,
    tasks: tasks
  });
}

// Process archived sessions
for (const archivedSession of archivedSessions) {
  const sessionData = JSON.parse(Read(`${archivedSession}/workflow-session.json`));
  const taskCount = bash(`find ${archivedSession}/.task/ -name "*.json" -type f 2>/dev/null | wc -l`);

  dashboardData.archivedSessions.push({
    session_id: sessionData.session_id,
    project: sessionData.project,
    archived_at: sessionData.completed_at || sessionData.archived_at,
    taskCount: parseInt(taskCount, 10),
    archive_path: archivedSession
  });
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 4: Generate HTML from Template
|
|
||||||
|
|
||||||
**Load template and inject data**:
|
|
||||||
```javascript
|
|
||||||
// Read the HTML template
|
|
||||||
const template = Read("~/.claude/templates/workflow-dashboard.html");
|
|
||||||
|
|
||||||
// Prepare data for injection
|
|
||||||
const dataJson = JSON.stringify(dashboardData, null, 2);
|
|
||||||
|
|
||||||
// Replace placeholder with actual data
|
|
||||||
const htmlContent = template.replace('{{WORKFLOW_DATA}}', dataJson);
|
|
||||||
|
|
||||||
// Ensure .workflow directory exists
|
|
||||||
bash(mkdir -p .workflow);
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 5: Write HTML File
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Write the generated HTML to .workflow/dashboard.html
|
|
||||||
Write({
|
|
||||||
file_path: ".workflow/dashboard.html",
|
|
||||||
content: htmlContent
|
|
||||||
})
|
|
||||||
```
|
|
||||||
|
|
||||||
### Step 6: Display Success Message
|
|
||||||
|
|
||||||
```markdown
|
|
||||||
Dashboard generated successfully!
|
|
||||||
|
|
||||||
Location: .workflow/dashboard.html
|
|
||||||
|
|
||||||
Open in browser:
|
|
||||||
file://$(pwd)/.workflow/dashboard.html
|
|
||||||
|
|
||||||
Features:
|
|
||||||
- 📊 Active sessions overview
|
|
||||||
- 📦 Archived sessions history
|
|
||||||
- 🔍 Search and filter
|
|
||||||
- 📈 Progress tracking
|
|
||||||
- 🎨 Dark/light theme
|
|
||||||
|
|
||||||
Refresh data: Re-run /workflow:status --dashboard
|
|
||||||
```
|
|
||||||
@@ -1,7 +1,7 @@
|
|||||||
---
|
---
|
||||||
name: tdd-plan
|
name: tdd-plan
|
||||||
description: TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking
|
description: TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking
|
||||||
argument-hint: "[--cli-execute] \"feature description\"|file.md"
|
argument-hint: "\"feature description\"|file.md"
|
||||||
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
||||||
---
|
---
|
||||||
|
|
||||||
@@ -9,40 +9,43 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
|||||||
|
|
||||||
## Coordinator Role
|
## Coordinator Role
|
||||||
|
|
||||||
**This command is a pure orchestrator**: Execute 6 slash commands in sequence, parse outputs, pass context, and ensure complete TDD workflow creation with Red-Green-Refactor task generation.
|
**This command is a pure orchestrator**: Dispatches 6 slash commands in sequence, parses outputs, passes context, and ensures complete TDD workflow creation with Red-Green-Refactor task generation.
|
||||||
|
|
||||||
**Execution Modes**:
|
**CLI Tool Selection**: CLI tool usage is determined semantically from the user's task description. Include "use Codex/Gemini/Qwen" in your request for CLI execution.
|
||||||
- **Agent Mode** (default): Use `/workflow:tools:task-generate-tdd` (autonomous agent-driven)
|
|
||||||
- **CLI Mode** (`--cli-execute`): Use `/workflow:tools:task-generate-tdd --cli-execute` (Gemini/Qwen)
|
|
||||||
|
|
||||||
**Task Attachment Model**:
|
**Task Attachment Model**:
|
||||||
- SlashCommand invocation **expands workflow** by attaching sub-tasks to current TodoWrite
|
- SlashCommand dispatch **expands workflow** by attaching sub-tasks to current TodoWrite
|
||||||
- When a sub-command is invoked (e.g., `/workflow:tools:test-context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
|
- When dispatching a sub-command (e.g., `/workflow:tools:test-context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
|
||||||
- Orchestrator **executes these attached tasks** sequentially
|
- Orchestrator **executes these attached tasks** sequentially
|
||||||
- After completion, attached tasks are **collapsed** back to high-level phase summary
|
- After completion, attached tasks are **collapsed** back to high-level phase summary
|
||||||
- This is **task expansion**, not external delegation
|
- This is **task expansion**, not external delegation
|
||||||
|
|
||||||
**Auto-Continue Mechanism**:
|
**Auto-Continue Mechanism**:
|
||||||
- TodoList tracks current phase status and dynamically manages task attachment/collapse
|
- TodoList tracks current phase status and dynamically manages task attachment/collapse
|
||||||
- When each phase finishes executing, automatically execute next pending phase
|
- When each phase finishes executing, automatically dispatch next pending phase
|
||||||
- All phases run autonomously without user interaction
|
- All phases run autonomously without user interaction
|
||||||
- **⚠️ CONTINUOUS EXECUTION** - Do not stop until all phases complete
|
- **⚠️ CONTINUOUS EXECUTION** - Do not stop until all phases complete
|
||||||
|
|
||||||
## Core Rules
|
## Core Rules
|
||||||
|
|
||||||
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
|
1. **Start Immediately**: First action is TodoWrite initialization, second action is dispatching Phase 1
|
||||||
2. **No Preliminary Analysis**: Do not read files before Phase 1
|
2. **No Preliminary Analysis**: Do not read files before Phase 1
|
||||||
3. **Parse Every Output**: Extract required data for next phase
|
3. **Parse Every Output**: Extract required data for next phase
|
||||||
4. **Auto-Continue via TodoList**: Check TodoList status to execute next pending phase automatically
|
4. **Auto-Continue via TodoList**: Check TodoList status to dispatch next pending phase automatically
|
||||||
5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
|
5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
|
||||||
6. **TDD Context**: All descriptions include "TDD:" prefix
|
6. **TDD Context**: All descriptions include "TDD:" prefix
|
||||||
7. **Task Attachment Model**: SlashCommand invocation **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
|
7. **Task Attachment Model**: SlashCommand dispatch **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
|
||||||
8. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
|
8. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and dispatch next phase
|
||||||
|
|
||||||
## 6-Phase Execution (with Conflict Resolution)
|
## 6-Phase Execution (with Conflict Resolution)
|
||||||
|
|
||||||
### Phase 1: Session Discovery
|
### Phase 1: Session Discovery
|
||||||
**Command**: `/workflow:session:start --auto "TDD: [structured-description]"`
|
|
||||||
|
**Step 1.1: Dispatch** - Session discovery and initialization
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/workflow:session:start --type tdd --auto \"TDD: [structured-description]\"")
|
||||||
|
```
|
||||||
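Per Core Rule 3, the orchestrator parses the session ID from this dispatch before Phase 2. A minimal sketch, assuming the command output is captured in `$OUTPUT`:

```bash
# Sketch: extract the session ID from the dispatch output
SESSION_ID=$(echo "$OUTPUT" | grep "^SESSION_ID:" | awk '{print $2}')
```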
|
|
||||||
**TDD Structured Format**:
|
**TDD Structured Format**:
|
||||||
```
|
```
|
||||||
@@ -62,7 +65,12 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
---
|
---
|
||||||
|
|
||||||
### Phase 2: Context Gathering
|
### Phase 2: Context Gathering
|
||||||
**Command**: `/workflow:tools:context-gather --session [sessionId] "TDD: [structured-description]"`
|
|
||||||
|
**Step 2.1: Dispatch** - Context gathering and analysis
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"TDD: [structured-description]\"")
|
||||||
|
```
|
||||||
|
|
||||||
**Use Same Structured Description**: Pass the same structured format from Phase 1
|
**Use Same Structured Description**: Pass the same structured format from Phase 1
|
||||||
|
|
||||||
@@ -83,7 +91,12 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
---
|
---
|
||||||
|
|
||||||
### Phase 3: Test Coverage Analysis
|
### Phase 3: Test Coverage Analysis
|
||||||
**Command**: `/workflow:tools:test-context-gather --session [sessionId]`
|
|
||||||
|
**Step 3.1: Dispatch** - Test coverage analysis and framework detection
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/workflow:tools:test-context-gather --session [sessionId]")
|
||||||
|
```
|
||||||
|
|
||||||
**Purpose**: Analyze existing codebase for:
|
**Purpose**: Analyze existing codebase for:
|
||||||
- Existing test patterns and conventions
|
- Existing test patterns and conventions
|
||||||
@@ -93,28 +106,25 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
|
|
||||||
**Parse**: Extract testContextPath (`.workflow/active/[sessionId]/.process/test-context-package.json`)
|
**Parse**: Extract testContextPath (`.workflow/active/[sessionId]/.process/test-context-package.json`)
|
||||||
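A small sketch of that parse step, assuming `$SESSION_ID` was captured in Phase 1:

```bash
# Sketch: confirm the test context package exists before continuing
TEST_CONTEXT=".workflow/active/${SESSION_ID}/.process/test-context-package.json"
test -f "$TEST_CONTEXT" || echo "WARNING: test-context-package.json not found for ${SESSION_ID}"
```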
|
|
||||||
**Benefits**:
|
|
||||||
- Makes TDD aware of existing environment
|
|
||||||
- Identifies reusable test patterns
|
|
||||||
- Prevents duplicate test creation
|
|
||||||
- Enables integration with existing tests
|
|
||||||
|
|
||||||
<!-- TodoWrite: When test-context-gather invoked, INSERT 3 test-context-gather tasks -->
|
|
||||||
|
|
||||||
**TodoWrite Update (Phase 3 SlashCommand invoked - tasks attached)**:
|
<!-- TodoWrite: When test-context-gather dispatched, INSERT 3 test-context-gather tasks -->
|
||||||
|
|
||||||
|
**TodoWrite Update (Phase 3 SlashCommand dispatched - tasks attached)**:
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
||||||
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
{"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
||||||
{"content": "Phase 3.1: Detect test framework and conventions (test-context-gather)", "status": "in_progress", "activeForm": "Detecting test framework"},
|
{"content": "Phase 3: Test Coverage Analysis", "status": "in_progress", "activeForm": "Executing test coverage analysis"},
|
||||||
{"content": "Phase 3.2: Analyze existing test coverage (test-context-gather)", "status": "pending", "activeForm": "Analyzing test coverage"},
|
{"content": " → Detect test framework and conventions", "status": "in_progress", "activeForm": "Detecting test framework"},
|
||||||
{"content": "Phase 3.3: Identify coverage gaps (test-context-gather)", "status": "pending", "activeForm": "Identifying coverage gaps"},
|
{"content": " → Analyze existing test coverage", "status": "pending", "activeForm": "Analyzing test coverage"},
|
||||||
{"content": "Execute TDD task generation", "status": "pending", "activeForm": "Executing TDD task generation"},
|
{"content": " → Identify coverage gaps", "status": "pending", "activeForm": "Identifying coverage gaps"},
|
||||||
{"content": "Validate TDD structure", "status": "pending", "activeForm": "Validating TDD structure"}
|
{"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
|
||||||
|
{"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
|
||||||
]
|
]
|
||||||
```
|
```
|
||||||
|
|
||||||
**Note**: SlashCommand invocation **attaches** test-context-gather's 3 tasks. Orchestrator **executes** these tasks.
|
**Note**: SlashCommand dispatch **attaches** test-context-gather's 3 tasks. Orchestrator **executes** these tasks.
|
||||||
|
|
||||||
**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
|
**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
|
||||||
|
|
||||||
@@ -123,11 +133,11 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
**TodoWrite Update (Phase 3 completed - tasks collapsed)**:
|
**TodoWrite Update (Phase 3 completed - tasks collapsed)**:
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
||||||
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
{"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
||||||
{"content": "Execute test coverage analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
|
{"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
|
||||||
{"content": "Execute TDD task generation", "status": "pending", "activeForm": "Executing TDD task generation"},
|
{"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
|
||||||
{"content": "Validate TDD structure", "status": "pending", "activeForm": "Validating TDD structure"}
|
{"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
|
||||||
]
|
]
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -141,7 +151,11 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
|
|
||||||
**Trigger**: Only execute when context-package.json indicates conflict_risk is "medium" or "high"
|
**Trigger**: Only execute when context-package.json indicates conflict_risk is "medium" or "high"
|
||||||
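A minimal sketch of this trigger check, assuming `conflict_risk` is a top-level field of the Phase 2 context package:

```bash
# Sketch: gate Phase 4 on the conflict_risk value from context-package.json
RISK=$(jq -r '.conflict_risk // "none"' ".workflow/active/${SESSION_ID}/.process/context-package.json")
case "$RISK" in
  medium|high) echo "Conflict risk '$RISK' detected - dispatching conflict-resolution" ;;
  *)           echo "No significant conflicts detected, proceeding to TDD task generation" ;;
esac
```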
|
|
||||||
**Command**: `SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId] --context [contextPath]")`
|
**Step 4.1: Dispatch** - Conflict detection and resolution
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId] --context [contextPath]")
|
||||||
|
```
|
||||||
|
|
||||||
**Input**:
|
**Input**:
|
||||||
- sessionId from Phase 1
|
- sessionId from Phase 1
|
||||||
@@ -159,23 +173,24 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
- If conflict_risk is "none" or "low", skip directly to Phase 5
|
- If conflict_risk is "none" or "low", skip directly to Phase 5
|
||||||
- Display: "No significant conflicts detected, proceeding to TDD task generation"
|
- Display: "No significant conflicts detected, proceeding to TDD task generation"
|
||||||
|
|
||||||
<!-- TodoWrite: If conflict_risk ≥ medium, INSERT 3 conflict-resolution tasks -->
|
<!-- TodoWrite: If conflict_risk ≥ medium, INSERT 3 conflict-resolution tasks when dispatched -->
|
||||||
|
|
||||||
**TodoWrite Update (Phase 4 SlashCommand invoked - tasks attached, if conflict_risk ≥ medium)**:
|
**TodoWrite Update (Phase 4 SlashCommand dispatched - tasks attached, if conflict_risk ≥ medium)**:
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
||||||
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
{"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
||||||
{"content": "Execute test coverage analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
|
{"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
|
||||||
{"content": "Phase 4.1: Detect conflicts with CLI analysis (conflict-resolution)", "status": "in_progress", "activeForm": "Detecting conflicts"},
|
{"content": "Phase 4: Conflict Resolution", "status": "in_progress", "activeForm": "Executing conflict resolution"},
|
||||||
{"content": "Phase 4.2: Present conflicts to user (conflict-resolution)", "status": "pending", "activeForm": "Presenting conflicts"},
|
{"content": " → Detect conflicts with CLI analysis", "status": "in_progress", "activeForm": "Detecting conflicts"},
|
||||||
{"content": "Phase 4.3: Apply resolution strategies (conflict-resolution)", "status": "pending", "activeForm": "Applying resolution strategies"},
|
{"content": " → Present conflicts to user", "status": "pending", "activeForm": "Presenting conflicts"},
|
||||||
{"content": "Execute TDD task generation", "status": "pending", "activeForm": "Executing TDD task generation"},
|
{"content": " → Apply resolution strategies", "status": "pending", "activeForm": "Applying resolution strategies"},
|
||||||
{"content": "Validate TDD structure", "status": "pending", "activeForm": "Validating TDD structure"}
|
{"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
|
||||||
|
{"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
|
||||||
]
|
]
|
||||||
```
|
```
|
||||||
|
|
||||||
**Note**: SlashCommand invocation **attaches** conflict-resolution's 3 tasks. Orchestrator **executes** these tasks.
|
**Note**: SlashCommand dispatch **attaches** conflict-resolution's 3 tasks. Orchestrator **executes** these tasks.
|
||||||
|
|
||||||
**Next Action**: Tasks attached → **Execute Phase 4.1-4.3** sequentially
|
**Next Action**: Tasks attached → **Execute Phase 4.1-4.3** sequentially
|
||||||
|
|
||||||
@@ -184,12 +199,12 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
**TodoWrite Update (Phase 4 completed - tasks collapsed)**:
|
**TodoWrite Update (Phase 4 completed - tasks collapsed)**:
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
||||||
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
{"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
||||||
{"content": "Execute test coverage analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
|
{"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
|
||||||
{"content": "Execute conflict resolution", "status": "completed", "activeForm": "Executing conflict resolution"},
|
{"content": "Phase 4: Conflict Resolution", "status": "completed", "activeForm": "Executing conflict resolution"},
|
||||||
{"content": "Execute TDD task generation", "status": "pending", "activeForm": "Executing TDD task generation"},
|
{"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
|
||||||
{"content": "Validate TDD structure", "status": "pending", "activeForm": "Validating TDD structure"}
|
{"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
|
||||||
]
|
]
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -200,7 +215,13 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
**Memory State Check**:
|
**Memory State Check**:
|
||||||
- Evaluate current context window usage and memory state
|
- Evaluate current context window usage and memory state
|
||||||
- If memory usage is high (>110K tokens or approaching context limits):
|
- If memory usage is high (>110K tokens or approaching context limits):
|
||||||
- **Command**: `SlashCommand(command="/compact")`
|
|
||||||
|
**Step 4.5: Dispatch** - Memory compaction
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/compact")
|
||||||
|
```
|
||||||
|
|
||||||
- This optimizes memory before proceeding to Phase 5
|
- This optimizes memory before proceeding to Phase 5
|
||||||
- Memory compaction is particularly important after analysis phase which may generate extensive documentation
|
- Memory compaction is particularly important after analysis phase which may generate extensive documentation
|
||||||
- Ensures optimal performance and prevents context overflow
|
- Ensures optimal performance and prevents context overflow
|
||||||
@@ -208,9 +229,14 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
---
|
---
|
||||||
|
|
||||||
### Phase 5: TDD Task Generation
|
### Phase 5: TDD Task Generation
|
||||||
**Command**:
|
|
||||||
- Agent Mode (default): `/workflow:tools:task-generate-tdd --session [sessionId]`
|
**Step 5.1: Dispatch** - TDD task generation via action-planning-agent
|
||||||
- CLI Mode (`--cli-execute`): `/workflow:tools:task-generate-tdd --session [sessionId] --cli-execute`
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId]")
|
||||||
|
```
|
||||||
|
|
||||||
|
**Note**: CLI tool usage is determined semantically from the user's task description.
|
||||||
|
|
||||||
**Parse**: Extract feature count, task count (not chain count - tasks now contain internal TDD cycles)
|
**Parse**: Extract feature count, task count (not chain count - tasks now contain internal TDD cycles)
|
||||||
|
|
||||||
@@ -225,22 +251,23 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
- IMPL_PLAN.md contains workflow_type: "tdd" in frontmatter
|
- IMPL_PLAN.md contains workflow_type: "tdd" in frontmatter
|
||||||
- Task count ≤10 (compliance with task limit)
|
- Task count ≤10 (compliance with task limit)
|
||||||
|
|
||||||
<!-- TodoWrite: When task-generate-tdd invoked, INSERT 3 task-generate-tdd tasks -->
|
<!-- TodoWrite: When task-generate-tdd dispatched, INSERT 3 task-generate-tdd tasks -->
|
||||||
|
|
||||||
**TodoWrite Update (Phase 5 SlashCommand invoked - tasks attached)**:
|
**TodoWrite Update (Phase 5 SlashCommand dispatched - tasks attached)**:
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
||||||
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
{"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
||||||
{"content": "Execute test coverage analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
|
{"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
|
||||||
{"content": "Phase 5.1: Discovery - analyze TDD requirements (task-generate-tdd)", "status": "in_progress", "activeForm": "Analyzing TDD requirements"},
|
{"content": "Phase 5: TDD Task Generation", "status": "in_progress", "activeForm": "Executing TDD task generation"},
|
||||||
{"content": "Phase 5.2: Planning - design Red-Green-Refactor cycles (task-generate-tdd)", "status": "pending", "activeForm": "Designing TDD cycles"},
|
{"content": " → Discovery - analyze TDD requirements", "status": "in_progress", "activeForm": "Analyzing TDD requirements"},
|
||||||
{"content": "Phase 5.3: Output - generate IMPL tasks with internal TDD phases (task-generate-tdd)", "status": "pending", "activeForm": "Generating TDD tasks"},
|
{"content": " → Planning - design Red-Green-Refactor cycles", "status": "pending", "activeForm": "Designing TDD cycles"},
|
||||||
{"content": "Validate TDD structure", "status": "pending", "activeForm": "Validating TDD structure"}
|
{"content": " → Output - generate IMPL tasks with internal TDD phases", "status": "pending", "activeForm": "Generating TDD tasks"},
|
||||||
|
{"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
|
||||||
]
|
]
|
||||||
```
|
```
|
||||||
|
|
||||||
**Note**: SlashCommand invocation **attaches** task-generate-tdd's 3 tasks. Orchestrator **executes** these tasks. Each generated IMPL task will contain internal Red-Green-Refactor cycle.
|
**Note**: SlashCommand dispatch **attaches** task-generate-tdd's 3 tasks. Orchestrator **executes** these tasks. Each generated IMPL task will contain internal Red-Green-Refactor cycle.
|
||||||
|
|
||||||
**Next Action**: Tasks attached → **Execute Phase 5.1-5.3** sequentially
|
**Next Action**: Tasks attached → **Execute Phase 5.1-5.3** sequentially
|
||||||
|
|
||||||
@@ -249,11 +276,11 @@ TEST_FOCUS: [Test scenarios]
|
|||||||
**TodoWrite Update (Phase 5 completed - tasks collapsed)**:
|
**TodoWrite Update (Phase 5 completed - tasks collapsed)**:
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Execute session discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
|
||||||
{"content": "Execute context gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
{"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
|
||||||
{"content": "Execute test coverage analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
|
{"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
|
||||||
{"content": "Execute TDD task generation", "status": "completed", "activeForm": "Executing TDD task generation"},
|
{"content": "Phase 5: TDD Task Generation", "status": "completed", "activeForm": "Executing TDD task generation"},
|
||||||
{"content": "Validate TDD structure", "status": "in_progress", "activeForm": "Validating TDD structure"}
|
{"content": "Phase 6: TDD Structure Validation", "status": "in_progress", "activeForm": "Validating TDD structure"}
|
||||||
]
|
]
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -320,7 +347,7 @@ Quality Gate: Consider running /workflow:action-plan-verify to validate TDD task
|
|||||||
|
|
||||||
### Key Principles
|
### Key Principles
|
||||||
|
|
||||||
1. **Task Attachment** (when SlashCommand invoked):
|
1. **Task Attachment** (when SlashCommand dispatched):
|
||||||
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
|
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
|
||||||
- Example: `/workflow:tools:test-context-gather` attaches 3 sub-tasks (Phase 3.1, 3.2, 3.3)
|
- Example: `/workflow:tools:test-context-gather` attaches 3 sub-tasks (Phase 3.1, 3.2, 3.3)
|
||||||
- First attached task marked as `in_progress`, others as `pending`
|
- First attached task marked as `in_progress`, others as `pending`
|
||||||
@@ -337,7 +364,7 @@ Quality Gate: Consider running /workflow:action-plan-verify to validate TDD task
|
|||||||
- No user intervention required between phases
|
- No user intervention required between phases
|
||||||
- TodoWrite dynamically reflects current execution state
|
- TodoWrite dynamically reflects current execution state
|
||||||
|
|
||||||
**Lifecycle Summary**: Initial pending tasks → Phase invoked (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary) → Next phase begins (conditional Phase 4 if conflict_risk ≥ medium) → Repeat until all phases complete.
|
**Lifecycle Summary**: Initial pending tasks → Phase dispatched (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary) → Next phase begins (conditional Phase 4 if conflict_risk ≥ medium) → Repeat until all phases complete.
|
||||||
|
|
||||||
### TDD-Specific Features
|
### TDD-Specific Features
|
||||||
|
|
||||||
@@ -345,12 +372,7 @@ Quality Gate: Consider running /workflow:action-plan-verify to validate TDD task
|
|||||||
- **Phase 5**: Generated IMPL tasks contain internal Red-Green-Refactor cycles
|
- **Phase 5**: Generated IMPL tasks contain internal Red-Green-Refactor cycles
|
||||||
- **Conditional Phase 4**: Conflict resolution only if conflict_risk ≥ medium
|
- **Conditional Phase 4**: Conflict resolution only if conflict_risk ≥ medium
|
||||||
|
|
||||||
### Benefits
|
|
||||||
|
|
||||||
- ✓ Real-time visibility into TDD workflow execution
|
|
||||||
- ✓ Clear mental model: SlashCommand = attach → execute → collapse
|
|
||||||
- ✓ Test-aware planning with coverage analysis
|
|
||||||
- ✓ Self-contained TDD cycles within each IMPL task
|
|
||||||
|
|
||||||
**Note**: See individual Phase descriptions (Phase 3, 4, 5) for detailed TodoWrite Update examples with full JSON structures.
|
**Note**: See individual Phase descriptions (Phase 3, 4, 5) for detailed TodoWrite Update examples with full JSON structures.
|
||||||
|
|
||||||
@@ -428,8 +450,7 @@ Convert user input to TDD-structured format:
|
|||||||
- `/workflow:tools:test-context-gather` - Phase 3: Analyze existing test patterns and coverage
|
- `/workflow:tools:test-context-gather` - Phase 3: Analyze existing test patterns and coverage
|
||||||
- `/workflow:tools:conflict-resolution` - Phase 4: Detect and resolve conflicts (auto-triggered if conflict_risk ≥ medium)
|
- `/workflow:tools:conflict-resolution` - Phase 4: Detect and resolve conflicts (auto-triggered if conflict_risk ≥ medium)
|
||||||
- `/compact` - Phase 4: Memory optimization (if context approaching limits)
|
- `/compact` - Phase 4: Memory optimization (if context approaching limits)
|
||||||
- `/workflow:tools:task-generate-tdd` - Phase 5: Generate TDD tasks with agent-driven approach (default, autonomous)
|
- `/workflow:tools:task-generate-tdd` - Phase 5: Generate TDD tasks (CLI tool usage determined semantically)
|
||||||
- `/workflow:tools:task-generate-tdd --cli-execute` - Phase 5: Generate TDD tasks with CLI tools (Gemini/Qwen, when `--cli-execute` flag used)
|
|
||||||
|
|
||||||
**Follow-up Commands**:
|
**Follow-up Commands**:
|
||||||
- `/workflow:action-plan-verify` - Recommended: Verify TDD plan quality and structure before execution
|
- `/workflow:action-plan-verify` - Recommended: Verify TDD plan quality and structure before execution
|
||||||
|
|||||||
@@ -18,6 +18,39 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(gemini:*)
|
|||||||
- Validate TDD cycle execution
|
- Validate TDD cycle execution
|
||||||
- Generate compliance report
|
- Generate compliance report
|
||||||
|
|
||||||
|
## Execution Process
|
||||||
|
|
||||||
|
```
|
||||||
|
Input Parsing:
|
||||||
|
└─ Decision (session argument):
|
||||||
|
├─ session-id provided → Use provided session
|
||||||
|
└─ No session-id → Auto-detect active session
|
||||||
|
|
||||||
|
Phase 1: Session Discovery
|
||||||
|
├─ Validate session directory exists
|
||||||
|
└─ TodoWrite: Mark phase 1 completed
|
||||||
|
|
||||||
|
Phase 2: Task Chain Validation
|
||||||
|
├─ Load all task JSONs from .task/
|
||||||
|
├─ Extract task IDs and group by feature
|
||||||
|
├─ Validate TDD structure:
|
||||||
|
│ ├─ TEST-N.M → IMPL-N.M → REFACTOR-N.M chain
|
||||||
|
│ ├─ Dependency verification
|
||||||
|
│ └─ Meta field validation (tdd_phase, agent)
|
||||||
|
└─ TodoWrite: Mark phase 2 completed
|
||||||
|
|
||||||
|
Phase 3: Test Execution Analysis
|
||||||
|
└─ /workflow:tools:tdd-coverage-analysis
|
||||||
|
├─ Coverage metrics extraction
|
||||||
|
├─ TDD cycle verification
|
||||||
|
└─ Compliance score calculation
|
||||||
|
|
||||||
|
Phase 4: Compliance Report Generation
|
||||||
|
├─ Gemini analysis for comprehensive report
|
||||||
|
├─ Generate TDD_COMPLIANCE_REPORT.md
|
||||||
|
└─ Return summary to user
|
||||||
|
```
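The Phase 2 chain check could look roughly like the sketch below; it only verifies that each TEST task has matching IMPL and REFACTOR task files, and the `TEST-N.M`/`IMPL-N.M`/`REFACTOR-N.M` file naming is taken from the tree above, not from a schema guarantee:

```bash
# Sketch: verify the TEST -> IMPL -> REFACTOR chain by task file naming
TASK_DIR=".workflow/active/${SESSION_ID}/.task"
for test_task in "$TASK_DIR"/TEST-*.json; do
  [ -e "$test_task" ] || continue                 # no TEST tasks found
  id="${test_task##*/TEST-}"; id="${id%.json}"    # e.g. "1.2"
  [ -f "$TASK_DIR/IMPL-$id.json" ]     || echo "Missing IMPL-$id for TEST-$id"
  [ -f "$TASK_DIR/REFACTOR-$id.json" ] || echo "Missing REFACTOR-$id for TEST-$id"
done
```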
|
||||||
|
|
||||||
## 4-Phase Execution
|
## 4-Phase Execution
|
||||||
|
|
||||||
### Phase 1: Session Discovery
|
### Phase 1: Session Discovery
|
||||||
|
|||||||
File diff suppressed because it is too large
@@ -1,7 +1,7 @@
|
|||||||
---
|
---
|
||||||
name: test-fix-gen
|
name: test-fix-gen
|
||||||
description: Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning
|
description: Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning
|
||||||
argument-hint: "[--use-codex] [--cli-execute] (source-session-id | \"feature description\" | /path/to/file.md)"
|
argument-hint: "(source-session-id | \"feature description\" | /path/to/file.md)"
|
||||||
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
||||||
---
|
---
|
||||||
|
|
||||||
@@ -43,7 +43,7 @@ fi
|
|||||||
- **Session Isolation**: Creates independent `WFS-test-[slug]` session
|
- **Session Isolation**: Creates independent `WFS-test-[slug]` session
|
||||||
- **Context-First**: Gathers implementation context via appropriate method
|
- **Context-First**: Gathers implementation context via appropriate method
|
||||||
- **Format Reuse**: Creates standard `IMPL-*.json` tasks with `meta.type: "test-fix"`
|
- **Format Reuse**: Creates standard `IMPL-*.json` tasks with `meta.type: "test-fix"`
|
||||||
- **Manual First**: Default to manual fixes, use `--use-codex` for automation
|
- **Semantic CLI Selection**: CLI tool usage is determined from the user's task description
|
||||||
- **Automatic Detection**: Input pattern determines execution mode
|
- **Automatic Detection**: Input pattern determines execution mode
|
||||||
|
|
||||||
### Coordinator Role
|
### Coordinator Role
|
||||||
@@ -59,8 +59,8 @@ This command is a **pure planning coordinator**:
|
|||||||
- **All execution delegated to `/workflow:test-cycle-execute`**
|
- **All execution delegated to `/workflow:test-cycle-execute`**
|
||||||
|
|
||||||
**Task Attachment Model**:
|
**Task Attachment Model**:
|
||||||
- SlashCommand invocation **expands workflow** by attaching sub-tasks to current TodoWrite
|
- SlashCommand dispatch **expands workflow** by attaching sub-tasks to current TodoWrite
|
||||||
- When a sub-command is invoked (e.g., `/workflow:tools:test-context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
|
- When dispatching a sub-command (e.g., `/workflow:tools:test-context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
|
||||||
- Orchestrator **executes these attached tasks** sequentially
|
- Orchestrator **executes these attached tasks** sequentially
|
||||||
- After completion, attached tasks are **collapsed** back to high-level phase summary
|
- After completion, attached tasks are **collapsed** back to high-level phase summary
|
||||||
- This is **task expansion**, not external delegation
|
- This is **task expansion**, not external delegation
|
||||||
@@ -79,16 +79,14 @@ This command is a **pure planning coordinator**:
|
|||||||
|
|
||||||
```bash
|
```bash
|
||||||
# Basic syntax
|
# Basic syntax
|
||||||
/workflow:test-fix-gen [FLAGS] <INPUT>
|
/workflow:test-fix-gen <INPUT>
|
||||||
|
|
||||||
# Flags (optional)
|
|
||||||
--use-codex # Enable Codex automated fixes in IMPL-002
|
|
||||||
--cli-execute # Enable CLI execution in IMPL-001
|
|
||||||
|
|
||||||
# Input
|
# Input
|
||||||
<INPUT> # Session ID, description, or file path
|
<INPUT> # Session ID, description, or file path
|
||||||
```
|
```
|
||||||
|
|
||||||
|
**Note**: CLI tool usage is determined semantically from the task description. To request CLI execution, include it in your description (e.g., "use Codex for automated fixes").
|
||||||
|
|
||||||
### Usage Examples
|
### Usage Examples
|
||||||
|
|
||||||
#### Session Mode
|
#### Session Mode
|
||||||
@@ -96,11 +94,8 @@ This command is a **pure planning coordinator**:
|
|||||||
# Test validation for completed implementation
|
# Test validation for completed implementation
|
||||||
/workflow:test-fix-gen WFS-user-auth-v2
|
/workflow:test-fix-gen WFS-user-auth-v2
|
||||||
|
|
||||||
# With automated fixes
|
# With semantic CLI request
|
||||||
/workflow:test-fix-gen --use-codex WFS-api-endpoints
|
/workflow:test-fix-gen WFS-api-endpoints # Add "use Codex" in description for automated fixes
|
||||||
|
|
||||||
# With CLI execution
|
|
||||||
/workflow:test-fix-gen --cli-execute --use-codex WFS-payment-flow
|
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Prompt Mode - Text Description
|
#### Prompt Mode - Text Description
|
||||||
@@ -108,17 +103,14 @@ This command is a **pure planning coordinator**:
|
|||||||
# Generate tests from feature description
|
# Generate tests from feature description
|
||||||
/workflow:test-fix-gen "Test the user authentication API endpoints in src/auth/api.ts"
|
/workflow:test-fix-gen "Test the user authentication API endpoints in src/auth/api.ts"
|
||||||
|
|
||||||
# With automated fixes
|
# With CLI execution (semantic)
|
||||||
/workflow:test-fix-gen --use-codex "Test user registration and login flows"
|
/workflow:test-fix-gen "Test user registration and login flows, use Codex for automated fixes"
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Prompt Mode - File Reference
|
#### Prompt Mode - File Reference
|
||||||
```bash
|
```bash
|
||||||
# Generate tests from requirements file
|
# Generate tests from requirements file
|
||||||
/workflow:test-fix-gen ./docs/api-requirements.md
|
/workflow:test-fix-gen ./docs/api-requirements.md
|
||||||
|
|
||||||
# With flags
|
|
||||||
/workflow:test-fix-gen --use-codex --cli-execute ./specs/feature.md
|
|
||||||
```
|
```
|
||||||
|
|
||||||
### Mode Comparison
|
### Mode Comparison
|
||||||
@@ -136,32 +128,50 @@ This command is a **pure planning coordinator**:
|
|||||||
|
|
||||||
### Core Execution Rules
|
### Core Execution Rules
|
||||||
|
|
||||||
1. **Start Immediately**: First action is TodoWrite, second is Phase 1 session creation
|
1. **Start Immediately**: First action is TodoWrite, second is dispatching Phase 1 session creation
|
||||||
2. **No Preliminary Analysis**: Do not read files before Phase 1
|
2. **No Preliminary Analysis**: Do not read files before Phase 1
|
||||||
3. **Parse Every Output**: Extract required data from each phase for next phase
|
3. **Parse Every Output**: Extract required data from each phase for next phase
|
||||||
4. **Sequential Execution**: Each phase depends on previous phase's output
|
4. **Sequential Execution**: Each phase depends on previous phase's output
|
||||||
5. **Complete All Phases**: Do not return until Phase 5 completes
|
5. **Complete All Phases**: Do not return until Phase 5 completes
|
||||||
6. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
|
6. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
|
||||||
7. **Automatic Detection**: Mode auto-detected from input pattern
|
7. **Automatic Detection**: Mode auto-detected from input pattern
|
||||||
8. **Parse Flags**: Extract `--use-codex` and `--cli-execute` flags for Phase 4
|
8. **Semantic CLI Detection**: CLI tool usage is determined from the user's task description and passed to Phase 4
|
||||||
9. **Task Attachment Model**: SlashCommand invocation **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
|
9. **Task Attachment Model**: SlashCommand dispatch **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
|
||||||
10. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
|
10. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
|
||||||
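As a hedged illustration of rule 7 (automatic mode detection), the input could be classified roughly as follows; the exact patterns are not specified by the command, so treat these regexes as assumptions drawn from the usage examples:

```javascript
// Sketch: classify <INPUT> as Session Mode, file-based Prompt Mode, or text Prompt Mode.
// The WFS- prefix and file-extension checks mirror the examples above and are assumptions.
function detectInputMode(input) {
  if (/^WFS-[\w-]+$/.test(input)) return "session";                    // e.g., WFS-user-auth-v2
  if (/\.(md|txt|json)$/i.test(input) || input.startsWith("./")) return "file"; // e.g., ./docs/api-requirements.md
  return "prompt";                                                      // free-text description
}
```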
|
|
||||||
### 5-Phase Execution
|
### 5-Phase Execution
|
||||||
|
|
||||||
#### Phase 1: Create Test Session
|
#### Phase 1: Create Test Session
|
||||||
|
|
||||||
**Command**:
|
**Step 1.0: Load Source Session Intent (Session Mode Only)** - Preserve the user's original task description for semantic CLI selection
|
||||||
- **Session Mode**: `SlashCommand("/workflow:session:start --new \"Test validation for [sourceSessionId]\"")`
|
|
||||||
- **Prompt Mode**: `SlashCommand("/workflow:session:start --new \"Test generation for: [description]\"")`
|
```javascript
|
||||||
|
// Session Mode: Read source session metadata to get original task description
|
||||||
|
Read(".workflow/active/[sourceSessionId]/workflow-session.json")
|
||||||
|
// OR if context-package exists:
|
||||||
|
Read(".workflow/active/[sourceSessionId]/.process/context-package.json")
|
||||||
|
|
||||||
|
// Extract: metadata.task_description or project/description field
|
||||||
|
// This preserves user's CLI tool preferences (e.g., "use Codex for fixes")
|
||||||
|
```
|
||||||
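A minimal sketch of Step 1.0's extraction logic, assuming the metadata shapes hinted at in the comments above (the field names may differ in the real schema):

```javascript
const fs = require("fs");

// Sketch: pull the original task description out of session metadata,
// preferring context-package.json when it exists.
function loadOriginalTaskDescription(sourceSessionId) {
  const base = `.workflow/active/${sourceSessionId}`;
  const pkgPath = `${base}/.process/context-package.json`;
  const sessionPath = `${base}/workflow-session.json`;
  const file = fs.existsSync(pkgPath) ? pkgPath : sessionPath;
  const data = JSON.parse(fs.readFileSync(file, "utf8"));
  // Field names are assumptions based on the comments above.
  return data?.metadata?.task_description || data?.project?.description || data?.description || "";
}
```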
|
|
||||||
|
**Step 1.1: Dispatch** - Create test workflow session with preserved intent
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
// Session Mode - Include original task description to enable semantic CLI selection
|
||||||
|
SlashCommand(command="/workflow:session:start --type test --new \"Test validation for [sourceSessionId]: [originalTaskDescription]\"")
|
||||||
|
|
||||||
|
// Prompt Mode - User's description already contains their intent
|
||||||
|
SlashCommand(command="/workflow:session:start --type test --new \"Test generation for: [description]\"")
|
||||||
|
```
|
||||||
|
|
||||||
**Input**: User argument (session ID, description, or file path)
|
**Input**: User argument (session ID, description, or file path)
|
||||||
|
|
||||||
**Expected Behavior**:
|
**Expected Behavior**:
|
||||||
- Creates new session: `WFS-test-[slug]`
|
- Creates new session: `WFS-test-[slug]`
|
||||||
- Writes `workflow-session.json` metadata:
|
- Writes `workflow-session.json` metadata with `type: "test"`
|
||||||
- **Session Mode**: Includes `workflow_type: "test_session"`, `source_session_id: "[sourceId]"`
|
- **Session Mode**: Additionally includes `source_session_id: "[sourceId]"` and a description carrying the original user intent
|
||||||
- **Prompt Mode**: Includes `workflow_type: "test_session"` only
|
- **Prompt Mode**: Uses user's description (already contains intent)
|
||||||
- Returns new session ID
|
- Returns new session ID
|
||||||
|
|
||||||
**Parse Output**:
|
**Parse Output**:
|
||||||
@@ -177,9 +187,15 @@ This command is a **pure planning coordinator**:
|
|||||||
|
|
||||||
#### Phase 2: Gather Test Context
|
#### Phase 2: Gather Test Context
|
||||||
|
|
||||||
**Command**:
|
**Step 2.1: Dispatch** - Gather test context via appropriate method
|
||||||
- **Session Mode**: `SlashCommand("/workflow:tools:test-context-gather --session [testSessionId]")`
|
|
||||||
- **Prompt Mode**: `SlashCommand("/workflow:tools:context-gather --session [testSessionId] \"[task_description]\"")`
|
```javascript
|
||||||
|
// Session Mode
|
||||||
|
SlashCommand(command="/workflow:tools:test-context-gather --session [testSessionId]")
|
||||||
|
|
||||||
|
// Prompt Mode
|
||||||
|
SlashCommand(command="/workflow:tools:context-gather --session [testSessionId] \"[task_description]\"")
|
||||||
|
```
|
||||||
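Tying the detected mode to the Phase 2 dispatch, a small sketch of the command-string selection (string construction only; the actual dispatch still goes through SlashCommand):

```javascript
// Sketch: build the Phase 2 dispatch string for the detected mode.
function buildPhase2Command(mode, testSessionId, taskDescription) {
  if (mode === "session") {
    return `/workflow:tools:test-context-gather --session ${testSessionId}`;
  }
  // Prompt Mode (free text or file input)
  return `/workflow:tools:context-gather --session ${testSessionId} "${taskDescription}"`;
}
```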
|
|
||||||
**Input**: `testSessionId` from Phase 1
|
**Input**: `testSessionId` from Phase 1
|
||||||
|
|
||||||
@@ -208,7 +224,11 @@ This command is a **pure planning coordinator**:
|
|||||||
|
|
||||||
#### Phase 3: Test Generation Analysis
|
#### Phase 3: Test Generation Analysis
|
||||||
|
|
||||||
**Command**: `SlashCommand("/workflow:tools:test-concept-enhanced --session [testSessionId] --context [contextPath]")`
|
**Step 3.1: Dispatch** - Generate test requirements using Gemini
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/workflow:tools:test-concept-enhanced --session [testSessionId] --context [contextPath]")
|
||||||
|
```
|
||||||
|
|
||||||
**Input**:
|
**Input**:
|
||||||
- `testSessionId` from Phase 1
|
- `testSessionId` from Phase 1
|
||||||
@@ -264,12 +284,16 @@ For each targeted file/function, Gemini MUST generate:
|
|||||||
|
|
||||||
#### Phase 4: Generate Test Tasks
|
#### Phase 4: Generate Test Tasks
|
||||||
|
|
||||||
**Command**: `SlashCommand("/workflow:tools:test-task-generate [--use-codex] [--cli-execute] --session [testSessionId]")`
|
**Step 4.1: Dispatch** - Generate test task JSONs
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/workflow:tools:test-task-generate --session [testSessionId]")
|
||||||
|
```
|
||||||
|
|
||||||
**Input**:
|
**Input**:
|
||||||
- `testSessionId` from Phase 1
|
- `testSessionId` from Phase 1
|
||||||
- `--use-codex` flag (if present) - Controls IMPL-002 fix mode
|
|
||||||
- `--cli-execute` flag (if present) - Controls IMPL-001 generation mode
|
**Note**: CLI tool usage is determined semantically from the user's task description.
|
||||||
|
|
||||||
**Expected Behavior**:
|
**Expected Behavior**:
|
||||||
- Parse TEST_ANALYSIS_RESULTS.md from Phase 3 (multi-layered test plan)
|
- Parse TEST_ANALYSIS_RESULTS.md from Phase 3 (multi-layered test plan)
|
||||||
@@ -343,26 +367,56 @@ CRITICAL - Next Steps:
|
|||||||
|
|
||||||
**Core Concept**: Dynamic task attachment and collapse for test-fix-gen workflow with dual-mode support (Session Mode and Prompt Mode).
|
**Core Concept**: Dynamic task attachment and collapse for test-fix-gen workflow with dual-mode support (Session Mode and Prompt Mode).
|
||||||
|
|
||||||
|
#### Initial TodoWrite Structure
|
||||||
|
|
||||||
|
```json
|
||||||
|
[
|
||||||
|
{"content": "Phase 1: Create Test Session", "status": "in_progress", "activeForm": "Creating test session"},
|
||||||
|
{"content": "Phase 2: Gather Test Context", "status": "pending", "activeForm": "Gathering test context"},
|
||||||
|
{"content": "Phase 3: Test Generation Analysis", "status": "pending", "activeForm": "Analyzing test generation"},
|
||||||
|
{"content": "Phase 4: Generate Test Tasks", "status": "pending", "activeForm": "Generating test tasks"},
|
||||||
|
{"content": "Phase 5: Return Summary", "status": "pending", "activeForm": "Completing"}
|
||||||
|
]
|
||||||
|
```
|
||||||
|
|
||||||
#### Key Principles
|
#### Key Principles
|
||||||
|
|
||||||
1. **Task Attachment** (when SlashCommand invoked):
|
1. **Task Attachment** (when SlashCommand dispatched):
|
||||||
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
|
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
|
||||||
- Example: `/workflow:tools:test-context-gather` (Session Mode) or `/workflow:tools:context-gather` (Prompt Mode) attaches 3 sub-tasks (Phase 2.1, 2.2, 2.3)
|
- Example - Phase 2 with sub-tasks:
|
||||||
- First attached task marked as `in_progress`, others as `pending`
|
```json
|
||||||
- Orchestrator **executes** these attached tasks sequentially
|
[
|
||||||
|
{"content": "Phase 1: Create Test Session", "status": "completed", "activeForm": "Creating test session"},
|
||||||
|
{"content": "Phase 2: Gather Test Context", "status": "in_progress", "activeForm": "Gathering test context"},
|
||||||
|
{"content": " → Load context and analyze coverage", "status": "in_progress", "activeForm": "Loading context"},
|
||||||
|
{"content": " → Detect test framework and conventions", "status": "pending", "activeForm": "Detecting framework"},
|
||||||
|
{"content": " → Generate context package", "status": "pending", "activeForm": "Generating context"},
|
||||||
|
{"content": "Phase 3: Test Generation Analysis", "status": "pending", "activeForm": "Analyzing test generation"},
|
||||||
|
{"content": "Phase 4: Generate Test Tasks", "status": "pending", "activeForm": "Generating test tasks"},
|
||||||
|
{"content": "Phase 5: Return Summary", "status": "pending", "activeForm": "Completing"}
|
||||||
|
]
|
||||||
|
```
|
||||||
|
|
||||||
2. **Task Collapse** (after sub-tasks complete):
|
2. **Task Collapse** (after sub-tasks complete):
|
||||||
- Remove detailed sub-tasks from TodoWrite
|
- Remove detailed sub-tasks from TodoWrite
|
||||||
- **Collapse** to high-level phase summary
|
- **Collapse** to high-level phase summary
|
||||||
- Example: Phase 2.1-2.3 collapse to "Gather test coverage context: completed"
|
- Example - Phase 2 completed:
|
||||||
- Maintains clean orchestrator-level view
|
```json
|
||||||
|
[
|
||||||
|
{"content": "Phase 1: Create Test Session", "status": "completed", "activeForm": "Creating test session"},
|
||||||
|
{"content": "Phase 2: Gather Test Context", "status": "completed", "activeForm": "Gathering test context"},
|
||||||
|
{"content": "Phase 3: Test Generation Analysis", "status": "in_progress", "activeForm": "Analyzing test generation"},
|
||||||
|
{"content": "Phase 4: Generate Test Tasks", "status": "pending", "activeForm": "Generating test tasks"},
|
||||||
|
{"content": "Phase 5: Return Summary", "status": "pending", "activeForm": "Completing"}
|
||||||
|
]
|
||||||
|
```
|
||||||
|
|
||||||
3. **Continuous Execution**:
|
3. **Continuous Execution**:
|
||||||
- After collapse, automatically proceed to next pending phase
|
- After collapse, automatically proceed to next pending phase
|
||||||
- No user intervention required between phases
|
- No user intervention required between phases
|
||||||
- TodoWrite dynamically reflects current execution state
|
- TodoWrite dynamically reflects current execution state
|
||||||
|
|
||||||
**Lifecycle Summary**: Initial pending tasks → Phase invoked (tasks ATTACHED with mode-specific context gathering) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary) → Next phase begins → Repeat until all phases complete.
|
**Lifecycle Summary**: Initial pending tasks → Phase dispatched (tasks ATTACHED with mode-specific context gathering) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary) → Next phase begins → Repeat until all phases complete.
|
||||||
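The attach/collapse lifecycle can be pictured as two array operations on the TodoWrite list; this is a sketch of the bookkeeping only, with made-up helper names:

```javascript
// Sketch: attach sub-tasks under a phase, then collapse them when the phase completes.
function attachSubTasks(todos, phaseContent, subTasks) {
  const i = todos.findIndex(t => t.content === phaseContent);
  const attached = subTasks.map((t, n) => ({ ...t, status: n === 0 ? "in_progress" : "pending" }));
  return [...todos.slice(0, i + 1), ...attached, ...todos.slice(i + 1)];
}

function collapsePhase(todos, phaseContent) {
  return todos
    .filter(t => !t.content.trimStart().startsWith("→"))                 // drop attached "→" sub-tasks
    .map(t => (t.content === phaseContent ? { ...t, status: "completed" } : t));
}
```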
|
|
||||||
#### Test-Fix-Gen Specific Features
|
#### Test-Fix-Gen Specific Features
|
||||||
|
|
||||||
@@ -372,16 +426,8 @@ CRITICAL - Next Steps:
|
|||||||
- **Phase 2**: Mode-specific context gathering (session summaries vs codebase analysis)
|
- **Phase 2**: Mode-specific context gathering (session summaries vs codebase analysis)
|
||||||
- **Phase 3**: Multi-layered test requirements analysis (L0: Static, L1: Unit, L2: Integration, L3: E2E)
|
- **Phase 3**: Multi-layered test requirements analysis (L0: Static, L1: Unit, L2: Integration, L3: E2E)
|
||||||
- **Phase 4**: Multi-task generation with quality gate (IMPL-001, IMPL-001.5-review, IMPL-002)
|
- **Phase 4**: Multi-task generation with quality gate (IMPL-001, IMPL-001.5-review, IMPL-002)
|
||||||
- **Fix Mode Configuration**: `--use-codex` flag controls IMPL-002 fix mode (manual vs automated)
|
- **Fix Mode Configuration**: CLI tool usage is determined semantically from the user's task description
|
||||||
|
|
||||||
**Benefits**:
|
|
||||||
- Real-time visibility into attached tasks during execution
|
|
||||||
- Clean orchestrator-level summary after tasks complete
|
|
||||||
- Clear mental model: SlashCommand = attach tasks, not delegate work
|
|
||||||
- Dual-mode support: Both Session Mode and Prompt Mode use same attachment pattern
|
|
||||||
- Dynamic attachment/collapse maintains clarity
|
|
||||||
|
|
||||||
**Note**: Unlike other workflow orchestrators, this file consolidates TodoWrite examples in this section rather than distributing them across Phase descriptions for better dual-mode clarity.
|
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
@@ -479,16 +525,15 @@ If quality gate fails:
|
|||||||
- Task ID: `IMPL-002`
|
- Task ID: `IMPL-002`
|
||||||
- `meta.type: "test-fix"`
|
- `meta.type: "test-fix"`
|
||||||
- `meta.agent: "@test-fix-agent"`
|
- `meta.agent: "@test-fix-agent"`
|
||||||
- `meta.use_codex: true|false` (based on `--use-codex` flag)
|
|
||||||
- `context.depends_on: ["IMPL-001"]`
|
- `context.depends_on: ["IMPL-001"]`
|
||||||
- `context.requirements`: Execute and fix tests
|
- `context.requirements`: Execute and fix tests
|
||||||
|
|
||||||
**Test-Fix Cycle Specification**:
|
**Test-Fix Cycle Specification**:
|
||||||
**Note**: This specification describes what test-cycle-execute orchestrator will do. The agent only executes single tasks.
|
**Note**: This specification describes what test-cycle-execute orchestrator will do. The agent only executes single tasks.
|
||||||
- **Cycle Pattern** (orchestrator-managed): test → gemini_diagnose → manual_fix (or codex) → retest
|
- **Cycle Pattern** (orchestrator-managed): test → gemini_diagnose → fix (agent or CLI) → retest
|
||||||
- **Tools Configuration** (orchestrator-controlled):
|
- **Tools Configuration** (orchestrator-controlled):
|
||||||
- Gemini for analysis with bug-fix template → surgical fix suggestions
|
- Gemini for analysis with bug-fix template → surgical fix suggestions
|
||||||
- Manual fix application (default) OR Codex if `--use-codex` flag (resume mechanism)
|
- Agent fix application (default) OR CLI if `command` field present in implementation_approach
|
||||||
- **Exit Conditions** (orchestrator-enforced):
|
- **Exit Conditions** (orchestrator-enforced):
|
||||||
- Success: All tests pass
|
- Success: All tests pass
|
||||||
- Failure: Max iterations reached (5)
|
- Failure: Max iterations reached (5)
|
||||||
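A hedged sketch of the cycle the test-cycle-execute orchestrator enforces; the helper functions are placeholders, and only the loop shape and the 5-iteration cap come from the specification above:

```javascript
// Sketch: test → gemini_diagnose → fix → retest, capped at 5 iterations.
async function runTestFixCycle({ runTests, diagnose, applyFix, maxIterations = 5 }) {
  let result = await runTests();
  for (let i = 1; !result.allPassed && i <= maxIterations; i++) {
    const diagnosis = await diagnose(result.failures); // e.g., Gemini with the bug-fix template
    await applyFix(diagnosis);                         // agent fix, or CLI when a `command` field is present
    result = await runTests();                         // retest
  }
  return result.allPassed
    ? { status: "success" }
    : { status: "failed", reason: "max iterations reached" };
}
```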
@@ -534,11 +579,11 @@ WFS-test-[session]/
|
|||||||
**File**: `workflow-session.json`
|
**File**: `workflow-session.json`
|
||||||
|
|
||||||
**Session Mode** includes:
|
**Session Mode** includes:
|
||||||
- `workflow_type: "test_session"`
|
- `type: "test"` (set by session:start --type test)
|
||||||
- `source_session_id: "[sourceSessionId]"` (enables automatic cross-session context)
|
- `source_session_id: "[sourceSessionId]"` (enables automatic cross-session context)
|
||||||
|
|
||||||
**Prompt Mode** includes:
|
**Prompt Mode** includes:
|
||||||
- `workflow_type: "test_session"`
|
- `type: "test"` (set by session:start --type test)
|
||||||
- No `source_session_id` field
|
- No `source_session_id` field
|
||||||
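Illustrative metadata shapes for the two modes; the values are placeholders and only the fields named above are assumed to exist:

```javascript
// Sketch: what workflow-session.json roughly contains in each mode.
const sessionModeMetadata = {
  type: "test",                          // set by session:start --type test
  source_session_id: "WFS-user-auth",    // enables automatic cross-session context
  description: "Test validation for WFS-user-auth: <original task description>"
};

const promptModeMetadata = {
  type: "test",                          // no source_session_id in Prompt Mode
  description: "Test generation for: <user description>"
};
```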
|
|
||||||
### Execution Flow Diagram
|
### Execution Flow Diagram
|
||||||
@@ -632,8 +677,7 @@ Key Points:
|
|||||||
4. **Mode Selection**:
|
4. **Mode Selection**:
|
||||||
- Use **Session Mode** for completed workflow validation
|
- Use **Session Mode** for completed workflow validation
|
||||||
- Use **Prompt Mode** for ad-hoc test generation
|
- Use **Prompt Mode** for ad-hoc test generation
|
||||||
- Use `--use-codex` for autonomous fix application
|
- Include "use Codex" in description for autonomous fix application
|
||||||
- Use `--cli-execute` for enhanced generation capabilities
|
|
||||||
|
|
||||||
## Related Commands
|
## Related Commands
|
||||||
|
|
||||||
@@ -646,9 +690,7 @@ Key Points:
|
|||||||
- `/workflow:tools:test-context-gather` - Phase 2 (Session Mode): Gather source session context
|
- `/workflow:tools:test-context-gather` - Phase 2 (Session Mode): Gather source session context
|
||||||
- `/workflow:tools:context-gather` - Phase 2 (Prompt Mode): Analyze codebase directly
|
- `/workflow:tools:context-gather` - Phase 2 (Prompt Mode): Analyze codebase directly
|
||||||
- `/workflow:tools:test-concept-enhanced` - Phase 3: Generate test requirements using Gemini
|
- `/workflow:tools:test-concept-enhanced` - Phase 3: Generate test requirements using Gemini
|
||||||
- `/workflow:tools:test-task-generate` - Phase 4: Generate test task JSONs using action-planning-agent (autonomous, default)
|
- `/workflow:tools:test-task-generate` - Phase 4: Generate test task JSONs (CLI tool usage determined semantically)
|
||||||
- `/workflow:tools:test-task-generate --use-codex` - Phase 4: With automated Codex fixes for IMPL-002 (when `--use-codex` flag used)
|
|
||||||
- `/workflow:tools:test-task-generate --cli-execute` - Phase 4: With CLI execution mode for IMPL-001 test generation (when `--cli-execute` flag used)
|
|
||||||
|
|
||||||
**Follow-up Commands**:
|
**Follow-up Commands**:
|
||||||
- `/workflow:status` - Review generated test tasks
|
- `/workflow:status` - Review generated test tasks
|
||||||
|
|||||||
@@ -1,7 +1,7 @@
|
|||||||
---
|
---
|
||||||
name: test-gen
|
name: test-gen
|
||||||
description: Create independent test-fix workflow session from completed implementation session, analyzing code to generate test tasks
|
description: Create independent test-fix workflow session from completed implementation session, analyzing code to generate test tasks
|
||||||
argument-hint: "[--use-codex] [--cli-execute] source-session-id"
|
argument-hint: "source-session-id"
|
||||||
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
||||||
---
|
---
|
||||||
|
|
||||||
@@ -16,11 +16,11 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
|||||||
- **Context-First**: Prioritizes gathering code changes and summaries from source session
|
- **Context-First**: Prioritizes gathering code changes and summaries from source session
|
||||||
- **Format Reuse**: Creates standard `IMPL-*.json` task, using `meta.type: "test-fix"` for agent assignment
|
- **Format Reuse**: Creates standard `IMPL-*.json` task, using `meta.type: "test-fix"` for agent assignment
|
||||||
- **Parameter Simplification**: Tools auto-detect test session type via metadata, no manual cross-session parameters needed
|
- **Parameter Simplification**: Tools auto-detect test session type via metadata, no manual cross-session parameters needed
|
||||||
- **Manual First**: Default to manual fixes, use `--use-codex` flag for automated Codex fix application
|
- **Semantic CLI Selection**: CLI tool usage is determined by user's task description (e.g., "use Codex for fixes")
|
||||||
|
|
||||||
**Task Attachment Model**:
|
**Task Attachment Model**:
|
||||||
- SlashCommand invocation **expands workflow** by attaching sub-tasks to current TodoWrite
|
- SlashCommand dispatch **expands workflow** by attaching sub-tasks to current TodoWrite
|
||||||
- When a sub-command is invoked (e.g., `/workflow:tools:test-context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
|
- When a sub-command is dispatched (e.g., `/workflow:tools:test-context-gather`), its internal tasks are attached to the orchestrator's TodoWrite
|
||||||
- Orchestrator **executes these attached tasks** sequentially
|
- Orchestrator **executes these attached tasks** sequentially
|
||||||
- After completion, attached tasks are **collapsed** back to high-level phase summary
|
- After completion, attached tasks are **collapsed** back to high-level phase summary
|
||||||
- This is **task expansion**, not external delegation
|
- This is **task expansion**, not external delegation
|
||||||
@@ -48,23 +48,44 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
|||||||
5. **Complete All Phases**: Do not return to user until Phase 5 completes (summary returned)
|
5. **Complete All Phases**: Do not return to user until Phase 5 completes (summary returned)
|
||||||
6. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
|
6. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
|
||||||
7. **Automatic Detection**: context-gather auto-detects test session and gathers source session context
|
7. **Automatic Detection**: context-gather auto-detects test session and gathers source session context
|
||||||
8. **Parse --use-codex Flag**: Extract flag from arguments and pass to Phase 4 (test-task-generate)
|
8. **Semantic CLI Selection**: CLI tool usage is determined from the user's task description and passed to Phase 4
|
||||||
9. **Command Boundary**: This command ends at Phase 5 summary. Test execution is NOT part of this command.
|
9. **Command Boundary**: This command ends at Phase 5 summary. Test execution is NOT part of this command.
|
||||||
10. **Task Attachment Model**: SlashCommand invocation **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
|
10. **Task Attachment Model**: SlashCommand dispatch **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
|
||||||
11. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
|
11. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
|
||||||
|
|
||||||
## 5-Phase Execution
|
## 5-Phase Execution
|
||||||
|
|
||||||
### Phase 1: Create Test Session
|
### Phase 1: Create Test Session
|
||||||
**Command**: `SlashCommand(command="/workflow:session:start --new \"Test validation for [sourceSessionId]\"")`
|
|
||||||
|
|
||||||
**Input**: `sourceSessionId` from user argument (e.g., `WFS-user-auth`)
|
**Step 1.0: Load Source Session Intent** - Preserve the user's original task description for semantic CLI selection
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
// Read source session metadata to get original task description
|
||||||
|
Read(".workflow/active/[sourceSessionId]/workflow-session.json")
|
||||||
|
// OR if context-package exists:
|
||||||
|
Read(".workflow/active/[sourceSessionId]/.process/context-package.json")
|
||||||
|
|
||||||
|
// Extract: metadata.task_description or project/description field
|
||||||
|
// This preserves user's CLI tool preferences (e.g., "use Codex for fixes")
|
||||||
|
```
|
||||||
|
|
||||||
|
**Step 1.1: Dispatch** - Create new test workflow session with preserved intent
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
// Include original task description to enable semantic CLI selection
|
||||||
|
SlashCommand(command="/workflow:session:start --new \"Test validation for [sourceSessionId]: [originalTaskDescription]\"")
|
||||||
|
```
|
||||||
|
|
||||||
|
**Input**:
|
||||||
|
- `sourceSessionId` from user argument (e.g., `WFS-user-auth`)
|
||||||
|
- `originalTaskDescription` from source session metadata (preserves CLI tool preferences)
|
||||||
|
|
||||||
**Expected Behavior**:
|
**Expected Behavior**:
|
||||||
- Creates new session with pattern `WFS-test-[source-slug]` (e.g., `WFS-test-user-auth`)
|
- Creates new session with pattern `WFS-test-[source-slug]` (e.g., `WFS-test-user-auth`)
|
||||||
- Writes metadata to `workflow-session.json`:
|
- Writes metadata to `workflow-session.json`:
|
||||||
- `workflow_type: "test_session"`
|
- `workflow_type: "test_session"`
|
||||||
- `source_session_id: "[sourceSessionId]"`
|
- `source_session_id: "[sourceSessionId]"`
|
||||||
|
- Description includes original user intent for semantic CLI selection
|
||||||
- Returns new session ID for subsequent phases
|
- Returns new session ID for subsequent phases
|
||||||
|
|
||||||
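A small sketch of how the test session ID could be derived from the source session ID; the exact slugging rules are not specified, so this is an assumption based on the example:

```javascript
// Sketch: WFS-user-auth → WFS-test-user-auth
function deriveTestSessionId(sourceSessionId) {
  const slug = sourceSessionId.replace(/^WFS-/, "");
  return `WFS-test-${slug}`;
}

console.log(deriveTestSessionId("WFS-user-auth")); // "WFS-test-user-auth"
```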
**Parse Output**:
|
**Parse Output**:
|
||||||
@@ -82,7 +103,12 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
|||||||
---
|
---
|
||||||
|
|
||||||
### Phase 2: Gather Test Context
|
### Phase 2: Gather Test Context
|
||||||
**Command**: `SlashCommand(command="/workflow:tools:test-context-gather --session [testSessionId]")`
|
|
||||||
|
**Step 2.1: Dispatch** - Gather test coverage context from source session
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/workflow:tools:test-context-gather --session [testSessionId]")
|
||||||
|
```
|
||||||
|
|
||||||
**Input**: `testSessionId` from Phase 1 (e.g., `WFS-test-user-auth`)
|
**Input**: `testSessionId` from Phase 1 (e.g., `WFS-test-user-auth`)
|
||||||
|
|
||||||
@@ -104,9 +130,9 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
|||||||
- Test framework detected
|
- Test framework detected
|
||||||
- Test conventions documented
|
- Test conventions documented
|
||||||
|
|
||||||
<!-- TodoWrite: When test-context-gather invoked, INSERT 3 test-context-gather tasks -->
|
<!-- TodoWrite: When test-context-gather dispatched, INSERT 3 test-context-gather tasks -->
|
||||||
|
|
||||||
**TodoWrite Update (Phase 2 SlashCommand invoked - tasks attached)**:
|
**TodoWrite Update (Phase 2 SlashCommand dispatched - tasks attached)**:
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
|
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
|
||||||
@@ -119,7 +145,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
|||||||
]
|
]
|
||||||
```
|
```
|
||||||
|
|
||||||
**Note**: SlashCommand invocation **attaches** test-context-gather's 3 tasks. Orchestrator **executes** these tasks.
|
**Note**: SlashCommand dispatch **attaches** test-context-gather's 3 tasks. Orchestrator **executes** these tasks.
|
||||||
|
|
||||||
**Next Action**: Tasks attached → **Execute Phase 2.1-2.3** sequentially
|
**Next Action**: Tasks attached → **Execute Phase 2.1-2.3** sequentially
|
||||||
|
|
||||||
@@ -141,7 +167,12 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
|||||||
---
|
---
|
||||||
|
|
||||||
### Phase 3: Test Generation Analysis
|
### Phase 3: Test Generation Analysis
|
||||||
**Command**: `SlashCommand(command="/workflow:tools:test-concept-enhanced --session [testSessionId] --context [testContextPath]")`
|
|
||||||
|
**Step 3.1: Dispatch** - Analyze test requirements with Gemini
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/workflow:tools:test-concept-enhanced --session [testSessionId] --context [testContextPath]")
|
||||||
|
```
|
||||||
|
|
||||||
**Input**:
|
**Input**:
|
||||||
- `testSessionId` from Phase 1
|
- `testSessionId` from Phase 1
|
||||||
@@ -168,9 +199,9 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
|||||||
- Implementation Targets (test files to create)
|
- Implementation Targets (test files to create)
|
||||||
- Success Criteria
|
- Success Criteria
|
||||||
|
|
||||||
<!-- TodoWrite: When test-concept-enhanced invoked, INSERT 3 concept-enhanced tasks -->
|
<!-- TodoWrite: When test-concept-enhanced dispatched, INSERT 3 concept-enhanced tasks -->
|
||||||
|
|
||||||
**TodoWrite Update (Phase 3 SlashCommand invoked - tasks attached)**:
|
**TodoWrite Update (Phase 3 SlashCommand dispatched - tasks attached)**:
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
|
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
|
||||||
@@ -183,7 +214,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
|||||||
]
|
]
|
||||||
```
|
```
|
||||||
|
|
||||||
**Note**: SlashCommand invocation **attaches** test-concept-enhanced's 3 tasks. Orchestrator **executes** these tasks.
|
**Note**: SlashCommand dispatch **attaches** test-concept-enhanced's 3 tasks. Orchestrator **executes** these tasks.
|
||||||
|
|
||||||
**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
|
**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
|
||||||
|
|
||||||
@@ -205,12 +236,17 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
|||||||
---
|
---
|
||||||
|
|
||||||
### Phase 4: Generate Test Tasks
|
### Phase 4: Generate Test Tasks
|
||||||
**Command**: `SlashCommand(command="/workflow:tools:test-task-generate [--use-codex] [--cli-execute] --session [testSessionId]")`
|
|
||||||
|
**Step 4.1: Dispatch** - Generate test task JSON files and planning documents
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
SlashCommand(command="/workflow:tools:test-task-generate --session [testSessionId]")
|
||||||
|
```
|
||||||
|
|
||||||
**Input**:
|
**Input**:
|
||||||
- `testSessionId` from Phase 1
|
- `testSessionId` from Phase 1
|
||||||
- `--use-codex` flag (if present in original command) - Controls IMPL-002 fix mode
|
|
||||||
- `--cli-execute` flag (if present in original command) - Controls IMPL-001 generation mode
|
**Note**: CLI tool usage for fixes is determined semantically from the user's task description (e.g., "use Codex for automated fixes").
|
||||||
|
|
||||||
**Expected Behavior**:
|
**Expected Behavior**:
|
||||||
- Parse TEST_ANALYSIS_RESULTS.md from Phase 3
|
- Parse TEST_ANALYSIS_RESULTS.md from Phase 3
|
||||||
@@ -240,21 +276,20 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
|||||||
- Task ID: `IMPL-002`
|
- Task ID: `IMPL-002`
|
||||||
- `meta.type: "test-fix"`
|
- `meta.type: "test-fix"`
|
||||||
- `meta.agent: "@test-fix-agent"`
|
- `meta.agent: "@test-fix-agent"`
|
||||||
- `meta.use_codex: true|false` (based on --use-codex flag)
|
|
||||||
- `context.depends_on: ["IMPL-001"]`
|
- `context.depends_on: ["IMPL-001"]`
|
||||||
- `context.requirements`: Execute and fix tests
|
- `context.requirements`: Execute and fix tests
|
||||||
- `flow_control.implementation_approach.test_fix_cycle`: Complete cycle specification
|
- `flow_control.implementation_approach.test_fix_cycle`: Complete cycle specification
|
||||||
- **Cycle pattern**: test → gemini_diagnose → manual_fix (or codex if --use-codex) → retest
|
- **Cycle pattern**: test → gemini_diagnose → fix (agent or CLI based on `command` field) → retest
|
||||||
- **Tools configuration**: Gemini for analysis with bug-fix template, manual or Codex for fixes
|
- **Tools configuration**: Gemini for analysis with bug-fix template, agent or CLI for fixes
|
||||||
- **Exit conditions**: Success (all pass) or failure (max iterations)
|
- **Exit conditions**: Success (all pass) or failure (max iterations)
|
||||||
- `flow_control.implementation_approach.modification_points`: 3-phase execution flow
|
- `flow_control.implementation_approach.modification_points`: 3-phase execution flow
|
||||||
- Phase 1: Initial test execution
|
- Phase 1: Initial test execution
|
||||||
- Phase 2: Iterative Gemini diagnosis + manual/Codex fixes (based on flag)
|
- Phase 2: Iterative Gemini diagnosis + fixes (agent or CLI based on step's `command` field)
|
||||||
- Phase 3: Final validation and certification
|
- Phase 3: Final validation and certification
|
||||||
|
|
||||||
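To make the `command`-field rule concrete, a minimal sketch (the step shape is inferred from the bullets above; the example values are placeholders):

```javascript
// Sketch: a step with a `command` field is executed via CLI; otherwise the agent applies the fix.
function selectFixMode(step) {
  return step && typeof step.command === "string" && step.command.length > 0 ? "cli" : "agent";
}

selectFixMode({ description: "Apply suggested patch" });                       // "agent"
selectFixMode({ description: "Automated fix", command: "<cli fix command>" }); // "cli"
```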
<!-- TodoWrite: When test-task-generate invoked, INSERT 3 test-task-generate tasks -->
|
<!-- TodoWrite: When test-task-generate dispatched, INSERT 3 test-task-generate tasks -->
|
||||||
|
|
||||||
**TodoWrite Update (Phase 4 SlashCommand invoked - tasks attached)**:
|
**TodoWrite Update (Phase 4 SlashCommand dispatched - tasks attached)**:
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
|
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
|
||||||
@@ -267,7 +302,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
|
|||||||
]
|
]
|
||||||
```
|
```
|
||||||
|
|
||||||
**Note**: SlashCommand invocation **attaches** test-task-generate's 3 tasks. Orchestrator **executes** these tasks.
|
**Note**: SlashCommand dispatch **attaches** test-task-generate's 3 tasks. Orchestrator **executes** these tasks.
|
||||||
|
|
||||||
**Next Action**: Tasks attached → **Execute Phase 4.1-4.3** sequentially
|
**Next Action**: Tasks attached → **Execute Phase 4.1-4.3** sequentially
|
||||||
|
|
||||||
@@ -307,7 +342,7 @@ Artifacts Created:
|
|||||||
|
|
||||||
Test Framework: [detected framework]
|
Test Framework: [detected framework]
|
||||||
Test Files to Generate: [count]
|
Test Files to Generate: [count]
|
||||||
Fix Mode: [Manual|Codex Automated] (based on --use-codex flag)
|
Fix Mode: [Agent|CLI] (based on `command` field in implementation_approach steps)
|
||||||
|
|
||||||
Review Generated Artifacts:
|
Review Generated Artifacts:
|
||||||
- Test plan: .workflow/[testSessionId]/IMPL_PLAN.md
|
- Test plan: .workflow/[testSessionId]/IMPL_PLAN.md
|
||||||
@@ -329,7 +364,7 @@ Ready for execution. Use appropriate workflow commands to proceed.
|
|||||||
|
|
||||||
### Key Principles
|
### Key Principles
|
||||||
|
|
||||||
1. **Task Attachment** (when SlashCommand invoked):
|
1. **Task Attachment** (when SlashCommand dispatched):
|
||||||
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
|
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
|
||||||
- Example: `/workflow:tools:test-context-gather` attaches 3 sub-tasks (Phase 2.1, 2.2, 2.3)
|
- Example: `/workflow:tools:test-context-gather` attaches 3 sub-tasks (Phase 2.1, 2.2, 2.3)
|
||||||
- First attached task marked as `in_progress`, others as `pending`
|
- First attached task marked as `in_progress`, others as `pending`
|
||||||
@@ -346,20 +381,16 @@ Ready for execution. Use appropriate workflow commands to proceed.
|
|||||||
- No user intervention required between phases
|
- No user intervention required between phases
|
||||||
- TodoWrite dynamically reflects current execution state
|
- TodoWrite dynamically reflects current execution state
|
||||||
|
|
||||||
**Lifecycle Summary**: Initial pending tasks → Phase invoked (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary) → Next phase begins → Repeat until all phases complete.
|
**Lifecycle Summary**: Initial pending tasks → Phase dispatched (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary) → Next phase begins → Repeat until all phases complete.
|
||||||
|
|
||||||
### Test-Gen Specific Features
|
### Test-Gen Specific Features
|
||||||
|
|
||||||
- **Phase 2**: Cross-session context gathering from source implementation session
|
- **Phase 2**: Cross-session context gathering from source implementation session
|
||||||
- **Phase 3**: Test requirements analysis with Gemini for generation strategy
|
- **Phase 3**: Test requirements analysis with Gemini for generation strategy
|
||||||
- **Phase 4**: Dual-task generation (IMPL-001 for test generation, IMPL-002 for test execution)
|
- **Phase 4**: Dual-task generation (IMPL-001 for test generation, IMPL-002 for test execution)
|
||||||
- **Fix Mode Configuration**: `--use-codex` flag controls IMPL-002 fix mode (manual vs automated)
|
- **Fix Mode Configuration**: CLI tool usage is determined semantically from the user's task description
|
||||||
|
|
||||||
|
|
||||||
**Benefits**:
|
|
||||||
- Real-time visibility into attached tasks during execution
|
|
||||||
- Clean orchestrator-level summary after tasks complete
|
|
||||||
- Clear mental model: SlashCommand = attach tasks, not delegate work
|
|
||||||
- Dynamic attachment/collapse maintains clarity
|
|
||||||
|
|
||||||
**Note**: See individual Phase descriptions (Phase 2, 3, 4) for detailed TodoWrite Update examples with full JSON structures.
|
**Note**: See individual Phase descriptions (Phase 2, 3, 4) for detailed TodoWrite Update examples with full JSON structures.
|
||||||
|
|
||||||
@@ -428,7 +459,7 @@ Generates two task definition files:
|
|||||||
- Agent: @test-fix-agent
|
- Agent: @test-fix-agent
|
||||||
- Dependency: IMPL-001 must complete first
|
- Dependency: IMPL-001 must complete first
|
||||||
- Max iterations: 5
|
- Max iterations: 5
|
||||||
- Fix mode: Manual or Codex (based on --use-codex flag)
|
- Fix mode: Agent or CLI (based on `command` field in implementation_approach)
|
||||||
|
|
||||||
See `/workflow:tools:test-task-generate` for complete task JSON schemas.
|
See `/workflow:tools:test-task-generate` for complete task JSON schemas.
|
||||||
|
|
||||||
@@ -465,11 +496,10 @@ Created in `.workflow/active/WFS-test-[session]/`:
|
|||||||
**IMPL-002.json Structure**:
|
**IMPL-002.json Structure**:
|
||||||
- `meta.type: "test-fix"`
|
- `meta.type: "test-fix"`
|
||||||
- `meta.agent: "@test-fix-agent"`
|
- `meta.agent: "@test-fix-agent"`
|
||||||
- `meta.use_codex`: true/false (based on --use-codex flag)
|
|
||||||
- `context.depends_on: ["IMPL-001"]`
|
- `context.depends_on: ["IMPL-001"]`
|
||||||
- `flow_control.implementation_approach.test_fix_cycle`: Complete cycle specification
|
- `flow_control.implementation_approach.test_fix_cycle`: Complete cycle specification
|
||||||
- Gemini diagnosis template
|
- Gemini diagnosis template
|
||||||
- Fix application mode (manual/codex)
|
- Fix application mode (agent or CLI based on `command` field)
|
||||||
- Max iterations: 5
|
- Max iterations: 5
|
||||||
- `flow_control.implementation_approach.modification_points`: 3-phase flow
|
- `flow_control.implementation_approach.modification_points`: 3-phase flow
|
||||||
|
|
||||||
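An illustrative, non-authoritative outline of IMPL-002.json assembled from the fields listed above; see `/workflow:tools:test-task-generate` for the real schema:

```javascript
// Sketch only: field names come from the bullet list above, values are placeholders.
const impl002 = {
  meta: { type: "test-fix", agent: "@test-fix-agent" },
  context: {
    depends_on: ["IMPL-001"],
    requirements: "Execute and fix tests"
  },
  flow_control: {
    implementation_approach: {
      test_fix_cycle: {
        max_iterations: 5,
        pattern: ["test", "gemini_diagnose", "fix", "retest"]
      },
      modification_points: ["initial test execution", "iterative diagnosis + fixes", "final validation"]
    }
  }
};
```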
@@ -487,13 +517,11 @@ See `/workflow:tools:test-task-generate` for complete JSON schemas.
|
|||||||
**Prerequisite Commands**:
|
**Prerequisite Commands**:
|
||||||
- `/workflow:plan` or `/workflow:execute` - Complete implementation session that needs test validation
|
- `/workflow:plan` or `/workflow:execute` - Complete implementation session that needs test validation
|
||||||
|
|
||||||
**Called by This Command** (5 phases):
|
**Dispatched by This Command** (4 phases):
|
||||||
- `/workflow:session:start` - Phase 1: Create independent test workflow session
|
- `/workflow:session:start` - Phase 1: Create independent test workflow session
|
||||||
- `/workflow:tools:test-context-gather` - Phase 2: Analyze test coverage and gather source session context
|
- `/workflow:tools:test-context-gather` - Phase 2: Analyze test coverage and gather source session context
|
||||||
- `/workflow:tools:test-concept-enhanced` - Phase 3: Generate test requirements and strategy using Gemini
|
- `/workflow:tools:test-concept-enhanced` - Phase 3: Generate test requirements and strategy using Gemini
|
||||||
- `/workflow:tools:test-task-generate` - Phase 4: Generate test task JSONs using action-planning-agent (autonomous, default)
|
- `/workflow:tools:test-task-generate` - Phase 4: Generate test task JSONs (CLI tool usage determined semantically)
|
||||||
- `/workflow:tools:test-task-generate --use-codex` - Phase 4: With automated Codex fixes for IMPL-002 (when `--use-codex` flag used)
|
|
||||||
- `/workflow:tools:test-task-generate --cli-execute` - Phase 4: With CLI execution mode for IMPL-001 test generation (when `--cli-execute` flag used)
|
|
||||||
|
|
||||||
**Follow-up Commands**:
|
**Follow-up Commands**:
|
||||||
- `/workflow:status` - Review generated test tasks
|
- `/workflow:status` - Review generated test tasks
|
||||||
|
|||||||
@@ -12,11 +12,6 @@ examples:
|
|||||||
## Purpose
|
## Purpose
|
||||||
Analyzes conflicts between implementation plans and existing codebase, **including module scenario uniqueness detection**, generating multiple resolution strategies with **iterative clarification until boundaries are clear**.
|
Analyzes conflicts between implementation plans and existing codebase, **including module scenario uniqueness detection**, generating multiple resolution strategies with **iterative clarification until boundaries are clear**.
|
||||||
|
|
||||||
**Key Enhancements**:
|
|
||||||
- **Scenario Uniqueness Detection**: Agent searches all existing modules to identify functional overlaps
|
|
||||||
- **Iterative Clarification Loop**: Unlimited questions per conflict until scenario boundaries are uniquely defined (max 10 rounds)
|
|
||||||
- **Dynamic Re-analysis**: Agent updates strategies based on user clarifications
|
|
||||||
|
|
||||||
**Scope**: Detection and strategy generation only - NO code modification or task creation.
|
**Scope**: Detection and strategy generation only - NO code modification or task creation.
|
||||||
|
|
||||||
**Trigger**: Auto-executes in `/workflow:plan` Phase 3 when `conflict_risk ≥ medium`.
|
**Trigger**: Auto-executes in `/workflow:plan` Phase 3 when `conflict_risk ≥ medium`.
|
||||||
@@ -64,6 +59,41 @@ Analyzes conflicts between implementation plans and existing codebase, **includi
|
|||||||
- Module merge/split decisions
|
- Module merge/split decisions
|
||||||
- **Requires iterative clarification until uniqueness confirmed**
|
- **Requires iterative clarification until uniqueness confirmed**
|
||||||
|
|
||||||
|
## Execution Process
|
||||||
|
|
||||||
|
```
|
||||||
|
Input Parsing:
|
||||||
|
├─ Parse flags: --session, --context
|
||||||
|
└─ Validation: Both REQUIRED, conflict_risk >= medium
|
||||||
|
|
||||||
|
Phase 1: Validation
|
||||||
|
├─ Step 1: Verify session directory exists
|
||||||
|
├─ Step 2: Load context-package.json
|
||||||
|
├─ Step 3: Check conflict_risk (skip if none/low)
|
||||||
|
└─ Step 4: Prepare agent task prompt
|
||||||
|
|
||||||
|
Phase 2: CLI-Powered Analysis (Agent)
|
||||||
|
├─ Execute Gemini analysis (Qwen fallback)
|
||||||
|
├─ Detect conflicts including ModuleOverlap category
|
||||||
|
└─ Generate 2-4 strategies per conflict with modifications
|
||||||
|
|
||||||
|
Phase 3: Iterative User Interaction
|
||||||
|
└─ FOR each conflict (one by one):
|
||||||
|
├─ Display conflict with overlap_analysis (if ModuleOverlap)
|
||||||
|
├─ Display strategies (2-4 + custom option)
|
||||||
|
├─ User selects strategy
|
||||||
|
└─ IF clarification_needed:
|
||||||
|
├─ Collect answers
|
||||||
|
├─ Agent re-analysis
|
||||||
|
└─ Loop until uniqueness_confirmed (max 10 rounds)
|
||||||
|
|
||||||
|
Phase 4: Apply Modifications
|
||||||
|
├─ Step 1: Extract modifications from resolved strategies
|
||||||
|
├─ Step 2: Apply using Edit tool
|
||||||
|
├─ Step 3: Update context-package.json (mark resolved)
|
||||||
|
└─ Step 4: Output custom conflict summary (if any)
|
||||||
|
```
|
||||||
|
|
||||||
## Execution Flow
|
## Execution Flow
|
||||||
|
|
||||||
### Phase 1: Validation
|
### Phase 1: Validation
|
||||||
@@ -84,35 +114,44 @@ Task(subagent_type="cli-execution-agent", prompt=`
|
|||||||
- Risk: {conflict_risk}
|
- Risk: {conflict_risk}
|
||||||
- Files: {existing_files_list}
|
- Files: {existing_files_list}
|
||||||
|
|
||||||
|
## Exploration Context (from context-package.exploration_results)
|
||||||
|
- Exploration Count: ${contextPackage.exploration_results?.exploration_count || 0}
|
||||||
|
- Angles Analyzed: ${JSON.stringify(contextPackage.exploration_results?.angles || [])}
|
||||||
|
- Pre-identified Conflict Indicators: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.conflict_indicators || [])}
|
||||||
|
- Critical Files: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.critical_files?.map(f => f.path) || [])}
|
||||||
|
- All Patterns: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.all_patterns || [])}
|
||||||
|
- All Integration Points: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.all_integration_points || [])}
|
||||||
|
|
||||||
## Analysis Steps
|
## Analysis Steps
|
||||||
|
|
||||||
### 1. Load Context
|
### 1. Load Context
|
||||||
- Read existing files from conflict_detection.existing_files
|
- Read existing files from conflict_detection.existing_files
|
||||||
- Load plan from .workflow/active/{session_id}/.process/context-package.json
|
- Load plan from .workflow/active/{session_id}/.process/context-package.json
|
||||||
|
- **NEW**: Load exploration_results and use aggregated_insights for enhanced analysis
|
||||||
- Extract role analyses and requirements
|
- Extract role analyses and requirements
|
||||||
|
|
||||||
### 2. Execute CLI Analysis (Enhanced with Scenario Uniqueness Detection)
|
### 2. Execute CLI Analysis (Enhanced with Exploration + Scenario Uniqueness)
|
||||||
|
|
||||||
Primary (Gemini):
|
Primary (Gemini):
|
||||||
cd {project_root} && gemini -p "
|
cd {project_root} && gemini -p "
|
||||||
PURPOSE: Detect conflicts between plan and codebase, including module scenario overlaps
|
PURPOSE: Detect conflicts between plan and codebase, using exploration insights
|
||||||
TASK:
|
TASK:
|
||||||
• Compare architectures
|
• **Review pre-identified conflict_indicators from exploration results**
|
||||||
|
• Compare architectures (use exploration key_patterns)
|
||||||
• Identify breaking API changes
|
• Identify breaking API changes
|
||||||
• Detect data model incompatibilities
|
• Detect data model incompatibilities
|
||||||
• Assess dependency conflicts
|
• Assess dependency conflicts
|
||||||
• **NEW: Analyze module scenario uniqueness**
|
• **Analyze module scenario uniqueness**
|
||||||
- Extract new module functionality from plan
|
- Use exploration integration_points for precise locations
|
||||||
- Search all existing modules with similar functionality
|
- Cross-validate with exploration critical_files
|
||||||
- Compare scenario coverage and identify overlaps
|
|
||||||
- Generate clarification questions for boundary definition
|
- Generate clarification questions for boundary definition
|
||||||
MODE: analysis
|
MODE: analysis
|
||||||
CONTEXT: @**/*.ts @**/*.js @**/*.tsx @**/*.jsx @.workflow/active/{session_id}/**/*
|
CONTEXT: @**/*.ts @**/*.js @**/*.tsx @**/*.jsx @.workflow/active/{session_id}/**/*
|
||||||
EXPECTED: Conflict list with severity ratings, including ModuleOverlap conflicts with:
|
EXPECTED: Conflict list with severity ratings, including:
|
||||||
- Existing module list with scenarios
|
- Validation of exploration conflict_indicators
|
||||||
- Overlap analysis matrix
|
- ModuleOverlap conflicts with overlap_analysis
|
||||||
- Targeted clarification questions
|
- Targeted clarification questions
|
||||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on breaking changes, migration needs, and functional overlaps | analysis=READ-ONLY
|
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
|
||||||
"
|
"
|
||||||
|
|
||||||
Fallback: Qwen (same prompt) → Claude (manual analysis)
|
Fallback: Qwen (same prompt) → Claude (manual analysis)
|
||||||
|
|||||||
@@ -24,6 +24,37 @@ Orchestrator command that invokes `context-search-agent` to gather comprehensive
|
|||||||
- **Plan Mode**: Full comprehensive analysis (vs lightweight brainstorm mode)
|
- **Plan Mode**: Full comprehensive analysis (vs lightweight brainstorm mode)
|
||||||
- **Standardized Output**: Generate `.workflow/active/{session}/.process/context-package.json`
|
- **Standardized Output**: Generate `.workflow/active/{session}/.process/context-package.json`
|
||||||
|
|
||||||
|
## Execution Process
|
||||||
|
|
||||||
|
```
|
||||||
|
Input Parsing:
|
||||||
|
├─ Parse flags: --session
|
||||||
|
└─ Parse: task_description (required)
|
||||||
|
|
||||||
|
Step 1: Context-Package Detection
|
||||||
|
└─ Decision (existing package):
|
||||||
|
├─ Valid package exists → Return existing (skip execution)
|
||||||
|
└─ No valid package → Continue to Step 2
|
||||||
|
|
||||||
|
Step 2: Complexity Assessment & Parallel Explore (NEW)
|
||||||
|
├─ Analyze task_description → classify Low/Medium/High
|
||||||
|
├─ Select exploration angles (1-4 based on complexity)
|
||||||
|
├─ Launch N cli-explore-agents in parallel
|
||||||
|
│ └─ Each outputs: exploration-{angle}.json
|
||||||
|
└─ Generate explorations-manifest.json
|
||||||
|
|
||||||
|
Step 3: Invoke Context-Search Agent (with exploration input)
|
||||||
|
├─ Phase 1: Initialization & Pre-Analysis
|
||||||
|
├─ Phase 2: Multi-Source Discovery
|
||||||
|
│ ├─ Track 0: Exploration Synthesis (prioritize & deduplicate)
|
||||||
|
│ ├─ Track 1-4: Existing tracks
|
||||||
|
└─ Phase 3: Synthesis & Packaging
|
||||||
|
└─ Generate context-package.json with exploration_results
|
||||||
|
|
||||||
|
Step 4: Output Verification
|
||||||
|
└─ Verify context-package.json contains exploration_results
|
||||||
|
```
|
||||||
|
|
||||||
## Execution Flow
|
## Execution Flow
|
||||||
|
|
||||||
### Step 1: Context-Package Detection
|
### Step 1: Context-Package Detection
|
||||||
@@ -48,17 +79,144 @@ if (file_exists(contextPackagePath)) {
|
|||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
### Step 2: Invoke Context-Search Agent
|
### Step 2: Complexity Assessment & Parallel Explore
|
||||||
|
|
||||||
**Only execute if Step 1 finds no valid package**
|
**Only execute if Step 1 finds no valid package**
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
// 2.1 Complexity Assessment
|
||||||
|
function analyzeTaskComplexity(taskDescription) {
|
||||||
|
const text = taskDescription.toLowerCase();
|
||||||
|
if (/architect|refactor|restructure|modular|cross-module/.test(text)) return 'High';
|
||||||
|
if (/multiple|several|integrate|migrate|extend/.test(text)) return 'Medium';
|
||||||
|
return 'Low';
|
||||||
|
}
|
||||||
|
|
||||||
|
const ANGLE_PRESETS = {
|
||||||
|
architecture: ['architecture', 'dependencies', 'modularity', 'integration-points'],
|
||||||
|
security: ['security', 'auth-patterns', 'dataflow', 'validation'],
|
||||||
|
performance: ['performance', 'bottlenecks', 'caching', 'data-access'],
|
||||||
|
bugfix: ['error-handling', 'dataflow', 'state-management', 'edge-cases'],
|
||||||
|
feature: ['patterns', 'integration-points', 'testing', 'dependencies'],
|
||||||
|
refactor: ['architecture', 'patterns', 'dependencies', 'testing']
|
||||||
|
};
|
||||||
|
|
||||||
|
function selectAngles(taskDescription, complexity) {
|
||||||
|
const text = taskDescription.toLowerCase();
|
||||||
|
let preset = 'feature';
|
||||||
|
if (/refactor|architect|restructure/.test(text)) preset = 'architecture';
|
||||||
|
else if (/security|auth|permission/.test(text)) preset = 'security';
|
||||||
|
else if (/performance|slow|optimi/.test(text)) preset = 'performance';
|
||||||
|
else if (/fix|bug|error|issue/.test(text)) preset = 'bugfix';
|
||||||
|
|
||||||
|
const count = complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1);
|
||||||
|
return ANGLE_PRESETS[preset].slice(0, count);
|
||||||
|
}
|
||||||
|
|
||||||
|
const complexity = analyzeTaskComplexity(task_description);
|
||||||
|
const selectedAngles = selectAngles(task_description, complexity);
|
||||||
|
const sessionFolder = `.workflow/active/${session_id}/.process`;
|
||||||
|
|
||||||
|
// 2.2 Launch Parallel Explore Agents
|
||||||
|
const explorationTasks = selectedAngles.map((angle, index) =>
|
||||||
|
Task(
|
||||||
|
    subagent_type="cli-explore-agent",
    description=`Explore: ${angle}`,
    prompt=`
## Task Objective
Execute **${angle}** exploration for task planning context. Analyze codebase from this specific angle to discover relevant structure, patterns, and constraints.

## Assigned Context
- **Exploration Angle**: ${angle}
- **Task Description**: ${task_description}
- **Session ID**: ${session_id}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/exploration-${angle}.json

## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
3. Execute: cat ~/.claude/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)

## Exploration Strategy (${angle} focus)

**Step 1: Structural Scan** (Bash)
- get_modules_by_depth.sh → identify modules related to ${angle}
- find/rg → locate files relevant to ${angle} aspect
- Analyze imports/dependencies from ${angle} perspective

**Step 2: Semantic Analysis** (Gemini CLI)
- How does existing code handle ${angle} concerns?
- What patterns are used for ${angle}?
- Where would new code integrate from ${angle} viewpoint?

**Step 3: Write Output**
- Consolidate ${angle} findings into JSON
- Identify ${angle}-specific clarification needs

## Expected Output

**File**: ${sessionFolder}/exploration-${angle}.json

**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 3, follow schema exactly

**Required Fields** (all ${angle} focused):
- project_structure: Modules/architecture relevant to ${angle}
- relevant_files: Files affected from ${angle} perspective
  **IMPORTANT**: Use object format with relevance scores for synthesis:
  \`[{path: "src/file.ts", relevance: 0.85, rationale: "Core ${angle} logic"}]\`
  Scores: 0.7+ high priority, 0.5-0.7 medium, <0.5 low
- patterns: ${angle}-related patterns to follow
- dependencies: Dependencies relevant to ${angle}
- integration_points: Where to integrate from ${angle} viewpoint (include file:line locations)
- constraints: ${angle}-specific limitations/conventions
- clarification_needs: ${angle}-related ambiguities (options array + recommended index)
- _metadata.exploration_angle: "${angle}"

## Success Criteria
- [ ] Schema obtained via cat explore-json-schema.json
- [ ] get_modules_by_depth.sh executed
- [ ] At least 3 relevant files identified with ${angle} rationale
- [ ] Patterns are actionable (code examples, not generic advice)
- [ ] Integration points include file:line locations
- [ ] Constraints are project-specific to ${angle}
- [ ] JSON output follows schema exactly
- [ ] clarification_needs includes options + recommended

## Output
Write: ${sessionFolder}/exploration-${angle}.json
Return: 2-3 sentence summary of ${angle} findings
`
  )
);

// 2.3 Generate Manifest after all complete
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`).split('\n').filter(f => f.trim());
const explorationManifest = {
  session_id,
  task_description,
  timestamp: new Date().toISOString(),
  complexity,
  exploration_count: selectedAngles.length,
  angles_explored: selectedAngles,
  explorations: explorationFiles.map(file => {
    const data = JSON.parse(Read(file));
    return { angle: data._metadata.exploration_angle, file: file.split('/').pop(), path: file, index: data._metadata.exploration_index };
  })
};
Write(`${sessionFolder}/explorations-manifest.json`, JSON.stringify(explorationManifest, null, 2));
```

### Step 3: Invoke Context-Search Agent

**Only execute after Step 2 completes**

```javascript
Task(
  subagent_type="context-search-agent",
  description="Gather comprehensive context for plan",
  prompt=`
You are executing as context-search-agent (.claude/agents/context-search-agent.md).

## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution

@@ -67,6 +225,12 @@ You are executing as context-search-agent (.claude/agents/context-search-agent.m
- **Task Description**: ${task_description}
- **Output Path**: .workflow/${session_id}/.process/context-package.json

## Exploration Input (from Step 2)
- **Manifest**: ${sessionFolder}/explorations-manifest.json
- **Exploration Count**: ${explorationManifest.exploration_count}
- **Angles**: ${explorationManifest.angles_explored.join(', ')}
- **Complexity**: ${complexity}

## Mission
Execute complete context-search-agent workflow for implementation planning:

@@ -77,7 +241,8 @@ Execute complete context-search-agent workflow for implementation planning:
4. **Analysis**: Extract keywords, determine scope, classify complexity based on task description and project state

### Phase 2: Multi-Source Context Discovery
Execute all discovery tracks:
- **Track 0**: Exploration Synthesis (load ${sessionFolder}/explorations-manifest.json, prioritize critical_files, deduplicate patterns/integration_points)
- **Track 1**: Historical archive analysis (query manifest.json for lessons learned)
- **Track 2**: Reference documentation (CLAUDE.md, architecture docs)
- **Track 3**: Web examples (use Exa MCP for unfamiliar tech/APIs)
@@ -86,7 +251,7 @@ Execute all 4 discovery tracks:
### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build dependency graph
2. **Synthesize 4-source data**: Merge findings from all sources (archive > docs > code > web). **Prioritize the context from `project.json`** for architecture and tech stack unless code analysis reveals it's outdated.
3. **Populate `project_context`**: Directly use the `overview` from `project.json` to fill the `project_context` section of the output `context-package.json`. Include description, technology_stack, architecture, and key_components.
4. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
5. Perform conflict detection with risk assessment
6. **Inject historical conflicts** from archive analysis into conflict_detection
@@ -95,11 +260,12 @@ Execute all 4 discovery tracks:
## Output Requirements
Complete context-package.json with:
- **metadata**: task_description, keywords, complexity, tech_stack, session_id
- **project_context**: description, technology_stack, architecture, key_components (sourced from `project.json` overview)
- **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
- **dependencies**: {internal[], external[]} with dependency graph
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
- **conflict_detection**: {risk_level, risk_factors, affected_modules[], mitigation_strategy, historical_conflicts[]}
- **exploration_results**: {manifest_path, exploration_count, angles, explorations[], aggregated_insights} (from Track 0)

## Quality Validation
Before completion verify:
@@ -116,7 +282,7 @@ Report completion with statistics.
)
```

### Step 4: Output Verification

After agent completes, verify output:

@@ -126,6 +292,12 @@ const outputPath = `.workflow/${session_id}/.process/context-package.json`;
if (!file_exists(outputPath)) {
  throw new Error("❌ Agent failed to generate context-package.json");
}

// Verify exploration_results included
const pkg = JSON.parse(Read(outputPath));
if (pkg.exploration_results?.exploration_count > 0) {
  console.log(`✅ Exploration results aggregated: ${pkg.exploration_results.exploration_count} angles`);
}
```

## Parameter Reference

@@ -146,6 +318,7 @@ Refer to `context-search-agent.md` Phase 3.7 for complete `context-package.json`
- **dependencies**: Internal and external dependency graphs
- **brainstorm_artifacts**: Brainstorm documents with full content (if exists)
- **conflict_detection**: Risk assessment with mitigation strategies and historical conflicts
- **exploration_results**: Aggregated exploration insights (from parallel explore phase)

## Historical Archive Analysis

@@ -1,258 +1,291 @@
---
name: task-generate-agent
description: Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation
argument-hint: "--session WFS-session-id"
examples:
- /workflow:tools:task-generate-agent --session WFS-auth
---

# Generate Implementation Plan Command

## Overview
Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent. This command produces **planning artifacts only** - it does NOT execute code implementation. Actual code implementation requires a separate execution command (e.g., /workflow:execute).

## Core Philosophy
- **Planning Only**: Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) - does NOT implement code
- **Agent-Driven Document Generation**: Delegate plan generation to action-planning-agent
- **N+1 Parallel Planning**: Auto-detect multi-module projects, enable parallel planning (2+1 or 3+1 mode)
- **Progressive Loading**: Load context incrementally (Core → Selective → On-Demand) due to analysis.md file size
- **Memory-First**: Reuse loaded documents from conversation memory
- **Smart Selection**: Load synthesis_output OR guidance + relevant role analyses, NOT all role analyses
- **MCP-Enhanced**: Use MCP tools for advanced code analysis and research
- **Path Clarity**: All `focus_paths` prefer absolute paths (e.g., `D:\\project\\src\\module`), or clear relative paths from project root (e.g., `./src/module`)

## Execution Process

```
Input Parsing:
├─ Parse flags: --session
└─ Validation: session_id REQUIRED

Phase 1: Context Preparation & Module Detection (Command)
├─ Assemble session paths (metadata, context package, output dirs)
├─ Provide metadata (session_id, execution_mode, mcp_capabilities)
├─ Auto-detect modules from context-package + directory structure
└─ Decision:
   ├─ modules.length == 1 → Single Agent Mode (Phase 2A)
   └─ modules.length >= 2 → Parallel Mode (Phase 2B + Phase 3)

Phase 2A: Single Agent Planning (Original Flow)
├─ Load context package (progressive loading strategy)
├─ Generate Task JSON Files (.task/IMPL-*.json)
├─ Create IMPL_PLAN.md
└─ Generate TODO_LIST.md

Phase 2B: N Parallel Planning (Multi-Module)
├─ Launch N action-planning-agents simultaneously (one per module)
├─ Each agent generates module-scoped tasks (IMPL-{prefix}{seq}.json)
├─ Task ID format: IMPL-A1, IMPL-A2... / IMPL-B1, IMPL-B2...
└─ Each module limited to ≤9 tasks

Phase 3: Integration (+1 Coordinator, Multi-Module Only)
├─ Collect all module task JSONs
├─ Resolve cross-module dependencies (CROSS::{module}::{pattern} → actual ID)
├─ Generate unified IMPL_PLAN.md (grouped by module)
└─ Generate TODO_LIST.md (hierarchical: module → tasks)
```

## Document Generation Lifecycle

### Phase 1: Context Preparation & Module Detection (Command Responsibility)

**Command prepares session paths, metadata, and detects module structure.**

**Session Path Structure**:
```
.workflow/active/WFS-{session-id}/
├── workflow-session.json            # Session metadata
├── .process/
│   └── context-package.json         # Context package with artifact catalog
├── .task/                           # Output: Task JSON files
│   ├── IMPL-A1.json                 # Multi-module: prefixed by module
│   ├── IMPL-A2.json
│   ├── IMPL-B1.json
│   └── ...
├── IMPL_PLAN.md                     # Output: Implementation plan (grouped by module)
└── TODO_LIST.md                     # Output: TODO list (hierarchical)
```

**Command Preparation**:
1. **Assemble Session Paths** for agent prompt:
   - `session_metadata_path`
   - `context_package_path`
   - Output directory paths

2. **Provide Metadata** (simple values):
   - `session_id`
   - `mcp_capabilities` (available MCP tools)

3. **Auto Module Detection** (determines single vs parallel mode):

```javascript
function autoDetectModules(contextPackage, projectRoot) {
  // Priority 1: Explicit frontend/backend separation
  if (exists('src/frontend') && exists('src/backend')) {
    return [
      { name: 'frontend', prefix: 'A', paths: ['src/frontend'] },
      { name: 'backend', prefix: 'B', paths: ['src/backend'] }
    ];
  }

  // Priority 2: Monorepo structure
  if (exists('packages/*') || exists('apps/*')) {
    return detectMonorepoModules(); // Returns 2-3 main packages
  }

  // Priority 3: Context-package dependency clustering
  const modules = clusterByDependencies(contextPackage.dependencies?.internal);
  if (modules.length >= 2) return modules.slice(0, 3);

  // Default: Single module (original flow)
  return [{ name: 'main', prefix: '', paths: ['.'] }];
}
```

**Decision Logic**:
- `modules.length == 1` → Phase 2A (Single Agent, original flow)
- `modules.length >= 2` → Phase 2B + Phase 3 (N+1 Parallel)

**Note**: CLI tool usage is now determined semantically by action-planning-agent based on user's task description, not by flags.

### Phase 2A: Single Agent Planning (Original Flow)

**Condition**: `modules.length == 1` (no multi-module detected)

**Purpose**: Generate IMPL_PLAN.md, task JSONs, and TODO_LIST.md - planning documents only, NOT code implementation.

**Agent Invocation**:
```javascript
Task(
  subagent_type="action-planning-agent",
  description="Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
  prompt=`
## TASK OBJECTIVE
Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) for workflow session

IMPORTANT: This is PLANNING ONLY - you are generating planning documents, NOT implementing code.

CRITICAL: Follow the progressive loading strategy defined in agent specification (load analysis.md files incrementally due to file size)

## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{session-id}/workflow-session.json
- Context Package: .workflow/active/{session-id}/.process/context-package.json

Output:
- Task Dir: .workflow/active/{session-id}/.task/
- IMPL_PLAN: .workflow/active/{session-id}/IMPL_PLAN.md
- TODO_LIST: .workflow/active/{session-id}/TODO_LIST.md

## CONTEXT METADATA
Session ID: {session-id}
MCP Capabilities: {exa_code, exa_web, code_index}

## CLI TOOL SELECTION
Determine CLI tool usage per-step based on user's task description:
- If user specifies "use Codex/Gemini/Qwen for X" → Add command field to relevant steps
- Default: Agent execution (no command field) unless user explicitly requests CLI

## EXPLORATION CONTEXT (from context-package.exploration_results)
- Load exploration_results from context-package.json
- Use aggregated_insights.critical_files for focus_paths generation
- Apply aggregated_insights.constraints to acceptance criteria
- Reference aggregated_insights.all_patterns for implementation approach
- Use aggregated_insights.all_integration_points for precise modification locations
- Use conflict_indicators for risk-aware task sequencing

## EXPECTED DELIVERABLES
1. Task JSON Files (.task/IMPL-*.json)
   - 6-field schema (id, title, status, context_package_path, meta, context, flow_control)
   - Quantified requirements with explicit counts
   - Artifacts integration from context package
   - **focus_paths enhanced with exploration critical_files**
   - Flow control with pre_analysis steps (include exploration integration_points analysis)

2. Implementation Plan (IMPL_PLAN.md)
   - Context analysis and artifact references
   - Task breakdown and execution strategy
   - Complete structure per agent definition

3. TODO List (TODO_LIST.md)
   - Hierarchical structure (containers, pending, completed markers)
   - Links to task JSONs and summaries
   - Matches task JSON hierarchy

## QUALITY STANDARDS
Hard Constraints:
- Task count <= 18 (hard limit - request re-scope if exceeded)
- All requirements quantified (explicit counts and enumerated lists)
- Acceptance criteria measurable (include verification commands)
- Artifact references mapped from context package
- All documents follow agent-defined structure

## SUCCESS CRITERIA
- All planning documents generated successfully:
  - Task JSONs valid and saved to .task/ directory
  - IMPL_PLAN.md created with complete structure
  - TODO_LIST.md generated matching task JSONs
- Return completion status with document count and task breakdown summary
`
)
```

### Phase 2B: N Parallel Planning (Multi-Module)

**Condition**: `modules.length >= 2` (multi-module detected)

**Purpose**: Launch N action-planning-agents simultaneously, one per module, for parallel task generation.

**Parallel Agent Invocation**:
```javascript
// Launch N agents in parallel (one per module)
const planningTasks = modules.map(module =>
  Task(
    subagent_type="action-planning-agent",
    description=`Plan ${module.name} module`,
    prompt=`
## SCOPE
- Module: ${module.name} (${module.type})
- Focus Paths: ${module.paths.join(', ')}
- Task ID Prefix: IMPL-${module.prefix}
- Task Limit: ≤9 tasks
- Other Modules: ${otherModules.join(', ')}
- Cross-module deps format: "CROSS::{module}::{pattern}"

## SESSION PATHS
Input:
- Context Package: .workflow/active/{session-id}/.process/context-package.json
Output:
- Task Dir: .workflow/active/{session-id}/.task/

## INSTRUCTIONS
- Generate tasks ONLY for ${module.name} module
- Use task ID format: IMPL-${module.prefix}1, IMPL-${module.prefix}2, ...
- For cross-module dependencies, use: depends_on: ["CROSS::B::api-endpoint"]
- Maximum 9 tasks per module
    `
  )
);

// Execute all in parallel
await Promise.all(planningTasks);
```

**Output Structure** (direct to .task/):
```
.task/
├── IMPL-A1.json    # Module A (e.g., frontend)
├── IMPL-A2.json
├── IMPL-B1.json    # Module B (e.g., backend)
├── IMPL-B2.json
└── IMPL-C1.json    # Module C (e.g., shared)
```

**Task ID Naming**:
- Format: `IMPL-{prefix}{seq}.json`
- Prefix: A, B, C... (assigned by detection order)
- Sequence: 1, 2, 3... (per-module increment)

### Phase 3: Integration (+1 Coordinator, Multi-Module Only)

**Condition**: Only executed when `modules.length >= 2`

**Purpose**: Collect all module tasks, resolve cross-module dependencies, generate unified documents.

**Integration Logic**:
```javascript
// 1. Collect all module task JSONs
const allTasks = glob('.task/IMPL-*.json').map(loadJson);

// 2. Resolve cross-module dependencies
for (const task of allTasks) {
  if (task.depends_on) {
    task.depends_on = task.depends_on.map(dep => {
      if (dep.startsWith('CROSS::')) {
        // CROSS::B::api-endpoint → find matching IMPL-B* task
        const [, targetModule, pattern] = dep.match(/CROSS::(\w+)::(.+)/);
        return findTaskByModuleAndPattern(allTasks, targetModule, pattern);
      }
      return dep;
    });
  }
}

// 3. Generate unified IMPL_PLAN.md (grouped by module)
generateIMPL_PLAN(allTasks, modules);

// 4. Generate TODO_LIST.md (hierarchical structure)
generateTODO_LIST(allTasks, modules);
```

**Note**: IMPL_PLAN.md and TODO_LIST.md structure definitions are in `action-planning-agent.md`.
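
For illustration, a hedged before/after of what step 2 above does to one frontend task's dependencies; the concrete task IDs are hypothetical:

```javascript
// Before integration: placeholder written by the frontend planner, which cannot know backend task IDs
const before = { id: "IMPL-A3", depends_on: ["CROSS::B::api-endpoint"] };

// After integration: the +1 coordinator resolved the pattern to a concrete backend task
const after = { id: "IMPL-A3", depends_on: ["IMPL-B2"] };
```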
@@ -1,24 +1,23 @@
---
name: task-generate-tdd
description: Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation
argument-hint: "--session WFS-session-id"
examples:
- /workflow:tools:task-generate-tdd --session WFS-auth
---

# Autonomous TDD Task Generation Command

## Overview
Autonomous TDD task JSON and IMPL_PLAN.md generation using action-planning-agent with two-phase execution: discovery and document generation. Generates complete Red-Green-Refactor cycles contained within each task.

## Core Philosophy
- **Agent-Driven**: Delegate execution to action-planning-agent for autonomous operation
- **Two-Phase Flow**: Discovery (context gathering) → Output (document generation)
- **Memory-First**: Reuse loaded documents from conversation memory
- **MCP-Enhanced**: Use MCP tools for advanced code analysis and research
- **Semantic CLI Selection**: CLI tool usage determined from user's task description, not flags
- **Agent Simplicity**: Agent generates content with semantic CLI detection
- **Path Clarity**: All `focus_paths` prefer absolute paths (e.g., `D:\\project\\src\\module`), or clear relative paths from project root (e.g., `./src/module`)
- **TDD-First**: Every feature starts with a failing test (Red phase)
- **Feature-Complete Tasks**: Each task contains complete Red-Green-Refactor cycle
@@ -30,11 +29,6 @@ Autonomous TDD task JSON and IMPL_PLAN.md generation using action-planning-agent
- **1 feature = 1 task** containing complete TDD cycle internally
- Each task executes Red-Green-Refactor phases sequentially
- Task count = Feature count (typically 5 features = 5 tasks)

**Previous Approach** (Deprecated):
- 1 feature = 3 separate tasks (TEST-N.M, IMPL-N.M, REFACTOR-N.M)
@@ -48,16 +42,40 @@ Autonomous TDD task JSON and IMPL_PLAN.md generation using action-planning-agent
- Different tech stacks or domains within feature

### Task Limits
- **Maximum 18 tasks** (hard limit for TDD workflows)
- **Feature-based**: Complete functional units with internal TDD cycles
- **Hierarchy**: Flat (≤5 simple features) | Two-level (6-10 for complex features with sub-features)
- **Re-scope**: If >18 tasks needed, break project into multiple TDD workflow sessions

### TDD Cycle Mapping
- **Old approach**: 1 feature = 3 tasks (TEST-N.M, IMPL-N.M, REFACTOR-N.M)
- **Current approach**: 1 feature = 1 task (IMPL-N with internal Red-Green-Refactor phases)
- **Complex features**: 1 container (IMPL-N) + subtasks (IMPL-N.M) when necessary

## Execution Process

```
Input Parsing:
├─ Parse flags: --session
└─ Validation: session_id REQUIRED

Phase 1: Discovery & Context Loading (Memory-First)
├─ Load session context (if not in memory)
├─ Load context package (if not in memory)
├─ Load test context package (if not in memory)
├─ Extract & load role analyses from context package
├─ Load conflict resolution (if exists)
└─ Optional: MCP external research

Phase 2: Agent Execution (Document Generation)
├─ Pre-agent template selection (semantic CLI detection)
├─ Invoke action-planning-agent
├─ Generate TDD Task JSON Files (.task/IMPL-*.json)
│   └─ Each task: complete Red-Green-Refactor cycle internally
├─ Create IMPL_PLAN.md (TDD variant)
└─ Generate TODO_LIST.md with TDD phase indicators
```

## Execution Lifecycle

### Phase 1: Discovery & Context Loading
@@ -67,11 +85,8 @@ Autonomous TDD task JSON and IMPL_PLAN.md generation using action-planning-agent
```javascript
{
  "session_id": "WFS-[session-id]",
  "workflow_type": "tdd",
  // Note: CLI tool usage is determined semantically by action-planning-agent based on user's task description
  "session_metadata": {
    // If in memory: use cached content
    // Else: Load from .workflow/active/{session-id}/workflow-session.json
@@ -180,8 +195,7 @@ Task(

**Session ID**: WFS-{session-id}
**Workflow Type**: TDD
**Note**: CLI tool usage is determined semantically from user's task description

## Phase 1: Discovery Results (Provided Context)

@@ -235,7 +249,7 @@ Refer to: @.claude/agents/action-planning-agent.md for:
- Each task executes Red-Green-Refactor phases sequentially
- Task count = Feature count (typically 5 features = 5 tasks)
- Subtasks only when complexity >2500 lines or >6 files per cycle
- **Maximum 18 tasks** (hard limit for TDD workflows)

#### TDD Cycle Mapping
- **Simple features**: IMPL-N with internal Red-Green-Refactor phases
@@ -246,16 +260,15 @@ Refer to: @.claude/agents/action-planning-agent.md for:

##### 1. TDD Task JSON Files (.task/IMPL-*.json)
- **Location**: `.workflow/active/{session-id}/.task/`
- **Schema**: 5-field structure with TDD-specific metadata
  - `meta.tdd_workflow`: true (REQUIRED)
  - `meta.max_iterations`: 3 (Green phase test-fix cycle limit)
  - `context.tdd_cycles`: Array with quantified test cases and coverage
  - `flow_control.implementation_approach`: Exactly 3 steps with `tdd_phase` field
    1. Red Phase (`tdd_phase: "red"`): Write failing tests
    2. Green Phase (`tdd_phase: "green"`): Implement to pass tests
    3. Refactor Phase (`tdd_phase: "refactor"`): Improve code quality
  - CLI tool usage determined semantically (add `command` field when user requests CLI execution)
- **Details**: See action-planning-agent.md § TDD Task JSON Generation
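
A minimal, hedged sketch of how the fields listed above could fit together in a single TDD task JSON. All values are illustrative and the nested shape of `tdd_cycles` is an assumption; the authoritative schema lives in action-planning-agent.md:

```json
{
  "id": "IMPL-1",
  "title": "User login feature (TDD)",
  "status": "pending",
  "meta": { "tdd_workflow": true, "max_iterations": 3 },
  "context": {
    "tdd_cycles": [
      { "feature": "login", "test_cases": 4, "coverage_target": ">=80%" }
    ]
  },
  "flow_control": {
    "implementation_approach": [
      { "step": 1, "tdd_phase": "red", "title": "Write 4 failing login tests" },
      { "step": 2, "tdd_phase": "green", "title": "Implement login until all 4 tests pass" },
      { "step": 3, "tdd_phase": "refactor", "title": "Refactor login module, keep tests green" }
    ]
  }
}
```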

##### 2. IMPL_PLAN.md (TDD Variant)
@@ -305,7 +318,7 @@ Refer to: @.claude/agents/action-planning-agent.md for:

**Quality Gates** (Full checklist in action-planning-agent.md):
- ✓ Quantification requirements enforced (explicit counts, measurable acceptance, exact targets)
- ✓ Task count ≤18 (hard limit)
- ✓ Each task has meta.tdd_workflow: true
- ✓ Each task has exactly 3 implementation steps with tdd_phase field
- ✓ Green phase includes test-fix cycle logic
@@ -456,16 +469,14 @@ This section provides quick reference for TDD task JSON structure. For complete

**Basic Usage**:
```bash
# Standard execution
/workflow:tools:task-generate-tdd --session WFS-auth

# With semantic CLI request (include in task description)
# e.g., "Generate TDD tasks for auth module, use Codex for implementation"
```

**CLI Tool Selection**: Determined semantically from user's task description. Include "use Codex/Gemini/Qwen" in your request for CLI execution.

**Output**:
- TDD task JSON files in `.task/` directory (IMPL-N.json format)
@@ -494,18 +505,14 @@ IMPL (Green phase) tasks include automatic test-fix cycle:
3. **Success Path**: Tests pass → Complete task
4. **Failure Path**: Tests fail → Enter iterative fix cycle:
   - **Gemini Diagnosis**: Analyze failures with bug-fix template
   - **Fix Application**: Agent (default) or CLI (if `command` field present)
   - **Retest**: Verify fix resolves failures
   - **Repeat**: Up to max_iterations (default: 3)
5. **Safety Net**: Auto-revert all changes if max iterations reached
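
A minimal sketch of that Green-phase loop. The helper names (`runTests`, `diagnoseWithGemini`, `applyFix`, `revertChanges`) are hypothetical placeholders standing in for the agent's actual tooling, not real APIs from this repository:

```javascript
// Green-phase test-fix cycle: diagnose and retest after each fix,
// then revert everything if the iteration budget is exhausted.
async function greenPhaseFixCycle(task, maxIterations = 3) {
  let result = await runTests(task);                            // initial test run
  let attempt = 0;
  while (!result.passed && attempt < maxIterations) {
    attempt++;
    const diagnosis = await diagnoseWithGemini(result.failures); // bug-fix template
    await applyFix(task, diagnosis);                             // agent fix, or CLI when the step carries a `command`
    result = await runTests(task);                               // retest
  }
  if (result.passed) return { status: "complete", attempts: attempt };
  await revertChanges(task);                                     // safety net: restore pre-cycle state
  return { status: "reverted", attempts: attempt };
}
```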

## Configuration Options
- **meta.max_iterations**: Number of fix attempts (default: 3 for TDD, 5 for test-gen)
- **CLI tool usage**: Determined semantically from user's task description via `command` field in implementation_approach

@@ -1,680 +0,0 @@
---
name: task-generate
description: Generate task JSON files and IMPL_PLAN.md from analysis results using action-planning-agent with artifact integration
argument-hint: "--session WFS-session-id [--cli-execute]"
examples:
- /workflow:tools:task-generate --session WFS-auth
- /workflow:tools:task-generate --session WFS-auth --cli-execute
---

# Task Generation Command

## 1. Overview
This command generates task JSON files and an `IMPL_PLAN.md` from brainstorming role analyses. It automatically detects and integrates all brainstorming artifacts (role-specific `analysis.md` files and `guidance-specification.md`), creating a structured and context-rich plan for implementation. The command supports two primary execution modes: a default agent-based mode for seamless context handling and a `--cli-execute` mode that leverages the Codex CLI for complex, autonomous development tasks. Its core function is to translate requirements and design specifications from role analyses into actionable, executable tasks, ensuring all necessary context, dependencies, and implementation steps are defined upfront.

## 2. Execution Modes

This command offers two distinct modes for task execution, providing flexibility for different implementation complexities.

### Agent Mode (Default)
In the default mode, each step in `implementation_approach` **omits the `command` field**. The agent interprets the step's `modification_points` and `logic_flow` to execute the task autonomously.
- **Step Structure**: Contains `step`, `title`, `description`, `modification_points`, `logic_flow`, `depends_on`, and `output` fields
- **Execution**: Agent reads these fields and performs the implementation autonomously
- **Context Loading**: Agent loads context via `pre_analysis` steps
- **Validation**: Agent validates against acceptance criteria in `context.acceptance`
- **Benefit**: Direct agent execution with full context awareness, no external tool overhead
- **Use Case**: Standard implementation tasks where agent capability is sufficient

### CLI Execute Mode (`--cli-execute`)
When the `--cli-execute` flag is used, each step in `implementation_approach` **includes a `command` field** that specifies the exact execution command. This mode is designed for complex implementations requiring specialized CLI tools (a comparison sketch follows this list).
- **Step Structure**: Includes all default fields PLUS a `command` field
- **Execution**: The specified command executes the step directly (e.g., `bash(codex ...)`)
- **Context Packages**: Each command receives context via the CONTEXT field in the prompt
- **Multi-Step Support**: Complex tasks can have multiple sequential codex steps with `resume --last`
- **Benefit**: Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning and autonomous execution
- **Use Case**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
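
A hedged sketch contrasting the two step shapes. The field names come from the bullets above; the concrete values (file paths, titles, the codex command string) are hypothetical and only illustrate where the `command` field appears:

```javascript
// Agent Mode step: no "command" field; the agent works from modification_points and logic_flow
const agentStep = {
  step: 1,
  title: "Add JWT middleware",
  description: "Wire token validation into the request pipeline",
  modification_points: ["src/middleware/auth.ts: add validateJwt()"],
  logic_flow: "Parse Authorization header → verify signature → attach user to request",
  depends_on: [],
  output: "Authenticated request context"
};

// CLI Execute Mode step: same fields plus "command", which runs the step via a CLI tool
const cliStep = {
  ...agentStep,
  command: 'bash(codex exec "Implement validateJwt() in src/middleware/auth.ts per CONTEXT")'
};
```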

## 3. Core Principles
This command is built on a set of core principles to ensure efficient and reliable task generation.

- **Role Analysis-Driven**: All generated tasks originate from role-specific `analysis.md` files (enhanced in synthesis phase), ensuring a direct link between requirements/design and implementation
- **Artifact-Aware**: Automatically detects and integrates all brainstorming outputs (role analyses, guidance-specification.md, enhancements) to enrich task context
- **Context-Rich**: Embeds comprehensive context (requirements, focus paths, acceptance criteria, artifact references) directly into each task JSON
- **Path Clarity**: All `focus_paths` prefer absolute paths (e.g., `D:\\project\\src\\module`), or clear relative paths from project root (e.g., `./src/module`)
- **Flow-Control Ready**: Pre-defines clear execution sequence (`pre_analysis`, `implementation_approach`) within each task
- **Memory-First**: Prioritizes using documents already loaded in conversation memory to avoid redundant file operations
- **Mode-Flexible**: Supports both agent-driven execution (default) and CLI tool execution (with `--cli-execute` flag)
- **Multi-Step Support**: Complex tasks can use multiple sequential steps in `implementation_approach` with codex resume mechanism
- **Quantification-Enforced**: **NEW** - All requirements, acceptance criteria, and modification points MUST include explicit counts and enumerations to prevent ambiguity (e.g., "17 commands: [list]" not "implement commands")
- **Responsibility**: Parses analysis, detects artifacts, generates enhanced task JSONs, creates `IMPL_PLAN.md` and `TODO_LIST.md`, updates session state

## 3.5. Quantification Requirements (MANDATORY)

**Purpose**: Eliminate ambiguity by enforcing explicit counts and enumerations in all task specifications.

**Core Rules**:
1. **Extract Counts from Analysis**: Search for HOW MANY items and list them explicitly
2. **Enforce Explicit Lists**: Every deliverable uses format `{count} {type}: [{explicit_list}]`
3. **Make Acceptance Measurable**: Include verification commands (e.g., `ls ... | wc -l = N`)
4. **Quantify Modification Points**: Specify exact targets (files, functions with line numbers)
5. **Avoid Vague Language**: Replace "complete", "comprehensive", "reorganize" with quantified statements

**Standard Formats** (an illustrative snippet follows this list):

- **Requirements**: `"Implement N items: [item1, item2, ...]"` or `"Modify N files: [file1:func:lines, ...]"`
- **Acceptance**: `"N items exist: verify by [command]"` or `"Coverage >= X%: verify by [test command]"`
- **Modification Points**: `"Create N files: [list]"` or `"Modify N functions: [func() in file lines X-Y]"`
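
A minimal sketch of these formats applied inside a task JSON's `context` block. The command names, counts, and file list are hypothetical and the surrounding schema is abbreviated; the authoritative structure is defined by action-planning-agent:

```json
{
  "requirements": [
    "Implement 5 session commands: [start, resume, list, complete, archive]"
  ],
  "acceptance": [
    "5 command files exist: verify by `ls commands/session/*.md | wc -l` = 5",
    "Coverage >= 80%: verify by `pytest --cov=session --cov-report=term`"
  ],
  "modification_points": [
    "Create 5 files: [start.md, resume.md, list.md, complete.md, archive.md]"
  ]
}
```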
|
|
||||||
|
|
||||||
**Validation Checklist**:
|
|
||||||
- [ ] Every requirement contains explicit count or enumerated list
|
|
||||||
- [ ] Every acceptance criterion is measurable with verification command
|
|
||||||
- [ ] Every modification_point specifies exact targets (files/functions/lines)
|
|
||||||
- [ ] No vague language ("complete", "comprehensive", "reorganize" without counts)
|
|
||||||
- [ ] Each implementation step has its own acceptance criteria
|
|
||||||
|
|
||||||
## 4. Execution Flow
|
|
||||||
The command follows a streamlined, three-step process to convert analysis into executable tasks.
|
|
||||||
|
|
||||||
### Step 1: Input & Discovery
|
|
||||||
The process begins by gathering all necessary inputs. It follows a **Memory-First Rule**, skipping file reads if documents are already in the conversation memory.
|
|
||||||
1. **Session Validation**: Loads and validates the session from `.workflow/active/{session_id}/workflow-session.json`.
|
|
||||||
2. **Context Package Loading** (primary source): Reads `.workflow/active/{session_id}/.process/context-package.json` for smart context and artifact catalog.
|
|
||||||
3. **Brainstorm Artifacts Extraction**: Extracts role analysis paths from `context-package.json` → `brainstorm_artifacts.role_analyses[]` (supports `analysis*.md` automatically).
|
|
||||||
4. **Document Loading**: Reads role analyses, guidance specification, synthesis output, and conflict resolution (if exists) using paths from context package.
|
|
||||||
|
|
||||||
### Step 2: Task Decomposition & Grouping
|
|
||||||
Once all inputs are loaded, the command analyzes the tasks defined in the analysis results and groups them based on shared context.
|
|
||||||
|
|
||||||
**Phase 2.1: Quantification Extraction (NEW - CRITICAL)**
|
|
||||||
1. **Count Extraction**: Scan analysis documents for quantifiable information:
|
|
||||||
- Search for numbers + nouns (e.g., "5 files", "17 commands", "3 features")
|
|
||||||
- Identify enumerated lists (bullet points, numbered lists, comma-separated items)
|
|
||||||
- Extract explicit counts from tables, diagrams, or structured data
|
|
||||||
- Store extracted counts with their context (what is being counted)
|
|
||||||
|
|
||||||
2. **List Enumeration**: Build explicit lists for each deliverable:
|
|
||||||
- If analysis says "implement session commands", enumerate ALL commands: [start, resume, list, complete, archive]
|
|
||||||
- If analysis mentions "create categories", list ALL categories: [literature, experiment, data-analysis, visualization, context]
|
|
||||||
- If analysis describes "modify functions", list ALL functions with line numbers
|
|
||||||
- Maintain full enumerations (no "..." unless list exceeds 20 items)
|
|
||||||
|
|
||||||
3. **Verification Method Assignment**: For each deliverable, determine verification approach:
|
|
||||||
- File count: `ls {path}/*.{ext} | wc -l = {count}`
|
|
||||||
- Directory existence: `ls {parent}/ | grep -E '(name1|name2|...)' | wc -l = {count}`
|
|
||||||
- Test coverage: `pytest --cov={module} --cov-report=term | grep TOTAL | awk '{print $4}' >= {percentage}`
|
|
||||||
- Function existence: `grep -E '(func1|func2|...)' {file} | wc -l = {count}`
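
To show how such a verification command can be checked mechanically, the sketch below runs a counting pipeline in a POSIX shell and compares the numeric output with the expected value; the helper is illustrative, not part of the command:

```javascript
const { execSync } = require('child_process');

// Run a counting command (e.g. "ls src/commands/*.md | wc -l") and compare to the expected count.
function verifyCount(command, expected) {
  const output = execSync(command, { shell: '/bin/bash', encoding: 'utf8' }).trim();
  const actual = parseInt(output, 10);
  return { command, expected, actual, passed: actual === expected };
}

// Example (paths are placeholders):
// verifyCount('ls .claude/commands/session/*.md | wc -l', 5)
// → { command: '...', expected: 5, actual: 5, passed: true }
```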
|
|
||||||
|
|
||||||
4. **Ambiguity Detection**: Flag vague language for replacement:
|
|
||||||
- Detect words: "complete", "comprehensive", "reorganize", "refactor", "implement", "create" without counts
|
|
||||||
- Require quantification: "implement" → "implement {N} {items}: [{list}]"
|
|
||||||
- Reject unquantified deliverables
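
A rough sketch of that check, assuming the flagged verbs listed above; a real implementation would operate on parsed requirement objects rather than raw strings:

```javascript
// Flag requirement text that uses a vague verb without an explicit count or enumerated list nearby.
const VAGUE_VERBS = /\b(complete|comprehensive|reorganize|refactor|implement|create)\b/i;
const HAS_COUNT = /\b\d+\s+\w+/;   // e.g. "17 commands"
const HAS_LIST = /\[[^\]]+\]/;     // e.g. "[start, resume, list]"

function isAmbiguous(requirement) {
  return VAGUE_VERBS.test(requirement) && !(HAS_COUNT.test(requirement) || HAS_LIST.test(requirement));
}

// isAmbiguous("Implement session commands")                                      → true  (rejected)
// isAmbiguous("Implement 5 commands: [start, resume, list, complete, archive]")  → false (accepted)
```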
|
|
||||||
|
|
||||||
**Phase 2.2: Task Definition & Grouping**
|
|
||||||
1. **Task Definition Parsing**: Extracts task definitions, requirements, and dependencies from quantified analysis
|
|
||||||
2. **Context Signature Analysis**: Computes a unique hash (`context_signature`) for each task based on its `focus_paths` and referenced `artifacts`
|
|
||||||
3. **Task Grouping**:
|
|
||||||
* Tasks with the **same signature** are candidates for merging, as they operate on the same context
|
|
||||||
* Tasks with **different signatures** and no dependencies are grouped for parallel execution
|
|
||||||
* Tasks with `depends_on` relationships are marked for sequential execution
|
|
||||||
4. **Modification Target Determination**: Extracts specific code locations (`file:function:lines`) from the analysis to populate the `target_files` field
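
The modification targets in the examples later in this document follow a `file:function:startLine-endLine` convention (e.g. `file1.ts:funcA:10-25`); a small parser for that convention, shown only as a sketch and assuming relative paths without embedded colons:

```javascript
// Parse a modification target such as "file2.ts:funcB:40-60" into a structured record.
function parseTargetFile(spec) {
  const [file, func, lines] = spec.split(':');
  const [start, end] = (lines || '').split('-').map(Number);
  return { file, func, startLine: start, endLine: end };
}

// parseTargetFile("file2.ts:funcB:40-60")
// → { file: "file2.ts", func: "funcB", startLine: 40, endLine: 60 }
```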
|
|
||||||
|
|
||||||
### Step 3: Output Generation
|
|
||||||
Finally, the command generates all the necessary output files.
|
|
||||||
1. **Task JSON Creation**: Creates individual `.task/IMPL-*.json` files, embedding all context, artifacts, and flow control steps. If `--cli-execute` is active, it generates the appropriate `codex exec` commands.
|
|
||||||
2. **IMPL_PLAN.md Generation**: Creates the main implementation plan document, summarizing the strategy, tasks, and dependencies.
|
|
||||||
3. **TODO_LIST.md Generation**: Creates a simple checklist for tracking task progress.
|
|
||||||
4. **Session State Update**: Updates `workflow-session.json` with the final task count and artifact inventory, marking the session as ready for execution.
|
|
||||||
|
|
||||||
## 5. Task Decomposition Strategy
|
|
||||||
The command employs a sophisticated strategy to group and decompose tasks, optimizing for context reuse and parallel execution.
|
|
||||||
|
|
||||||
### Core Principles
|
|
||||||
- **Primary Rule: Shared Context → Merge Tasks**: Tasks that operate on the same files, use the same artifacts, and share the same tech stack are merged. This avoids redundant context loading and keeps inherently related work in a single task.
|
|
||||||
- **Secondary Rule: Different Contexts + No Dependencies → Decompose for Parallel Execution**: Tasks that are fully independent (different files, different artifacts, no shared dependencies) are decomposed into separate parallel execution groups.
|
|
||||||
|
|
||||||
### Context Analysis for Task Grouping
|
|
||||||
The decision to merge or decompose is based on analyzing context indicators:
|
|
||||||
|
|
||||||
1. **Shared Context Indicators (→ Merge)**:
|
|
||||||
* Identical `focus_paths` (working on the same modules/files).
|
|
||||||
* Same tech stack and dependencies.
|
|
||||||
* Identical `context.artifacts` references.
|
|
||||||
* A sequential logic flow within the same feature.
|
|
||||||
* Shared test fixtures or setup.
|
|
||||||
|
|
||||||
2. **Independent Context Indicators (→ Decompose)**:
|
|
||||||
* Different `focus_paths` (separate modules).
|
|
||||||
* Different tech stacks (e.g., frontend vs. backend).
|
|
||||||
* Different `context.artifacts` (using different brainstorming outputs).
|
|
||||||
* No shared dependencies.
|
|
||||||
* Can be tested independently.
|
|
||||||
|
|
||||||
**Decomposition is only performed when**:
|
|
||||||
- Tasks have different contexts and no shared dependencies (enabling parallel execution).
|
|
||||||
- A single task represents an excessive workload (e.g., >2500 lines of code or >6 files to modify).
|
|
||||||
- A sequential dependency requires an ordered split (e.g., IMPL-1 must complete before IMPL-2 can start).
|
|
||||||
|
|
||||||
### Context Signature Algorithm
|
|
||||||
To automate grouping, a `context_signature` is computed for each task.
|
|
||||||
|
|
||||||
```javascript
// Compute context signature for task grouping.
// Any stable hash works; SHA-1 via Node's crypto module is shown here for concreteness.
const crypto = require('crypto');

function computeContextSignature(task) {
  const focusPathsStr = [...task.context.focus_paths].sort().join('|');
  const artifactsStr = task.context.artifacts.map(a => a.path).sort().join('|');
  const techStack = task.context.shared_context?.tech_stack?.sort().join('|') || '';

  return crypto.createHash('sha1')
    .update(`${focusPathsStr}:${artifactsStr}:${techStack}`)
    .digest('hex');
}
```
|
|
||||||
|
|
||||||
### Execution Group Assignment
|
|
||||||
Tasks are assigned to execution groups based on their signatures and dependencies.
|
|
||||||
|
|
||||||
```javascript
// Group tasks by context signature
function groupTasksByContext(tasks) {
  const groups = {};

  tasks.forEach(task => {
    const signature = computeContextSignature(task);
    if (!groups[signature]) {
      groups[signature] = [];
    }
    groups[signature].push(task);
  });

  return groups;
}

// Assign execution groups for parallel tasks; merges tasks that share a context signature
function assignExecutionGroups(tasks) {
  const contextGroups = groupTasksByContext(tasks);
  const result = [];

  Object.entries(contextGroups).forEach(([signature, groupTasks]) => {
    if (groupTasks.length === 1) {
      const task = groupTasks[0];
      // Single task with unique context
      if (!task.context.depends_on || task.context.depends_on.length === 0) {
        task.meta.execution_group = `parallel-${signature.slice(0, 8)}`;
      } else {
        task.meta.execution_group = null; // Sequential task
      }
      result.push(task);
    } else {
      // Multiple tasks with the same context → merge into a single task
      console.warn(`Tasks ${groupTasks.map(t => t.id).join(', ')} share context and will be merged`);
      // mergeTasks() (defined elsewhere) combines requirements, acceptance, and artifacts
      result.push(mergeTasks(groupTasks));
    }
  });

  return result;
}
```
|
|
||||||
**Task Limits**:
|
|
||||||
- **Maximum 10 tasks** (hard limit).
|
|
||||||
- **Hierarchy**: Flat (≤5 tasks) or two-level (6-10 tasks). If >10, the scope should be re-evaluated.
|
|
||||||
- **Parallel Groups**: Tasks with the same `execution_group` ID are independent and can run concurrently.
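
A sketch of how these limits could be enforced before output generation; the thresholds mirror the rules above and the function name is hypothetical:

```javascript
// Enforce the task-count and hierarchy limits before writing any output files.
function validateTaskLimits(tasks) {
  const errors = [];
  if (tasks.length > 10) {
    errors.push(`Task count ${tasks.length} exceeds the hard limit of 10: re-scope the requirements`);
  }
  const hasSubtasks = tasks.some(t => t.id.includes('.')); // e.g. IMPL-1.2
  if (tasks.length <= 5 && hasSubtasks) {
    errors.push('With 5 or fewer tasks the hierarchy should stay flat (no IMPL-N.M subtasks)');
  }
  return { valid: errors.length === 0, errors };
}
```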
|
|
||||||
|
|
||||||
## 6. Generated Outputs
|
|
||||||
The command produces three key documents and a directory of task files.
|
|
||||||
|
|
||||||
### 6.1. Task JSON Schema (`.task/IMPL-*.json`)
|
|
||||||
Each task JSON embeds all necessary context, artifacts, and execution steps using this schema:
|
|
||||||
|
|
||||||
**Top-Level Fields**:
|
|
||||||
- `id`: Task identifier (format: `IMPL-N` or `IMPL-N.M` for subtasks)
|
|
||||||
- `title`: Descriptive task name
|
|
||||||
- `status`: Task state (`pending|active|completed|blocked|container`)
|
|
||||||
- `context_package_path`: Path to context package (`.workflow/active/WFS-[session]/.process/context-package.json`)
|
|
||||||
- `meta`: Task metadata
|
|
||||||
- `context`: Task-specific context and requirements
|
|
||||||
- `flow_control`: Execution steps and workflow
|
|
||||||
|
|
||||||
**Meta Object**:
|
|
||||||
- `type`: Task category (`feature|bugfix|refactor|test-gen|test-fix|docs`)
|
|
||||||
- `agent`: Assigned agent (`@code-developer|@test-fix-agent|@universal-executor`)
|
|
||||||
- `execution_group`: Parallelization group ID or null
|
|
||||||
- `context_signature`: Hash for context-based grouping
|
|
||||||
|
|
||||||
**Context Object**:
|
|
||||||
- `requirements`: Quantified implementation requirements (with counts and explicit lists)
|
|
||||||
- `focus_paths`: Target directories/files (absolute or relative paths)
|
|
||||||
- `acceptance`: Measurable acceptance criteria (with verification commands)
|
|
||||||
- `parent`: Parent task ID for subtasks
|
|
||||||
- `depends_on`: Prerequisite task IDs
|
|
||||||
- `inherited`: Shared patterns and dependencies from parent
|
|
||||||
- `shared_context`: Tech stack and conventions
|
|
||||||
- `artifacts`: Referenced brainstorm artifacts with paths, priority, and usage
|
|
||||||
|
|
||||||
**Flow Control Object**:
|
|
||||||
- `pre_analysis`: Context loading and preparation steps
|
|
||||||
- `load_context_package`: Load smart context and artifact catalog
|
|
||||||
- `load_role_analysis_artifacts`: Load role analyses dynamically from context package
|
|
||||||
- `load_planning_context`: Load finalized decisions with resolved conflicts
|
|
||||||
- `codebase_exploration`: Discover existing patterns
|
|
||||||
- `analyze_task_patterns`: Identify modification targets
|
|
||||||
- `implementation_approach`: Execution steps
|
|
||||||
- **Agent Mode**: Steps contain `modification_points` and `logic_flow` (agent executes autonomously)
|
|
||||||
- **CLI Mode**: Steps include `command` field with CLI tool invocation
|
|
||||||
- `target_files`: Specific files/functions/lines to modify
|
|
||||||
|
|
||||||
**Key Characteristics**:
|
|
||||||
- **Quantification**: All requirements/acceptance use explicit counts and enumerations
|
|
||||||
- **Mode Flexibility**: Supports both agent execution (default) and CLI tool execution (`--cli-execute`)
|
|
||||||
- **Context Intelligence**: References context-package.json for smart context and artifact paths
|
|
||||||
- **Artifact Integration**: Dynamically loads role analyses and brainstorm artifacts
|
|
||||||
|
|
||||||
**Example Task JSON**:
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"id": "IMPL-1",
|
|
||||||
"title": "Implement feature X with Y components",
|
|
||||||
"status": "pending",
|
|
||||||
"context_package_path": ".workflow/active/WFS-session/.process/context-package.json",
|
|
||||||
"meta": {
|
|
||||||
"type": "feature",
|
|
||||||
"agent": "@code-developer",
|
|
||||||
"execution_group": "parallel-abc123",
|
|
||||||
"context_signature": "hash-value"
|
|
||||||
},
|
|
||||||
"context": {
|
|
||||||
"requirements": [
|
|
||||||
"Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]",
|
|
||||||
"Create 3 directories: [dir1/, dir2/, dir3/]",
|
|
||||||
"Modify 2 functions: [funcA() in file1.ts lines 10-25, funcB() in file2.ts lines 40-60]"
|
|
||||||
],
|
|
||||||
"focus_paths": ["D:\\project\\src\\module", "./tests/module"],
|
|
||||||
"acceptance": [
|
|
||||||
"5 command files created: verify by ls .claude/commands/*/*.md | wc -l = 5",
|
|
||||||
"3 directories exist: verify by ls -d dir*/ | wc -l = 3",
|
|
||||||
"All tests pass: pytest tests/ --cov=src/module (>=80% coverage)"
|
|
||||||
],
|
|
||||||
"depends_on": [],
|
|
||||||
"artifacts": [
|
|
||||||
{
|
|
||||||
"path": ".workflow/active/WFS-session/.brainstorming/system-architect/analysis.md",
|
|
||||||
"priority": "highest",
|
|
||||||
"usage": "Architecture decisions and API specifications"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"flow_control": {
|
|
||||||
"pre_analysis": [
|
|
||||||
{
|
|
||||||
"step": "load_context_package",
|
|
||||||
"action": "Load context package for artifact paths and smart context",
|
|
||||||
"commands": ["Read({{context_package_path}})"],
|
|
||||||
"output_to": "context_package",
|
|
||||||
"on_error": "fail"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"step": "load_role_analysis_artifacts",
|
|
||||||
"action": "Load role analyses from context-package.json",
|
|
||||||
"commands": [
|
|
||||||
"Read({{context_package_path}})",
|
|
||||||
"Extract(brainstorm_artifacts.role_analyses[].files[].path)",
|
|
||||||
"Read(each extracted path)"
|
|
||||||
],
|
|
||||||
"output_to": "role_analysis_artifacts",
|
|
||||||
"on_error": "skip_optional"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"implementation_approach": [
|
|
||||||
{
|
|
||||||
"step": 1,
|
|
||||||
"title": "Implement feature following role analyses",
|
|
||||||
"description": "Implement feature X using requirements from role analyses and context package",
|
|
||||||
"modification_points": [
|
|
||||||
"Create 5 command files: [cmd1.md, cmd2.md, cmd3.md, cmd4.md, cmd5.md]",
|
|
||||||
"Modify funcA() in file1.ts lines 10-25: add validation logic",
|
|
||||||
"Modify funcB() in file2.ts lines 40-60: integrate with new API"
|
|
||||||
],
|
|
||||||
"logic_flow": [
|
|
||||||
"Load role analyses and context package",
|
|
||||||
"Extract requirements and design decisions",
|
|
||||||
"Implement commands following existing patterns",
|
|
||||||
"Update functions with new logic",
|
|
||||||
"Validate against acceptance criteria"
|
|
||||||
],
|
|
||||||
"depends_on": [],
|
|
||||||
"output": "implementation"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"target_files": ["file1.ts:funcA:10-25", "file2.ts:funcB:40-60"]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Note**: In CLI Execute Mode (`--cli-execute`), `implementation_approach` steps include a `command` field with the CLI tool invocation (e.g., `bash(codex ...)`).
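
As a hedged illustration of that difference, an agent-mode step and its CLI-mode counterpart might look like the objects below; the exact `codex` invocation string is an assumption for illustration and would come from the `--cli-execute` configuration, not from this sketch:

```javascript
// Agent mode: the agent reads modification_points/logic_flow and implements them itself.
const agentStep = {
  step: 1,
  title: 'Implement feature following role analyses',
  modification_points: ['Create 5 command files: [cmd1.md, cmd2.md, cmd3.md, cmd4.md, cmd5.md]'],
  logic_flow: ['Load role analyses', 'Implement commands', 'Validate acceptance criteria']
};

// CLI mode (--cli-execute): the same step additionally carries the tool invocation to run.
// The command string below is illustrative only.
const cliStep = {
  ...agentStep,
  command: 'bash(codex exec "Implement the 5 command files listed in modification_points")'
};
```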
|
|
||||||
|
|
||||||
### 6.2. IMPL_PLAN.md Structure
|
|
||||||
This document provides a high-level overview of the entire implementation plan.
|
|
||||||
|
|
||||||
```markdown
|
|
||||||
---
|
|
||||||
identifier: WFS-{session-id}
|
|
||||||
source: "User requirements" | "File: path" | "Issue: ISS-001"
|
|
||||||
role_analyses: .workflow/active/{session-id}/.brainstorming/[role]/analysis*.md
artifacts: .workflow/active/{session-id}/.brainstorming/
context_package: .workflow/active/{session-id}/.process/context-package.json # CCW smart context
guidance_specification: .workflow/active/{session-id}/.brainstorming/guidance-specification.md # Finalized decisions with resolved conflicts
|
|
||||||
workflow_type: "standard | tdd | design" # Indicates execution model
|
|
||||||
verification_history: # CCW quality gates
|
|
||||||
synthesis_clarify: "passed | skipped | pending" # Brainstorm phase clarification
|
|
||||||
action_plan_verify: "pending"
|
|
||||||
conflict_resolution: "resolved | none | low" # Status from context-package.json
|
|
||||||
phase_progression: "brainstorm → synthesis → context → conflict_resolution (if needed) → planning" # CCW workflow phases
|
|
||||||
---
|
|
||||||
|
|
||||||
# Implementation Plan: {Project Title}
|
|
||||||
|
|
||||||
## 1. Summary
|
|
||||||
Core requirements, objectives, technical approach summary (2-3 paragraphs max).
|
|
||||||
|
|
||||||
**Core Objectives**:
|
|
||||||
- [Key objective 1]
|
|
||||||
- [Key objective 2]
|
|
||||||
|
|
||||||
**Technical Approach**:
|
|
||||||
- [High-level approach]
|
|
||||||
|
|
||||||
## 2. Context Analysis
|
|
||||||
|
|
||||||
### CCW Workflow Context
|
|
||||||
**Phase Progression**:
|
|
||||||
- ✅ Phase 1: Brainstorming (role analyses generated by participating roles)
|
|
||||||
- ✅ Phase 2: Synthesis (concept enhancement + clarification, {N} questions answered, role analyses refined)
|
|
||||||
- ✅ Phase 3: Context Gathering (context-package.json: {N} files, {M} modules analyzed, conflict_risk: {level})
|
|
||||||
- ✅ Phase 4: Conflict Resolution ({status}: {conflict_count} conflicts detected and resolved | skipped if no conflicts)
|
|
||||||
- ⏳ Phase 5: Task Generation (current phase - generating IMPL_PLAN.md and task JSONs)
|
|
||||||
|
|
||||||
**Quality Gates**:
|
|
||||||
- synthesis-clarify: ✅ Passed ({N} ambiguities resolved, {M} enhancements applied)
|
|
||||||
- action-plan-verify: ⏳ Pending (recommended before /workflow:execute)
|
|
||||||
|
|
||||||
**Context Package Summary**:
|
|
||||||
- **Focus Paths**: {list key directories from context-package.json}
|
|
||||||
- **Key Files**: {list primary files for modification}
|
|
||||||
- **Module Depth Analysis**: {from get_modules_by_depth.sh output}
|
|
||||||
- **Smart Context**: {total file count} files, {module count} modules, {dependency count} dependencies identified
|
|
||||||
|
|
||||||
### Project Profile
|
|
||||||
- **Type**: Greenfield/Enhancement/Refactor
|
|
||||||
- **Scale**: User count, data volume, complexity
|
|
||||||
- **Tech Stack**: Primary technologies
|
|
||||||
- **Timeline**: Duration and milestones
|
|
||||||
|
|
||||||
### Module Structure
|
|
||||||
'''
|
|
||||||
[Directory tree showing key modules]
|
|
||||||
'''
|
|
||||||
|
|
||||||
### Dependencies
|
|
||||||
**Primary**: [Core libraries and frameworks]
|
|
||||||
**APIs**: [External services]
|
|
||||||
**Development**: [Testing, linting, CI/CD tools]
|
|
||||||
|
|
||||||
### Patterns & Conventions
|
|
||||||
- **Architecture**: [Key patterns like DI, Event-Driven]
|
|
||||||
- **Component Design**: [Design patterns]
|
|
||||||
- **State Management**: [State strategy]
|
|
||||||
- **Code Style**: [Naming, TypeScript coverage]
|
|
||||||
|
|
||||||
## 3. Brainstorming Artifacts Reference
|
|
||||||
|
|
||||||
### Artifact Usage Strategy
|
|
||||||
**Primary Reference (Role Analyses)**:
|
|
||||||
- **What**: Role-specific analyses from brainstorming phase providing multi-perspective insights
|
|
||||||
- **When**: Every task references relevant role analyses for requirements and design decisions
|
|
||||||
- **How**: Extract requirements, architecture decisions, UI/UX patterns from applicable role documents
|
|
||||||
- **Priority**: Collective authoritative source - multiple role perspectives provide comprehensive coverage
|
|
||||||
- **CCW Value**: Maintains role-specific expertise while enabling cross-role integration during planning
|
|
||||||
|
|
||||||
**Context Intelligence (context-package.json)**:
|
|
||||||
- **What**: Smart context gathered by CCW's context-gather phase
|
|
||||||
- **Content**: Focus paths, dependency graph, existing patterns, module structure, tech stack, conflict_risk status
|
|
||||||
- **Usage**: Tasks load this via `flow_control.preparatory_steps` for environment setup and conflict awareness
|
|
||||||
- **CCW Value**: Automated intelligent context discovery replacing manual file exploration
|
|
||||||
|
|
||||||
**Conflict Resolution Status**:
|
|
||||||
- **What**: Conflict resolution applied in-place to brainstorm artifacts (if conflict_risk was >= medium)
|
|
||||||
- **Location**: guidance-specification.md and role analyses (*.md) contain resolved conflicts
|
|
||||||
- **Status**: Check context-package.json → conflict_detection.conflict_risk ("resolved" | "none" | "low")
|
|
||||||
- **Usage**: Read finalized decisions from guidance-specification.md (includes applied resolutions)
|
|
||||||
- **CCW Value**: Interactive conflict resolution with user confirmation, modifications applied automatically
|
|
||||||
|
|
||||||
### Role Analysis Documents (Highest Priority)
|
|
||||||
Role analyses provide specialized perspectives on the implementation:
|
|
||||||
- **system-architect/analysis.md**: Architecture design, ADRs, API specifications, caching strategies
|
|
||||||
- **ui-designer/analysis.md**: Design tokens, layout specifications, component patterns
|
|
||||||
- **ux-expert/analysis.md**: User journeys, interaction flows, accessibility requirements
|
|
||||||
- **guidance-specification/analysis.md**: Product vision, user stories, business requirements, success metrics
|
|
||||||
- **data-architect/analysis.md**: Data models, schemas, database design, migration strategies
|
|
||||||
- **api-designer/analysis.md**: API contracts, endpoint specifications, integration patterns
|
|
||||||
|
|
||||||
### Supporting Artifacts (Reference)
|
|
||||||
- **topic-framework.md**: Role-specific discussion points and analysis framework
|
|
||||||
|
|
||||||
**Artifact Priority in Development**:
|
|
||||||
1. {context_package_path} (primary source: smart context AND brainstorm artifact catalog in `brainstorm_artifacts` + conflict_risk status)
|
|
||||||
2. role/analysis*.md (paths from context-package.json: requirements, design specs, enhanced by synthesis, with resolved conflicts if any)
|
|
||||||
3. guidance-specification.md (path from context-package.json: finalized decisions with resolved conflicts if any)
|
|
||||||
|
|
||||||
## 4. Implementation Strategy
|
|
||||||
|
|
||||||
### Execution Strategy
|
|
||||||
**Execution Model**: [Sequential | Parallel | Phased | TDD Cycles]
|
|
||||||
|
|
||||||
**Rationale**: [Why this execution model fits the project]
|
|
||||||
|
|
||||||
**Parallelization Opportunities**:
|
|
||||||
- [List independent workstreams]
|
|
||||||
|
|
||||||
**Serialization Requirements**:
|
|
||||||
- [List critical dependencies]
|
|
||||||
|
|
||||||
### Architectural Approach
|
|
||||||
**Key Architecture Decisions**:
|
|
||||||
- [ADR references from role analyses]
|
|
||||||
- [Justification for architecture patterns]
|
|
||||||
|
|
||||||
**Integration Strategy**:
|
|
||||||
- [How modules communicate]
|
|
||||||
- [State management approach]
|
|
||||||
|
|
||||||
### Key Dependencies
|
|
||||||
**Task Dependency Graph**:
|
|
||||||
'''
|
|
||||||
[High-level dependency visualization]
|
|
||||||
'''
|
|
||||||
|
|
||||||
**Critical Path**: [Identify bottleneck tasks]
|
|
||||||
|
|
||||||
### Testing Strategy
|
|
||||||
**Testing Approach**:
|
|
||||||
- Unit testing: [Tools, scope]
|
|
||||||
- Integration testing: [Key integration points]
|
|
||||||
- E2E testing: [Critical user flows]
|
|
||||||
|
|
||||||
**Coverage Targets**:
|
|
||||||
- Lines: ≥70%
|
|
||||||
- Functions: ≥70%
|
|
||||||
- Branches: ≥65%
|
|
||||||
|
|
||||||
**Quality Gates**:
|
|
||||||
- [CI/CD gates]
|
|
||||||
- [Performance budgets]
|
|
||||||
|
|
||||||
## 5. Task Breakdown Summary
|
|
||||||
|
|
||||||
### Task Count
|
|
||||||
**{N} tasks** (flat hierarchy | two-level hierarchy, sequential | parallel execution)
|
|
||||||
|
|
||||||
### Task Structure
|
|
||||||
- **IMPL-1**: [Main task title]
|
|
||||||
- **IMPL-2**: [Main task title]
|
|
||||||
...
|
|
||||||
|
|
||||||
### Complexity Assessment
|
|
||||||
- **High**: [List with rationale]
|
|
||||||
- **Medium**: [List]
|
|
||||||
- **Low**: [List]
|
|
||||||
|
|
||||||
### Dependencies
|
|
||||||
[Reference Section 4.3 for dependency graph]
|
|
||||||
|
|
||||||
**Parallelization Opportunities**:
|
|
||||||
- [Specific task groups that can run in parallel]
|
|
||||||
|
|
||||||
## 6. Implementation Plan (Detailed Phased Breakdown)
|
|
||||||
|
|
||||||
### Execution Strategy
|
|
||||||
|
|
||||||
**Phase 1 (Weeks 1-2): [Phase Name]**
|
|
||||||
- **Tasks**: IMPL-1, IMPL-2
|
|
||||||
- **Deliverables**:
|
|
||||||
- [Specific deliverable 1]
|
|
||||||
- [Specific deliverable 2]
|
|
||||||
- **Success Criteria**:
|
|
||||||
- [Measurable criterion]
|
|
||||||
|
|
||||||
**Phase 2 (Weeks 3-N): [Phase Name]**
|
|
||||||
...
|
|
||||||
|
|
||||||
### Resource Requirements
|
|
||||||
|
|
||||||
**Development Team**:
|
|
||||||
- [Team composition and skills]
|
|
||||||
|
|
||||||
**External Dependencies**:
|
|
||||||
- [Third-party services, APIs]
|
|
||||||
|
|
||||||
**Infrastructure**:
|
|
||||||
- [Development, staging, production environments]
|
|
||||||
|
|
||||||
## 7. Risk Assessment & Mitigation
|
|
||||||
|
|
||||||
| Risk | Impact | Probability | Mitigation Strategy | Owner |
|
|
||||||
|------|--------|-------------|---------------------|-------|
|
|
||||||
| [Risk description] | High/Med/Low | High/Med/Low | [Strategy] | [Role] |
|
|
||||||
|
|
||||||
**Critical Risks** (High impact + High probability):
|
|
||||||
- [Risk 1]: [Detailed mitigation plan]
|
|
||||||
|
|
||||||
**Monitoring Strategy**:
|
|
||||||
- [How risks will be monitored]
|
|
||||||
|
|
||||||
## 8. Success Criteria
|
|
||||||
|
|
||||||
**Functional Completeness**:
|
|
||||||
- [ ] All requirements from role analysis documents implemented
|
|
||||||
- [ ] All acceptance criteria from task.json files met
|
|
||||||
|
|
||||||
**Technical Quality**:
|
|
||||||
- [ ] Test coverage ≥70%
|
|
||||||
- [ ] Bundle size within budget
|
|
||||||
- [ ] Performance targets met
|
|
||||||
|
|
||||||
**Operational Readiness**:
|
|
||||||
- [ ] CI/CD pipeline operational
|
|
||||||
- [ ] Monitoring and logging configured
|
|
||||||
- [ ] Documentation complete
|
|
||||||
|
|
||||||
**Business Metrics**:
|
|
||||||
- [ ] [Key business metrics from role analyses]
|
|
||||||
```
|
|
||||||
|
|
||||||
### 6.3. TODO_LIST.md Structure
|
|
||||||
A simple Markdown file for tracking the status of each task.
|
|
||||||
|
|
||||||
```markdown
|
|
||||||
# Tasks: [Session Topic]
|
|
||||||
|
|
||||||
## Task Progress
|
|
||||||
▸ **IMPL-001**: [Main Task Group] → [📋](./.task/IMPL-001.json)
|
|
||||||
- [ ] **IMPL-001.1**: [Subtask] → [📋](./.task/IMPL-001.1.json)
|
|
||||||
- [x] **IMPL-001.2**: [Subtask] → [📋](./.task/IMPL-001.2.json) | [✅](./.summaries/IMPL-001.2-summary.md)
|
|
||||||
|
|
||||||
- [x] **IMPL-002**: [Simple Task] → [📋](./.task/IMPL-002.json) | [✅](./.summaries/IMPL-002-summary.md)
|
|
||||||
|
|
||||||
## Status Legend
|
|
||||||
- `▸` = Container task (has subtasks)
|
|
||||||
- `- [ ]` = Pending leaf task
|
|
||||||
- `- [x]` = Completed leaf task
|
|
||||||
- Maximum 2 levels: Main tasks and subtasks only
|
|
||||||
```
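
Because the checklist uses plain `- [ ]` / `- [x]` markers, progress can be derived with a small parser; a sketch under that assumption (container `▸` lines are ignored since only leaf tasks carry a checkbox):

```javascript
const fs = require('fs');

// Count pending vs. completed leaf tasks in TODO_LIST.md.
function todoProgress(todoListPath) {
  const lines = fs.readFileSync(todoListPath, 'utf8').split('\n');
  const done = lines.filter(l => /^\s*- \[x\]/i.test(l)).length;
  const pending = lines.filter(l => /^\s*- \[ \]/.test(l)).length;
  return { done, pending, total: done + pending };
}
```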
|
|
||||||
|
|
||||||
### 6.4. Output Files Diagram
|
|
||||||
The command organizes outputs into a standard directory structure.
|
|
||||||
```
|
|
||||||
.workflow/active/{session-id}/
|
|
||||||
├── IMPL_PLAN.md # Implementation plan
|
|
||||||
├── TODO_LIST.md # Progress tracking
|
|
||||||
├── .task/
|
|
||||||
│ ├── IMPL-1.json # Container task
|
|
||||||
│ ├── IMPL-1.1.json # Leaf task with flow_control
|
|
||||||
│ └── IMPL-1.2.json # Leaf task with flow_control
|
|
||||||
├── .brainstorming # Input artifacts from brainstorm + synthesis
|
|
||||||
│ ├── guidance-specification.md # Finalized decisions (with resolved conflicts if any)
|
|
||||||
│ └── {role}/analysis*.md # Role analyses (enhanced by synthesis, with resolved conflicts if any)
|
|
||||||
└── .process/
|
|
||||||
└── context-package.json # Input from context-gather (smart context + conflict_risk status)
|
|
||||||
```
|
|
||||||
|
|
||||||
## 7. Artifact Integration
|
|
||||||
The command intelligently detects and integrates artifacts from the `.brainstorming/` directory.
|
|
||||||
|
|
||||||
#### Artifact Priority
|
|
||||||
1. **context-package.json** (critical): Primary source - smart context AND all brainstorm artifact paths in `brainstorm_artifacts` section + conflict_risk status
|
|
||||||
2. **role/analysis*.md** (highest): Paths from context-package.json → role-specific requirements, design specs, enhanced by synthesis, with resolved conflicts applied in-place
|
|
||||||
3. **guidance-specification.md** (high): Path from context-package.json → finalized decisions with resolved conflicts (if conflict_risk was >= medium)
|
|
||||||
|
|
||||||
#### Artifact-Task Mapping
|
|
||||||
Artifacts are mapped to tasks based on their relevance to the task's domain.
|
|
||||||
- **Role analysis.md files**: Primary requirements source - all relevant role analyses included based on task type
|
|
||||||
- **ui-designer/analysis.md**: Mapped to UI/Frontend tasks for design tokens, layouts, components
|
|
||||||
- **system-architect/analysis.md**: Mapped to Architecture/Backend tasks for ADRs, APIs, patterns
|
|
||||||
- **subject-matter-expert/analysis.md**: Mapped to tasks related to domain logic or standards
|
|
||||||
- **data-architect/analysis.md**: Mapped to tasks involving data models, schemas, or APIs
|
|
||||||
- **product-manager/analysis.md**: Mapped to all tasks for business requirements and user stories
|
|
||||||
|
|
||||||
This ensures that each task has access to the most relevant and detailed specifications from role-specific analyses.
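
A simplified sketch of that mapping, assuming the `meta.type` values from Section 6.1 and the role names listed above; real relevance selection would be richer than a static lookup:

```javascript
// Static lookup from task type to the role analyses most relevant to it (illustrative only).
const ROLE_MAP = {
  feature:  ['product-manager', 'system-architect', 'ui-designer', 'ux-expert'],
  refactor: ['system-architect'],
  bugfix:   ['system-architect', 'subject-matter-expert'],
  docs:     ['product-manager']
};

// roleAnalyses: [{ role: 'system-architect', path: '.../analysis.md' }, ...]
function selectArtifactsForTask(task, roleAnalyses) {
  const relevantRoles = ROLE_MAP[task.meta.type] || ['product-manager'];
  return roleAnalyses.filter(a => relevantRoles.includes(a.role));
}
```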
|
|
||||||
|
|
||||||
## 8. Error Handling
|
|
||||||
|
|
||||||
### Input Validation Errors
|
|
||||||
| Error | Cause | Resolution |
|
|
||||||
|-------|-------|------------|
|
|
||||||
| Session not found | Invalid session ID | Verify session exists |
|
|
||||||
| Context missing | Incomplete planning | Run context-gather first |
|
|
||||||
| Invalid format | Corrupted results | Regenerate analysis |
|
|
||||||
|
|
||||||
### Task Generation Errors
|
|
||||||
| Error | Cause | Resolution |
|
|
||||||
|-------|-------|------------|
|
|
||||||
| Count exceeds limit | >10 tasks | Re-scope requirements |
|
|
||||||
| Invalid structure | Missing fields | Fix analysis results |
|
|
||||||
| Dependency cycle | Circular refs | Adjust dependencies |
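
For the dependency-cycle case, one way to detect the problem before execution is a depth-first walk over `context.depends_on` references; a minimal sketch:

```javascript
// Detect circular depends_on references among generated tasks (DFS with a visitation state).
function findDependencyCycle(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]));
  const state = new Map(); // task id → 'visiting' | 'done'

  function visit(id, trail) {
    if (state.get(id) === 'done') return null;
    if (state.get(id) === 'visiting') return [...trail, id]; // cycle closed
    state.set(id, 'visiting');
    for (const dep of byId.get(id)?.context?.depends_on || []) {
      const cycle = visit(dep, [...trail, id]);
      if (cycle) return cycle;
    }
    state.set(id, 'done');
    return null;
  }

  for (const task of tasks) {
    const cycle = visit(task.id, []);
    if (cycle) return cycle; // e.g. ['IMPL-1', 'IMPL-2', 'IMPL-1']
  }
  return null;
}
```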
|
|
||||||
|
|
||||||
### Artifact Integration Errors
|
|
||||||
| Error | Cause | Recovery |
|
|
||||||
|-------|-------|----------|
|
|
||||||
| Artifact not found | Missing output | Continue without artifacts |
|
|
||||||
| Invalid format | Corrupted file | Skip artifact loading |
|
|
||||||
| Path invalid | Moved/deleted | Update references |
|
|
||||||
|
|
||||||
## 9. Usage & Related Commands
|
|
||||||
|
|
||||||
**Basic Usage**:
|
|
||||||
```bash
|
|
||||||
/workflow:tools:task-generate --session WFS-auth [--cli-execute]
|
|
||||||
```
|
|
||||||
|
|
||||||
**Workflow Integration**:
|
|
||||||
- Called by: `/workflow:plan` (task generation phase)
|
|
||||||
- Followed by: `/workflow:execute`, `/workflow:status`
|
|
||||||
|
|
||||||
**Related Commands**:
|
|
||||||
- `/workflow:plan` - Orchestrates entire planning workflow
|
|
||||||
- `/workflow:tools:context-gather` - Provides context package input
|
|
||||||
- `/workflow:tools:conflict-resolution` - Provides conflict resolution (if needed)
|
|
||||||
- `/workflow:execute` - Executes generated tasks
|
|
||||||
@@ -17,6 +17,38 @@ Analyze test coverage and verify Red-Green-Refactor cycle execution for TDD work
|
|||||||
- Verify TDD cycle execution (Red -> Green -> Refactor)
|
- Verify TDD cycle execution (Red -> Green -> Refactor)
|
||||||
- Generate coverage and cycle reports
|
- Generate coverage and cycle reports
|
||||||
|
|
||||||
|
## Execution Process
|
||||||
|
|
||||||
|
```
|
||||||
|
Input Parsing:
|
||||||
|
├─ Parse flags: --session
|
||||||
|
└─ Validation: session_id REQUIRED
|
||||||
|
|
||||||
|
Phase 1: Extract Test Tasks
|
||||||
|
└─ Find TEST-*.json files and extract focus_paths
|
||||||
|
|
||||||
|
Phase 2: Run Test Suite
|
||||||
|
└─ Decision (test framework):
|
||||||
|
├─ Node.js → npm test --coverage --json
|
||||||
|
├─ Python → pytest --cov --json-report
|
||||||
|
└─ Other → [test_command] --coverage --json
|
||||||
|
|
||||||
|
Phase 3: Parse Coverage Data
|
||||||
|
├─ Extract line coverage percentage
|
||||||
|
├─ Extract branch coverage percentage
|
||||||
|
├─ Extract function coverage percentage
|
||||||
|
└─ Identify uncovered lines/branches
|
||||||
|
|
||||||
|
Phase 4: Verify TDD Cycle
|
||||||
|
└─ FOR each TDD chain (TEST-N.M → IMPL-N.M → REFACTOR-N.M):
|
||||||
|
├─ Red Phase: Verify tests created and failed initially
|
||||||
|
├─ Green Phase: Verify tests now pass
|
||||||
|
└─ Refactor Phase: Verify code quality improved
|
||||||
|
|
||||||
|
Phase 5: Generate Analysis Report
|
||||||
|
└─ Create tdd-cycle-report.md with coverage metrics and cycle verification
|
||||||
|
```
|
||||||
|
|
||||||
## Execution Lifecycle
|
## Execution Lifecycle
|
||||||
|
|
||||||
### Phase 1: Extract Test Tasks
|
### Phase 1: Extract Test Tasks
|
||||||
|
|||||||
@@ -1,6 +1,6 @@
|
|||||||
---
|
---
|
||||||
name: test-concept-enhanced
|
name: test-concept-enhanced
|
||||||
description: Analyze test requirements and generate test generation strategy using Gemini with test-context package
|
description: Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini
|
||||||
argument-hint: "--session WFS-test-session-id --context path/to/test-context-package.json"
|
argument-hint: "--session WFS-test-session-id --context path/to/test-context-package.json"
|
||||||
examples:
|
examples:
|
||||||
- /workflow:tools:test-concept-enhanced --session WFS-test-auth --context .workflow/active/WFS-test-auth/.process/test-context-package.json
|
- /workflow:tools:test-concept-enhanced --session WFS-test-auth --context .workflow/active/WFS-test-auth/.process/test-context-package.json
|
||||||
@@ -9,7 +9,7 @@ examples:
|
|||||||
# Test Concept Enhanced Command
|
# Test Concept Enhanced Command
|
||||||
|
|
||||||
## Overview
|
## Overview
|
||||||
Specialized analysis tool for test generation workflows that uses Gemini to analyze test coverage gaps, implementation context, and generate comprehensive test generation strategies.
|
Workflow coordinator that delegates test analysis to cli-execution-agent. Agent executes Gemini to analyze test coverage gaps, implementation context, and generate comprehensive test generation strategies.
|
||||||
|
|
||||||
## Core Philosophy
|
## Core Philosophy
|
||||||
- **Coverage-Driven**: Focus on identified test gaps from context analysis
|
- **Coverage-Driven**: Focus on identified test gaps from context analysis
|
||||||
@@ -19,15 +19,39 @@ Specialized analysis tool for test generation workflows that uses Gemini to anal
|
|||||||
- **No Code Generation**: Strategy and planning only, actual test generation happens in task execution
|
- **No Code Generation**: Strategy and planning only, actual test generation happens in task execution
|
||||||
|
|
||||||
## Core Responsibilities
|
## Core Responsibilities
|
||||||
- Parse test-context-package.json from test-context-gather
|
- Coordinate test analysis workflow using cli-execution-agent
|
||||||
- Analyze implementation summaries and coverage gaps
|
- Validate test-context-package.json prerequisites
|
||||||
- Study existing test patterns and conventions
|
- Execute Gemini analysis via agent for test strategy generation
|
||||||
- Generate test generation strategy using Gemini
|
- Validate agent outputs (gemini-test-analysis.md, TEST_ANALYSIS_RESULTS.md)
|
||||||
- Produce TEST_ANALYSIS_RESULTS.md for task generation
|
|
||||||
|
## Execution Process
|
||||||
|
|
||||||
|
```
|
||||||
|
Input Parsing:
|
||||||
|
├─ Parse flags: --session, --context
|
||||||
|
└─ Validation: Both REQUIRED
|
||||||
|
|
||||||
|
Phase 1: Context Preparation (Command)
|
||||||
|
├─ Load workflow-session.json
|
||||||
|
├─ Verify test session type is "test-gen"
|
||||||
|
├─ Validate test-context-package.json
|
||||||
|
└─ Determine strategy (Simple: 1-3 files | Medium: 4-6 | Complex: >6)
|
||||||
|
|
||||||
|
Phase 2: Test Analysis Execution (Agent)
|
||||||
|
├─ Execute Gemini analysis via cli-execution-agent
|
||||||
|
└─ Generate TEST_ANALYSIS_RESULTS.md
|
||||||
|
|
||||||
|
Phase 3: Output Validation (Command)
|
||||||
|
├─ Verify gemini-test-analysis.md exists
|
||||||
|
├─ Validate TEST_ANALYSIS_RESULTS.md
|
||||||
|
└─ Confirm test requirements are actionable
|
||||||
|
```
|
||||||
|
|
||||||
## Execution Lifecycle
|
## Execution Lifecycle
|
||||||
|
|
||||||
### Phase 1: Validation & Preparation
|
### Phase 1: Context Preparation (Command Responsibility)
|
||||||
|
|
||||||
|
**Command prepares session context and validates prerequisites.**
|
||||||
|
|
||||||
1. **Session Validation**
|
1. **Session Validation**
|
||||||
- Load `.workflow/active/{test_session_id}/workflow-session.json`
|
- Load `.workflow/active/{test_session_id}/workflow-session.json`
|
||||||
@@ -40,423 +64,100 @@ Specialized analysis tool for test generation workflows that uses Gemini to anal
|
|||||||
- Extract coverage gaps and framework details
|
- Extract coverage gaps and framework details
|
||||||
|
|
||||||
3. **Strategy Determination**
|
3. **Strategy Determination**
|
||||||
- **Simple Test Generation** (1-3 files): Single Gemini analysis
|
- **Simple** (1-3 files): Single Gemini analysis
|
||||||
- **Medium Test Generation** (4-6 files): Gemini comprehensive analysis
|
- **Medium** (4-6 files): Comprehensive analysis
|
||||||
- **Complex Test Generation** (>6 files): Gemini analysis with modular approach
|
- **Complex** (>6 files): Modular analysis approach
|
||||||
|
|
||||||
### Phase 2: Gemini Test Analysis
|
### Phase 2: Test Analysis Execution (Agent Responsibility)
|
||||||
|
|
||||||
**Tool Configuration**:
|
**Purpose**: Analyze test coverage gaps and generate comprehensive test strategy.
|
||||||
```bash
|
|
||||||
cd .workflow/active/{test_session_id}/.process && gemini -p "
|
|
||||||
PURPOSE: Analyze test coverage gaps and design comprehensive test generation strategy
|
|
||||||
TASK: Study implementation context, existing tests, and generate test requirements for missing coverage
|
|
||||||
MODE: analysis
|
|
||||||
CONTEXT: @{.workflow/active/{test_session_id}/.process/test-context-package.json}
|
|
||||||
|
|
||||||
**MANDATORY FIRST STEP**: Read and analyze test-context-package.json to understand:
|
**Agent Invocation**:
|
||||||
- Test coverage gaps from test_coverage.missing_tests[]
|
```javascript
|
||||||
- Implementation context from source_context.implementation_summaries[]
|
Task(
|
||||||
- Existing test patterns from test_framework.conventions
|
subagent_type="cli-execution-agent",
|
||||||
- Changed files requiring tests from source_context.implementation_summaries[].changed_files
|
description="Analyze test coverage gaps and generate test strategy",
|
||||||
|
prompt=`
|
||||||
|
## TASK OBJECTIVE
|
||||||
|
Analyze test requirements and generate comprehensive test generation strategy using Gemini CLI
|
||||||
|
|
||||||
**ANALYSIS REQUIREMENTS**:
|
## EXECUTION CONTEXT
|
||||||
|
Session: {test_session_id}
|
||||||
|
Source Session: {source_session_id}
|
||||||
|
Working Dir: .workflow/active/{test_session_id}/.process
|
||||||
|
Template: ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt
|
||||||
|
|
||||||
1. **Implementation Understanding**
|
## EXECUTION STEPS
|
||||||
- Load all implementation summaries from source session
|
1. Execute Gemini analysis:
|
||||||
- Understand implemented features, APIs, and business logic
|
cd .workflow/active/{test_session_id}/.process && gemini -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --approval-mode yolo
|
||||||
- Extract key functions, classes, and modules
|
|
||||||
- Identify integration points and dependencies
|
|
||||||
|
|
||||||
2. **Existing Test Pattern Analysis**
|
2. Generate TEST_ANALYSIS_RESULTS.md:
|
||||||
- Study existing test files for patterns and conventions
|
Synthesize gemini-test-analysis.md into standardized format for task generation
|
||||||
- Identify test structure (describe/it, test suites, fixtures)
|
Include: coverage assessment, test framework, test requirements, generation strategy, implementation targets
|
||||||
- Analyze assertion patterns and mocking strategies
|
|
||||||
- Extract test setup/teardown patterns
|
|
||||||
|
|
||||||
3. **Coverage Gap Assessment**
|
## EXPECTED OUTPUTS
|
||||||
- For each file in missing_tests[], analyze:
|
1. gemini-test-analysis.md - Raw Gemini analysis
|
||||||
- File purpose and functionality
|
2. TEST_ANALYSIS_RESULTS.md - Standardized test requirements document
|
||||||
- Public APIs requiring test coverage
|
|
||||||
- Critical paths and edge cases
|
|
||||||
- Integration points requiring tests
|
|
||||||
- Prioritize tests: high (core logic), medium (utilities), low (helpers)
|
|
||||||
|
|
||||||
4. **Test Requirements Specification**
|
## QUALITY VALIDATION
|
||||||
- For each missing test file, specify:
|
- Both output files exist and are complete
|
||||||
- **Test scope**: What needs to be tested
|
- All required sections present in TEST_ANALYSIS_RESULTS.md
|
||||||
- **Test scenarios**: Happy path, error cases, edge cases, integration
|
- Test requirements are actionable and quantified
|
||||||
- **Test data**: Required fixtures, mocks, test data
|
- Test scenarios cover happy path, errors, edge cases
|
||||||
- **Dependencies**: External services, databases, APIs to mock
|
- Dependencies and mocks clearly identified
|
||||||
- **Coverage targets**: Functions/methods requiring tests
|
`
|
||||||
|
)
|
||||||
5. **Test Generation Strategy**
|
|
||||||
- Determine test generation approach for each file
|
|
||||||
- Identify reusable test patterns from existing tests
|
|
||||||
- Plan test data and fixture requirements
|
|
||||||
- Define mocking strategy for dependencies
|
|
||||||
- Specify expected test file structure
|
|
||||||
|
|
||||||
EXPECTED OUTPUT - Write to gemini-test-analysis.md:
|
|
||||||
|
|
||||||
# Test Generation Analysis
|
|
||||||
|
|
||||||
## 1. Implementation Context Summary
|
|
||||||
- **Source Session**: {source_session_id}
|
|
||||||
- **Implemented Features**: {feature_summary}
|
|
||||||
- **Changed Files**: {list_of_implementation_files}
|
|
||||||
- **Tech Stack**: {technologies_used}
|
|
||||||
|
|
||||||
## 2. Test Coverage Assessment
|
|
||||||
- **Existing Tests**: {count} files
|
|
||||||
- **Missing Tests**: {count} files
|
|
||||||
- **Coverage Percentage**: {percentage}%
|
|
||||||
- **Priority Breakdown**:
|
|
||||||
- High Priority: {count} files (core business logic)
|
|
||||||
- Medium Priority: {count} files (utilities, helpers)
|
|
||||||
- Low Priority: {count} files (configuration, constants)
|
|
||||||
|
|
||||||
## 3. Existing Test Pattern Analysis
|
|
||||||
- **Test Framework**: {framework_name_and_version}
|
|
||||||
- **File Naming Convention**: {pattern}
|
|
||||||
- **Test Structure**: {describe_it_or_other}
|
|
||||||
- **Assertion Style**: {expect_assert_should}
|
|
||||||
- **Mocking Strategy**: {mocking_framework_and_patterns}
|
|
||||||
- **Setup/Teardown**: {beforeEach_afterEach_patterns}
|
|
||||||
- **Test Data**: {fixtures_factories_builders}
|
|
||||||
|
|
||||||
## 4. Test Requirements by File
|
|
||||||
|
|
||||||
### File: {implementation_file_path}
|
|
||||||
**Test File**: {suggested_test_file_path}
|
|
||||||
**Priority**: {high|medium|low}
|
|
||||||
|
|
||||||
#### Scope
|
|
||||||
- {description_of_what_needs_testing}
|
|
||||||
|
|
||||||
#### Test Scenarios
|
|
||||||
1. **Happy Path Tests**
|
|
||||||
- {scenario_1}
|
|
||||||
- {scenario_2}
|
|
||||||
|
|
||||||
2. **Error Handling Tests**
|
|
||||||
- {error_scenario_1}
|
|
||||||
- {error_scenario_2}
|
|
||||||
|
|
||||||
3. **Edge Case Tests**
|
|
||||||
- {edge_case_1}
|
|
||||||
- {edge_case_2}
|
|
||||||
|
|
||||||
4. **Integration Tests** (if applicable)
|
|
||||||
- {integration_scenario_1}
|
|
||||||
- {integration_scenario_2}
|
|
||||||
|
|
||||||
#### Test Data & Fixtures
|
|
||||||
- {required_test_data}
|
|
||||||
- {required_mocks}
|
|
||||||
- {required_fixtures}
|
|
||||||
|
|
||||||
#### Dependencies to Mock
|
|
||||||
- {external_service_1}
|
|
||||||
- {external_service_2}
|
|
||||||
|
|
||||||
#### Coverage Targets
|
|
||||||
- Function: {function_name} - {test_requirements}
|
|
||||||
- Function: {function_name} - {test_requirements}
|
|
||||||
|
|
||||||
---
|
|
||||||
[Repeat for each missing test file]
|
|
||||||
---
|
|
||||||
|
|
||||||
## 5. Test Generation Strategy
|
|
||||||
|
|
||||||
### Overall Approach
|
|
||||||
- {strategy_description}
|
|
||||||
|
|
||||||
### Test Generation Order
|
|
||||||
1. {file_1} - {rationale}
|
|
||||||
2. {file_2} - {rationale}
|
|
||||||
3. {file_3} - {rationale}
|
|
||||||
|
|
||||||
### Reusable Patterns
|
|
||||||
- {pattern_1_from_existing_tests}
|
|
||||||
- {pattern_2_from_existing_tests}
|
|
||||||
|
|
||||||
### Test Data Strategy
|
|
||||||
- {approach_to_test_data_and_fixtures}
|
|
||||||
|
|
||||||
### Mocking Strategy
|
|
||||||
- {approach_to_mocking_dependencies}
|
|
||||||
|
|
||||||
### Quality Criteria
|
|
||||||
- Code coverage target: {percentage}%
|
|
||||||
- Test scenarios per function: {count}
|
|
||||||
- Integration test coverage: {approach}
|
|
||||||
|
|
||||||
## 6. Implementation Targets
|
|
||||||
|
|
||||||
**Purpose**: Identify new test files to create
|
|
||||||
|
|
||||||
**Format**: New test files only (no existing files to modify)
|
|
||||||
|
|
||||||
**Test Files to Create**:
|
|
||||||
1. **Target**: `tests/auth/TokenValidator.test.ts`
|
|
||||||
- **Type**: Create new test file
|
|
||||||
- **Purpose**: Test TokenValidator class
|
|
||||||
- **Scenarios**: 15 test cases covering validation logic, error handling, edge cases
|
|
||||||
- **Dependencies**: Mock JWT library, test fixtures for tokens
|
|
||||||
|
|
||||||
2. **Target**: `tests/middleware/errorHandler.test.ts`
|
|
||||||
- **Type**: Create new test file
|
|
||||||
- **Purpose**: Test error handling middleware
|
|
||||||
- **Scenarios**: 8 test cases for different error types and response formats
|
|
||||||
- **Dependencies**: Mock Express req/res/next, error fixtures
|
|
||||||
|
|
||||||
[List all test files to create]
|
|
||||||
|
|
||||||
## 7. Success Metrics
|
|
||||||
- **Test Coverage Goal**: {target_percentage}%
|
|
||||||
- **Test Quality**: All scenarios covered (happy, error, edge, integration)
|
|
||||||
- **Convention Compliance**: Follow existing test patterns
|
|
||||||
- **Maintainability**: Clear test descriptions, reusable fixtures
|
|
||||||
|
|
||||||
RULES:
|
|
||||||
- Focus on TEST REQUIREMENTS and GENERATION STRATEGY, NOT code generation
|
|
||||||
- Study existing test patterns thoroughly for consistency
|
|
||||||
- Prioritize critical business logic tests
|
|
||||||
- Specify clear test scenarios and coverage targets
|
|
||||||
- Identify all dependencies requiring mocks
|
|
||||||
- **MUST write output to .workflow/active/{test_session_id}/.process/gemini-test-analysis.md**
|
|
||||||
- Do NOT generate actual test code or implementation
|
|
||||||
- Output ONLY test analysis and generation strategy
|
|
||||||
" --approval-mode yolo
|
|
||||||
```
|
```
|
||||||
|
|
||||||
**Output Location**: `.workflow/active/{test_session_id}/.process/gemini-test-analysis.md`
|
**Output Files**:
|
||||||
|
- `.workflow/active/{test_session_id}/.process/gemini-test-analysis.md`
|
||||||
|
- `.workflow/active/{test_session_id}/.process/TEST_ANALYSIS_RESULTS.md`
|
||||||
|
|
||||||
### Phase 3: Results Synthesis
|
### Phase 3: Output Validation (Command Responsibility)
|
||||||
|
|
||||||
1. **Output Validation**
|
**Command validates agent outputs.**
|
||||||
- Verify `gemini-test-analysis.md` exists and is complete
|
|
||||||
- Validate all required sections present
|
|
||||||
- Check test requirements are actionable
|
|
||||||
|
|
||||||
2. **Quality Assessment**
|
- Verify `gemini-test-analysis.md` exists and is complete
|
||||||
- Test scenarios cover happy path, errors, edge cases
|
- Validate `TEST_ANALYSIS_RESULTS.md` generated by agent
|
||||||
- Dependencies and mocks clearly identified
|
- Check required sections present
|
||||||
- Test generation strategy is practical
|
- Confirm test requirements are actionable
|
||||||
- Coverage targets are reasonable
|
|
||||||
|
|
||||||
### Phase 4: TEST_ANALYSIS_RESULTS.md Generation
|
|
||||||
|
|
||||||
Synthesize Gemini analysis into standardized format:
|
|
||||||
|
|
||||||
```markdown
|
|
||||||
# Test Generation Analysis Results
|
|
||||||
|
|
||||||
## Executive Summary
|
|
||||||
- **Test Session**: {test_session_id}
|
|
||||||
- **Source Session**: {source_session_id}
|
|
||||||
- **Analysis Timestamp**: {timestamp}
|
|
||||||
- **Coverage Gap**: {missing_test_count} files require tests
|
|
||||||
- **Test Framework**: {framework}
|
|
||||||
- **Overall Strategy**: {high_level_approach}
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 1. Coverage Assessment
|
|
||||||
|
|
||||||
### Current Coverage
|
|
||||||
- **Existing Tests**: {count} files
|
|
||||||
- **Implementation Files**: {count} files
|
|
||||||
- **Coverage Percentage**: {percentage}%
|
|
||||||
|
|
||||||
### Missing Tests (Priority Order)
|
|
||||||
1. **High Priority** ({count} files)
|
|
||||||
- {file_1} - {reason}
|
|
||||||
- {file_2} - {reason}
|
|
||||||
|
|
||||||
2. **Medium Priority** ({count} files)
|
|
||||||
- {file_1} - {reason}
|
|
||||||
|
|
||||||
3. **Low Priority** ({count} files)
|
|
||||||
- {file_1} - {reason}
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 2. Test Framework & Conventions
|
|
||||||
|
|
||||||
### Framework Configuration
|
|
||||||
- **Framework**: {framework_name}
|
|
||||||
- **Version**: {version}
|
|
||||||
- **Test Pattern**: {file_pattern}
|
|
||||||
- **Test Directory**: {directory_structure}
|
|
||||||
|
|
||||||
### Conventions
|
|
||||||
- **File Naming**: {convention}
|
|
||||||
- **Test Structure**: {describe_it_blocks}
|
|
||||||
- **Assertions**: {assertion_library}
|
|
||||||
- **Mocking**: {mocking_framework}
|
|
||||||
- **Setup/Teardown**: {beforeEach_afterEach}
|
|
||||||
|
|
||||||
### Example Pattern (from existing tests)
|
|
||||||
```
|
|
||||||
{example_test_structure_from_analysis}
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 3. Test Requirements by File
|
|
||||||
|
|
||||||
[For each missing test, include:]
|
|
||||||
|
|
||||||
### Test File: {test_file_path}
|
|
||||||
**Implementation**: {implementation_file}
|
|
||||||
**Priority**: {high|medium|low}
|
|
||||||
**Estimated Test Count**: {count}
|
|
||||||
|
|
||||||
#### Test Scenarios
|
|
||||||
1. **Happy Path**: {scenarios}
|
|
||||||
2. **Error Handling**: {scenarios}
|
|
||||||
3. **Edge Cases**: {scenarios}
|
|
||||||
4. **Integration**: {scenarios}
|
|
||||||
|
|
||||||
#### Dependencies & Mocks
|
|
||||||
- {dependency_1_to_mock}
|
|
||||||
- {dependency_2_to_mock}
|
|
||||||
|
|
||||||
#### Test Data Requirements
|
|
||||||
- {fixture_1}
|
|
||||||
- {fixture_2}
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 4. Test Generation Strategy
|
|
||||||
|
|
||||||
### Generation Approach
|
|
||||||
{overall_strategy_description}
|
|
||||||
|
|
||||||
### Generation Order
|
|
||||||
1. {test_file_1} - {rationale}
|
|
||||||
2. {test_file_2} - {rationale}
|
|
||||||
3. {test_file_3} - {rationale}
|
|
||||||
|
|
||||||
### Reusable Components
|
|
||||||
- **Test Fixtures**: {common_fixtures}
|
|
||||||
- **Mock Patterns**: {common_mocks}
|
|
||||||
- **Helper Functions**: {test_helpers}
|
|
||||||
|
|
||||||
### Quality Targets
|
|
||||||
- **Coverage Goal**: {percentage}%
|
|
||||||
- **Scenarios per Function**: {min_count}
|
|
||||||
- **Integration Coverage**: {approach}
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 5. Implementation Targets
|
|
||||||
|
|
||||||
**Purpose**: New test files to create (code-developer will generate these)
|
|
||||||
|
|
||||||
**Test Files to Create**:
|
|
||||||
|
|
||||||
1. **Target**: `tests/auth/TokenValidator.test.ts`
|
|
||||||
- **Implementation Source**: `src/auth/TokenValidator.ts`
|
|
||||||
- **Test Scenarios**: 15 (validation, error handling, edge cases)
|
|
||||||
- **Dependencies**: Mock JWT library, token fixtures
|
|
||||||
- **Priority**: High
|
|
||||||
|
|
||||||
2. **Target**: `tests/middleware/errorHandler.test.ts`
|
|
||||||
- **Implementation Source**: `src/middleware/errorHandler.ts`
|
|
||||||
- **Test Scenarios**: 8 (error types, response formats)
|
|
||||||
- **Dependencies**: Mock Express, error fixtures
|
|
||||||
- **Priority**: High
|
|
||||||
|
|
||||||
[List all test files with full specifications]
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 6. Success Criteria
|
|
||||||
|
|
||||||
### Coverage Metrics
|
|
||||||
- Achieve {target_percentage}% code coverage
|
|
||||||
- All public APIs have tests
|
|
||||||
- Critical paths fully covered
|
|
||||||
|
|
||||||
### Quality Standards
|
|
||||||
- All test scenarios covered (happy, error, edge, integration)
|
|
||||||
- Follow existing test conventions
|
|
||||||
- Clear test descriptions and assertions
|
|
||||||
- Maintainable test structure
|
|
||||||
|
|
||||||
### Validation Approach
|
|
||||||
- Run full test suite after generation
|
|
||||||
- Verify coverage with coverage tool
|
|
||||||
- Manual review of test quality
|
|
||||||
- Integration test validation
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 7. Reference Information
|
|
||||||
|
|
||||||
### Source Context
|
|
||||||
- **Implementation Summaries**: {paths}
|
|
||||||
- **Existing Tests**: {example_tests}
|
|
||||||
- **Documentation**: {relevant_docs}
|
|
||||||
|
|
||||||
### Analysis Tools
|
|
||||||
- **Gemini Analysis**: gemini-test-analysis.md
|
|
||||||
- **Coverage Tools**: {coverage_tool_if_detected}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Output Location**: `.workflow/active/{test_session_id}/.process/TEST_ANALYSIS_RESULTS.md`
|
|
||||||
|
|
||||||
## Error Handling
|
## Error Handling
|
||||||
|
|
||||||
### Validation Errors
|
### Validation Errors
|
||||||
| Error | Cause | Resolution |
|
| Error | Resolution |
|
||||||
|-------|-------|------------|
|
|-------|------------|
|
||||||
| Missing context package | test-context-gather not run | Run test-context-gather first |
|
| Missing context package | Run test-context-gather first |
|
||||||
| No coverage gaps | All files have tests | Skip test generation, proceed to test execution |
|
| No coverage gaps | Skip test generation, proceed to execution |
|
||||||
| No test framework detected | Missing test dependencies | Request user to configure test framework |
|
| No test framework detected | Configure test framework |
|
||||||
| Invalid source session | Source session incomplete | Complete implementation first |
|
| Invalid source session | Complete implementation first |
|
||||||
|
|
||||||
### Gemini Execution Errors
|
### Execution Errors
|
||||||
| Error | Cause | Recovery |
|
| Error | Recovery |
|
||||||
|-------|-------|----------|
|
|-------|----------|
|
||||||
| Timeout | Large project analysis | Reduce scope, analyze by module |
|
| Gemini timeout | Reduce scope, analyze by module |
|
||||||
| Output incomplete | Token limit exceeded | Retry with focused analysis |
|
| Output incomplete | Retry with focused analysis |
|
||||||
| No output file | Write permission error | Check directory permissions |
|
| No output file | Check directory permissions |
|
||||||
|
|
||||||
### Fallback Strategy
|
**Fallback Strategy**: Generate basic TEST_ANALYSIS_RESULTS.md from context package if Gemini fails
|
||||||
- If Gemini fails, generate basic TEST_ANALYSIS_RESULTS.md from context package
|
|
||||||
- Use coverage gaps and framework info to create minimal requirements
|
|
||||||
- Provide guidance for manual test planning
|
|
||||||
## Integration & Usage

### Command Chain
- **Called By**: `/workflow:test-gen` (Phase 4: Analysis)
- **Requires**: `test-context-package.json` from `/workflow:tools:test-context-gather`
- **Followed By**: `/workflow:tools:test-task-generate`

### Performance
- Focused analysis: Only analyze files with missing tests
- Pattern reuse: Study existing tests for quick extraction
- Timeout: 20-minute limit for analysis

### Success Criteria
- Valid TEST_ANALYSIS_RESULTS.md generated
- All missing tests documented with actionable requirements
- Test scenarios cover happy path, errors, edge cases, integration
- Dependencies and mocks clearly identified
- Test generation strategy is practical
- Output follows existing test conventions

@@ -24,6 +24,36 @@ Orchestrator command that invokes `test-context-search-agent` to gather comprehe
- **Source Context Loading**: Import implementation summaries from source session
- **Standardized Output**: Generate `.workflow/active/{test_session_id}/.process/test-context-package.json`

## Execution Process

```
Input Parsing:
├─ Parse flags: --session
└─ Validation: test_session_id REQUIRED

Step 1: Test-Context-Package Detection
└─ Decision (existing package):
   ├─ Valid package exists → Return existing (skip execution)
   └─ No valid package → Continue to Step 2

Step 2: Invoke Test-Context-Search Agent
├─ Phase 1: Session Validation & Source Context Loading
│  ├─ Detection: Check for existing test-context-package
│  ├─ Test session validation
│  └─ Source context loading (summaries, changed files)
├─ Phase 2: Test Coverage Analysis
│  ├─ Track 1: Existing test discovery
│  ├─ Track 2: Coverage gap analysis
│  └─ Track 3: Coverage statistics
└─ Phase 3: Framework Detection & Packaging
   ├─ Framework identification
   ├─ Convention analysis
   └─ Generate test-context-package.json

Step 3: Output Verification
└─ Verify test-context-package.json created
```
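A minimal sketch of the Step 1 detection decision, in the same pseudocode style; the existence helper and the validity check shown (non-empty JSON with a `metadata` field) are assumptions, not the command's actual rule:

```javascript
// Step 1: skip the agent entirely when a valid package already exists
const pkgPath = `.workflow/active/${test_session_id}/.process/test-context-package.json`

if (fileExists(pkgPath)) {                 // hypothetical existence helper
  const pkg = JSON.parse(Read(pkgPath))
  if (pkg && pkg.metadata) {               // assumed minimal validity check
    return pkg                             // reuse the existing package, skip Step 2
  }
}
// No valid package → continue to Step 2: invoke test-context-search-agent
```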
## Execution Flow

### Step 1: Test-Context-Package Detection
@@ -1,416 +1,256 @@
---
name: test-task-generate
description: Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests
argument-hint: "--session WFS-test-session-id"
examples:
  - /workflow:tools:test-task-generate --session WFS-test-auth
---

# Generate Test Planning Documents Command

## Overview
Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent. This command produces **test planning artifacts only** - it does NOT execute tests or implement code. Actual test execution requires a separate execution command (e.g., /workflow:test-cycle-execute).

## Core Philosophy
- **Planning Only**: Generate test planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) - does NOT execute tests
- **Agent-Driven Document Generation**: Delegate test plan generation to action-planning-agent
- **Two-Phase Flow**: Context Preparation (command) → Test Document Generation (agent)
- **Memory-First**: Reuse loaded documents from conversation memory
- **MCP-Enhanced**: Use MCP tools for test pattern research and analysis
- **Path Clarity**: All `focus_paths` prefer absolute paths (e.g., `D:\\project\\src\\module`), or clear relative paths from project root
- **Leverage Existing Test Infrastructure**: Prioritize using established testing frameworks and tools present in the project

## Test-Specific Execution Modes

### Test Generation (IMPL-001)
- **Agent Mode** (default): @code-developer generates tests within agent context
- **CLI Mode**: Use CLI tools when `command` field present in implementation_approach (determined semantically)

### Test Execution & Fix (IMPL-002+)
- **Agent Mode** (default): Gemini diagnosis → agent applies fixes
- **CLI Mode**: Gemini diagnosis → CLI applies fixes (when `command` field present in implementation_approach)
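A sketch of how the two modes differ at the task-JSON level. The step shape below (`step`, `action`, `command`) is illustrative, and the `codex exec` invocation is only an example of a user-requested CLI command, not a prescribed one:

```javascript
// IMPL-001 flow_control.implementation_approach - agent mode vs. CLI mode
const agentModeStep = {
  step: "generate-unit-tests",
  action: "Generate unit tests for src/auth/login.ts per TEST_ANALYSIS_RESULTS.md"
  // no `command` field → @code-developer executes this step itself
}

const cliModeStep = {
  step: "generate-unit-tests",
  action: "Generate unit tests for src/auth/login.ts per TEST_ANALYSIS_RESULTS.md",
  // `command` field present → the step runs through the user-requested CLI tool
  command: "codex exec \"Generate jest unit tests for src/auth/login.ts\""
}
```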
## Execution Process

```
Input Parsing:
├─ Parse flags: --session
└─ Validation: session_id REQUIRED

Phase 1: Context Preparation (Command)
├─ Assemble test session paths
│  ├─ session_metadata_path
│  ├─ test_analysis_results_path (REQUIRED)
│  └─ test_context_package_path
└─ Provide metadata (session_id, source_session_id)

Phase 2: Test Document Generation (Agent)
├─ Load TEST_ANALYSIS_RESULTS.md as primary requirements source
├─ Generate Test Task JSON Files (.task/IMPL-*.json)
│  ├─ IMPL-001: Test generation (meta.type: "test-gen")
│  └─ IMPL-002+: Test execution & fix (meta.type: "test-fix")
├─ Create IMPL_PLAN.md (test_session variant)
└─ Generate TODO_LIST.md with test phase indicators
```

## Document Generation Lifecycle

### Phase 1: Context Preparation (Command Responsibility)

**Command prepares test session paths and metadata for planning document generation.**

**Test Session Path Structure**:

```
.workflow/active/WFS-test-{session-id}/
├── workflow-session.json            # Test session metadata
├── .process/
│   ├── TEST_ANALYSIS_RESULTS.md     # Test requirements and strategy
│   ├── test-context-package.json    # Test patterns and coverage
│   └── context-package.json         # General context artifacts
├── .task/                           # Output: Test task JSON files
├── IMPL_PLAN.md                     # Output: Test implementation plan
└── TODO_LIST.md                     # Output: Test TODO list
```

**Command Preparation**:

1. **Assemble Test Session Paths** for agent prompt:
   - `session_metadata_path`
   - `test_analysis_results_path` (REQUIRED)
   - `test_context_package_path`
   - Output directory paths

2. **Provide Metadata** (simple values):
   - `session_id`
   - `source_session_id` (if exists)
   - `mcp_capabilities` (available MCP tools)

**Note**: CLI tool usage is now determined semantically from the user's task description, not by flags.
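A minimal sketch of the context the command assembles before invoking the agent, in the same JavaScript pseudocode style; the variable name and object shape are illustrative, since only the individual paths and metadata values are specified above:

```javascript
// Phase 1 (command): assemble paths and metadata for the agent prompt
const sessionDir = `.workflow/active/${test_session_id}`

const agentInputs = {
  session_id: test_session_id,
  source_session_id: sourceSessionId || null,   // if exists
  mcp_capabilities: ["code_index", "exa_code", "exa_web"],

  // Input paths (agent loads them progressively, memory-first)
  session_metadata_path: `${sessionDir}/workflow-session.json`,
  test_analysis_results_path: `${sessionDir}/.process/TEST_ANALYSIS_RESULTS.md`, // REQUIRED
  test_context_package_path: `${sessionDir}/.process/test-context-package.json`,

  // Output paths
  task_dir: `${sessionDir}/.task/`,
  impl_plan_path: `${sessionDir}/IMPL_PLAN.md`,
  todo_list_path: `${sessionDir}/TODO_LIST.md`
}
```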

### Phase 2: Test Document Generation (Agent Responsibility)

**Purpose**: Generate test-specific IMPL_PLAN.md, task JSONs, and TODO_LIST.md - planning documents only, NOT test execution.
**Agent Invocation**:
```javascript
Task(
  subagent_type="action-planning-agent",
  description="Generate test planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
  prompt=`
## TASK OBJECTIVE
Generate test planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) for test workflow session

IMPORTANT: This is TEST PLANNING ONLY - you are generating planning documents, NOT executing tests.

CRITICAL:
- Use existing test frameworks and utilities from the project
- Follow the progressive loading strategy defined in your agent specification (load context incrementally, memory-first)

## AGENT CONFIGURATION REFERENCE
All test task generation rules, schemas, and quality standards are defined in your agent specification:
@.claude/agents/action-planning-agent.md

Refer to your specification for:
- Test Task JSON Schema (6-field structure with test-specific metadata)
- Test IMPL_PLAN.md Structure (test_session variant with test-fix cycle)
- TODO_LIST.md Format (with test phase indicators)
- Progressive Loading Strategy (memory-first, load TEST_ANALYSIS_RESULTS.md as primary source)
- Quality Validation Rules (task count limits, requirement quantification)

## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{test-session-id}/workflow-session.json
- TEST_ANALYSIS_RESULTS: .workflow/active/{test-session-id}/.process/TEST_ANALYSIS_RESULTS.md (REQUIRED - primary requirements source)
- Test Context Package: .workflow/active/{test-session-id}/.process/test-context-package.json
- Context Package: .workflow/active/{test-session-id}/.process/context-package.json
- Source Session Summaries: .workflow/active/{source-session-id}/.summaries/IMPL-*.md (if exists)

Output:
- Task Dir: .workflow/active/{test-session-id}/.task/
- IMPL_PLAN: .workflow/active/{test-session-id}/IMPL_PLAN.md
- TODO_LIST: .workflow/active/{test-session-id}/TODO_LIST.md

## CONTEXT METADATA
Session ID: {test-session-id}
Workflow Type: test_session
Source Session: {source-session-id} (if exists)
MCP Capabilities: {exa_code, exa_web, code_index}

## CLI TOOL SELECTION
Determine CLI tool usage per-step based on the user's task description:
- If user specifies "use Codex/Gemini/Qwen for X" → Add command field to relevant steps
- Default: Agent execution (no command field) unless user explicitly requests CLI

## TEST-SPECIFIC REQUIREMENTS SUMMARY
(Detailed specifications in your agent definition)

### Task Structure Requirements
- Minimum 2 tasks: IMPL-001 (test generation) + IMPL-002 (test execution & fix)
- Expandable for complex projects: Add IMPL-003+ (per-module, integration, E2E tests)

Task Configuration:

IMPL-001 (Test Generation):
- meta.type: "test-gen"
- meta.agent: "@code-developer"
- meta.test_framework: Specify existing framework (e.g., "jest", "vitest", "pytest")
- flow_control: Test generation strategy from TEST_ANALYSIS_RESULTS.md
- CLI execution: Add `command` field when user requests (determined semantically)

IMPL-002+ (Test Execution & Fix):
- meta.type: "test-fix"
- meta.agent: "@test-fix-agent"
- flow_control: Test-fix cycle with iteration limits and diagnosis configuration
- CLI execution: Add `command` field when user requests (determined semantically)

### Test-Fix Cycle Specification (IMPL-002+)
Required flow_control fields:
- max_iterations: 5
- diagnosis_tool: "gemini"
- diagnosis_template: "~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt"
- cycle_pattern: "test → gemini_diagnose → fix → retest"
- exit_conditions: ["all_tests_pass", "max_iterations_reached"]
- auto_revert_on_failure: true
- CLI fix: Add `command` field when user specifies CLI tool usage

### Automation Framework Configuration
Select automation tools based on test requirements from TEST_ANALYSIS_RESULTS.md:
- UI interaction testing → E2E browser automation (meta.e2e_framework)
- API/database integration → integration test tools (meta.test_tools)
- Performance metrics → load testing tools (meta.perf_framework)
- Logic verification → unit test framework (meta.test_framework)

**Tool Selection**: Detect from project config > suggest based on requirements

### TEST_ANALYSIS_RESULTS.md Mapping
PRIMARY requirements source - extract and map to task JSONs:
- Test framework config → meta.test_framework (use existing framework from project)
- Existing test utilities → flow_control.reusable_test_tools (discovered test helpers, fixtures, mocks)
- Test runner commands → flow_control.test_commands (from package.json or pytest config)
- Coverage targets → meta.coverage_target
- Test requirements → context.requirements (quantified with explicit counts)
- Test generation strategy → IMPL-001 flow_control.implementation_approach
- Implementation targets → context.files_to_test (absolute paths)

## EXPECTED DELIVERABLES
1. Test Task JSON Files (.task/IMPL-*.json)
   - 6-field schema with quantified requirements from TEST_ANALYSIS_RESULTS.md
   - Test-specific metadata: type, agent, test_framework, coverage_target
   - flow_control includes: reusable_test_tools, test_commands (from project config)
   - CLI execution via `command` field when user requests (determined semantically)
   - Artifact references from test-context-package.json
   - Absolute paths in context.files_to_test

2. Test Implementation Plan (IMPL_PLAN.md)
   - Template: ~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt
   - Test-specific frontmatter: workflow_type="test_session", test_framework, source_session_id
   - Test-Fix-Retest Cycle section with diagnosis configuration
   - Source session context integration (if applicable)

3. TODO List (TODO_LIST.md)
   - Hierarchical structure with test phase containers
   - Links to task JSONs with status markers
   - Matches task JSON hierarchy

## QUALITY STANDARDS
Hard Constraints:
- Task count: minimum 2, maximum 18
- All requirements quantified from TEST_ANALYSIS_RESULTS.md
- Test framework matches existing project framework
- flow_control includes reusable_test_tools and test_commands from project
- Absolute paths for all focus_paths
- Acceptance criteria include verification commands
- CLI `command` field added only when user explicitly requests CLI tool usage

## SUCCESS CRITERIA
- All test planning documents generated successfully
- Return completion status: task count, test framework, coverage targets, source session status
`
)
```
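A sketch of how the required test-fix cycle fields might look inside an IMPL-002 task JSON, written as a JavaScript object literal for consistency with the examples above; only fields named in the prompt are shown, and the `test_commands` value is an assumed project-specific detail:

```javascript
// Illustrative IMPL-002 fragment (test-fix cycle configuration)
const impl002 = {
  meta: {
    type: "test-fix",
    agent: "@test-fix-agent"
  },
  flow_control: {
    max_iterations: 5,
    diagnosis_tool: "gemini",
    diagnosis_template: "~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt",
    cycle_pattern: "test → gemini_diagnose → fix → retest",
    exit_conditions: ["all_tests_pass", "max_iterations_reached"],
    auto_revert_on_failure: true,
    test_commands: ["npm test"]    // assumed: taken from the project's package.json
  }
}
```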
### Agent Context Passing
|
|
||||||
|
|
||||||
**Memory-Aware Context Assembly**:
|
|
||||||
```javascript
|
|
||||||
// Assemble context package for agent
|
|
||||||
const agentContext = {
|
|
||||||
session_id: "WFS-test-[id]",
|
|
||||||
workflow_type: "test_session",
|
|
||||||
use_codex: hasUseCodexFlag,
|
|
||||||
|
|
||||||
// Use memory if available, else load
|
|
||||||
session_metadata: memory.has("workflow-session.json")
|
|
||||||
? memory.get("workflow-session.json")
|
|
||||||
: Read(.workflow/active/WFS-test-[id]/workflow-session.json),
|
|
||||||
|
|
||||||
test_analysis_results_path: ".workflow/active/WFS-test-[id]/.process/TEST_ANALYSIS_RESULTS.md",
|
|
||||||
|
|
||||||
test_analysis_results: memory.has("TEST_ANALYSIS_RESULTS.md")
|
|
||||||
? memory.get("TEST_ANALYSIS_RESULTS.md")
|
|
||||||
: Read(".workflow/active/WFS-test-[id]/.process/TEST_ANALYSIS_RESULTS.md"),
|
|
||||||
|
|
||||||
test_context_package_path: ".workflow/active/WFS-test-[id]/.process/test-context-package.json",
|
|
||||||
|
|
||||||
test_context_package: memory.has("test-context-package.json")
|
|
||||||
? memory.get("test-context-package.json")
|
|
||||||
: Read(".workflow/active/WFS-test-[id]/.process/test-context-package.json"),
|
|
||||||
|
|
||||||
// Load source session summaries if exists
|
|
||||||
source_session_id: session_metadata.source_session_id || null,
|
|
||||||
|
|
||||||
source_session_summaries: session_metadata.source_session_id
|
|
||||||
? loadSourceSummaries(session_metadata.source_session_id)
|
|
||||||
: null,
|
|
||||||
|
|
||||||
// Optional MCP enhancements
|
|
||||||
mcp_analysis: executeMcpDiscovery()
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
## Test Task Structure Reference
|
|
||||||
|
|
||||||
This section provides quick reference for test task JSON structure. For complete implementation details, see the agent invocation prompt in Phase 2 above.
|
|
||||||
|
|
||||||
**Quick Reference**:
|
|
||||||
- Minimum 2 tasks: IMPL-001 (test-gen) + IMPL-002 (test-fix)
|
|
||||||
- Expandable for complex projects (IMPL-003+)
|
|
||||||
- IMPL-001: `meta.agent: "@code-developer"`, test generation approach
|
|
||||||
- IMPL-002: `meta.agent: "@test-fix-agent"`, `meta.use_codex: {flag}`, test-fix cycle
|
|
||||||
- See Phase 2 agent prompt for full schema and requirements
|
|
||||||
|
|
||||||
## Output Files Structure
|
|
||||||
```
|
|
||||||
.workflow/active/WFS-test-[session]/
|
|
||||||
├── workflow-session.json # Test session metadata
|
|
||||||
├── IMPL_PLAN.md # Test validation plan
|
|
||||||
├── TODO_LIST.md # Progress tracking
|
|
||||||
├── .task/
|
|
||||||
│ └── IMPL-001.json # Test-fix task with cycle spec
|
|
||||||
├── .process/
|
|
||||||
│ ├── ANALYSIS_RESULTS.md # From concept-enhanced (optional)
|
|
||||||
│ ├── context-package.json # From context-gather
|
|
||||||
│ ├── initial-test.log # Phase 1: Initial test results
|
|
||||||
│ ├── fix-iteration-1-diagnosis.md # Gemini diagnosis iteration 1
|
|
||||||
│ ├── fix-iteration-1-changes.log # Codex changes iteration 1
|
|
||||||
│ ├── fix-iteration-1-retest.log # Retest results iteration 1
|
|
||||||
│ ├── fix-iteration-N-*.md/log # Subsequent iterations
|
|
||||||
│ └── final-test.log # Phase 3: Final validation
|
|
||||||
└── .summaries/
|
|
||||||
└── IMPL-001-summary.md # Success report OR failure report
|
|
||||||
```
|
|
||||||
|
|
||||||
## Error Handling
|
|
||||||
|
|
||||||
### Input Validation Errors
|
|
||||||
| Error | Cause | Resolution |
|
|
||||||
|-------|-------|------------|
|
|
||||||
| Not a test session | Missing workflow_type: "test_session" | Verify session created by test-gen |
|
|
||||||
| Source session not found | Invalid source_session_id | Check source session exists |
|
|
||||||
| No implementation summaries | Source session incomplete | Ensure source session has completed tasks |
|
|
||||||
|
|
||||||
### Test Framework Discovery Errors
|
|
||||||
| Error | Cause | Resolution |
|
|
||||||
|-------|-------|------------|
|
|
||||||
| No test command found | Unknown framework | Manual test command specification |
|
|
||||||
| No test files found | Tests not written | Request user to write tests first |
|
|
||||||
| Test dependencies missing | Incomplete setup | Run dependency installation |
|
|
||||||
|
|
||||||
### Generation Errors
|
|
||||||
| Error | Cause | Resolution |
|
|
||||||
|-------|-------|------------|
|
|
||||||
| Invalid JSON structure | Template error | Fix task generation logic |
|
|
||||||
| Missing required fields | Incomplete metadata | Validate session metadata |
|
|
||||||
|
|
||||||
## Integration & Usage

### Command Chain
- **Called By**: `/workflow:test-gen` (Phase 4), `/workflow:test-fix-gen` (Phase 4)
- **Invokes**: `action-planning-agent` for test planning document generation
- **Followed By**: `/workflow:test-cycle-execute` or `/workflow:execute` (user-triggered)

### Usage Examples
```bash
# Standard execution
/workflow:tools:test-task-generate --session WFS-test-auth

# With semantic CLI request (include in task description)
# e.g., "Generate tests, use Codex for implementation and fixes"
```

### CLI Tool Selection
CLI tool usage is determined semantically from the user's task description:
- Include "use Codex" for automated fixes
- Include "use Gemini" for analysis
- Default: Agent execution (no `command` field)

### Output
- Test task JSON files in `.task/` directory (minimum 2)
- IMPL_PLAN.md with test strategy and fix cycle specification
- TODO_LIST.md with test phase indicators
- Session ready for test execution
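A minimal sketch of a generated IMPL-001 task file, shown as a JavaScript object literal to match the pseudocode style above. The field names follow the metadata and flow_control fields named in this document; the full 6-field schema lives in action-planning-agent.md, so treat this shape, the paths, and the jest commands as illustrative assumptions:

```javascript
// .task/IMPL-001.json (illustrative content only)
const impl001 = {
  meta: {
    type: "test-gen",
    agent: "@code-developer",
    test_framework: "jest",       // assumed: detected from the project config
    coverage_target: "80%"        // assumed: taken from TEST_ANALYSIS_RESULTS.md
  },
  context: {
    requirements: ["Add 12 unit tests covering happy path, errors, and edge cases"],
    files_to_test: ["D:\\project\\src\\auth\\login.ts"]   // absolute paths required
  },
  flow_control: {
    implementation_approach: [
      { step: "generate-unit-tests", action: "Generate tests per TEST_ANALYSIS_RESULTS.md" }
    ],
    reusable_test_tools: ["tests/helpers/mock-db.ts"],     // assumed existing helper
    test_commands: ["npm test -- --coverage"]
  },
  acceptance_criteria: [
    "All generated tests pass: verify with `npm test`"
  ]
}
```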
|
|
||||||
## Agent Execution Notes
|
|
||||||
|
|
||||||
The `@test-fix-agent` will execute the task by following the `flow_control.implementation_approach` specification:
|
|
||||||
|
|
||||||
1. **Load task JSON**: Read complete test-fix task from `.task/IMPL-002.json`
|
|
||||||
2. **Check meta.use_codex**: Determine fix mode (manual or automated)
|
|
||||||
3. **Execute pre_analysis**: Load source context, discover framework, analyze tests
|
|
||||||
4. **Phase 1**: Run initial test suite
|
|
||||||
5. **Phase 2**: If failures, enter iterative loop:
|
|
||||||
- Use Gemini for diagnosis (analysis mode with bug-fix template)
|
|
||||||
- Check meta.use_codex flag:
|
|
||||||
- If false (default): Present fix suggestions to user for manual application
|
|
||||||
- If true (--use-codex): Use Codex resume for automated fixes (maintains context)
|
|
||||||
- Retest and check for regressions
|
|
||||||
- Repeat max 5 times
|
|
||||||
6. **Phase 3**: Generate summary and certify code
|
|
||||||
7. **Error Recovery**: Revert changes if max iterations reached
|
|
||||||
|
|
||||||
**Bug Diagnosis Template**: Uses `~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt` template for systematic root cause analysis, code path tracing, and targeted fix recommendations.
|
|
||||||
|
|
||||||
**Codex Usage**: The agent uses `codex exec "..." resume --last` pattern ONLY when meta.use_codex=true (--use-codex flag present) to maintain conversation context across multiple fix iterations, ensuring consistency and learning from previous attempts.
|
|
||||||
|
|||||||
@@ -23,6 +23,44 @@ Extract animation and transition patterns from prompt inference and image refere
- **Production-Ready**: CSS var() format, WCAG-compliant, semantic naming
- **Default Behavior**: Non-interactive mode uses inferred patterns + best practices

## Execution Process

```
Input Parsing:
├─ Parse flags: --design-id, --session, --images, --focus, --interactive, --refine
└─ Decision (mode detection):
   ├─ --refine flag → Refinement Mode
   └─ No --refine → Exploration Mode

Phase 0: Setup & Input Validation
├─ Step 1: Detect input mode & base path
├─ Step 2: Prepare image references (if available)
├─ Step 3: Load design tokens context
└─ Step 4: Memory check (skip if exists)

Phase 1: Animation Specification Generation
├─ Step 1: Load project context
├─ Step 2: Generate animation specification options (Agent Task 1)
│  └─ Decision:
│     ├─ Exploration Mode → Generate specification questions
│     └─ Refinement Mode → Generate refinement options
└─ Step 3: Verify options file created

Phase 1.5: User Confirmation (Optional)
└─ Decision (--interactive flag):
   ├─ --interactive present → Present options, capture selection
   └─ No --interactive → Skip to Phase 2

Phase 2: Animation System Generation
├─ Step 1: Load user selection or use defaults
├─ Step 2: Create output directory
└─ Step 3: Launch animation generation task (Agent Task 2)

Phase 3: Verify Output
├─ Step 1: Check files created
└─ Step 2: Verify file sizes
```
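A minimal sketch of the mode and interactivity decisions from the flow above, in the same JavaScript pseudocode style as the other commands; the flag-parsing helper is hypothetical:

```javascript
// Mode and interactivity decisions
const flags = parseFlags(userInput)                 // hypothetical flag-parsing helper

const mode = flags.refine ? "refinement" : "exploration"
// exploration → Agent Task 1 generates specification questions
// refinement  → Agent Task 1 generates refinement options for an existing spec

const confirmWithUser = Boolean(flags.interactive)
// true  → Phase 1.5: present options and capture the user's selection
// false → skip straight to Phase 2 with inferred defaults
```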
## Phase 0: Setup & Input Validation

### Step 1: Detect Input Mode & Base Path
@@ -120,11 +158,7 @@ ELSE:
|
|||||||
- Infers animation patterns from UI element positioning and design style
|
- Infers animation patterns from UI element positioning and design style
|
||||||
- Generates context-aware animation specifications based on visual analysis
|
- Generates context-aware animation specifications based on visual analysis
|
||||||
|
|
||||||
**Benefits**:
|
|
||||||
- ✅ Flexible input - works with screenshots, mockups, or design files
|
|
||||||
- ✅ AI-driven inference from visual cues
|
|
||||||
- ✅ No external dependencies on MCP tools
|
|
||||||
- ✅ Combines visual analysis with industry best practices
|
|
||||||
|
|
||||||
### Step 3: Load Design Tokens Context
|
### Step 3: Load Design Tokens Context
|
||||||
|
|
||||||
|
|||||||
@@ -480,11 +480,7 @@ TodoWrite({todos: [
|
|||||||
// - Orchestrator's own task (no SlashCommand attachment)
|
// - Orchestrator's own task (no SlashCommand attachment)
|
||||||
// - Mark Phase 3 as completed
|
// - Mark Phase 3 as completed
|
||||||
// - Final state: All 4 orchestrator tasks completed
|
// - Final state: All 4 orchestrator tasks completed
|
||||||
//
|
|
||||||
// Benefits:
|
|
||||||
// ✓ Real-time visibility into attached tasks during execution
|
|
||||||
// ✓ Clean orchestrator-level summary after tasks complete
|
|
||||||
// ✓ Clear mental model: SlashCommand = attach tasks, not delegate work
|
|
||||||
// ✓ Dynamic attachment/collapse maintains clarity
|
// ✓ Dynamic attachment/collapse maintains clarity
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -623,17 +619,7 @@ File discovery is fully automatic - no glob patterns needed.
|
|||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## Benefits
|
|
||||||
|
|
||||||
- **Simplified Interface**: Single path parameter with intelligent defaults
|
|
||||||
- **Auto-Generation**: Package names auto-generated from directory names
|
|
||||||
- **Automatic Discovery**: No need to specify file patterns - finds all style files automatically
|
|
||||||
- **Pure Orchestrator**: No direct agent execution, delegates to specialized commands
|
|
||||||
- **Auto-Continue**: Autonomous 4-phase execution without user interaction
|
|
||||||
- **Safety First**: Overwrite protection, validation checks, error handling
|
|
||||||
- **Code Reuse**: Leverages existing `import-from-code` and `reference-page-generator` commands
|
|
||||||
- **Clean Separation**: Each command has single responsibility
|
|
||||||
- **Easy Maintenance**: Changes to sub-commands automatically apply
|
|
||||||
|
|
||||||
## Architecture
|
## Architecture
|
||||||
|
|
||||||
|
|||||||
@@ -19,6 +19,48 @@ Synchronize finalized design system references to brainstorming artifacts, prepa
|
|||||||
- **Plan-Ready Output**: Ensure design artifacts discoverable by task-generate
|
- **Plan-Ready Output**: Ensure design artifacts discoverable by task-generate
|
||||||
- **Minimal Reading**: Verify file existence, don't read design content
|
- **Minimal Reading**: Verify file existence, don't read design content
|
||||||
|
|
||||||
|
## Execution Process
|
||||||
|
|
||||||
|
```
|
||||||
|
Input Parsing:
|
||||||
|
├─ Parse flags: --session, --selected-prototypes
|
||||||
|
└─ Validation: session_id REQUIRED
|
||||||
|
|
||||||
|
Phase 1: Session & Artifact Validation
|
||||||
|
├─ Step 1: Validate session exists
|
||||||
|
├─ Step 2: Find latest design run
|
||||||
|
├─ Step 3: Detect design system structure
|
||||||
|
└─ Step 4: Select prototypes (--selected-prototypes OR all)
|
||||||
|
|
||||||
|
Phase 1.1: Memory Check (Conditional)
|
||||||
|
└─ Decision (current design run in synthesis):
|
||||||
|
├─ Already updated → Skip Phase 2-5, EXIT
|
||||||
|
└─ Not found → Continue to Phase 2
|
||||||
|
|
||||||
|
Phase 2: Load Target Artifacts
|
||||||
|
├─ Read role analysis documents (files to update)
|
||||||
|
├─ Read ui-designer/analysis.md (if exists)
|
||||||
|
└─ Read prototype notes (minimal context)
|
||||||
|
|
||||||
|
Phase 3: Update Synthesis Specification
|
||||||
|
└─ Edit role analysis documents with UI/UX Guidelines section
|
||||||
|
|
||||||
|
Phase 4A: Update Relevant Role Analysis Documents
|
||||||
|
├─ ui-designer/analysis.md (always)
|
||||||
|
├─ ux-expert/analysis.md (if animations exist)
|
||||||
|
├─ system-architect/analysis.md (if layouts exist)
|
||||||
|
└─ product-manager/analysis.md (if prototypes)
|
||||||
|
|
||||||
|
Phase 4B: Create UI Designer Design System Reference
|
||||||
|
└─ Write ui-designer/design-system-reference.md
|
||||||
|
|
||||||
|
Phase 5: Update Context Package
|
||||||
|
└─ Update context-package.json with design system references
|
||||||
|
|
||||||
|
Phase 6: Completion
|
||||||
|
└─ Report updated artifacts
|
||||||
|
```
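A minimal sketch of the Phase 1.1 memory check from the flow above; the marker searched for (the current design run id already referenced inside the ui-designer analysis) is an assumed convention:

```javascript
// Phase 1.1: skip the sync when this design run was already applied
const analysis = Read(`.workflow/active/WFS-${session}/.brainstorming/ui-designer/analysis.md`)

if (analysis.includes(design_id)) {      // assumed marker: design run id already referenced
  REPORT(`Design system references already synced for ${design_id} - skipping Phases 2-5`)
  EXIT()
}
// Not found → continue to Phase 2: Load Target Artifacts
```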
|
||||||
|
|
||||||
## Execution Protocol
|
## Execution Protocol
|
||||||
|
|
||||||
### Phase 1: Session & Artifact Validation
|
### Phase 1: Session & Artifact Validation
|
||||||
@@ -227,7 +269,68 @@ Write(file_path=".workflow/active/WFS-{session}/.brainstorming/ui-designer/desig
|
|||||||
content="[generated content with @ references]")
|
content="[generated content with @ references]")
|
||||||
```
|
```
|
||||||
|
|
||||||
### Phase 5: Completion
|
### Phase 5: Update Context Package
|
||||||
|
|
||||||
|
**Purpose**: Sync design system references to context-package.json
|
||||||
|
|
||||||
|
**Operations**:
|
||||||
|
```bash
|
||||||
|
context_pkg_path = ".workflow/active/WFS-{session}/.process/context-package.json"
|
||||||
|
|
||||||
|
# 1. Read existing package
|
||||||
|
context_pkg = Read(context_pkg_path)
|
||||||
|
|
||||||
|
# 2. Update brainstorm_artifacts (role analyses now contain @ design references)
|
||||||
|
brainstorm_dir = ".workflow/active/WFS-{session}/.brainstorming"
|
||||||
|
role_analysis_files = Glob({brainstorm_dir}/*/analysis*.md)
|
||||||
|
|
||||||
|
context_pkg.brainstorm_artifacts.role_analyses = []
|
||||||
|
FOR file IN role_analysis_files:
|
||||||
|
role_name = extract_role_from_path(file)
|
||||||
|
relative_path = file.replace({brainstorm_dir}/, "")
|
||||||
|
|
||||||
|
context_pkg.brainstorm_artifacts.role_analyses.push({
|
||||||
|
"role": role_name,
|
||||||
|
"files": [{
|
||||||
|
"path": relative_path,
|
||||||
|
"type": "primary",
|
||||||
|
"content": Read(file), # Contains @ design system references
|
||||||
|
"updated_at": NOW()
|
||||||
|
}]
|
||||||
|
})
|
||||||
|
|
||||||
|
# 3. Add design_system_references field
|
||||||
|
context_pkg.design_system_references = {
|
||||||
|
"design_run_id": design_id,
|
||||||
|
"tokens": `${design_id}/${design_tokens_path}`,
|
||||||
|
"style_guide": `${design_id}/${style_guide_path}`,
|
||||||
|
"prototypes": selected_list.map(p => `${design_id}/prototypes/${p}.html`),
|
||||||
|
"updated_at": NOW()
|
||||||
|
}
|
||||||
|
|
||||||
|
# 4. Optional: Add animations and layouts if they exist
|
||||||
|
IF exists({latest_design}/animation-extraction/animation-tokens.json):
|
||||||
|
context_pkg.design_system_references.animations = `${design_id}/animation-extraction/animation-tokens.json`
|
||||||
|
|
||||||
|
IF exists({latest_design}/layout-extraction/layout-templates.json):
|
||||||
|
context_pkg.design_system_references.layouts = `${design_id}/layout-extraction/layout-templates.json`
|
||||||
|
|
||||||
|
# 5. Update metadata
|
||||||
|
context_pkg.metadata.updated_at = NOW()
|
||||||
|
context_pkg.metadata.design_sync_timestamp = NOW()
|
||||||
|
|
||||||
|
# 6. Write back
|
||||||
|
Write(context_pkg_path, JSON.stringify(context_pkg, indent=2))
|
||||||
|
|
||||||
|
REPORT: "✅ Updated context-package.json with design system references"
|
||||||
|
```
|
||||||
|
|
||||||
|
**TodoWrite Update**:
|
||||||
|
```json
|
||||||
|
{"content": "Update context package with design references", "status": "completed", "activeForm": "Updating context package"}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Phase 6: Completion
|
||||||
|
|
||||||
```javascript
|
```javascript
|
||||||
TodoWrite({todos: [
|
TodoWrite({todos: [
|
||||||
|
|||||||
@@ -27,8 +27,8 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*
|
|||||||
6. **Phase 10 (ui-assembly)** → **Attach tasks → Execute → Collapse** → Workflow complete
|
6. **Phase 10 (ui-assembly)** → **Attach tasks → Execute → Collapse** → Workflow complete
|
||||||
|
|
||||||
**Phase Transition Mechanism**:
|
**Phase Transition Mechanism**:
|
||||||
- **Phase 5 (User Interaction)**: User confirms targets → IMMEDIATELY triggers Phase 7
|
- **Phase 5 (User Interaction)**: User confirms targets → IMMEDIATELY dispatches Phase 7
|
||||||
- **Phase 7-10 (Autonomous)**: `SlashCommand` invocation **ATTACHES** tasks to current workflow
|
- **Phase 7-10 (Autonomous)**: SlashCommand dispatch **ATTACHES** tasks to current workflow
|
||||||
- **Task Execution**: Orchestrator **EXECUTES** these attached tasks itself
|
- **Task Execution**: Orchestrator **EXECUTES** these attached tasks itself
|
||||||
- **Task Collapse**: After tasks complete, collapse them into phase summary
|
- **Task Collapse**: After tasks complete, collapse them into phase summary
|
||||||
- **Phase Transition**: Automatically execute next phase after collapsing
|
- **Phase Transition**: Automatically execute next phase after collapsing
|
||||||
@@ -36,10 +36,55 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*
|
|||||||
|
|
||||||
**Auto-Continue Mechanism**: TodoWrite tracks phase status with dynamic task attachment/collapse. After executing all attached tasks, you MUST immediately collapse them, restore phase summary, and execute the next phase. No user intervention required. The workflow is NOT complete until Phase 10 (UI assembly) finishes.
|
**Auto-Continue Mechanism**: TodoWrite tracks phase status with dynamic task attachment/collapse. After executing all attached tasks, you MUST immediately collapse them, restore phase summary, and execute the next phase. No user intervention required. The workflow is NOT complete until Phase 10 (UI assembly) finishes.
|
||||||
|
|
||||||
**Task Attachment Model**: SlashCommand invocation is NOT delegation - it's task expansion. The orchestrator executes these attached tasks itself, not waiting for external completion.
|
**Task Attachment Model**: SlashCommand dispatch is NOT delegation - it's task expansion. The orchestrator executes these attached tasks itself, not waiting for external completion.
|
||||||
|
|
||||||
**Target Type Detection**: Automatically inferred from prompt/targets, or explicitly set via `--target-type`.
|
**Target Type Detection**: Automatically inferred from prompt/targets, or explicitly set via `--target-type`.
|
||||||
|
|
||||||
|
## Execution Process
|
||||||
|
|
||||||
|
```
|
||||||
|
Input Parsing:
|
||||||
|
├─ Parse flags: --input, --targets, --target-type, --device-type, --session, --style-variants, --layout-variants
|
||||||
|
└─ Decision (input detection):
|
||||||
|
├─ Contains * or glob matches → images_input (visual)
|
||||||
|
├─ File/directory exists → code import source
|
||||||
|
└─ Pure text → design prompt
|
||||||
|
|
||||||
|
Phase 1-4: Parameter Parsing & Initialization
|
||||||
|
├─ Phase 1: Normalize parameters (legacy deprecation warning)
|
||||||
|
├─ Phase 2: Intelligent prompt parsing (extract variant counts)
|
||||||
|
├─ Phase 3: Device type inference (explicit > keywords > target_type > default)
|
||||||
|
└─ Phase 4: Run initialization and directory setup
|
||||||
|
|
||||||
|
Phase 5: Unified Target Inference
|
||||||
|
├─ Priority: --pages/--components (legacy) → --targets → prompt analysis → synthesis → default
|
||||||
|
├─ Display confirmation with modification options
|
||||||
|
└─ User confirms → IMMEDIATELY triggers Phase 7
|
||||||
|
|
||||||
|
Phase 6: Code Import (Conditional)
|
||||||
|
└─ Decision (design_source):
|
||||||
|
├─ code_only | hybrid → Dispatch /workflow:ui-design:import-from-code
|
||||||
|
└─ visual_only → Skip to Phase 7
|
||||||
|
|
||||||
|
Phase 7: Style Extraction
|
||||||
|
└─ Decision (needs_visual_supplement):
|
||||||
|
├─ visual_only OR supplement needed → Dispatch /workflow:ui-design:style-extract
|
||||||
|
└─ code_only AND style_complete → Use code import
|
||||||
|
|
||||||
|
Phase 8: Animation Extraction
|
||||||
|
└─ Decision (should_extract_animation):
|
||||||
|
├─ visual_only OR incomplete OR regenerate → Dispatch /workflow:ui-design:animation-extract
|
||||||
|
└─ code_only AND animation_complete → Use code import
|
||||||
|
|
||||||
|
Phase 9: Layout Extraction
|
||||||
|
└─ Decision (needs_visual_supplement OR NOT layout_complete):
|
||||||
|
├─ True → Dispatch /workflow:ui-design:layout-extract
|
||||||
|
└─ False → Use code import
|
||||||
|
|
||||||
|
Phase 10: UI Assembly
|
||||||
|
└─ Dispatch /workflow:ui-design:generate → Workflow complete
|
||||||
|
```
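A minimal sketch of the dispatch pattern the phases above share (attach via SlashCommand, execute, then collapse in TodoWrite), using only tools this document already references; the todo wording is illustrative:

```javascript
// Phase 7 example: attach style-extract's tasks, execute them, then collapse
const command = `/workflow:ui-design:style-extract --design-id "${design_id}" --variants ${style_variants} --interactive`

// SlashCommand dispatch ATTACHES the sub-command's tasks to this workflow;
// the orchestrator executes them itself rather than delegating.
SlashCommand(command)

// After the attached tasks complete, collapse them back into a phase summary
TodoWrite({todos: [
  {"content": "Phase 7: Style Extraction", "status": "completed", "activeForm": "Executing style extraction"},
  {"content": "Phase 8: Animation Extraction", "status": "in_progress", "activeForm": "Executing animation extraction"}
]})
```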
|
||||||
|
|
||||||
## Core Rules
|
## Core Rules
|
||||||
|
|
||||||
1. **Start Immediately**: TodoWrite initialization → Phase 7 execution
|
1. **Start Immediately**: TodoWrite initialization → Phase 7 execution
|
||||||
@@ -47,7 +92,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Glob(*), Write(*
|
|||||||
3. **Parse & Pass**: Extract data from each output for next phase
|
3. **Parse & Pass**: Extract data from each output for next phase
|
||||||
4. **Default to All**: When selecting variants/prototypes, use ALL generated items
|
4. **Default to All**: When selecting variants/prototypes, use ALL generated items
|
||||||
5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
|
5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
|
||||||
6. **⚠️ CRITICAL: Task Attachment Model** - SlashCommand invocation **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these attached tasks itself, not waiting for external completion. This is NOT delegation - it's task expansion.
|
6. **⚠️ CRITICAL: Task Attachment Model** - SlashCommand dispatch **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these attached tasks itself, not waiting for external completion. This is NOT delegation - it's task expansion.
|
||||||
7. **⚠️ CRITICAL: DO NOT STOP** - This is a continuous multi-phase workflow. After executing all attached tasks, you MUST immediately collapse them and execute the next phase. Workflow is NOT complete until Phase 10 (UI assembly) finishes.
|
7. **⚠️ CRITICAL: DO NOT STOP** - This is a continuous multi-phase workflow. After executing all attached tasks, you MUST immediately collapse them and execute the next phase. Workflow is NOT complete until Phase 10 (UI assembly) finishes.
|
||||||
|
|
||||||
## Parameter Requirements
|
## Parameter Requirements
|
||||||
@@ -310,13 +355,16 @@ detect_target_type(target_list):
|
|||||||
```
|
```
|
||||||
|
|
||||||
### Phase 6: Code Import & Completeness Assessment (Conditional)
|
### Phase 6: Code Import & Completeness Assessment (Conditional)
|
||||||
```bash
|
|
||||||
|
**Step 6.1: Dispatch** - Import design system from code files
|
||||||
|
|
||||||
|
```javascript
|
||||||
IF design_source IN ["code_only", "hybrid"]:
|
IF design_source IN ["code_only", "hybrid"]:
|
||||||
REPORT: "🔍 Phase 6: Code Import ({design_source})"
|
REPORT: "🔍 Phase 6: Code Import ({design_source})"
|
||||||
command = "/workflow:ui-design:import-from-code --design-id \"{design_id}\" --source \"{code_base_path}\""
|
command = "/workflow:ui-design:import-from-code --design-id \"{design_id}\" --source \"{code_base_path}\""
|
||||||
|
|
||||||
TRY:
|
TRY:
|
||||||
# SlashCommand invocation ATTACHES import-from-code's tasks to current workflow
|
# SlashCommand dispatch ATTACHES import-from-code's tasks to current workflow
|
||||||
# Orchestrator will EXECUTE these attached tasks itself:
|
# Orchestrator will EXECUTE these attached tasks itself:
|
||||||
# - Phase 0: Discover and categorize code files
|
# - Phase 0: Discover and categorize code files
|
||||||
# - Phase 1.1-1.3: Style/Animation/Layout Agent extraction
|
# - Phase 1.1-1.3: Style/Animation/Layout Agent extraction
|
||||||
@@ -420,7 +468,10 @@ IF design_source IN ["code_only", "hybrid"]:
|
|||||||
```
|
```
|
||||||
|
|
||||||
### Phase 7: Style Extraction
|
### Phase 7: Style Extraction
|
||||||
```bash
|
|
||||||
|
**Step 7.1: Dispatch** - Extract style design systems
|
||||||
|
|
||||||
|
```javascript
|
||||||
IF design_source == "visual_only" OR needs_visual_supplement:
|
IF design_source == "visual_only" OR needs_visual_supplement:
|
||||||
REPORT: "🎨 Phase 7: Style Extraction (variants: {style_variants})"
|
REPORT: "🎨 Phase 7: Style Extraction (variants: {style_variants})"
|
||||||
command = "/workflow:ui-design:style-extract --design-id \"{design_id}\" " +
|
command = "/workflow:ui-design:style-extract --design-id \"{design_id}\" " +
|
||||||
@@ -428,7 +479,7 @@ IF design_source == "visual_only" OR needs_visual_supplement:
|
|||||||
(prompt_text ? "--prompt \"{prompt_text}\" " : "") +
|
(prompt_text ? "--prompt \"{prompt_text}\" " : "") +
|
||||||
"--variants {style_variants} --interactive"
|
"--variants {style_variants} --interactive"
|
||||||
|
|
||||||
# SlashCommand invocation ATTACHES style-extract's tasks to current workflow
|
# SlashCommand dispatch ATTACHES style-extract's tasks to current workflow
|
||||||
# Orchestrator will EXECUTE these attached tasks itself
|
# Orchestrator will EXECUTE these attached tasks itself
|
||||||
SlashCommand(command)
|
SlashCommand(command)
|
||||||
|
|
||||||
@@ -438,7 +489,10 @@ ELSE:
|
|||||||
```
|
```
|
||||||
|
|
||||||
### Phase 8: Animation Extraction
|
### Phase 8: Animation Extraction
|
||||||
```bash
|
|
||||||
|
**Step 8.1: Dispatch** - Extract animation patterns
|
||||||
|
|
||||||
|
```javascript
|
||||||
# Determine if animation extraction is needed
|
# Determine if animation extraction is needed
|
||||||
should_extract_animation = false
|
should_extract_animation = false
|
||||||
|
|
||||||
@@ -468,7 +522,7 @@ IF should_extract_animation:
|
|||||||
|
|
||||||
command = " ".join(command_parts)
|
command = " ".join(command_parts)
|
||||||
|
|
||||||
# SlashCommand invocation ATTACHES animation-extract's tasks to current workflow
|
# SlashCommand dispatch ATTACHES animation-extract's tasks to current workflow
|
||||||
# Orchestrator will EXECUTE these attached tasks itself
|
# Orchestrator will EXECUTE these attached tasks itself
|
||||||
SlashCommand(command)
|
SlashCommand(command)
|
||||||
|
|
||||||
@@ -481,7 +535,10 @@ ELSE:
|
|||||||
```
|
```
|
||||||
|
|
||||||
### Phase 9: Layout Extraction

-```bash
+**Step 9.1: Dispatch** - Extract layout templates
+
+```javascript
targets_string = ",".join(inferred_target_list)

IF (design_source == "visual_only" OR needs_visual_supplement) OR (NOT layout_complete):
@@ -491,7 +548,7 @@ IF (design_source == "visual_only" OR needs_visual_supplement) OR (NOT layout_co
(prompt_text ? "--prompt \"{prompt_text}\" " : "") +
"--targets \"{targets_string}\" --variants {layout_variants} --device-type \"{device_type}\" --interactive"

-# SlashCommand invocation ATTACHES layout-extract's tasks to current workflow
+# SlashCommand dispatch ATTACHES layout-extract's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(command)

@@ -501,7 +558,10 @@ ELSE:
```

### Phase 10: UI Assembly

-```bash
+**Step 10.1: Dispatch** - Assemble UI prototypes from design tokens and layout templates
+
+```javascript
command = "/workflow:ui-design:generate --design-id \"{design_id}\"" + (--session ? " --session {session_id}" : "")

total = style_variants × layout_variants × len(inferred_target_list)

@@ -511,7 +571,7 @@ REPORT: " → Pure assembly: Combining layout templates + design tokens"
REPORT: " → Device: {device_type} (from layout templates)"
REPORT: " → Assembly tasks: {total} combinations"

-# SlashCommand invocation ATTACHES generate's tasks to current workflow
+# SlashCommand dispatch ATTACHES generate's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(command)

@@ -528,32 +588,35 @@ SlashCommand(command)
```javascript
// Initialize IMMEDIATELY after Phase 5 user confirmation to track multi-phase execution (4 orchestrator-level tasks)
TodoWrite({todos: [
-{"content": "Execute style extraction", "status": "in_progress", "activeForm": "Executing style extraction"},
-{"content": "Execute animation extraction", "status": "pending", "activeForm": "Executing animation extraction"},
-{"content": "Execute layout extraction", "status": "pending", "activeForm": "Executing layout extraction"},
-{"content": "Execute UI assembly", "status": "pending", "activeForm": "Executing UI assembly"}
+{"content": "Phase 7: Style Extraction", "status": "in_progress", "activeForm": "Executing style extraction"},
+{"content": "Phase 8: Animation Extraction", "status": "pending", "activeForm": "Executing animation extraction"},
+{"content": "Phase 9: Layout Extraction", "status": "pending", "activeForm": "Executing layout extraction"},
+{"content": "Phase 10: UI Assembly", "status": "pending", "activeForm": "Executing UI assembly"}
]})

// ⚠️ CRITICAL: Dynamic TodoWrite task attachment strategy:
//
-// **Key Concept**: SlashCommand invocation ATTACHES tasks to current workflow.
+// **Key Concept**: SlashCommand dispatch ATTACHES tasks to current workflow.
// Orchestrator EXECUTES these attached tasks itself, not waiting for external completion.
//
-// Phase 7-10 SlashCommand Invocation Pattern:
-// 1. SlashCommand invocation ATTACHES sub-command tasks to TodoWrite
-// 2. TodoWrite expands to include attached tasks
-// 3. Orchestrator EXECUTES attached tasks sequentially
-// 4. After all attached tasks complete, COLLAPSE them into phase summary
-// 5. Update next phase to in_progress
-// 6. IMMEDIATELY execute next phase (auto-continue)
-// 7. After Phase 10 completes, workflow finishes (generate command handles preview files)
+// Phase 7-10 SlashCommand Dispatch Pattern (when tasks are attached):
+// Example - Phase 7 with sub-tasks:
+// [
+// {"content": "Phase 7: Style Extraction", "status": "in_progress", "activeForm": "Executing style extraction"},
+// {"content": " → Analyze style references", "status": "in_progress", "activeForm": "Analyzing style references"},
+// {"content": " → Generate style variants", "status": "pending", "activeForm": "Generating style variants"},
+// {"content": " → Create design tokens", "status": "pending", "activeForm": "Creating design tokens"},
+// {"content": "Phase 8: Animation Extraction", "status": "pending", "activeForm": "Executing animation extraction"},
+// ...
+// ]
+//
+// After sub-tasks complete, COLLAPSE back to:
+// [
+// {"content": "Phase 7: Style Extraction", "status": "completed", "activeForm": "Executing style extraction"},
+// {"content": "Phase 8: Animation Extraction", "status": "in_progress", "activeForm": "Executing animation extraction"},
+// ...
+// ]
//
-// Benefits:
-// ✓ Real-time visibility into sub-command task progress
-// ✓ Clean orchestrator-level summary after each phase
-// ✓ Clear mental model: SlashCommand = attach tasks, not delegate work
-// ✓ Generate command handles preview generation autonomously
-// ✓ Dynamic attachment/collapse maintains clarity
```

## Completion Output

@@ -21,6 +21,36 @@ Pure assembler that combines pre-extracted layout templates with design tokens t
- `/workflow:ui-design:style-extract` → Complete design systems (design-tokens.json + style-guide.md)
- `/workflow:ui-design:layout-extract` → Layout structure

## Execution Process

```
Input Parsing:
├─ Parse flags: --design-id, --session
└─ Decision (base path resolution):
   ├─ --design-id provided → Exact match by design ID
   ├─ --session provided → Latest in session
   └─ No flags → Latest globally

Phase 1: Setup & Validation
├─ Step 1: Resolve base path & parse configuration
├─ Step 2: Load layout templates
├─ Step 3: Validate design tokens
└─ Step 4: Load animation tokens (optional)

Phase 2: Assembly (Agent)
├─ Step 1: Calculate agent grouping plan
│  └─ Grouping rules:
│     ├─ Style isolation: Each agent processes ONE style
│     ├─ Balanced distribution: Layouts evenly split
│     └─ Max 10 layouts per agent, max 6 concurrent agents
├─ Step 2: Launch batched assembly tasks (parallel)
└─ Step 3: Verify generated files

Phase 3: Generate Preview Files
├─ Step 1: Run preview generation script
└─ Step 2: Verify preview files
```
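
The Phase 2 grouping rules above can be sketched as follows. This is an illustrative bash sketch only, assuming the stated limits (one style per agent, at most 10 layouts per agent, at most 6 concurrent agents); the helper name is hypothetical and not the command's actual implementation.

```bash
# Hypothetical sketch of the grouping rules described above.
plan_assembly_agents() {
  local styles="$1"   # number of style variants
  local layouts="$2"  # number of layout templates per style
  local agents_per_style=$(( (layouts + 9) / 10 ))   # ceil(layouts / 10), max 10 layouts/agent
  local total_agents=$(( styles * agents_per_style )) # style isolation: one style per agent
  echo "Agents needed: $total_agents (launched in waves of at most 6)"
  for (( w = 0; w < total_agents; w += 6 )); do
    local batch=$(( total_agents - w < 6 ? total_agents - w : 6 ))
    echo "  Wave $(( w / 6 + 1 )): $batch agent(s) in parallel"
  done
}

plan_assembly_agents 3 12   # e.g. 3 styles x 12 layouts -> 6 agents in a single wave
```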

## Phase 1: Setup & Validation

### Step 1: Resolve Base Path & Parse Configuration

@@ -290,7 +320,7 @@ Read({base_path}/prototypes/{target}-style-{style_id}-layout-{layout_id}.html)

### Step 1: Run Preview Generation Script
```bash
-bash(~/.claude/scripts/ui-generate-preview.sh "{base_path}/prototypes")
+bash(ccw tool exec ui_generate_preview '{"prototypesDir":"{base_path}/prototypes"}')
```

**Script generates**:

@@ -402,7 +432,7 @@ bash(test -f {base_path}/prototypes/compare.html && echo "exists")
bash(mkdir -p {base_path}/prototypes)

# Run preview script
-bash(~/.claude/scripts/ui-generate-preview.sh "{base_path}/prototypes")
+bash(ccw tool exec ui_generate_preview '{"prototypesDir":"{base_path}/prototypes"}')
```
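
As a hedged illustration of the new invocation, the call can be guarded before running. This sketch assumes only what this document states: the `ccw` CLI is on PATH and exposes `tool exec` and `tool list`.

```bash
# Minimal defensive wrapper around the preview call shown above (sketch only).
if ! command -v ccw >/dev/null 2>&1; then
  echo "ccw CLI not found; cannot generate preview files" >&2
  exit 1
fi
mkdir -p "${base_path}/prototypes"
ccw tool exec ui_generate_preview "{\"prototypesDir\":\"${base_path}/prototypes\"}" \
  || echo "Preview generation failed; check 'ccw tool list' output" >&2
```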

## Output Structure

@@ -437,7 +467,7 @@ ERROR: Agent assembly failed
→ Check inputs exist, validate JSON structure

ERROR: Script permission denied
-→ chmod +x ~/.claude/scripts/ui-generate-preview.sh
+→ Verify ccw tool is available: ccw tool list
```

### Recovery Strategies

@@ -26,7 +26,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Write(*), Bash(*)
7. Phase 4: Design system integration → **Execute orchestrator task** → Reports completion

**Phase Transition Mechanism**:
-- **Task Attachment**: `SlashCommand` invocation **ATTACHES** tasks to current workflow
+- **Task Attachment**: SlashCommand dispatch **ATTACHES** tasks to current workflow
- **Task Execution**: Orchestrator **EXECUTES** these attached tasks itself
- **Task Collapse**: After tasks complete, collapse them into phase summary
- **Phase Transition**: Automatically execute next phase after collapsing

@@ -34,7 +34,51 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Write(*), Bash(*)

**Auto-Continue Mechanism**: TodoWrite tracks phase status with dynamic task attachment/collapse. After executing all attached tasks, you MUST immediately collapse them, restore phase summary, and execute the next phase. No user intervention required. The workflow is NOT complete until reaching Phase 4.

-**Task Attachment Model**: SlashCommand invocation is NOT delegation - it's task expansion. The orchestrator executes these attached tasks itself, not waiting for external completion.
+**Task Attachment Model**: SlashCommand dispatch is NOT delegation - it's task expansion. The orchestrator executes these attached tasks itself, not waiting for external completion.

## Execution Process

```
Input Parsing:
├─ Parse flags: --input, --session (legacy: --images, --prompt)
└─ Decision (input detection):
   ├─ Contains * or glob matches → images_input (visual)
   ├─ File/directory exists → code import source
   └─ Pure text → design prompt

Phase 0: Parameter Parsing & Input Detection
├─ Step 1: Normalize parameters (legacy deprecation warning)
├─ Step 2: Detect design source (hybrid | code_only | visual_only)
└─ Step 3: Initialize directories and metadata

Phase 0.5: Code Import (Conditional)
└─ Decision (design_source):
   ├─ hybrid → Dispatch /workflow:ui-design:import-from-code
   └─ Other → Skip to Phase 2

Phase 2: Style Extraction
└─ Decision (skip_style):
   ├─ code_only AND style_complete → Use code import
   └─ Otherwise → Dispatch /workflow:ui-design:style-extract

Phase 2.3: Animation Extraction
└─ Decision (skip_animation):
   ├─ code_only AND animation_complete → Use code import
   └─ Otherwise → Dispatch /workflow:ui-design:animation-extract

Phase 2.5: Layout Extraction
└─ Decision (skip_layout):
   ├─ code_only AND layout_complete → Use code import
   └─ Otherwise → Dispatch /workflow:ui-design:layout-extract

Phase 3: UI Assembly
└─ Dispatch /workflow:ui-design:generate

Phase 4: Design System Integration
└─ Decision (session_id):
   ├─ Provided → Dispatch /workflow:ui-design:update
   └─ Not provided → Standalone completion
```
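
For illustration, the input-detection decision above could look roughly like this. A sketch only; the function and variable names are hypothetical, and only the three decision rules come from this document.

```bash
# Hedged sketch of the input-detection rules above.
detect_design_source() {
  local input="$1"
  if [[ "$input" == *[\*\?]* ]]; then
    echo "images_input"        # glob pattern -> treat as visual references
  elif [[ -e "$input" ]]; then
    echo "code_import_source"  # existing file/directory -> import from code
  else
    echo "design_prompt"       # plain text -> treat as a design prompt
  fi
}

detect_design_source "./refs/*.png"             # -> images_input
detect_design_source "./src"                    # -> code_import_source
detect_design_source "dark minimal dashboard"   # -> design_prompt
```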

## Core Rules

@@ -42,7 +86,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Write(*), Bash(*)
2. **No Preliminary Validation**: Sub-commands handle their own validation
3. **Parse & Pass**: Extract data from each output for next phase
4. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
-5. **⚠️ CRITICAL: Task Attachment Model** - SlashCommand invocation **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these attached tasks itself, not waiting for external completion. This is NOT delegation - it's task expansion.
+5. **⚠️ CRITICAL: Task Attachment Model** - SlashCommand dispatch **ATTACHES** tasks to current workflow. Orchestrator **EXECUTES** these attached tasks itself, not waiting for external completion. This is NOT delegation - it's task expansion.
6. **⚠️ CRITICAL: DO NOT STOP** - This is a continuous multi-phase workflow. After executing all attached tasks, you MUST immediately collapse them and execute the next phase. Workflow is NOT complete until Phase 4.

## Parameter Requirements

@@ -232,7 +276,9 @@ TodoWrite({todos: [

### Phase 0.5: Code Import & Completeness Assessment (Conditional)

-```bash
+**Step 0.5.1: Dispatch** - Import design system from code files
+
+```javascript
# Only execute if code files detected
IF design_source == "hybrid":
REPORT: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
@@ -245,7 +291,7 @@ IF design_source == "hybrid":
"--source \"{code_base_path}\""

TRY:
-# SlashCommand invocation ATTACHES import-from-code's tasks to current workflow
+# SlashCommand dispatch ATTACHES import-from-code's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself:
# - Phase 0: Discover and categorize code files
# - Phase 1.1-1.3: Style/Animation/Layout Agent extraction

@@ -336,7 +382,9 @@ TodoWrite(mark_completed: "Initialize and detect design source",

### Phase 2: Style Extraction

-```bash
+**Step 2.1: Dispatch** - Extract style design system
+
+```javascript
# Determine if style extraction needed
skip_style = (design_source == "code_only" AND style_complete)

@@ -361,7 +409,7 @@ ELSE:

extract_command = " ".join(command_parts)

-# SlashCommand invocation ATTACHES style-extract's tasks to current workflow
+# SlashCommand dispatch ATTACHES style-extract's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(extract_command)

@@ -371,7 +419,9 @@ ELSE:

### Phase 2.3: Animation Extraction

-```bash
+**Step 2.3.1: Dispatch** - Extract animation patterns
+
+```javascript
skip_animation = (design_source == "code_only" AND animation_complete)

IF skip_animation:
@@ -392,7 +442,7 @@ ELSE:

animation_extract_command = " ".join(command_parts)

-# SlashCommand invocation ATTACHES animation-extract's tasks to current workflow
+# SlashCommand dispatch ATTACHES animation-extract's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(animation_extract_command)

@@ -402,7 +452,9 @@ ELSE:

### Phase 2.5: Layout Extraction

-```bash
+**Step 2.5.1: Dispatch** - Extract layout templates
+
+```javascript
skip_layout = (design_source == "code_only" AND layout_complete)

IF skip_layout:
@@ -425,7 +477,7 @@ ELSE:

layout_extract_command = " ".join(command_parts)

-# SlashCommand invocation ATTACHES layout-extract's tasks to current workflow
+# SlashCommand dispatch ATTACHES layout-extract's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(layout_extract_command)

@@ -435,11 +487,13 @@ ELSE:

### Phase 3: UI Assembly

-```bash
+**Step 3.1: Dispatch** - Assemble UI prototypes from design tokens and layout templates
+
+```javascript
REPORT: "🚀 Phase 3: UI Assembly"
generate_command = f"/workflow:ui-design:generate --design-id \"{design_id}\""

-# SlashCommand invocation ATTACHES generate's tasks to current workflow
+# SlashCommand dispatch ATTACHES generate's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(generate_command)

@@ -449,12 +503,14 @@ TodoWrite(mark_completed: "Assemble UI", mark_in_progress: session_id ? "Integra

### Phase 4: Design System Integration

-```bash
+**Step 4.1: Dispatch** - Integrate design system into workflow session
+
+```javascript
IF session_id:
REPORT: "🚀 Phase 4: Design System Integration"
update_command = f"/workflow:ui-design:update --session {session_id}"

-# SlashCommand invocation ATTACHES update's tasks to current workflow
+# SlashCommand dispatch ATTACHES update's tasks to current workflow
# Orchestrator will EXECUTE these attached tasks itself
SlashCommand(update_command)

@@ -570,32 +626,39 @@ ELSE:
```javascript
// Initialize IMMEDIATELY at start of Phase 0 to track multi-phase execution (6 orchestrator-level tasks)
TodoWrite({todos: [
-{content: "Initialize and detect design source", status: "in_progress", activeForm: "Initializing"},
-{content: "Extract style (complete design systems)", status: "pending", activeForm: "Extracting style"},
-{content: "Extract animation (CSS auto mode)", status: "pending", activeForm: "Extracting animation"},
-{content: "Extract layout (structure templates)", status: "pending", activeForm: "Extracting layout"},
-{content: "Assemble UI prototypes", status: "pending", activeForm: "Assembling UI"},
-{content: "Integrate design system", status: "pending", activeForm: "Integrating"}
+{content: "Phase 0: Initialize and Detect Design Source", status: "in_progress", activeForm: "Initializing"},
+{content: "Phase 2: Style Extraction", status: "pending", activeForm: "Extracting style"},
+{content: "Phase 2.3: Animation Extraction", status: "pending", activeForm: "Extracting animation"},
+{content: "Phase 2.5: Layout Extraction", status: "pending", activeForm: "Extracting layout"},
+{content: "Phase 3: UI Assembly", status: "pending", activeForm: "Assembling UI"},
+{content: "Phase 4: Design System Integration", status: "pending", activeForm: "Integrating"}
]})

// ⚠️ CRITICAL: Dynamic TodoWrite task attachment strategy:
//
-// **Key Concept**: SlashCommand invocation ATTACHES tasks to current workflow.
+// **Key Concept**: SlashCommand dispatch ATTACHES tasks to current workflow.
// Orchestrator EXECUTES these attached tasks itself, not waiting for external completion.
//
-// Phase 2-4 SlashCommand Invocation Pattern:
-// 1. SlashCommand invocation ATTACHES sub-command tasks to TodoWrite
-// 2. TodoWrite expands to include attached tasks
-// 3. Orchestrator EXECUTES attached tasks sequentially
-// 4. After all attached tasks complete, COLLAPSE them into phase summary
-// 5. Update next phase to in_progress
-// 6. IMMEDIATELY execute next phase SlashCommand (auto-continue)
+// Phase 2-4 SlashCommand Dispatch Pattern (when tasks are attached):
+// Example - Phase 2 with sub-tasks:
+// [
+// {"content": "Phase 0: Initialize and Detect Design Source", "status": "completed", "activeForm": "Initializing"},
+// {"content": "Phase 2: Style Extraction", "status": "in_progress", "activeForm": "Extracting style"},
+// {"content": " → Analyze design references", "status": "in_progress", "activeForm": "Analyzing references"},
+// {"content": " → Generate design tokens", "status": "pending", "activeForm": "Generating tokens"},
+// {"content": " → Create style guide", "status": "pending", "activeForm": "Creating guide"},
+// {"content": "Phase 2.3: Animation Extraction", "status": "pending", "activeForm": "Extracting animation"},
+// ...
+// ]
+//
+// After sub-tasks complete, COLLAPSE back to:
+// [
+// {"content": "Phase 0: Initialize and Detect Design Source", "status": "completed", "activeForm": "Initializing"},
+// {"content": "Phase 2: Style Extraction", "status": "completed", "activeForm": "Extracting style"},
+// {"content": "Phase 2.3: Animation Extraction", "status": "in_progress", "activeForm": "Extracting animation"},
+// ...
+// ]
//
-// Benefits:
-// ✓ Real-time visibility into sub-command task progress
-// ✓ Clean orchestrator-level summary after each phase
-// ✓ Clear mental model: SlashCommand = attach tasks, not delegate work
-// ✓ Dynamic attachment/collapse maintains clarity
```

## Error Handling

@@ -639,7 +702,7 @@ TodoWrite({todos: [

- **Input**: `--images` (glob pattern) and/or `--prompt` (text/file paths) + optional `--session`
- **Output**: Complete design system in `{base_path}/` (style-extraction, layout-extraction, prototypes)
-- **Sub-commands Called**:
+- **Sub-commands Dispatched**:
1. `/workflow:ui-design:import-from-code` (Phase 0.5, conditional - if code files detected)
2. `/workflow:ui-design:style-extract` (Phase 2 - complete design systems)
3. `/workflow:ui-design:animation-extract` (Phase 2.3 - animation tokens)

@@ -43,6 +43,25 @@ Extract design system tokens from source code files (CSS/SCSS/JS/TS/HTML) using

## Execution Process

```
Input Parsing:
├─ Parse flags: --design-id, --session, --source
└─ Decision (base path resolution):
   ├─ --design-id provided → Exact match by design ID
   ├─ --session provided → Latest design run in session
   └─ Neither → ERROR: Must provide --design-id or --session

Phase 0: Setup & File Discovery
├─ Step 1: Resolve base path
├─ Step 2: Initialize directories
└─ Step 3: Discover files using script

Phase 1: Parallel Agent Analysis (3 agents)
├─ Style Agent → design-tokens.json + code_snippets
├─ Animation Agent → animation-tokens.json + code_snippets
└─ Layout Agent → layout-templates.json + code_snippets
```
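
A rough sketch of the base-path decision above is shown below. Only the precedence rules (`--design-id` exact match, `--session` latest run, neither → error) come from this document; the run-directory layout and helper name are assumptions for illustration.

```bash
# Hedged sketch of base path resolution (directory naming is hypothetical).
resolve_base_path() {
  local design_id="$1" session_id="$2" runs_root="$3"
  if [[ -n "$design_id" ]]; then
    echo "${runs_root}/${design_id}"                              # exact match by design ID
  elif [[ -n "$session_id" ]]; then
    ls -dt "${runs_root}"/*"${session_id}"* 2>/dev/null | head -n 1  # latest run in session
  else
    echo "ERROR: Must provide --design-id or --session" >&2
    return 1
  fi
}
```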

### Step 1: Setup & File Discovery

**Purpose**: Initialize session, discover and categorize code files

@@ -87,7 +106,7 @@ echo " Output: $base_path"

# 3. Discover files using script
discovery_file="${intermediates_dir}/discovered-files.json"
-~/.claude/scripts/discover-design-files.sh "$source" "$discovery_file"
+ccw tool exec discover_design_files '{"sourceDir":"'"$source"'","outputPath":"'"$discovery_file"'"}'

echo " Output: $discovery_file"
```
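
For illustration, the new discovery call can be wrapped with a simple success check. A sketch only; it assumes the `ccw` CLI is available and reuses the `$source` and `$discovery_file` variables already defined in this step.

```bash
# Hedged sketch: run discovery and confirm it produced the expected JSON file.
if ccw tool exec discover_design_files \
     '{"sourceDir":"'"$source"'","outputPath":"'"$discovery_file"'"}'; then
  if [[ -s "$discovery_file" ]]; then
    echo " Output: $discovery_file"
  else
    echo "Discovery ran but produced no output file: $discovery_file" >&2
  fi
else
  echo "discover_design_files failed for $source" >&2
fi
```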

@@ -23,6 +23,39 @@ This command separates the "scaffolding" (HTML structure and CSS layout) from th
- **Device-Aware**: Optimized for specific device types (desktop, mobile, tablet, responsive)
- **Token-Based**: CSS uses `var()` placeholders for spacing and breakpoints

## Execution Process

```
Input Parsing:
├─ Parse flags: --design-id, --session, --images, --prompt, --targets, --variants, --device-type, --interactive, --refine
└─ Decision (mode detection):
   ├─ --refine flag → Refinement Mode (variants_count = 1)
   └─ No --refine → Exploration Mode (variants_count = --variants OR 3)

Phase 0: Setup & Input Validation
├─ Step 1: Detect input, mode & targets
├─ Step 2: Load inputs & create directories
└─ Step 3: Memory check (skip if cached)

Phase 1: Layout Concept/Refinement Options Generation
├─ Step 0.5: Load existing layout (Refinement Mode only)
├─ Step 1: Generate options (Agent Task 1)
│  └─ Decision:
│     ├─ Exploration Mode → Generate contrasting layout concepts
│     └─ Refinement Mode → Generate refinement options
└─ Step 2: Verify options file created

Phase 1.5: User Confirmation (Optional)
└─ Decision (--interactive flag):
   ├─ --interactive present → Present options, capture selection
   └─ No --interactive → Skip to Phase 2

Phase 2: Layout Template Generation
├─ Step 1: Load user selections or default to all
├─ Step 2: Launch parallel agent tasks
└─ Step 3: Verify output files
```

## Phase 0: Setup & Input Validation

### Step 1: Detect Input, Mode & Targets

@@ -33,6 +33,29 @@ Converts design run extraction results into shareable reference package with:

## Execution Process

```
Input Parsing:
├─ Parse flags: --design-run, --package-name, --output-dir
└─ Validation:
   ├─ --design-run and --package-name REQUIRED
   └─ Package name format: lowercase, alphanumeric, hyphens only

Phase 0: Setup & Validation
├─ Step 1: Validate required parameters
├─ Step 2: Validate package name format
├─ Step 3: Validate design run exists
├─ Step 4: Check required extraction files (design-tokens.json, layout-templates.json)
└─ Step 5: Setup output directory

Phase 1: Prepare Component Data
├─ Step 1: Copy layout templates
├─ Step 2: Copy design tokens
└─ Step 3: Copy animation tokens (optional)

Phase 2: Preview Generation (Agent)
└─ Generate preview.html + preview.css via ui-design-agent
```
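
The two validation rules above translate directly into a small check. A sketch under stated assumptions only; the function name is hypothetical and the real command performs additional validation (design run existence, extraction files, output directory).

```bash
# Hedged sketch of the required-flags and package-name checks described above.
validate_package_args() {
  local design_run="$1" package_name="$2"
  if [[ -z "$design_run" || -z "$package_name" ]]; then
    echo "ERROR: --design-run and --package-name are required" >&2
    return 1
  fi
  if [[ ! "$package_name" =~ ^[a-z0-9]+(-[a-z0-9]+)*$ ]]; then
    echo "ERROR: package name must be lowercase, alphanumeric, hyphens only" >&2
    return 1
  fi
}
```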

### Phase 0: Setup & Validation

**Purpose**: Validate inputs, prepare output directory

@@ -19,6 +19,43 @@ Extract design style from reference images or text prompts using Claude's built-
- **Dual Mode**: Exploration (multiple contrasting variants) or Refinement (single design fine-tuning)
- **Production-Ready**: WCAG AA compliant, OKLCH colors, semantic naming

## Execution Process

```
Input Parsing:
├─ Parse flags: --design-id, --session, --images, --prompt, --variants, --interactive, --refine
└─ Decision (mode detection):
   ├─ --refine flag → Refinement Mode (variants_count = 1)
   └─ No --refine → Exploration Mode (variants_count = --variants OR 3)

Phase 0: Setup & Input Validation
├─ Step 1: Detect input mode, extraction mode & base path
├─ Step 2: Load inputs
└─ Step 3: Memory check (skip if exists)

Phase 1: Design Direction/Refinement Options Generation
├─ Step 1: Load project context
├─ Step 2: Generate options (Agent Task 1)
│  └─ Decision:
│     ├─ Exploration Mode → Generate contrasting design directions
│     └─ Refinement Mode → Generate refinement options
└─ Step 3: Verify options file created

Phase 1.5: User Confirmation (Optional)
└─ Decision (--interactive flag):
   ├─ --interactive present → Present options, capture selection
   └─ No --interactive → Skip to Phase 2

Phase 2: Design System Generation
├─ Step 1: Load user selection or default to all
├─ Step 2: Create output directories
└─ Step 3: Launch agent tasks (parallel)

Phase 3: Verify Output
├─ Step 1: Check files created
└─ Step 2: Verify file sizes
```
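
The mode-detection rule above amounts to a small piece of flag parsing, sketched below. Illustrative only; the real command parses many more flags, and only the `--refine` / `--variants` / default-of-3 behavior is taken from this document.

```bash
# Hedged sketch of mode detection: --refine forces a single variant,
# otherwise use --variants or fall back to 3.
variants_count=3
mode="exploration"
while [[ $# -gt 0 ]]; do
  case "$1" in
    --refine)   mode="refinement"; shift ;;
    --variants) variants_count="$2"; shift 2 ;;
    *)          shift ;;   # other flags handled elsewhere
  esac
done
[[ "$mode" == "refinement" ]] && variants_count=1
echo "mode=$mode variants=$variants_count"
```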

## Phase 0: Setup & Input Validation

### Step 1: Detect Input Mode, Extraction Mode & Base Path

@@ -1,4 +1,8 @@
#!/bin/bash
+# ⚠️ DEPRECATED: This script is deprecated.
+# Please use: ccw tool exec classify_folders '{"path":".","outputFormat":"json"}'
+# This file will be removed in a future version.
+
# Classify folders by type for documentation generation
# Usage: get_modules_by_depth.sh | classify-folders.sh
# Output: folder_path|folder_type|code:N|dirs:N

@@ -1,4 +1,8 @@
#!/bin/bash
+# ⚠️ DEPRECATED: This script is deprecated.
+# Please use: ccw tool exec convert_tokens_to_css '{"inputPath":"design-tokens.json","outputPath":"tokens.css"}'
+# This file will be removed in a future version.
+
# Convert design-tokens.json to tokens.css with Google Fonts import and global font rules
# Usage: cat design-tokens.json | ./convert_tokens_to_css.sh > tokens.css
# Or: ./convert_tokens_to_css.sh < design-tokens.json > tokens.css

@@ -1,4 +1,8 @@
#!/bin/bash
+# ⚠️ DEPRECATED: This script is deprecated.
+# Please use: ccw tool exec detect_changed_modules '{"baseBranch":"main","format":"list"}'
+# This file will be removed in a future version.
+
# Detect modules affected by git changes or recent modifications
# Usage: detect_changed_modules.sh [format]
# format: list|grouped|paths (default: paths)

@@ -154,4 +158,4 @@ detect_changed_modules() {
# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
detect_changed_modules "$@"
fi

@@ -1,4 +1,8 @@
#!/usr/bin/env bash
+# ⚠️ DEPRECATED: This script is deprecated.
+# Please use: ccw tool exec discover_design_files '{"sourceDir":".","outputPath":"output.json"}'
+# This file will be removed in a future version.
+
# discover-design-files.sh - Discover design-related files and output JSON
# Usage: discover-design-files.sh <source_dir> <output_json>

.claude/scripts/generate_module_docs.sh (new file, 717 lines)
@@ -0,0 +1,717 @@
|
|||||||
|
#!/bin/bash
|
||||||
|
# ⚠️ DEPRECATED: This script is deprecated.
|
||||||
|
# Please use: ccw tool exec generate_module_docs '{"path":".","strategy":"single-layer","tool":"gemini"}'
|
||||||
|
# This file will be removed in a future version.
|
||||||
|
|
||||||
|
# Generate documentation for modules and projects with multiple strategies
|
||||||
|
# Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]
|
||||||
|
# strategy: full|single|project-readme|project-architecture|http-api
|
||||||
|
# source_path: Path to the source module directory (or project root for project-level docs)
|
||||||
|
# project_name: Project name for output path (e.g., "myproject")
|
||||||
|
# tool: gemini|qwen|codex (default: gemini)
|
||||||
|
# model: Model name (optional, uses tool defaults)
|
||||||
|
#
|
||||||
|
# Default Models:
|
||||||
|
# gemini: gemini-2.5-flash
|
||||||
|
# qwen: coder-model
|
||||||
|
# codex: gpt5-codex
|
||||||
|
#
|
||||||
|
# Module-Level Strategies:
|
||||||
|
# full: Full documentation generation
|
||||||
|
# - Read: All files in current and subdirectories (@**/*)
|
||||||
|
# - Generate: API.md + README.md for each directory containing code files
|
||||||
|
# - Use: Deep directories (Layer 3), comprehensive documentation
|
||||||
|
#
|
||||||
|
# single: Single-layer documentation
|
||||||
|
# - Read: Current directory code + child API.md/README.md files
|
||||||
|
# - Generate: API.md + README.md only in current directory
|
||||||
|
# - Use: Upper layers (Layer 1-2), incremental updates
|
||||||
|
#
|
||||||
|
# Project-Level Strategies:
|
||||||
|
# project-readme: Project overview documentation
|
||||||
|
# - Read: All module API.md and README.md files
|
||||||
|
# - Generate: README.md (project root)
|
||||||
|
# - Use: After all module docs are generated
|
||||||
|
#
|
||||||
|
# project-architecture: System design documentation
|
||||||
|
# - Read: All module docs + project README
|
||||||
|
# - Generate: ARCHITECTURE.md + EXAMPLES.md
|
||||||
|
# - Use: After project README is generated
|
||||||
|
#
|
||||||
|
# http-api: HTTP API documentation
|
||||||
|
# - Read: API route files + existing docs
|
||||||
|
# - Generate: api/README.md
|
||||||
|
# - Use: For projects with HTTP APIs
|
||||||
|
#
|
||||||
|
# Output Structure:
|
||||||
|
# Module docs: .workflow/docs/{project_name}/{source_path}/API.md
|
||||||
|
# Module docs: .workflow/docs/{project_name}/{source_path}/README.md
|
||||||
|
# Project docs: .workflow/docs/{project_name}/README.md
|
||||||
|
# Project docs: .workflow/docs/{project_name}/ARCHITECTURE.md
|
||||||
|
# Project docs: .workflow/docs/{project_name}/EXAMPLES.md
|
||||||
|
# API docs: .workflow/docs/{project_name}/api/README.md
|
||||||
|
#
|
||||||
|
# Features:
|
||||||
|
# - Path mirroring: source structure → docs structure
|
||||||
|
# - Template-driven generation
|
||||||
|
# - Respects .gitignore patterns
|
||||||
|
# - Detects code vs navigation folders
|
||||||
|
# - Tool fallback support
|
||||||
|
|
||||||
|
# Build exclusion filters from .gitignore
|
||||||
|
build_exclusion_filters() {
|
||||||
|
local filters=""
|
||||||
|
|
||||||
|
# Common system/cache directories to exclude
|
||||||
|
local system_excludes=(
|
||||||
|
".git" "__pycache__" "node_modules" ".venv" "venv" "env"
|
||||||
|
"dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
|
||||||
|
"coverage" ".nyc_output" "logs" "tmp" "temp" ".workflow"
|
||||||
|
)
|
||||||
|
|
||||||
|
for exclude in "${system_excludes[@]}"; do
|
||||||
|
filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
|
||||||
|
done
|
||||||
|
|
||||||
|
# Find and parse .gitignore (current dir first, then git root)
|
||||||
|
local gitignore_file=""
|
||||||
|
|
||||||
|
# Check current directory first
|
||||||
|
if [ -f ".gitignore" ]; then
|
||||||
|
gitignore_file=".gitignore"
|
||||||
|
else
|
||||||
|
# Try to find git root and check for .gitignore there
|
||||||
|
local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
|
||||||
|
if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
|
||||||
|
gitignore_file="$git_root/.gitignore"
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Parse .gitignore if found
|
||||||
|
if [ -n "$gitignore_file" ]; then
|
||||||
|
while IFS= read -r line; do
|
||||||
|
# Skip empty lines and comments
|
||||||
|
[[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue
|
||||||
|
|
||||||
|
# Remove trailing slash and whitespace
|
||||||
|
line=$(echo "$line" | sed 's|/$||' | xargs)
|
||||||
|
|
||||||
|
# Skip wildcards patterns (too complex for simple find)
|
||||||
|
[[ "$line" =~ \* ]] && continue
|
||||||
|
|
||||||
|
# Add to filters
|
||||||
|
filters+=" -not -path '*/$line' -not -path '*/$line/*'"
|
||||||
|
done < "$gitignore_file"
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo "$filters"
|
||||||
|
}
|
||||||
|
|
||||||
|
# Detect folder type (code vs navigation)
|
||||||
|
detect_folder_type() {
|
||||||
|
local target_path="$1"
|
||||||
|
local exclusion_filters="$2"
|
||||||
|
|
||||||
|
# Count code files (primary indicators)
|
||||||
|
local code_count=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)
|
||||||
|
|
||||||
|
if [ $code_count -gt 0 ]; then
|
||||||
|
echo "code"
|
||||||
|
else
|
||||||
|
echo "navigation"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# Scan directory structure and generate structured information
|
||||||
|
scan_directory_structure() {
|
||||||
|
local target_path="$1"
|
||||||
|
local strategy="$2"
|
||||||
|
|
||||||
|
if [ ! -d "$target_path" ]; then
|
||||||
|
echo "Directory not found: $target_path"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
local exclusion_filters=$(build_exclusion_filters)
|
||||||
|
local structure_info=""
|
||||||
|
|
||||||
|
# Get basic directory info
|
||||||
|
local dir_name=$(basename "$target_path")
|
||||||
|
local total_files=$(eval "find \"$target_path\" -type f $exclusion_filters 2>/dev/null" | wc -l)
|
||||||
|
local total_dirs=$(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null" | wc -l)
|
||||||
|
local folder_type=$(detect_folder_type "$target_path" "$exclusion_filters")
|
||||||
|
|
||||||
|
structure_info+="Directory: $dir_name\n"
|
||||||
|
structure_info+="Total files: $total_files\n"
|
||||||
|
structure_info+="Total directories: $total_dirs\n"
|
||||||
|
structure_info+="Folder type: $folder_type\n\n"
|
||||||
|
|
||||||
|
if [ "$strategy" = "full" ]; then
|
||||||
|
# For full: show all subdirectories with file counts
|
||||||
|
structure_info+="Subdirectories with files:\n"
|
||||||
|
while IFS= read -r dir; do
|
||||||
|
if [ -n "$dir" ] && [ "$dir" != "$target_path" ]; then
|
||||||
|
local rel_path=${dir#$target_path/}
|
||||||
|
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
|
||||||
|
if [ $file_count -gt 0 ]; then
|
||||||
|
local subdir_type=$(detect_folder_type "$dir" "$exclusion_filters")
|
||||||
|
structure_info+=" - $rel_path/ ($file_count files, type: $subdir_type)\n"
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
done < <(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null")
|
||||||
|
else
|
||||||
|
# For single: show direct children only
|
||||||
|
structure_info+="Direct subdirectories:\n"
|
||||||
|
while IFS= read -r dir; do
|
||||||
|
if [ -n "$dir" ]; then
|
||||||
|
local dir_name=$(basename "$dir")
|
||||||
|
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
|
||||||
|
local has_api=$([ -f "$dir/API.md" ] && echo " [has API.md]" || echo "")
|
||||||
|
local has_readme=$([ -f "$dir/README.md" ] && echo " [has README.md]" || echo "")
|
||||||
|
structure_info+=" - $dir_name/ ($file_count files)$has_api$has_readme\n"
|
||||||
|
fi
|
||||||
|
done < <(eval "find \"$target_path\" -maxdepth 1 -type d $exclusion_filters 2>/dev/null" | grep -v "^$target_path$")
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Show main file types in current directory
|
||||||
|
structure_info+="\nCurrent directory files:\n"
|
||||||
|
local code_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)
|
||||||
|
local config_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.json' -o -name '*.yaml' -o -name '*.yml' -o -name '*.toml' \\) $exclusion_filters 2>/dev/null" | wc -l)
|
||||||
|
local doc_files=$(eval "find \"$target_path\" -maxdepth 1 -type f -name '*.md' $exclusion_filters 2>/dev/null" | wc -l)
|
||||||
|
|
||||||
|
structure_info+=" - Code files: $code_files\n"
|
||||||
|
structure_info+=" - Config files: $config_files\n"
|
||||||
|
structure_info+=" - Documentation: $doc_files\n"
|
||||||
|
|
||||||
|
printf "%b" "$structure_info"
|
||||||
|
}
|
||||||
|
|
||||||
|
# Calculate output path based on source path and project name
|
||||||
|
calculate_output_path() {
|
||||||
|
local source_path="$1"
|
||||||
|
local project_name="$2"
|
||||||
|
local project_root="$3"
|
||||||
|
|
||||||
|
# Get absolute path of source (normalize to Unix-style path)
|
||||||
|
local abs_source=$(cd "$source_path" && pwd)
|
||||||
|
|
||||||
|
# Normalize project root to same format
|
||||||
|
local norm_project_root=$(cd "$project_root" && pwd)
|
||||||
|
|
||||||
|
# Calculate relative path from project root
|
||||||
|
local rel_path="${abs_source#$norm_project_root}"
|
||||||
|
|
||||||
|
# Remove leading slash if present
|
||||||
|
rel_path="${rel_path#/}"
|
||||||
|
|
||||||
|
# If source is project root, use project name directly
|
||||||
|
if [ "$abs_source" = "$norm_project_root" ] || [ -z "$rel_path" ]; then
|
||||||
|
echo "$norm_project_root/.workflow/docs/$project_name"
|
||||||
|
else
|
||||||
|
echo "$norm_project_root/.workflow/docs/$project_name/$rel_path"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
generate_module_docs() {
|
||||||
|
local strategy="$1"
|
||||||
|
local source_path="$2"
|
||||||
|
local project_name="$3"
|
||||||
|
local tool="${4:-gemini}"
|
||||||
|
local model="$5"
|
||||||
|
|
||||||
|
# Validate parameters
|
||||||
|
if [ -z "$strategy" ] || [ -z "$source_path" ] || [ -z "$project_name" ]; then
|
||||||
|
echo "❌ Error: Strategy, source path, and project name are required"
|
||||||
|
echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
|
||||||
|
echo "Module strategies: full, single"
|
||||||
|
echo "Project strategies: project-readme, project-architecture, http-api"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Validate strategy
|
||||||
|
local valid_strategies=("full" "single" "project-readme" "project-architecture" "http-api")
|
||||||
|
local strategy_valid=false
|
||||||
|
for valid_strategy in "${valid_strategies[@]}"; do
|
||||||
|
if [ "$strategy" = "$valid_strategy" ]; then
|
||||||
|
strategy_valid=true
|
||||||
|
break
|
||||||
|
fi
|
||||||
|
done
|
||||||
|
|
||||||
|
if [ "$strategy_valid" = false ]; then
|
||||||
|
echo "❌ Error: Invalid strategy '$strategy'"
|
||||||
|
echo "Valid module strategies: full, single"
|
||||||
|
echo "Valid project strategies: project-readme, project-architecture, http-api"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [ ! -d "$source_path" ]; then
|
||||||
|
echo "❌ Error: Source directory '$source_path' does not exist"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Set default models if not specified
|
||||||
|
if [ -z "$model" ]; then
|
||||||
|
case "$tool" in
|
||||||
|
gemini)
|
||||||
|
model="gemini-2.5-flash"
|
||||||
|
;;
|
||||||
|
qwen)
|
||||||
|
model="coder-model"
|
||||||
|
;;
|
||||||
|
codex)
|
||||||
|
model="gpt5-codex"
|
||||||
|
;;
|
||||||
|
*)
|
||||||
|
model=""
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Build exclusion filters
|
||||||
|
local exclusion_filters=$(build_exclusion_filters)
|
||||||
|
|
||||||
|
# Get project root
|
||||||
|
local project_root=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
|
||||||
|
|
||||||
|
# Determine if this is a project-level strategy
|
||||||
|
local is_project_level=false
|
||||||
|
if [[ "$strategy" =~ ^project- ]] || [ "$strategy" = "http-api" ]; then
|
||||||
|
is_project_level=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Calculate output path
|
||||||
|
local output_path
|
||||||
|
if [ "$is_project_level" = true ]; then
|
||||||
|
# Project-level docs go to project root
|
||||||
|
if [ "$strategy" = "http-api" ]; then
|
||||||
|
output_path="$project_root/.workflow/docs/$project_name/api"
|
||||||
|
else
|
||||||
|
output_path="$project_root/.workflow/docs/$project_name"
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
output_path=$(calculate_output_path "$source_path" "$project_name" "$project_root")
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Create output directory
|
||||||
|
mkdir -p "$output_path"
|
||||||
|
|
||||||
|
# Detect folder type (only for module-level strategies)
|
||||||
|
local folder_type=""
|
||||||
|
if [ "$is_project_level" = false ]; then
|
||||||
|
folder_type=$(detect_folder_type "$source_path" "$exclusion_filters")
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Load templates based on strategy
|
||||||
|
local api_template=""
|
||||||
|
local readme_template=""
|
||||||
|
local template_content=""
|
||||||
|
|
||||||
|
if [ "$is_project_level" = true ]; then
|
||||||
|
# Project-level templates
|
||||||
|
case "$strategy" in
|
||||||
|
project-readme)
|
||||||
|
local proj_readme_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-readme.txt"
|
||||||
|
if [ -f "$proj_readme_path" ]; then
|
||||||
|
template_content=$(cat "$proj_readme_path")
|
||||||
|
echo " 📋 Loaded Project README template: $(wc -l < "$proj_readme_path") lines"
|
||||||
|
fi
|
||||||
|
;;
|
||||||
|
project-architecture)
|
||||||
|
local arch_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-architecture.txt"
|
||||||
|
local examples_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-examples.txt"
|
||||||
|
if [ -f "$arch_path" ]; then
|
||||||
|
template_content=$(cat "$arch_path")
|
||||||
|
echo " 📋 Loaded Architecture template: $(wc -l < "$arch_path") lines"
|
||||||
|
fi
|
||||||
|
if [ -f "$examples_path" ]; then
|
||||||
|
template_content="$template_content
|
||||||
|
|
||||||
|
EXAMPLES TEMPLATE:
|
||||||
|
$(cat "$examples_path")"
|
||||||
|
echo " 📋 Loaded Examples template: $(wc -l < "$examples_path") lines"
|
||||||
|
fi
|
||||||
|
;;
|
||||||
|
http-api)
|
||||||
|
local api_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
|
||||||
|
if [ -f "$api_path" ]; then
|
||||||
|
template_content=$(cat "$api_path")
|
||||||
|
echo " 📋 Loaded HTTP API template: $(wc -l < "$api_path") lines"
|
||||||
|
fi
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
else
|
||||||
|
# Module-level templates
|
||||||
|
local api_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
|
||||||
|
local readme_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/module-readme.txt"
|
||||||
|
local nav_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/folder-navigation.txt"
|
||||||
|
|
||||||
|
if [ "$folder_type" = "code" ]; then
|
||||||
|
if [ -f "$api_template_path" ]; then
|
||||||
|
api_template=$(cat "$api_template_path")
|
||||||
|
echo " 📋 Loaded API template: $(wc -l < "$api_template_path") lines"
|
||||||
|
fi
|
||||||
|
if [ -f "$readme_template_path" ]; then
|
||||||
|
readme_template=$(cat "$readme_template_path")
|
||||||
|
echo " 📋 Loaded README template: $(wc -l < "$readme_template_path") lines"
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
# Navigation folder uses navigation template
|
||||||
|
if [ -f "$nav_template_path" ]; then
|
||||||
|
readme_template=$(cat "$nav_template_path")
|
||||||
|
echo " 📋 Loaded Navigation template: $(wc -l < "$nav_template_path") lines"
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Scan directory structure (only for module-level strategies)
|
||||||
|
local structure_info=""
|
||||||
|
if [ "$is_project_level" = false ]; then
|
||||||
|
echo " 🔍 Scanning directory structure..."
|
||||||
|
structure_info=$(scan_directory_structure "$source_path" "$strategy")
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Prepare logging info
|
||||||
|
local module_name=$(basename "$source_path")
|
||||||
|
|
||||||
|
echo "⚡ Generating docs: $source_path → $output_path"
|
||||||
|
echo " Strategy: $strategy | Tool: $tool | Model: $model | Type: $folder_type"
|
||||||
|
echo " Output: $output_path"
|
||||||
|
|
||||||
|
# Build strategy-specific prompt
|
||||||
|
local final_prompt=""
|
||||||
|
|
||||||
|
# Project-level strategies
|
||||||
|
if [ "$strategy" = "project-readme" ]; then
|
||||||
|
final_prompt="PURPOSE: Generate comprehensive project overview documentation
|
||||||
|
|
||||||
|
PROJECT: $project_name
|
||||||
|
OUTPUT: Current directory (file will be moved to final location)
|
||||||
|
|
||||||
|
Read: @.workflow/docs/$project_name/**/*.md
|
||||||
|
|
||||||
|
Context: All module documentation files from the project
|
||||||
|
|
||||||
|
Generate ONE documentation file in current directory:
|
||||||
|
- README.md - Project root documentation
|
||||||
|
|
||||||
|
Template:
|
||||||
|
$template_content
|
||||||
|
|
||||||
|
Instructions:
|
||||||
|
- Create README.md in CURRENT DIRECTORY
|
||||||
|
- Synthesize information from all module docs
|
||||||
|
- Include project overview, getting started, and navigation
|
||||||
|
- Create clear module navigation with links
|
||||||
|
- Follow template structure exactly"
|
||||||
|
|
||||||
|
elif [ "$strategy" = "project-architecture" ]; then
|
||||||
|
final_prompt="PURPOSE: Generate system design and usage examples documentation
|
||||||
|
|
||||||
|
PROJECT: $project_name
|
||||||
|
OUTPUT: Current directory (files will be moved to final location)
|
||||||
|
|
||||||
|
Read: @.workflow/docs/$project_name/**/*.md
|
||||||
|
|
||||||
|
Context: All project documentation including module docs and project README
|
||||||
|
|
||||||
|
Generate TWO documentation files in current directory:
|
||||||
|
1. ARCHITECTURE.md - System architecture and design patterns
|
||||||
|
2. EXAMPLES.md - End-to-end usage examples
|
||||||
|
|
||||||
|
Template:
|
||||||
|
$template_content
|
||||||
|
|
||||||
|
Instructions:
|
||||||
|
- Create both ARCHITECTURE.md and EXAMPLES.md in CURRENT DIRECTORY
|
||||||
|
- Synthesize architectural patterns from module documentation
|
||||||
|
- Document system structure, module relationships, and design decisions
|
||||||
|
- Provide practical code examples and usage scenarios
|
||||||
|
- Follow template structure for both files"
|
||||||
|
|
||||||
|
elif [ "$strategy" = "http-api" ]; then
|
||||||
|
final_prompt="PURPOSE: Generate HTTP API reference documentation
|
||||||
|
|
||||||
|
PROJECT: $project_name
|
||||||
|
OUTPUT: Current directory (file will be moved to final location)
|
||||||
|
|
||||||
|
Read: @**/*.{ts,js,py,go,rs} @.workflow/docs/$project_name/**/*.md
|
||||||
|
|
||||||
|
Context: API route files and existing documentation
|
||||||
|
|
||||||
|
Generate ONE documentation file in current directory:
|
||||||
|
- README.md - HTTP API documentation (in api/ subdirectory)
|
||||||
|
|
||||||
|
Template:
|
||||||
|
$template_content
|
||||||
|
|
||||||
|
Instructions:
|
||||||
|
- Create README.md in CURRENT DIRECTORY
|
||||||
|
- Document all HTTP endpoints (routes, methods, parameters, responses)
|
||||||
|
- Include authentication requirements and error codes
|
||||||
|
- Provide request/response examples
|
||||||
|
- Follow template structure (Part B: HTTP API documentation)"
|
||||||
|
|
||||||
|
# Module-level strategies
|
||||||
|
elif [ "$strategy" = "full" ]; then
|
||||||
|
# Full strategy: read all files, generate for each directory
|
||||||
|
if [ "$folder_type" = "code" ]; then
|
||||||
|
final_prompt="PURPOSE: Generate comprehensive API and module documentation
|
||||||
|
|
||||||
|
Directory Structure Analysis:
|
||||||
|
$structure_info
|
||||||
|
|
||||||
|
SOURCE: $source_path
|
||||||
|
OUTPUT: Current directory (files will be moved to final location)
|
||||||
|
|
||||||
|
Read: @**/*
|
||||||
|
|
||||||
|
Generate TWO documentation files in current directory:
|
||||||
|
1. API.md - Code API documentation (functions, classes, interfaces)
|
||||||
|
Template:
|
||||||
|
$api_template
|
||||||
|
|
||||||
|
2. README.md - Module overview documentation
|
||||||
|
Template:
|
||||||
|
$readme_template
|
||||||
|
|
||||||
|
Instructions:
|
||||||
|
- Generate both API.md and README.md in CURRENT DIRECTORY
|
||||||
|
- If subdirectories contain code files, generate their docs too (recursive)
|
||||||
|
- Work bottom-up: deepest directories first
|
||||||
|
- Follow template structure exactly
|
||||||
|
- Use structure analysis for context"
|
||||||
|
else
|
||||||
|
# Navigation folder - README only
|
||||||
|
final_prompt="PURPOSE: Generate navigation documentation for folder structure
|
||||||
|
|
||||||
|
Directory Structure Analysis:
|
||||||
|
$structure_info
|
||||||
|
|
||||||
|
SOURCE: $source_path
|
||||||
|
OUTPUT: Current directory (file will be moved to final location)
|
||||||
|
|
||||||
|
Read: @**/*
|
||||||
|
|
||||||
|
Generate ONE documentation file in current directory:
|
||||||
|
- README.md - Navigation and folder overview
|
||||||
|
|
||||||
|
Template:
|
||||||
|
$readme_template
|
||||||
|
|
||||||
|
Instructions:
|
||||||
|
- Create README.md in CURRENT DIRECTORY
|
||||||
|
- Focus on folder structure and navigation
|
||||||
|
- Link to subdirectory documentation
|
||||||
|
- Use structure analysis for context"
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
# Single strategy: read current + child docs only
|
||||||
|
if [ "$folder_type" = "code" ]; then
|
||||||
|
final_prompt="PURPOSE: Generate API and module documentation for current directory
|
||||||
|
|
||||||
|
Directory Structure Analysis:
|
||||||
|
$structure_info
|
||||||
|
|
||||||
|
SOURCE: $source_path
|
||||||
|
OUTPUT: Current directory (files will be moved to final location)
|
||||||
|
|
||||||
|
Read: @*/API.md @*/README.md @*.ts @*.tsx @*.js @*.jsx @*.py @*.sh @*.go @*.rs @*.md @*.json @*.yaml @*.yml
|
||||||
|
|
||||||
|
Generate TWO documentation files in current directory:
|
||||||
|
1. API.md - Code API documentation
|
||||||
|
Template:
|
||||||
|
$api_template
|
||||||
|
|
||||||
|
2. README.md - Module overview
|
||||||
|
Template:
|
||||||
|
$readme_template
|
||||||
|
|
||||||
|
Instructions:
|
||||||
|
- Generate both API.md and README.md in CURRENT DIRECTORY
|
||||||
|
- Reference child documentation, do not duplicate
|
||||||
|
- Follow template structure
|
||||||
|
- Use structure analysis for current directory context"
|
||||||
|
else
|
||||||
|
# Navigation folder - README only
|
||||||
|
final_prompt="PURPOSE: Generate navigation documentation
|
||||||
|
|
||||||
|
Directory Structure Analysis:
|
||||||
|
$structure_info
|
||||||
|
|
||||||
|
SOURCE: $source_path
|
||||||
|
OUTPUT: Current directory (file will be moved to final location)
|
||||||
|
|
||||||
|
Read: @*/API.md @*/README.md @*.md
|
||||||
|
|
||||||
|
Generate ONE documentation file in current directory:
|
||||||
|
- README.md - Navigation and overview
|
||||||
|
|
||||||
|
Template:
|
||||||
|
$readme_template
|
||||||
|
|
||||||
|
Instructions:
|
||||||
|
- Create README.md in CURRENT DIRECTORY
|
||||||
|
- Link to child documentation
|
||||||
|
- Use structure analysis for navigation context"
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Execute documentation generation
|
||||||
|
local start_time=$(date +%s)
|
||||||
|
echo " 🔄 Starting documentation generation..."
|
||||||
|
|
||||||
|
if cd "$source_path" 2>/dev/null; then
|
||||||
|
local tool_result=0
|
||||||
|
|
||||||
|
# Store current output path for CLI context
|
||||||
|
export DOC_OUTPUT_PATH="$output_path"
|
||||||
|
|
||||||
|
# Record git HEAD before CLI execution (to detect unwanted auto-commits)
|
||||||
|
local git_head_before=""
|
||||||
|
if git rev-parse --git-dir >/dev/null 2>&1; then
|
||||||
|
git_head_before=$(git rev-parse HEAD 2>/dev/null)
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Execute with selected tool
|
||||||
|
case "$tool" in
|
||||||
|
qwen)
|
||||||
|
if [ "$model" = "coder-model" ]; then
|
||||||
|
qwen -p "$final_prompt" --yolo 2>&1
|
||||||
|
else
|
||||||
|
qwen -p "$final_prompt" -m "$model" --yolo 2>&1
|
||||||
|
fi
|
||||||
|
tool_result=$?
|
||||||
|
;;
|
||||||
|
codex)
|
||||||
|
codex --full-auto exec "$final_prompt" -m "$model" --skip-git-repo-check -s danger-full-access 2>&1
|
||||||
|
tool_result=$?
|
||||||
|
;;
|
||||||
|
gemini)
|
||||||
|
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
|
||||||
|
tool_result=$?
|
||||||
|
;;
|
||||||
|
*)
|
||||||
|
echo " ⚠️ Unknown tool: $tool, defaulting to gemini"
|
||||||
|
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
|
||||||
|
tool_result=$?
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
|
||||||
|
# Move generated files to output directory
|
||||||
|
local docs_created=0
|
||||||
|
local moved_files=""
|
||||||
|
|
||||||
|
if [ $tool_result -eq 0 ]; then
|
||||||
|
if [ "$is_project_level" = true ]; then
|
||||||
|
# Project-level documentation files
|
||||||
|
case "$strategy" in
|
||||||
|
project-readme)
|
||||||
|
if [ -f "README.md" ]; then
|
||||||
|
mv "README.md" "$output_path/README.md" 2>/dev/null && {
|
||||||
|
docs_created=$((docs_created + 1))
|
||||||
|
moved_files+="README.md "
|
||||||
|
}
|
||||||
|
fi
|
||||||
|
;;
|
||||||
|
project-architecture)
|
||||||
|
if [ -f "ARCHITECTURE.md" ]; then
|
||||||
|
mv "ARCHITECTURE.md" "$output_path/ARCHITECTURE.md" 2>/dev/null && {
|
||||||
|
docs_created=$((docs_created + 1))
|
||||||
|
moved_files+="ARCHITECTURE.md "
|
||||||
|
}
|
||||||
|
fi
|
||||||
|
if [ -f "EXAMPLES.md" ]; then
|
||||||
|
mv "EXAMPLES.md" "$output_path/EXAMPLES.md" 2>/dev/null && {
|
||||||
|
docs_created=$((docs_created + 1))
|
||||||
|
moved_files+="EXAMPLES.md "
|
||||||
|
}
|
||||||
|
fi
|
||||||
|
;;
|
||||||
|
http-api)
|
||||||
|
if [ -f "README.md" ]; then
|
||||||
|
mv "README.md" "$output_path/README.md" 2>/dev/null && {
|
||||||
|
docs_created=$((docs_created + 1))
|
||||||
|
moved_files+="api/README.md "
|
||||||
|
}
|
||||||
|
fi
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
else
|
||||||
|
# Module-level documentation files
|
||||||
|
# Check and move API.md if it exists
|
||||||
|
if [ "$folder_type" = "code" ] && [ -f "API.md" ]; then
|
||||||
|
mv "API.md" "$output_path/API.md" 2>/dev/null && {
|
||||||
|
docs_created=$((docs_created + 1))
|
||||||
|
moved_files+="API.md "
|
||||||
|
}
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Check and move README.md if it exists
|
||||||
|
if [ -f "README.md" ]; then
|
||||||
|
mv "README.md" "$output_path/README.md" 2>/dev/null && {
|
||||||
|
docs_created=$((docs_created + 1))
|
||||||
|
moved_files+="README.md "
|
||||||
|
}
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Check if CLI tool auto-committed (and revert if needed)
|
||||||
|
if [ -n "$git_head_before" ]; then
|
||||||
|
local git_head_after=$(git rev-parse HEAD 2>/dev/null)
|
||||||
|
if [ "$git_head_before" != "$git_head_after" ]; then
|
||||||
|
echo " ⚠️ Detected unwanted auto-commit by CLI tool, reverting..."
|
||||||
|
git reset --soft "$git_head_before" 2>/dev/null
|
||||||
|
echo " ✅ Auto-commit reverted (files remain staged)"
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [ $docs_created -gt 0 ]; then
|
||||||
|
local end_time=$(date +%s)
|
||||||
|
local duration=$((end_time - start_time))
|
||||||
|
echo " ✅ Generated $docs_created doc(s) in ${duration}s: $moved_files"
|
||||||
|
cd - > /dev/null
|
||||||
|
return 0
|
||||||
|
else
|
||||||
|
echo " ❌ Documentation generation failed for $source_path"
|
||||||
|
cd - > /dev/null
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
echo " ❌ Cannot access directory: $source_path"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# Execute function if script is run directly
|
||||||
|
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
|
||||||
|
# Show help if no arguments or help requested
|
||||||
|
if [ $# -eq 0 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
|
||||||
|
echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
|
||||||
|
echo ""
|
||||||
|
echo "Module-Level Strategies:"
|
||||||
|
echo " full - Generate docs for all subdirectories with code"
|
||||||
|
echo " single - Generate docs only for current directory"
|
||||||
|
echo ""
|
||||||
|
echo "Project-Level Strategies:"
|
||||||
|
echo " project-readme - Generate project root README.md"
|
||||||
|
echo " project-architecture - Generate ARCHITECTURE.md + EXAMPLES.md"
|
||||||
|
echo " http-api - Generate HTTP API documentation (api/README.md)"
|
||||||
|
echo ""
|
||||||
|
echo "Tools: gemini (default), qwen, codex"
|
||||||
|
echo "Models: Use tool defaults if not specified"
|
||||||
|
echo ""
|
||||||
|
echo "Module Examples:"
|
||||||
|
echo " ./generate_module_docs.sh full ./src/auth myproject"
|
||||||
|
echo " ./generate_module_docs.sh single ./components myproject gemini"
|
||||||
|
echo ""
|
||||||
|
echo "Project Examples:"
|
||||||
|
echo " ./generate_module_docs.sh project-readme . myproject"
|
||||||
|
echo " ./generate_module_docs.sh project-architecture . myproject qwen"
|
||||||
|
echo " ./generate_module_docs.sh http-api . myproject"
|
||||||
|
exit 0
|
||||||
|
fi
|
||||||
|
|
||||||
|
generate_module_docs "$@"
|
||||||
|
fi
|
||||||
@@ -1,4 +1,8 @@
|
|||||||
#!/bin/bash
|
#!/bin/bash
|
||||||
|
# ⚠️ DEPRECATED: This script is deprecated.
|
||||||
|
# Please use: ccw tool exec get_modules_by_depth '{"format":"list","path":"."}' OR ccw tool exec get_modules_by_depth '{}'
|
||||||
|
# This file will be removed in a future version.
|
||||||
|
|
||||||
# Get modules organized by directory depth (deepest first)
|
# Get modules organized by directory depth (deepest first)
|
||||||
# Usage: get_modules_by_depth.sh [format]
|
# Usage: get_modules_by_depth.sh [format]
|
||||||
# format: list|grouped|json (default: list)
|
# format: list|grouped|json (default: list)
|
||||||
@@ -163,4 +167,4 @@ get_modules_by_depth() {
|
|||||||
# Execute function if script is run directly
|
# Execute function if script is run directly
|
||||||
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
|
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
|
||||||
get_modules_by_depth "$@"
|
get_modules_by_depth "$@"
|
||||||
fi
|
fi
|
||||||
|
|||||||
@@ -1,4 +1,8 @@
|
|||||||
#!/bin/bash
|
#!/bin/bash
|
||||||
|
# ⚠️ DEPRECATED: This script is deprecated.
|
||||||
|
# Please use: ccw tool exec ui_generate_preview '{"designPath":"design-run-1","outputDir":"preview"}'
|
||||||
|
# This file will be removed in a future version.
|
||||||
|
|
||||||
#
|
#
|
||||||
# UI Generate Preview v2.0 - Template-Based Preview Generation
|
# UI Generate Preview v2.0 - Template-Based Preview Generation
|
||||||
# Purpose: Generate compare.html and index.html using template substitution
|
# Purpose: Generate compare.html and index.html using template substitution
|
||||||
|
|||||||
@@ -1,4 +1,8 @@
|
|||||||
#!/bin/bash
|
#!/bin/bash
|
||||||
|
# ⚠️ DEPRECATED: This script is deprecated.
|
||||||
|
# Please use: ccw tool exec ui_instantiate_prototypes '{"designPath":"design-run-1","outputDir":"output"}'
|
||||||
|
# This file will be removed in a future version.
|
||||||
|
|
||||||
|
|
||||||
# UI Prototype Instantiation Script with Preview Generation (v3.0 - Auto-detect)
|
# UI Prototype Instantiation Script with Preview Generation (v3.0 - Auto-detect)
|
||||||
# Purpose: Generate S × L × P final prototypes from templates + interactive preview files
|
# Purpose: Generate S × L × P final prototypes from templates + interactive preview files
|
||||||
|
|||||||
@@ -1,4 +1,8 @@
|
|||||||
#!/bin/bash
|
#!/bin/bash
|
||||||
|
# ⚠️ DEPRECATED: This script is deprecated.
|
||||||
|
# Please use: ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"gemini"}'
|
||||||
|
# This file will be removed in a future version.
|
||||||
|
|
||||||
# Update CLAUDE.md for modules with two strategies
|
# Update CLAUDE.md for modules with two strategies
|
||||||
# Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]
|
# Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]
|
||||||
# strategy: single-layer|multi-layer
|
# strategy: single-layer|multi-layer
|
||||||
|
|||||||
@@ -1,13 +1,13 @@
|
|||||||
---
|
---
|
||||||
name: command-guide
|
name: command-guide
|
||||||
description: Workflow command guide for Claude DMS3 (75 commands). Search/browse commands, get next-step recommendations, view documentation, report issues. Triggers "CCW-help", "CCW-issue", "ccw-help", "ccw-issue", "ccw"
|
description: Workflow command guide for Claude Code Workflow (78 commands). Search/browse commands, get next-step recommendations, view documentation, report issues. Triggers "CCW-help", "CCW-issue", "ccw-help", "ccw-issue", "ccw"
|
||||||
allowed-tools: Read, Grep, Glob, AskUserQuestion
|
allowed-tools: Read, Grep, Glob, AskUserQuestion
|
||||||
version: 5.8.0
|
version: 5.8.0
|
||||||
---
|
---
|
||||||
|
|
||||||
# Command Guide Skill
|
# Command Guide Skill
|
||||||
|
|
||||||
Comprehensive command guide for Claude DMS3 workflow system covering 75 commands across 4 categories (workflow, cli, memory, task).
|
Comprehensive command guide for Claude Code Workflow (CCW) system covering 78 commands across 5 categories (workflow, cli, memory, task, general).
|
||||||
|
|
||||||
## 🆕 What's New in v5.8.0
|
## 🆕 What's New in v5.8.0
|
||||||
|
|
||||||
@@ -18,7 +18,6 @@ Comprehensive command guide for Claude DMS3 workflow system covering 75 commands
|
|||||||
- **`/workflow:ui-design:codify-style`** - Extract design tokens from code with automatic file discovery
|
- **`/workflow:ui-design:codify-style`** - Extract design tokens from code with automatic file discovery
|
||||||
- **`/workflow:ui-design:reference-page-generator`** - Generate multi-component reference pages
|
- **`/workflow:ui-design:reference-page-generator`** - Generate multi-component reference pages
|
||||||
- **Workflow**: Design extraction → Token documentation → SKILL package → Easy loading
|
- **Workflow**: Design extraction → Token documentation → SKILL package → Easy loading
|
||||||
- **Benefits**: Consistent design system usage, shareable style references, progressive loading
|
|
||||||
|
|
||||||
**⚡ `/workflow:lite-plan`** - Intelligent Planning & Execution (Testing Phase)
|
**⚡ `/workflow:lite-plan`** - Intelligent Planning & Execution (Testing Phase)
|
||||||
- Dynamic workflow adaptation (smart exploration, adaptive planning, progressive clarification)
|
- Dynamic workflow adaptation (smart exploration, adaptive planning, progressive clarification)
|
||||||
@@ -32,9 +31,9 @@ Comprehensive command guide for Claude DMS3 workflow system covering 75 commands
|
|||||||
- Creates feature-specific SKILL packages for code understanding
|
- Creates feature-specific SKILL packages for code understanding
|
||||||
- Progressive loading (2K → 30K tokens) for efficient context management
|
- Progressive loading (2K → 30K tokens) for efficient context management
|
||||||
|
|
||||||
### Agent Enhancements
|
### Agent
|
||||||
|
|
||||||
- **cli-explore-agent** (New) - Specialized code exploration with Deep Scan mode (Bash + Gemini)
|
- **cli-explore-agent** - Specialized code exploration with Deep Scan mode (Bash + Gemini)
|
||||||
- **cli-planning-agent** - Enhanced task generation with improved context handling
|
- **cli-planning-agent** - Enhanced task generation with improved context handling
|
||||||
- **ui-design-agent** - Major refactoring for better design system extraction
|
- **ui-design-agent** - Major refactoring for better design system extraction
|
||||||
|
|
||||||
@@ -277,13 +276,13 @@ All command metadata is stored in JSON indexes for fast querying:
|
|||||||
|
|
||||||
Complete backup of all command and agent documentation for deep analysis:
|
Complete backup of all command and agent documentation for deep analysis:
|
||||||
|
|
||||||
- **[reference/agents/](reference/agents/)** - 13 agent markdown files with implementation details
|
- **[reference/agents/](reference/agents/)** - 14 agent markdown files with implementation details
|
||||||
- **New in v5.8**: cli-explore-agent (code exploration), cli-planning-agent (enhanced)
|
- **New in v5.8**: cli-explore-agent (code exploration), cli-planning-agent (enhanced)
|
||||||
- **[reference/commands/](reference/commands/)** - 75 command markdown files organized by category
|
- **[reference/commands/](reference/commands/)** - 78 command markdown files organized by category
|
||||||
- `cli/` - CLI tool commands (9 files)
|
- `cli/` - CLI tool commands (10 files) - **New**: document-analysis mode
|
||||||
- `memory/` - Memory management commands (10 files) - **New**: code-map-memory, style-skill-memory
|
- `memory/` - Memory management commands (12 files) - **New**: docs-full-cli, docs-related-cli, code-map-memory, style-skill-memory
|
||||||
- `task/` - Task management commands (4 files)
|
- `task/` - Task management commands (4 files)
|
||||||
- `workflow/` - Workflow commands (50 files) - **New**: lite-plan, ui-design enhancements
|
- `workflow/` - Workflow commands (50 files) - **New**: lite-plan, lite-fix, ui-design enhancements
|
||||||
|
|
||||||
**Installation Path**: `~/.claude/skills/command-guide/` (skill designed for global installation)
|
**Installation Path**: `~/.claude/skills/command-guide/` (skill designed for global installation)
|
||||||
|
|
||||||
@@ -314,13 +313,13 @@ Templates are auto-populated during Mode 5 (Issue Reporting) interaction.
|
|||||||
|
|
||||||
## 📊 System Statistics
|
## 📊 System Statistics
|
||||||
|
|
||||||
- **Total Commands**: 75
|
- **Total Commands**: 78
|
||||||
- **Total Agents**: 13
|
- **Total Agents**: 14
|
||||||
- **Categories**: 4 (workflow: 50, cli: 9, memory: 10, task: 4, general: 2)
|
- **Categories**: 5 (workflow: 50, cli: 10, memory: 12, task: 4, general: 2)
|
||||||
- **Use Cases**: 7 (planning, implementation, testing, documentation, session-management, analysis, ui-design)
|
- **Use Cases**: 7 (planning, implementation, testing, documentation, session-management, analysis, general)
|
||||||
- **Difficulty Levels**: 3 (Beginner, Intermediate, Advanced)
|
- **Difficulty Levels**: 3 (Beginner, Intermediate, Advanced)
|
||||||
- **Essential Commands**: 14
|
- **Essential Commands**: 13
|
||||||
- **Reference Docs**: 88 markdown files (13 agents + 75 commands)
|
- **Reference Docs**: 92 markdown files (14 agents + 78 commands)
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
@@ -386,4 +385,4 @@ This SKILL documentation is kept in sync with command implementations through a
|
|||||||
- 4 issue templates for standardized problem reporting
|
- 4 issue templates for standardized problem reporting
|
||||||
- CLI-assisted complex query analysis with gemini/qwen integration
|
- CLI-assisted complex query analysis with gemini/qwen integration
|
||||||
|
|
||||||
**Maintainer**: Claude DMS3 Team
|
**Maintainer**: CCW Team
|
||||||
|
|||||||
@@ -88,11 +88,7 @@ Tools for combining components and integrating results.
|
|||||||
4. **User selects** 2 preferred layouts (multi-select)
|
4. **User selects** 2 preferred layouts (multi-select)
|
||||||
5. System generates only 4-6 final prototypes (selected combinations)
|
5. System generates only 4-6 final prototypes (selected combinations)
|
||||||
|
|
||||||
**Benefits:**
|
|
||||||
- Reduces unnecessary generation (from 20 to 4-6 prototypes)
|
|
||||||
- Focuses resources on preferred design directions
|
|
||||||
- Saves 70-80% computation time
|
|
||||||
- Better exploration quality
|
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
@@ -169,10 +165,7 @@ Tools for combining components and integrating results.
|
|||||||
- `layout-templates.json` (Structure) - DOM hierarchy, CSS layout rules
|
- `layout-templates.json` (Structure) - DOM hierarchy, CSS layout rules
|
||||||
- `animation-tokens.json` (Motion) - Transitions, keyframes, timing functions
|
- `animation-tokens.json` (Motion) - Transitions, keyframes, timing functions
|
||||||
|
|
||||||
**Benefits:**
|
|
||||||
- Instant re-theming by swapping design tokens
|
|
||||||
- Layout reuse across different visual styles
|
|
||||||
- Independent evolution of style and structure
|
|
||||||
|
|
||||||
### Token-First CSS
|
### Token-First CSS
|
||||||
|
|
||||||
|
|||||||
@@ -1,26 +1,4 @@
|
|||||||
[
|
[
|
||||||
{
|
|
||||||
"name": "analyze",
|
|
||||||
"command": "/cli:analyze",
|
|
||||||
"description": "Read-only codebase analysis using Gemini (default), Qwen, or Codex with auto-pattern detection and template selection",
|
|
||||||
"arguments": "[--tool codex|gemini|qwen] [--enhance] analysis target",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": null,
|
|
||||||
"usage_scenario": "analysis",
|
|
||||||
"difficulty": "Beginner",
|
|
||||||
"file_path": "cli/analyze.md"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "chat",
|
|
||||||
"command": "/cli:chat",
|
|
||||||
"description": "Read-only Q&A interaction with Gemini/Qwen/Codex for codebase questions with automatic context inference",
|
|
||||||
"arguments": "[--tool codex|gemini|qwen] [--enhance] inquiry",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": null,
|
|
||||||
"usage_scenario": "general",
|
|
||||||
"difficulty": "Beginner",
|
|
||||||
"file_path": "cli/chat.md"
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"name": "cli-init",
|
"name": "cli-init",
|
||||||
"command": "/cli:cli-init",
|
"command": "/cli:cli-init",
|
||||||
@@ -32,72 +10,6 @@
|
|||||||
"difficulty": "Intermediate",
|
"difficulty": "Intermediate",
|
||||||
"file_path": "cli/cli-init.md"
|
"file_path": "cli/cli-init.md"
|
||||||
},
|
},
|
||||||
{
|
|
||||||
"name": "codex-execute",
|
|
||||||
"command": "/cli:codex-execute",
|
|
||||||
"description": "Multi-stage Codex execution with automatic task decomposition into grouped subtasks using resume mechanism for context continuity",
|
|
||||||
"arguments": "[--verify-git] task description or task-id",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": null,
|
|
||||||
"usage_scenario": "implementation",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "cli/codex-execute.md"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "discuss-plan",
|
|
||||||
"command": "/cli:discuss-plan",
|
|
||||||
"description": "Multi-round collaborative planning using Gemini, Codex, and Claude synthesis with iterative discussion cycles (read-only, no code changes)",
|
|
||||||
"arguments": "[--topic '...'] [--task-id '...'] [--rounds N]",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": null,
|
|
||||||
"usage_scenario": "planning",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "cli/discuss-plan.md"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "execute",
|
|
||||||
"command": "/cli:execute",
|
|
||||||
"description": "Autonomous code implementation with YOLO auto-approval using Gemini/Qwen/Codex, supports task ID or description input with automatic file pattern detection",
|
|
||||||
"arguments": "[--tool codex|gemini|qwen] [--enhance] description or task-id",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": null,
|
|
||||||
"usage_scenario": "implementation",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "cli/execute.md"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "bug-diagnosis",
|
|
||||||
"command": "/cli:mode:bug-diagnosis",
|
|
||||||
"description": "Read-only bug root cause analysis using Gemini/Qwen/Codex with systematic diagnosis template for fix suggestions",
|
|
||||||
"arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] bug description",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": "mode",
|
|
||||||
"usage_scenario": "analysis",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "cli/mode/bug-diagnosis.md"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "code-analysis",
|
|
||||||
"command": "/cli:mode:code-analysis",
|
|
||||||
"description": "Read-only execution path tracing using Gemini/Qwen/Codex with specialized analysis template for call flow and optimization",
|
|
||||||
"arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] analysis target",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": "mode",
|
|
||||||
"usage_scenario": "general",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "cli/mode/code-analysis.md"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "plan",
|
|
||||||
"command": "/cli:mode:plan",
|
|
||||||
"description": "Read-only architecture planning using Gemini/Qwen/Codex with strategic planning template for modification plans and impact analysis",
|
|
||||||
"arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] topic",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": "mode",
|
|
||||||
"usage_scenario": "planning",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "cli/mode/plan.md"
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"name": "enhance-prompt",
|
"name": "enhance-prompt",
|
||||||
"command": "/enhance-prompt",
|
"command": "/enhance-prompt",
|
||||||
@@ -120,6 +32,28 @@
|
|||||||
"difficulty": "Intermediate",
|
"difficulty": "Intermediate",
|
||||||
"file_path": "memory/code-map-memory.md"
|
"file_path": "memory/code-map-memory.md"
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
"name": "docs-full-cli",
|
||||||
|
"command": "/memory:docs-full-cli",
|
||||||
|
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
|
||||||
|
"arguments": "[path] [--tool <gemini|qwen|codex>]",
|
||||||
|
"category": "memory",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "documentation",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "memory/docs-full-cli.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "docs-related-cli",
|
||||||
|
"command": "/memory:docs-related-cli",
|
||||||
|
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
|
||||||
|
"arguments": "[--tool <gemini|qwen|codex>]",
|
||||||
|
"category": "memory",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "documentation",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "memory/docs-related-cli.md"
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"name": "docs",
|
"name": "docs",
|
||||||
"command": "/memory:docs",
|
"command": "/memory:docs",
|
||||||
@@ -450,6 +384,17 @@
|
|||||||
"difficulty": "Intermediate",
|
"difficulty": "Intermediate",
|
||||||
"file_path": "workflow/lite-execute.md"
|
"file_path": "workflow/lite-execute.md"
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
"name": "lite-fix",
|
||||||
|
"command": "/workflow:lite-fix",
|
||||||
|
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
|
||||||
|
"arguments": "[--hotfix] \\\"bug description or issue reference\\",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "general",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/lite-fix.md"
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"name": "lite-plan",
|
"name": "lite-plan",
|
||||||
"command": "/workflow:lite-plan",
|
"command": "/workflow:lite-plan",
|
||||||
@@ -464,14 +409,58 @@
|
|||||||
{
|
{
|
||||||
"name": "plan",
|
"name": "plan",
|
||||||
"command": "/workflow:plan",
|
"command": "/workflow:plan",
|
||||||
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs with optional CLI auto-execution",
|
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
|
||||||
"arguments": "[--cli-execute] \\\"text description\\\"|file.md",
|
"arguments": "\\\"text description\\\"|file.md",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": null,
|
"subcategory": null,
|
||||||
"usage_scenario": "planning",
|
"usage_scenario": "planning",
|
||||||
"difficulty": "Intermediate",
|
"difficulty": "Intermediate",
|
||||||
"file_path": "workflow/plan.md"
|
"file_path": "workflow/plan.md"
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
"name": "replan",
|
||||||
|
"command": "/workflow:replan",
|
||||||
|
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
|
||||||
|
"arguments": "[--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "planning",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/replan.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "review-fix",
|
||||||
|
"command": "/workflow:review-fix",
|
||||||
|
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
|
||||||
|
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "analysis",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/review-fix.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "review-module-cycle",
|
||||||
|
"command": "/workflow:review-module-cycle",
|
||||||
|
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
|
||||||
|
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "analysis",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/review-module-cycle.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "review-session-cycle",
|
||||||
|
"command": "/workflow:review-session-cycle",
|
||||||
|
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
|
||||||
|
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "session-management",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/review-session-cycle.md"
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"name": "review",
|
"name": "review",
|
||||||
"command": "/workflow:review",
|
"command": "/workflow:review",
|
||||||
@@ -520,29 +509,18 @@
|
|||||||
"name": "start",
|
"name": "start",
|
||||||
"command": "/workflow:session:start",
|
"command": "/workflow:session:start",
|
||||||
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
|
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
|
||||||
"arguments": "[--auto|--new] [optional: task description for new session]",
|
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": "session",
|
"subcategory": "session",
|
||||||
"usage_scenario": "general",
|
"usage_scenario": "general",
|
||||||
"difficulty": "Intermediate",
|
"difficulty": "Intermediate",
|
||||||
"file_path": "workflow/session/start.md"
|
"file_path": "workflow/session/start.md"
|
||||||
},
|
},
|
||||||
{
|
|
||||||
"name": "workflow:status",
|
|
||||||
"command": "/workflow:status",
|
|
||||||
"description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
|
|
||||||
"arguments": "[optional: --project|task-id|--validate]",
|
|
||||||
"category": "workflow",
|
|
||||||
"subcategory": null,
|
|
||||||
"usage_scenario": "session-management",
|
|
||||||
"difficulty": "Beginner",
|
|
||||||
"file_path": "workflow/status.md"
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"name": "tdd-plan",
|
"name": "tdd-plan",
|
||||||
"command": "/workflow:tdd-plan",
|
"command": "/workflow:tdd-plan",
|
||||||
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
|
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
|
||||||
"arguments": "[--cli-execute] \\\"feature description\\\"|file.md",
|
"arguments": "\\\"feature description\\\"|file.md",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": null,
|
"subcategory": null,
|
||||||
"usage_scenario": "planning",
|
"usage_scenario": "planning",
|
||||||
@@ -575,7 +553,7 @@
|
|||||||
"name": "test-fix-gen",
|
"name": "test-fix-gen",
|
||||||
"command": "/workflow:test-fix-gen",
|
"command": "/workflow:test-fix-gen",
|
||||||
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
|
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
|
||||||
"arguments": "[--use-codex] [--cli-execute] (source-session-id | \\\"feature description\\\" | /path/to/file.md)",
|
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": null,
|
"subcategory": null,
|
||||||
"usage_scenario": "testing",
|
"usage_scenario": "testing",
|
||||||
@@ -586,7 +564,7 @@
|
|||||||
"name": "test-gen",
|
"name": "test-gen",
|
||||||
"command": "/workflow:test-gen",
|
"command": "/workflow:test-gen",
|
||||||
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
|
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
|
||||||
"arguments": "[--use-codex] [--cli-execute] source-session-id",
|
"arguments": "source-session-id",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": null,
|
"subcategory": null,
|
||||||
"usage_scenario": "testing",
|
"usage_scenario": "testing",
|
||||||
@@ -618,8 +596,8 @@
|
|||||||
{
|
{
|
||||||
"name": "task-generate-agent",
|
"name": "task-generate-agent",
|
||||||
"command": "/workflow:tools:task-generate-agent",
|
"command": "/workflow:tools:task-generate-agent",
|
||||||
"description": "Autonomous task generation using action-planning-agent with discovery and output phases for workflow planning",
|
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
|
||||||
"arguments": "--session WFS-session-id [--cli-execute]",
|
"arguments": "--session WFS-session-id",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": "tools",
|
"subcategory": "tools",
|
||||||
"usage_scenario": "implementation",
|
"usage_scenario": "implementation",
|
||||||
@@ -630,24 +608,13 @@
|
|||||||
"name": "task-generate-tdd",
|
"name": "task-generate-tdd",
|
||||||
"command": "/workflow:tools:task-generate-tdd",
|
"command": "/workflow:tools:task-generate-tdd",
|
||||||
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
|
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
|
||||||
"arguments": "--session WFS-session-id [--cli-execute]",
|
"arguments": "--session WFS-session-id",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": "tools",
|
"subcategory": "tools",
|
||||||
"usage_scenario": "implementation",
|
"usage_scenario": "implementation",
|
||||||
"difficulty": "Advanced",
|
"difficulty": "Advanced",
|
||||||
"file_path": "workflow/tools/task-generate-tdd.md"
|
"file_path": "workflow/tools/task-generate-tdd.md"
|
||||||
},
|
},
|
||||||
{
|
|
||||||
"name": "task-generate",
|
|
||||||
"command": "/workflow:tools:task-generate",
|
|
||||||
"description": "Generate task JSON files and IMPL_PLAN.md from analysis results using action-planning-agent with artifact integration",
|
|
||||||
"arguments": "--session WFS-session-id [--cli-execute]",
|
|
||||||
"category": "workflow",
|
|
||||||
"subcategory": "tools",
|
|
||||||
"usage_scenario": "implementation",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "workflow/tools/task-generate.md"
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"name": "tdd-coverage-analysis",
|
"name": "tdd-coverage-analysis",
|
||||||
"command": "/workflow:tools:tdd-coverage-analysis",
|
"command": "/workflow:tools:tdd-coverage-analysis",
|
||||||
@@ -662,7 +629,7 @@
|
|||||||
{
|
{
|
||||||
"name": "test-concept-enhanced",
|
"name": "test-concept-enhanced",
|
||||||
"command": "/workflow:tools:test-concept-enhanced",
|
"command": "/workflow:tools:test-concept-enhanced",
|
||||||
"description": "Analyze test requirements and generate test generation strategy using Gemini with test-context package",
|
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
|
||||||
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
|
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": "tools",
|
"subcategory": "tools",
|
||||||
@@ -684,8 +651,8 @@
|
|||||||
{
|
{
|
||||||
"name": "test-task-generate",
|
"name": "test-task-generate",
|
||||||
"command": "/workflow:tools:test-task-generate",
|
"command": "/workflow:tools:test-task-generate",
|
||||||
"description": "Autonomous test-fix task generation using action-planning-agent with test-fix-retest cycle specification and discovery phase",
|
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
|
||||||
"arguments": "[--use-codex] [--cli-execute] --session WFS-test-session-id",
|
"arguments": "--session WFS-test-session-id",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": "tools",
|
"subcategory": "tools",
|
||||||
"usage_scenario": "implementation",
|
"usage_scenario": "implementation",
|
||||||
@@ -772,8 +739,8 @@
|
|||||||
{
|
{
|
||||||
"name": "layout-extract",
|
"name": "layout-extract",
|
||||||
"command": "/workflow:ui-design:layout-extract",
|
"command": "/workflow:ui-design:layout-extract",
|
||||||
"description": "Extract structural layout information from reference images, URLs, or text prompts using Claude analysis with variant generation or refinement mode",
|
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
|
||||||
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--urls \"<list>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
|
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": "ui-design",
|
"subcategory": "ui-design",
|
||||||
"usage_scenario": "general",
|
"usage_scenario": "general",
|
||||||
@@ -795,7 +762,7 @@
|
|||||||
"name": "style-extract",
|
"name": "style-extract",
|
||||||
"command": "/workflow:ui-design:style-extract",
|
"command": "/workflow:ui-design:style-extract",
|
||||||
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
|
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
|
||||||
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--urls \"<list>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
|
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": "ui-design",
|
"subcategory": "ui-design",
|
||||||
"usage_scenario": "general",
|
"usage_scenario": "general",
|
||||||
|
|||||||
@@ -1,28 +1,6 @@
|
|||||||
{
|
{
|
||||||
"cli": {
|
"cli": {
|
||||||
"_root": [
|
"_root": [
|
||||||
{
|
|
||||||
"name": "analyze",
|
|
||||||
"command": "/cli:analyze",
|
|
||||||
"description": "Read-only codebase analysis using Gemini (default), Qwen, or Codex with auto-pattern detection and template selection",
|
|
||||||
"arguments": "[--tool codex|gemini|qwen] [--enhance] analysis target",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": null,
|
|
||||||
"usage_scenario": "analysis",
|
|
||||||
"difficulty": "Beginner",
|
|
||||||
"file_path": "cli/analyze.md"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "chat",
|
|
||||||
"command": "/cli:chat",
|
|
||||||
"description": "Read-only Q&A interaction with Gemini/Qwen/Codex for codebase questions with automatic context inference",
|
|
||||||
"arguments": "[--tool codex|gemini|qwen] [--enhance] inquiry",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": null,
|
|
||||||
"usage_scenario": "general",
|
|
||||||
"difficulty": "Beginner",
|
|
||||||
"file_path": "cli/chat.md"
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"name": "cli-init",
|
"name": "cli-init",
|
||||||
"command": "/cli:cli-init",
|
"command": "/cli:cli-init",
|
||||||
@@ -33,74 +11,6 @@
|
|||||||
"usage_scenario": "general",
|
"usage_scenario": "general",
|
||||||
"difficulty": "Intermediate",
|
"difficulty": "Intermediate",
|
||||||
"file_path": "cli/cli-init.md"
|
"file_path": "cli/cli-init.md"
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "codex-execute",
|
|
||||||
"command": "/cli:codex-execute",
|
|
||||||
"description": "Multi-stage Codex execution with automatic task decomposition into grouped subtasks using resume mechanism for context continuity",
|
|
||||||
"arguments": "[--verify-git] task description or task-id",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": null,
|
|
||||||
"usage_scenario": "implementation",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "cli/codex-execute.md"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "discuss-plan",
|
|
||||||
"command": "/cli:discuss-plan",
|
|
||||||
"description": "Multi-round collaborative planning using Gemini, Codex, and Claude synthesis with iterative discussion cycles (read-only, no code changes)",
|
|
||||||
"arguments": "[--topic '...'] [--task-id '...'] [--rounds N]",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": null,
|
|
||||||
"usage_scenario": "planning",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "cli/discuss-plan.md"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "execute",
|
|
||||||
"command": "/cli:execute",
|
|
||||||
"description": "Autonomous code implementation with YOLO auto-approval using Gemini/Qwen/Codex, supports task ID or description input with automatic file pattern detection",
|
|
||||||
"arguments": "[--tool codex|gemini|qwen] [--enhance] description or task-id",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": null,
|
|
||||||
"usage_scenario": "implementation",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "cli/execute.md"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"mode": [
|
|
||||||
{
|
|
||||||
"name": "bug-diagnosis",
|
|
||||||
"command": "/cli:mode:bug-diagnosis",
|
|
||||||
"description": "Read-only bug root cause analysis using Gemini/Qwen/Codex with systematic diagnosis template for fix suggestions",
|
|
||||||
"arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] bug description",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": "mode",
|
|
||||||
"usage_scenario": "analysis",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "cli/mode/bug-diagnosis.md"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "code-analysis",
|
|
||||||
"command": "/cli:mode:code-analysis",
|
|
||||||
"description": "Read-only execution path tracing using Gemini/Qwen/Codex with specialized analysis template for call flow and optimization",
|
|
||||||
"arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] analysis target",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": "mode",
|
|
||||||
"usage_scenario": "general",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "cli/mode/code-analysis.md"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "plan",
|
|
||||||
"command": "/cli:mode:plan",
|
|
||||||
"description": "Read-only architecture planning using Gemini/Qwen/Codex with strategic planning template for modification plans and impact analysis",
|
|
||||||
"arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] topic",
|
|
||||||
"category": "cli",
|
|
||||||
"subcategory": "mode",
|
|
||||||
"usage_scenario": "planning",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "cli/mode/plan.md"
|
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
@@ -143,6 +53,28 @@
|
|||||||
"difficulty": "Intermediate",
|
"difficulty": "Intermediate",
|
||||||
"file_path": "memory/code-map-memory.md"
|
"file_path": "memory/code-map-memory.md"
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
"name": "docs-full-cli",
|
||||||
|
"command": "/memory:docs-full-cli",
|
||||||
|
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
|
||||||
|
"arguments": "[path] [--tool <gemini|qwen|codex>]",
|
||||||
|
"category": "memory",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "documentation",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "memory/docs-full-cli.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "docs-related-cli",
|
||||||
|
"command": "/memory:docs-related-cli",
|
||||||
|
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
|
||||||
|
"arguments": "[--tool <gemini|qwen|codex>]",
|
||||||
|
"category": "memory",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "documentation",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "memory/docs-related-cli.md"
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"name": "docs",
|
"name": "docs",
|
||||||
"command": "/memory:docs",
|
"command": "/memory:docs",
|
||||||
@@ -338,6 +270,17 @@
|
|||||||
"difficulty": "Intermediate",
|
"difficulty": "Intermediate",
|
||||||
"file_path": "workflow/lite-execute.md"
|
"file_path": "workflow/lite-execute.md"
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
"name": "lite-fix",
|
||||||
|
"command": "/workflow:lite-fix",
|
||||||
|
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
|
||||||
|
"arguments": "[--hotfix] \\\"bug description or issue reference\\",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "general",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/lite-fix.md"
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"name": "lite-plan",
|
"name": "lite-plan",
|
||||||
"command": "/workflow:lite-plan",
|
"command": "/workflow:lite-plan",
|
||||||
@@ -352,14 +295,58 @@
|
|||||||
{
|
{
|
||||||
"name": "plan",
|
"name": "plan",
|
||||||
"command": "/workflow:plan",
|
"command": "/workflow:plan",
|
||||||
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs with optional CLI auto-execution",
|
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
|
||||||
"arguments": "[--cli-execute] \\\"text description\\\"|file.md",
|
"arguments": "\\\"text description\\\"|file.md",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": null,
|
"subcategory": null,
|
||||||
"usage_scenario": "planning",
|
"usage_scenario": "planning",
|
||||||
"difficulty": "Intermediate",
|
"difficulty": "Intermediate",
|
||||||
"file_path": "workflow/plan.md"
|
"file_path": "workflow/plan.md"
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
"name": "replan",
|
||||||
|
"command": "/workflow:replan",
|
||||||
|
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
|
||||||
|
"arguments": "[--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "planning",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/replan.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "review-fix",
|
||||||
|
"command": "/workflow:review-fix",
|
||||||
|
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
|
||||||
|
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "analysis",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/review-fix.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "review-module-cycle",
|
||||||
|
"command": "/workflow:review-module-cycle",
|
||||||
|
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
|
||||||
|
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "analysis",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/review-module-cycle.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "review-session-cycle",
|
||||||
|
"command": "/workflow:review-session-cycle",
|
||||||
|
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
|
||||||
|
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
|
||||||
|
"category": "workflow",
|
||||||
|
"subcategory": null,
|
||||||
|
"usage_scenario": "session-management",
|
||||||
|
"difficulty": "Intermediate",
|
||||||
|
"file_path": "workflow/review-session-cycle.md"
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"name": "review",
|
"name": "review",
|
||||||
"command": "/workflow:review",
|
"command": "/workflow:review",
|
||||||
@@ -371,22 +358,11 @@
|
|||||||
"difficulty": "Intermediate",
|
"difficulty": "Intermediate",
|
||||||
"file_path": "workflow/review.md"
|
"file_path": "workflow/review.md"
|
||||||
},
|
},
|
||||||
{
|
|
||||||
"name": "workflow:status",
|
|
||||||
"command": "/workflow:status",
|
|
||||||
"description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
|
|
||||||
"arguments": "[optional: --project|task-id|--validate]",
|
|
||||||
"category": "workflow",
|
|
||||||
"subcategory": null,
|
|
||||||
"usage_scenario": "session-management",
|
|
||||||
"difficulty": "Beginner",
|
|
||||||
"file_path": "workflow/status.md"
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"name": "tdd-plan",
|
"name": "tdd-plan",
|
||||||
"command": "/workflow:tdd-plan",
|
"command": "/workflow:tdd-plan",
|
||||||
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
|
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
|
||||||
"arguments": "[--cli-execute] \\\"feature description\\\"|file.md",
|
"arguments": "\\\"feature description\\\"|file.md",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": null,
|
"subcategory": null,
|
||||||
"usage_scenario": "planning",
|
"usage_scenario": "planning",
|
||||||
@@ -419,7 +395,7 @@
|
|||||||
"name": "test-fix-gen",
|
"name": "test-fix-gen",
|
||||||
"command": "/workflow:test-fix-gen",
|
"command": "/workflow:test-fix-gen",
|
||||||
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
|
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
|
||||||
"arguments": "[--use-codex] [--cli-execute] (source-session-id | \\\"feature description\\\" | /path/to/file.md)",
|
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": null,
|
"subcategory": null,
|
||||||
"usage_scenario": "testing",
|
"usage_scenario": "testing",
|
||||||
@@ -430,7 +406,7 @@
|
|||||||
"name": "test-gen",
|
"name": "test-gen",
|
||||||
"command": "/workflow:test-gen",
|
"command": "/workflow:test-gen",
|
||||||
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
|
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
|
||||||
"arguments": "[--use-codex] [--cli-execute] source-session-id",
|
"arguments": "source-session-id",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": null,
|
"subcategory": null,
|
||||||
"usage_scenario": "testing",
|
"usage_scenario": "testing",
|
||||||
@@ -610,7 +586,7 @@
|
|||||||
"name": "start",
|
"name": "start",
|
||||||
"command": "/workflow:session:start",
|
"command": "/workflow:session:start",
|
||||||
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
|
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
|
||||||
"arguments": "[--auto|--new] [optional: task description for new session]",
|
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": "session",
|
"subcategory": "session",
|
||||||
"usage_scenario": "general",
|
"usage_scenario": "general",
|
||||||
@@ -644,8 +620,8 @@
|
|||||||
{
|
{
|
||||||
"name": "task-generate-agent",
|
"name": "task-generate-agent",
|
||||||
"command": "/workflow:tools:task-generate-agent",
|
"command": "/workflow:tools:task-generate-agent",
|
||||||
"description": "Autonomous task generation using action-planning-agent with discovery and output phases for workflow planning",
|
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
|
||||||
"arguments": "--session WFS-session-id [--cli-execute]",
|
"arguments": "--session WFS-session-id",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": "tools",
|
"subcategory": "tools",
|
||||||
"usage_scenario": "implementation",
|
"usage_scenario": "implementation",
|
||||||
@@ -656,24 +632,13 @@
|
|||||||
"name": "task-generate-tdd",
|
"name": "task-generate-tdd",
|
||||||
"command": "/workflow:tools:task-generate-tdd",
|
"command": "/workflow:tools:task-generate-tdd",
|
||||||
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
|
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
|
||||||
"arguments": "--session WFS-session-id [--cli-execute]",
|
"arguments": "--session WFS-session-id",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": "tools",
|
"subcategory": "tools",
|
||||||
"usage_scenario": "implementation",
|
"usage_scenario": "implementation",
|
||||||
"difficulty": "Advanced",
|
"difficulty": "Advanced",
|
||||||
"file_path": "workflow/tools/task-generate-tdd.md"
|
"file_path": "workflow/tools/task-generate-tdd.md"
|
||||||
},
|
},
|
||||||
{
|
|
||||||
"name": "task-generate",
|
|
||||||
"command": "/workflow:tools:task-generate",
|
|
||||||
"description": "Generate task JSON files and IMPL_PLAN.md from analysis results using action-planning-agent with artifact integration",
|
|
||||||
"arguments": "--session WFS-session-id [--cli-execute]",
|
|
||||||
"category": "workflow",
|
|
||||||
"subcategory": "tools",
|
|
||||||
"usage_scenario": "implementation",
|
|
||||||
"difficulty": "Intermediate",
|
|
||||||
"file_path": "workflow/tools/task-generate.md"
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"name": "tdd-coverage-analysis",
|
"name": "tdd-coverage-analysis",
|
||||||
"command": "/workflow:tools:tdd-coverage-analysis",
|
"command": "/workflow:tools:tdd-coverage-analysis",
|
||||||
@@ -688,7 +653,7 @@
|
|||||||
{
|
{
|
||||||
"name": "test-concept-enhanced",
|
"name": "test-concept-enhanced",
|
||||||
"command": "/workflow:tools:test-concept-enhanced",
|
"command": "/workflow:tools:test-concept-enhanced",
|
||||||
"description": "Analyze test requirements and generate test generation strategy using Gemini with test-context package",
|
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
|
||||||
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
|
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": "tools",
|
"subcategory": "tools",
|
||||||
@@ -710,8 +675,8 @@
|
|||||||
{
|
{
|
||||||
"name": "test-task-generate",
|
"name": "test-task-generate",
|
||||||
"command": "/workflow:tools:test-task-generate",
|
"command": "/workflow:tools:test-task-generate",
|
||||||
"description": "Autonomous test-fix task generation using action-planning-agent with test-fix-retest cycle specification and discovery phase",
|
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
|
||||||
"arguments": "[--use-codex] [--cli-execute] --session WFS-test-session-id",
|
"arguments": "--session WFS-test-session-id",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": "tools",
|
"subcategory": "tools",
|
||||||
"usage_scenario": "implementation",
|
"usage_scenario": "implementation",
|
||||||
@@ -800,8 +765,8 @@
|
|||||||
{
|
{
|
||||||
"name": "layout-extract",
|
"name": "layout-extract",
|
||||||
"command": "/workflow:ui-design:layout-extract",
|
"command": "/workflow:ui-design:layout-extract",
|
||||||
"description": "Extract structural layout information from reference images, URLs, or text prompts using Claude analysis with variant generation or refinement mode",
|
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
|
||||||
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--urls \"<list>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
|
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": "ui-design",
|
"subcategory": "ui-design",
|
||||||
"usage_scenario": "general",
|
"usage_scenario": "general",
|
||||||
@@ -823,7 +788,7 @@
|
|||||||
"name": "style-extract",
|
"name": "style-extract",
|
||||||
"command": "/workflow:ui-design:style-extract",
|
"command": "/workflow:ui-design:style-extract",
|
||||||
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
|
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
|
||||||
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--urls \"<list>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
|
"arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
|
||||||
"category": "workflow",
|
"category": "workflow",
|
||||||
"subcategory": "ui-design",
|
"subcategory": "ui-design",
|
||||||
"usage_scenario": "general",
|
"usage_scenario": "general",
|
||||||
|
|||||||
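Editorial aside, not part of the commit diff: every catalog entry in the file above follows the same nine-field shape (name, command, description, arguments, category, subcategory, usage_scenario, difficulty, file_path). A minimal sketch of how such a catalog could be loaded and queried is shown below; the file name `commands.json` and the grouping assumptions are illustrative only and are not defined by this repository.

```javascript
// Hypothetical helper for the command-catalog schema shown above.
// Assumption: the catalog is saved as "commands.json", either as a flat array
// or as an object whose values are arrays of entries (as in the grouped file).
const fs = require("fs");

function loadCatalog(path = "commands.json") {
  const parsed = JSON.parse(fs.readFileSync(path, "utf8"));
  // Normalize both layouts into one flat list of entries.
  return Array.isArray(parsed) ? parsed : Object.values(parsed).flat();
}

function byScenario(entries, scenario) {
  // e.g. byScenario(entries, "planning") -> /workflow:plan, /workflow:tdd-plan, ...
  return entries.filter((entry) => entry.usage_scenario === scenario);
}

for (const entry of byScenario(loadCatalog(), "planning")) {
  console.log(`${entry.command} ${entry.arguments} — ${entry.description}`);
}
```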
@@ -1,51 +1,5 @@
{
- "analysis": [
- {
- "name": "analyze",
- "command": "/cli:analyze",
- "description": "Read-only codebase analysis using Gemini (default), Qwen, or Codex with auto-pattern detection and template selection",
- "arguments": "[--tool codex|gemini|qwen] [--enhance] analysis target",
- "category": "cli",
- "subcategory": null,
- "usage_scenario": "analysis",
- "difficulty": "Beginner",
- "file_path": "cli/analyze.md"
- },
- {
- "name": "bug-diagnosis",
- "command": "/cli:mode:bug-diagnosis",
- "description": "Read-only bug root cause analysis using Gemini/Qwen/Codex with systematic diagnosis template for fix suggestions",
- "arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] bug description",
- "category": "cli",
- "subcategory": "mode",
- "usage_scenario": "analysis",
- "difficulty": "Intermediate",
- "file_path": "cli/mode/bug-diagnosis.md"
- },
- {
- "name": "review",
- "command": "/workflow:review",
- "description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
- "arguments": "[--type=security|architecture|action-items|quality] [optional: session-id]",
- "category": "workflow",
- "subcategory": null,
- "usage_scenario": "analysis",
- "difficulty": "Intermediate",
- "file_path": "workflow/review.md"
- }
- ],
"general": [
- {
- "name": "chat",
- "command": "/cli:chat",
- "description": "Read-only Q&A interaction with Gemini/Qwen/Codex for codebase questions with automatic context inference",
- "arguments": "[--tool codex|gemini|qwen] [--enhance] inquiry",
- "category": "cli",
- "subcategory": null,
- "usage_scenario": "general",
- "difficulty": "Beginner",
- "file_path": "cli/chat.md"
- },
{
"name": "cli-init",
"command": "/cli:cli-init",
@@ -57,17 +11,6 @@
"difficulty": "Intermediate",
"file_path": "cli/cli-init.md"
},
- {
- "name": "code-analysis",
- "command": "/cli:mode:code-analysis",
- "description": "Read-only execution path tracing using Gemini/Qwen/Codex with specialized analysis template for call flow and optimization",
- "arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] analysis target",
- "category": "cli",
- "subcategory": "mode",
- "usage_scenario": "general",
- "difficulty": "Intermediate",
- "file_path": "cli/mode/code-analysis.md"
- },
{
"name": "enhance-prompt",
"command": "/enhance-prompt",
@@ -255,6 +198,17 @@
"difficulty": "Intermediate",
"file_path": "workflow/init.md"
},
+ {
+ "name": "lite-fix",
+ "command": "/workflow:lite-fix",
+ "description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
+ "arguments": "[--hotfix] \\\"bug description or issue reference\\",
+ "category": "workflow",
+ "subcategory": null,
+ "usage_scenario": "general",
+ "difficulty": "Intermediate",
+ "file_path": "workflow/lite-fix.md"
+ },
{
"name": "list",
"command": "/workflow:session:list",
@@ -270,7 +224,7 @@
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
- "arguments": "[--auto|--new] [optional: task description for new session]",
+ "arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
@@ -335,8 +289,8 @@
{
"name": "layout-extract",
"command": "/workflow:ui-design:layout-extract",
- "description": "Extract structural layout information from reference images, URLs, or text prompts using Claude analysis with variant generation or refinement mode",
+ "description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
- "arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--urls \"<list>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
+ "arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
@@ -347,7 +301,7 @@
"name": "style-extract",
"command": "/workflow:ui-design:style-extract",
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
- "arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--urls \"<list>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
+ "arguments": "[--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
@@ -355,163 +309,97 @@
"file_path": "workflow/ui-design/style-extract.md"
}
],
- "implementation": [
+ "documentation": [
{
- "name": "codex-execute",
+ "name": "code-map-memory",
- "command": "/cli:codex-execute",
+ "command": "/memory:code-map-memory",
- "description": "Multi-stage Codex execution with automatic task decomposition into grouped subtasks using resume mechanism for context continuity",
+ "description": "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)",
- "arguments": "[--verify-git] task description or task-id",
+ "arguments": "\\\"feature-keyword\\\" [--regenerate] [--tool <gemini|qwen>]",
- "category": "cli",
+ "category": "memory",
"subcategory": null,
- "usage_scenario": "implementation",
+ "usage_scenario": "documentation",
"difficulty": "Intermediate",
- "file_path": "cli/codex-execute.md"
+ "file_path": "memory/code-map-memory.md"
},
{
- "name": "execute",
+ "name": "docs-full-cli",
- "command": "/cli:execute",
+ "command": "/memory:docs-full-cli",
- "description": "Autonomous code implementation with YOLO auto-approval using Gemini/Qwen/Codex, supports task ID or description input with automatic file pattern detection",
+ "description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
- "arguments": "[--tool codex|gemini|qwen] [--enhance] description or task-id",
+ "arguments": "[path] [--tool <gemini|qwen|codex>]",
- "category": "cli",
+ "category": "memory",
"subcategory": null,
- "usage_scenario": "implementation",
+ "usage_scenario": "documentation",
"difficulty": "Intermediate",
- "file_path": "cli/execute.md"
+ "file_path": "memory/docs-full-cli.md"
},
{
- "name": "create",
+ "name": "docs-related-cli",
- "command": "/task:create",
+ "command": "/memory:docs-related-cli",
- "description": "Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis",
+ "description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
- "arguments": "\\\"task title\\",
+ "arguments": "[--tool <gemini|qwen|codex>]",
- "category": "task",
+ "category": "memory",
"subcategory": null,
- "usage_scenario": "implementation",
+ "usage_scenario": "documentation",
"difficulty": "Intermediate",
- "file_path": "task/create.md"
+ "file_path": "memory/docs-related-cli.md"
},
{
- "name": "execute",
+ "name": "docs",
- "command": "/task:execute",
+ "command": "/memory:docs",
- "description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
+ "description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
- "arguments": "task-id",
+ "arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
- "category": "task",
+ "category": "memory",
"subcategory": null,
- "usage_scenario": "implementation",
+ "usage_scenario": "documentation",
"difficulty": "Intermediate",
- "file_path": "task/execute.md"
+ "file_path": "memory/docs.md"
},
{
- "name": "execute",
+ "name": "load-skill-memory",
- "command": "/workflow:execute",
+ "command": "/memory:load-skill-memory",
- "description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
+ "description": "Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords",
- "arguments": "[--resume-session=\\\"session-id\\\"]",
+ "arguments": "[skill_name] \\\"task intent description\\",
- "category": "workflow",
+ "category": "memory",
"subcategory": null,
- "usage_scenario": "implementation",
+ "usage_scenario": "documentation",
"difficulty": "Intermediate",
- "file_path": "workflow/execute.md"
+ "file_path": "memory/load-skill-memory.md"
},
{
- "name": "lite-execute",
+ "name": "skill-memory",
- "command": "/workflow:lite-execute",
+ "command": "/memory:skill-memory",
- "description": "Execute tasks based on in-memory plan, prompt description, or file content",
+ "description": "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)",
- "arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
+ "arguments": "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]",
- "category": "workflow",
+ "category": "memory",
"subcategory": null,
- "usage_scenario": "implementation",
+ "usage_scenario": "documentation",
"difficulty": "Intermediate",
- "file_path": "workflow/lite-execute.md"
+ "file_path": "memory/skill-memory.md"
},
{
- "name": "test-cycle-execute",
+ "name": "style-skill-memory",
- "command": "/workflow:test-cycle-execute",
+ "command": "/memory:style-skill-memory",
- "description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
+ "description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
- "arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
+ "arguments": "[package-name] [--regenerate]",
- "category": "workflow",
+ "category": "memory",
"subcategory": null,
- "usage_scenario": "implementation",
+ "usage_scenario": "documentation",
"difficulty": "Intermediate",
- "file_path": "workflow/test-cycle-execute.md"
+ "file_path": "memory/style-skill-memory.md"
},
{
- "name": "task-generate-agent",
+ "name": "workflow-skill-memory",
- "command": "/workflow:tools:task-generate-agent",
+ "command": "/memory:workflow-skill-memory",
- "description": "Autonomous task generation using action-planning-agent with discovery and output phases for workflow planning",
+ "description": "Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)",
- "arguments": "--session WFS-session-id [--cli-execute]",
+ "arguments": "session <session-id> | all",
- "category": "workflow",
+ "category": "memory",
- "subcategory": "tools",
+ "subcategory": null,
- "usage_scenario": "implementation",
+ "usage_scenario": "documentation",
- "difficulty": "Advanced",
- "file_path": "workflow/tools/task-generate-agent.md"
- },
- {
- "name": "task-generate-tdd",
- "command": "/workflow:tools:task-generate-tdd",
- "description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
- "arguments": "--session WFS-session-id [--cli-execute]",
- "category": "workflow",
- "subcategory": "tools",
- "usage_scenario": "implementation",
- "difficulty": "Advanced",
- "file_path": "workflow/tools/task-generate-tdd.md"
- },
- {
- "name": "task-generate",
- "command": "/workflow:tools:task-generate",
- "description": "Generate task JSON files and IMPL_PLAN.md from analysis results using action-planning-agent with artifact integration",
- "arguments": "--session WFS-session-id [--cli-execute]",
- "category": "workflow",
- "subcategory": "tools",
- "usage_scenario": "implementation",
"difficulty": "Intermediate",
- "file_path": "workflow/tools/task-generate.md"
+ "file_path": "memory/workflow-skill-memory.md"
- },
- {
- "name": "test-task-generate",
- "command": "/workflow:tools:test-task-generate",
- "description": "Autonomous test-fix task generation using action-planning-agent with test-fix-retest cycle specification and discovery phase",
- "arguments": "[--use-codex] [--cli-execute] --session WFS-test-session-id",
- "category": "workflow",
- "subcategory": "tools",
- "usage_scenario": "implementation",
- "difficulty": "Intermediate",
- "file_path": "workflow/tools/test-task-generate.md"
- },
- {
- "name": "generate",
- "command": "/workflow:ui-design:generate",
- "description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
- "arguments": "[--design-id <id>] [--session <id>]",
- "category": "workflow",
- "subcategory": "ui-design",
- "usage_scenario": "implementation",
- "difficulty": "Intermediate",
- "file_path": "workflow/ui-design/generate.md"
- }
],
"planning": [
- {
- "name": "discuss-plan",
- "command": "/cli:discuss-plan",
- "description": "Multi-round collaborative planning using Gemini, Codex, and Claude synthesis with iterative discussion cycles (read-only, no code changes)",
- "arguments": "[--topic '...'] [--task-id '...'] [--rounds N]",
- "category": "cli",
- "subcategory": null,
- "usage_scenario": "planning",
- "difficulty": "Intermediate",
- "file_path": "cli/discuss-plan.md"
- },
- {
- "name": "plan",
- "command": "/cli:mode:plan",
- "description": "Read-only architecture planning using Gemini/Qwen/Codex with strategic planning template for modification plans and impact analysis",
- "arguments": "[--tool codex|gemini|qwen] [--enhance] [--cd path] topic",
- "category": "cli",
- "subcategory": "mode",
- "usage_scenario": "planning",
- "difficulty": "Intermediate",
- "file_path": "cli/mode/plan.md"
- },
{
"name": "breakdown",
"command": "/task:breakdown",
@@ -581,19 +469,30 @@
{
"name": "plan",
"command": "/workflow:plan",
- "description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs with optional CLI auto-execution",
+ "description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
- "arguments": "[--cli-execute] \\\"text description\\\"|file.md",
+ "arguments": "\\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"file_path": "workflow/plan.md"
},
+ {
+ "name": "replan",
+ "command": "/workflow:replan",
+ "description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
+ "arguments": "[--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
+ "category": "workflow",
+ "subcategory": null,
+ "usage_scenario": "planning",
+ "difficulty": "Intermediate",
+ "file_path": "workflow/replan.md"
+ },
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
- "arguments": "[--cli-execute] \\\"feature description\\\"|file.md",
+ "arguments": "\\\"feature description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
@@ -645,75 +544,154 @@
"file_path": "workflow/ui-design/reference-page-generator.md"
}
],
- "documentation": [
+ "implementation": [
{
- "name": "code-map-memory",
+ "name": "create",
- "command": "/memory:code-map-memory",
+ "command": "/task:create",
- "description": "3-phase orchestrator: parse feature keyword → cli-explore-agent analyzes (Deep Scan dual-source) → orchestrator generates Mermaid docs + SKILL package (skips phase 2 if exists)",
+ "description": "Generate task JSON from natural language description with automatic file pattern detection, scope inference, and dependency analysis",
- "arguments": "\\\"feature-keyword\\\" [--regenerate] [--tool <gemini|qwen>]",
+ "arguments": "\\\"task title\\",
- "category": "memory",
+ "category": "task",
"subcategory": null,
- "usage_scenario": "documentation",
+ "usage_scenario": "implementation",
"difficulty": "Intermediate",
- "file_path": "memory/code-map-memory.md"
+ "file_path": "task/create.md"
},
{
- "name": "docs",
+ "name": "execute",
- "command": "/memory:docs",
+ "command": "/task:execute",
- "description": "Plan documentation workflow with dynamic grouping (≤10 docs/task), generates IMPL tasks for parallel module trees, README, ARCHITECTURE, and HTTP API docs",
+ "description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
- "arguments": "[path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]",
+ "arguments": "task-id",
- "category": "memory",
+ "category": "task",
"subcategory": null,
- "usage_scenario": "documentation",
+ "usage_scenario": "implementation",
"difficulty": "Intermediate",
- "file_path": "memory/docs.md"
+ "file_path": "task/execute.md"
},
{
- "name": "load-skill-memory",
+ "name": "execute",
- "command": "/memory:load-skill-memory",
+ "command": "/workflow:execute",
- "description": "Activate SKILL package (auto-detect from paths/keywords or manual) and intelligently load documentation based on task intent keywords",
+ "description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
- "arguments": "[skill_name] \\\"task intent description\\",
+ "arguments": "[--resume-session=\\\"session-id\\\"]",
- "category": "memory",
+ "category": "workflow",
"subcategory": null,
- "usage_scenario": "documentation",
+ "usage_scenario": "implementation",
"difficulty": "Intermediate",
- "file_path": "memory/load-skill-memory.md"
+ "file_path": "workflow/execute.md"
},
{
- "name": "skill-memory",
+ "name": "lite-execute",
- "command": "/memory:skill-memory",
+ "command": "/workflow:lite-execute",
- "description": "4-phase autonomous orchestrator: check docs → /memory:docs planning → /workflow:execute → generate SKILL.md with progressive loading index (skips phases 2-3 if docs exist)",
+ "description": "Execute tasks based on in-memory plan, prompt description, or file content",
- "arguments": "[path] [--tool <gemini|qwen|codex>] [--regenerate] [--mode <full|partial>] [--cli-execute]",
+ "arguments": "[--in-memory] [\\\"task description\\\"|file-path]",
- "category": "memory",
+ "category": "workflow",
"subcategory": null,
- "usage_scenario": "documentation",
+ "usage_scenario": "implementation",
"difficulty": "Intermediate",
- "file_path": "memory/skill-memory.md"
+ "file_path": "workflow/lite-execute.md"
},
{
- "name": "style-skill-memory",
+ "name": "test-cycle-execute",
- "command": "/memory:style-skill-memory",
+ "command": "/workflow:test-cycle-execute",
- "description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
+ "description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
- "arguments": "[package-name] [--regenerate]",
+ "arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
- "category": "memory",
+ "category": "workflow",
"subcategory": null,
- "usage_scenario": "documentation",
+ "usage_scenario": "implementation",
"difficulty": "Intermediate",
- "file_path": "memory/style-skill-memory.md"
+ "file_path": "workflow/test-cycle-execute.md"
},
{
- "name": "workflow-skill-memory",
+ "name": "task-generate-agent",
- "command": "/memory:workflow-skill-memory",
+ "command": "/workflow:tools:task-generate-agent",
- "description": "Process WFS-* archived sessions using universal-executor agents with Gemini analysis to generate workflow-progress SKILL package (sessions-timeline, lessons, conflicts)",
+ "description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
- "arguments": "session <session-id> | all",
+ "arguments": "--session WFS-session-id",
- "category": "memory",
+ "category": "workflow",
- "subcategory": null,
+ "subcategory": "tools",
- "usage_scenario": "documentation",
+ "usage_scenario": "implementation",
+ "difficulty": "Advanced",
+ "file_path": "workflow/tools/task-generate-agent.md"
+ },
+ {
+ "name": "task-generate-tdd",
+ "command": "/workflow:tools:task-generate-tdd",
+ "description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
+ "arguments": "--session WFS-session-id",
+ "category": "workflow",
+ "subcategory": "tools",
+ "usage_scenario": "implementation",
+ "difficulty": "Advanced",
+ "file_path": "workflow/tools/task-generate-tdd.md"
+ },
+ {
+ "name": "test-task-generate",
+ "command": "/workflow:tools:test-task-generate",
+ "description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
+ "arguments": "--session WFS-test-session-id",
+ "category": "workflow",
+ "subcategory": "tools",
+ "usage_scenario": "implementation",
"difficulty": "Intermediate",
- "file_path": "memory/workflow-skill-memory.md"
+ "file_path": "workflow/tools/test-task-generate.md"
+ },
+ {
+ "name": "generate",
+ "command": "/workflow:ui-design:generate",
+ "description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
+ "arguments": "[--design-id <id>] [--session <id>]",
+ "category": "workflow",
+ "subcategory": "ui-design",
+ "usage_scenario": "implementation",
+ "difficulty": "Intermediate",
+ "file_path": "workflow/ui-design/generate.md"
+ }
+ ],
+ "analysis": [
+ {
+ "name": "review-fix",
+ "command": "/workflow:review-fix",
+ "description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
+ "arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
+ "category": "workflow",
+ "subcategory": null,
+ "usage_scenario": "analysis",
+ "difficulty": "Intermediate",
+ "file_path": "workflow/review-fix.md"
+ },
+ {
+ "name": "review-module-cycle",
+ "command": "/workflow:review-module-cycle",
+ "description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
+ "arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
+ "category": "workflow",
+ "subcategory": null,
+ "usage_scenario": "analysis",
+ "difficulty": "Intermediate",
+ "file_path": "workflow/review-module-cycle.md"
+ },
+ {
+ "name": "review",
+ "command": "/workflow:review",
+ "description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
+ "arguments": "[--type=security|architecture|action-items|quality] [optional: session-id]",
+ "category": "workflow",
+ "subcategory": null,
+ "usage_scenario": "analysis",
+ "difficulty": "Intermediate",
+ "file_path": "workflow/review.md"
}
],
"session-management": [
+ {
+ "name": "review-session-cycle",
+ "command": "/workflow:review-session-cycle",
+ "description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
+ "arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
+ "category": "workflow",
+ "subcategory": null,
+ "usage_scenario": "session-management",
+ "difficulty": "Intermediate",
+ "file_path": "workflow/review-session-cycle.md"
+ },
{
"name": "complete",
"command": "/workflow:session:complete",
@@ -735,17 +713,6 @@
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"file_path": "workflow/session/resume.md"
- },
- {
- "name": "workflow:status",
- "command": "/workflow:status",
- "description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
- "arguments": "[optional: --project|task-id|--validate]",
- "category": "workflow",
- "subcategory": null,
- "usage_scenario": "session-management",
- "difficulty": "Beginner",
- "file_path": "workflow/status.md"
}
],
"testing": [
@@ -764,7 +731,7 @@
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
- "arguments": "[--use-codex] [--cli-execute] (source-session-id | \\\"feature description\\\" | /path/to/file.md)",
+ "arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
@@ -775,7 +742,7 @@
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
- "arguments": "[--use-codex] [--cli-execute] source-session-id",
+ "arguments": "source-session-id",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
@@ -796,7 +763,7 @@
{
"name": "test-concept-enhanced",
"command": "/workflow:tools:test-concept-enhanced",
- "description": "Analyze test requirements and generate test generation strategy using Gemini with test-context package",
+ "description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
"category": "workflow",
"subcategory": "tools",
@@ -4,7 +4,6 @@
"workflow:session:start",
"workflow:tools:context-gather",
"workflow:tools:conflict-resolution",
- "workflow:tools:task-generate",
"workflow:tools:task-generate-agent"
],
"next_steps": [
@@ -239,5 +238,70 @@
"next_steps": [
"workflow:ui-design:generate"
]
+ },
+ "workflow:lite-plan": {
+ "calls_internally": [
+ "workflow:lite-execute"
+ ],
+ "next_steps": [
+ "workflow:lite-execute",
+ "workflow:status"
+ ],
+ "alternatives": [
+ "workflow:plan"
+ ],
+ "prerequisites": []
+ },
+ "workflow:lite-fix": {
+ "next_steps": [
+ "workflow:lite-execute",
+ "workflow:status"
+ ],
+ "alternatives": [
+ "workflow:lite-plan"
+ ],
+ "related": [
+ "workflow:test-cycle-execute"
+ ]
+ },
+ "workflow:lite-execute": {
+ "prerequisites": [
+ "workflow:lite-plan",
+ "workflow:lite-fix"
+ ],
+ "related": [
+ "workflow:execute",
+ "workflow:status"
+ ]
+ },
+ "workflow:review-module-cycle": {
+ "next_steps": [
+ "workflow:review-fix"
+ ],
+ "related": [
+ "workflow:review-session-cycle",
+ "workflow:review"
+ ]
+ },
+ "workflow:review-session-cycle": {
+ "prerequisites": [
+ "workflow:execute"
+ ],
+ "next_steps": [
+ "workflow:review-fix"
+ ],
+ "related": [
+ "workflow:review-module-cycle",
+ "workflow:review"
+ ]
+ },
+ "workflow:review-fix": {
+ "prerequisites": [
+ "workflow:review-module-cycle",
+ "workflow:review-session-cycle"
+ ],
+ "related": [
+ "workflow:test-cycle-execute"
+ ]
+ }
}
}
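Editorial aside, not part of the commit diff: the file above records per-command relationships (prerequisites, next_steps, alternatives, related, calls_internally). A hedged sketch of walking that map is shown below; the file name `command-relationships.json` and the assumption that the per-command map has already been pulled out of any top-level wrapper are illustrative only.

```javascript
// Hypothetical traversal of the relationship map added in the hunk above.
const fs = require("fs");

// Assumption: the parsed object (or the relevant sub-object of it) is keyed
// by command name, e.g. "workflow:lite-fix" -> { next_steps: [...], ... }.
const relations = JSON.parse(
  fs.readFileSync("command-relationships.json", "utf8")
);

function nextSteps(commandName) {
  const entry = relations[commandName] || {};
  return entry.next_steps || [];
}

// Per the added data, this would print ["workflow:lite-execute", "workflow:status"].
console.log(nextSteps("workflow:lite-fix"));
```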
@@ -1,9 +1,31 @@
[
+ {
+ "name": "lite-plan",
+ "command": "/workflow:lite-plan",
+ "description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution dispatch to lite-execute after user confirmation",
+ "arguments": "[-e|--explore] \\\"task description\\\"|file.md",
+ "category": "workflow",
+ "subcategory": null,
+ "usage_scenario": "planning",
+ "difficulty": "Intermediate",
+ "file_path": "workflow/lite-plan.md"
+ },
+ {
+ "name": "lite-fix",
+ "command": "/workflow:lite-fix",
+ "description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
+ "arguments": "[--hotfix] \\\"bug description or issue reference\\",
+ "category": "workflow",
+ "subcategory": null,
+ "usage_scenario": "general",
+ "difficulty": "Intermediate",
+ "file_path": "workflow/lite-fix.md"
+ },
{
"name": "plan",
"command": "/workflow:plan",
- "description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs with optional CLI auto-execution",
+ "description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
- "arguments": "[--cli-execute] \\\"text description\\\"|file.md",
+ "arguments": "\\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
@@ -21,22 +43,11 @@
"difficulty": "Intermediate",
"file_path": "workflow/execute.md"
},
- {
- "name": "workflow:status",
- "command": "/workflow:status",
- "description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
- "arguments": "[optional: --project|task-id|--validate]",
- "category": "workflow",
- "subcategory": null,
- "usage_scenario": "session-management",
- "difficulty": "Beginner",
- "file_path": "workflow/status.md"
- },
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
- "arguments": "[--auto|--new] [optional: task description for new session]",
+ "arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
@@ -44,37 +55,15 @@
"file_path": "workflow/session/start.md"
},
{
- "name": "execute",
+ "name": "review-session-cycle",
- "command": "/task:execute",
+ "command": "/workflow:review-session-cycle",
- "description": "Execute task JSON using appropriate agent (@doc-generator/@implementation-agent/@test-agent) with pre-analysis context loading and status tracking",
+ "description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
- "arguments": "task-id",
+ "arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
- "category": "task",
+ "category": "workflow",
"subcategory": null,
- "usage_scenario": "implementation",
+ "usage_scenario": "session-management",
"difficulty": "Intermediate",
- "file_path": "task/execute.md"
+ "file_path": "workflow/review-session-cycle.md"
- },
- {
- "name": "analyze",
- "command": "/cli:analyze",
- "description": "Read-only codebase analysis using Gemini (default), Qwen, or Codex with auto-pattern detection and template selection",
- "arguments": "[--tool codex|gemini|qwen] [--enhance] analysis target",
- "category": "cli",
- "subcategory": null,
- "usage_scenario": "analysis",
- "difficulty": "Beginner",
- "file_path": "cli/analyze.md"
- },
- {
- "name": "chat",
- "command": "/cli:chat",
- "description": "Read-only Q&A interaction with Gemini/Qwen/Codex for codebase questions with automatic context inference",
- "arguments": "[--tool codex|gemini|qwen] [--enhance] inquiry",
- "category": "cli",
- "subcategory": null,
- "usage_scenario": "general",
- "difficulty": "Beginner",
- "file_path": "cli/chat.md"
},
{
"name": "docs",
@@ -109,17 +98,6 @@
"difficulty": "Intermediate",
"file_path": "workflow/action-plan-verify.md"
},
- {
- "name": "review",
- "command": "/workflow:review",
- "description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
- "arguments": "[--type=security|architecture|action-items|quality] [optional: session-id]",
- "category": "workflow",
- "subcategory": null,
- "usage_scenario": "analysis",
- "difficulty": "Intermediate",
- "file_path": "workflow/review.md"
- },
{
"name": "version",
"command": "/version",
@@ -130,16 +108,5 @@
"usage_scenario": "general",
"difficulty": "Beginner",
"file_path": "version.md"
- },
- {
- "name": "enhance-prompt",
- "command": "/enhance-prompt",
- "description": "Enhanced prompt transformation using session memory and intent analysis with --enhance flag detection",
- "arguments": "user input to enhance",
- "category": "general",
- "subcategory": null,
- "usage_scenario": "general",
- "difficulty": "Intermediate",
- "file_path": "enhance-prompt.md"
}
]
@@ -16,107 +16,86 @@ description:
color: yellow
---

- You are a pure execution agent specialized in creating actionable implementation plans. You receive requirements and control flags from the command layer and execute planning tasks without complex decision-making logic.
+ ## Overview

- ## Execution Process
+ **Agent Role**: Pure execution agent that transforms user requirements and brainstorming artifacts into structured, executable implementation plans with quantified deliverables and measurable acceptance criteria. Receives requirements and control flags from the command layer and executes planning tasks without complex decision-making logic.

- ### Input Processing
+ **Core Capabilities**:
- **What you receive:**
+ - Load and synthesize context from multiple sources (session metadata, context packages, brainstorming artifacts)
- - **Execution Context Package**: Structured context from command layer
+ - Generate task JSON files with 6-field schema and artifact integration
+ - Create IMPL_PLAN.md and TODO_LIST.md with proper linking
+ - Support both agent-mode and CLI-execute-mode workflows
+ - Integrate MCP tools for enhanced context gathering

+ **Key Principle**: All task specifications MUST be quantified with explicit counts, enumerations, and measurable acceptance criteria to eliminate ambiguity.

+ ---

+ ## 1. Input & Execution

+ ### 1.1 Input Processing

+ **What you receive from command layer:**
+ - **Session Paths**: File paths to load content autonomously
+   - `session_metadata_path`: Session configuration and user input
+   - `context_package_path`: Context package with brainstorming artifacts catalog
+ - **Metadata**: Simple values
- `session_id`: Workflow session identifier (WFS-[topic])
- - `session_metadata`: Session configuration and state
+ - `mcp_capabilities`: Available MCP tools (exa_code, exa_web, code_index)
- - `analysis_results`: Analysis recommendations and task breakdown
- - `artifacts_inventory`: Detected brainstorming outputs (role analyses, guidance-specification, role analyses)
- - `context_package`: Project context and assets
- - `mcp_capabilities`: Available MCP tools (exa-code, exa-web)
- - `mcp_analysis`: Optional pre-executed MCP analysis results

**Legacy Support** (backward compatibility):
- **pre_analysis configuration**: Multi-step array format with action, template, method fields
- **Control flags**: DEEP_ANALYSIS_REQUIRED, etc.
- **Task requirements**: Direct task description

- ### Execution Flow (Two-Phase)
+ ### 1.2 Execution Flow

+ #### Phase 1: Context Loading & Assembly

+ **Step-by-step execution**:

```
- Phase 1: Context Validation & Enhancement (Discovery Results Provided)
+ 1. Load session metadata → Extract user input
- 1. Receive and validate execution context package
+    - User description: Original task/feature requirements
- 2. Check memory-first rule compliance:
+    - Project scope: User-specified boundaries and goals
-    → session_metadata: Use provided content (from memory or file)
+    - Technical constraints: User-provided technical requirements
-    → analysis_results: Use provided content (from memory or file)
-    → artifacts_inventory: Use provided list (from memory or scan)
+ 2. Load context package → Extract structured context
-    → mcp_analysis: Use provided results (optional)
+    Commands: Read({{context_package_path}})
- 3. Optional MCP enhancement (if not pre-executed):
+    Output: Complete context package object

+ 3. Check existing plan (if resuming)
+    - If IMPL_PLAN.md exists: Read for continuity
+    - If task JSONs exist: Load for context

+ 4. Load brainstorming artifacts (in priority order)
+    a. guidance-specification.md (Highest Priority)
+       → Overall design framework and architectural decisions
+    b. Role analyses (progressive loading: load incrementally by priority)
+       → Load role analysis files one at a time as needed
+       → Reason: Each analysis.md is long; progressive loading prevents token overflow
+    c. Synthesis output (if exists)
+       → Integrated view with clarifications
+    d. Conflict resolution (if conflict_risk ≥ medium)
+       → Review resolved conflicts in artifacts

+ 5. Optional MCP enhancement
→ mcp__exa__get_code_context_exa() for best practices
→ mcp__exa__web_search_exa() for external research
- 4. Assess task complexity (simple/medium/complex) from analysis

- Phase 2: Document Generation (Autonomous Output)
+ 6. Assess task complexity (simple/medium/complex)
- 1. Extract task definitions from analysis_results
- 2. Generate task JSON files with 5-field schema + artifacts
- 3. Create IMPL_PLAN.md with context analysis and artifact references
- 4. Generate TODO_LIST.md with proper structure (▸, [ ], [x])
- 5. Update session state for execution readiness
```
|
|
||||||
### Context Package Usage
|
**MCP Integration** (when `mcp_capabilities` available):
|
||||||
|
|
||||||
**Standard Context Structure**:
|
|
||||||
```javascript
|
||||||
{
|
// Exa Code Context (mcp_capabilities.exa_code = true)
|
||||||
"session_id": "WFS-auth-system",
|
|
||||||
"session_metadata": {
|
|
||||||
"project": "OAuth2 authentication",
|
|
||||||
"type": "medium",
|
|
||||||
"current_phase": "PLAN"
|
|
||||||
},
|
|
||||||
"analysis_results": {
|
|
||||||
"tasks": [
|
|
||||||
{"id": "IMPL-1", "title": "...", "requirements": [...]}
|
|
||||||
],
|
|
||||||
"complexity": "medium",
|
|
||||||
"dependencies": [...]
|
|
||||||
},
|
|
||||||
"artifacts_inventory": {
|
|
||||||
"synthesis_specification": ".workflow/WFS-auth/.brainstorming/role analysis documents",
|
|
||||||
"topic_framework": ".workflow/WFS-auth/.brainstorming/guidance-specification.md",
|
|
||||||
"role_analyses": [
|
|
||||||
".workflow/WFS-auth/.brainstorming/system-architect/analysis.md",
|
|
||||||
".workflow/WFS-auth/.brainstorming/subject-matter-expert/analysis.md"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"context_package": {
|
|
||||||
"assets": [...],
|
|
||||||
"focus_areas": [...]
|
|
||||||
},
|
|
||||||
"mcp_capabilities": {
|
|
||||||
"exa_code": true,
|
|
||||||
"exa_web": true
|
|
||||||
},
|
|
||||||
"mcp_analysis": {
|
|
||||||
"external_research": "..."
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Using Context in Task Generation** (a sketch follows the list):
|
|
||||||
1. **Extract Tasks**: Parse `analysis_results.tasks` array
|
|
||||||
2. **Map Artifacts**: Use `artifacts_inventory` to add artifact references to task.context
|
|
||||||
3. **Assess Complexity**: Use `analysis_results.complexity` for document structure decision
|
|
||||||
4. **Session Paths**: Use `session_id` to construct output paths (.workflow/active/{session_id}/)
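
The steps above can be sketched as a single helper. This is a minimal illustration, assuming `analysisResults`, `artifactsInventory`, and `sessionId` were already loaded as described earlier; the helper name and the default field values are not part of the specification.

```javascript
// Build task JSON skeletons: extract tasks, map artifacts, and derive session-scoped paths.
function buildTaskSkeletons(analysisResults, artifactsInventory, sessionId) {
  const sessionDir = `.workflow/active/${sessionId}`; // all outputs are session-scoped
  return analysisResults.tasks.map((task, index) => ({
    id: `IMPL-${String(index + 1).padStart(3, "0")}`,
    title: task.title,
    status: "pending",
    context_package_path: `${sessionDir}/.process/context-package.json`,
    meta: { type: "feature", agent: "@code-developer", execution_group: null, module: null },
    context: {
      requirements: task.requirements,
      artifacts: [
        { type: "topic_framework", path: artifactsInventory.topic_framework, priority: "high" },
        ...artifactsInventory.role_analyses.map((path) => ({
          type: "individual_role_analysis", path, priority: "medium",
        })),
      ],
    },
    flow_control: { pre_analysis: [], implementation_approach: [], target_files: [] },
  }));
}
```
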
|
|
||||||
|
|
||||||
### MCP Integration Guidelines
|
|
||||||
|
|
||||||
**Exa Code Context** (`mcp_capabilities.exa_code = true`):
|
|
||||||
```javascript
|
|
||||||
// Get best practices and examples
|
|
||||||
mcp__exa__get_code_context_exa(
  query="TypeScript OAuth2 JWT authentication patterns",
  tokensNum="dynamic"
)
```

**Integration in flow_control.pre_analysis**:

```json
{
  "step": "local_codebase_exploration",
  "action": "Explore codebase structure",
  ...
}
```
|
||||||
|
|
||||||
## Core Functions
|
**Context Package Structure** (fields defined by context-search-agent):
|
||||||
|
|
||||||
### 1. Stage Design
|
**Always Present**:
|
||||||
Break work into 3-5 logical implementation stages with:
|
- `metadata.task_description`: User's original task description
|
||||||
- Specific, measurable deliverables
|
- `metadata.keywords`: Extracted technical keywords
|
||||||
- Clear success criteria and test cases
|
- `metadata.complexity`: Task complexity level (simple/medium/complex)
|
||||||
- Dependencies on previous stages
|
- `metadata.session_id`: Workflow session identifier
|
||||||
- Estimated complexity and time requirements
|
- `project_context.architecture_patterns`: Architecture patterns (MVC, Service layer, etc.)
|
||||||
|
- `project_context.tech_stack`: Language, frameworks, libraries
|
||||||
|
- `project_context.coding_conventions`: Naming, error handling, async patterns
|
||||||
|
- `assets.source_code[]`: Relevant existing files with paths and metadata
|
||||||
|
- `assets.documentation[]`: Reference docs (CLAUDE.md, API docs)
|
||||||
|
- `assets.config[]`: Configuration files (package.json, .env.example)
|
||||||
|
- `assets.tests[]`: Test files
|
||||||
|
- `dependencies.internal[]`: Module dependencies
|
||||||
|
- `dependencies.external[]`: Package dependencies
|
||||||
|
- `conflict_detection.risk_level`: Conflict risk (low/medium/high)
|
||||||
|
|
||||||
### 2. Task JSON Generation (5-Field Schema + Artifacts)
|
**Conditionally Present** (check existence before loading):
|
||||||
Generate individual `.task/IMPL-*.json` files with:
|
- `brainstorm_artifacts.guidance_specification`: Overall design framework (if exists)
|
||||||
|
- Check: `brainstorm_artifacts?.guidance_specification?.exists === true`
|
||||||
|
- Content: Use `content` field if present, else load from `path`
|
||||||
|
- `brainstorm_artifacts.role_analyses[]`: Role-specific analyses (if array not empty)
|
||||||
|
- Each role: `role_analyses[i].files[j]` has `path` and `content`
|
||||||
|
- `brainstorm_artifacts.synthesis_output`: Synthesis results (if exists)
|
||||||
|
- Check: `brainstorm_artifacts?.synthesis_output?.exists === true`
|
||||||
|
- Content: Use `content` field if present, else load from `path`
|
||||||
|
- `conflict_detection.affected_modules[]`: Modules with potential conflicts (if risk ≥ medium)
|
||||||
|
|
||||||
|
**Field Access Examples**:
|
||||||
|
```javascript
|
||||||
|
// Always safe - direct field access
|
||||||
|
const techStack = contextPackage.project_context.tech_stack;
|
||||||
|
const riskLevel = contextPackage.conflict_detection.risk_level;
|
||||||
|
const existingCode = contextPackage.assets.source_code; // Array of files
|
||||||
|
|
||||||
|
// Conditional - use content if available, else load from path
|
||||||
|
if (contextPackage.brainstorm_artifacts?.guidance_specification?.exists) {
|
||||||
|
const spec = contextPackage.brainstorm_artifacts.guidance_specification;
|
||||||
|
const content = spec.content || Read(spec.path);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (contextPackage.brainstorm_artifacts?.role_analyses?.length > 0) {
|
||||||
|
// Progressive loading: load role analyses incrementally by priority
|
||||||
|
contextPackage.brainstorm_artifacts.role_analyses.forEach(role => {
|
||||||
|
role.files.forEach(file => {
|
||||||
|
const analysis = file.content || Read(file.path); // Load one at a time
|
||||||
|
});
|
||||||
|
});
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Phase 2: Document Generation
|
||||||
|
|
||||||
|
**Autonomous output generation**:
|
||||||
|
|
||||||
|
```
|
||||||
|
1. Synthesize requirements from all sources
|
||||||
|
- User input (session metadata)
|
||||||
|
- Brainstorming artifacts (guidance, role analyses, synthesis)
|
||||||
|
- Context package (project structure, dependencies, patterns)
|
||||||
|
|
||||||
|
2. Generate task JSON files
|
||||||
|
- Apply 6-field schema (id, title, status, meta, context, flow_control)
|
||||||
|
- Integrate artifacts catalog into context.artifacts array
|
||||||
|
- Add quantified requirements and measurable acceptance criteria
|
||||||
|
|
||||||
|
3. Create IMPL_PLAN.md
|
||||||
|
- Load template: Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)
|
||||||
|
- Follow template structure and validation checklist
|
||||||
|
- Populate all 8 sections with synthesized context
|
||||||
|
- Document CCW workflow phase progression
|
||||||
|
- Update quality gate status
|
||||||
|
|
||||||
|
4. Generate TODO_LIST.md
|
||||||
|
- Flat structure ([ ] for pending, [x] for completed)
|
||||||
|
- Link to task JSONs and summaries
|
||||||
|
|
||||||
|
5. Update session state for execution readiness
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 2. Output Specifications
|
||||||
|
|
||||||
|
### 2.1 Task JSON Schema (6-Field)
|
||||||
|
|
||||||
|
Generate individual `.task/IMPL-*.json` files with the following structure:
|
||||||
|
|
||||||
|
#### Top-Level Fields
|
||||||
|
|
||||||
```json
{
  "id": "IMPL-N",
  "title": "Descriptive task name",
  "status": "pending|active|completed|blocked",
  "context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json"
}
```
|
||||||
|
|
||||||
|
**Field Descriptions**:
|
||||||
|
- `id`: Task identifier
|
||||||
|
- Single module format: `IMPL-N` (e.g., IMPL-001, IMPL-002)
|
||||||
|
- Multi-module format: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1, IMPL-C1)
|
||||||
|
- Prefix: A, B, C... (assigned by module detection order)
|
||||||
|
- Sequence: 1, 2, 3... (per-module increment)
|
||||||
|
- `title`: Descriptive task name summarizing the work
|
||||||
|
- `status`: Task state - `pending` (not started), `active` (in progress), `completed` (done), `blocked` (waiting on dependencies)
|
||||||
|
- `context_package_path`: Path to smart context package containing project structure, dependencies, and brainstorming artifacts catalog
|
||||||
|
|
||||||
|
#### Meta Object
|
||||||
|
|
||||||
|
```json
{
  "meta": {
    "type": "feature|bugfix|refactor|test-gen|test-fix|docs",
    "agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor",
    "execution_group": "parallel-abc123|null",
    "module": "frontend|backend|shared|null"
  }
}
```
|
||||||
|
|
||||||
|
**Field Descriptions**:
|
||||||
|
- `type`: Task category - `feature` (new functionality), `bugfix` (fix defects), `refactor` (restructure code), `test-gen` (generate tests), `test-fix` (fix failing tests), `docs` (documentation)
|
||||||
|
- `agent`: Assigned agent for execution
|
||||||
|
- `execution_group`: Parallelization group ID (tasks with same ID can run concurrently) or `null` for sequential tasks (see the grouping sketch below)
|
||||||
|
- `module`: Module identifier for multi-module projects (e.g., `frontend`, `backend`, `shared`) or `null` for single-module
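
One way `execution_group` could be assigned is sketched below, assuming tasks are listed in dependency order; the wave-based heuristic and the group naming are illustrative assumptions, not a prescribed algorithm.

```javascript
// Group tasks into dependency "waves"; tasks in the same wave can run concurrently.
function assignExecutionGroups(tasks) {
  const wave = new Map(); // task id -> wave number (assumes dependencies appear earlier in the list)
  for (const task of tasks) {
    const deps = task.context.depends_on || [];
    const level = deps.length === 0 ? 0 : 1 + Math.max(...deps.map((d) => wave.get(d) ?? 0));
    wave.set(task.id, level);
  }
  for (const task of tasks) {
    const level = wave.get(task.id);
    const peers = tasks.filter((t) => wave.get(t.id) === level);
    task.meta.execution_group = peers.length > 1 ? `parallel-wave-${level}` : null;
  }
  return tasks;
}
```
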
|
||||||
|
|
||||||
|
**Test Task Extensions** (for type="test-gen" or type="test-fix"):
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"meta": {
|
||||||
|
"type": "test-gen|test-fix",
|
||||||
|
"agent": "@code-developer|@test-fix-agent",
|
||||||
|
"test_framework": "jest|vitest|pytest|junit|mocha",
|
||||||
|
"coverage_target": "80%"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Test-Specific Fields**:
|
||||||
|
- `test_framework`: Existing test framework from project (required for test tasks)
|
||||||
|
- `coverage_target`: Target code coverage percentage (optional)
|
||||||
|
|
||||||
|
**Note**: CLI tool usage for test-fix tasks is now controlled via `flow_control.implementation_approach` steps with `command` fields, not via `meta.use_codex`.
|
||||||
|
|
||||||
|
#### Context Object
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
"context": {
|
"context": {
|
||||||
"requirements": [
|
"requirements": [
|
||||||
"Implement 3 features: [authentication, authorization, session management]",
|
"Implement 3 features: [authentication, authorization, session management]",
|
||||||
"Test coverage >=80%: verify by npm test -- --coverage | grep auth"
|
"Test coverage >=80%: verify by npm test -- --coverage | grep auth"
|
||||||
],
|
],
|
||||||
"depends_on": ["IMPL-N"],
|
"depends_on": ["IMPL-N"],
|
||||||
|
"inherited": {
|
||||||
|
"from": "IMPL-N",
|
||||||
|
"context": ["Authentication system design completed", "JWT strategy defined"]
|
||||||
|
},
|
||||||
|
"shared_context": {
|
||||||
|
"tech_stack": ["Node.js", "TypeScript", "Express"],
|
||||||
|
"auth_strategy": "JWT with refresh tokens",
|
||||||
|
"conventions": ["Follow existing auth patterns in src/auth/legacy/"]
|
||||||
|
},
|
||||||
"artifacts": [
|
"artifacts": [
|
||||||
{
|
{
|
||||||
"type": "synthesis_specification",
|
"type": "synthesis_specification|topic_framework|individual_role_analysis",
|
||||||
|
"source": "brainstorm_clarification|brainstorm_framework|brainstorm_roles",
|
||||||
"path": "{from artifacts_inventory}",
|
"path": "{from artifacts_inventory}",
|
||||||
"priority": "highest"
|
"priority": "highest|high|medium|low",
|
||||||
|
"usage": "Architecture decisions and API specifications",
|
||||||
|
"contains": "role_specific_requirements_and_design"
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
},
|
|
||||||
"flow_control": {
|
|
||||||
"pre_analysis": [
|
|
||||||
{
|
|
||||||
"step": "load_synthesis_specification",
|
|
||||||
"commands": ["bash(ls {path} 2>/dev/null)", "Read({path})"],
|
|
||||||
"output_to": "synthesis_specification",
|
|
||||||
"on_error": "skip_optional"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"step": "mcp_codebase_exploration",
|
|
||||||
"command": "mcp__code-index__find_files() && mcp__code-index__search_code_advanced()",
|
|
||||||
"output_to": "codebase_structure"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"implementation_approach": [
|
|
||||||
{
|
|
||||||
"step": 1,
|
|
||||||
"title": "Load and analyze role analyses",
|
|
||||||
"description": "Load 3 role analysis files and extract quantified requirements",
|
|
||||||
"modification_points": [
|
|
||||||
"Load 3 role analysis files: [system-architect/analysis.md, product-manager/analysis.md, ui-designer/analysis.md]",
|
|
||||||
"Extract 15 requirements from role analyses",
|
|
||||||
"Parse 8 architecture decisions from system-architect analysis"
|
|
||||||
],
|
|
||||||
"logic_flow": [
|
|
||||||
"Read 3 role analyses from artifacts inventory",
|
|
||||||
"Parse architecture decisions (8 total)",
|
|
||||||
"Extract implementation requirements (15 total)",
|
|
||||||
"Build consolidated requirements list"
|
|
||||||
],
|
|
||||||
"depends_on": [],
|
|
||||||
"output": "synthesis_requirements"
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"step": 2,
|
|
||||||
"title": "Implement following specification",
|
|
||||||
"description": "Implement 3 features across 5 files following consolidated role analyses",
|
|
||||||
"modification_points": [
|
|
||||||
"Create 5 new files in src/auth/: [auth.service.ts (180 lines), auth.controller.ts (120 lines), auth.middleware.ts (60 lines), auth.types.ts (40 lines), auth.test.ts (200 lines)]",
|
|
||||||
"Modify 2 functions: [validateUser() in users.service.ts lines 45-60, hashPassword() in utils.ts lines 120-135]",
|
|
||||||
"Implement 3 core features: [JWT authentication, role-based authorization, session management]"
|
|
||||||
],
|
|
||||||
"logic_flow": [
|
|
||||||
"Apply 15 requirements from [synthesis_requirements]",
|
|
||||||
"Implement 3 features across 5 new files (600 total lines)",
|
|
||||||
"Modify 2 existing functions (30 lines total)",
|
|
||||||
"Write 25 test cases covering all features",
|
|
||||||
"Validate against 3 acceptance criteria"
|
|
||||||
],
|
|
||||||
"depends_on": [1],
|
|
||||||
"output": "implementation"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"target_files": [
|
|
||||||
"src/auth/auth.service.ts",
|
|
||||||
"src/auth/auth.controller.ts",
|
|
||||||
"src/auth/auth.middleware.ts",
|
|
||||||
"src/auth/auth.types.ts",
|
|
||||||
"tests/auth/auth.test.ts",
|
|
||||||
"src/users/users.service.ts:validateUser:45-60",
|
|
||||||
"src/utils/utils.ts:hashPassword:120-135"
|
|
||||||
]
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
**Artifact Mapping**:
|
**Field Descriptions**:
|
||||||
|
- `requirements`: **QUANTIFIED** implementation requirements (MUST include explicit counts and enumerated lists, e.g., "5 files: [list]")
|
||||||
|
- `focus_paths`: Target directories/files (concrete paths without wildcards)
|
||||||
|
- `acceptance`: **MEASURABLE** acceptance criteria (MUST include verification commands, e.g., "verify by ls ... | wc -l = N")
|
||||||
|
- `depends_on`: Prerequisite task IDs that must complete before this task starts
|
||||||
|
- `inherited`: Context, patterns, and dependencies passed from parent task
|
||||||
|
- `shared_context`: Tech stack, conventions, and architectural strategies for the task
|
||||||
|
- `artifacts`: Referenced brainstorming outputs with detailed metadata
|
||||||
|
|
||||||
|
**Artifact Mapping** (from context package):
|
||||||
- Use `artifacts_inventory` from context package
|
- Use `artifacts_inventory` from context package
|
||||||
- Highest priority: synthesis_specification
|
- **Priority levels**:
|
||||||
- Medium priority: topic_framework
|
- **Highest**: synthesis_specification (integrated view with clarifications)
|
||||||
- Low priority: role_analyses
|
- **High**: topic_framework (guidance-specification.md)
|
||||||
|
- **Medium**: individual_role_analysis (system-architect, subject-matter-expert, etc.)
|
||||||
|
- **Low**: supporting documentation
|
||||||
|
|
||||||
### 3. Implementation Plan Creation
|
#### Flow Control Object
|
||||||
Generate `IMPL_PLAN.md` at `.workflow/active/{session_id}/IMPL_PLAN.md`:
|
|
||||||
|
|
||||||
**Structure**:
|
**IMPORTANT**: The `pre_analysis` examples below are **reference templates only**. Agent MUST dynamically select, adapt, and expand steps based on actual task requirements. Apply the principle of **"举一反三"** (draw inferences from examples) - use these patterns as inspiration to create task-specific analysis steps.
|
||||||
```markdown
|
|
||||||
---
|
|
||||||
identifier: {session_id}
|
|
||||||
source: "User requirements"
|
|
||||||
analysis: .workflow/active/{session_id}/.process/ANALYSIS_RESULTS.md
|
|
||||||
---
|
|
||||||
|
|
||||||
# Implementation Plan: {Project Title}
|
```json
|
||||||
|
{
|
||||||
## Summary
|
"flow_control": {
|
||||||
{Core requirements and technical approach from analysis_results}
|
"pre_analysis": [...],
|
||||||
|
"implementation_approach": [...],
|
||||||
## Context Analysis
|
"target_files": [...]
|
||||||
- **Project**: {from session_metadata and context_package}
|
}
|
||||||
- **Modules**: {from analysis_results}
|
}
|
||||||
- **Dependencies**: {from context_package}
|
|
||||||
- **Patterns**: {from analysis_results}
|
|
||||||
|
|
||||||
## Brainstorming Artifacts
|
|
||||||
{List from artifacts_inventory with priorities}
|
|
||||||
|
|
||||||
## Task Breakdown
|
|
||||||
- **Task Count**: {from analysis_results.tasks.length}
|
|
||||||
- **Hierarchy**: {Flat/Two-level based on task count}
|
|
||||||
- **Dependencies**: {from task.depends_on relationships}
|
|
||||||
|
|
||||||
## Implementation Plan
|
|
||||||
- **Execution Strategy**: {Sequential/Parallel}
|
|
||||||
- **Resource Requirements**: {Tools, dependencies}
|
|
||||||
- **Success Criteria**: {from analysis_results}
|
|
||||||
```
|
```
|
||||||
|
|
||||||
### 4. TODO List Generation
|
**Test Task Extensions** (for type="test-gen" or type="test-fix"):
|
||||||
Generate `TODO_LIST.md` at `.workflow/active/{session_id}/TODO_LIST.md`:
|
|
||||||
|
|
||||||
**Structure**:
|
```json
|
||||||
|
{
|
||||||
|
"flow_control": {
|
||||||
|
"pre_analysis": [...],
|
||||||
|
"implementation_approach": [...],
|
||||||
|
"target_files": [...],
|
||||||
|
"reusable_test_tools": [
|
||||||
|
"tests/helpers/testUtils.ts",
|
||||||
|
"tests/fixtures/mockData.ts",
|
||||||
|
"tests/setup/testSetup.ts"
|
||||||
|
],
|
||||||
|
"test_commands": {
|
||||||
|
"run_tests": "npm test",
|
||||||
|
"run_coverage": "npm test -- --coverage",
|
||||||
|
"run_specific": "npm test -- {test_file}"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Test-Specific Fields**:
|
||||||
|
- `reusable_test_tools`: List of existing test utility files to reuse (helpers, fixtures, mocks)
|
||||||
|
- `test_commands`: Test execution commands from project config (package.json, pytest.ini)
|
||||||
|
|
||||||
|
##### Pre-Analysis Patterns
|
||||||
|
|
||||||
|
**Dynamic Step Selection Guidelines**:
|
||||||
|
- **Context Loading**: Always include context package and role analysis loading
|
||||||
|
- **Architecture Analysis**: Add module structure analysis for complex projects
|
||||||
|
- **Pattern Discovery**: Use CLI tools (gemini/qwen/bash) based on task complexity and available tools
|
||||||
|
- **Tech-Specific Analysis**: Add language/framework-specific searches for specialized tasks
|
||||||
|
- **MCP Integration**: Utilize MCP tools when available for enhanced context
|
||||||
|
|
||||||
|
**Required Steps** (Always Include):
|
||||||
|
```json
|
||||||
|
[
|
||||||
|
{
|
||||||
|
"step": "load_context_package",
|
||||||
|
"action": "Load context package for artifact paths and smart context",
|
||||||
|
"commands": ["Read({{context_package_path}})"],
|
||||||
|
"output_to": "context_package",
|
||||||
|
"on_error": "fail"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"step": "load_role_analysis_artifacts",
|
||||||
|
"action": "Load role analyses from context-package.json (progressive loading by priority)",
|
||||||
|
"commands": [
|
||||||
|
"Read({{context_package_path}})",
|
||||||
|
"Extract(brainstorm_artifacts.role_analyses[].files[].path)",
|
||||||
|
"Read(extracted paths progressively)"
|
||||||
|
],
|
||||||
|
"output_to": "role_analysis_artifacts",
|
||||||
|
"on_error": "skip_optional"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
```
|
||||||
|
|
||||||
|
**Optional Steps** (Select and adapt based on task needs):
|
||||||
|
|
||||||
|
```json
|
||||||
|
[
|
||||||
|
// Pattern: Project structure analysis
|
||||||
|
{
|
||||||
|
"step": "analyze_project_architecture",
|
||||||
|
"commands": ["bash(ccw tool exec get_modules_by_depth '{}')"],
|
||||||
|
"output_to": "project_architecture"
|
||||||
|
},
|
||||||
|
|
||||||
|
// Pattern: Local search (bash/rg/find)
|
||||||
|
{
|
||||||
|
"step": "search_existing_patterns",
|
||||||
|
"commands": [
|
||||||
|
"bash(rg '[pattern]' --type [lang] -n --max-count [N])",
|
||||||
|
"bash(find . -name '[pattern]' -type f | head -[N])"
|
||||||
|
],
|
||||||
|
"output_to": "search_results"
|
||||||
|
},
|
||||||
|
|
||||||
|
// Pattern: Gemini CLI deep analysis
|
||||||
|
{
|
||||||
|
"step": "gemini_analyze_[aspect]",
|
||||||
|
"command": "bash(cd [path] && gemini -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY')",
|
||||||
|
"output_to": "analysis_result"
|
||||||
|
},
|
||||||
|
|
||||||
|
// Pattern: Qwen CLI analysis (fallback/alternative)
|
||||||
|
{
|
||||||
|
"step": "qwen_analyze_[aspect]",
|
||||||
|
"command": "bash(cd [path] && qwen -p '[similar to gemini pattern]')",
|
||||||
|
"output_to": "analysis_result"
|
||||||
|
},
|
||||||
|
|
||||||
|
// Pattern: MCP tools
|
||||||
|
{
|
||||||
|
"step": "mcp_search_[target]",
|
||||||
|
"command": "mcp__[tool]__[function](parameters)",
|
||||||
|
"output_to": "mcp_results"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
```
|
||||||
|
|
||||||
|
**Step Selection Strategy** (举一反三 Principle):
|
||||||
|
|
||||||
|
The examples above demonstrate **patterns**, not fixed requirements. Agent MUST:
|
||||||
|
|
||||||
|
1. **Always Include** (Required):
|
||||||
|
- `load_context_package` - Essential for all tasks
|
||||||
|
- `load_role_analysis_artifacts` - Critical for accessing brainstorming insights
|
||||||
|
|
||||||
|
2. **Progressive Addition of Analysis Steps**:
|
||||||
|
Include additional analysis steps as needed for comprehensive planning:
|
||||||
|
- **Architecture analysis**: Project structure + architecture patterns
|
||||||
|
- **Execution flow analysis**: Code tracing + quality analysis
|
||||||
|
- **Component analysis**: Component searches + pattern analysis
|
||||||
|
- **Data analysis**: Schema review + endpoint searches
|
||||||
|
- **Security analysis**: Vulnerability scans + security patterns
|
||||||
|
- **Performance analysis**: Bottleneck identification + profiling
|
||||||
|
|
||||||
|
Default: Include progressively based on planning requirements, not limited by task type.
|
||||||
|
|
||||||
|
3. **Tool Selection Strategy**:
|
||||||
|
- **Gemini CLI**: Deep analysis (architecture, execution flow, patterns)
|
||||||
|
- **Qwen CLI**: Fallback or code quality analysis
|
||||||
|
- **Bash/rg/find**: Quick pattern matching and file discovery
|
||||||
|
- **MCP tools**: Semantic search and external research
|
||||||
|
|
||||||
|
4. **Command Composition Patterns**:
|
||||||
|
- **Single command**: `bash([simple_search])`
|
||||||
|
- **Multiple commands**: `["bash([cmd1])", "bash([cmd2])"]`
|
||||||
|
- **CLI analysis**: `bash(cd [path] && gemini -p '[prompt]')`
|
||||||
|
- **MCP integration**: `mcp__[tool]__[function]([params])`
|
||||||
|
|
||||||
|
**Key Principle**: Examples show **structure patterns**, not specific implementations. Agent must create task-appropriate steps dynamically.
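
As one concrete reading of this principle, the sketch below composes a `pre_analysis` array from task characteristics. The trigger conditions (complexity level, advertised MCP capabilities) are assumptions chosen for illustration; only the two required steps are mandated by this document.

```javascript
// Compose pre_analysis dynamically: required steps first, optional patterns as the task demands.
function composePreAnalysis(task, mcpCapabilities) {
  const steps = [
    { step: "load_context_package", commands: ["Read({{context_package_path}})"], output_to: "context_package", on_error: "fail" },
    { step: "load_role_analysis_artifacts", commands: ["Read({{context_package_path}})", "Read(extracted paths progressively)"], output_to: "role_analysis_artifacts", on_error: "skip_optional" },
  ];

  // Illustrative trigger: complex tasks get an architecture overview first.
  if (task.complexity === "complex") {
    steps.push({ step: "analyze_project_architecture", commands: ["bash(ccw tool exec get_modules_by_depth '{}')"], output_to: "project_architecture" });
  }
  // Illustrative trigger: add an MCP search step only when the capability is advertised.
  if (mcpCapabilities && mcpCapabilities.exa_code) {
    steps.push({ step: "mcp_search_best_practices", command: "mcp__exa__get_code_context_exa(query, tokensNum)", output_to: "mcp_results" });
  }
  return steps;
}
```
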
|
||||||
|
|
||||||
|
##### Implementation Approach
|
||||||
|
|
||||||
|
**Execution Modes**:
|
||||||
|
|
||||||
|
The `implementation_approach` supports **two execution modes** based on the presence of the `command` field:
|
||||||
|
|
||||||
|
1. **Default Mode (Agent Execution)** - `command` field **omitted**:
|
||||||
|
- Agent interprets `modification_points` and `logic_flow` autonomously
|
||||||
|
- Direct agent execution with full context awareness
|
||||||
|
- No external tool overhead
|
||||||
|
- **Use for**: Standard implementation tasks where agent capability is sufficient
|
||||||
|
- **Required fields**: `step`, `title`, `description`, `modification_points`, `logic_flow`, `depends_on`, `output`
|
||||||
|
|
||||||
|
2. **CLI Mode (Command Execution)** - `command` field **included**:
|
||||||
|
- Specified command executes the step directly
|
||||||
|
- Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
|
||||||
|
- **Use for**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
|
||||||
|
- **Required fields**: Same as default mode **PLUS** `command`
|
||||||
|
- **Command patterns**:
|
||||||
|
- `bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)`
|
||||||
|
- `bash(codex --full-auto exec '[task]' resume --last --skip-git-repo-check -s danger-full-access)` (multi-step)
|
||||||
|
- `bash(cd [path] && gemini -p '[prompt]' --approval-mode yolo)` (write mode)
|
||||||
|
|
||||||
|
**Semantic CLI Tool Selection**:
|
||||||
|
|
||||||
|
Agent determines CLI tool usage per-step based on user semantics and task nature.
|
||||||
|
|
||||||
|
**Source**: Scan `metadata.task_description` from context-package.json for CLI tool preferences.
|
||||||
|
|
||||||
|
**User Semantic Triggers** (patterns to detect in task_description):
|
||||||
|
- "use Codex/codex" → Add `command` field with Codex CLI
|
||||||
|
- "use Gemini/gemini" → Add `command` field with Gemini CLI
|
||||||
|
- "use Qwen/qwen" → Add `command` field with Qwen CLI
|
||||||
|
- "CLI execution" / "automated" → Infer appropriate CLI tool
|
||||||
|
|
||||||
|
**Task-Based Selection** (when no explicit user preference):
|
||||||
|
- **Implementation/coding**: Codex preferred for autonomous development
|
||||||
|
- **Analysis/exploration**: Gemini preferred for large context analysis
|
||||||
|
- **Documentation**: Gemini/Qwen with write mode (`--approval-mode yolo`)
|
||||||
|
- **Testing**: Depends on complexity - simple=agent, complex=Codex
|
||||||
|
|
||||||
|
**Default Behavior**: Agent always executes the workflow. CLI commands are embedded in `implementation_approach` steps:
|
||||||
|
- Agent orchestrates task execution
|
||||||
|
- When step has `command` field, agent executes it via Bash
|
||||||
|
- When step has no `command` field, agent implements directly
|
||||||
|
- This maintains agent control while leveraging CLI tool power
|
||||||
|
|
||||||
|
**Key Principle**: The `command` field is **optional**. Agent decides based on user semantics and task complexity.
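
A minimal sketch of how the semantic triggers and task-based defaults above might be evaluated; the regular expressions and the mapping are illustrative assumptions, not an official decision table.

```javascript
// Decide whether a step should carry a `command` field, and which CLI tool it should use.
// Returns null when the default mode (agent execution, no command field) applies.
function selectCliTool(taskDescription, taskType) {
  const text = (taskDescription || "").toLowerCase();

  // Explicit user semantics win.
  if (/\bcodex\b/.test(text)) return "codex";
  if (/\bgemini\b/.test(text)) return "gemini";
  if (/\bqwen\b/.test(text)) return "qwen";

  // "CLI execution" / "automated" → infer from the task's nature.
  if (/cli execution|automated/.test(text)) {
    if (taskType === "feature" || taskType === "bugfix") return "codex"; // implementation/coding
    if (taskType === "docs") return "gemini";                            // write mode
    return "gemini";                                                     // analysis-leaning default
  }

  return null;
}
```
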
|
||||||
|
|
||||||
|
**Examples**:
|
||||||
|
|
||||||
|
```json
|
||||||
|
[
|
||||||
|
// === DEFAULT MODE: Agent Execution (no command field) ===
|
||||||
|
{
|
||||||
|
"step": 1,
|
||||||
|
"title": "Load and analyze role analyses",
|
||||||
|
"description": "Load role analysis files and extract quantified requirements",
|
||||||
|
"modification_points": [
|
||||||
|
"Load N role analysis files: [list]",
|
||||||
|
"Extract M requirements from role analyses",
|
||||||
|
"Parse K architecture decisions"
|
||||||
|
],
|
||||||
|
"logic_flow": [
|
||||||
|
"Read role analyses from artifacts inventory",
|
||||||
|
"Parse architecture decisions",
|
||||||
|
"Extract implementation requirements",
|
||||||
|
"Build consolidated requirements list"
|
||||||
|
],
|
||||||
|
"depends_on": [],
|
||||||
|
"output": "synthesis_requirements"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"step": 2,
|
||||||
|
"title": "Implement following specification",
|
||||||
|
"description": "Implement features following consolidated role analyses",
|
||||||
|
"modification_points": [
|
||||||
|
"Create N new files: [list with line counts]",
|
||||||
|
"Modify M functions: [func() in file lines X-Y]",
|
||||||
|
"Implement K core features: [list]"
|
||||||
|
],
|
||||||
|
"logic_flow": [
|
||||||
|
"Apply requirements from [synthesis_requirements]",
|
||||||
|
"Implement features across new files",
|
||||||
|
"Modify existing functions",
|
||||||
|
"Write test cases covering all features",
|
||||||
|
"Validate against acceptance criteria"
|
||||||
|
],
|
||||||
|
"depends_on": [1],
|
||||||
|
"output": "implementation"
|
||||||
|
},
|
||||||
|
|
||||||
|
// === CLI MODE: Command Execution (optional command field) ===
|
||||||
|
{
|
||||||
|
"step": 3,
|
||||||
|
"title": "Execute implementation using CLI tool",
|
||||||
|
"description": "Use Codex/Gemini for complex autonomous execution",
|
||||||
|
"command": "bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)",
|
||||||
|
"modification_points": ["[Same as default mode]"],
|
||||||
|
"logic_flow": ["[Same as default mode]"],
|
||||||
|
"depends_on": [1, 2],
|
||||||
|
"output": "cli_implementation"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
```
|
||||||
|
|
||||||
|
##### Target Files
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"target_files": [
|
||||||
|
"src/auth/auth.service.ts",
|
||||||
|
"src/auth/auth.controller.ts",
|
||||||
|
"src/auth/auth.middleware.ts",
|
||||||
|
"src/auth/auth.types.ts",
|
||||||
|
"tests/auth/auth.test.ts",
|
||||||
|
"src/users/users.service.ts:validateUser:45-60",
|
||||||
|
"src/utils/utils.ts:hashPassword:120-135"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Format** (a parsing sketch follows the list):
|
||||||
|
- New files: `file_path`
|
||||||
|
- Existing files with modifications: `file_path:function_name:line_range`
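
A small parsing sketch for the two entry shapes above; the helper is illustrative and assumes POSIX-style paths without drive-letter colons.

```javascript
// "src/auth/auth.service.ts"                       → new file
// "src/users/users.service.ts:validateUser:45-60"  → existing file, function, line range
function parseTargetFile(entry) {
  const [filePath, functionName, lineRange] = entry.split(":");
  if (!functionName) return { kind: "new", filePath };
  const [start, end] = lineRange.split("-").map(Number);
  return { kind: "modification", filePath, functionName, lines: { start, end } };
}
```
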
|
||||||
|
|
||||||
|
### 2.2 IMPL_PLAN.md Structure
|
||||||
|
|
||||||
|
**Template-Based Generation**:
|
||||||
|
|
||||||
|
```
|
||||||
|
1. Load template: Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)
|
||||||
|
2. Populate all sections following template structure
|
||||||
|
3. Complete template validation checklist
|
||||||
|
4. Generate at .workflow/active/{session_id}/IMPL_PLAN.md
|
||||||
|
```
|
||||||
|
|
||||||
|
**Data Sources**:
|
||||||
|
- Session metadata (user requirements, session_id)
|
||||||
|
- Context package (project structure, dependencies, focus_paths)
|
||||||
|
- Analysis results (technical approach, architecture decisions)
|
||||||
|
- Brainstorming artifacts (role analyses, guidance specifications)
|
||||||
|
|
||||||
|
**Multi-Module Format** (when modules detected):
|
||||||
|
|
||||||
|
When multiple modules are detected (frontend/backend, etc.), organize IMPL_PLAN.md by module:
|
||||||
|
|
||||||
|
```markdown
|
||||||
|
# Implementation Plan
|
||||||
|
|
||||||
|
## Module A: Frontend (N tasks)
|
||||||
|
### IMPL-A1: [Task Title]
|
||||||
|
[Task details...]
|
||||||
|
|
||||||
|
### IMPL-A2: [Task Title]
|
||||||
|
[Task details...]
|
||||||
|
|
||||||
|
## Module B: Backend (N tasks)
|
||||||
|
### IMPL-B1: [Task Title]
|
||||||
|
[Task details...]
|
||||||
|
|
||||||
|
### IMPL-B2: [Task Title]
|
||||||
|
[Task details...]
|
||||||
|
|
||||||
|
## Cross-Module Dependencies
|
||||||
|
- IMPL-A1 → IMPL-B1 (Frontend depends on Backend API)
|
||||||
|
- IMPL-A2 → IMPL-B2 (UI state depends on Backend service)
|
||||||
|
```
|
||||||
|
|
||||||
|
**Cross-Module Dependency Notation** (a resolution sketch follows the list):
|
||||||
|
- During parallel planning, use `CROSS::{module}::{pattern}` format
|
||||||
|
- Example: `depends_on: ["CROSS::B::api-endpoint"]`
|
||||||
|
- Integration phase resolves to actual task IDs: `CROSS::B::api → IMPL-B1`
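
A resolution sketch for the integration phase. Matching a `CROSS::{module}::{pattern}` placeholder to a task by module letter and title keywords is an assumption made for illustration; the actual resolution strategy is up to the planner.

```javascript
// Resolve CROSS::{module}::{pattern} placeholders to concrete task IDs once all modules are planned.
function resolveCrossModuleDeps(tasks) {
  for (const task of tasks) {
    task.context.depends_on = (task.context.depends_on || []).map((dep) => {
      const match = /^CROSS::([^:]+)::(.+)$/.exec(dep);
      if (!match) return dep; // already a concrete task ID
      const [, moduleLetter, pattern] = match;
      const target = tasks.find(
        (t) =>
          (t.meta.module || "").toUpperCase().startsWith(moduleLetter.toUpperCase()) &&
          t.title.toLowerCase().includes(pattern.toLowerCase())
      );
      return target ? target.id : dep; // leave unresolved placeholders for manual review
    });
  }
  return tasks;
}
```
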
|
||||||
|
|
||||||
|
### 2.3 TODO_LIST.md Structure
|
||||||
|
|
||||||
|
Generate at `.workflow/active/{session_id}/TODO_LIST.md`:
|
||||||
|
|
||||||
|
**Single Module Format**:
|
||||||
```markdown
# Tasks: {Session Topic}

## Task Progress
- [ ] **IMPL-001**: [Task Title] → [📋](./.task/IMPL-001.json)
- [ ] **IMPL-002**: [Task Title] → [📋](./.task/IMPL-002.json)
- [x] **IMPL-003**: [Task Title] → [✅](./.summaries/IMPL-003-summary.md)

## Status Legend
- `- [ ]` = Pending task
- `- [x]` = Completed task
```
|
||||||
|
|
||||||
|
**Multi-Module Format** (hierarchical by module):
|
||||||
|
```markdown
|
||||||
|
# Tasks: {Session Topic}
|
||||||
|
|
||||||
|
## Module A (Frontend)
|
||||||
|
- [ ] **IMPL-A1**: [Task Title] → [📋](./.task/IMPL-A1.json)
|
||||||
|
- [ ] **IMPL-A2**: [Task Title] → [📋](./.task/IMPL-A2.json)
|
||||||
|
|
||||||
|
## Module B (Backend)
|
||||||
|
- [ ] **IMPL-B1**: [Task Title] → [📋](./.task/IMPL-B1.json)
|
||||||
|
- [ ] **IMPL-B2**: [Task Title] → [📋](./.task/IMPL-B2.json)
|
||||||
|
|
||||||
|
## Cross-Module Dependencies
|
||||||
|
- IMPL-A1 → IMPL-B1 (Frontend depends on Backend API)
|
||||||
|
|
||||||
|
## Status Legend
|
||||||
|
- `- [ ]` = Pending task
|
||||||
|
- `- [x]` = Completed task
|
||||||
```
|
```
|
||||||
|
|
||||||
**Linking Rules**:
- Todo items → task JSON: `[📋](./.task/IMPL-XXX.json)`
- Completed tasks → summaries: `[✅](./.summaries/IMPL-XXX-summary.md)`
- Consistent ID schemes: `IMPL-N` (single) or `IMPL-{prefix}{seq}` (multi-module)
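
A rendering sketch that applies these linking rules when emitting TODO_LIST.md entries; the helper itself is illustrative.

```javascript
// Pending tasks link to their task JSON; completed tasks link to their summary.
function renderTodoLine(task) {
  return task.status === "completed"
    ? `- [x] **${task.id}**: ${task.title} → [✅](./.summaries/${task.id}-summary.md)`
    : `- [ ] **${task.id}**: ${task.title} → [📋](./.task/${task.id}.json)`;
}

// renderTodoLine({ id: "IMPL-001", title: "JWT authentication", status: "pending" })
// → "- [ ] **IMPL-001**: JWT authentication → [📋](./.task/IMPL-001.json)"
```
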
|
||||||
|
|
||||||
|
### 2.4 Complexity & Structure Selection
|
||||||
|
|
||||||
|
|
||||||
Use `analysis_results.complexity` or task count to determine structure:

**Single Module Mode**:
- **Simple Tasks** (≤5 tasks): Flat structure
- **Medium Tasks** (6-12 tasks): Flat structure
- **Complex Tasks** (>12 tasks): Re-scope required (maximum 12 tasks hard limit)

**Multi-Module Mode** (N+1 parallel planning):
- **Per-module limit**: ≤9 tasks per module
- **Total limit**: Sum of all module tasks ≤27 (3 modules × 9 tasks)
- **Task ID format**: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
- **Structure**: Hierarchical by module in IMPL_PLAN.md and TODO_LIST.md

**Multi-Module Detection Triggers** (a selection sketch follows this list):
- Explicit frontend/backend separation (`src/frontend`, `src/backend`)
- Monorepo structure (`packages/*`, `apps/*`)
- Context-package dependency clustering (2+ distinct module groups)
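
The selection sketch below combines the task-count limits and module triggers above; the input shape (module names plus per-module task counts) is an assumption for illustration.

```javascript
// Choose the document structure and assign per-module task ID prefixes (A, B, C... by detection order).
function selectStructure(moduleNames, taskCounts) {
  if (moduleNames.length <= 1) {
    const total = taskCounts[0] || 0;
    if (total > 12) return { mode: "re-scope", reason: "maximum 12 tasks hard limit exceeded" };
    return { mode: "single-module", idFormat: "IMPL-001 ... IMPL-0NN" };
  }
  const total = taskCounts.reduce((a, b) => a + b, 0);
  if (taskCounts.some((n) => n > 9) || total > 27) {
    return { mode: "re-scope", reason: "per-module limit is 9 tasks, total limit is 27" };
  }
  const prefixes = moduleNames.map((name, i) => ({ module: name, prefix: String.fromCharCode(65 + i) }));
  return { mode: "multi-module", prefixes }; // task IDs become IMPL-A1, IMPL-B1, ...
}

// selectStructure(["frontend", "backend"], [4, 5]) → { mode: "multi-module", prefixes: [...] }
```
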
|
||||||
|
|
||||||

---

## 3. Quality Standards

### 3.1 Quantification Requirements (MANDATORY)
|
||||||
|
|
||||||
**Purpose**: Eliminate ambiguity by enforcing explicit counts and enumerations in all task specifications.
|
**Purpose**: Eliminate ambiguity by enforcing explicit counts and enumerations in all task specifications.
|
||||||
|
|
||||||
- [ ] Each implementation step has its own acceptance criteria
|
- [ ] Each implementation step has its own acceptance criteria
|
||||||
|
|
||||||
**Examples**:
- GOOD: `"Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]"`
- BAD: `"Implement new commands"`
- GOOD: `"5 files created: verify by ls .claude/commands/*.md | wc -l = 5"`
- BAD: `"All commands implemented successfully"`
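
The GOOD/BAD distinction can be checked mechanically before finalizing task JSONs. The regular expressions below are a rough approximation of the rule (explicit count plus an enumerated list; a named verification command in acceptance criteria), not an official validator.

```javascript
// Rough self-checks for the quantification requirements.
function isQuantifiedRequirement(requirement) {
  // e.g. "Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]"
  return /\b\d+\b/.test(requirement) && /\[[^\]]+\]/.test(requirement);
}

function isMeasurableAcceptance(criterion) {
  // e.g. "5 files created: verify by ls .claude/commands/*.md | wc -l = 5"
  return /verify by\s+\S+/.test(criterion);
}

// isQuantifiedRequirement("Implement new commands")                 → false
// isQuantifiedRequirement("Implement 5 commands: [a, b, c, d, e]")  → true
```
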
|
||||||
|
|
||||||
### 3.2 Planning & Organization Standards

**Planning Principles**:
- Each stage produces working, testable code
- Clear success criteria for each deliverable
- Dependencies clearly identified between stages
- Incremental progress over big bangs
|
||||||
|
|
||||||
**File Organization**:
- Session naming: `WFS-[topic-slug]`
- Task IDs:
  - Single module: `IMPL-N` (e.g., IMPL-001, IMPL-002)
  - Multi-module: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
- Directory structure: flat task organization (all tasks in `.task/`)
|
||||||
|
|
||||||
**Document Standards**:
- Proper linking between documents
- Consistent navigation and references

### 3.3 Guidelines Checklist
|
||||||
|
|
||||||
**ALWAYS:**
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md
- Use provided context package: Extract all information from structured context
- Respect memory-first rule: Use provided content (already loaded from memory/file)
- Follow 6-field schema: All task JSONs must have id, title, status, context_package_path, meta, context, flow_control
- Map artifacts: Use artifacts_inventory to populate task.context.artifacts array
- Add MCP integration: Include MCP tool steps in flow_control.pre_analysis when capabilities available
- Validate task count: Maximum 12 tasks hard limit, request re-scope if exceeded
- Use session paths: Construct all paths using provided session_id
- Link documents properly: Use correct linking format (📋 for JSON, ✅ for summaries)
- Run validation checklist: Verify all quantification requirements before finalizing task JSONs
- Apply the 举一反三 principle (draw inferences from examples): Adapt pre-analysis patterns to task-specific needs dynamically
- Follow template validation: Complete IMPL_PLAN.md template validation checklist before finalization

**NEVER:**
- Load files directly (use provided context package instead)
- Assume default locations (always use session_id in paths)
- Create circular dependencies in task.depends_on
- Exceed 12 tasks without re-scoping
- Skip artifact integration when artifacts_inventory is provided
- Ignore MCP capabilities when available
- Use fixed pre-analysis steps without task-specific adaptation
|
||||||
|
|
|
||||||
**1. Project Structure**:
|
**1. Project Structure**:
|
||||||
```bash
|
```bash
|
||||||
ccw tool exec get_modules_by_depth '{}'
|
||||||
```
|
```
|
||||||
|
|
||||||
**2. Content Search**:
|
**2. Content Search**:
|
||||||
|
---
|
---
|
||||||
name: cli-explore-agent
|
name: cli-explore-agent
|
||||||
description: |
|
description: |
|
||||||
Read-only code exploration and structural analysis agent specialized in module discovery, dependency mapping, and architecture comprehension using dual-source strategy (Bash rapid scan + Gemini CLI semantic analysis).
|
Read-only code exploration agent with dual-source analysis strategy (Bash + Gemini CLI).
|
||||||
|
Orchestrates 4-phase workflow: Task Understanding → Analysis Execution → Schema Validation → Output Generation
|
||||||
Core capabilities:
|
|
||||||
- Multi-layer module structure analysis (directory tree, file patterns, symbol discovery)
|
|
||||||
- Dependency graph construction (imports, exports, call chains, circular detection)
|
|
||||||
- Pattern discovery (design patterns, architectural styles, naming conventions)
|
|
||||||
- Code provenance tracing (definition lookup, usage sites, call hierarchies)
|
|
||||||
- Architecture summarization (component relationships, integration points, data flows)
|
|
||||||
|
|
||||||
Integration points:
|
|
||||||
- Gemini CLI: Deep semantic understanding, design intent analysis, non-standard pattern discovery
|
|
||||||
- Qwen CLI: Fallback for Gemini, specialized for code analysis tasks
|
|
||||||
- Bash tools: rg, tree, find, get_modules_by_depth.sh for rapid structural scanning
|
|
||||||
- MCP Code Index: Optional integration for enhanced file discovery and search
|
|
||||||
|
|
||||||
Key optimizations:
|
|
||||||
- Dual-source strategy: Bash structural scan (speed) + Gemini semantic analysis (depth)
|
|
||||||
- Language-agnostic analysis with syntax-aware extensions
|
|
||||||
- Progressive disclosure: Quick overview → detailed analysis → dependency deep-dive
|
|
||||||
- Context-aware filtering based on task requirements
|
|
||||||
|
|
||||||
color: yellow
|
color: yellow
|
||||||
---
|
---
|
||||||
|
|
||||||
You are a specialized **CLI Exploration Agent** that executes read-only code analysis tasks autonomously to discover module structures, map dependencies, and understand architectural patterns.
|
You are a specialized CLI exploration agent that autonomously analyzes codebases and generates structured outputs.
|
||||||
|
|
||||||
## Agent Operation
|
## Core Capabilities
|
||||||
|
|
||||||
### Execution Flow
|
1. **Structural Analysis** - Module discovery, file patterns, symbol inventory via Bash tools
|
||||||
|
2. **Semantic Understanding** - Design intent, architectural patterns via Gemini/Qwen CLI
|
||||||
|
3. **Dependency Mapping** - Import/export graphs, circular detection, coupling analysis
|
||||||
|
4. **Structured Output** - Schema-compliant JSON generation with validation
|
||||||
|
|
||||||
```
|
**Analysis Modes**:
|
||||||
STEP 1: Parse Analysis Request
|
- `quick-scan` → Bash only (10-30s)
|
||||||
→ Extract task intent (structure, dependencies, patterns, provenance, summary)
|
- `deep-scan` → Bash + Gemini dual-source (2-5min)
|
||||||
→ Identify analysis mode (quick-scan | deep-scan | dependency-map)
|
- `dependency-map` → Graph construction (3-8min)
|
||||||
→ Determine scope (directory, file patterns, language filters)
|
|
||||||
|
|
||||||
STEP 2: Initialize Analysis Environment
|
|
||||||
→ Set project root and working directory
|
|
||||||
→ Validate access to required tools (rg, tree, find, Gemini CLI)
|
|
||||||
→ Optional: Initialize Code Index MCP for enhanced discovery
|
|
||||||
→ Load project context (CLAUDE.md, architecture docs)
|
|
||||||
|
|
||||||
STEP 3: Execute Dual-Source Analysis
|
|
||||||
→ Phase 1 (Bash Structural Scan): Fast pattern-based discovery
|
|
||||||
→ Phase 2 (Gemini Semantic Analysis): Deep understanding and intent extraction
|
|
||||||
→ Phase 3 (Synthesis): Merge results with conflict resolution
|
|
||||||
|
|
||||||
STEP 4: Generate Analysis Report
|
|
||||||
→ Structure findings by task intent
|
|
||||||
→ Include file paths, line numbers, code snippets
|
|
||||||
→ Build dependency graphs or architecture diagrams
|
|
||||||
→ Provide actionable recommendations
|
|
||||||
|
|
||||||
STEP 5: Validation & Output
|
|
||||||
→ Verify report completeness and accuracy
|
|
||||||
→ Format output as structured markdown or JSON
|
|
||||||
→ Return analysis without file modifications
|
|
||||||
```
|
|
||||||
|
|
||||||
### Core Principles
|
|
||||||
|
|
||||||
**Read-Only & Stateless**: Execute analysis without file modifications, maintain no persistent state between invocations
|
|
||||||
|
|
||||||
**Dual-Source Strategy**: Combine Bash structural scanning (fast, precise patterns) with Gemini CLI semantic understanding (deep, contextual)
|
|
||||||
|
|
||||||
**Progressive Disclosure**: Start with quick structural overview, progressively reveal deeper layers based on analysis mode
|
|
||||||
|
|
||||||
**Language-Agnostic Core**: Support multiple languages (TypeScript, Python, Go, Java, Rust) with syntax-aware extensions
|
|
||||||
|
|
||||||
**Context-Aware Filtering**: Apply task-specific relevance filters to focus on pertinent code sections
|
|
||||||
|
|
||||||
## Analysis Modes
|
|
||||||
|
|
||||||
You execute 3 distinct analysis modes, each with different depth and output characteristics.
|
|
||||||
|
|
||||||
### Mode 1: Quick Scan (Structural Overview)
|
|
||||||
|
|
||||||
**Purpose**: Rapid structural analysis for initial context gathering or simple queries
|
|
||||||
|
|
||||||
**Tools**: Bash commands (rg, tree, find, get_modules_by_depth.sh)
|
|
||||||
|
|
||||||
**Process**:
|
|
||||||
1. **Project Structure**: Run get_modules_by_depth.sh for hierarchical overview
|
|
||||||
2. **File Discovery**: Use find/glob patterns to locate relevant files
|
|
||||||
3. **Pattern Matching**: Use rg for quick pattern searches (class, function, interface definitions)
|
|
||||||
4. **Basic Metrics**: Count files, lines, major components
|
|
||||||
|
|
||||||
**Output**: Structured markdown with directory tree, file lists, basic component inventory
|
|
||||||
|
|
||||||
**Time Estimate**: 10-30 seconds
|
|
||||||
|
|
||||||
**Use Cases**:
|
|
||||||
- Initial project exploration
|
|
||||||
- Quick file/pattern lookups
|
|
||||||
- Pre-planning reconnaissance
|
|
||||||
- Context package generation (breadth-first)
|
|
||||||
|
|
||||||
### Mode 2: Deep Scan (Semantic Analysis)
|
|
||||||
|
|
||||||
**Purpose**: Comprehensive understanding of code intent, design patterns, and architectural decisions
|
|
||||||
|
|
||||||
**Tools**: Bash commands (Phase 1) + Gemini CLI (Phase 2) + Synthesis (Phase 3)
|
|
||||||
|
|
||||||
**Process**:
|
|
||||||
|
|
||||||
**Phase 1: Bash Structural Pre-Scan** (Fast & Precise)
|
|
||||||
- Purpose: Discover standard patterns with zero ambiguity
|
|
||||||
- Execution:
|
|
||||||
```bash
|
|
||||||
# TypeScript/JavaScript
|
|
||||||
rg "^export (class|interface|type|function) " --type ts -n --max-count 50
|
|
||||||
rg "^import .* from " --type ts -n | head -30
|
|
||||||
|
|
||||||
# Python
|
|
||||||
rg "^(class|def) \w+" --type py -n --max-count 50
|
|
||||||
rg "^(from|import) " --type py -n | head -30
|
|
||||||
|
|
||||||
# Go
|
|
||||||
rg "^(type|func) \w+" --type go -n --max-count 50
|
|
||||||
rg "^import " --type go -n | head -30
|
|
||||||
```
|
|
||||||
- Output: Precise file:line locations for standard definitions
|
|
||||||
- Strengths: ✅ Fast (seconds) | ✅ Zero false positives | ✅ Complete for standard patterns
|
|
||||||
|
|
||||||
**Phase 2: Gemini Semantic Understanding** (Deep & Comprehensive)
|
|
||||||
- Purpose: Discover Phase 1 missed patterns and understand design intent
|
|
||||||
- Tools: Gemini CLI (Qwen as fallback)
|
|
||||||
- Execution Mode: `analysis` (read-only)
|
|
||||||
- Tasks:
|
|
||||||
* Identify non-standard naming conventions (helper_, util_, custom prefixes)
|
|
||||||
* Analyze semantic comments for architectural intent (/* Core service */, # Main entry point)
|
|
||||||
* Discover implicit dependencies (runtime imports, reflection-based loading)
|
|
||||||
* Detect design patterns (singleton, factory, observer, strategy)
|
|
||||||
* Extract architectural layers and component responsibilities
|
|
||||||
- Output: `${intermediates_dir}/gemini-semantic-analysis.json`
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"bash_missed_patterns": [
|
|
||||||
{
|
|
||||||
"pattern_type": "non_standard_export",
|
|
||||||
"location": "src/services/helper_auth.ts:45",
|
|
||||||
"naming_convention": "helper_ prefix pattern",
|
|
||||||
"confidence": "high"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"design_intent_summary": "Layered architecture with service-repository pattern",
|
|
||||||
"architectural_patterns": ["MVC", "Dependency Injection", "Repository Pattern"],
|
|
||||||
"implicit_dependencies": ["Config loaded via environment", "Logger injected at runtime"],
|
|
||||||
"recommendations": ["Standardize naming to match project conventions"]
|
|
||||||
}
|
|
||||||
```
|
|
||||||
- Strengths: ✅ Discovers hidden patterns | ✅ Understands intent | ✅ Finds non-standard code
|
|
||||||
|
|
||||||
**Phase 3: Dual-Source Synthesis** (Best of Both)
|
|
||||||
- Merge Bash (precise locations) + Gemini (semantic understanding)
|
|
||||||
- Strategy:
|
|
||||||
* Standard patterns: Use Bash results (file:line precision)
|
|
||||||
* Supplementary discoveries: Adopt Gemini findings
|
|
||||||
* Conflicting interpretations: Use Gemini semantic context for resolution
|
|
||||||
- Validation: Cross-reference both sources for completeness
|
|
||||||
- Attribution: Mark each finding as "bash-discovered" or "gemini-discovered"
|
|
||||||
|
|
||||||
**Output**: Comprehensive analysis report with architectural insights, design patterns, code intent
|
|
||||||
|
|
||||||
**Time Estimate**: 2-5 minutes
|
|
||||||
|
|
||||||
**Use Cases**:
|
|
||||||
- Architecture review and refactoring planning
|
|
||||||
- Understanding unfamiliar codebase sections
|
|
||||||
- Pattern discovery for standardization
|
|
||||||
- Pre-implementation deep-dive
|
|
||||||
|
|
||||||
### Mode 3: Dependency Map (Relationship Analysis)
|
|
||||||
|
|
||||||
**Purpose**: Build complete dependency graphs with import/export chains and circular dependency detection
|
|
||||||
|
|
||||||
**Tools**: Bash + Gemini CLI + Graph construction logic
|
|
||||||
|
|
||||||
**Process**:
|
|
||||||
1. **Direct Dependencies** (Bash):
|
|
||||||
```bash
|
|
||||||
# Extract all imports
|
|
||||||
rg "^import .* from ['\"](.+)['\"]" --type ts -o -r '$1' -n
|
|
||||||
|
|
||||||
# Extract all exports
|
|
||||||
rg "^export .* (class|function|const|type|interface) (\w+)" --type ts -o -r '$2' -n
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Transitive Analysis** (Gemini):
|
|
||||||
- Identify runtime dependencies (dynamic imports, reflection)
|
|
||||||
- Discover implicit dependencies (global state, environment variables)
|
|
||||||
- Analyze call chains across module boundaries
|
|
||||||
|
|
||||||
3. **Graph Construction**:
|
|
||||||
- Build directed graph: nodes (files/modules), edges (dependencies)
|
|
||||||
- Detect circular dependencies with cycle detection algorithm (see the sketch after this list)
|
|
||||||
- Calculate metrics: in-degree, out-degree, centrality
|
|
||||||
- Identify architectural layers (presentation, business logic, data access)
|
|
||||||
|
|
||||||
4. **Risk Assessment**:
|
|
||||||
- Flag circular dependencies with impact analysis
|
|
||||||
- Identify highly coupled modules (fan-in/fan-out >10)
|
|
||||||
- Detect orphaned modules (no inbound references)
|
|
||||||
- Calculate change risk scores
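
A minimal sketch of the cycle detection referenced in step 3, assuming the import data has already been reduced to an adjacency map of file paths; the data shape and function name are illustrative.

```javascript
// Detect circular dependencies in a module graph such as { "a.ts": ["b.ts"], "b.ts": ["a.ts"] }.
function findCycles(graph) {
  const cycles = [];
  const state = new Map(); // node -> "visiting" | "done"

  function visit(node, path) {
    if (state.get(node) === "done") return;
    if (state.get(node) === "visiting") {
      cycles.push([...path.slice(path.indexOf(node)), node]); // close the loop for reporting
      return;
    }
    state.set(node, "visiting");
    for (const dep of graph[node] || []) visit(dep, [...path, node]);
    state.set(node, "done");
  }

  for (const node of Object.keys(graph)) visit(node, []);
  return cycles;
}

// findCycles({ "a.ts": ["b.ts"], "b.ts": ["a.ts"] }) → [["a.ts", "b.ts", "a.ts"]]
```
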

**Output**: Dependency graph (JSON/DOT format) + risk assessment report

**Time Estimate**: 3-8 minutes (depends on project size)

**Use Cases**:
- Refactoring impact analysis
- Module extraction planning
- Circular dependency resolution
- Architecture optimization

## Tool Integration

### Bash Structural Tools

**get_modules_by_depth.sh**:
- Purpose: Generate hierarchical project structure
- Usage: `bash ~/.claude/scripts/get_modules_by_depth.sh`
- Output: Multi-level directory tree with depth indicators

**rg (ripgrep)**:
- Purpose: Fast content search with regex support
- Common patterns:
  ```bash
  # Find class definitions
  rg "^(export )?class \w+" --type ts -n

  # Find function definitions
  rg "^(export )?(function|const) \w+\s*=" --type ts -n

  # Find imports
  rg "^import .* from" --type ts -n

  # Find usage sites
  rg "\bfunctionName\(" --type ts -n -C 2
  ```

**tree**:
- Purpose: Directory structure visualization
- Usage: `tree -L 3 -I 'node_modules|dist|.git'`

**find**:
- Purpose: File discovery by name patterns
- Usage: `find . -name "*.ts" -type f | grep -v node_modules`

### Gemini CLI (Primary Semantic Analysis)

**Command Template**:
```bash
cd [target_directory] && gemini -p "
PURPOSE: [Analysis objective - what to discover and why]
TASK:
• [Specific analysis task 1]
• [Specific analysis task 2]
• [Specific analysis task 3]
MODE: analysis
CONTEXT: @**/* | Memory: [Previous findings, related modules, architectural context]
EXPECTED: [Report format, key insights, specific deliverables]
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on [scope constraints] | analysis=READ-ONLY
" -m gemini-2.5-pro
```

**Use Cases**:
- Non-standard pattern discovery
- Design intent extraction
- Architectural layer identification
- Code smell detection

**Fallback**: Qwen CLI with same command structure
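
A hedged sketch of how that fallback could be wired in a wrapper script — the `gemini` and `qwen` binary names are assumed to be on `PATH`, and the dispatch logic below is illustrative rather than the agent's actual implementation:

```bash
# Pick the first available semantic-analysis CLI; degrade to bash-only otherwise.
if command -v gemini >/dev/null 2>&1; then
  ANALYSIS_CLI="gemini"
elif command -v qwen >/dev/null 2>&1; then
  ANALYSIS_CLI="qwen"
else
  ANALYSIS_CLI=""   # bash-only mode: report degraded capabilities in the output
fi
```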

### MCP Code Index (Optional Enhancement)

**Tools**:
- `mcp__code-index__set_project_path(path)` - Initialize index
- `mcp__code-index__find_files(pattern)` - File discovery
- `mcp__code-index__search_code_advanced(pattern, file_pattern, regex)` - Content search
- `mcp__code-index__get_file_summary(file_path)` - File structure analysis

**Integration Strategy**: Use as the primary discovery tool when available; fall back to bash/rg otherwise

## Output Formats

### Structural Overview Report

```markdown
# Code Structure Analysis: {Module/Directory Name}

## Project Structure
{Output from get_modules_by_depth.sh}

## File Inventory
- **Total Files**: {count}
- **Primary Language**: {language}
- **Key Directories**:
  - `src/`: {brief description}
  - `tests/`: {brief description}

## Component Discovery
### Classes ({count})
- {ClassName} - {file_path}:{line_number} - {brief description}

### Functions ({count})
- {functionName} - {file_path}:{line_number} - {brief description}

### Interfaces/Types ({count})
- {TypeName} - {file_path}:{line_number} - {brief description}

## Analysis Summary
- **Complexity**: {low|medium|high}
- **Architecture Style**: {pattern name}
- **Key Patterns**: {list}
```
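
The counts in this template can be filled without a Gemini pass by piping the structural commands listed earlier through `wc -l`; the patterns below are the TypeScript ones from the tool section and would need adapting for other languages:

```bash
find . -name "*.ts" -type f | grep -v node_modules | wc -l        # Total Files
rg "^(export )?class \w+" --type ts -n | wc -l                    # Classes ({count})
rg "^(export )?(function|const) \w+\s*=" --type ts -n | wc -l     # Functions ({count})
```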

### Semantic Analysis Report

```markdown
# Deep Code Analysis: {Module/Directory Name}

## Executive Summary
{High-level findings from Gemini semantic analysis}

## Architectural Patterns
- **Primary Pattern**: {pattern name}
- **Layer Structure**: {layers identified}
- **Design Intent**: {extracted from comments/structure}

## Dual-Source Findings

### Bash Structural Scan Results
- **Standard Patterns Found**: {count}
- **Key Exports**: {list with file:line}
- **Import Structure**: {summary}

### Gemini Semantic Discoveries
- **Non-Standard Patterns**: {list with explanations}
- **Implicit Dependencies**: {list}
- **Design Intent Summary**: {paragraph}
- **Recommendations**: {list}

### Synthesis
{Merged understanding with attributed sources}

## Code Inventory (Attributed)
### Classes
- {ClassName} [{bash-discovered|gemini-discovered}]
  - Location: {file}:{line}
  - Purpose: {from semantic analysis}
  - Pattern: {design pattern if applicable}

### Functions
- {functionName} [{source}]
  - Location: {file}:{line}
  - Role: {from semantic analysis}
  - Callers: {list if known}

## Actionable Insights
1. {Finding with recommendation}
2. {Finding with recommendation}
```

### Dependency Map Report

```json
{
  "analysis_metadata": {
    "project_root": "/path/to/project",
    "timestamp": "2025-01-25T10:30:00Z",
    "analysis_mode": "dependency-map",
    "languages": ["typescript"]
  },
  "dependency_graph": {
    "nodes": [
      {
        "id": "src/auth/service.ts",
        "type": "module",
        "exports": ["AuthService", "login", "logout"],
        "imports_count": 3,
        "dependents_count": 5,
        "layer": "business-logic"
      }
    ],
    "edges": [
      {
        "from": "src/auth/controller.ts",
        "to": "src/auth/service.ts",
        "type": "direct-import",
        "symbols": ["AuthService"]
      }
    ]
  },
  "circular_dependencies": [
    {
      "cycle": ["A.ts", "B.ts", "C.ts", "A.ts"],
      "risk_level": "high",
      "impact": "Refactoring A.ts requires changes to B.ts and C.ts"
    }
  ],
  "risk_assessment": {
    "high_coupling": [
      {
        "module": "src/utils/helpers.ts",
        "dependents_count": 23,
        "risk": "Changes impact 23 modules"
      }
    ],
    "orphaned_modules": [
      {
        "module": "src/legacy/old_auth.ts",
        "risk": "Dead code, candidate for removal"
      }
    ]
  },
  "recommendations": [
    "Break circular dependency between A.ts and B.ts by introducing interface abstraction",
    "Refactor helpers.ts to reduce coupling (split into domain-specific utilities)"
  ]
}
```
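
Since the report is also meant to ship in DOT form, one possible conversion over the JSON above — the `dependency-map.json` filename is a placeholder, and Graphviz (`dot`) is assumed to be installed for the final render:

```bash
# Turn dependency_graph.edges into a Graphviz digraph.
jq -r '.dependency_graph.edges[] | "  \"\(.from)\" -> \"\(.to)\";"' dependency-map.json \
  | { echo "digraph deps {"; cat; echo "}"; } > deps.dot
dot -Tsvg deps.dot -o deps.svg
```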

## Execution Patterns

### Pattern 1: Quick Project Reconnaissance

**Trigger**: User asks "What's the structure of X module?" or "Where is X defined?"

**Execution**:
```
1. Run get_modules_by_depth.sh for structural overview
2. Use rg to find definitions: rg "class|function|interface X" -n
3. Generate structural overview report
4. Return markdown report without Gemini analysis
```

**Output**: Structural Overview Report
**Time**: <30 seconds

### Pattern 2: Architecture Deep-Dive

**Trigger**: User asks "How does X work?" or "Explain the architecture of X"

**Execution**:
```
1. Phase 1 (Bash): Scan for standard patterns (classes, functions, imports)
2. Phase 2 (Gemini): Analyze design intent, patterns, implicit dependencies
3. Phase 3 (Synthesis): Merge results with attribution
4. Generate semantic analysis report with architectural insights
```

**Output**: Semantic Analysis Report
**Time**: 2-5 minutes

### Pattern 3: Refactoring Impact Analysis

**Trigger**: User asks "What depends on X?" or "Impact of changing X?"

**Execution**:
```
1. Build dependency graph using rg for direct dependencies
2. Use Gemini to discover runtime/implicit dependencies
3. Detect circular dependencies and high-coupling modules
4. Calculate change risk scores
5. Generate dependency map report with recommendations
```
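
A rough way to surface the high-coupling candidates in step 3, sketched under the assumption that imports use relative specifiers; the counts approximate fan-in and are not the final risk scores (the `utils/helpers` fragment is a placeholder, not a known project path):

```bash
# Most-imported relative specifiers ≈ highest fan-in modules.
rg "^import .* from ['\"](\.[^'\"]+)['\"]" --type ts -o -r '$1' --no-filename \
  | sort | uniq -c | sort -rn | head -15

# Fan-in for one suspect module.
rg "from ['\"].*utils/helpers['\"]" --type ts -l | wc -l
```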

**Output**: Dependency Map Report (JSON + Markdown summary)
**Time**: 3-8 minutes

## Quality Assurance

### Validation Checks

**Completeness**:
- ✅ All requested analysis objectives addressed
- ✅ Key components inventoried with file:line locations
- ✅ Dual-source strategy applied (Bash + Gemini) for deep-scan mode
- ✅ Findings attributed to discovery source (bash/gemini)

**Accuracy**:
- ✅ File paths verified (exist and accessible)
- ✅ Line numbers accurate (cross-referenced with actual files)
- ✅ Code snippets match source (no fabrication)
- ✅ Dependency relationships validated (bidirectional checks)

**Actionability**:
- ✅ Recommendations specific and implementable
- ✅ Risk assessments quantified (low/medium/high with metrics)
- ✅ Next steps clearly defined
- ✅ No ambiguous findings (everything has file:line context)

### Error Recovery

**Common Issues**:
1. **Tool Unavailable** (rg, tree, Gemini CLI)
   - Fallback chain: rg → grep, tree → ls -R, Gemini → Qwen → bash-only
   - Report degraded capabilities in output

2. **Access Denied** (permissions, missing directories)
   - Skip inaccessible paths with warning
   - Continue analysis with available files

3. **Timeout** (large projects, slow Gemini response)
   - Implement progressive timeouts: Quick scan (30s), Deep scan (5min), Dependency map (10min)
   - Return partial results with timeout notification

4. **Ambiguous Patterns** (conflicting interpretations)
   - Use Gemini semantic analysis as tiebreaker
   - Document uncertainty in report with attribution
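
One way the progressive timeouts could be enforced, assuming GNU coreutils `timeout` is available; the `$ANALYSIS_PROMPT` variable is a placeholder and only the first two tiers are shown:

```bash
timeout 30s bash ~/.claude/scripts/get_modules_by_depth.sh            # quick scan tier
timeout 300s gemini -p "$ANALYSIS_PROMPT" -m gemini-2.5-pro \
  || echo "Deep scan timed out - returning partial results" >&2       # deep scan tier
```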

## Available Tools & Services

This agent can leverage the following tools to enhance analysis:

**Context Search Agent** (`context-search-agent`):
- **Use Case**: Get project-wide context before analysis
- **When to use**: Need comprehensive project understanding beyond file structure
- **Integration**: Call context-search-agent first, then use results to guide exploration

**MCP Tools** (Code Index):
- **Use Case**: Enhanced file discovery and search capabilities
- **When to use**: Large codebases requiring fast pattern discovery
- **Integration**: Prefer Code Index MCP when available; fall back to rg/bash tools

## Key Reminders

### ALWAYS

**Analysis Integrity**: ✅ Read-only operations | ✅ No file modifications | ✅ No state persistence | ✅ Verify file paths before reporting

**Dual-Source Strategy** (Deep-Scan Mode): ✅ Execute Bash scan first (Phase 1) | ✅ Run Gemini analysis (Phase 2) | ✅ Synthesize with attribution (Phase 3) | ✅ Cross-validate findings

**Tool Chain**: ✅ Prefer Code Index MCP when available | ✅ Fall back to rg/bash tools | ✅ Use Gemini CLI for semantic analysis (Qwen as fallback) | ✅ Handle tool unavailability gracefully

**Output Standards**: ✅ Include file:line locations | ✅ Attribute findings to source (bash/gemini) | ✅ Provide actionable recommendations | ✅ Use standardized report formats

**Mode Selection**: ✅ Match mode to task intent (quick-scan for simple queries, deep-scan for architecture, dependency-map for refactoring) | ✅ Communicate mode choice to user

### NEVER

**File Operations**: ❌ Modify files | ❌ Create/delete files | ❌ Execute write operations | ❌ Run build/test commands that change state

**Analysis Scope**: ❌ Exceed requested scope | ❌ Analyze unrelated modules | ❌ Include irrelevant findings | ❌ Mix multiple unrelated queries

**Output Quality**: ❌ Fabricate code snippets | ❌ Guess file locations | ❌ Report unverified dependencies | ❌ Provide ambiguous recommendations without context

**Tool Usage**: ❌ Skip Bash scan in deep-scan mode | ❌ Use Gemini for quick-scan mode (overkill) | ❌ Ignore fallback chain when tool fails | ❌ Proceed with incomplete tool setup
---

## Command Templates by Language

### TypeScript/JavaScript

```bash
# Quick structural scan
rg "^export (class|interface|type|function|const) " --type ts -n

# Find component definitions (React)
rg "^export (default )?(function|const) \w+.*=.*\(" --type tsx -n

# Find imports
rg "^import .* from ['\"](.+)['\"]" --type ts -o -r '$1'

# Find test files
find . -name "*.test.ts" -o -name "*.spec.ts" | grep -v node_modules
```

### Python

```bash
# Find class definitions
rg "^class \w+.*:" --type py -n

# Find function definitions
rg "^def \w+\(" --type py -n

# Find imports
rg "^(from .* import|import )" --type py -n

# Find test files
find . -name "test_*.py" -o -name "*_test.py"
```

### Go

```bash
# Find type definitions
rg "^type \w+ (struct|interface)" --type go -n

# Find function definitions
rg "^func (\(\w+ \*?\w+\) )?\w+\(" --type go -n

# Find imports
rg "^import \(" --type go -A 10

# Find test files
find . -name "*_test.go"
```

### Java

```bash
# Find class definitions
rg "^(public |private |protected )?(class|interface|enum) \w+" --type java -n

# Find method definitions
rg "^\s+(public |private |protected ).*\w+\(.*\)" --type java -n

# Find imports
rg "^import .*;" --type java -n

# Find test files
find . -name "*Test.java" -o -name "*Tests.java"
```

---

## 4-Phase Execution Workflow

```
Phase 1: Task Understanding
  ↓ Parse prompt for: analysis scope, output requirements, schema path
Phase 2: Analysis Execution
  ↓ Bash structural scan + Gemini semantic analysis (based on mode)
Phase 3: Schema Validation (MANDATORY if schema specified)
  ↓ Read schema → Extract EXACT field names → Validate structure
Phase 4: Output Generation
  ↓ Agent report + File output (strictly schema-compliant)
```

---

## Phase 1: Task Understanding

**Extract from prompt**:
- Analysis target and scope
- Analysis mode (quick-scan / deep-scan / dependency-map)
- Output file path (if specified)
- Schema file path (if specified)
- Additional requirements and constraints

**Determine analysis depth from prompt keywords**:
- Quick lookup, structure overview → quick-scan
- Deep analysis, design intent, architecture → deep-scan
- Dependencies, impact analysis, coupling → dependency-map

---

## Phase 2: Analysis Execution

### Available Tools

- `Read()` - Load package.json, requirements.txt, pyproject.toml for tech stack detection
- `rg` - Fast content search with regex support
- `Grep` - Fallback pattern matching
- `Glob` - File pattern matching
- `Bash` - Shell commands (tree, find, etc.)

### Bash Structural Scan

```bash
# Project structure
ccw tool exec get_modules_by_depth '{}'

# Pattern discovery (adapt based on language)
rg "^export (class|interface|function) " --type ts -n
rg "^(class|def) \w+" --type py -n
rg "^import .* from " -n | head -30
```

### Gemini Semantic Analysis (deep-scan, dependency-map)

```bash
cd {dir} && gemini -p "
PURPOSE: {from prompt}
TASK: {from prompt}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {from prompt}
RULES: {from prompt, if template specified} | analysis=READ-ONLY
"
```

**Fallback Chain**: Gemini → Qwen → Codex → Bash-only

### Dual-Source Synthesis

1. Bash results: Precise file:line locations
2. Gemini results: Semantic understanding, design intent
3. Merge with source attribution (bash-discovered | gemini-discovered)

---

## Phase 3: Schema Validation

### ⚠️ CRITICAL: Schema Compliance Protocol

**This phase is MANDATORY when schema file is specified in prompt.**

**Step 1: Read Schema FIRST**
```
Read(schema_file_path)
```

**Step 2: Extract Schema Requirements**

Parse and memorize:
1. **Root structure** - Is it array `[...]` or object `{...}`?
2. **Required fields** - List all `"required": [...]` arrays
3. **Field names EXACTLY** - Copy character-by-character (case-sensitive)
4. **Enum values** - Copy exact strings (e.g., `"critical"` not `"Critical"`)
5. **Nested structures** - Note flat vs nested requirements

**Step 3: Pre-Output Validation Checklist**

Before writing ANY JSON output, verify:

- [ ] Root structure matches schema (array vs object)
- [ ] ALL required fields present at each level
- [ ] Field names EXACTLY match schema (character-by-character)
- [ ] Enum values EXACTLY match schema (case-sensitive)
- [ ] Nested structures follow schema pattern (flat vs nested)
- [ ] Data types correct (string, integer, array, object)
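
A spot-check along these lines can run before the file is written; the field names, enum values, and `output.json` path below are placeholders for whatever the specified schema actually requires, so this is a hedged sketch rather than full JSON Schema validation:

```bash
# Required top-level fields present?
jq -e 'has("analysis_metadata") and has("dependency_graph")' output.json >/dev/null \
  || echo "Output is missing required top-level fields" >&2

# Enum casing exact? (e.g. risk_level must be lowercase)
jq -r '.circular_dependencies[]?.risk_level' output.json \
  | grep -vE '^(low|medium|high)$' && echo "Unexpected risk_level value" >&2
```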

---

## Phase 4: Output Generation

### Agent Output (return to caller)

Brief summary:
- Task completion status
- Key findings summary
- Generated file paths (if any)

### File Output (as specified in prompt)

**⚠️ MANDATORY WORKFLOW**:

1. `Read()` schema file BEFORE generating output
2. Extract ALL field names from schema
3. Build JSON using ONLY schema field names
4. Validate against checklist before writing
5. Write file with validated content

---

## Error Handling

**Tool Fallback**: Gemini → Qwen → Codex → Bash-only

**Schema Validation Failure**: Identify error → Correct → Re-validate

**Timeout**: Return partial results + timeout notification

---

## Key Reminders

**ALWAYS**:
1. Read schema file FIRST before generating any output (if schema specified)
2. Copy field names EXACTLY from schema (case-sensitive)
3. Verify root structure matches schema (array vs object)
4. Match nested/flat structures as schema requires
5. Use exact enum values from schema (case-sensitive)
6. Include ALL required fields at every level
7. Include file:line references in findings
8. Attribute discovery source (bash/gemini)

**NEVER**:
1. Modify any files (read-only agent)
2. Skip schema reading step when schema is specified
3. Guess field names - ALWAYS copy from schema
4. Assume structure - ALWAYS verify against schema
5. Omit required fields
@@ -66,8 +66,7 @@ You are a specialized execution agent that bridges CLI analysis tools with task
|
|||||||
"task_config": {
|
"task_config": {
|
||||||
"agent": "@test-fix-agent",
|
"agent": "@test-fix-agent",
|
||||||
"type": "test-fix-iteration",
|
"type": "test-fix-iteration",
|
||||||
"max_iterations": 5,
|
"max_iterations": 5
|
||||||
"use_codex": false
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
@@ -263,7 +262,6 @@ function extractModificationPoints() {
|
|||||||
"analysis_report": ".process/iteration-{iteration}-analysis.md",
|
"analysis_report": ".process/iteration-{iteration}-analysis.md",
|
||||||
"cli_output": ".process/iteration-{iteration}-cli-output.txt",
|
"cli_output": ".process/iteration-{iteration}-cli-output.txt",
|
||||||
"max_iterations": "{task_config.max_iterations}",
|
"max_iterations": "{task_config.max_iterations}",
|
||||||
"use_codex": "{task_config.use_codex}",
|
|
||||||
"parent_task": "{parent_task_id}",
|
"parent_task": "{parent_task_id}",
|
||||||
"created_by": "@cli-planning-agent",
|
"created_by": "@cli-planning-agent",
|
||||||
"created_at": "{timestamp}"
|
"created_at": "{timestamp}"
|
||||||
|
|||||||
@@ -24,8 +24,6 @@ You are a code execution specialist focused on implementing high-quality, produc
|
|||||||
- **Context-driven** - Use provided context and existing code patterns
|
- **Context-driven** - Use provided context and existing code patterns
|
||||||
- **Quality over speed** - Write boring, reliable code that works
|
- **Quality over speed** - Write boring, reliable code that works
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
## Execution Process
|
## Execution Process
|
||||||
|
|
||||||
### 1. Context Assessment
|
### 1. Context Assessment
|
||||||
@@ -35,7 +33,7 @@ You are a code execution specialist focused on implementing high-quality, produc
|
|||||||
- Project CLAUDE.md standards
|
- Project CLAUDE.md standards
|
||||||
- **context-package.json** (when available in workflow tasks)
|
- **context-package.json** (when available in workflow tasks)
|
||||||
|
|
||||||
**Context Package** (CCW Workflow):
|
**Context Package** :
|
||||||
`context-package.json` provides artifact paths - extract dynamically using `jq`:
|
`context-package.json` provides artifact paths - extract dynamically using `jq`:
|
||||||
```bash
|
```bash
|
||||||
# Get role analysis paths from context package
|
# Get role analysis paths from context package
|
||||||
|
|||||||
@@ -119,17 +119,6 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
|
|||||||
- No dependency management
|
- No dependency management
|
||||||
- Used for temporary context preparation
|
- Used for temporary context preparation
|
||||||
|
|
||||||
### NOT Handled by This Agent
|
|
||||||
|
|
||||||
**JSON format** (used by code-developer, test-fix-agent):
|
|
||||||
```json
|
|
||||||
"flow_control": {
|
|
||||||
"pre_analysis": [...],
|
|
||||||
"implementation_approach": [...]
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
This complete JSON format is stored in `.task/IMPL-*.json` files and handled by implementation agents, not conceptual-planning-agent.
|
|
||||||
|
|
||||||
### Role-Specific Analysis Dimensions
|
### Role-Specific Analysis Dimensions
|
||||||
|
|
||||||
@@ -146,14 +135,14 @@ This complete JSON format is stored in `.task/IMPL-*.json` files and handled by
|
|||||||
|
|
||||||
### Output Integration
|
### Output Integration
|
||||||
|
|
||||||
**Gemini Analysis Integration**: Pattern-based analysis results are integrated into the single role's output:
|
**Gemini Analysis Integration**: Pattern-based analysis results are integrated into role output documents:
|
||||||
- Enhanced `analysis.md` with codebase insights and architectural patterns
|
- Enhanced analysis documents with codebase insights and architectural patterns
|
||||||
- Role-specific technical recommendations based on existing conventions
|
- Role-specific technical recommendations based on existing conventions
|
||||||
- Pattern-based best practices from actual code examination
|
- Pattern-based best practices from actual code examination
|
||||||
- Realistic feasibility assessments based on current implementation
|
- Realistic feasibility assessments based on current implementation
|
||||||
|
|
||||||
**Codex Analysis Integration**: Autonomous analysis results provide comprehensive insights:
|
**Codex Analysis Integration**: Autonomous analysis results provide comprehensive insights:
|
||||||
- Enhanced `analysis.md` with autonomous development recommendations
|
- Enhanced analysis documents with autonomous development recommendations
|
||||||
- Role-specific strategy based on intelligent system understanding
|
- Role-specific strategy based on intelligent system understanding
|
||||||
- Autonomous development approaches and implementation guidance
|
- Autonomous development approaches and implementation guidance
|
||||||
- Self-guided optimization and integration recommendations
|
- Self-guided optimization and integration recommendations
|
||||||
@@ -229,26 +218,23 @@ Generate documents according to loaded role template specifications:
|
|||||||
|
|
||||||
**Output Location**: `.workflow/WFS-[session]/.brainstorming/[assigned-role]/`
|
**Output Location**: `.workflow/WFS-[session]/.brainstorming/[assigned-role]/`
|
||||||
|
|
||||||
**Required Files**:
|
**Output Files**:
|
||||||
- **analysis.md**: Main role perspective analysis incorporating user context and role template
|
- **analysis.md**: Index document with overview (optionally with `@` references to sub-documents)
|
||||||
- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
|
|
||||||
- **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
|
- **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
|
||||||
- **Auto-split if large**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files: analysis.md, analysis-1.md, analysis-2.md)
|
- **analysis-{slug}.md**: Section content documents (slug from section heading: lowercase, hyphens)
|
||||||
- **Content**: Includes both analysis AND recommendations sections within analysis files
|
- Maximum 5 sub-documents (merge related sections if needed)
|
||||||
- **[role-deliverables]/**: Directory for specialized role outputs as defined in planning role template (optional)
|
- **Content**: Analysis AND recommendations sections
|
||||||
|
|
||||||
**File Structure Example**:
|
**File Structure Example**:
|
||||||
```
|
```
|
||||||
.workflow/WFS-[session]/.brainstorming/system-architect/
|
.workflow/WFS-[session]/.brainstorming/system-architect/
|
||||||
├── analysis.md # Main system architecture analysis with recommendations
|
├── analysis.md # Index with overview + @references
|
||||||
├── analysis-1.md # (Optional) Continuation if content >800 lines
|
├── analysis-architecture-assessment.md # Section content
|
||||||
└── deliverables/ # (Optional) Additional role-specific outputs
|
├── analysis-technology-evaluation.md # Section content
|
||||||
├── technical-architecture.md # System design specifications
|
├── analysis-integration-strategy.md # Section content
|
||||||
├── technology-stack.md # Technology selection rationale
|
└── analysis-recommendations.md # Section content (max 5 sub-docs total)
|
||||||
└── scalability-plan.md # Scaling strategy
|
|
||||||
|
|
||||||
NOTE: ALL brainstorming output files MUST start with 'analysis' prefix
|
NOTE: ALL files MUST start with 'analysis' prefix. Max 5 sub-documents.
|
||||||
FORBIDDEN: recommendations.md, recommendations-*.md, or any non-'analysis' prefixed files
|
|
||||||
```
|
```
|
||||||
|
|
||||||
## Role-Specific Planning Process
|
## Role-Specific Planning Process
|
||||||
@@ -268,14 +254,10 @@ FORBIDDEN: recommendations.md, recommendations-*.md, or any non-'analysis' prefi
|
|||||||
- **Validate Against Template**: Ensure analysis meets role template requirements and standards
|
- **Validate Against Template**: Ensure analysis meets role template requirements and standards
|
||||||
|
|
||||||
### 3. Brainstorming Documentation Phase
|
### 3. Brainstorming Documentation Phase
|
||||||
- **Create analysis.md**: Generate comprehensive role perspective analysis in designated output directory
|
- **Create analysis.md**: Main document with overview (optionally with `@` references)
|
||||||
- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
|
- **Create sub-documents**: `analysis-{slug}.md` for major sections (max 5)
|
||||||
- **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
|
|
||||||
- **Content**: Include both analysis AND recommendations sections within analysis files
|
|
||||||
- **Auto-split**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files total)
|
|
||||||
- **Generate Role Deliverables**: Create specialized outputs as defined in planning role template (optional)
|
|
||||||
- **Validate Output Structure**: Ensure all files saved to correct `.brainstorming/[role]/` directory
|
- **Validate Output Structure**: Ensure all files saved to correct `.brainstorming/[role]/` directory
|
||||||
- **Naming Validation**: Verify NO files with `recommendations` prefix exist
|
- **Naming Validation**: Verify ALL files start with `analysis` prefix
|
||||||
- **Quality Review**: Ensure outputs meet role template standards and user requirements
|
- **Quality Review**: Ensure outputs meet role template standards and user requirements
|
||||||
|
|
||||||
## Role-Specific Analysis Framework
|
## Role-Specific Analysis Framework
|
||||||
@@ -324,5 +306,3 @@ When analysis is complete, ensure:
|
|||||||
- **Relevance**: Directly addresses user's specified requirements
|
- **Relevance**: Directly addresses user's specified requirements
|
||||||
- **Actionability**: Provides concrete next steps and recommendations
|
- **Actionability**: Provides concrete next steps and recommendations
|
||||||
|
|
||||||
### Windows Path Format Guidelines
|
|
||||||
- **Quick Ref**: `C:\Users` → MCP: `C:\\Users` | Bash: `/c/Users` or `C:/Users`
|
|
||||||
|
|||||||
@@ -31,7 +31,7 @@ You are a context discovery specialist focused on gathering relevant project inf
|
|||||||
### 1. Reference Documentation (Project Standards)
|
### 1. Reference Documentation (Project Standards)
|
||||||
**Tools**:
|
**Tools**:
|
||||||
- `Read()` - Load CLAUDE.md, README.md, architecture docs
|
- `Read()` - Load CLAUDE.md, README.md, architecture docs
|
||||||
- `Bash(~/.claude/scripts/get_modules_by_depth.sh)` - Project structure
|
- `Bash(ccw tool exec get_modules_by_depth '{}')` - Project structure
|
||||||
- `Glob()` - Find documentation files
|
- `Glob()` - Find documentation files
|
||||||
|
|
||||||
**Use**: Phase 0 foundation setup
|
**Use**: Phase 0 foundation setup
|
||||||
@@ -82,7 +82,7 @@ mcp__code-index__set_project_path(process.cwd())
|
|||||||
mcp__code-index__refresh_index()
|
mcp__code-index__refresh_index()
|
||||||
|
|
||||||
// 2. Project Structure
|
// 2. Project Structure
|
||||||
bash(~/.claude/scripts/get_modules_by_depth.sh)
|
bash(ccw tool exec get_modules_by_depth '{}')
|
||||||
|
|
||||||
// 3. Load Documentation (if not in memory)
|
// 3. Load Documentation (if not in memory)
|
||||||
if (!memory.has("CLAUDE.md")) Read(CLAUDE.md)
|
if (!memory.has("CLAUDE.md")) Read(CLAUDE.md)
|
||||||
@@ -100,7 +100,87 @@ if (!memory.has("README.md")) Read(README.md)
|
|||||||
|
|
||||||
### Phase 2: Multi-Source Context Discovery
|
### Phase 2: Multi-Source Context Discovery
|
||||||
|
|
||||||
Execute all 3 tracks in parallel for comprehensive coverage.
|
Execute all tracks in parallel for comprehensive coverage.
|
||||||
|
|
||||||
|
**Note**: Historical archive analysis (querying `.workflow/archives/manifest.json`) is optional and should be performed if the manifest exists. Inject findings into `conflict_detection.historical_conflicts[]`.
|
||||||
|
|
||||||
|
#### Track 0: Exploration Synthesis (Optional)
|
||||||
|
|
||||||
|
**Trigger**: When `explorations-manifest.json` exists in session `.process/` folder
|
||||||
|
|
||||||
|
**Purpose**: Transform raw exploration data into prioritized, deduplicated insights. This is NOT simple aggregation - it synthesizes `critical_files` (priority-ranked), deduplicates patterns/integration_points, and generates `conflict_indicators`.
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
// Check for exploration results from context-gather parallel explore phase
|
||||||
|
const manifestPath = `.workflow/active/${session_id}/.process/explorations-manifest.json`;
|
||||||
|
if (file_exists(manifestPath)) {
|
||||||
|
const manifest = JSON.parse(Read(manifestPath));
|
||||||
|
|
||||||
|
// Load full exploration data from each file
|
||||||
|
const explorationData = manifest.explorations.map(exp => ({
|
||||||
|
...exp,
|
||||||
|
data: JSON.parse(Read(exp.path))
|
||||||
|
}));
|
||||||
|
|
||||||
|
// Build explorations array with summaries
|
||||||
|
const explorations = explorationData.map(exp => ({
|
||||||
|
angle: exp.angle,
|
||||||
|
file: exp.file,
|
||||||
|
path: exp.path,
|
||||||
|
index: exp.data._metadata?.exploration_index || exp.index,
|
||||||
|
summary: {
|
||||||
|
relevant_files_count: exp.data.relevant_files?.length || 0,
|
||||||
|
key_patterns: exp.data.patterns,
|
||||||
|
integration_points: exp.data.integration_points
|
||||||
|
}
|
||||||
|
}));
|
||||||
|
|
||||||
|
// SYNTHESIS (not aggregation): Transform raw data into prioritized insights
|
||||||
|
const aggregated_insights = {
|
||||||
|
// CRITICAL: Synthesize priority-ranked critical_files from multiple relevant_files lists
|
||||||
|
// - Deduplicate by path
|
||||||
|
// - Rank by: mention count across angles + individual relevance scores
|
||||||
|
// - Top 10-15 files only (focused, actionable)
|
||||||
|
critical_files: synthesizeCriticalFiles(explorationData.flatMap(e => e.data.relevant_files || [])),
|
||||||
|
|
||||||
|
// SYNTHESIS: Generate conflict indicators from pattern mismatches, constraint violations
|
||||||
|
conflict_indicators: synthesizeConflictIndicators(explorationData),
|
||||||
|
|
||||||
|
// Deduplicate clarification questions (merge similar questions)
|
||||||
|
clarification_needs: deduplicateQuestions(explorationData.flatMap(e => e.data.clarification_needs || [])),
|
||||||
|
|
||||||
|
// Preserve source attribution for traceability
|
||||||
|
constraints: explorationData.map(e => ({ constraint: e.data.constraints, source_angle: e.angle })).filter(c => c.constraint),
|
||||||
|
|
||||||
|
// Deduplicate patterns across angles (merge identical patterns)
|
||||||
|
all_patterns: deduplicatePatterns(explorationData.map(e => ({ patterns: e.data.patterns, source_angle: e.angle }))),
|
||||||
|
|
||||||
|
// Deduplicate integration points (merge by file:line location)
|
||||||
|
all_integration_points: deduplicateIntegrationPoints(explorationData.map(e => ({ points: e.data.integration_points, source_angle: e.angle })))
|
||||||
|
};
|
||||||
|
|
||||||
|
// Store for Phase 3 packaging
|
||||||
|
exploration_results = { manifest_path: manifestPath, exploration_count: manifest.exploration_count,
|
||||||
|
complexity: manifest.complexity, angles: manifest.angles_explored,
|
||||||
|
explorations, aggregated_insights };
|
||||||
|
}
|
||||||
|
|
||||||
|
// Synthesis helper functions (conceptual)
|
||||||
|
function synthesizeCriticalFiles(allRelevantFiles) {
|
||||||
|
// 1. Group by path
|
||||||
|
// 2. Count mentions across angles
|
||||||
|
// 3. Average relevance scores
|
||||||
|
// 4. Rank by: (mention_count * 0.6) + (avg_relevance * 0.4)
|
||||||
|
// 5. Return top 10-15 with mentioned_by_angles attribution
|
||||||
|
}
|
||||||
|
|
||||||
|
function synthesizeConflictIndicators(explorationData) {
|
||||||
|
// 1. Detect pattern mismatches across angles
|
||||||
|
// 2. Identify constraint violations
|
||||||
|
// 3. Flag files mentioned with conflicting integration approaches
|
||||||
|
// 4. Assign severity: critical/high/medium/low
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
#### Track 1: Reference Documentation
|
#### Track 1: Reference Documentation
|
||||||
|
|
||||||
@@ -369,7 +449,12 @@ Calculate risk level based on:
|
|||||||
{
|
{
|
||||||
"path": "system-architect/analysis.md",
|
"path": "system-architect/analysis.md",
|
||||||
"type": "primary",
|
"type": "primary",
|
||||||
"content": "# System Architecture Analysis\n\n## Overview\n..."
|
"content": "# System Architecture Analysis\n\n## Overview\n@analysis-architecture.md\n@analysis-recommendations.md"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"path": "system-architect/analysis-architecture.md",
|
||||||
|
"type": "supplementary",
|
||||||
|
"content": "# Architecture Assessment\n\n..."
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
}
|
}
|
||||||
@@ -391,33 +476,40 @@ Calculate risk level based on:
|
|||||||
},
|
},
|
||||||
"affected_modules": ["auth", "user-model", "middleware"],
|
"affected_modules": ["auth", "user-model", "middleware"],
|
||||||
"mitigation_strategy": "Incremental refactoring with backward compatibility"
|
"mitigation_strategy": "Incremental refactoring with backward compatibility"
|
||||||
|
},
|
||||||
|
"exploration_results": {
|
||||||
|
"manifest_path": ".workflow/active/{session}/.process/explorations-manifest.json",
|
||||||
|
"exploration_count": 3,
|
||||||
|
"complexity": "Medium",
|
||||||
|
"angles": ["architecture", "dependencies", "testing"],
|
||||||
|
"explorations": [
|
||||||
|
{
|
||||||
|
"angle": "architecture",
|
||||||
|
"file": "exploration-architecture.json",
|
||||||
|
"path": ".workflow/active/{session}/.process/exploration-architecture.json",
|
||||||
|
"index": 1,
|
||||||
|
"summary": {
|
||||||
|
"relevant_files_count": 5,
|
||||||
|
"key_patterns": "Service layer with DI",
|
||||||
|
"integration_points": "Container.registerService:45-60"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"aggregated_insights": {
|
||||||
|
"critical_files": [{"path": "src/auth/AuthService.ts", "relevance": 0.95, "mentioned_by_angles": ["architecture"]}],
|
||||||
|
"conflict_indicators": [{"type": "pattern_mismatch", "description": "...", "source_angle": "architecture", "severity": "medium"}],
|
||||||
|
"clarification_needs": [{"question": "...", "context": "...", "options": [], "source_angle": "architecture"}],
|
||||||
|
"constraints": [{"constraint": "Must follow existing DI pattern", "source_angle": "architecture"}],
|
||||||
|
"all_patterns": [{"patterns": "Service layer with DI", "source_angle": "architecture"}],
|
||||||
|
"all_integration_points": [{"points": "Container.registerService:45-60", "source_angle": "architecture"}]
|
||||||
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
## Execution Mode: Brainstorm vs Plan
|
**Note**: `exploration_results` is populated when exploration files exist (from context-gather parallel explore phase). If no explorations, this field is omitted or empty.
|
||||||
|
|
||||||
### Brainstorm Mode (Lightweight)
|
|
||||||
**Purpose**: Provide high-level context for generating brainstorming questions
|
|
||||||
**Execution**: Phase 1-2 only (skip deep analysis)
|
|
||||||
**Output**:
|
|
||||||
- Lightweight context-package with:
|
|
||||||
- Project structure overview
|
|
||||||
- Tech stack identification
|
|
||||||
- High-level existing module names
|
|
||||||
- Basic conflict risk (file count only)
|
|
||||||
- Skip: Detailed dependency graphs, deep code analysis, web research
|
|
||||||
|
|
||||||
### Plan Mode (Comprehensive)
|
|
||||||
**Purpose**: Detailed implementation planning with conflict detection
|
|
||||||
**Execution**: Full Phase 1-3 (complete discovery + analysis)
|
|
||||||
**Output**:
|
|
||||||
- Comprehensive context-package with:
|
|
||||||
- Detailed dependency graphs
|
|
||||||
- Deep code structure analysis
|
|
||||||
- Conflict detection with mitigation strategies
|
|
||||||
- Web research for unfamiliar tech
|
|
||||||
- Include: All discovery tracks, relevance scoring, 3-source synthesis
|
|
||||||
|
|
||||||
## Quality Validation
|
## Quality Validation
|
||||||
|
|
||||||
|
|||||||
@@ -8,7 +8,7 @@ You are a documentation update coordinator for complex projects. Orchestrate par
|
|||||||
|
|
||||||
## Core Mission
|
## Core Mission
|
||||||
|
|
||||||
Execute depth-parallel updates for all modules using `~/.claude/scripts/update_module_claude.sh`. **Every module path must be processed**.
|
Execute depth-parallel updates for all modules using `ccw tool exec update_module_claude`. **Every module path must be processed**.
|
||||||
|
|
||||||
## Input Context
|
## Input Context
|
||||||
|
|
||||||
@@ -42,12 +42,12 @@ TodoWrite([
|
|||||||
# 3. Launch parallel jobs (max 4)
|
# 3. Launch parallel jobs (max 4)
|
||||||
|
|
||||||
# Depth 5 example (Layer 3 - use multi-layer):
|
# Depth 5 example (Layer 3 - use multi-layer):
|
||||||
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/analysis" "gemini" &
|
ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/analysis","tool":"gemini"}' &
|
||||||
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/development" "gemini" &
|
ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/development","tool":"gemini"}' &
|
||||||
|
|
||||||
# Depth 1 example (Layer 2 - use single-layer):
|
# Depth 1 example (Layer 2 - use single-layer):
|
||||||
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/auth" "gemini" &
|
ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/auth","tool":"gemini"}' &
|
||||||
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/api" "gemini" &
|
ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/api","tool":"gemini"}' &
|
||||||
# ... up to 4 concurrent jobs
|
# ... up to 4 concurrent jobs
|
||||||
|
|
||||||
# 4. Wait for all depth jobs to complete
|
# 4. Wait for all depth jobs to complete
|
||||||
|
|||||||
@@ -397,23 +397,3 @@ function detect_framework_from_config() {
|
|||||||
- ✅ All missing tests catalogued with priority
|
- ✅ All missing tests catalogued with priority
|
||||||
- ✅ Execution time < 30 seconds (< 60s for large codebases)
|
- ✅ Execution time < 30 seconds (< 60s for large codebases)
|
||||||
|
|
||||||
## Integration Points
|
|
||||||
|
|
||||||
### Called By
|
|
||||||
- `/workflow:tools:test-context-gather` - Orchestrator command
|
|
||||||
|
|
||||||
### Calls
|
|
||||||
- Code-Index MCP tools (preferred)
|
|
||||||
- ripgrep/find (fallback)
|
|
||||||
- Bash file operations
|
|
||||||
|
|
||||||
### Followed By
|
|
||||||
- `/workflow:tools:test-concept-enhanced` - Test generation analysis
|
|
||||||
|
|
||||||
## Notes
|
|
||||||
|
|
||||||
- **Detection-first**: Always check for existing test-context-package before analysis
|
|
||||||
- **Code-Index priority**: Use MCP tools when available, fallback to CLI
|
|
||||||
- **Framework agnostic**: Supports Jest, Mocha, pytest, RSpec, etc.
|
|
||||||
- **Coverage gap focus**: Primary goal is identifying missing tests
|
|
||||||
- **Source context critical**: Implementation summaries guide test generation
|
|
||||||
|
|||||||
@@ -142,9 +142,9 @@ run_test_layer "L1-unit" "$UNIT_CMD"
|
|||||||
|
|
||||||
### 3. Failure Diagnosis & Fixing Loop
|
### 3. Failure Diagnosis & Fixing Loop
|
||||||
|
|
||||||
**Execution Modes**:
|
**Execution Modes** (determined by `flow_control.implementation_approach`):
|
||||||
|
|
||||||
**A. Manual Mode (Default, meta.use_codex=false)**:
|
**A. Agent Mode (Default, no `command` field in steps)**:
|
||||||
```
|
```
|
||||||
WHILE tests are failing AND iterations < max_iterations:
|
WHILE tests are failing AND iterations < max_iterations:
|
||||||
1. Use Gemini to diagnose failure (bug-fix template)
|
1. Use Gemini to diagnose failure (bug-fix template)
|
||||||
@@ -155,17 +155,17 @@ WHILE tests are failing AND iterations < max_iterations:
|
|||||||
END WHILE
|
END WHILE
|
||||||
```
|
```
|
||||||
|
|
||||||
**B. Codex Mode (meta.use_codex=true)**:
|
**B. CLI Mode (`command` field present in implementation_approach steps)**:
|
||||||
```
|
```
|
||||||
WHILE tests are failing AND iterations < max_iterations:
|
WHILE tests are failing AND iterations < max_iterations:
|
||||||
1. Use Gemini to diagnose failure (bug-fix template)
|
1. Use Gemini to diagnose failure (bug-fix template)
|
||||||
2. Use Codex to apply fixes automatically with resume mechanism
|
2. Execute `command` field (e.g., Codex) to apply fixes automatically
|
||||||
3. Re-run test suite
|
3. Re-run test suite
|
||||||
4. Verify fix doesn't break other tests
|
4. Verify fix doesn't break other tests
|
||||||
END WHILE
|
END WHILE
|
||||||
```
|
```
|
||||||
|
|
||||||
**Codex Resume in Test-Fix Cycle** (when `meta.use_codex=true`):
|
**Codex Resume in Test-Fix Cycle** (when step has `command` with Codex):
|
||||||
- First iteration: Start new Codex session with full context
|
- First iteration: Start new Codex session with full context
|
||||||
- Subsequent iterations: Use `resume --last` to maintain fix history and apply consistent strategies
|
- Subsequent iterations: Use `resume --last` to maintain fix history and apply consistent strategies
|
||||||
|
|
||||||
@@ -331,6 +331,8 @@ When generating test results for orchestrator (saved to `.process/test-results.j
|
|||||||
- Break existing passing tests
|
- Break existing passing tests
|
||||||
- Skip final verification
|
- Skip final verification
|
||||||
- Leave tests failing - must achieve 100% pass rate
|
- Leave tests failing - must achieve 100% pass rate
|
||||||
|
- Use `run_in_background` for Bash() commands - always set `run_in_background=false` to ensure tests run in foreground for proper output capture
|
||||||
|
- Use complex bash pipe chains (`cmd | grep | awk | sed`) - prefer dedicated tools (Read, Grep, Glob) for file operations and content extraction; simple single-pipe commands are acceptable when necessary
|
||||||
|
|
||||||
## Quality Certification
|
## Quality Certification
|
||||||
|
|
||||||
|
|||||||
@@ -217,11 +217,6 @@ You execute 6 distinct task types organized into 3 patterns. Each task includes
|
|||||||
|
|
||||||
### Structure Optimization
|
### Structure Optimization
|
||||||
|
|
||||||
**Layout Structure Benefits**:
|
|
||||||
- Eliminates redundancy between structure and styling
|
|
||||||
- Layout properties co-located with DOM elements
|
|
||||||
- Responsive overrides apply directly to affected elements
|
|
||||||
- Single source of truth for each element
|
|
||||||
|
|
||||||
**Component State Coverage**:
|
**Component State Coverage**:
|
||||||
- Interactive components (button, input, dropdown) MUST define: default, hover, focus, active, disabled
|
- Interactive components (button, input, dropdown) MUST define: default, hover, focus, active, disabled
|
||||||
@@ -323,270 +318,21 @@ You execute 6 distinct task types organized into 3 patterns. Each task includes
|
|||||||
|
|
||||||
### design-tokens.json
|
### design-tokens.json
|
||||||
|
|
||||||
|
**Template Reference**: `~/.claude/workflows/cli-templates/ui-design/systems/design-tokens.json`
|
||||||
|
|
||||||
**Format**: W3C Design Tokens Community Group Specification
|
**Format**: W3C Design Tokens Community Group Specification
|
||||||
|
|
||||||
**Schema Structure**:
|
**Structure Overview**:
|
||||||
```json
|
- **color**: Base colors, interactive states (primary, secondary, accent, destructive), muted, chart, sidebar
|
||||||
{
|
- **typography**: Font families, sizes, line heights, letter spacing, combinations
|
||||||
"$schema": "https://tr.designtokens.org/format/",
|
- **spacing**: Systematic scale (0-64, multiples of 0.25rem)
|
||||||
"name": "string - Token set name",
|
- **opacity**: disabled, hover, active
|
||||||
"description": "string - Token set description",
|
- **shadows**: 2xs to 2xl (8-tier system)
|
||||||
|
- **border_radius**: sm to xl + DEFAULT
|
||||||
"color": {
|
- **breakpoints**: sm to 2xl
|
||||||
"background": { "$type": "color", "$value": { "light": "oklch(...)", "dark": "oklch(...)" }, "$description": "optional" },
|
- **component**: 12+ components with base, size, variant, state structures
|
||||||
"foreground": { "$type": "color", "$value": { "light": "oklch(...)", "dark": "oklch(...)" } },
|
- **elevation**: z-index values for layered components
|
||||||
"card": { "$type": "color", "$value": { "light": "oklch(...)", "dark": "oklch(...)" } },
|
- **_metadata**: version, created, source, theme_colors_guide, conflicts, code_snippets, usage_recommendations
|
||||||
"card-foreground": { "$type": "color", "$value": { "light": "oklch(...)", "dark": "oklch(...)" } },
|
|
||||||
"border": { "$type": "color", "$value": { "light": "oklch(...)", "dark": "oklch(...)" } },
|
|
||||||
"input": { "$type": "color", "$value": { "light": "oklch(...)", "dark": "oklch(...)" } },
|
|
||||||
"ring": { "$type": "color", "$value": { "light": "oklch(...)", "dark": "oklch(...)" } },
|
|
||||||
|
|
||||||
"interactive": {
|
|
||||||
"primary": {
|
|
||||||
"default": { "$type": "color", "$value": { "light": "oklch(...)", "dark": "oklch(...)" } },
|
|
||||||
"hover": { "$type": "color", "$value": { "light": "oklch(...)", "dark": "oklch(...)" } },
|
|
||||||
"active": { "$type": "color", "$value": { "light": "oklch(...)", "dark": "oklch(...)" } },
|
|
||||||
"disabled": { "$type": "color", "$value": { "light": "oklch(...)", "dark": "oklch(...)" } },
|
|
||||||
"foreground": { "$type": "color", "$value": { "light": "oklch(...)", "dark": "oklch(...)" } }
|
|
||||||
},
|
|
||||||
"secondary": { "/* Same structure as primary */" },
|
|
||||||
"accent": { "/* Same structure (no disabled state) */" },
|
|
||||||
"destructive": { "/* Same structure (no active/disabled states) */" }
|
|
||||||
},
|
|
||||||
|
|
||||||
"muted": { "$type": "color", "$value": { "light": "oklch(...)", "dark": "oklch(...)" } },
|
|
||||||
"muted-foreground": { "$type": "color", "$value": { "light": "oklch(...)", "dark": "oklch(...)" } },
|
|
||||||
|
|
||||||
"chart": {
|
|
||||||
"1": { "$type": "color", "$value": { "light": "oklch(...)", "dark": "oklch(...)" } },
|
|
||||||
"2": { "/* ... */" },
|
|
||||||
"3": { "/* ... */" },
|
|
||||||
"4": { "/* ... */" },
|
|
||||||
"5": { "/* ... */" }
|
|
||||||
},
|
|
||||||
|
|
||||||
"sidebar": {
|
|
||||||
"background": { "$type": "color", "$value": { "light": "oklch(...)", "dark": "oklch(...)" } },
|
|
||||||
"foreground": { "/* ... */" },
|
|
||||||
"primary": { "/* ... */" },
|
|
||||||
"primary-foreground": { "/* ... */" },
|
|
||||||
"accent": { "/* ... */" },
|
|
||||||
"accent-foreground": { "/* ... */" },
|
|
||||||
"border": { "/* ... */" },
|
|
||||||
"ring": { "/* ... */" }
|
|
||||||
}
|
|
||||||
},
|
|
||||||
|
|
||||||
"typography": {
|
|
||||||
"font_families": {
|
|
||||||
"sans": "string - 'Font Name', fallback1, fallback2",
|
|
||||||
"serif": "string",
|
|
||||||
"mono": "string"
|
|
||||||
},
|
|
||||||
"font_sizes": {
|
|
||||||
"xs": "0.75rem",
|
|
||||||
"sm": "0.875rem",
|
|
||||||
"base": "1rem",
|
|
||||||
"lg": "1.125rem",
|
|
||||||
"xl": "1.25rem",
|
|
||||||
"2xl": "1.5rem",
|
|
||||||
"3xl": "1.875rem",
|
|
||||||
"4xl": "2.25rem"
|
|
||||||
},
|
|
||||||
"line_heights": {
|
|
||||||
"tight": "number",
|
|
||||||
"normal": "number",
|
|
||||||
"relaxed": "number"
|
|
||||||
},
|
|
||||||
"letter_spacing": {
|
|
||||||
"tight": "string",
|
|
||||||
"normal": "string",
|
|
||||||
"wide": "string"
|
|
||||||
},
|
|
||||||
"combinations": [
|
|
||||||
{
|
|
||||||
"name": "h1|h2|h3|h4|h5|h6|body|caption",
|
|
||||||
"font_family": "sans|serif|mono",
|
|
||||||
"font_size": "string - reference to font_sizes",
|
|
||||||
"font_weight": "number - 400|500|600|700",
|
|
||||||
"line_height": "string",
|
|
||||||
"letter_spacing": "string"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
|
|
||||||
"spacing": {
|
|
||||||
"0": "0",
|
|
||||||
"1": "0.25rem",
|
|
||||||
"2": "0.5rem",
|
|
||||||
"/* Systematic scale: 3, 4, 6, 8, 12, 16, 20, 24, 32, 40, 48, 56, 64 */"
|
|
||||||
},
|
|
||||||
|
|
||||||
"opacity": {
|
|
||||||
"disabled": "0.5",
|
|
||||||
"hover": "0.8",
|
|
||||||
"active": "1"
|
|
||||||
},
|
|
||||||
|
|
||||||
"shadows": {
|
|
||||||
"2xs": "string - CSS shadow value",
|
|
||||||
"xs": "string",
|
|
||||||
"sm": "string",
|
|
||||||
"DEFAULT": "string",
|
|
||||||
"md": "string",
|
|
||||||
"lg": "string",
|
|
||||||
"xl": "string",
|
|
||||||
"2xl": "string"
|
|
||||||
},
|
|
||||||
|
|
||||||
"border_radius": {
|
|
||||||
"sm": "string - calc() or fixed",
|
|
||||||
"md": "string",
|
|
||||||
"lg": "string",
|
|
||||||
"xl": "string",
|
|
||||||
"DEFAULT": "string"
|
|
||||||
},
|
|
||||||
|
|
||||||
"breakpoints": {
|
|
||||||
"sm": "640px",
|
|
||||||
"md": "768px",
|
|
||||||
"lg": "1024px",
|
|
||||||
"xl": "1280px",
|
|
||||||
"2xl": "1536px"
|
|
||||||
},
|
|
||||||
|
|
||||||
"component": {
|
|
||||||
"/* COMPONENT PATTERN - Apply to: button, card, input, dialog, dropdown, toast, accordion, tabs, switch, checkbox, badge, alert */": {
|
|
||||||
"$type": "component",
|
|
||||||
"base": {
|
|
||||||
"/* Layout properties using camelCase */": "value or {token.path}",
|
|
||||||
"display": "inline-flex|flex|block",
|
|
||||||
"alignItems": "center",
|
|
||||||
"borderRadius": "{border_radius.md}",
|
|
||||||
"transition": "{transitions.default}"
|
|
||||||
},
|
|
||||||
"size": {
|
|
||||||
"small": { "height": "32px", "padding": "{spacing.2} {spacing.3}", "fontSize": "{typography.font_sizes.xs}" },
|
|
||||||
"default": { "height": "40px", "padding": "{spacing.2} {spacing.4}" },
|
|
||||||
"large": { "height": "48px", "padding": "{spacing.3} {spacing.6}", "fontSize": "{typography.font_sizes.base}" }
|
|
||||||
},
|
|
||||||
"variant": {
|
|
||||||
"variantName": {
|
|
||||||
"default": { "backgroundColor": "{color.interactive.primary.default}", "color": "{color.interactive.primary.foreground}" },
|
|
||||||
"hover": { "backgroundColor": "{color.interactive.primary.hover}" },
|
|
||||||
"active": { "backgroundColor": "{color.interactive.primary.active}" },
|
|
||||||
"disabled": { "backgroundColor": "{color.interactive.primary.disabled}", "opacity": "{opacity.disabled}", "cursor": "not-allowed" },
|
|
||||||
"focus": { "outline": "2px solid {color.ring}", "outlineOffset": "2px" }
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"state": {
|
|
||||||
"/* For stateful components (dialog, accordion, etc.) */": {
|
|
||||||
"open": { "animation": "{animation.name.component-open} {animation.duration.normal} {animation.easing.ease-out}" },
|
|
||||||
"closed": { "animation": "{animation.name.component-close} {animation.duration.normal} {animation.easing.ease-in}" }
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
},
|
|
||||||
|
|
||||||
"elevation": {
|
|
||||||
"$type": "elevation",
|
|
||||||
"base": { "$value": "0" },
|
|
||||||
"overlay": { "$value": "40" },
|
|
||||||
"dropdown": { "$value": "50" },
|
|
||||||
"dialog": { "$value": "50" },
|
|
||||||
"tooltip": { "$value": "60" }
|
|
||||||
},
|
|
||||||
|
|
||||||
"_metadata": {
|
|
||||||
"version": "string - W3C version or custom version",
|
|
||||||
"created": "ISO timestamp - 2024-01-01T00:00:00Z",
|
|
||||||
"source": "code-import|explore|text",
|
|
||||||
"theme_colors_guide": {
|
|
||||||
"description": "Theme colors are the core brand identity colors that define the visual hierarchy and emotional tone of the design system",
|
|
||||||
"primary": {
|
|
||||||
"role": "Main brand color",
|
|
||||||
"usage": "Primary actions (CTAs, key interactive elements, navigation highlights, primary buttons)",
|
|
||||||
"contrast_requirement": "WCAG AA - 4.5:1 for text, 3:1 for UI components"
|
|
||||||
},
|
|
||||||
"secondary": {
|
|
||||||
"role": "Supporting brand color",
|
|
||||||
"usage": "Secondary actions and complementary elements (less prominent buttons, secondary navigation, supporting features)",
|
|
||||||
"principle": "Should complement primary without competing for attention"
|
|
||||||
},
|
|
||||||
"accent": {
|
|
||||||
"role": "Highlight color for emphasis",
|
|
||||||
"usage": "Attention-grabbing elements used sparingly (badges, notifications, special promotions, highlights)",
|
|
||||||
"principle": "Should create strong visual contrast to draw focus"
|
|
||||||
},
|
|
||||||
"destructive": {
|
|
||||||
"role": "Error and destructive action color",
|
|
||||||
"usage": "Delete buttons, error messages, critical warnings",
|
|
||||||
"principle": "Must signal danger or caution clearly"
|
|
||||||
},
|
|
||||||
"harmony_note": "All theme colors must work harmoniously together and align with brand identity. In multi-file extraction, prioritize definitions with semantic comments explaining brand intent."
|
|
||||||
},
|
|
||||||
"conflicts": [
|
|
||||||
{
|
|
||||||
"token_name": "string - which token has conflicts",
|
|
||||||
"category": "string - colors|typography|etc",
|
|
||||||
"definitions": [
|
|
||||||
{
|
|
||||||
"value": "string - token value",
|
|
||||||
"source_file": "string - absolute path",
|
|
||||||
"line_number": "number",
|
|
||||||
"context": "string - surrounding comment or null",
|
|
||||||
"semantic_intent": "string - interpretation of definition"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"selected_value": "string - final chosen value",
|
|
||||||
"selection_reason": "string - why this value was chosen"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"code_snippets": [
|
|
||||||
{
|
|
||||||
"category": "colors|typography|spacing|shadows|border_radius|component",
|
|
||||||
"token_name": "string - which token this snippet defines",
|
|
||||||
"source_file": "string - absolute path",
|
|
||||||
"line_start": "number",
|
|
||||||
"line_end": "number",
|
|
||||||
"snippet": "string - complete code block",
|
|
||||||
"context_type": "css-variable|css-class|js-object|scss-variable|etc"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"usage_recommendations": {
|
|
||||||
"typography": {
|
|
||||||
"common_sizes": {
|
|
||||||
"small_text": "sm (0.875rem)",
|
|
||||||
"body_text": "base (1rem)",
|
|
||||||
"heading": "2xl-4xl"
|
|
||||||
},
|
|
||||||
"common_combinations": [
|
|
||||||
{
|
|
||||||
"name": "Heading + Body",
|
|
||||||
"heading": "2xl",
|
|
||||||
"body": "base",
|
|
||||||
"use_case": "Article sections"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"spacing": {
|
|
||||||
"size_guide": {
|
|
||||||
"tight": "1-2 (0.25rem-0.5rem)",
|
|
||||||
"normal": "4-6 (1rem-1.5rem)",
|
|
||||||
"loose": "8-12 (2rem-3rem)"
|
|
||||||
},
|
|
||||||
"common_patterns": [
|
|
||||||
{
|
|
||||||
"pattern": "padding-4 margin-bottom-6",
|
|
||||||
"use_case": "Card content spacing",
|
|
||||||
"pixel_value": "1rem padding, 1.5rem margin"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
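
**Example (illustrative)**: A minimal sketch of how a concrete `component.button` entry could instantiate the pattern above. The `primary` variant name comes from the required component list below; the size and state values mirror the schema placeholders and are not taken from any real extraction.

```json
{
  "component": {
    "button": {
      "$type": "component",
      "base": {
        "display": "inline-flex",
        "alignItems": "center",
        "borderRadius": "{border_radius.md}",
        "transition": "{transitions.default}"
      },
      "size": {
        "default": { "height": "40px", "padding": "{spacing.2} {spacing.4}" }
      },
      "variant": {
        "primary": {
          "default": { "backgroundColor": "{color.interactive.primary.default}", "color": "{color.interactive.primary.foreground}" },
          "hover": { "backgroundColor": "{color.interactive.primary.hover}" },
          "disabled": { "backgroundColor": "{color.interactive.primary.disabled}", "opacity": "{opacity.disabled}", "cursor": "not-allowed" }
        }
      }
    }
  }
}
```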

**Required Components** (12+ components, use pattern above):

- **button**: 5 variants (primary, secondary, destructive, outline, ghost) + 3 sizes + states (default, hover, active, disabled, focus)

### layout-templates.json

**Template Reference**: `~/.claude/workflows/cli-templates/ui-design/systems/layout-templates.json`

**Optimization**: Unified structure combining DOM and styling into single hierarchy

**Structure Overview**:

- **templates[]**: Array of layout templates
- **target**: page/component name (hero-section, product-card)
- **component_type**: universal | specialized
- **device_type**: mobile | tablet | desktop | responsive
- **layout_strategy**: grid-3col, flex-row, stack, sidebar, etc.
- **structure**: Unified DOM + layout hierarchy
  - **tag**: HTML5 semantic tags
  - **attributes**: class, role, aria-*, data-state
  - **layout**: Layout properties only (display, grid, flex, position, spacing) using {token.path}
  - **responsive**: Breakpoint-specific overrides (ONLY changed properties)
  - **children**: Recursive structure
  - **content**: Text or {{placeholder}}
- **accessibility**: patterns, keyboard_navigation, focus_management, screen_reader_notes
- **usage_guide**: common_sizes, variant_recommendations, usage_context, accessibility_tips
- **extraction_metadata**: source, created, code_snippets

**Schema Structure**:

```json
{
  "$schema": "https://tr.designtokens.org/format/",
  "templates": [
    {
      "target": "string - page/component name (e.g., hero-section, product-card)",
      "description": "string - layout description",
      "component_type": "universal|specialized",
      "device_type": "mobile|tablet|desktop|responsive",
      "layout_strategy": "string - grid-3col|flex-row|stack|sidebar|etc",

      "structure": {
        "tag": "string - HTML5 semantic tag (header|nav|main|section|article|aside|footer|div|etc)",
        "attributes": {
          "class": "string - semantic class name",
          "role": "string - ARIA role (navigation|main|complementary|etc)",
          "aria-label": "string - ARIA label",
          "aria-describedby": "string - ARIA describedby",
          "data-state": "string - data attributes for state management (open|closed|etc)"
        },
        "layout": {
          "/* LAYOUT PROPERTIES ONLY - Use camelCase for property names */": "",
          "display": "grid|flex|block|inline-flex",
          "grid-template-columns": "{spacing.*} or CSS value (repeat(3, 1fr))",
          "grid-template-rows": "string",
          "gap": "{spacing.*}",
          "padding": "{spacing.*}",
          "margin": "{spacing.*}",
          "alignItems": "start|center|end|stretch",
          "justifyContent": "start|center|end|space-between|space-around",
          "flexDirection": "row|column",
          "flexWrap": "wrap|nowrap",
          "position": "relative|absolute|fixed|sticky",
          "top|right|bottom|left": "string",
          "width": "string",
          "height": "string",
          "maxWidth": "string",
          "minHeight": "string"
        },
        "responsive": {
          "/* ONLY properties that CHANGE at each breakpoint - NO repetition */": "",
          "sm": {
            "grid-template-columns": "1fr",
            "padding": "{spacing.4}"
          },
          "md": {
            "grid-template-columns": "repeat(2, 1fr)",
            "gap": "{spacing.6}"
          },
          "lg": {
            "grid-template-columns": "repeat(3, 1fr)"
          }
        },
        "children": [
          {
            "/* Recursive structure - same fields as parent */": "",
            "tag": "string",
            "attributes": {},
            "layout": {},
            "responsive": {},
            "children": [],
            "content": "string or {{placeholder}}"
          }
        ],
        "content": "string - text content or {{placeholder}} for dynamic content"
      },

      "accessibility": {
        "patterns": [
          "string - ARIA patterns used (e.g., WAI-ARIA Tabs pattern, Dialog pattern)"
        ],
        "keyboard_navigation": [
          "string - keyboard shortcuts (e.g., Tab/Shift+Tab navigation, Escape to close)"
        ],
        "focus_management": "string - focus trap strategy, initial focus target",
        "screen_reader_notes": [
          "string - screen reader announcements (e.g., Dialog opened, Tab selected)"
        ]
      },

      "usage_guide": {
        "common_sizes": {
          "small": {
            "dimensions": "string - e.g., px-3 py-1.5 (height: ~32px)",
            "use_case": "string - Compact UI, mobile views"
          },
          "medium": {
            "dimensions": "string - e.g., px-4 py-2 (height: ~40px)",
            "use_case": "string - Default size for most contexts"
          },
          "large": {
            "dimensions": "string - e.g., px-6 py-3 (height: ~48px)",
            "use_case": "string - Prominent CTAs, hero sections"
          }
        },
        "variant_recommendations": {
          "variant_name": {
            "description": "string - when to use this variant",
            "typical_actions": ["string - action examples"]
          }
        },
        "usage_context": [
          "string - typical usage scenarios (e.g., Landing page hero, Product listing grid)"
        ],
        "accessibility_tips": [
          "string - accessibility best practices (e.g., Ensure heading hierarchy, Add aria-label)"
        ]
      },

      "extraction_metadata": {
        "source": "code-import|explore|text",
        "created": "ISO timestamp",
        "code_snippets": [
          {
            "component_name": "string - which layout component",
            "source_file": "string - absolute path",
            "line_start": "number",
            "line_end": "number",
            "snippet": "string - complete HTML/CSS/JS code block",
            "context_type": "html-structure|css-utility|react-component|vue-component|etc"
          }
        ]
      }
    }
  ]
}
```
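
**Example (illustrative)**: A minimal `templates[]` entry sketching how the unified structure might look for a three-column product grid. The class names, `{{product_card_content}}` placeholder, and spacing references are assumptions for illustration only; `accessibility`, `usage_guide`, and `extraction_metadata` are omitted for brevity.

```json
{
  "$schema": "https://tr.designtokens.org/format/",
  "templates": [
    {
      "target": "product-card",
      "description": "Responsive three-column product grid",
      "component_type": "universal",
      "device_type": "responsive",
      "layout_strategy": "grid-3col",
      "structure": {
        "tag": "section",
        "attributes": { "class": "product-grid", "aria-label": "Product listing" },
        "layout": {
          "display": "grid",
          "grid-template-columns": "repeat(3, 1fr)",
          "gap": "{spacing.6}",
          "padding": "{spacing.8}"
        },
        "responsive": {
          "sm": { "grid-template-columns": "1fr", "padding": "{spacing.4}" },
          "md": { "grid-template-columns": "repeat(2, 1fr)" }
        },
        "children": [
          {
            "tag": "article",
            "attributes": { "class": "product-card" },
            "layout": { "display": "flex", "flexDirection": "column", "gap": "{spacing.3}" },
            "content": "{{product_card_content}}"
          }
        ]
      }
    }
  ]
}
```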

**Field Rules**:

- $schema MUST reference W3C Design Tokens format specification
- usage_guide OPTIONAL for specialized components (can be simplified or omitted)
- extraction_metadata.code_snippets ONLY present in Code Import mode

**Structure Optimization Benefits**:

- Eliminates redundancy between dom_structure and css_layout_rules
- Layout properties are co-located with corresponding DOM elements
- Responsive overrides apply directly to the element they affect
- Single source of truth for each element's structure and layout
- Easier to maintain and understand hierarchy

### animation-tokens.json

**Template Reference**: `~/.claude/workflows/cli-templates/ui-design/systems/animation-tokens.json`

**Structure Overview**:

- **duration**: instant (0ms), fast (150ms), normal (300ms), slow (500ms), slower (1000ms)
- **easing**: linear, ease-in, ease-out, ease-in-out, spring, bounce
- **keyframes**: Animation definitions in pairs (in/out, open/close, enter/exit)
  - Required: fade-in/out, slide-up/down, scale-in/out, accordion-down/up, dialog-open/close, dropdown-open/close, toast-enter/exit, spin, pulse
- **interactions**: Component interaction animations with property, duration, easing
  - button-hover/active, card-hover, input-focus, dropdown-toggle, accordion-toggle, dialog-toggle, tabs-switch
- **transitions**: default, colors, transform, opacity, all-smooth
- **component_animations**: Maps components to animations (MUST match design-tokens.json components)
  - State-based: dialog, dropdown, toast, accordion (use keyframes)
  - Interaction: button, card, input, tabs (use transitions)
- **accessibility**: prefers_reduced_motion with CSS rule
- **_metadata**: version, created, source, code_snippets

**Schema Structure**:

```json
{
  "$schema": "https://tr.designtokens.org/format/",

  "duration": {
    "$type": "duration",
    "instant": { "$value": "0ms" },
    "fast": { "$value": "150ms" },
    "normal": { "$value": "300ms" },
    "slow": { "$value": "500ms" },
    "slower": { "$value": "1000ms" }
  },

  "easing": {
    "$type": "cubicBezier",
    "linear": { "$value": "linear" },
    "ease-in": { "$value": "cubic-bezier(0.4, 0, 1, 1)" },
    "ease-out": { "$value": "cubic-bezier(0, 0, 0.2, 1)" },
    "ease-in-out": { "$value": "cubic-bezier(0.4, 0, 0.2, 1)" },
    "spring": { "$value": "cubic-bezier(0.68, -0.55, 0.265, 1.55)" },
    "bounce": { "$value": "cubic-bezier(0.68, -0.6, 0.32, 1.6)" }
  },

  "keyframes": {
    "/* PATTERN: Define pairs (in/out, open/close, enter/exit) */": {
      "0%": { "/* CSS properties */": "value" },
      "100%": { "/* CSS properties */": "value" }
    },
    "/* Required keyframes for components: */": "",
    "fade-in": { "0%": { "opacity": "0" }, "100%": { "opacity": "1" } },
    "fade-out": { "/* reverse of fade-in */": "" },
    "slide-up": { "0%": { "transform": "translateY(10px)", "opacity": "0" }, "100%": { "transform": "translateY(0)", "opacity": "1" } },
    "slide-down": { "/* reverse direction */": "" },
    "scale-in": { "0%": { "transform": "scale(0.95)", "opacity": "0" }, "100%": { "transform": "scale(1)", "opacity": "1" } },
    "scale-out": { "/* reverse of scale-in */": "" },
    "accordion-down": { "0%": { "height": "0", "opacity": "0" }, "100%": { "height": "var(--radix-accordion-content-height)", "opacity": "1" } },
    "accordion-up": { "/* reverse */": "" },
    "dialog-open": { "0%": { "transform": "translate(-50%, -48%) scale(0.96)", "opacity": "0" }, "100%": { "transform": "translate(-50%, -50%) scale(1)", "opacity": "1" } },
    "dialog-close": { "/* reverse */": "" },
    "dropdown-open": { "0%": { "transform": "scale(0.95) translateY(-4px)", "opacity": "0" }, "100%": { "transform": "scale(1) translateY(0)", "opacity": "1" } },
    "dropdown-close": { "/* reverse */": "" },
    "toast-enter": { "0%": { "transform": "translateX(100%)", "opacity": "0" }, "100%": { "transform": "translateX(0)", "opacity": "1" } },
    "toast-exit": { "/* reverse */": "" },
    "spin": { "0%": { "transform": "rotate(0deg)" }, "100%": { "transform": "rotate(360deg)" } },
    "pulse": { "0%, 100%": { "opacity": "1" }, "50%": { "opacity": "0.5" } }
  },

  "interactions": {
    "/* PATTERN: Define for each interactive component state */": {
      "property": "string - CSS properties (comma-separated)",
      "duration": "{duration.*}",
      "easing": "{easing.*}"
    },
    "button-hover": { "property": "background-color, transform", "duration": "{duration.fast}", "easing": "{easing.ease-out}" },
    "button-active": { "property": "transform", "duration": "{duration.instant}", "easing": "{easing.ease-in}" },
    "card-hover": { "property": "box-shadow, transform", "duration": "{duration.normal}", "easing": "{easing.ease-in-out}" },
    "input-focus": { "property": "border-color, box-shadow", "duration": "{duration.fast}", "easing": "{easing.ease-out}" },
    "dropdown-toggle": { "property": "opacity, transform", "duration": "{duration.fast}", "easing": "{easing.ease-out}" },
    "accordion-toggle": { "property": "height, opacity", "duration": "{duration.normal}", "easing": "{easing.ease-in-out}" },
    "dialog-toggle": { "property": "opacity, transform", "duration": "{duration.normal}", "easing": "{easing.spring}" },
    "tabs-switch": { "property": "color, border-color", "duration": "{duration.fast}", "easing": "{easing.ease-in-out}" }
  },

  "transitions": {
    "default": { "$value": "all {duration.normal} {easing.ease-in-out}" },
    "colors": { "$value": "color {duration.fast} {easing.linear}, background-color {duration.fast} {easing.linear}" },
    "transform": { "$value": "transform {duration.normal} {easing.spring}" },
    "opacity": { "$value": "opacity {duration.fast} {easing.linear}" },
    "all-smooth": { "$value": "all {duration.slow} {easing.ease-in-out}" }
  },

  "component_animations": {
    "/* PATTERN: Map each component to its animations - MUST match design-tokens.json component list */": {
      "stateOrInteraction": {
        "animation": "keyframe-name {duration.*} {easing.*} OR none",
        "transition": "{interactions.*} OR none"
      }
    },
    "button": {
      "hover": { "animation": "none", "transition": "{interactions.button-hover}" },
      "active": { "animation": "none", "transition": "{interactions.button-active}" }
    },
    "card": {
      "hover": { "animation": "none", "transition": "{interactions.card-hover}" }
    },
    "input": {
      "focus": { "animation": "none", "transition": "{interactions.input-focus}" }
    },
    "dialog": {
      "open": { "animation": "dialog-open {duration.normal} {easing.spring}" },
      "close": { "animation": "dialog-close {duration.normal} {easing.ease-in}" }
    },
    "dropdown": {
      "open": { "animation": "dropdown-open {duration.fast} {easing.ease-out}" },
      "close": { "animation": "dropdown-close {duration.fast} {easing.ease-in}" }
    },
    "toast": {
      "enter": { "animation": "toast-enter {duration.normal} {easing.ease-out}" },
      "exit": { "animation": "toast-exit {duration.normal} {easing.ease-in}" }
    },
    "accordion": {
      "open": { "animation": "accordion-down {duration.normal} {easing.ease-out}" },
      "close": { "animation": "accordion-up {duration.normal} {easing.ease-in}" }
    },
    "/* Add mappings for: tabs, switch, checkbox, badge, alert */": {}
  },

  "accessibility": {
    "prefers_reduced_motion": {
      "duration": "0ms",
      "keyframes": {},
      "note": "Disable animations when user prefers reduced motion",
      "css_rule": "@media (prefers-reduced-motion: reduce) { *, *::before, *::after { animation-duration: 0.01ms !important; animation-iteration-count: 1 !important; transition-duration: 0.01ms !important; } }"
    }
  },

  "_metadata": {
    "version": "string",
    "created": "ISO timestamp",
    "source": "code-import|explore|text",
    "code_snippets": [
      {
        "animation_name": "string - keyframe/transition name",
        "source_file": "string - absolute path",
        "line_start": "number",
        "line_end": "number",
        "snippet": "string - complete @keyframes or transition code",
        "context_type": "css-keyframes|css-transition|js-animation|scss-animation|etc"
      }
    ]
  }
}
```
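
**Example (illustrative)**: The schema leaves the tabs, switch, checkbox, badge, and alert mappings as a placeholder comment. A minimal sketch of how the `tabs` entry might be filled in, reusing the `tabs-switch` interaction defined above; the `switch` state key is an assumption for illustration, not part of the template.

```json
{
  "component_animations": {
    "tabs": {
      "switch": { "animation": "none", "transition": "{interactions.tabs-switch}" }
    }
  }
}
```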

**Field Rules**:

- $schema MUST reference W3C Design Tokens format specification