diff --git a/.claude/agents/action-planning-agent.md b/.claude/agents/action-planning-agent.md index 46d40e66..7d448c0f 100644 --- a/.claude/agents/action-planning-agent.md +++ b/.claude/agents/action-planning-agent.md @@ -291,12 +291,34 @@ function computeCliStrategy(task, allTasks) { - `agent`: Assigned agent for execution - `execution_group`: Parallelization group ID (tasks with same ID can run concurrently) or `null` for sequential tasks - `module`: Module identifier for multi-module projects (e.g., `frontend`, `backend`, `shared`) or `null` for single-module -- `execution_config`: CLI execution settings (from userConfig in task-generate-agent) +- `execution_config`: CLI execution settings (MUST align with userConfig from task-generate-agent) - `method`: Execution method - `agent` (direct), `hybrid` (agent + CLI), `cli` (CLI only) - - `cli_tool`: Preferred CLI tool - `codex`, `gemini`, `qwen`, or `auto` + - `cli_tool`: Preferred CLI tool - `codex`, `gemini`, `qwen`, `auto`, or `null` (for agent-only) - `enable_resume`: Whether to use `--resume` for CLI continuity (default: true) - `previous_cli_id`: Previous task's CLI execution ID for resume (populated at runtime) +**execution_config Alignment Rules** (MANDATORY): +``` +userConfig.executionMethod → meta.execution_config + implementation_approach + +"agent" → + meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false } + implementation_approach steps: NO command field (agent direct execution) + +"hybrid" → + meta.execution_config = { method: "hybrid", cli_tool: userConfig.preferredCliTool } + implementation_approach steps: command field ONLY on complex steps + +"cli" → + meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool } + implementation_approach steps: command field on ALL steps +``` + +**Consistency Check**: `meta.execution_config.method` MUST match presence of `command` fields: +- `method: "agent"` → 0 steps have command field +- `method: "hybrid"` → some 
steps have command field +- `method: "cli"` → all steps have command field + **Test Task Extensions** (for type="test-gen" or type="test-fix"): ```json diff --git a/.claude/agents/code-developer.md b/.claude/agents/code-developer.md index 6c1894f3..e9daccbc 100644 --- a/.claude/agents/code-developer.md +++ b/.claude/agents/code-developer.md @@ -41,11 +41,43 @@ Read(.workflow/active/${SESSION_ID}/.process/context-package.json) # Returns parsed JSON with brainstorm_artifacts, focus_paths, etc. ``` +**Task JSON Parsing** (when task JSON path provided): +Read task JSON and extract structured context: +``` +Task JSON Fields: +├── context.requirements[] → What to implement (list of requirements) +├── context.acceptance[] → How to verify (validation commands) +├── context.focus_paths[] → Where to focus (directories/files) +├── context.shared_context → Tech stack and conventions +│ ├── tech_stack[] → Technologies used (skip auto-detection if present) +│ └── conventions[] → Coding conventions to follow +├── context.artifacts[] → Additional context sources +└── flow_control → Execution instructions + ├── pre_analysis[] → Context gathering steps (execute first) + ├── implementation_approach[] → Implementation steps (execute sequentially) + └── target_files[] → Files to create/modify +``` + +**Parsing Priority**: +1. Read task JSON from provided path +2. Extract `context.requirements` as implementation goals +3. Extract `context.acceptance` as verification criteria +4. If `context.shared_context.tech_stack` exists → skip auto-detection, use provided stack +5. 
Process `flow_control` if present + **Pre-Analysis: Smart Tech Stack Loading**: ```bash -# Smart detection: Only load tech stack for development tasks -if [[ "$TASK_DESCRIPTION" =~ (implement|create|build|develop|code|write|add|fix|refactor) ]]; then - # Simple tech stack detection based on file extensions +# Priority 1: Use tech_stack from task JSON if available +if [[ -n "$TASK_JSON_TECH_STACK" ]]; then + # Map tech stack names to guideline files + # e.g., ["FastAPI", "SQLAlchemy"] → python-dev.md + case "$TASK_JSON_TECH_STACK" in + *FastAPI*|*Django*|*SQLAlchemy*) TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/python-dev.md) ;; + *React*|*Next*) TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/react-dev.md) ;; + *TypeScript*) TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/typescript-dev.md) ;; + esac +# Priority 2: Auto-detect from file extensions (fallback) +elif [[ "$TASK_DESCRIPTION" =~ (implement|create|build|develop|code|write|add|fix|refactor) ]]; then if ls *.ts *.tsx 2>/dev/null | head -1; then TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/typescript-dev.md) elif grep -q "react" package.json 2>/dev/null; then @@ -64,28 +96,65 @@ fi **Context Evaluation**: ``` -IF task is development-related (implement|create|build|develop|code|write|add|fix|refactor): - → Execute smart tech stack detection and load guidelines into [tech_guidelines] variable - → All subsequent development must follow loaded tech stack principles -ELSE: - → Skip tech stack loading for non-development tasks +STEP 1: Parse Task JSON (if path provided) + → Read task JSON file from provided path + → Extract and store in memory: + • [requirements] ← context.requirements[] + • [acceptance_criteria] ← context.acceptance[] + • [tech_stack] ← context.shared_context.tech_stack[] (skip auto-detection if present) + • [conventions] ← context.shared_context.conventions[] + • [focus_paths] ← context.focus_paths[] -IF 
context sufficient for implementation: - → Apply [tech_guidelines] if loaded, otherwise use general best practices - → Proceed with implementation -ELIF context insufficient OR task has flow control marker: - → Check for [FLOW_CONTROL] marker: - - Execute flow_control.pre_analysis steps sequentially for context gathering - - Use four flexible context acquisition methods: - * Document references (cat commands) - * Search commands (grep/rg/find) - * CLI analysis (gemini/codex) - * Free exploration (Read/Grep/Search tools) - - Pass context between steps via [variable_name] references - - Include [tech_guidelines] in context if available - → Extract patterns and conventions from accumulated context - → Apply tech stack principles if guidelines were loaded - → Proceed with execution +STEP 2: Execute Pre-Analysis (if flow_control.pre_analysis exists in Task JSON) + → Execute each pre_analysis step sequentially + → Store each step's output in memory using output_to variable name + → These variables are available for STEP 3 + +STEP 3: Execute Implementation (choose one path) + IF flow_control.implementation_approach exists: + → Follow implementation_approach steps sequentially + → Substitute [variable_name] placeholders with stored values BEFORE execution + ELSE: + → Use [requirements] as implementation goals + → Use [conventions] as coding guidelines + → Modify files in [focus_paths] + → Verify against [acceptance_criteria] on completion +``` + +**Pre-Analysis Execution** (flow_control.pre_analysis): +``` +For each step in pre_analysis[]: + step.step → Step identifier (string name) + step.action → Description of what to do + step.commands → Array of commands to execute (see Command-to-Tool Mapping) + step.output_to → Variable name to store results in memory + step.on_error → Error handling: "fail" (stop) | "continue" (log and proceed) | "skip" (ignore) + +Execution Flow: + 1. For each step in order: + 2. For each command in step.commands[]: + 3. 
Parse command format → Map to actual tool + 4. Execute tool → Capture output + 5. Concatenate all outputs → Store in [step.output_to] variable + 6. Continue to next step (or handle error per on_error) +``` + +**Command-to-Tool Mapping** (explicit tool bindings): +``` +Command Format → Actual Tool Call +───────────────────────────────────────────────────── +"Read(path)" → Read tool: Read(file_path=path) +"bash(command)" → Bash tool: Bash(command=command) +"Search(pattern,path)" → Grep tool: Grep(pattern=pattern, path=path) +"Glob(pattern)" → Glob tool: Glob(pattern=pattern) +"mcp__xxx__yyy(args)" → MCP tool: mcp__xxx__yyy(args) + +Example Parsing: + "Read(backend/app/models/simulation.py)" + → Tool: Read + → Parameter: file_path = "backend/app/models/simulation.py" + → Execute: Read(file_path="backend/app/models/simulation.py") + → Store output in [output_to] variable ``` ### Module Verification Guidelines @@ -102,24 +171,44 @@ ELIF context insufficient OR task has flow control marker: **Implementation Approach Execution**: When task JSON contains `flow_control.implementation_approach` array: -1. **Sequential Processing**: Execute steps in order, respecting `depends_on` dependencies -2. **Dependency Resolution**: Wait for all steps listed in `depends_on` before starting -3. **Variable Substitution**: Use `[variable_name]` to reference outputs from previous steps -4. **Step Structure**: - - `step`: Unique identifier (1, 2, 3...) - - `title`: Step title - - `description`: Detailed description with variable references - - `modification_points`: Code modification targets - - `logic_flow`: Business logic sequence - - `command`: Optional CLI command (only when explicitly specified) - - `depends_on`: Array of step numbers that must complete first - - `output`: Variable name for this step's output -5. 
**Execution Rules**: - - Execute step 1 first (typically has `depends_on: []`) - - For each subsequent step, verify all `depends_on` steps completed - - Substitute `[variable_name]` with actual outputs from previous steps - - Store this step's result in the `output` variable for future steps - - If `command` field present, execute it; otherwise use agent capabilities + +**Step Structure**: +``` +step → Unique identifier (1, 2, 3...) +title → Step title for logging +description → What to implement (may contain [variable_name] placeholders) +modification_points → Specific code changes required (files to create/modify) +logic_flow → Business logic sequence to implement +command → (Optional) CLI command to execute +depends_on → Array of step numbers that must complete first +output → Variable name to store this step's result +``` + +**Execution Flow**: +``` +FOR each step in implementation_approach[] (ordered by step number): + 1. Check depends_on: Wait for all listed step numbers to complete + 2. Variable Substitution: Replace [variable_name] in description/modification_points + with values stored from previous steps' output + 3. Execute step (choose one): + + IF step.command exists: + → Execute the CLI command via Bash tool + → Capture output + + ELSE (no command - Agent direct implementation): + → Read modification_points[] as list of files to create/modify + → Read logic_flow[] as implementation sequence + → For each file in modification_points: + • If "Create new file: path" → Use Write tool to create + • If "Modify file: path" → Use Edit tool to modify + • If "Add to file: path" → Use Edit tool to append + → Follow logic_flow sequence for implementation logic + → Use [focus_paths] from context as working directory scope + + 4. Store result in [step.output] variable for later steps + 5. Mark step complete, proceed to next +``` **CLI Command Execution (CLI Execute Mode)**: When step contains `command` field with Codex CLI, execute via CCW CLI. 
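The execution flow above (check `depends_on`, substitute `[variable_name]` placeholders, execute, store the result under `output`) can be sketched in Python. This is a minimal illustration, not the agent's actual implementation: the step dicts mirror the Step Structure fields, and the `execute` callable is a hypothetical stand-in for agent-direct or CLI execution.

```python
import re

def run_implementation_approach(steps, context_vars, execute):
    """Illustrative sketch of the implementation_approach loop.

    steps:        list of step dicts (step, description, depends_on, output, command?)
    context_vars: dict of variables produced by pre_analysis (via output_to)
    execute:      callable(step_dict) -> str; stand-in for agent/CLI execution
    """
    done = set()
    for step in sorted(steps, key=lambda s: s["step"]):
        # 1. Check depends_on: every listed step number must already be complete
        missing = [d for d in step.get("depends_on", []) if d not in done]
        if missing:
            raise RuntimeError(f"step {step['step']} waiting on {missing}")

        # 2. Variable substitution: replace [variable_name] with stored values;
        #    unknown names are left as-is
        def subst(text):
            return re.sub(r"\[(\w+)\]",
                          lambda m: str(context_vars.get(m.group(1), m.group(0))),
                          text)
        step = {**step, "description": subst(step["description"])}

        # 3. Execute (agent-direct or CLI command), 4. store result for later steps
        result = execute(step)
        if step.get("output"):
            context_vars[step["output"]] = result
        done.add(step["step"])
    return context_vars
```

For example, a two-step plan where step 2 references step 1's output via `[analysis]` resolves the placeholder before step 2 runs; a step whose `depends_on` lists an incomplete step fails fast.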
For Codex resume: diff --git a/.claude/commands/workflow/execute.md b/.claude/commands/workflow/execute.md index 139719ef..7529f321 100644 --- a/.claude/commands/workflow/execute.md +++ b/.claude/commands/workflow/execute.md @@ -119,14 +119,7 @@ Auto-select and continue to Phase 2. List sessions with metadata and prompt user selection: ```bash -bash(for dir in .workflow/active/WFS-*/; do - session=$(basename "$dir") - project=$(jq -r '.project // "Unknown"' "$dir/workflow-session.json" 2>/dev/null) - total=$(grep -c "^- \[" "$dir/TODO_LIST.md" 2>/dev/null || echo "0") - completed=$(grep -c "^- \[x\]" "$dir/TODO_LIST.md" 2>/dev/null || echo "0") - [ "$total" -gt 0 ] && progress=$((completed * 100 / total)) || progress=0 - echo "${session} | ${project} | ${completed}/${total} tasks (${progress}%)" -done) +bash(for dir in .workflow/active/WFS-*/; do [ -d "$dir" ] || continue; session=$(basename "$dir"); project=$(jq -r '.project // "Unknown"' "${dir}workflow-session.json" 2>/dev/null || echo "Unknown"); total=$(grep -c '^- \[' "${dir}TODO_LIST.md" 2>/dev/null); total=${total:-0}; completed=$(grep -c '^- \[x\]' "${dir}TODO_LIST.md" 2>/dev/null); completed=${completed:-0}; if [ "$total" -gt 0 ]; then progress=$((completed * 100 / total)); else progress=0; fi; echo "$session | $project | $completed/$total tasks ($progress%)"; done) ``` Use AskUserQuestion to present formatted options (max 4 options shown): @@ -414,39 +407,40 @@ TodoWrite({ **Note**: Orchestrator does NOT execute flow control steps - Agent interprets and executes them autonomously. ### Agent Prompt Template -**Dynamic Generation**: Before agent invocation, read task JSON and extract key requirements. +**Path-Based Invocation**: Pass paths and trigger markers; the agent parses the task JSON autonomously. 
```bash Task(subagent_type="{meta.agent}", run_in_background=false, - prompt="Execute task: {task.title} + prompt="Implement task {task.id}: {task.title} - {[FLOW_CONTROL]} + [FLOW_CONTROL] - **Task Objectives** (from task JSON): - {task.context.objective} - - **Expected Deliverables** (from task JSON): - {task.context.deliverables} - - **Quality Standards** (from task JSON): - {task.context.acceptance_criteria} - - **MANDATORY FIRST STEPS**: - 1. Read complete task JSON: {session.task_json_path} - 2. Load context package: {session.context_package_path} - - - **Session Paths**: - - Workflow Dir: {session.workflow_dir} - - TODO List: {session.todo_list_path} - - Summaries Dir: {session.summaries_dir} + **Input**: + - Task JSON: {session.task_json_path} - Context Package: {session.context_package_path} - **Success Criteria**: Complete all objectives, meet all quality standards, deliver all outputs as specified above.", - description="Executing: {task.title}") + **Output Location**: + - Workflow: {session.workflow_dir} + - TODO List: {session.todo_list_path} + - Summaries: {session.summaries_dir} + + **Execution**: Read task JSON → Parse flow_control → Execute implementation_approach → Update TODO_LIST.md → Generate summary", + description="Implement: {task.id}") ``` +**Key Markers**: +- `Implement` keyword: Triggers tech stack detection and guidelines loading +- `[FLOW_CONTROL]`: Triggers flow_control.pre_analysis execution + +**Why Path-Based**: Agent (code-developer.md) autonomously: +- Reads and parses task JSON (requirements, acceptance, flow_control) +- Loads tech stack guidelines based on detected language +- Executes pre_analysis steps and implementation_approach +- Generates structured summary with integration points + +Embedding task content in prompt creates duplication and conflicts with agent's parsing logic. + ### Agent Assignment Rules ``` meta.agent specified → Use specified agent