Mirror of https://github.com/catlog22/Claude-Code-Workflow.git (synced 2026-02-12 02:37:45 +08:00)
fix: improve workflow execute command and agent task parsing
- Fix bash command syntax in execute.md for Windows compatibility
  - Convert multi-line for loop to single-line format
  - Use single quotes for grep patterns to avoid escaping issues
- Simplify agent prompt template in execute.md
  - Path-based invocation instead of embedding task content
  - Add [FLOW_CONTROL] and "Implement" markers to trigger agent behavior
- Enhance code-developer.md Task JSON parsing
  - Add explicit Task JSON field mapping (requirements, acceptance, etc.)
  - Add STEP 1/2/3 execution flow for clarity
  - Add Pre-Analysis Execution format with command-to-tool mapping
  - Define default behavior when command field is missing
  - Priority: Use tech_stack from JSON before auto-detection
- Add execution_config alignment rules to action-planning-agent.md
  - Ensure meta.execution_config matches implementation_approach
  - "agent" mode: no command fields, cli_tool: null
  - "hybrid" mode: some command fields
  - "cli" mode: all steps have command fields
@@ -291,12 +291,34 @@ function computeCliStrategy(task, allTasks) {
- `agent`: Assigned agent for execution
- `execution_group`: Parallelization group ID (tasks with same ID can run concurrently) or `null` for sequential tasks
- `module`: Module identifier for multi-module projects (e.g., `frontend`, `backend`, `shared`) or `null` for single-module
- `execution_config`: CLI execution settings (MUST align with userConfig from task-generate-agent)
  - `method`: Execution method - `agent` (direct), `hybrid` (agent + CLI), `cli` (CLI only)
  - `cli_tool`: Preferred CLI tool - `codex`, `gemini`, `qwen`, `auto`, or `null` (for agent-only)
  - `enable_resume`: Whether to use `--resume` for CLI continuity (default: true)
  - `previous_cli_id`: Previous task's CLI execution ID for resume (populated at runtime)
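For illustration, a hypothetical `meta` block using the fields above might look like this (all values are made up for this example, not taken from the repository):

```json
{
  "meta": {
    "agent": "code-developer",
    "execution_group": null,
    "module": "backend",
    "execution_config": {
      "method": "hybrid",
      "cli_tool": "codex",
      "enable_resume": true,
      "previous_cli_id": null
    }
  }
}
```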

**execution_config Alignment Rules** (MANDATORY):

```
userConfig.executionMethod → meta.execution_config + implementation_approach

"agent" →
  meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false }
  implementation_approach steps: NO command field (agent direct execution)

"hybrid" →
  meta.execution_config = { method: "hybrid", cli_tool: userConfig.preferredCliTool }
  implementation_approach steps: command field ONLY on complex steps

"cli" →
  meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool }
  implementation_approach steps: command field on ALL steps
```

**Consistency Check**: `meta.execution_config.method` MUST match presence of `command` fields:

- `method: "agent"` → 0 steps have command field
- `method: "hybrid"` → some steps have command field
- `method: "cli"` → all steps have command field

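The consistency check above can be sketched as a small validator. This is a minimal illustration assuming tasks are plain dicts shaped like the documented fields; the function name `check_execution_config` and the strict reading of "hybrid" (at least one step with a command, but not all) are assumptions, not part of the workflow itself:

```python
def check_execution_config(task: dict) -> bool:
    # Compare meta.execution_config.method against command-field usage
    # in flow_control.implementation_approach.
    method = task["meta"]["execution_config"]["method"]
    steps = task["flow_control"]["implementation_approach"]
    with_cmd = sum(1 for step in steps if step.get("command"))
    if method == "agent":
        return with_cmd == 0               # no step may carry a command
    if method == "cli":
        return with_cmd == len(steps)      # every step must carry a command
    if method == "hybrid":
        return 0 < with_cmd < len(steps)   # assumes "some" means at least one, not all
    return False
```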
**Test Task Extensions** (for type="test-gen" or type="test-fix"):

```json
@@ -41,11 +41,43 @@ Read(.workflow/active/${SESSION_ID}/.process/context-package.json)
# Returns parsed JSON with brainstorm_artifacts, focus_paths, etc.
```

**Task JSON Parsing** (when task JSON path provided):

Read task JSON and extract structured context:

```
Task JSON Fields:
├── context.requirements[]           → What to implement (list of requirements)
├── context.acceptance[]             → How to verify (validation commands)
├── context.focus_paths[]            → Where to focus (directories/files)
├── context.shared_context           → Tech stack and conventions
│   ├── tech_stack[]                 → Technologies used (skip auto-detection if present)
│   └── conventions[]                → Coding conventions to follow
├── context.artifacts[]              → Additional context sources
└── flow_control                     → Execution instructions
    ├── pre_analysis[]               → Context gathering steps (execute first)
    ├── implementation_approach[]    → Implementation steps (execute sequentially)
    └── target_files[]               → Files to create/modify
```

**Parsing Priority**:

1. Read task JSON from provided path
2. Extract `context.requirements` as implementation goals
3. Extract `context.acceptance` as verification criteria
4. If `context.shared_context.tech_stack` exists → skip auto-detection, use provided stack
5. Process `flow_control` if present

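The extraction described above can be sketched as follows; a minimal sketch assuming the Task JSON layout shown in the tree (the helper name `load_task_context` is illustrative):

```python
import json

def load_task_context(task_json_path: str) -> dict:
    """Extract the Task JSON fields listed above into a flat context dict."""
    with open(task_json_path, encoding="utf-8") as f:
        task = json.load(f)
    ctx = task.get("context", {})
    shared = ctx.get("shared_context", {})
    return {
        "requirements": ctx.get("requirements", []),        # implementation goals
        "acceptance_criteria": ctx.get("acceptance", []),   # verification commands
        "focus_paths": ctx.get("focus_paths", []),
        "tech_stack": shared.get("tech_stack", []),         # non-empty → skip auto-detection
        "conventions": shared.get("conventions", []),
        "flow_control": task.get("flow_control", {}),
    }
```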
**Pre-Analysis: Smart Tech Stack Loading**:
```bash
# Priority 1: Use tech_stack from task JSON if available
if [[ -n "$TASK_JSON_TECH_STACK" ]]; then
  # Map tech stack names to guideline files
  # e.g., ["FastAPI", "SQLAlchemy"] → python-dev.md
  case "$TASK_JSON_TECH_STACK" in
    *FastAPI*|*Django*|*SQLAlchemy*) TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/python-dev.md) ;;
    *React*|*Next*) TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/react-dev.md) ;;
    *TypeScript*) TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/typescript-dev.md) ;;
  esac

# Priority 2: Auto-detect from file extensions (fallback)
elif [[ "$TASK_DESCRIPTION" =~ (implement|create|build|develop|code|write|add|fix|refactor) ]]; then
  if ls *.ts *.tsx 2>/dev/null | head -1; then
    TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/typescript-dev.md)
  elif grep -q "react" package.json 2>/dev/null; then
@@ -64,28 +96,65 @@ fi

**Context Evaluation**:

```
STEP 1: Parse Task JSON (if path provided)
  → Read task JSON file from provided path
  → Extract and store in memory:
    • [requirements] ← context.requirements[]
    • [acceptance_criteria] ← context.acceptance[]
    • [tech_stack] ← context.shared_context.tech_stack[] (skip auto-detection if present)
    • [conventions] ← context.shared_context.conventions[]
    • [focus_paths] ← context.focus_paths[]

STEP 2: Execute Pre-Analysis (if flow_control.pre_analysis exists in Task JSON)
  → Execute each pre_analysis step sequentially
  → Store each step's output in memory using output_to variable name
  → These variables are available for STEP 3

STEP 3: Execute Implementation (choose one path)
  IF flow_control.implementation_approach exists:
    → Follow implementation_approach steps sequentially
    → Substitute [variable_name] placeholders with stored values BEFORE execution
  ELSE:
    → Use [requirements] as implementation goals
    → Use [conventions] as coding guidelines
    → Modify files in [focus_paths]
    → Verify against [acceptance_criteria] on completion
```

**Pre-Analysis Execution** (flow_control.pre_analysis):

```
For each step in pre_analysis[]:
  step.step      → Step identifier (string name)
  step.action    → Description of what to do
  step.commands  → Array of commands to execute (see Command-to-Tool Mapping)
  step.output_to → Variable name to store results in memory
  step.on_error  → Error handling: "fail" (stop) | "continue" (log and proceed) | "skip" (ignore)

Execution Flow:
1. For each step in order:
2. For each command in step.commands[]:
3. Parse command format → Map to actual tool
4. Execute tool → Capture output
5. Concatenate all outputs → Store in [step.output_to] variable
6. Continue to next step (or handle error per on_error)
```

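The execution flow above can be sketched as a loop; a minimal illustration in which `run_command` is a caller-supplied callable standing in for the actual tool dispatch (the function names here are assumptions, not part of the agent spec):

```python
def run_pre_analysis(pre_analysis: list, run_command, memory: dict) -> dict:
    """Execute pre_analysis steps per the flow above.

    run_command maps a command string (e.g. "Read(path)") to its output;
    actual tool dispatch is stubbed out of this sketch.
    """
    for step in pre_analysis:
        outputs = []
        try:
            for command in step.get("commands", []):
                outputs.append(run_command(command))
        except Exception as exc:
            policy = step.get("on_error", "fail")
            if policy == "fail":
                raise
            if policy == "continue":
                outputs.append(f"[error] {exc}")  # log and proceed
            # "skip": ignore this step's failure entirely
        if step.get("output_to"):
            # Concatenate all outputs and store under the output_to name
            memory[step["output_to"]] = "\n".join(outputs)
    return memory
```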

**Command-to-Tool Mapping** (explicit tool bindings):

```
Command Format           → Actual Tool Call
─────────────────────────────────────────────────────
"Read(path)"             → Read tool: Read(file_path=path)
"bash(command)"          → Bash tool: Bash(command=command)
"Search(pattern,path)"   → Grep tool: Grep(pattern=pattern, path=path)
"Glob(pattern)"          → Glob tool: Glob(pattern=pattern)
"mcp__xxx__yyy(args)"    → MCP tool: mcp__xxx__yyy(args)

Example Parsing:
"Read(backend/app/models/simulation.py)"
  → Tool: Read
  → Parameter: file_path = "backend/app/models/simulation.py"
  → Execute: Read(file_path="backend/app/models/simulation.py")
  → Store output in [output_to] variable
```

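A parser for this command format could look like the following; a minimal sketch covering only the bindings in the table above (the `parse_command` name is illustrative, and argument splitting is deliberately naive — it does not handle nested parentheses or commas inside arguments):

```python
import re

def parse_command(command: str):
    """Map a command string to a (tool, params) pair per the table above."""
    match = re.fullmatch(r"(\w+)\((.*)\)", command.strip())
    if not match:
        raise ValueError(f"Unrecognized command format: {command}")
    name, args = match.group(1), match.group(2)
    if name == "Read":
        return "Read", {"file_path": args}
    if name == "bash":
        return "Bash", {"command": args}
    if name == "Search":
        pattern, _, path = args.partition(",")
        return "Grep", {"pattern": pattern.strip(), "path": path.strip()}
    if name == "Glob":
        return "Glob", {"pattern": args}
    if name.startswith("mcp__"):
        return name, {"args": args}
    raise ValueError(f"Unknown tool: {name}")
```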
### Module Verification Guidelines
@@ -102,24 +171,44 @@ ELIF context insufficient OR task has flow control marker:
**Implementation Approach Execution**:
When task JSON contains `flow_control.implementation_approach` array:

**Step Structure**:
```
step                → Unique identifier (1, 2, 3...)
title               → Step title for logging
description         → What to implement (may contain [variable_name] placeholders)
modification_points → Specific code changes required (files to create/modify)
logic_flow          → Business logic sequence to implement
command             → (Optional) CLI command to execute
depends_on          → Array of step numbers that must complete first
output              → Variable name to store this step's result
```

**Execution Flow**:
```
FOR each step in implementation_approach[] (ordered by step number):
  1. Check depends_on: Wait for all listed step numbers to complete
  2. Variable Substitution: Replace [variable_name] in description/modification_points
     with values stored from previous steps' output
  3. Execute step (choose one):

     IF step.command exists:
       → Execute the CLI command via Bash tool
       → Capture output

     ELSE (no command - Agent direct implementation):
       → Read modification_points[] as list of files to create/modify
       → Read logic_flow[] as implementation sequence
       → For each file in modification_points:
         • If "Create new file: path" → Use Write tool to create
         • If "Modify file: path" → Use Edit tool to modify
         • If "Add to file: path" → Use Edit tool to append
       → Follow logic_flow sequence for implementation logic
       → Use [focus_paths] from context as working directory scope

  4. Store result in [step.output] variable for later steps
  5. Mark step complete, proceed to next
```

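The ordering, dependency check, and variable substitution above can be sketched as follows. This is an illustration only: `execute_step` is a caller-supplied stand-in for the agent's tooling (CLI command or direct edits), substitution is shown for `description` only, and the helper name is an assumption:

```python
import re

def run_implementation(steps: list, execute_step, memory: dict) -> dict:
    """Run implementation_approach steps in order, per the flow above."""
    completed = set()
    for step in sorted(steps, key=lambda s: s["step"]):
        # 1. Check depends_on: all listed step numbers must be done
        missing = set(step.get("depends_on", [])) - completed
        if missing:
            raise RuntimeError(f"Step {step['step']} missing dependencies: {missing}")
        # 2. Substitute [variable_name] placeholders from earlier steps' outputs
        #    (shown for description; modification_points would be handled the same way)
        desc = step.get("description", "")
        step = {**step, "description": re.sub(
            r"\[(\w+)\]",
            lambda m: str(memory.get(m.group(1), m.group(0))),
            desc)}
        # 3. Execute the step (command or direct implementation, per the flow)
        result = execute_step(step)
        # 4. Store the result under the step's output variable name
        if step.get("output"):
            memory[step["output"]] = result
        # 5. Mark complete, proceed to next
        completed.add(step["step"])
    return memory
```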
**CLI Command Execution (CLI Execute Mode)**:
When step contains `command` field with Codex CLI, execute via CCW CLI. For Codex resume:

@@ -119,14 +119,7 @@ Auto-select and continue to Phase 2.

List sessions with metadata and prompt user selection:
```bash
bash(for dir in .workflow/active/WFS-*/; do [ -d "$dir" ] || continue; session=$(basename "$dir"); project=$(jq -r '.project // "Unknown"' "${dir}workflow-session.json" 2>/dev/null || echo "Unknown"); total=$(grep -c '^\- \[' "${dir}TODO_LIST.md" 2>/dev/null || echo 0); completed=$(grep -c '^\- \[x\]' "${dir}TODO_LIST.md" 2>/dev/null || echo 0); if [ "$total" -gt 0 ]; then progress=$((completed * 100 / total)); else progress=0; fi; echo "$session | $project | $completed/$total tasks ($progress%)"; done)
```

Use AskUserQuestion to present formatted options (max 4 options shown):
@@ -414,39 +407,40 @@ TodoWrite({

**Note**: Orchestrator does NOT execute flow control steps - Agent interprets and executes them autonomously.

### Agent Prompt Template
**Path-Based Invocation**: Pass paths and trigger markers, let agent parse task JSON autonomously.

```bash
Task(subagent_type="{meta.agent}",
     run_in_background=false,
     prompt="Implement task {task.id}: {task.title}

[FLOW_CONTROL]

**Input**:
- Task JSON: {session.task_json_path}
- Context Package: {session.context_package_path}

**Output Location**:
- Workflow: {session.workflow_dir}
- TODO List: {session.todo_list_path}
- Summaries: {session.summaries_dir}

**Execution**: Read task JSON → Parse flow_control → Execute implementation_approach → Update TODO_LIST.md → Generate summary",
     description="Implement: {task.id}")
```

**Key Markers**:

- `Implement` keyword: Triggers tech stack detection and guidelines loading
- `[FLOW_CONTROL]`: Triggers flow_control.pre_analysis execution

**Why Path-Based**: Agent (code-developer.md) autonomously:

- Reads and parses task JSON (requirements, acceptance, flow_control)
- Loads tech stack guidelines based on detected language
- Executes pre_analysis steps and implementation_approach
- Generates structured summary with integration points

Embedding task content in prompt creates duplication and conflicts with agent's parsing logic.

### Agent Assignment Rules
```
meta.agent specified → Use specified agent