refactor(agents): deduplicate agent invocation prompts and strengthen project artifact consumption

Remove duplicated content from caller prompts that repeat agent spec definitions:
- EXECUTION METHOD MAPPING, CLI EXECUTION ID strategies, Quantification Rules
- MANDATORY FIRST STEPS (now internalized in cli-explore-agent)
- relevant_files schema details, Phase execution flow re-specifications
- plan.json/task JSON field listings (reference schemas instead)

Strengthen project-tech.json and project-guidelines.json consumption:
- context-search-agent: add Phase 1.1b mandatory project context loading
- cli-explore-agent: add Autonomous Initialization with 4 self-contained steps
- action-planning-agent: strengthen Phase 1 Step 0 with detailed usage guidance
- All caller prompts: add/reinforce PROJECT CONTEXT (MANDATORY) sections

Agent specs modified: action-planning-agent, cli-explore-agent, context-search-agent
Caller prompts slimmed: 04-task-generation, 05-tdd-task-generation,
  02-context-gathering, 01-lite-plan, collaborative-plan-with-file,
  05-test-cycle-execute
Author: catlog22
Date: 2026-02-25 18:44:51 +08:00
parent 5c51315a7e
commit db5797faa3
9 changed files with 213 additions and 330 deletions

View File

@@ -56,7 +56,23 @@ color: yellow
**Step-by-step execution**:
```
0. Load planning notes → Extract phase-level constraints (NEW)
0. Load project context (MANDATORY - from init.md products)
a. Read .workflow/project-tech.json (if exists)
→ tech_stack, architecture_type, key_components, build_system, test_framework
→ Usage: Populate plan.json shared_context, set correct build/test commands,
align task tech choices with actual project stack
→ If missing: Fall back to context-package.project_context fields
b. Read .workflow/project-guidelines.json (if exists)
→ coding_conventions, naming_rules, forbidden_patterns, quality_gates, custom_constraints
→ Usage: Apply as HARD CONSTRAINTS on all tasks — implementation steps,
acceptance criteria, and convergence.verification MUST respect these rules
→ If empty/missing: No additional constraints (proceed normally)
NOTE: These files provide project-level context that supplements (not replaces)
session-specific context from planning-notes.md and context-package.json.
1. Load planning notes → Extract phase-level constraints (NEW)
Commands: Read('.workflow/active/{session-id}/planning-notes.md')
Output: Consolidated constraints from all workflow phases
Structure:
@@ -67,16 +83,16 @@ color: yellow
USAGE: This is the PRIMARY source of constraints. All task generation MUST respect these constraints.
1. Load session metadata → Extract user input
2. Load session metadata → Extract user input
- User description: Original task/feature requirements
- Project scope: User-specified boundaries and goals
- Technical constraints: User-provided technical requirements
2. Load context package → Extract structured context
3. Load context package → Extract structured context
Commands: Read({{context_package_path}})
Output: Complete context package object
3. Check existing plan (if resuming)
4. Check existing plan (if resuming)
- If IMPL_PLAN.md exists: Read for continuity
- If task JSONs exist: Load for context
@@ -989,7 +1005,8 @@ Use `analysis_results.complexity` or task count to determine structure:
### 3.4 Guidelines Checklist
**ALWAYS:**
- **Load planning-notes.md FIRST**: Read planning-notes.md before context-package.json. Use its Consolidated Constraints as primary constraint source for all task generation
- **Load project context FIRST**: Read `.workflow/project-tech.json` and `.workflow/project-guidelines.json` before any session-specific files. Apply project-guidelines as hard constraints on all tasks
- **Load planning-notes.md SECOND**: Read planning-notes.md before context-package.json. Use its Consolidated Constraints as primary constraint source for all task generation
- **Record N+1 Context**: Update `## N+1 Context` section with key decisions and deferred items
- **Search Tool Priority**: ACE (`mcp__ace-tool__search_context`) → CCW (`mcp__ccw-tools__smart_search`) / Built-in (`Grep`, `Glob`, `Read`)
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points

View File

@@ -39,6 +39,36 @@ Phase 4: Output Generation
## Phase 1: Task Understanding
### Autonomous Initialization (execute before any analysis)
**These steps are MANDATORY and self-contained** -- the agent executes them regardless of caller prompt content. Callers do NOT need to repeat these instructions.
1. **Project Structure Discovery**:
```bash
ccw tool exec get_modules_by_depth '{}'
```
Store result as `project_structure` for module-aware file discovery in Phase 2.
2. **Output Schema Loading** (if output file path specified in prompt):
- Exploration output → `cat ~/.ccw/workflows/cli-templates/schemas/explore-json-schema.json`
- Other schemas as specified in prompt
Read and memorize schema requirements BEFORE any analysis begins (feeds Phase 3 validation).
3. **Project Context Loading** (from init.md products):
- Read `.workflow/project-tech.json` (if exists):
- Extract: `tech_stack`, `architecture`, `key_components`, `overview`
- Usage: Align analysis scope and patterns with actual project technology choices
- Read `.workflow/project-guidelines.json` (if exists):
- Extract: `conventions`, `constraints`, `quality_rules`, `learnings`
- Usage: Apply as constraints during pattern analysis, integration point evaluation, and recommendations
- If either file does not exist, proceed with fresh analysis (no error).
4. **Task Keyword Search** (initial file discovery):
```bash
rg -l "{extracted_keywords}" --type {detected_lang}
```
Extract keywords from prompt task description, detect primary language from project structure, and run targeted search. Store results as `keyword_files` for Phase 2 scoping.
**Extract from prompt**:
- Analysis target and scope
- Analysis mode (quick-scan / deep-scan / dependency-map)

View File

@@ -75,6 +75,23 @@ if (file_exists(contextPackagePath)) {
}
```
**1.1b Project Context Loading** (MANDATORY):
```javascript
// Load project-level context (from workflow:init products)
// These provide foundational constraints for ALL context gathering
const projectTech = file_exists('.workflow/project-tech.json')
? JSON.parse(Read('.workflow/project-tech.json')) // tech_stack, architecture_type, key_components, build_system, test_framework
: null;
const projectGuidelines = file_exists('.workflow/project-guidelines.json')
? JSON.parse(Read('.workflow/project-guidelines.json')) // coding_conventions, naming_rules, forbidden_patterns, quality_gates
: null;
// Usage:
// - projectTech → Populate project_context fields (tech_stack, architecture_patterns)
// - projectGuidelines → Apply as constraints during relevance scoring and conflict detection
// - If missing: Proceed with fresh analysis (discover from codebase)
```
**1.2 Foundation Setup**:
```javascript
// 1. Initialize CodexLens (if available)
@@ -275,6 +292,10 @@ score = (0.4 × direct_match) + // Filename/path match
(0.1 × dependency_link) // Connection strength
// Filter: Include only score > 0.5
// Apply projectGuidelines constraints (from 1.1b) when available:
// - Boost files matching projectGuidelines.quality_gates patterns
// - Penalize files matching projectGuidelines.forbidden_patterns
```
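The guideline adjustments from 1.1b can be layered onto the base score roughly like this (an illustrative sketch: `matchesAny` and the ±0.1 adjustment sizes are assumptions, not part of the spec, and guideline entries are treated as regex patterns for simplicity):

```javascript
// Hypothetical helper: does any guideline pattern match this path?
function matchesAny(path, patterns = []) {
  return patterns.some((p) => new RegExp(p).test(path));
}

// Apply 1.1b guideline adjustments on top of the base relevance score.
function applyGuidelineAdjustments(path, baseScore, projectGuidelines) {
  if (!projectGuidelines) return baseScore; // 1.1b files missing: no-op
  let score = baseScore;
  if (matchesAny(path, projectGuidelines.quality_gates)) score += 0.1;      // boost
  if (matchesAny(path, projectGuidelines.forbidden_patterns)) score -= 0.1; // penalize
  return score;
}
```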
**3.2 Dependency Graph**
@@ -292,19 +313,23 @@ Merge with conflict resolution:
```javascript
const context = {
// Priority: Project docs > Existing code > Web examples
architecture: ref_docs.patterns || code.structure,
// Priority: projectTech/projectGuidelines (1.1b) > Project docs > Existing code > Web examples
architecture: projectTech?.architecture_type || ref_docs.patterns || code.structure,
conventions: {
naming: ref_docs.standards || code.actual_patterns,
error_handling: ref_docs.standards || code.patterns || web.best_practices
naming: projectGuidelines?.naming_rules || ref_docs.standards || code.actual_patterns,
error_handling: ref_docs.standards || code.patterns || web.best_practices,
forbidden_patterns: projectGuidelines?.forbidden_patterns || [],
quality_gates: projectGuidelines?.quality_gates || []
},
tech_stack: {
// Actual (package.json) takes precedence
language: code.actual.language,
frameworks: merge_unique([ref_docs.declared, code.actual]),
libraries: code.actual.libraries
// projectTech provides authoritative baseline; actual (package.json) fills gaps
language: projectTech?.tech_stack?.language || code.actual.language,
frameworks: merge_unique([projectTech?.tech_stack?.frameworks, ref_docs.declared, code.actual]),
libraries: merge_unique([projectTech?.tech_stack?.libraries, code.actual.libraries]),
build_system: projectTech?.build_system || code.actual.build_system,
test_framework: projectTech?.test_framework || code.actual.test_framework
},
// Web examples fill gaps
@@ -314,9 +339,9 @@ const context = {
```
**Conflict Resolution**:
1. Architecture: Docs > Code > Web
2. Conventions: Declared > Actual > Industry
3. Tech Stack: Actual (package.json) > Declared
1. Architecture: projectTech > Docs > Code > Web
2. Conventions: projectGuidelines > Declared > Actual > Industry
3. Tech Stack: projectTech > Actual (package.json) > Declared
4. Missing: Use web examples
**3.5 Brainstorm Artifacts Integration**
@@ -381,6 +406,8 @@ Calculate risk level based on:
- Existing file count (<5: low, 5-15: medium, >15: high)
- API/architecture/data model changes
- Breaking changes identification
- Violations of projectGuidelines.forbidden_patterns (from 1.1b, if available)
- Deviations from projectGuidelines.coding_conventions (from 1.1b, if available)
**3.7 Context Packaging & Output**

View File

@@ -205,6 +205,11 @@ Task(
1. **Prioritize Latest Documentation**: Search for and reference latest README, design docs, architecture guides when available
2. **Handle Ambiguities**: When requirement ambiguities exist, ask user for clarification (use AskUserQuestion) instead of assuming interpretations
### Project Context (MANDATORY)
Read and incorporate:
- \`.workflow/project-tech.json\` (if exists): Technology stack, architecture
- \`.workflow/project-guidelines.json\` (if exists): Constraints, conventions -- apply as HARD CONSTRAINTS on sub-domain splitting and plan structure
### Input Requirements
${taskDescription}
@@ -349,21 +354,19 @@ subDomains.map(sub =>
**TASK ID Range**: ${sub.task_id_range[0]}-${sub.task_id_range[1]}
**Session**: ${sessionId}
### Project Context (MANDATORY)
Read and incorporate:
- \`.workflow/project-tech.json\` (if exists): Technology stack, architecture
- \`.workflow/project-guidelines.json\` (if exists): Constraints, conventions -- apply as HARD CONSTRAINTS
## Dual Output Tasks
### Task 1: Generate Two-Layer Plan Output
Output: ${sessionFolder}/agents/${sub.focus_area}/plan.json (overview with task_ids[])
Output: ${sessionFolder}/agents/${sub.focus_area}/.task/TASK-*.json (independent task files)
Output: ${sessionFolder}/agents/${sub.focus_area}/plan.json
Output: ${sessionFolder}/agents/${sub.focus_area}/.task/TASK-*.json
Schema (plan): ~/.ccw/workflows/cli-templates/schemas/plan-overview-base-schema.json
Schema (tasks): ~/.ccw/workflows/cli-templates/schemas/task-schema.json
**Two-Layer Output Format**:
- plan.json: Overview with task_ids[] referencing .task/ files (NO tasks[] array)
- .task/TASK-*.json: Independent task files following task-schema.json
- plan.json required: summary, approach, task_ids, task_count, _metadata (with plan_type)
- Task files required: id, title, description, depends_on, convergence (with criteria[])
- Task fields: files[].change (not modification_points), convergence.criteria (not acceptance), test (not verification)
### Task 2: Sync Summary to plan-note.md
**Locate Your Sections**:

View File

@@ -202,13 +202,8 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
- **Task Description**: ${task_description}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
3. Execute: cat ~/.ccw/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)
4. Read: .workflow/project-tech.json (technology stack and architecture context)
5. Read: .workflow/project-guidelines.json (user-defined constraints and conventions)
## Agent Initialization
The cli-explore-agent autonomously handles project structure discovery, schema loading, project context loading (project-tech.json, project-guidelines.json), and keyword search; these steps execute automatically.
## Exploration Strategy (${angle} focus)
@@ -228,27 +223,16 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
## Expected Output
**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 3, follow schema exactly
**Schema Reference**: explore-json-schema.json (auto-loaded by agent during initialization)
**Required Fields** (all ${angle} focused):
- project_structure: Modules/architecture relevant to ${angle}
- relevant_files: Files affected from ${angle} perspective
**MANDATORY**: Every file MUST use structured object format with ALL required fields:
\`[{path: "src/file.ts", relevance: 0.85, rationale: "Contains AuthService.login() - entry point for JWT token generation", role: "modify_target", discovery_source: "bash-scan", key_symbols: ["AuthService", "login"]}]\`
- **rationale** (required): Specific selection basis tied to ${angle} topic (>10 chars, not generic)
- **role** (required): modify_target|dependency|pattern_reference|test_target|type_definition|integration_point|config|context_only
- **discovery_source** (recommended): bash-scan|cli-analysis|ace-search|dependency-trace|manual
- **key_symbols** (recommended): Key functions/classes/types in the file relevant to the task
- Scores: 0.7+ high priority, 0.5-0.7 medium, <0.5 low
- patterns: ${angle}-related patterns to follow
- dependencies: Dependencies relevant to ${angle}
- integration_points: Where to integrate from ${angle} viewpoint (include file:line locations)
- constraints: ${angle}-specific limitations/conventions
- clarification_needs: ${angle}-related ambiguities (options array + recommended index)
- Follow explore-json-schema.json exactly (auto-loaded by agent)
- All fields scoped to ${angle} perspective
- Ensure rationale is specific and >10 chars (not generic)
- Include file:line locations in integration_points
- _metadata.exploration_angle: "${angle}"
## Success Criteria
- [ ] Schema obtained via cat explore-json-schema.json
- [ ] get_modules_by_depth.sh executed
- [ ] At least 3 relevant files identified with specific rationale + role
- [ ] Every file has rationale >10 chars (not generic like "Related to ${angle}")
@@ -528,9 +512,9 @@ Generate plan.json and .task/*.json following the schema obtained above. Key con
- plan.json: Overview with task_ids[] referencing .task/ files (NO tasks[] array)
- .task/TASK-*.json: Independent task files following task-schema.json
plan.json required fields: summary, approach, task_ids, task_count, _metadata (with plan_type: "feature")
Each task file required fields: id, title, description, depends_on, convergence (with criteria[])
Task fields use: files[].change (not modification_points), convergence.criteria (not acceptance), test (not verification)
Follow plan-overview-base-schema.json (loaded via cat command above) for plan.json structure.
Follow task-schema.json for .task/TASK-*.json structure.
Note: Use files[].change (not modification_points), convergence.criteria (not acceptance).
## Task Grouping Rules
1. **Group by feature**: All changes for one feature = one task (even if 3-5 files)

View File

@@ -96,16 +96,14 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/exploration-${angle}.json
## MANDATORY FIRST STEPS (Execute by Agent)
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
3. Execute: cat ~/.ccw/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)
## Agent Initialization
The cli-explore-agent autonomously executes project structure discovery, schema loading, project context loading, and keyword search as part of its Phase 1 initialization. No manual steps needed.
## Exploration Strategy (${angle} focus)
**Step 1: Structural Scan** (Bash)
- get_modules_by_depth.sh -> identify modules related to ${angle}
- find/rg -> locate files relevant to ${angle} aspect
- Identify modules related to ${angle}
- Locate files relevant to ${angle} aspect
- Analyze imports/dependencies from ${angle} perspective
**Step 2: Semantic Analysis** (Gemini CLI)
@@ -121,28 +119,13 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
**File**: ${sessionFolder}/exploration-${angle}.json
**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 3, follow schema exactly
**Required Fields** (all ${angle} focused):
- project_structure: Modules/architecture relevant to ${angle}
- relevant_files: Files affected from ${angle} perspective
**MANDATORY**: Every file MUST use structured object format with ALL required fields:
[{path: "src/file.ts", relevance: 0.85, rationale: "Contains AuthService.login()", role: "modify_target", discovery_source: "bash-scan", key_symbols: ["AuthService", "login"]}]
- **rationale** (required): Specific selection basis tied to ${angle} topic (>10 chars, not generic)
- **role** (required): modify_target|dependency|pattern_reference|test_target|type_definition|integration_point|config|context_only
- **discovery_source** (recommended): bash-scan|cli-analysis|ace-search|dependency-trace|manual
- **key_symbols** (recommended): Key functions/classes/types in the file relevant to the task
- Scores: 0.7+ high priority, 0.5-0.7 medium, <0.5 low
- patterns: ${angle}-related patterns to follow
- dependencies: Dependencies relevant to ${angle}
- integration_points: Where to integrate from ${angle} viewpoint (include file:line locations)
- constraints: ${angle}-specific limitations/conventions
- clarification_needs: ${angle}-related ambiguities (options array + recommended index)
- _metadata.exploration_angle: "${angle}"
- Follow explore-json-schema.json exactly (loaded during agent initialization)
- All fields scoped to ${angle} perspective
- Ensure rationale is specific to ${angle} topic (not generic)
- Include file:line locations in integration_points
## Success Criteria
- [ ] Schema obtained via cat explore-json-schema.json
- [ ] get_modules_by_depth.sh executed
- [ ] At least 3 relevant files identified with ${angle} rationale
- [ ] Patterns are actionable (code examples, not generic advice)
- [ ] Integration points include file:line locations
@@ -217,84 +200,34 @@ This is the PRIMARY context source - all subsequent analysis must align with use
- **Complexity**: ${complexity}
## Mission
Execute complete context-search-agent workflow for implementation planning:
Execute complete context-search-agent workflow (Phase 1-3) for implementation planning.
### Phase 1: Initialization & Pre-Analysis
1. **Project State Loading**:
- Read and parse .workflow/project-tech.json. Use its overview section as the foundational project_context.
- Read and parse .workflow/project-guidelines.json. Load conventions, constraints, and learnings into a project_guidelines section.
- If files don't exist, proceed with fresh analysis.
2. **Detection**: Check for existing context-package (early exit if valid)
3. **Foundation**: Initialize CodexLens, get project structure, load docs
4. **Analysis**: Extract keywords, determine scope, classify complexity
Key emphasis:
- Load project-tech.json and project-guidelines.json FIRST (per your spec Phase 1.1b)
- Synthesize exploration results with project context
- Generate prioritized_context with user_intent alignment
- Apply project-guidelines.json constraints during conflict detection
### Phase 2: Multi-Source Context Discovery
Execute all discovery tracks (WITH USER INTENT INTEGRATION):
- **Track -1**: User Intent & Priority Foundation (EXECUTE FIRST)
- Load user intent (GOAL, KEY_CONSTRAINTS) from session input
- Map user requirements to codebase entities (files, modules, patterns)
- Establish baseline priority scores based on user goal alignment
- Output: user_intent_mapping.json with preliminary priority scores
- **Track 0**: Exploration Synthesis (load explorations-manifest.json, prioritize critical_files, deduplicate patterns/integration_points)
- **Track 1**: Historical archive analysis (query manifest.json for lessons learned)
- **Track 2**: Reference documentation (CLAUDE.md, architecture docs)
- **Track 3**: Web examples (use Exa MCP for unfamiliar tech/APIs)
- **Track 4**: Codebase analysis (5-layer discovery: files, content, patterns, deps, config/tests)
### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build dependency graph
2. **Synthesize 5-source data**: Merge findings from all sources
- Priority order: User Intent > Archive > Docs > Exploration > Code > Web
- **Prioritize the context from project-tech.json** for architecture and tech stack
3. **Context Priority Sorting**:
a. Combine scores from Track -1 (user intent alignment) + relevance scores + exploration critical_files
b. Classify files into priority tiers:
- **Critical** (score >= 0.85): Directly mentioned in user goal OR exploration critical_files
- **High** (0.70-0.84): Key dependencies, patterns required for goal
- **Medium** (0.50-0.69): Supporting files, indirect dependencies
- **Low** (< 0.50): Contextual awareness only
c. Generate dependency_order: Based on dependency graph + user goal sequence
d. Document sorting_rationale: Explain prioritization logic
4. **Populate project_context**: Directly use the overview from project-tech.json
5. **Populate project_guidelines**: Load from project-guidelines.json
6. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
7. Perform conflict detection with risk assessment
8. **Inject historical conflicts** from archive analysis into conflict_detection
9. **Generate prioritized_context section**:
{
"prioritized_context": {
"user_intent": { "goal": "...", "scope": "...", "key_constraints": ["..."] },
"priority_tiers": {
"critical": [{ "path": "...", "relevance": 0.95, "rationale": "..." }],
"high": [...], "medium": [...], "low": [...]
},
"dependency_order": ["module1", "module2", "module3"],
"sorting_rationale": "Based on user goal alignment, exploration critical files, and dependency graph"
}
}
10. Generate and validate context-package.json with prioritized_context field
Input priority: User Intent > project-tech.json > Exploration results > Code discovery > Web examples
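The tier thresholds from step 3b above reduce to a small classifier (a minimal sketch; the function name is illustrative):

```javascript
// Classify a combined relevance score into the Phase 3 priority tiers.
function classifyTier(score) {
  if (score >= 0.85) return "critical"; // user goal / exploration critical_files
  if (score >= 0.70) return "high";     // key dependencies, required patterns
  if (score >= 0.50) return "medium";   // supporting files, indirect deps
  return "low";                         // contextual awareness only
}
```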
## Output Requirements
Complete context-package.json with:
- **metadata**: task_description, keywords, complexity, tech_stack, session_id
- **project_context**: description, technology_stack, architecture, key_components (from project-tech.json)
- **project_guidelines**: {conventions, constraints, quality_rules, learnings} (from project-guidelines.json)
- **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
- **dependencies**: {internal[], external[]} with dependency graph
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
- **conflict_detection**: {risk_level, risk_factors, affected_modules[], mitigation_strategy, historical_conflicts[]}
- **exploration_results**: {manifest_path, exploration_count, angles, explorations[], aggregated_insights}
- **prioritized_context**: {user_intent, priority_tiers{critical, high, medium, low}, dependency_order[], sorting_rationale}
## Quality Validation
Before completion verify:
- [ ] Valid JSON format with all required fields
- [ ] File relevance accuracy >80%
- [ ] Dependency graph complete (max 2 transitive levels)
- [ ] Conflict risk level calculated correctly
- [ ] No sensitive data exposed
- [ ] Total files <= 50 (prioritize high-relevance)
Complete context-package.json must include a **prioritized_context** section:
```json
{
"prioritized_context": {
"user_intent": { "goal": "...", "scope": "...", "key_constraints": ["..."] },
"priority_tiers": {
"critical": [{ "path": "...", "relevance": 0.95, "rationale": "..." }],
"high": [], "medium": [], "low": []
},
"dependency_order": ["module1", "module2", "module3"],
"sorting_rationale": "Based on user goal alignment, exploration critical files, and dependency graph"
}
}
```
All other required fields (metadata, project_context, project_guidelines, assets, dependencies, brainstorm_artifacts, conflict_detection, exploration_results) follow context-search-agent standard output schema.
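A validation pass over the finished package might look like this (a hedged sketch covering only a subset of the quality checks; field names follow the output requirements above, the check set is illustrative):

```javascript
// Check a context-package object against a few of the quality gates:
// prioritized_context present, total files <= 50, relevance scores numeric.
function validateContextPackage(pkg) {
  const errors = [];
  const tiers = pkg.prioritized_context?.priority_tiers;
  if (!tiers) errors.push("missing prioritized_context.priority_tiers");
  const allFiles = ["critical", "high", "medium", "low"]
    .flatMap((t) => tiers?.[t] || []);
  if (allFiles.length > 50) errors.push("total files exceed 50");
  for (const f of allFiles) {
    if (typeof f.relevance !== "number") errors.push(`${f.path}: missing relevance`);
  }
  return errors; // empty array = package passes these checks
}
```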
## Planning Notes Record (REQUIRED)
After completing context-package.json, append to planning-notes.md:

View File

@@ -144,6 +144,8 @@ function autoDetectModules(contextPackage, projectRoot) {
**Purpose**: Generate IMPL_PLAN.md, task JSONs, and TODO_LIST.md - planning documents only, NOT code implementation.
**Design Note**: The agent specification (action-planning-agent.md) already defines schemas, strategies, quality standards, and loading algorithms. This prompt provides **instance-specific parameters only** — session paths, user config, and context-package consumption guidance unique to this session.
```javascript
Task(
subagent_type="action-planning-agent",
@@ -151,29 +153,14 @@ Task(
description="Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
prompt=`
## TASK OBJECTIVE
Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) for workflow session
IMPORTANT: This is PLANNING ONLY - you are generating planning documents, NOT implementing code.
CRITICAL: Follow the progressive loading strategy defined in agent specification
## PLANNING NOTES (PHASE 1-3 CONTEXT)
Load: .workflow/active/${sessionId}/planning-notes.md
This document contains:
- User Intent: Original GOAL and KEY_CONSTRAINTS from Phase 1
- Context Findings: Critical files, architecture, and constraints from Phase 2
- Conflict Decisions: Resolved conflicts and planning constraints from Phase 3
- Consolidated Constraints: All constraints from all phases
**USAGE**: Read planning-notes.md FIRST. Use Consolidated Constraints list to guide task sequencing and dependencies.
Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) for workflow session ${sessionId}
## SESSION PATHS
Session Root: .workflow/active/${sessionId}/
Input:
- Session Metadata: .workflow/active/${sessionId}/workflow-session.json
- Planning Notes: .workflow/active/${sessionId}/planning-notes.md
- Context Package: .workflow/active/${sessionId}/.process/context-package.json
Output:
- Task Dir: .workflow/active/${sessionId}/.task/
- IMPL_PLAN: .workflow/active/${sessionId}/IMPL_PLAN.md
@@ -183,43 +170,29 @@ Output:
Session ID: ${sessionId}
MCP Capabilities: {exa_code, exa_web, code_index}
## FEATURE SPECIFICATIONS (conditional)
If context-package has brainstorm_artifacts.feature_index_path:
Feature Index: [from context-package]
Feature Spec Dir: [from context-package]
Else if .workflow/active/${sessionId}/.brainstorming/feature-specs/ exists:
Feature Index: .workflow/active/${sessionId}/.brainstorming/feature-specs/feature-index.json
Feature Spec Dir: .workflow/active/${sessionId}/.brainstorming/feature-specs/
## PROJECT CONTEXT (MANDATORY - load before planning-notes)
These files provide project-level constraints that apply to ALL tasks:
Use feature-index.json to:
- Map features to implementation tasks (feature_id -> task alignment)
- Reference individual feature spec files (spec_path) for detailed requirements
- Identify cross-cutting concerns that span multiple tasks
- Align task priorities with feature priorities
1. **.workflow/project-tech.json** (auto-generated tech analysis)
- Contains: tech_stack, architecture_type, key_components, build_system, test_framework
- Usage: Populate plan.json shared_context, align task tech choices, set correct test commands
- If missing: Fall back to context-package.project_context
If the directory does not exist, skip this section.
2. **.workflow/project-guidelines.json** (user-maintained rules and constraints)
- Contains: coding_conventions, naming_rules, forbidden_patterns, quality_gates, custom_constraints
- Usage: Apply as HARD CONSTRAINTS on all generated tasks — task implementation steps,
acceptance criteria, and convergence.verification MUST respect these guidelines
- If empty/missing: No additional constraints (proceed normally)
Loading order: project-tech.json → project-guidelines.json → planning-notes.md → context-package.json
## USER CONFIGURATION (from Step 4.0)
Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
Supplementary Materials: ${userConfig.supplementaryMaterials}
## EXECUTION METHOD MAPPING
Based on userConfig.executionMethod, set task-level meta.execution_config:
"agent" ->
meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false }
"cli" ->
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
"hybrid" ->
Per-task decision: Simple tasks (<=3 files) -> "agent", Complex tasks (>3 files) -> "cli"
IMPORTANT: Do NOT add command field to implementation steps. Execution routing is controlled by task-level meta.execution_config.method only.
## PRIORITIZED CONTEXT (from context-package.prioritized_context) - ALREADY SORTED
Context sorting is ALREADY COMPLETED in context-gather Phase 2/3. DO NOT re-sort.
Context sorting is ALREADY COMPLETED in Phase 2/3. DO NOT re-sort.
Direct usage:
- **user_intent**: Use goal/scope/key_constraints for task alignment
- **priority_tiers.critical**: PRIMARY focus for task generation
@@ -234,65 +207,20 @@ If prioritized_context is incomplete, fall back to exploration_results:
- Reference aggregated_insights.all_patterns for implementation approach
- Use aggregated_insights.all_integration_points for precise modification locations
## CONFLICT RESOLUTION CONTEXT (if exists)
- Check context-package.conflict_detection.resolution_file for conflict-resolution.json path
- If exists, load .process/conflict-resolution.json:
- Apply planning_constraints as task constraints
- Reference resolved_conflicts for implementation approach alignment
- Handle custom_conflicts with explicit task notes
## FEATURE SPECIFICATIONS (conditional)
If context-package has brainstorm_artifacts.feature_index_path:
Feature Index: [from context-package]
Feature Spec Dir: [from context-package]
Else if .workflow/active/${sessionId}/.brainstorming/feature-specs/ exists:
Feature Index: .workflow/active/${sessionId}/.brainstorming/feature-specs/feature-index.json
## EXPECTED DELIVERABLES
1. Task JSON Files (.task/IMPL-*.json)
- Unified flat schema (task-schema.json)
- Quantified requirements with explicit counts
- Artifacts integration from context package
- **focus_paths from prioritized_context.priority_tiers (critical + high)**
- Pre-analysis steps (use dependency_order for task sequencing)
- **CLI Execution IDs and strategies (MANDATORY)**
If the directory does not exist, skip this section.
2. Implementation Plan (IMPL_PLAN.md)
- Context analysis and artifact references
- Task breakdown and execution strategy
3. Plan Overview (plan.json)
- Structured plan overview (plan-overview-base-schema)
- Machine-readable task IDs, shared context, metadata
4. TODO List (TODO_LIST.md)
- Hierarchical structure
- Links to task JSONs and summaries
## CLI EXECUTION ID REQUIREMENTS (MANDATORY)
Each task JSON MUST include:
- **cli_execution.id**: Unique ID (format: {session_id}-{task_id})
- **cli_execution**: Strategy object based on depends_on:
- No deps -> { "strategy": "new" }
- 1 dep (single child) -> { "strategy": "resume", "resume_from": "parent-cli-id" }
- 1 dep (multiple children) -> { "strategy": "fork", "resume_from": "parent-cli-id" }
- N deps -> { "strategy": "merge_fork", "merge_from": ["id1", "id2", ...] }
## QUALITY STANDARDS
Hard Constraints:
- Task count <= 18 (hard limit)
- All requirements quantified
- Acceptance criteria measurable
- Artifact references mapped from context package
## PLANNING NOTES RECORD (REQUIRED)
After completing task generation, update planning-notes.md:
## Task Generation (Phase 4)
### [Action-Planning Agent] YYYY-MM-DD
- **Tasks**: [count] ([IDs])
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
| [choice] | [why] | [Yes/No] |
### Deferred
- [ ] [item] - [reason]
## SESSION-SPECIFIC NOTES
- Deliverables: Task JSONs + IMPL_PLAN.md + plan.json + TODO_LIST.md (all 4 required)
- focus_paths: Derive from prioritized_context.priority_tiers (critical + high)
- Task sequencing: Use dependency_order from context-package (pre-computed)
- All other schemas, strategies, quality standards: Follow agent specification
`
)
```
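The dependency-to-strategy mapping above (new/resume/fork/merge_fork) can be sketched as a small selection function. This is a hypothetical helper, not part of the agent spec; it assumes a `tasks` array whose entries carry `id` and `depends_on` fields matching the task JSON schema:

```javascript
// Sketch: derive a task's cli_execution object from its dependencies.
// Assumptions: `tasks` is the full task list for the session; a "child" of a
// parent is any task whose depends_on includes that parent's id.
function deriveCliExecution(task, tasks, sessionId) {
  const id = `${sessionId}-${task.id}`; // cli_execution.id format: {session_id}-{task_id}
  const deps = task.depends_on || [];

  if (deps.length === 0) {
    return { id, strategy: "new" }; // no deps -> fresh CLI session
  }
  if (deps.length > 1) {
    // N deps -> merge contexts from all parent CLI sessions
    return { id, strategy: "merge_fork", merge_from: deps.map((d) => `${sessionId}-${d}`) };
  }
  // Exactly 1 dep: resume if the parent has a single child, fork if it has several
  const parent = deps[0];
  const childCount = tasks.filter((t) => (t.depends_on || []).includes(parent)).length;
  return {
    id,
    strategy: childCount > 1 ? "fork" : "resume",
    resume_from: `${sessionId}-${parent}`,
  };
}
```

A root task thus gets `{ strategy: "new" }`, two siblings sharing one parent both get `fork`, and a join task depending on both gets `merge_fork` with both parent IDs.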

View File

@@ -200,6 +200,8 @@ const userConfig = {
### Step 5.1: Execute TDD Task Generation (Agent Invocation)
**Design Note**: The agent specification (action-planning-agent.md) already defines schemas, CLI execution strategies, quantification standards, and loading algorithms. This prompt provides **instance-specific parameters** and **TDD-specific requirements** only.
```javascript
Task(
subagent_type="action-planning-agent",
@@ -207,16 +209,10 @@ Task(
description="Generate TDD planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
prompt=`
## TASK OBJECTIVE
Generate TDD implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) for workflow session
IMPORTANT: This is PLANNING ONLY - you are generating planning documents, NOT implementing code.
CRITICAL: Follow the progressive loading strategy (load analysis.md files incrementally due to file size):
- **Core**: session metadata + context-package.json (always)
- **Selective**: synthesis_output OR (guidance + relevant role analyses) - NOT all
- **On-Demand**: conflict resolution (if conflict_risk >= medium), test context
Generate TDD implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) for workflow session ${sessionId}
## SESSION PATHS
Session Root: .workflow/active/${sessionId}/
Input:
- Session Metadata: .workflow/active/${sessionId}/workflow-session.json
- Context Package: .workflow/active/${sessionId}/.process/context-package.json
@@ -232,44 +228,33 @@ Session ID: ${sessionId}
Workflow Type: TDD
MCP Capabilities: {exa_code, exa_web, code_index}
## PROJECT CONTEXT (MANDATORY - load before planning-notes)
These files provide project-level constraints that apply to ALL tasks:
1. **.workflow/project-tech.json** (auto-generated tech analysis)
- Contains: tech_stack, architecture_type, key_components, build_system, test_framework
- Usage: Populate plan.json shared_context, align task tech choices, set correct test commands
- If missing: Fall back to context-package.project_context
2. **.workflow/project-guidelines.json** (user-maintained rules and constraints)
- Contains: coding_conventions, naming_rules, forbidden_patterns, quality_gates, custom_constraints
- Usage: Apply as HARD CONSTRAINTS on all generated tasks — task implementation steps,
acceptance criteria, and convergence.verification MUST respect these guidelines
- If empty/missing: No additional constraints (proceed normally)
Loading order: project-tech.json → project-guidelines.json → planning-notes.md → context-package.json
## USER CONFIGURATION (from Phase 0)
Execution Method: ${userConfig.executionMethod}
Preferred CLI Tool: ${userConfig.preferredCliTool}
Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
Supplementary Materials: ${userConfig.supplementaryMaterials}
## EXECUTION METHOD MAPPING
Based on userConfig.executionMethod, set task-level meta.execution_config:
"agent" ->
meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false }
Agent executes Red-Green-Refactor phases directly
"cli" ->
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Agent executes pre_analysis, then hands off full context to CLI via buildCliHandoffPrompt()
"hybrid" ->
Per-task decision: Analyze TDD cycle complexity, set method to "agent" OR "cli" per task
- Simple cycles (<=5 test cases, <=3 files) -> method: "agent"
- Complex cycles (>5 test cases, >3 files, integration tests) -> method: "cli"
CLI tool: userConfig.preferredCliTool, enable_resume: true
IMPORTANT: Do NOT add command field to implementation steps. Execution routing is controlled by task-level meta.execution_config.method only.
## EXPLORATION CONTEXT (from context-package.exploration_results)
- Load exploration_results from context-package.json
## EXPLORATION CONTEXT (from context-package.exploration_results) - SUPPLEMENT ONLY
If prioritized_context is incomplete, fall back to exploration_results:
- Use aggregated_insights.critical_files for focus_paths generation
- Apply aggregated_insights.constraints to acceptance criteria
- Reference aggregated_insights.all_patterns for implementation approach
- Use aggregated_insights.all_integration_points for precise modification locations
- Use conflict_indicators for risk-aware task sequencing
## CONFLICT RESOLUTION CONTEXT (if exists)
- Check context-package.conflict_detection.resolution_file for conflict-resolution.json path
- If exists, load .process/conflict-resolution.json:
- Apply planning_constraints as task constraints (for brainstorm-less workflows)
- Reference resolved_conflicts for implementation approach alignment
- Handle custom_conflicts with explicit task notes
## TEST CONTEXT INTEGRATION
- Load test-context-package.json for existing test patterns and coverage analysis
@@ -300,8 +285,6 @@ IMPORTANT: Do NOT add command field to implementation steps. Execution routing i
- **Schema**: Unified flat schema (task-schema.json) with TDD-specific metadata
- meta.tdd_workflow: true (REQUIRED)
- meta.max_iterations: 3 (Green phase test-fix cycle limit)
- cli_execution.id: Unique CLI execution ID (format: {session_id}-{task_id})
- cli_execution: Strategy object (new|resume|fork|merge_fork)
- tdd_cycles: Array with quantified test cases and coverage
- focus_paths: Absolute or clear relative paths (enhanced with exploration critical_files)
- implementation: Exactly 3 steps with tdd_phase field
@@ -309,7 +292,6 @@ IMPORTANT: Do NOT add command field to implementation steps. Execution routing i
2. Green Phase (tdd_phase: "green"): Implement to pass tests
3. Refactor Phase (tdd_phase: "refactor"): Improve code quality
- pre_analysis: Include exploration integration_points analysis
- meta.execution_config: Set per userConfig.executionMethod (agent/cli/hybrid)
##### 2. IMPL_PLAN.md (TDD Variant)
- **Location**: .workflow/active/${sessionId}/IMPL_PLAN.md
@@ -320,37 +302,18 @@ IMPORTANT: Do NOT add command field to implementation steps. Execution routing i
- **Location**: .workflow/active/${sessionId}/TODO_LIST.md
- **Format**: Hierarchical task list with internal TDD phase indicators (Red -> Green -> Refactor)
### CLI EXECUTION ID REQUIREMENTS (MANDATORY)
Each task JSON MUST include:
- **cli_execution.id**: Unique ID for CLI execution (format: {session_id}-{task_id})
- **cli_execution**: Strategy object based on depends_on:
- No deps -> { "strategy": "new" }
- 1 dep (single child) -> { "strategy": "resume", "resume_from": "parent-cli-id" }
- 1 dep (multiple children) -> { "strategy": "fork", "resume_from": "parent-cli-id" }
- N deps -> { "strategy": "merge_fork", "merge_from": ["id1", "id2", ...] }
### Quantification Requirements (MANDATORY)
**Core Rules**:
1. **Explicit Test Case Counts**: Red phase specifies exact number with enumerated list
2. **Quantified Coverage**: Acceptance includes measurable percentage (e.g., ">=85%")
3. **Detailed Implementation Scope**: Green phase enumerates files, functions, line counts
4. **Enumerated Refactoring Targets**: Refactor phase lists specific improvements with counts
**TDD Phase Formats**:
- **Red Phase**: "Write N test cases: [test1, test2, ...]"
- **Green Phase**: "Implement N functions in file lines X-Y: [func1() X1-Y1, func2() X2-Y2, ...]"
- **Refactor Phase**: "Apply N refactorings: [improvement1 (details), improvement2 (details), ...]"
- **Acceptance**: "All N tests pass with >=X% coverage: verify by [test command]"
## SUCCESS CRITERIA
- All planning documents generated successfully:
- Task JSONs valid and saved to .task/ directory with cli_execution.id
- Task JSONs valid and saved to .task/ directory
- IMPL_PLAN.md created with complete TDD structure
- TODO_LIST.md generated matching task JSONs
- CLI execution strategies assigned based on task dependencies
- Return completion status with document count and task breakdown summary
## SESSION-SPECIFIC NOTES
- Workflow Type: TDD — tasks use Red-Green-Refactor phases
- Deliverables: Task JSONs + IMPL_PLAN.md + plan.json + TODO_LIST.md (all 4 required)
- focus_paths: Derive from exploration critical_files and test context
- All other schemas, CLI execution strategies, quantification standards: Follow agent specification
`
)
```
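The EXECUTION METHOD MAPPING above, including the hybrid-mode complexity thresholds (&gt;5 test cases, &gt;3 files, or integration tests), can be sketched as a per-task decision function. This is a minimal illustration, assuming a `cycle` object with hypothetical `testCases`, `files`, and `hasIntegrationTests` fields summarizing one TDD cycle:

```javascript
// Sketch: resolve a task's meta.execution_config from userConfig.
// "agent" and "cli" map directly; "hybrid" decides per task from cycle complexity.
function selectExecutionMethod(userConfig, cycle) {
  if (userConfig.executionMethod !== "hybrid") {
    return {
      method: userConfig.executionMethod,
      cli_tool: userConfig.executionMethod === "cli" ? userConfig.preferredCliTool : null,
      enable_resume: userConfig.executionMethod === "cli",
    };
  }
  // Hybrid: complex cycles go to CLI, simple cycles stay with the agent
  const complex = cycle.testCases > 5 || cycle.files > 3 || cycle.hasIntegrationTests;
  return complex
    ? { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
    : { method: "agent", cli_tool: null, enable_resume: false };
}
```

The resulting object is written to task-level `meta.execution_config`; implementation steps themselves carry no command field, so routing stays in one place.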

View File

@@ -129,19 +129,13 @@ const taskTypeSpecificReads = {
const taskTypeGuidance = {
"test-gen": `
- Review task.context.requirements for test scenarios
- Analyze codebase to understand implementation
- Generate tests covering: happy paths, edge cases, error handling
- Follow existing test patterns and framework conventions
- Read task.context.requirements for test scenarios
- Generate tests following existing patterns and framework conventions
`,
"test-fix": `
- Run test command from task.context or project config
- Capture: pass/fail counts, error messages, stack traces
- Assess criticality for each failure:
* high: core functionality broken, security issues
* medium: feature degradation, data integrity issues
* low: edge cases, flaky tests, env-specific issues
- Save structured results to test-results.json
- Execute multi-layer test suite (follow your Layer-Aware Diagnosis spec)
- Save structured results to ${session.test_results_path}
- Apply criticality assessment per your spec (high/medium/low)
`,
"test-fix-iteration": `
- Load fix_strategy from task.context.fix_strategy
@@ -249,6 +243,10 @@ Task(
## Strategy
${selectedStrategy} - ${strategyDescription}
## PROJECT CONTEXT (MANDATORY)
1. Read: .workflow/project-tech.json (tech stack, test framework, build system)
2. Read: .workflow/project-guidelines.json (constraints — apply as HARD CONSTRAINTS on fixes)
## MANDATORY FIRST STEPS
1. Read test results: ${session.test_results_path}
2. Read test output: ${session.test_output_path}