feat: strengthen task generation commands with mandatory quantification requirements to eliminate ambiguity

catlog22
2025-11-08 14:39:45 +08:00
parent 0404a7eb7c
commit 1cb83c07e0
3 changed files with 367 additions and 50 deletions

View File

@@ -177,6 +177,69 @@ If conflict_risk was medium/high, modifications have been applied to:
- **Function-based**: Complete units (logic + UI + tests + config)
- **Hierarchy**: Flat (≤5) | Two-level (6-10) | Re-scope (>10)
### Quantification Requirements (MANDATORY)
**CRITICAL**: All task specifications MUST include explicit counts and enumerations to eliminate ambiguity.
**Core Rules**:
1. **Extract Counts from Analysis**: If analysis mentions "implement features", search for HOW MANY and list them explicitly
2. **Enforce Explicit Lists**: Every deliverable MUST use format \`{count} {type}: [{explicit_list}]\`
3. **Make Acceptance Measurable**: Replace vague terms ("complete", "comprehensive") with verifiable criteria
4. **Quantify Modification Points**: Each point must specify exact targets (files, functions, features with line counts)
5. **Step-Level Verification**: Each implementation step includes its own verification criteria
**Mandatory Task JSON Formats**:
**Requirements Format**:
\`\`\`
"requirements": [
"GOOD: Implement 5 session commands: [session-start, session-resume, session-list, session-complete, session-archive]",
"GOOD: Create 3 directories: [.workflow/, .task/, .summaries/]",
"BAD: Implement session management system",
"BAD: Complete workflow infrastructure"
]
\`\`\`
**Acceptance Format**:
\`\`\`
"acceptance": [
"GOOD: 5 command files created: verify by ls .claude/commands/workflow/session/*.md | wc -l = 5",
"GOOD: 3 directories exist: verify by ls .workflow/ | grep -E '(task|summaries|process)' | wc -l = 3",
"GOOD: Each command contains: usage section + 3 examples + parameter docs",
"BAD: All session commands implemented successfully",
"BAD: Workflow infrastructure complete"
]
\`\`\`
**Modification Points Format**:
\`\`\`
"modification_points": [
"GOOD: Create 5 command files in .claude/commands/workflow/session/: [start.md, resume.md, list.md, complete.md, archive.md]",
"GOOD: Modify 2 functions: [executeTask() in executor.ts lines 45-120, validateSession() in validator.ts lines 30-55]",
"BAD: Apply requirements from role analysis",
"BAD: Implement features following specifications"
]
\`\`\`
**Forbidden Language Patterns** (REJECT these during generation):
- BAD: "Implement feature" -> GOOD: "Implement 3 features: [auth, validation, logging]"
- BAD: "Complete refactoring" -> GOOD: "Refactor 5 files: [file1.ts, file2.ts, ...] (total 800 lines)"
- BAD: "Reorganize structure" -> GOOD: "Move 12 files from old/ to new/ directory structure"
- BAD: "Comprehensive tests" -> GOOD: "Write 25 test cases covering: [scenario1, scenario2, ...] (>=80% coverage)"
**Quantification Extraction Process** (Apply during Step 2):
1. **Scan Analysis Documents**: Search for numbers + nouns (e.g., "5 files", "17 commands", "3 features")
2. **Enumerate Lists**: Build explicit lists for each deliverable (no "..." unless list >20 items)
3. **Assign Verification Methods**: For each deliverable, specify how to verify (bash command, count check, coverage metric)
4. **Flag Ambiguity**: Detect vague language ("complete", "comprehensive", "reorganize") and require quantification
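A minimal sketch of this extraction pass, assuming the analysis documents are plain markdown text; the count pattern and the vague-word list below are illustrative, not the command's canonical definitions:

```python
import re

# Illustrative vague-word set; the command's actual list may differ
VAGUE_TERMS = {"complete", "comprehensive", "reorganize", "thorough"}

# Matches "numbers + nouns" such as "5 files", "17 commands", "3 features"
COUNT_PATTERN = re.compile(r"\b(\d+)\s+([A-Za-z_-]+)\b")


def extract_counts(analysis_text: str) -> list[tuple[int, str]]:
    """Return (count, noun) pairs found in an analysis document."""
    return [(int(n), noun) for n, noun in COUNT_PATTERN.findall(analysis_text)]


def flag_ambiguity(analysis_text: str) -> list[str]:
    """Return vague terms present in the text that require quantification."""
    words = set(re.findall(r"[A-Za-z]+", analysis_text.lower()))
    return sorted(words & VAGUE_TERMS)


if __name__ == "__main__":
    sample = "Implement 5 session commands and add comprehensive tests."
    print(extract_counts(sample))   # [(5, 'session')]
    print(flag_ambiguity(sample))   # ['comprehensive']
```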
**Validation Checklist** (Run before generating task JSON):
- [ ] Every requirement contains explicit count or enumerated list
- [ ] Every acceptance criterion is measurable with verification command
- [ ] Every modification_point specifies exact targets (files/functions/lines)
- [ ] No vague language in requirements, acceptance, or modification_points
- [ ] Each implementation step has its own acceptance criteria
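Where the checklist needs to run automatically, a rough heuristic like the following could gate task JSON generation; the check is deliberately simple (a digit present, no vague terms) and the term list is an assumption:

```python
import re

VAGUE_TERMS = ("complete", "comprehensive", "reorganize")


def checklist_violations(field: str, entries: list[str]) -> list[str]:
    """Flag requirements/acceptance/modification_points entries that fail the checklist."""
    violations = []
    for entry in entries:
        lowered = entry.lower()
        if any(re.search(rf"\b{term}\b", lowered) for term in VAGUE_TERMS):
            violations.append(f"{field}: vague language -> {entry!r}")
        if not re.search(r"\d", entry):  # no explicit count anywhere in the entry
            violations.append(f"{field}: no explicit count -> {entry!r}")
    return violations


if __name__ == "__main__":
    print(checklist_violations("requirements", [
        "Create 3 directories: [.workflow/, .task/, .summaries/]",
        "Complete workflow infrastructure",
    ]))
```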
### Required Outputs
#### 1. Task JSON Files (.task/IMPL-*.json)
@@ -240,20 +303,31 @@ $(cat ~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)
- Read template from the provided path: `Read({template_path})`
- This template is already the correct one based on execution mode
**Step 2: Extract and Decompose Tasks**
**Step 2: Extract and Decompose Tasks (WITH QUANTIFICATION)**
- Parse role analysis.md files for requirements, design specs, and task recommendations
- **CRITICAL: Apply Quantification Extraction Process**:
- Scan for counts: numbers + nouns (e.g., "5 files", "17 commands", "3 features")
- Build explicit lists for each deliverable (no "..." unless list >20 items)
- Flag vague language ("complete", "comprehensive", "reorganize") for replacement
- Extract verification methods for each deliverable
- Review synthesis enhancements and clarifications in role analyses
- Apply conflict resolution strategies (if CONFLICT_RESOLUTION.md exists)
- Apply task merging rules (merge when possible, decompose only when necessary)
- Map artifacts to tasks based on domain (UI → ui-designer, Backend → system-architect, Data → data-architect)
- Ensure task count ≤10
**Step 3: Generate Task JSON Files**
**Step 3: Generate Task JSON Files (ENFORCE QUANTIFICATION)**
- Use the template structure from Step 1
- Create .task/IMPL-*.json files with proper structure
- **MANDATORY: Apply Quantification Formats**:
- Every requirement: \`{count} {type}: [{explicit_list}]\`
- Every acceptance: Measurable with verification command
- Every modification_point: Exact targets (files/functions/lines)
- NO vague language in any field
- Replace all {placeholder} variables with actual session paths
- Embed artifacts array with brainstorming outputs
- Include MCP tool integration in pre_analysis steps
- **Validation**: Run checklist from Quantification Requirements section before writing files
**Step 4: Create IMPL_PLAN.md**
- Use IMPL_PLAN template

View File

@@ -55,6 +55,97 @@ Generate TDD-specific tasks from analysis results with complete Red-Green-Refact
- **Context-Aware**: Analyzes existing codebase and test patterns
- **Iterative Green Phase**: Auto-diagnose and fix test failures with Gemini + optional Codex
- **Safety-First**: Auto-revert on max iterations to prevent broken state
- **Quantification-Enforced**: All test cases, coverage requirements, and implementation scope MUST include explicit counts and enumerations (e.g., "15 test cases: [test1, test2, ...]" not "comprehensive tests")
## Quantification Requirements for TDD (MANDATORY)
**Purpose**: Eliminate ambiguity in TDD task generation by enforcing explicit test case counts, coverage metrics, and implementation scope.
**Core Rules**:
1. **Explicit Test Case Counts**: Red phase MUST specify exact number of test cases with enumerated list
2. **Quantified Coverage**: Acceptance criteria MUST include measurable coverage percentage (e.g., ">=85%")
3. **Detailed Implementation Scope**: Green phase MUST enumerate exact files, functions, and line counts
4. **Enumerated Refactoring Targets**: Refactor phase MUST list specific improvements with counts
**Test Case Format (Red Phase)**:
```
"requirements": [
"GOOD: Red Phase - Write 15 test cases: [test_user_auth_success, test_user_auth_invalid_password, test_user_auth_missing_token, test_session_create, test_session_resume, test_session_expire, test_api_post_valid, test_api_post_invalid, test_api_get_list, test_api_get_item, test_validation_required_fields, test_validation_type_checking, test_error_404, test_error_500, test_integration_full_flow]",
"BAD: Red Phase - Write comprehensive test suite covering all scenarios"
]
```
**Coverage Format (Acceptance)**:
```
"acceptance": [
"GOOD: All 15 tests pass with >=85% coverage: verify by pytest --cov=src/auth --cov-report=term | grep TOTAL",
"GOOD: Test 5 functions: [authenticate(), createSession(), validateToken(), refreshSession(), revokeSession()]",
"BAD: All tests pass with good coverage",
"BAD: Test suite complete"
]
```
**Implementation Scope Format (Green Phase)**:
```
"modification_points": [
"GOOD: Implement 5 functions in src/auth/service.ts lines 20-180: [authenticate() 20-45, createSession() 50-75, validateToken() 80-100, refreshSession() 105-135, revokeSession() 140-160]",
"GOOD: Create 3 files: [src/auth/service.ts (180 lines), src/auth/models.ts (50 lines), src/auth/utils.ts (30 lines)]",
"BAD: Implement authentication service following requirements",
"BAD: Create necessary files for feature"
]
```
**Refactoring Targets Format (Refactor Phase)**:
```
"modification_points": [
"GOOD: Apply 4 refactorings: [extract validateInput() helper (15 lines), merge duplicate error handlers (3 occurrences), rename confusing variables (token->authToken in 8 locations), add JSDoc to 5 public functions]",
"BAD: Improve code quality and maintainability",
"BAD: Refactor implementation"
]
```
**TDD Cycles Array Format**:
```
"tdd_cycles": [
{
"cycle": 1,
"feature": "User authentication with password validation",
"test_count": 5,
"test_cases": [
"test_auth_valid_credentials",
"test_auth_invalid_password",
"test_auth_missing_username",
"test_auth_account_locked",
"test_auth_password_expired"
],
"implementation_scope": "Implement authenticate() function in src/auth/service.ts lines 20-65 (45 lines)",
"expected_coverage": ">=90%"
},
{
"cycle": 2,
"feature": "Session management lifecycle",
"test_count": 6,
"test_cases": [
"test_session_create",
"test_session_resume",
"test_session_expire",
"test_session_refresh",
"test_session_revoke",
"test_session_concurrent_limit"
],
"implementation_scope": "Implement 3 functions in src/auth/service.ts lines 70-150: [createSession() 70-95, resumeSession() 100-125, revokeSession() 130-150]",
"expected_coverage": ">=85%"
}
]
```
**Validation Checklist** (Run before TDD task generation):
- [ ] Every Red phase specifies exact test case count with enumerated list
- [ ] Every Green phase enumerates files, functions, and estimated line counts
- [ ] Every Refactor phase lists specific improvements with counts
- [ ] Every acceptance criterion includes measurable coverage percentage
- [ ] tdd_cycles array contains test_count and test_cases for each cycle
- [ ] No vague language ("comprehensive", "complete", "thorough")
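A sketch of how tdd_cycles entries could be checked against this list; the field names follow the format above, and the coverage regex is an assumed convention for the ">=N%" strings shown in the examples:

```python
import re

COVERAGE = re.compile(r">=\s*\d+%")  # e.g. ">=85%", ">=90%"


def check_tdd_cycles(cycles: list[dict]) -> list[str]:
    """Verify test_count matches test_cases and expected_coverage is quantified per cycle."""
    problems = []
    for c in cycles:
        declared = c.get("test_count", 0)
        cases = c.get("test_cases", [])
        if declared != len(cases):
            problems.append(
                f"cycle {c.get('cycle')}: test_count={declared} but {len(cases)} test_cases listed")
        if not COVERAGE.search(c.get("expected_coverage", "")):
            problems.append(f"cycle {c.get('cycle')}: expected_coverage is not a '>=N%' value")
    return problems


if __name__ == "__main__":
    cycles = [{"cycle": 1, "test_count": 5, "expected_coverage": ">=90%",
               "test_cases": ["test_a", "test_b", "test_c", "test_d"]}]
    print(check_tdd_cycles(cycles))  # flags the count mismatch (5 declared, 4 listed)
```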
## Core Responsibilities
- Parse analysis results and identify testable features
@@ -144,25 +235,49 @@ For each feature, generate task(s) with ID format:
"use_codex": false // false=manual fixes, true=Codex automated fixes
},
"context": {
"requirements": [ // Feature requirements with TDD phases
"Feature description",
"Red: Test scenarios to write",
"Green: Implementation approach with test-fix cycle",
"Refactor: Code quality improvements"
"requirements": [ // Feature requirements with TDD phases (QUANTIFIED)
"Implement user authentication with session management",
"Red: Write 11 test cases: [test_auth_valid, test_auth_invalid_pwd, test_auth_missing_user, test_session_create, test_session_resume, test_session_expire, test_session_refresh, test_session_revoke, test_validation_required, test_error_401, test_integration_full_auth_flow]",
"Green: Implement 5 functions in src/auth/service.ts (160 lines total): [authenticate() 20-50, createSession() 55-80, validateToken() 85-105, refreshSession() 110-135, revokeSession() 140-160]",
"Refactor: Apply 3 improvements: [extract validateInput() helper (12 lines), merge 2 duplicate error handlers, add JSDoc to 5 public functions]"
],
"tdd_cycles": [ // OPTIONAL: Detailed test cycles
"tdd_cycles": [ // REQUIRED: Detailed test cycles with counts
{
"cycle": 1,
"feature": "Specific functionality",
"test_focus": "What to test",
"expected_failure": "Why test should fail initially"
"feature": "User authentication with password validation",
"test_count": 5,
"test_cases": [
"test_auth_valid_credentials",
"test_auth_invalid_password",
"test_auth_missing_username",
"test_auth_account_locked",
"test_auth_session_created"
],
"implementation_scope": "Implement authenticate() function in src/auth/service.ts lines 20-50 (30 lines)",
"expected_coverage": ">=90%"
},
{
"cycle": 2,
"feature": "Session lifecycle management",
"test_count": 6,
"test_cases": [
"test_session_create",
"test_session_resume_valid",
"test_session_expire_timeout",
"test_session_refresh_extend",
"test_session_revoke_immediate",
"test_session_concurrent_limit"
],
"implementation_scope": "Implement 3 functions in src/auth/service.ts lines 55-160: [createSession() 55-80, validateToken() 85-105, refreshSession() 110-135, revokeSession() 140-160]",
"expected_coverage": ">=85%"
}
],
"focus_paths": ["D:\\project\\src\\path", "./tests/path"], // Absolute or clear relative paths from project root
"acceptance": [ // Success criteria
"All tests pass (Red → Green)",
"Code refactored (Refactor complete)",
"Test coverage ≥80%"
"acceptance": [ // Success criteria (QUANTIFIED)
"All 11 tests pass: verify by npm test -- tests/auth/ (exit code 0)",
"Test coverage >=85%: verify by npm test -- --coverage | grep TOTAL | awk '{print $4}' >= 85",
"5 functions implemented with proper error handling",
"3 refactoring improvements applied: code passes npm run lint with 0 warnings"
],
"depends_on": [] // Task dependencies
},
@@ -181,10 +296,22 @@ For each feature, generate task(s) with ID format:
"step": 1,
"title": "RED Phase: Write failing tests",
"tdd_phase": "red", // REQUIRED: Phase identifier
"description": "Write comprehensive failing tests",
"modification_points": ["Files/changes to make"],
"logic_flow": ["Step-by-step process"],
"acceptance": ["Phase success criteria"],
"description": "Write 11 failing test cases for authentication and session management",
"modification_points": [
"Create tests/auth/authentication.test.ts with 5 test cases: [test_auth_valid_credentials, test_auth_invalid_password, test_auth_missing_username, test_auth_account_locked, test_auth_session_created]",
"Create tests/auth/session.test.ts with 6 test cases: [test_session_create, test_session_resume_valid, test_session_expire_timeout, test_session_refresh_extend, test_session_revoke_immediate, test_session_concurrent_limit]"
],
"logic_flow": [
"Create test file structure: tests/auth/ directory",
"Write 5 authentication test cases in authentication.test.ts",
"Write 6 session management test cases in session.test.ts",
"Verify all 11 tests fail with expected errors"
],
"acceptance": [
"11 test cases written: verify by grep -E 'test_.*\\(' tests/auth/*.test.ts | wc -l = 11",
"All tests fail: npm test -- tests/auth/ exits with non-zero status",
"Each test has clear assertion and expected failure reason"
],
"depends_on": [],
"output": "failing_tests"
},
@@ -192,20 +319,28 @@ For each feature, generate task(s) with ID format:
"step": 2,
"title": "GREEN Phase: Implement to pass tests",
"tdd_phase": "green", // REQUIRED: Phase identifier
"description": "Minimal implementation with test-fix cycle",
"modification_points": ["Implementation files"],
"logic_flow": [
"Implement minimal code",
"Run tests",
"If fail → Enter iteration loop (max 3):",
" 1. Extract failure messages",
" 2. Gemini bug-fix diagnosis",
" 3. Apply fixes",
" 4. Rerun tests",
"If max_iterations → Auto-revert"
"description": "Implement 5 functions (160 lines) in src/auth/service.ts with test-fix cycle",
"modification_points": [
"Create src/auth/service.ts implementing 5 functions (160 lines total): [authenticate() 20-50, createSession() 55-80, validateToken() 85-105, refreshSession() 110-135, revokeSession() 140-160]",
"Create src/auth/models.ts with 2 interfaces: [User, Session]",
"Create src/auth/utils.ts with 2 helper functions: [hashPassword(), generateToken()]"
],
"acceptance": ["All tests pass"],
"command": "bash(npm test -- tests/path/)",
"logic_flow": [
"Implement minimal code for 5 functions",
"Run tests: npm test -- tests/auth/",
"If fail -> Enter iteration loop (max 3):",
" 1. Extract failure messages from test output",
" 2. Use Gemini for bug-fix diagnosis",
" 3. Apply fixes to failing functions",
" 4. Rerun tests",
"If max_iterations reached -> Auto-revert via git reset --hard HEAD"
],
"acceptance": [
"All 11 tests pass: npm test -- tests/auth/ exits with status 0",
"5 functions implemented: grep -E '^(export )?function ' src/auth/service.ts | wc -l = 5",
"Test coverage >=85%: npm test -- --coverage | grep TOTAL | awk '{print $4}' >= 85"
],
"command": "bash(npm test -- tests/auth/)",
"depends_on": [1],
"output": "passing_implementation"
},
@@ -213,10 +348,25 @@ For each feature, generate task(s) with ID format:
"step": 3,
"title": "REFACTOR Phase: Improve code quality",
"tdd_phase": "refactor", // REQUIRED: Phase identifier
"description": "Refactor while keeping tests green",
"modification_points": ["Quality improvements"],
"logic_flow": ["Incremental refactoring with test verification"],
"acceptance": ["Tests still pass", "Code quality improved"],
"description": "Apply 3 refactoring improvements while maintaining test coverage",
"modification_points": [
"Extract validateInput() helper function (12 lines) from authenticate() and createSession()",
"Merge 2 duplicate error handlers in session.ts into single handleSessionError() function",
"Add JSDoc comments to 5 public functions: [authenticate(), createSession(), validateToken(), refreshSession(), revokeSession()]"
],
"logic_flow": [
"Extract validateInput() helper, update 2 call sites",
"Merge duplicate error handlers, verify 2 usages updated",
"Add JSDoc to 5 functions with @param, @returns, @throws",
"Run tests after each refactoring to ensure green state",
"Run linter: npm run lint (expect 0 warnings)"
],
"acceptance": [
"All 11 tests still pass: npm test -- tests/auth/ exits with status 0",
"3 refactorings applied: validateInput() extracted + error handlers merged + 5 JSDoc added",
"Lint passes: npm run lint exits with 0 warnings",
"Test coverage maintained >=85%"
],
"command": "bash(npm run lint && npm test)",
"depends_on": [2],
"output": "refactored_implementation"

View File

@@ -45,8 +45,66 @@ This command is built on a set of core principles to ensure efficient and reliab
- **Memory-First**: Prioritizes using documents already loaded in conversation memory to avoid redundant file operations
- **Mode-Flexible**: Supports both agent-driven execution (default) and CLI tool execution (with `--cli-execute` flag)
- **Multi-Step Support**: Complex tasks can use multiple sequential steps in `implementation_approach` with codex resume mechanism
- **Quantification-Enforced**: **NEW** - All requirements, acceptance criteria, and modification points MUST include explicit counts and enumerations to prevent ambiguity (e.g., "17 commands: [list]" not "implement commands")
- **Responsibility**: Parses analysis, detects artifacts, generates enhanced task JSONs, creates `IMPL_PLAN.md` and `TODO_LIST.md`, updates session state
## 3.5. Quantification Requirements (MANDATORY)
**Purpose**: Eliminate ambiguity by enforcing explicit counts and enumerations in all task specifications.
**Core Rules**:
1. **Extract Counts from Analysis**: If analysis mentions "implement commands", search for HOW MANY and list them explicitly
2. **Enforce Explicit Lists**: Every deliverable MUST use format `{count} {type}: [{explicit_list}]`
3. **Make Acceptance Measurable**: Replace vague terms ("complete", "comprehensive") with verifiable criteria
4. **Quantify Modification Points**: Each point must specify exact targets (files, functions, features)
5. **Step-Level Verification**: Each implementation step includes its own verification criteria
**Mandatory Formats**:
**Requirements Format**:
```
"requirements": [
"GOOD: Implement 17 commands: [literature:add, literature:search, ..., context:synthesize]",
"GOOD: Create 5 directories: [literature/, experiment/, data-analysis/, visualization/, context/]",
"BAD: Implement new commands for material management",
"BAD: Reorganize command structure"
]
```
**Acceptance Format**:
```
"acceptance": [
"GOOD: 17 commands implemented: verify by ls .claude/commands/*/*.md | wc -l = 17",
"GOOD: 5 directories created: verify by ls .claude/commands/ | grep -E '(lit|exp|...)' | wc -l = 5",
"GOOD: Each command file contains: usage section, 3+ examples, parameter documentation",
"BAD: All commands implemented successfully",
"BAD: Command structure reorganized logically"
]
```
**Modification Points Format**:
```
"modification_points": [
"GOOD: Create 5 files: [session-start.md, session-resume.md, session-list.md, session-complete.md, session-archive.md]",
"GOOD: Modify 3 functions: [parseConfig() lines 15-30, validateInput() lines 45-60, executeTask() lines 80-120]",
"BAD: Apply requirements from role analysis",
"BAD: Implement feature following specifications"
]
```
**Forbidden Language Patterns**:
- BAD: "Implement feature" -> GOOD: "Implement 3 features: [auth, validation, logging]"
- BAD: "Complete refactoring" -> GOOD: "Refactor 5 files: [file1.ts, file2.ts, ...] (total 800 lines)"
- BAD: "Reorganize structure" -> GOOD: "Move 12 files from old/ to new/ directory structure"
- BAD: "Comprehensive tests" -> GOOD: "Write 25 test cases covering: [scenario1, scenario2, ...] (>=80% coverage)"
**Validation Checklist** (Run before task generation):
- [ ] Every requirement contains explicit count or enumerated list
- [ ] Every acceptance criterion is measurable with verification command
- [ ] Every modification_point specifies exact targets (files/functions/lines)
- [ ] No vague language ("complete", "comprehensive", "reorganize", "refactor")
- [ ] Each implementation step has its own acceptance criteria
## 4. Execution Flow
The command follows a streamlined, three-step process to convert analysis into executable tasks.
@@ -59,13 +117,39 @@ The process begins by gathering all necessary inputs. It follows a **Memory-Firs
### Step 2: Task Decomposition & Grouping
Once all inputs are loaded, the command analyzes the tasks defined in the analysis results and groups them based on shared context.
1. **Task Definition Parsing**: Extracts task definitions, requirements, and dependencies.
2. **Context Signature Analysis**: Computes a unique hash (`context_signature`) for each task based on its `focus_paths` and referenced `artifacts`.
**Phase 2.1: Quantification Extraction (NEW - CRITICAL)**
1. **Count Extraction**: Scan analysis documents for quantifiable information:
- Search for numbers + nouns (e.g., "5 files", "17 commands", "3 features")
- Identify enumerated lists (bullet points, numbered lists, comma-separated items)
- Extract explicit counts from tables, diagrams, or structured data
- Store extracted counts with their context (what is being counted)
2. **List Enumeration**: Build explicit lists for each deliverable:
- If analysis says "implement session commands", enumerate ALL commands: [start, resume, list, complete, archive]
- If analysis mentions "create categories", list ALL categories: [literature, experiment, data-analysis, visualization, context]
- If analysis describes "modify functions", list ALL functions with line numbers
- Maintain full enumerations (no "..." unless list exceeds 20 items)
3. **Verification Method Assignment**: For each deliverable, determine verification approach:
- File count: `ls {path}/*.{ext} | wc -l = {count}`
- Directory existence: `ls {parent}/ | grep -E '(name1|name2|...)' | wc -l = {count}`
- Test coverage: `pytest --cov={module} --cov-report=term | grep TOTAL | awk '{print $4}' >= {percentage}`
- Function existence: `grep -E '(func1|func2|...)' {file} | wc -l = {count}`
4. **Ambiguity Detection**: Flag vague language for replacement:
- Detect words: "complete", "comprehensive", "reorganize", "refactor", "implement", "create" without counts
- Require quantification: "implement" → "implement {N} {items}: [{list}]"
- Reject unquantified deliverables
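A sketch of how verification methods could be assigned mechanically from the templates in step 3 above; the deliverable kinds and keyword arguments are assumptions for illustration:

```python
def verification_command(kind: str, **kw) -> str:
    """Build a verification snippet for a deliverable, following the templates above."""
    if kind == "file_count":
        return f"ls {kw['path']}/*.{kw['ext']} | wc -l = {kw['count']}"
    if kind == "directory_count":
        names = "|".join(kw["names"])
        return f"ls {kw['parent']}/ | grep -E '({names})' | wc -l = {len(kw['names'])}"
    if kind == "coverage":
        return (f"pytest --cov={kw['module']} --cov-report=term "
                f"| grep TOTAL | awk '{{print $4}}' >= {kw['percentage']}")
    raise ValueError(f"no verification template for deliverable kind: {kind}")


if __name__ == "__main__":
    print(verification_command("file_count",
                               path=".claude/commands/workflow/session", ext="md", count=5))
```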
**Phase 2.2: Task Definition & Grouping**
1. **Task Definition Parsing**: Extracts task definitions, requirements, and dependencies from quantified analysis
2. **Context Signature Analysis**: Computes a unique hash (`context_signature`) for each task based on its `focus_paths` and referenced `artifacts`
3. **Task Grouping**:
* Tasks with the **same signature** are candidates for merging, as they operate on the same context.
* Tasks with **different signatures** and no dependencies are grouped for parallel execution.
* Tasks with `depends_on` relationships are marked for sequential execution.
4. **Modification Target Determination**: Extracts specific code locations (`file:function:lines`) from the analysis to populate the `target_files` field.
* Tasks with the **same signature** are candidates for merging, as they operate on the same context
* Tasks with **different signatures** and no dependencies are grouped for parallel execution
* Tasks with `depends_on` relationships are marked for sequential execution
4. **Modification Target Determination**: Extracts specific code locations (`file:function:lines`) from the analysis to populate the `target_files` field
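The document does not fix how the `context_signature` hash is computed; one stable sketch, assuming SHA-256 over the sorted, JSON-serialized `focus_paths` and artifact references, would be:

```python
import hashlib
import json


def context_signature(focus_paths: list[str], artifacts: list[str]) -> str:
    """Stable hash over a task's focus_paths and referenced artifacts (algorithm assumed)."""
    payload = json.dumps(
        {"focus_paths": sorted(focus_paths), "artifacts": sorted(artifacts)},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


# Tasks with identical signatures are merge candidates; tasks with different
# signatures and no depends_on relationship are grouped for parallel execution.
sig_a = context_signature(["src/workflow/executor.ts"], ["ui-designer/analysis.md"])
sig_b = context_signature(["src/workflow/executor.ts"], ["ui-designer/analysis.md"])
assert sig_a == sig_b
```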
### Step 3: Output Generation
Finally, the command generates all the necessary output files.
@@ -182,9 +266,18 @@ This enhanced 5-field schema embeds all necessary context, artifacts, and execut
"context_signature": "hash-of-focus_paths-and-artifacts"
},
"context": {
"requirements": ["Clear requirement from analysis"],
"requirements": [
"Implement 5 session commands: [session-start, session-resume, session-list, session-complete, session-archive]",
"Create 3 directories: [.workflow/, .task/, .summaries/]",
"Modify 2 functions: [executeTask() in executor.ts lines 45-120, validateSession() in validator.ts lines 30-55]"
],
"focus_paths": ["D:\\project\\src\\module\\path", "./tests/module/path"],
"acceptance": ["Measurable acceptance criterion"],
"acceptance": [
"5 command files created: verify by ls .claude/commands/workflow/session/*.md | wc -l = 5",
"3 directories exist: verify by ls .workflow/ | grep -E '(task|summaries|process)' | wc -l = 3",
"Each command file contains: usage section + 3 examples + parameter docs + agent invocation",
"All tests pass: pytest tests/workflow/session/ --cov=src/workflow/session --cov-report=term (>=85% coverage)"
],
"parent": "IMPL-N",
"depends_on": ["IMPL-N.M"],
"inherited": {"shared_patterns": [], "common_dependencies": []},
@@ -264,12 +357,12 @@ This enhanced 5-field schema embeds all necessary context, artifacts, and execut
"title": "Implement task following role analyses and context",
"description": "Implement '[title]' following this priority: 1) role analysis.md files (requirements, design specs, enhancements from synthesis), 2) guidance-specification.md (finalized decisions with resolved conflicts), 3) context-package.json (smart context, focus paths, patterns). Role analyses are enhanced by synthesis phase with concept improvements and clarifications. If conflict_risk was medium/high, conflict resolutions are already applied in-place.",
"modification_points": [
"Apply requirements and design specs from role analysis documents",
"Use enhancements and clarifications from synthesis phase",
"Use finalized decisions from guidance-specification.md (includes resolved conflicts)",
"Use context-package.json for focus paths and dependency resolution",
"Consult specific role artifacts for implementation details when needed",
"Integrate with existing patterns"
"Create 5 command files in .claude/commands/workflow/session/: [start.md, resume.md, list.md, complete.md, archive.md]",
"Modify 2 functions following role analysis requirements: [executeTask() in src/workflow/executor.ts lines 45-120, validateSession() in src/workflow/validator.ts lines 30-55]",
"Implement 3 features from synthesis enhancements: [session lifecycle hooks, automatic status tracking, rollback on error]",
"Apply 2 design decisions from guidance-specification.md: [use Git-based versioning for sessions, integrate with MaterialDB for context persistence]",
"Follow 4 existing patterns from context-package.json: [SlashCommand structure, agent invocation via Task tool, TodoWrite for progress tracking, conventional commit messages]",
"Add 8 test cases covering: [start workflow, resume existing, list all sessions, complete session, archive session, error handling, validation checks, integration with MaterialDB]"
],
"logic_flow": [
"Load role analyses (requirements, design, enhancements from synthesis)",