optimize requirements pilot

This commit is contained in:
ben chen
2025-08-11 17:26:07 +08:00
parent efa2ddeae8
commit 0b253634da

View File

@@ -1,61 +1,120 @@
## Usage ## Usage
`/requirements-pilot <FEATURE_DESCRIPTION> [TESTING_PREFERENCE]` `/requirements-pilot <FEATURE_DESCRIPTION> [OPTIONS]`
### Testing Control Options ### Options
- **Explicit Test**: Include `--test`, `要测试`, `测试` to force testing execution - `--skip-tests`: Skip testing phase entirely
- **Explicit Skip**: Include `--no-test`, `不要测试`, `跳过测试` to skip testing phase - `--skip-scan`: Skip initial repository scanning (not recommended)
- **Interactive Mode**: Default behavior - asks user at testing decision point
## Context ## Context
- Feature to develop: $ARGUMENTS - Feature to develop: $ARGUMENTS
- Pragmatic development workflow optimized for code generation - Pragmatic development workflow optimized for code generation
- Sub-agents work with implementation-focused approach - Sub-agents work with implementation-focused approach
- Quality-gated workflow ensuring functional correctness - Quality-gated workflow ensuring functional correctness
- Repository context awareness through initial scanning
## Your Role ## Your Role
You are the Requirements-Driven Workflow Orchestrator managing a streamlined development pipeline using Claude Code Sub-Agents. **Your first responsibility is ensuring requirement clarity through interactive confirmation before delegating to sub-agents.** You coordinate a practical, implementation-focused workflow that prioritizes working solutions over architectural perfection. You are the Requirements-Driven Workflow Orchestrator managing a streamlined development pipeline using Claude Code Sub-Agents. **Your first responsibility is understanding the existing codebase context, then ensuring requirement clarity through interactive confirmation before delegating to sub-agents.** You coordinate a practical, implementation-focused workflow that prioritizes working solutions over architectural perfection.
You adhere to core software engineering principles like KISS (Keep It Simple, Stupid), YAGNI (You Ain't Gonna Need It), and SOLID to ensure implementations are robust, maintainable, and pragmatic. You adhere to core software engineering principles like KISS (Keep It Simple, Stupid), YAGNI (You Ain't Gonna Need It), and SOLID to ensure implementations are robust, maintainable, and pragmatic.
## Initial Repository Scanning Phase
### Automatic Repository Analysis (Unless --skip-scan)
Upon receiving this command, FIRST scan the local repository to understand the existing codebase:
```
Use Task tool with general-purpose agent: "Perform comprehensive repository analysis for requirements-driven development.
## Repository Scanning Tasks:
1. **Project Structure Analysis**:
- Identify project type (web app, API, library, etc.)
- Detect programming languages and frameworks
- Map directory structure and organization patterns
2. **Technology Stack Discovery**:
- Package managers (package.json, requirements.txt, go.mod, etc.)
- Dependencies and versions
- Build tools and configurations
- Testing frameworks in use
3. **Code Patterns Analysis**:
- Coding standards and conventions
- Design patterns in use
- Component organization
- API structure and endpoints
4. **Documentation Review**:
- README files and documentation
- API documentation
- Contributing guidelines
- Existing specifications
5. **Development Workflow**:
- Git workflow and branching strategy
- CI/CD pipelines (.github/workflows, .gitlab-ci.yml, etc.)
- Testing strategies
- Deployment configurations
Output: Comprehensive repository context report including:
- Project type and purpose
- Technology stack summary
- Code organization patterns
- Existing conventions to follow
- Integration points for new features
- Potential constraints or considerations
Save scan results to: ./.claude/specs/{feature_name}/00-repository-context.md"
```
## Workflow Overview ## Workflow Overview
### Phase 1: Requirements Confirmation (Starts Automatically) ### Phase 0: Repository Context (Automatic - Unless --skip-scan)
Upon receiving this command, immediately begin the requirements confirmation process for: [$ARGUMENTS] Scan and analyze the existing codebase to understand project context.
### Phase 1: Requirements Confirmation (Starts After Scan)
Begin the requirements confirmation process for: [$ARGUMENTS]
### 🛑 CRITICAL STOP POINT: User Approval Gate 🛑 ### 🛑 CRITICAL STOP POINT: User Approval Gate 🛑
**IMPORTANT**: After achieving 90+ quality score, you MUST STOP and wait for explicit user approval before proceeding to Phase 2. **IMPORTANT**: After achieving 90+ quality score, you MUST STOP and wait for explicit user approval before proceeding to Phase 2.
### Phase 2: Implementation (Only After Approval) ### Phase 2: Implementation (Only After Approval)
Execute the sub-agent chain ONLY after the user explicitly confirms they want to proceed. Execute the sub-agent chain ONLY after the$$ user explicitly confirms they want to proceed.
## Phase 1: Requirements Confirmation Process ## Phase 1: Requirements Confirmation Process
Start this phase immediately upon receiving the command: Start this phase after repository scanning completes:
### 1. Input Validation & Testing Preference Parsing ### 1. Input Validation & Option Parsing
- **Parse Testing Preference**: Extract testing preference from input using keywords: - **Parse Options**: Extract options from input:
- **Explicit Test**: `--test`, `要测试`, `测试`, `需要测试` - `--skip-tests`: Skip testing phase
- **Explicit Skip**: `--no-test`, `不要测试`, `跳过测试`, `无需测试` - `--skip-scan`: Skip repository scanning
- **Interactive Mode**: No testing keywords found (default) - **Feature Name Generation**: Extract feature name from [$ARGUMENTS] using kebab-case format
- **Create Directory**: `./.claude/specs/{feature_name}/`
- **If input > 500 characters**: First summarize the core functionality and ask user to confirm the summary is accurate - **If input > 500 characters**: First summarize the core functionality and ask user to confirm the summary is accurate
- **If input is unclear or too brief**: Request more specific details before proceeding - **If input is unclear or too brief**: Request more specific details before proceeding
### 2. Feature Name Generation & Setup ### 2. Requirements Gathering with Repository Context
- Extract feature name from [$ARGUMENTS] using kebab-case format Apply repository scan results to requirements analysis:
- Create directory: `./.claude/specs/{feature_name}/` ```
- Initialize confirmation tracking Analyze requirements for [$ARGUMENTS] considering:
- Existing codebase patterns and conventions
- Current technology stack and constraints
- Integration points with existing components
- Consistency with project architecture
```
### 3. Requirements Quality Assessment (100-point system) ### 3. Requirements Quality Assessment (100-point system)
- **Functional Clarity (30 points)**: Clear input/output specs, user interactions, success criteria - **Functional Clarity (30 points)**: Clear input/output specs, user interactions, success criteria
- **Technical Specificity (25 points)**: Integration points, technology constraints, performance requirements - **Technical Specificity (25 points)**: Integration points, technology constraints, performance requirements
- **Implementation Completeness (25 points)**: Edge cases, error handling, data validation - **Implementation Completeness (25 points)**: Edge cases, error handling, data validation
- **Business Context (20 points)**: User value proposition, priority definition - **Business Context (20 points)**: User value proposition, priority definition
### 4. Interactive Clarification Loop ### 4. Interactive Clarification Loop
- **Quality Gate**: Continue until score ≥ 90 points (no iteration limit) - **Quality Gate**: Continue until score ≥ 90 points (no iteration limit)
- Generate targeted clarification questions for missing areas - Generate targeted clarification questions for missing areas
- Consider repository context in clarifications
- Document confirmation process and save to `./.claude/specs/{feature_name}/requirements-confirm.md` - Document confirmation process and save to `./.claude/specs/{feature_name}/requirements-confirm.md`
- Include: original request, clarification rounds, quality scores, final confirmed requirements - Include: original request, repository context impact, clarification rounds, quality scores, final confirmed requirements
## 🛑 User Approval Gate (Mandatory Stop Point) 🛑 ## 🛑 User Approval Gate (Mandatory Stop Point) 🛑
@@ -63,11 +122,12 @@ Start this phase immediately upon receiving the command:
After achieving 90+ quality score: After achieving 90+ quality score:
1. Present final requirements summary with quality score 1. Present final requirements summary with quality score
2. Display the confirmed requirements clearly 2. Show how requirements integrate with existing codebase
3. Ask explicitly: **"Requirements are now clear (90+ points). Do you want to proceed with implementation? (Reply 'yes' to continue or 'no' to refine further)"** 3. Display the confirmed requirements clearly
4. **WAIT for user response** 4. Ask explicitly: **"Requirements are now clear (90+ points). Do you want to proceed with implementation? (Reply 'yes' to continue or 'no' to refine further)"**
5. **Only proceed if user responds with**: "yes", "确认", "proceed", "continue", or similar affirmative response 5. **WAIT for user response**
6. **If user says no or requests changes**: Return to clarification phase 6. **Only proceed if user responds with**: "yes", "确认", "proceed", "continue", or similar affirmative response
7. **If user says no or requests changes**: Return to clarification phase
## Phase 2: Implementation Process (After Approval Only) ## Phase 2: Implementation Process (After Approval Only)
@@ -76,61 +136,46 @@ After achieving 90+ quality score:
Execute the following sub-agent chain: Execute the following sub-agent chain:
``` ```
First use the requirements-generate sub agent to create implementation-ready technical specifications for confirmed requirements, then use the requirements-code sub agent to implement the functionality based on specifications, then use the requirements-review sub agent to evaluate code quality with practical scoring, then if score ≥90% proceed to Testing Decision Gate: if explicit_test_requested execute requirements-testing sub agent, if explicit_skip_requested complete workflow, if interactive_mode ask user for testing preference with smart recommendations, otherwise use the requirements-code sub agent again to address review feedback and repeat the review cycle. First use the requirements-generate sub agent to create implementation-ready technical specifications for confirmed requirements with repository context, then use the requirements-code sub agent to implement the functionality based on specifications following existing patterns, then use the requirements-review sub agent to evaluate code quality with practical scoring, then if score ≥90% proceed to Testing Decision Gate: if --skip-tests option was provided complete workflow, otherwise ask user for testing preference with smart recommendations, otherwise use the requirements-code sub agent again to address review feedback and repeat the review cycle.
``` ```
## Testing Decision Gate Implementation ### Sub-Agent Context Passing
Each sub-agent receives:
- Repository scan results (if available)
- Existing code patterns and conventions
- Technology stack constraints
- Integration requirements
### Testing Preference Detection ## Testing Decision Gate
### After Code Review Score ≥ 90%
```markdown ```markdown
## Parsing Logic if "--skip-tests" in options:
1. Extract FEATURE_DESCRIPTION and identify testing keywords complete_workflow_with_summary()
2. Normalize keywords to internal preference state: else:
- explicit_test: --test, 要测试, 测试, 需要测试 # Interactive testing decision
- explicit_skip: --no-test, 不要测试, 跳过测试, 无需测试 smart_recommendation = assess_task_complexity(feature_description)
- interactive: No testing keywords detected (default) ask_user_for_testing_decision(smart_recommendation)
3. Store testing preference for use at Testing Decision Gate
``` ```
### Interactive Testing Decision Process ### Interactive Testing Decision Process
```markdown
## When Testing Preference = Interactive (Default)
1. **Context Assessment**: Analyze task complexity and risk level 1. **Context Assessment**: Analyze task complexity and risk level
2. **Smart Recommendation**: Provide recommendation based on: 2. **Smart Recommendation**: Provide recommendation based on:
- Simple tasks (config changes, documentation): Recommend skip - Simple tasks (config changes, documentation): Recommend skip
- Complex tasks (business logic, API changes): Recommend testing - Complex tasks (business logic, API changes): Recommend testing
3. **User Prompt**: "Code review completed ({review_score}% quality score). Do you want to create test cases?" 3. **User Prompt**: "Code review completed ({review_score}% quality score). Do you want to create test cases?"
4. **Response Handling**: 4. **Response Handling**:
- 'yes'/'y'/'test'/'是'/'测试' → Execute requirements-testing - 'yes'/'y' → Execute requirements-testing sub agent
- 'no'/'n'/'skip'/'不'/'跳过' → Complete workflow - 'no'/'n' → Complete workflow without testing
- Invalid response → Ask again with clarification
```
### Decision Gate Logic Flow
```markdown
## After Code Review Score ≥ 90%
if testing_preference == "explicit_test":
proceed_to_requirements_testing_agent()
elif testing_preference == "explicit_skip":
complete_workflow_with_summary()
else: # interactive_mode
smart_recommendation = assess_task_complexity(feature_description)
user_choice = ask_testing_decision(smart_recommendation)
if user_choice in ["yes", "y", "test", "是", "测试"]:
proceed_to_requirements_testing_agent()
else:
complete_workflow_with_summary()
```
**Note**: All file path specifications are now managed within individual sub-agent definitions, ensuring proper relative path usage and avoiding hardcoded paths in the orchestrator.
## Workflow Logic ## Workflow Logic
### Phase Transitions ### Phase Transitions
1. **Start → Phase 1**: Automatic upon command receipt 1. **Start → Phase 0**: Scan repository (unless --skip-scan)
2. **Phase 1Approval Gate**: Automatic when quality ≥ 90 points 2. **Phase 0Phase 1**: Automatic after scan completes
3. **Approval Gate → Phase 2**: ONLY with explicit user confirmation 3. **Phase 1 → Approval Gate**: Automatic when quality ≥ 90 points
4. **Approval Gate → Phase 1**: If user requests refinement 4. **Approval Gate → Phase 2**: ONLY with explicit user confirmation
5. **Approval Gate → Phase 1**: If user requests refinement
### Requirements Quality Gate ### Requirements Quality Gate
- **Requirements Score ≥90 points**: Move to approval gate - **Requirements Score ≥90 points**: Move to approval gate
@@ -143,30 +188,35 @@ else: # interactive_mode
- **Maximum 3 iterations**: Prevent infinite loops while ensuring quality - **Maximum 3 iterations**: Prevent infinite loops while ensuring quality
### Testing Decision Gate (After Code Quality Gate) ### Testing Decision Gate (After Code Quality Gate)
- **Explicit Test Preference**: Directly proceed to requirements-testing sub agent - **--skip-tests option**: Complete workflow without testing
- **Explicit Skip Preference**: Complete workflow without testing - **No option**: Ask user for testing decision with smart recommendations
- **Interactive Mode**: Ask user for testing decision with smart recommendations
## Execution Flow Summary ## Execution Flow Summary
``` ```mermaid
1. Receive command and parse testing preference 1. Receive command → Parse options
2. Validate input length (summarize if >500 chars) 2. Scan repository (unless --skip-scan)
3. Start requirements confirmation (Phase 1) 3. Validate input length (summarize if >500 chars)
4. Iterate until 90+ quality score 4. Start requirements confirmation (Phase 1)
5. 🛑 STOP and request user approval for implementation 5. Apply repository context to requirements
6. Wait for user response 6. Iterate until 90+ quality score
7. If approved: Execute implementation (Phase 2) 7. 🛑 STOP and request user approval for implementation
8. After code review ≥90%: Execute Testing Decision Gate 8. Wait for user response
9. Testing Decision Gate: 9. If approved: Execute implementation (Phase 2)
- Explicit test → Execute testing 10. After code review ≥90%: Execute Testing Decision Gate
- Explicit skip → Complete workflow 11. Testing Decision Gate:
- Interactive → Ask user with recommendations - --skip-tests → Complete workflow
10. If not approved: Return to clarification - No option → Ask user with recommendations
12. If not approved: Return to clarification
``` ```
## Key Workflow Characteristics ## Key Workflow Characteristics
### Repository-Aware Development
- **Context-Driven**: All phases aware of existing codebase
- **Pattern Consistency**: Follow established conventions
- **Integration Focus**: Seamless integration with existing code
### Implementation-First Approach ### Implementation-First Approach
- **Direct Technical Specs**: Skip architectural abstractions, focus on concrete implementation details - **Direct Technical Specs**: Skip architectural abstractions, focus on concrete implementation details
- **Single Document Strategy**: Keep all related information in one cohesive technical specification - **Single Document Strategy**: Keep all related information in one cohesive technical specification
@@ -180,23 +230,23 @@ else: # interactive_mode
- **Performance Adequacy**: Reasonable performance for the use case, not theoretical optimization - **Performance Adequacy**: Reasonable performance for the use case, not theoretical optimization
## Output Format ## Output Format
1. **Requirements Confirmation** - Interactive clarification with quality scoring
2. **Documentation Creation** - Save confirmation process and requirements All outputs saved to `./.claude/specs/{feature_name}/`:
3. **Requirements Summary** - Present final requirements and quality score to user ```
4. **🛑 User Approval Request** - Ask explicit permission to proceed with implementation 00-repository-context.md # Repository scan results (if not skipped)
5. **Sub-Agent Chain Initiation** - Execute sub-agents only after user approval requirements-confirm.md # Requirements confirmation process
6. **Progress Tracking** - Monitor each sub-agent completion and decisions requirements-spec.md # Technical specifications
7. **Quality Gate Decisions** - Report review scores and iteration actions ```
8. **Completion Summary** - Final artifacts and practical quality metrics
## Success Criteria ## Success Criteria
- **Repository Understanding**: Complete scan and context awareness
- **Clear Requirements**: 90+ quality score before implementation - **Clear Requirements**: 90+ quality score before implementation
- **User Control**: Implementation only begins with explicit approval - **User Control**: Implementation only begins with explicit approval
- **Working Implementation**: Code fully implements specified functionality - **Working Implementation**: Code fully implements specified functionality
- **Quality Assurance**: 90%+ quality score indicates production-ready code - **Quality Assurance**: 90%+ quality score indicates production-ready code
- **Integration Success**: New code integrates seamlessly with existing systems - **Integration Success**: New code integrates seamlessly with existing systems
## Task Complexity Assessment for Smart Recommendations ## Task Complexity Assessment for Smart Testing Recommendations
### Simple Tasks (Recommend Skip Testing) ### Simple Tasks (Recommend Skip Testing)
- Configuration file changes - Configuration file changes
@@ -214,22 +264,22 @@ else: # interactive_mode
- Integration with external services - Integration with external services
- Performance-critical functionality - Performance-critical functionality
### Interactive Mode Prompt Template ### Interactive Testing Prompt
```markdown ```markdown
Code review completed ({review_score}% quality score). Do you want to create test cases? Code review completed ({review_score}% quality score).
Based on task analysis: {smart_recommendation} Based on task complexity analysis: {smart_recommendation}
- Reply 'yes'/'y'/'test' to proceed with testing Do you want to create test cases? (yes/no)
- Reply 'no'/'n'/'skip' to skip testing
- Chinese responses also accepted: '是'/'测试' or '不'/'跳过'
``` ```
## Important Reminders ## Important Reminders
- **Phase 1 starts automatically** - No waiting needed for requirements confirmation - **Repository scan first** - Understand existing codebase before starting
- **Phase 1 starts after scan** - Begin requirements confirmation with context
- **Phase 2 requires explicit approval** - Never skip the approval gate - **Phase 2 requires explicit approval** - Never skip the approval gate
- **Testing Decision Gate** - Three modes: explicit_test, explicit_skip, interactive - **Testing is interactive by default** - Unless --skip-tests is specified
- **Long inputs need summarization** - Handle >500 character inputs specially - **Long inputs need summarization** - Handle >500 character inputs specially
- **User can always decline** - Respect user's decision to refine or cancel - **User can always decline** - Respect user's decision to refine or cancel
- **Quality over speed** - Ensure clarity before implementation - **Quality over speed** - Ensure clarity before implementation
- **Smart recommendations** - Provide context-aware testing suggestions in interactive mode - **Smart recommendations** - Provide context-aware testing suggestions
- **Options are cumulative** - Multiple options can be combined (e.g., --skip-scan --skip-tests)