Docs: sync READMEs with actual commands/agents; remove nonexistent commands; enhance requirements-pilot with testing decision gate and options.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: ben chen
Date: 2025-08-05 12:03:06 +08:00
Parent: 18042ae72e
Commit: 6960e7af52
3 changed files with 174 additions and 72 deletions

## Usage
`/requirements-pilot <FEATURE_DESCRIPTION> [TESTING_PREFERENCE]`
### Testing Control Options
- **Explicit Test**: Include `--test`, `要测试`, `测试` to force testing execution
- **Explicit Skip**: Include `--no-test`, `不要测试`, `跳过测试` to skip testing phase
- **Interactive Mode**: Default behavior when no testing keyword is given; the user is asked at the Testing Decision Gate
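For example (the feature descriptions below are hypothetical):
```
/requirements-pilot Add rate limiting to the login endpoint --test
/requirements-pilot Update the deployment config --no-test
/requirements-pilot Refactor the payment service      # no keyword → interactive mode
```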
## Context
- Feature to develop: $ARGUMENTS
Start this phase immediately upon receiving the command:
### 1. Input Validation & Testing Preference Parsing
- **Parse Testing Preference**: Extract testing preference from input using keywords:
  - **Explicit Test**: `--test`, `要测试`, `测试`, `需要测试`
  - **Explicit Skip**: `--no-test`, `不要测试`, `跳过测试`, `无需测试`
  - **Interactive Mode**: No testing keywords found (default)
- **If input > 500 characters**: First summarize the core functionality and ask the user to confirm the summary is accurate
- **If input is unclear or too brief**: Request more specific details before proceeding
Execute the following sub-agent chain:
```
First use the requirements-generate sub agent to create implementation-ready technical specifications for confirmed requirements, then use the requirements-code sub agent to implement the functionality based on specifications, then use the requirements-review sub agent to evaluate code quality with practical scoring, then if score ≥90% proceed to Testing Decision Gate: if explicit_test_requested execute requirements-testing sub agent, if explicit_skip_requested complete workflow, if interactive_mode ask user for testing preference with smart recommendations, otherwise use the requirements-code sub agent again to address review feedback and repeat the review cycle.
```
## Testing Decision Gate Implementation
### Testing Preference Detection
```markdown
## Parsing Logic
1. Extract FEATURE_DESCRIPTION and identify testing keywords
2. Normalize keywords to internal preference state:
   - explicit_test: --test, 要测试, 测试, 需要测试
   - explicit_skip: --no-test, 不要测试, 跳过测试, 无需测试
   - interactive: No testing keywords detected (default)
3. Store testing preference for use at Testing Decision Gate
```
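A minimal Python sketch of this parsing step, assuming the raw input arrives as a single string; the function and variable names are illustrative, not part of the command spec:

```python
SKIP_KEYWORDS = ["--no-test", "不要测试", "跳过测试", "无需测试"]
TEST_KEYWORDS = ["--test", "要测试", "测试", "需要测试"]

def parse_testing_preference(arguments: str) -> tuple[str, str]:
    """Split raw command input into (feature_description, preference)."""
    text = arguments.strip()
    # Skip keywords are checked first: "--no-test" contains "--test",
    # and "不要测试"/"跳过测试" contain "测试", so order matters.
    for keyword in SKIP_KEYWORDS:
        if keyword in text:
            return text.replace(keyword, "").strip(), "explicit_skip"
    for keyword in TEST_KEYWORDS:
        if keyword in text:
            return text.replace(keyword, "").strip(), "explicit_test"
    return text, "interactive"  # default: decide at the Testing Decision Gate
```

The >500-character length check then runs on the returned feature description before Phase 1 begins.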
### Interactive Testing Decision Process
```markdown
## When Testing Preference = Interactive (Default)
1. **Context Assessment**: Analyze task complexity and risk level
2. **Smart Recommendation**: Provide recommendation based on:
   - Simple tasks (config changes, documentation): Recommend skip
   - Complex tasks (business logic, API changes): Recommend testing
3. **User Prompt**: "Code review completed ({review_score}% quality score). Do you want to create test cases?"
4. **Response Handling**:
   - 'yes'/'y'/'test'/'是'/'测试' → Execute requirements-testing
   - 'no'/'n'/'skip'/'不'/'跳过' → Complete workflow
   - Invalid response → Ask again with clarification
```
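The same decision process as a Python sketch (helper names are illustrative):

```python
YES_REPLIES = {"yes", "y", "test", "是", "测试"}
NO_REPLIES = {"no", "n", "skip", "不", "跳过"}

def ask_testing_decision(review_score: int, smart_recommendation: str) -> bool:
    """Prompt until a recognizable answer arrives; True means 'create tests'."""
    print(f"Code review completed ({review_score}% quality score). "
          "Do you want to create test cases?")
    print(f"Based on task analysis: {smart_recommendation}")
    while True:
        reply = input("> ").strip().lower()
        if reply in YES_REPLIES:
            return True
        if reply in NO_REPLIES:
            return False
        # Invalid response: ask again with clarification
        print("Please reply 'yes'/'y'/'test' (是/测试) or 'no'/'n'/'skip' (不/跳过).")
```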
### Decision Gate Logic Flow
```markdown
## After Code Review Score ≥ 90%
if testing_preference == "explicit_test":
    proceed_to_requirements_testing_agent()
elif testing_preference == "explicit_skip":
    complete_workflow_with_summary()
else:  # interactive_mode
    smart_recommendation = assess_task_complexity(feature_description)
    user_choice = ask_testing_decision(smart_recommendation)
    if user_choice in ["yes", "y", "test", "是", "测试"]:
        proceed_to_requirements_testing_agent()
    else:
        complete_workflow_with_summary()
```
**Note**: All file path specifications are now managed within individual sub-agent definitions, ensuring proper relative path usage and avoiding hardcoded paths in the orchestrator.
- **No iteration limit**: Quality-driven approach ensures requirement clarity
### Code Quality Gate (Phase 2 Only)
- **Review Score ≥90%**: Proceed to Testing Decision Gate
- **Review Score <90%**: Loop back to requirements-code sub agent with feedback
- **Maximum 3 iterations**: Prevent infinite loops while ensuring quality
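The review loop and its cap, as a sketch; the two stub functions are stand-ins for the requirements-review and requirements-code sub agents, not real APIs:

```python
def review_code() -> int: ...                # stand-in: requirements-review sub agent
def revise_code(feedback: str) -> None: ...  # stand-in: requirements-code sub agent

def code_quality_gate(max_iterations: int = 3) -> bool:
    """True once the review score reaches 90; False when the cap is hit."""
    for _ in range(max_iterations):
        score = review_code()
        if score >= 90:
            return True                      # hand off to the Testing Decision Gate
        revise_code("address review feedback")
    return False                             # stop rather than loop forever
```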
### Testing Decision Gate (After Code Quality Gate)
- **Explicit Test Preference**: Directly proceed to requirements-testing sub agent
- **Explicit Skip Preference**: Complete workflow without testing
- **Interactive Mode**: Ask user for testing decision with smart recommendations
## Execution Flow Summary
```
1. Receive command and parse testing preference
2. Validate input length (summarize if >500 chars)
3. Start requirements confirmation (Phase 1)
4. Iterate until 90+ quality score
5. 🛑 STOP and request user approval for implementation
6. Wait for user response
7. If approved: Execute implementation (Phase 2)
8. After code review ≥90%: Execute Testing Decision Gate
9. Testing Decision Gate:
   - Explicit test → Execute testing
   - Explicit skip → Complete workflow
   - Interactive → Ask user with recommendations
10. If not approved at step 5: Return to clarification
```
## Key Workflow Characteristics
- **Quality Assurance**: 90%+ quality score indicates production-ready code
- **Integration Success**: New code integrates seamlessly with existing systems
## Task Complexity Assessment for Smart Recommendations
### Simple Tasks (Recommend Skip Testing)
- Configuration file changes
- Documentation updates
- Simple utility functions
- UI text/styling changes
- Basic data structure additions
- Environment variable updates
### Complex Tasks (Recommend Testing)
- Business logic implementation
- API endpoint changes
- Database schema modifications
- Authentication/authorization features
- Integration with external services
- Performance-critical functionality
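One plausible implementation of the smart recommendation is a keyword heuristic over the feature description (a sketch; the hint lists are illustrative, not exhaustive):

```python
SIMPLE_HINTS = ["config", "documentation", "readme", "styling",
                "environment variable", "text change"]
COMPLEX_HINTS = ["business logic", "api", "endpoint", "schema",
                 "auth", "integration", "performance", "database"]

def assess_task_complexity(feature_description: str) -> str:
    """Map a feature description to a testing recommendation string."""
    text = feature_description.lower()
    if any(hint in text for hint in COMPLEX_HINTS):
        return "Recommendation: create tests (complex or risk-sensitive change)."
    if any(hint in text for hint in SIMPLE_HINTS):
        return "Recommendation: skip tests (low-risk change)."
    return "Recommendation: no strong signal; decide based on risk tolerance."
```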
### Interactive Mode Prompt Template
```markdown
Code review completed ({review_score}% quality score). Do you want to create test cases?
Based on task analysis: {smart_recommendation}
- Reply 'yes'/'y'/'test' to proceed with testing
- Reply 'no'/'n'/'skip' to skip testing
- Chinese responses also accepted: '是'/'测试' or '不'/'跳过'
```
## Important Reminders
- **Phase 1 starts automatically** - No waiting needed for requirements confirmation
- **Phase 2 requires explicit approval** - Never skip the approval gate
- **Testing Decision Gate** - Three modes: explicit_test, explicit_skip, interactive
- **Long inputs need summarization** - Handle >500 character inputs specially
- **User can always decline** - Respect user's decision to refine or cancel
- **Quality over speed** - Ensure clarity before implementation
- **Smart recommendations** - Provide context-aware testing suggestions in interactive mode