Docs: sync READMEs with actual commands/agents; remove nonexistent commands; enhance requirements-pilot with testing decision gate and options.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
ben chen
2025-08-05 12:03:06 +08:00
parent 18042ae72e
commit 6960e7af52
3 changed files with 174 additions and 72 deletions


@@ -60,22 +60,25 @@ requirements-generate → requirements-code → requirements-review → (≥90%?
**Architecture**: Individual slash commands for targeted expertise
```bash
/ask # Technical consultation and architecture guidance
/spec # Interactive requirements → design → tasks workflow
/code # Feature implementation with constraints
/debug # Systematic problem analysis using UltraThink
/test # Comprehensive testing strategy
/review # Multi-dimensional code validation
/optimize # Performance optimization coordination
/bugfix # Bug resolution workflows
/refactor # Code refactoring coordination
/docs # Documentation generation
/think # Advanced thinking and analysis
```
**Progression Example**:
```bash
# Step-by-step development with manual control
/ask "Help me understand microservices architecture requirements"
/spec "Generate API gateway specifications"
/code "Implement gateway with rate limiting"
/test "Create load testing suite"
/review "Validate security and performance"
/optimize "Enhance performance for production"
```
## 🚀 Quick Start
@@ -85,8 +88,8 @@ requirements-generate → requirements-code → requirements-review → (≥90%?
Clone or copy the configuration structure:
```bash
# Your project directory
├── commands/ # 12 specialized slash commands
├── agents/ # 7 expert agent configurations
├── commands/ # 11 specialized slash commands
├── agents/ # 9 expert agent configurations
└── CLAUDE.md # Project-specific guidelines
```
@@ -100,8 +103,9 @@ requirements-generate → requirements-code → requirements-review → (≥90%?
**Manual Development Flow**:
```bash
/ask "可扩展微服务的设计原则"
/spec "OAuth2服务规格说明"
/code "实现OAuth2遵循安全最佳实践"
/test "创建全面测试套件"
/review "验证实现质量"
```
### 3. Expected Outputs
@@ -118,12 +122,14 @@ requirements-generate → requirements-code → requirements-review → (≥90%?
### Core Components
#### **Commands Directory** (`/commands/`)
- **Specification**: `/spec` - Interactive requirements → design → tasks
- **Consultation**: `/ask` - Architecture guidance (no code changes)
- **Implementation**: `/code` - Feature development with constraints
- **Quality Assurance**: `/test`, `/review`, `/debug`
- **Optimization**: `/optimize`, `/refactor`
- **Operations**: `/deploy-check`, `/cicd`
- **Bug Resolution**: `/bugfix` - Systematic bug fixing workflows
- **Documentation**: `/docs` - Documentation generation
- **Analysis**: `/think` - Advanced thinking and analysis
- **Requirements**: `/requirements-pilot` - Complete requirements-driven workflow
#### **Agents Directory** (`/agents/`)
- **requirements-generate**: Technical specification generation optimized for code generation
@@ -185,11 +191,11 @@ requirements-generate → requirements-code → requirements-review → (≥90%?
/ask "高性能API网关的设计考虑"
# (交互式咨询阶段)
/spec "微服务API网关支持限流和熔断器"
# (规格生成阶段)
/code "基于规格说明实现网关"
/code "实现微服务API网关支持限流和熔断器"
# (实现阶段)
/test "为网关创建全面测试套件"
# (测试阶段)
```
**结果**:
@@ -205,13 +211,12 @@ requirements-generate → requirements-code → requirements-review → (≥90%?
# Debug → Fix → Validate workflow
First use debug to analyze [performance issue],
then use code to implement fixes,
then use spec-validation to ensure quality
then use review to ensure quality
# Complete development + optimization pipeline
First use spec-generation for [feature],
then spec-executor for implementation,
then spec-validation for quality check,
then if score ≥95% use spec-testing,
First use requirements-pilot for [feature development],
then use review for quality validation,
then if score ≥95% use test for comprehensive testing,
finally use optimize for production readiness
```
@@ -219,7 +224,7 @@ requirements-generate → requirements-code → requirements-review → (≥90%?
```bash
# Iterative quality improvement
First use spec-validation to score [existing code],
First use review to score [existing code],
then if score <95% use code to improve based on feedback,
repeat until quality threshold achieved
```
@@ -261,9 +266,8 @@ requirements-generate → requirements-code → requirements-review → (≥90%?
```bash
# Custom workflow with specific quality requirements
First use spec-generation with [strict security requirements],
then spec-executor with [performance constraints],
then spec-validation with [90% minimum threshold],
First use requirements-pilot with [strict security requirements and performance constraints],
then use review to validate with [90% minimum threshold],
continue optimization until threshold met
```
@@ -283,7 +287,11 @@ requirements-generate → requirements-code → requirements-review → (≥90%?
### Optimization Commands
- `/optimize` - Performance optimization coordination
- `/refactor` - Code refactoring with quality gates
- `/deploy-check` - Deployment readiness validation
### Additional Commands
- `/bugfix` - Bug resolution workflows
- `/docs` - Documentation generation
- `/think` - Advanced thinking and analysis
## 🤝 Contributing
@@ -363,38 +371,32 @@ The AI will ask in-depth questions
- Rate limiting and circuit breaker mechanisms
- Monitoring and logging requirements
**Step 2 - Specification Generation**:
**Step 2 - Requirements-Driven Development**:
```bash
/spec "Based on the discussion, generate complete API gateway specifications"
/requirements-pilot "Based on the discussion, implement the complete API gateway functionality"
```
Generated content:
- **requirements.md** - Clear user stories and acceptance criteria
- **design.md** - Architecture design for high concurrency
- **tasks.md** - Detailed development task breakdown
**Step 3 - Automated Implementation**:
```bash
# Specification-driven automated implementation
First use spec-executor to implement code based on the specifications,
then use spec-validation to verify quality,
then if score ≥95% use spec-testing to generate tests
```
Automated execution:
- **Requirements confirmation** - Interactive clarification and quality assessment
- **Technical specification** - Architecture design for high concurrency
- **Code implementation** - Detailed feature implementation
- **Quality validation** - Multi-dimensional quality assessment
- **Test suite** - Functional and performance tests
## 💡 Best Practices
### 1. Clarify Requirements First
**Don't rush into /spec; use /ask for thorough discussion first**
**Don't rush into implementation; use /ask for thorough discussion first**
```bash
# Wrong approach: start immediately
/spec "User management system"
/requirements-pilot "User management system"
# Correct approach: understand the requirements first
/ask "What aspects should an enterprise user management system consider?"
# After 3-5 rounds of dialogue to clarify requirements
/spec "Based on the discussion, generate enterprise user management system specifications"
/requirements-pilot "Based on the discussion, implement the enterprise user management system"
```
### 2. Progressive Complexity
@@ -418,11 +420,10 @@ AI会深入询问
```bash
# Set higher quality requirements
First use spec-generation to generate specifications,
then spec-executor to implement,
then spec-validation to verify,
First use requirements-pilot to implement the feature,
then use review to validate quality,
then if score <98% continue optimizing,
once the threshold is met, use spec-testing and optimize
once the threshold is met, use test and optimize to finish up
```
## 🔍 Deep Dive: Why Does This Work Better?


@@ -60,22 +60,25 @@ then if score ≥90% use requirements-testing
**Architecture**: Individual slash commands for targeted expertise
```bash
/ask # Technical consultation and architecture guidance
/spec # Interactive requirements → design → tasks workflow
/code # Feature implementation with constraints
/debug # Systematic problem analysis using UltraThink
/test # Comprehensive testing strategy
/review # Multi-dimensional code validation
/optimize # Performance optimization coordination
/bugfix # Bug resolution workflows
/refactor # Code refactoring coordination
/docs # Documentation generation
/think # Advanced thinking and analysis
```
**Progression Example**:
```bash
# Step-by-step development with manual control
/ask "Help me understand microservices architecture requirements"
/spec "Generate API gateway specifications"
/code "Implement gateway with rate limiting"
/test "Create load testing suite"
/review "Validate security and performance"
/optimize "Enhance performance for production"
```
## 🚀 Quick Start
@@ -85,8 +88,8 @@ then if score ≥90% use requirements-testing
Clone or copy the configuration structure:
```bash
# Your project directory
├── commands/ # 12 specialized slash commands
├── agents/ # 7 expert agent configurations
├── commands/ # 11 specialized slash commands
├── agents/ # 9 expert agent configurations
└── CLAUDE.md # Project-specific guidelines
```
@@ -94,14 +97,15 @@ Clone or copy the configuration structure:
**Complete Feature Development**:
```bash
/spec-workflow "Implement OAuth2 authentication with refresh tokens"
/requirements-pilot "Implement OAuth2 authentication with refresh tokens"
```
**Manual Development Flow**:
```bash
/ask "Design principles for scalable microservices"
/spec "OAuth2 service specifications"
/code "Implement OAuth2 with security best practices"
/test "Create comprehensive test suite"
/review "Validate implementation quality"
```
### 3. Expected Outputs
@@ -118,12 +122,14 @@ Clone or copy the configuration structure:
### Core Components
#### **Commands Directory** (`/commands/`)
- **Specification**: `/spec` - Interactive requirements → design → tasks
- **Consultation**: `/ask` - Architecture guidance (no code changes)
- **Implementation**: `/code` - Feature development with constraints
- **Quality Assurance**: `/test`, `/review`, `/debug`
- **Optimization**: `/optimize`, `/refactor`
- **Operations**: `/deploy-check`, `/cicd`
- **Bug Resolution**: `/bugfix` - Systematic bug fixing workflows
- **Documentation**: `/docs` - Documentation generation
- **Analysis**: `/think` - Advanced thinking and analysis
- **Requirements**: `/requirements-pilot` - Complete requirements-driven workflow
#### **Agents Directory** (`/agents/`)
- **requirements-generate**: Technical specification generation optimized for code generation
@@ -185,11 +191,11 @@ Clone or copy the configuration structure:
/ask "Design considerations for high-performance API gateway"
# (Interactive consultation phase)
/spec "Microservices API gateway with rate limiting and circuit breakers"
# (Specification generation)
/code "Implement gateway based on specifications"
/code "Implement microservices API gateway with rate limiting and circuit breakers"
# (Implementation phase)
/test "Create comprehensive test suite for gateway"
# (Testing phase)
```
**Results**:
@@ -205,13 +211,12 @@ Clone or copy the configuration structure:
# Debug → Fix → Validate workflow
First use debug to analyze [performance issue],
then use code to implement fixes,
then use spec-validation to ensure quality
then use review to ensure quality
# Complete development + optimization pipeline
First use spec-generation for [feature],
then spec-executor for implementation,
then spec-validation for quality check,
then if score ≥95% use spec-testing,
First use requirements-pilot for [feature development],
then use review for quality validation,
then if score ≥95% use test for comprehensive testing,
finally use optimize for production readiness
```
@@ -219,7 +224,7 @@ finally use optimize for production readiness
```bash
# Iterative quality improvement
First use spec-validation to score [existing code],
First use review to score [existing code],
then if score <95% use code to improve based on feedback,
repeat until quality threshold achieved
```
@@ -261,9 +266,8 @@ Quality feedback drives automatic specification refinement, creating intelligent
```bash
# Custom workflow with specific quality requirements
First use requirements-generate with [strict security requirements],
then requirements-code with [performance constraints],
then requirements-review with [90% minimum threshold],
First use requirements-pilot with [strict security requirements and performance constraints],
then use review to validate with [90% minimum threshold],
continue optimization until threshold met
```
@@ -283,7 +287,11 @@ continue optimization until threshold met
### Optimization Commands
- `/optimize` - Performance optimization coordination
- `/refactor` - Code refactoring with quality gates
- `/deploy-check` - Deployment readiness validation
### Additional Commands
- `/bugfix` - Bug resolution workflows
- `/docs` - Documentation generation
- `/think` - Advanced thinking and analysis
## 🤝 Contributing


@@ -1,5 +1,10 @@
## Usage
`/requirements-pilot <FEATURE_DESCRIPTION>`
`/requirements-pilot <FEATURE_DESCRIPTION> [TESTING_PREFERENCE]`
### Testing Control Options
- **Explicit Test**: Include `--test`, `要测试`, or `测试` to force testing execution
- **Explicit Skip**: Include `--no-test`, `不要测试`, or `跳过测试` to skip the testing phase
- **Interactive Mode**: Default behavior; asks the user at the testing decision point
## Context
- Feature to develop: $ARGUMENTS
@@ -27,7 +32,11 @@ Execute the sub-agent chain ONLY after the user explicitly confirms they want to
Start this phase immediately upon receiving the command:
### 1. Input Validation & Length Handling
### 1. Input Validation & Testing Preference Parsing
- **Parse Testing Preference**: Extract testing preference from input using keywords:
- **Explicit Test**: `--test`, `要测试`, `测试`, `需要测试`
- **Explicit Skip**: `--no-test`, `不要测试`, `跳过测试`, `无需测试`
- **Interactive Mode**: No testing keywords found (default)
- **If input > 500 characters**: First summarize the core functionality and ask the user to confirm the summary is accurate
- **If input is unclear or too brief**: Request more specific details before proceeding
@@ -67,7 +76,50 @@ After achieving 90+ quality score:
Execute the following sub-agent chain:
```
First use the requirements-generate sub agent to create implementation-ready technical specifications for confirmed requirements, then use the requirements-code sub agent to implement the functionality based on specifications, then use the requirements-review sub agent to evaluate code quality with practical scoring, then if score ≥90% use the requirements-testing sub agent to create functional test suite, otherwise use the requirements-code sub agent again to address review feedback and repeat the review cycle.
First use the requirements-generate sub agent to create implementation-ready technical specifications for confirmed requirements, then use the requirements-code sub agent to implement the functionality based on specifications, then use the requirements-review sub agent to evaluate code quality with practical scoring, then if score ≥90% proceed to the Testing Decision Gate (if explicit_test_requested execute the requirements-testing sub agent, if explicit_skip_requested complete the workflow, if interactive_mode ask the user for their testing preference with smart recommendations), otherwise use the requirements-code sub agent again to address review feedback and repeat the review cycle.
```
## Testing Decision Gate Implementation
### Testing Preference Detection
```markdown
## Parsing Logic
1. Extract FEATURE_DESCRIPTION and identify testing keywords
2. Normalize keywords to internal preference state:
- explicit_test: --test, 要测试, 测试, 需要测试
- explicit_skip: --no-test, 不要测试, 跳过测试, 无需测试
- interactive: No testing keywords detected (default)
3. Store testing preference for use at Testing Decision Gate
```
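As a rough sketch of this parsing step (illustrative only; the function name, keyword lists, and substring matching are assumptions rather than part of the command definition):

```python
# Hypothetical sketch of the keyword parsing above; names and lists are illustrative only.
TEST_KEYWORDS = ["--test", "要测试", "测试", "需要测试"]
SKIP_KEYWORDS = ["--no-test", "不要测试", "跳过测试", "无需测试"]

def parse_testing_preference(arguments: str) -> str:
    """Map a raw $ARGUMENTS string to explicit_test / explicit_skip / interactive."""
    # Skip keywords are checked first because "测试" and "要测试" are substrings of the skip phrases.
    if any(keyword in arguments for keyword in SKIP_KEYWORDS):
        return "explicit_skip"
    if any(keyword in arguments for keyword in TEST_KEYWORDS):
        return "explicit_test"
    return "interactive"

print(parse_testing_preference("Implement OAuth2 login --test"))     # explicit_test
print(parse_testing_preference("Implement OAuth2 login --no-test"))  # explicit_skip
print(parse_testing_preference("Implement OAuth2 login"))            # interactive
```

Checking skip keywords before test keywords matters because the shorter test keywords appear inside the skip phrases.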
### Interactive Testing Decision Process
```markdown
## When Testing Preference = Interactive (Default)
1. **Context Assessment**: Analyze task complexity and risk level
2. **Smart Recommendation**: Provide recommendation based on:
- Simple tasks (config changes, documentation): Recommend skip
- Complex tasks (business logic, API changes): Recommend testing
3. **User Prompt**: "Code review completed ({review_score}% quality score). Do you want to create test cases?"
4. **Response Handling**:
- 'yes'/'y'/'test'/'是'/'测试' → Execute requirements-testing
- 'no'/'n'/'skip'/'不'/'跳过' → Complete workflow
- Invalid response → Ask again with clarification
```
### Decision Gate Logic Flow
```markdown
## After Code Review Score ≥ 90%
if testing_preference == "explicit_test":
proceed_to_requirements_testing_agent()
elif testing_preference == "explicit_skip":
complete_workflow_with_summary()
else: # interactive_mode
smart_recommendation = assess_task_complexity(feature_description)
user_choice = ask_testing_decision(smart_recommendation)
if user_choice in ["yes", "y", "test", "是", "测试"]:
proceed_to_requirements_testing_agent()
else:
complete_workflow_with_summary()
```
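For illustration, the same gate can be expressed as executable pseudocode; `testing_decision_gate`, its return strings, and the `input()`/`print()` prompt are hypothetical stand-ins for the orchestrator's conversational behavior, not an actual API:

```python
# Illustrative only; reached after the Code Quality Gate has confirmed review_score >= 90.
YES_ANSWERS = {"yes", "y", "test", "是", "测试"}
NO_ANSWERS = {"no", "n", "skip", "不", "跳过"}

def testing_decision_gate(testing_preference: str, review_score: int, recommendation: str) -> str:
    if testing_preference == "explicit_test":
        return "run_requirements_testing"
    if testing_preference == "explicit_skip":
        return "complete_workflow_with_summary"
    # Interactive mode: keep asking until the reply is recognizable.
    while True:
        reply = input(
            f"Code review completed ({review_score}% quality score). "
            f"Do you want to create test cases? ({recommendation}) "
        ).strip().lower()
        if reply in YES_ANSWERS:
            return "run_requirements_testing"
        if reply in NO_ANSWERS:
            return "complete_workflow_with_summary"
        print("Please reply yes/y/test or no/n/skip (是/测试 or 不/跳过).")
```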
**Note**: All file path specifications are now managed within individual sub-agent definitions, ensuring proper relative path usage and avoiding hardcoded paths in the orchestrator.
@@ -86,21 +138,31 @@ First use the requirements-generate sub agent to create implementation-ready tec
- **No iteration limit**: Quality-driven approach ensures requirement clarity
### Code Quality Gate (Phase 2 Only)
- **Review Score ≥90%**: Proceed to requirements-testing sub agent
- **Review Score ≥90%**: Proceed to Testing Decision Gate
- **Review Score <90%**: Loop back to requirements-code sub agent with feedback
- **Maximum 3 iterations**: Prevent infinite loops while ensuring quality
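A minimal sketch of this review loop, with both sub-agent calls stubbed out (the real sub agents are prompt-driven, so the function names and return values here are purely illustrative):

```python
# Minimal sketch of the Code Quality Gate loop; the sub-agent calls are stubs, not real APIs.
def requirements_code(spec: str, feedback: str | None) -> str:
    return f"implementation of {spec}" + (f" (reworked: {feedback})" if feedback else "")

def requirements_review(code: str) -> tuple[int, str]:
    return 92, "minor naming issues"  # stubbed score and feedback for the example

def code_quality_gate(spec: str, max_iterations: int = 3) -> tuple[str, int]:
    feedback = None
    for _ in range(max_iterations):               # hard cap prevents infinite rework loops
        code = requirements_code(spec, feedback)  # implement, or rework using review feedback
        score, feedback = requirements_review(code)
        if score >= 90:
            return code, score                    # hand off to the Testing Decision Gate
    return code, score                            # cap reached: surface the last attempt

print(code_quality_gate("OAuth2 token refresh endpoint"))
```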
### Testing Decision Gate (After Code Quality Gate)
- **Explicit Test Preference**: Directly proceed to requirements-testing sub agent
- **Explicit Skip Preference**: Complete workflow without testing
- **Interactive Mode**: Ask user for testing decision with smart recommendations
## Execution Flow Summary
```
1. Receive command
1. Receive command and parse testing preference
2. Validate input length (summarize if >500 chars)
3. Start requirements confirmation (Phase 1)
4. Iterate until 90+ quality score
5. 🛑 STOP and request user approval
5. 🛑 STOP and request user approval for implementation
6. Wait for user response
7. If approved: Execute implementation (Phase 2)
8. If not approved: Return to clarification
8. After code review ≥90%: Execute Testing Decision Gate
9. Testing Decision Gate:
- Explicit test → Execute testing
- Explicit skip → Complete workflow
- Interactive → Ask user with recommendations
10. If not approved: Return to clarification
```
## Key Workflow Characteristics
@@ -134,9 +196,40 @@ First use the requirements-generate sub agent to create implementation-ready tec
- **Quality Assurance**: 90%+ quality score indicates production-ready code
- **Integration Success**: New code integrates seamlessly with existing systems
## Task Complexity Assessment for Smart Recommendations
### Simple Tasks (Recommend Skip Testing)
- Configuration file changes
- Documentation updates
- Simple utility functions
- UI text/styling changes
- Basic data structure additions
- Environment variable updates
### Complex Tasks (Recommend Testing)
- Business logic implementation
- API endpoint changes
- Database schema modifications
- Authentication/authorization features
- Integration with external services
- Performance-critical functionality
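One possible keyword heuristic for this assessment, assuming the two categories above map to simple substring hints (illustrative only; the actual recommendation is left to the agent's judgment):

```python
# Keyword heuristic sketch for the smart recommendation; hint lists mirror the categories above.
SIMPLE_HINTS = ("config", "documentation", "readme", "styling", "env var", "utility function")
COMPLEX_HINTS = ("business logic", "api", "schema", "auth", "external service", "performance")

def assess_task_complexity(feature_description: str) -> str:
    text = feature_description.lower()
    if any(hint in text for hint in COMPLEX_HINTS):
        return "Recommend testing (complex / high-risk change)"
    if any(hint in text for hint in SIMPLE_HINTS):
        return "Recommend skipping tests (simple / low-risk change)"
    return "No strong signal - recommend testing to be safe"

print(assess_task_complexity("Add OAuth2 auth to the API gateway"))  # -> recommend testing
print(assess_task_complexity("Update README documentation"))         # -> recommend skipping tests
```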
### Interactive Mode Prompt Template
```markdown
Code review completed ({review_score}% quality score). Do you want to create test cases?
Based on task analysis: {smart_recommendation}
- Reply 'yes'/'y'/'test' to proceed with testing
- Reply 'no'/'n'/'skip' to skip testing
- Chinese responses also accepted: '是'/'测试' or '不'/'跳过'
```
## Important Reminders
- **Phase 1 starts automatically** - No waiting needed for requirements confirmation
- **Phase 2 requires explicit approval** - Never skip the approval gate
- **Testing Decision Gate** - Three modes: explicit_test, explicit_skip, interactive
- **Long inputs need summarization** - Handle >500 character inputs specially
- **User can always decline** - Respect user's decision to refine or cancel
- **Quality over speed** - Ensure clarity before implementation
- **Smart recommendations** - Provide context-aware testing suggestions in interactive mode