mirror of
https://github.com/cexll/myclaude.git
synced 2026-02-05 02:30:26 +08:00
add test-cases skill
skills/test-cases/SKILL.md (new file, 199 lines)
---
name: test-cases
description: This skill should be used when generating comprehensive test cases from PRD documents or user requirements. It triggers when users request test-case generation, QA planning, test-scenario creation, or structured test documentation, and produces detailed test cases covering functional, edge-case, error-handling, and state-transition scenarios.
license: MIT
---

# Test Cases Generator

This skill generates comprehensive, requirement-driven test cases from PRD documents or user requirements.

## Purpose

Transform product requirements into structured test cases that ensure complete coverage of functionality, edge cases, error scenarios, and state transitions. The skill follows a pragmatic testing philosophy: test what matters, ensure every requirement has corresponding test coverage, and favor test quality over quantity.

## When to Use

Trigger this skill when:
- The user provides a PRD or requirements document and requests test cases
- The user asks to "generate test cases", "create test scenarios", or "plan QA"
- The user mentions testing coverage for a feature or requirement
- The user needs structured test documentation in Markdown format

## Core Testing Principles

Follow these principles when generating test cases:

1. **Requirement-driven, not implementation-driven** - Test cases must map directly to requirements, not implementation details
2. **Complete coverage** - Every requirement must have at least one test case covering:
   - Happy path (normal use cases)
   - Edge cases (boundary values, empty inputs, maximum limits)
   - Error handling (invalid inputs, failure scenarios, permission errors)
   - State transitions (if the feature is stateful, cover all valid state changes)
3. **Clear and actionable** - Each test case must be executable by a QA engineer without ambiguity
4. **Traceable** - Maintain a clear mapping between requirements and test cases

## Workflow
### Step 1: Gather Requirements

First, identify the source of requirements:

1. If the user provides a file path to a PRD, read it using the Read tool
2. If the user describes requirements verbally, capture them
3. If requirements are unclear or incomplete, use AskUserQuestion to clarify:
   - What are the core user flows?
   - What are the acceptance criteria?
   - What edge cases or error scenarios should be considered?
   - Are there any state transitions or workflows?
   - What platforms or environments need testing?

### Step 2: Extract Test Scenarios

Analyze the requirements and extract test scenarios:

1. **Functional scenarios** - Normal use cases from the requirements
2. **Edge case scenarios** - Boundary conditions, empty states, maximum limits
3. **Error scenarios** - Invalid inputs, permission failures, network errors
4. **State transition scenarios** - If the feature involves state, map all transitions

For each requirement, identify:
- Preconditions (what must be true before testing)
- Test steps (actions to perform)
- Expected results (what should happen)
- Postconditions (state after the test completes)

### Step 3: Structure Test Cases

Organize test cases using this structure:
```markdown
# Test Cases: [Feature Name]

## Overview
- **Feature**: [Feature name]
- **Requirements Source**: [PRD file path or description]
- **Test Coverage**: [Summary of what's covered]
- **Last Updated**: [Date]

## Test Case Categories

### 1. Functional Tests
Test cases covering normal user flows and core functionality.

#### TC-F-001: [Test Case Title]
- **Requirement**: [Link to specific requirement]
- **Priority**: [High/Medium/Low]
- **Preconditions**:
  - [Condition 1]
  - [Condition 2]
- **Test Steps**:
  1. [Step 1]
  2. [Step 2]
  3. [Step 3]
- **Expected Results**:
  - [Expected result 1]
  - [Expected result 2]
- **Postconditions**: [State after test]

### 2. Edge Case Tests
Test cases covering boundary conditions and unusual inputs.

#### TC-E-001: [Test Case Title]
[Same structure as above]

### 3. Error Handling Tests
Test cases covering error scenarios and failure modes.

#### TC-ERR-001: [Test Case Title]
[Same structure as above]

### 4. State Transition Tests
Test cases covering state changes and workflows (if applicable).

#### TC-ST-001: [Test Case Title]
[Same structure as above]

## Test Coverage Matrix

| Requirement ID | Test Cases | Coverage Status |
|----------------|------------|-----------------|
| REQ-001 | TC-F-001, TC-E-001 | ✓ Complete |
| REQ-002 | TC-F-002 | ⚠ Partial |

## Notes
- [Any additional testing considerations]
- [Known limitations or assumptions]
```
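The per-case section of this template lends itself to generation from structured data. A minimal sketch follows; the `TestCase` container and its field names are illustrative, not something the skill prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One test case matching the markdown template above (illustrative fields)."""
    id: str
    title: str
    requirement: str
    priority: str
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    expected: list = field(default_factory=list)
    postconditions: str = ""

def render(tc: TestCase) -> str:
    """Render a TestCase into the section format used by the template."""
    lines = [f"#### {tc.id}: {tc.title}",
             f"- **Requirement**: {tc.requirement}",
             f"- **Priority**: {tc.priority}",
             "- **Preconditions**:"]
    lines += [f"  - {p}" for p in tc.preconditions]
    lines.append("- **Test Steps**:")
    lines += [f"  {i}. {s}" for i, s in enumerate(tc.steps, 1)]
    lines.append("- **Expected Results**:")
    lines += [f"  - {e}" for e in tc.expected]
    lines.append(f"- **Postconditions**: {tc.postconditions}")
    return "\n".join(lines)
```

Rendering from data like this keeps IDs, priorities, and requirement links consistent across a large suite.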
### Step 4: Generate Test Cases

For each identified scenario, create a detailed test case following the structure above. Ensure:

1. **Unique IDs** - Use prefixes: TC-F (functional), TC-E (edge), TC-ERR (error), TC-ST (state)
2. **Clear titles** - Descriptive titles that explain what is being tested
3. **Requirement traceability** - Link each test case to specific requirements
4. **Priority assignment** - Mark critical paths as High priority
5. **Executable steps** - Steps must be clear enough for any QA engineer to execute
6. **Measurable results** - Expected results must be verifiable

### Step 5: Validate Coverage

Before finalizing, verify:

1. Every requirement has at least one test case
2. The happy path is covered for all user flows
3. Edge cases are identified for boundary conditions
4. Error scenarios are covered for failure modes
5. State transitions are tested if the feature is stateful

If coverage gaps exist, generate additional test cases.
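The first of these checks can be automated once a requirement-to-test-case mapping exists. A minimal sketch, with hypothetical requirement and test-case IDs:

```python
# Map each requirement to the test cases that cover it (IDs are illustrative).
coverage = {
    "REQ-001": ["TC-F-001", "TC-E-001"],
    "REQ-002": ["TC-F-002"],
    "REQ-003": [],  # requirement with no tests yet
}

# A gap is any requirement with an empty test-case list.
gaps = [req for req, cases in coverage.items() if not cases]
if gaps:
    print("Coverage gaps:", ", ".join(gaps))
```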
### Step 6: Output Test Cases

Write the test cases to `tests/<name>-test-cases.md`, where `<name>` is derived from:
- The feature name from the PRD
- The user's specified name
- A sanitized version of the requirement title

Use the Write tool to create the file with the structured test cases.
### Step 7: Summary

After generating the test cases, provide a brief summary in Chinese:
- Total number of test cases generated
- Coverage breakdown (functional, edge, error, state)
- Any assumptions made or areas needing clarification
- File path where the test cases were saved

## Quality Checklist

Before finalizing test cases, verify:

- [ ] Every requirement has corresponding test cases
- [ ] Happy path scenarios are covered
- [ ] Edge cases include boundary values, empty inputs, and maximum limits
- [ ] Error handling covers invalid inputs and failure scenarios
- [ ] State transitions are tested if applicable
- [ ] Test case IDs are unique and follow the naming convention
- [ ] Test steps are clear and executable
- [ ] Expected results are measurable and verifiable
- [ ] The coverage matrix shows complete coverage
- [ ] The file is written to `tests/<name>-test-cases.md`

## Example Usage

**User**: "Generate test cases for the user authentication feature in docs/auth-prd.md"

**Process**:
1. Read docs/auth-prd.md
2. Extract requirements: login, logout, password reset, session management
3. Identify scenarios: successful login, invalid credentials, expired session, etc.
4. Generate test cases covering all scenarios
5. Write to tests/auth-test-cases.md
6. Summarize coverage in Chinese

## References

For detailed testing methodologies and best practices, see:
- `references/testing-principles.md` - Core testing principles and patterns
skills/test-cases/references/testing-principles.md (new file, 224 lines)
# Testing Principles and Best Practices

## Core Philosophy

**Test what matters** - Focus on functionality that impacts users: behavior, performance, data integrity, and user experience. Avoid testing implementation details that can change without affecting outcomes.

**Requirement-driven testing** - Every test must trace back to a specific requirement. If a requirement exists without tests, coverage is incomplete. If a test exists without a requirement, it may be testing implementation rather than behavior.

**Quality over quantity** - A small set of stable, meaningful tests is more valuable than an extensive suite of flaky ones. Flaky tests erode trust and waste time. Every shipped bug represents a process failure.

## Coverage Requirements

### 1. Happy Path Coverage
Test all normal use cases from the requirements:
- Primary user flows
- Expected inputs and outputs
- Standard workflows
- Common scenarios

**Example**: For a login feature, test successful login with valid credentials.

### 2. Edge Case Coverage
Test boundary conditions and unusual inputs:
- Empty inputs (null, undefined, empty string, empty array)
- Boundary values (min, max, zero, negative)
- Maximum limits (character limits, file size limits, array lengths)
- Special characters and encoding
- Concurrent operations

**Example**: For a login feature, test an empty username, a maximum-length password, and special characters in credentials.

### 3. Error Handling Coverage
Test failure scenarios and error conditions:
- Invalid inputs (wrong type, format, or range)
- Permission errors (unauthorized access, insufficient privileges)
- Network failures (timeout, connection lost, server error)
- Resource exhaustion (out of memory, disk full)
- Dependency failures (database down, API unavailable)

**Example**: For a login feature, test invalid credentials, a locked account, and a server timeout.

### 4. State Transition Coverage
If the feature involves state, test all valid state changes:
- Initial state to each possible next state
- All valid state transitions
- Invalid state transitions (should be rejected)
- State persistence across sessions
- Concurrent state modifications

**Example**: For a login feature, test the transitions: logged out → logging in → logged in → logging out → logged out.
## Test Case Structure

### Essential Components

Every test case must include:

1. **Unique ID** - A consistent naming convention (TC-F-001, TC-E-001, etc.)
2. **Title** - A clear, descriptive name explaining what is being tested
3. **Requirement Link** - Traceability to a specific requirement
4. **Priority** - High/Medium/Low, based on user impact
5. **Preconditions** - State that must exist before test execution
6. **Test Steps** - Clear, numbered, executable actions
7. **Expected Results** - Measurable, verifiable outcomes
8. **Postconditions** - State after test completion

### Test Case Naming Convention

Use prefixes to categorize test cases:
- **TC-F-XXX**: Functional tests (happy path)
- **TC-E-XXX**: Edge case tests (boundaries)
- **TC-ERR-XXX**: Error handling tests (failures)
- **TC-ST-XXX**: State transition tests (workflows)
- **TC-PERF-XXX**: Performance tests (speed, load)
- **TC-SEC-XXX**: Security tests (auth, permissions)
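The convention can be enforced with a single regular expression. A sketch; the three-digit, zero-padded suffix is an assumption based on the examples above:

```python
import re

# Valid IDs: a known category prefix plus a three-digit number.
TC_ID = re.compile(r"^TC-(F|E|ERR|ST|PERF|SEC)-\d{3}$")

assert TC_ID.match("TC-F-001")
assert TC_ID.match("TC-ERR-042")
assert not TC_ID.match("TC-X-001")   # unknown category
assert not TC_ID.match("TC-F-1")     # suffix not zero-padded
```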
## Test Design Patterns
|
||||
|
||||
### Pattern 1: Arrange-Act-Assert (AAA)
|
||||
|
||||
Structure test steps using AAA pattern:
|
||||
1. **Arrange** - Set up preconditions and test data
|
||||
2. **Act** - Execute the action being tested
|
||||
3. **Assert** - Verify expected results
|
||||
|
||||
**Example**:
|
||||
```
|
||||
Preconditions:
|
||||
- User account exists with username "testuser"
|
||||
- User is logged out
|
||||
|
||||
Test Steps:
|
||||
1. Navigate to login page (Arrange)
|
||||
2. Enter username "testuser" and password "password123" (Arrange)
|
||||
3. Click "Login" button (Act)
|
||||
4. Verify user is redirected to dashboard (Assert)
|
||||
5. Verify welcome message displays "Welcome, testuser" (Assert)
|
||||
```
|
||||
|
||||
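The same structure carries over directly to an automated test. In this sketch, `login` and the session dictionary are hypothetical stand-ins for a real application API:

```python
def login(username, password):
    # Hypothetical stub: accepts one known account, for illustration only.
    if username == "testuser" and password == "password123":
        return {"user": username, "page": "dashboard"}
    return {"user": None, "page": "login", "error": "invalid credentials"}

def test_successful_login():
    # Arrange: known account and credentials (logged-out state is implicit)
    username, password = "testuser", "password123"
    # Act: perform the login
    session = login(username, password)
    # Assert: verify the expected results
    assert session["page"] == "dashboard"
    assert session["user"] == "testuser"

test_successful_login()
```

Keeping the three phases visually separated, as here, makes a failing test immediately legible: you can tell at a glance whether the setup, the action, or the expectation broke.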
### Pattern 2: Equivalence Partitioning
|
||||
|
||||
Group inputs into equivalence classes and test one representative from each class:
|
||||
- Valid equivalence class
|
||||
- Invalid equivalence classes
|
||||
- Boundary values
|
||||
|
||||
**Example**: For age input (valid range 18-100):
|
||||
- Valid class: 18, 50, 100
|
||||
- Invalid class: 17, 101, -1, "abc"
|
||||
- Boundaries: 17, 18, 100, 101
|
||||
|
||||
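Checking those classes against a validator takes only a few lines; `is_valid_age` below is a hypothetical implementation of the 18-100 rule:

```python
def is_valid_age(value):
    # Hypothetical validator for the 18-100 rule; rejects non-integers.
    return isinstance(value, int) and 18 <= value <= 100

valid_class = [18, 50, 100]
invalid_class = [17, 101, -1, "abc"]

# One representative per class would suffice; here we check them all.
assert all(is_valid_age(v) for v in valid_class)
assert not any(is_valid_age(v) for v in invalid_class)
```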
### Pattern 3: State Transition Testing
|
||||
|
||||
For stateful features, create a state transition table and test each transition:
|
||||
|
||||
| Current State | Action | Next State | Test Case |
|
||||
|--------------|--------|------------|-----------|
|
||||
| Logged Out | Login Success | Logged In | TC-ST-001 |
|
||||
| Logged Out | Login Failure | Logged Out | TC-ST-002 |
|
||||
| Logged In | Logout | Logged Out | TC-ST-003 |
|
||||
| Logged In | Session Timeout | Logged Out | TC-ST-004 |
|
||||
|
||||
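The table above can be expressed as a transition map; unlisted (state, action) pairs are rejected, which also exercises the "invalid transitions should be rejected" requirement:

```python
# Each row of the transition table as a (state, action) -> next-state entry.
TRANSITIONS = {
    ("Logged Out", "Login Success"): "Logged In",
    ("Logged Out", "Login Failure"): "Logged Out",
    ("Logged In", "Logout"): "Logged Out",
    ("Logged In", "Session Timeout"): "Logged Out",
}

def next_state(state, action):
    if (state, action) not in TRANSITIONS:
        raise ValueError(f"invalid transition: {state!r} + {action!r}")
    return TRANSITIONS[(state, action)]

assert next_state("Logged Out", "Login Success") == "Logged In"
```

Iterating over `TRANSITIONS` gives one test per row, and probing pairs outside the map gives the negative cases.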
## Test Prioritization
|
||||
|
||||
Prioritize test cases based on:
|
||||
|
||||
1. **High Priority**
|
||||
- Core user flows (login, checkout, data submission)
|
||||
- Data integrity (create, update, delete operations)
|
||||
- Security-critical paths (authentication, authorization)
|
||||
- Revenue-impacting features (payment, subscription)
|
||||
|
||||
2. **Medium Priority**
|
||||
- Secondary user flows
|
||||
- Edge cases for high-priority features
|
||||
- Error handling for common failures
|
||||
- Performance-sensitive operations
|
||||
|
||||
3. **Low Priority**
|
||||
- Rare edge cases
|
||||
- Cosmetic issues
|
||||
- Nice-to-have features
|
||||
- Non-critical error scenarios
|
||||
|
||||
## Test Quality Indicators
|
||||
|
||||
### Good Test Cases
|
||||
- ✓ Maps directly to a requirement
|
||||
- ✓ Tests behavior, not implementation
|
||||
- ✓ Has clear, executable steps
|
||||
- ✓ Has measurable expected results
|
||||
- ✓ Is independent of other tests
|
||||
- ✓ Is repeatable and deterministic
|
||||
- ✓ Fails only when behavior is broken
|
||||
|
||||
### Poor Test Cases
|
||||
- ✗ Tests implementation details
|
||||
- ✗ Has vague or ambiguous steps
|
||||
- ✗ Has unmeasurable expected results
|
||||
- ✗ Depends on execution order
|
||||
- ✗ Is flaky or non-deterministic
|
||||
- ✗ Fails due to environment issues
|
||||
|
||||
## Coverage Validation
|
||||
|
||||
Before finalizing test cases, verify:
|
||||
|
||||
1. **Requirement Coverage**
|
||||
- Every requirement has at least one test case
|
||||
- Critical requirements have multiple test cases
|
||||
- Coverage matrix shows complete mapping
|
||||
|
||||
2. **Scenario Coverage**
|
||||
- Happy path: All normal flows covered
|
||||
- Edge cases: Boundaries and limits covered
|
||||
- Error handling: Failure modes covered
|
||||
- State transitions: All valid transitions covered
|
||||
|
||||
3. **Risk Coverage**
|
||||
- High-risk areas have comprehensive coverage
|
||||
- Security-sensitive features are thoroughly tested
|
||||
- Data integrity operations are validated
|
||||
|
||||
## Common Pitfalls to Avoid
|
||||
|
||||
1. **Testing implementation instead of behavior** - Focus on what the system does, not how it does it
|
||||
2. **Incomplete edge case coverage** - Don't forget empty inputs, boundaries, and limits
|
||||
3. **Missing error scenarios** - Test failure modes, not just success paths
|
||||
4. **Vague expected results** - Make results measurable and verifiable
|
||||
5. **Test interdependencies** - Each test should be independent
|
||||
6. **Ignoring state transitions** - For stateful features, test all transitions
|
||||
7. **Over-testing trivial code** - Focus on logic that matters to users
|
||||
|
||||
## Test Documentation Standards
|
||||
|
||||
### File Organization
|
||||
```
|
||||
tests/
|
||||
├── <feature>-test-cases.md # Test cases for specific feature
|
||||
├── <module>-test-cases.md # Test cases for specific module
|
||||
└── integration-test-cases.md # Cross-feature integration tests
|
||||
```
|
||||
|
||||
### Markdown Structure
|
||||
- Use clear headings for test categories
|
||||
- Use tables for coverage matrices
|
||||
- Use code blocks for test data examples
|
||||
- Use checkboxes for test execution tracking
|
||||
- Include metadata (feature, date, version)
|
||||
|
||||
### Maintenance
|
||||
- Update test cases when requirements change
|
||||
- Remove obsolete test cases
|
||||
- Add new test cases for bug fixes
|
||||
- Review coverage regularly
|
||||
- Keep test cases synchronized with implementation
|
||||
|
||||
## References
|
||||
|
||||
These principles are derived from:
|
||||
- Industry-standard QA practices
|
||||
- Game QA methodologies (Unity Test Framework, Unreal Automation, Godot GUT)
|
||||
- Pragmatic testing philosophy: "Test what matters"
|
||||
- Requirement-driven testing approach from CLAUDE.md context
|
||||