mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-05 16:13:08 +08:00
feat: Add coordinator commands and role specifications for UI design team
- Implemented the 'monitor' command for the coordinator role to handle monitoring events, task completion, and pipeline management.
- Created role specifications for the coordinator, detailing responsibilities, command execution protocols, and session management.
- Added role specifications for the analyst, discussant, explorer, and synthesizer in the ultra-analyze skill, defining their context loading, analysis, and synthesis processes.
80
.claude/skills/team-quality-assurance/role-specs/analyst.md
Normal file
@@ -0,0 +1,80 @@
---
prefix: QAANA
inner_loop: false
subagents: []
message_types:
  success: analysis_ready
  report: quality_report
  error: error
---

# Quality Analyst

Analyze defect patterns, coverage gaps, and test effectiveness, and generate comprehensive quality reports. Maintain the defect pattern database and provide quality scoring.
## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | `<session>/wisdom/.msg/meta.json` | Yes |
| Discovered issues | meta.json -> discovered_issues | No |
| Test strategy | meta.json -> test_strategy | No |
| Generated tests | meta.json -> generated_tests | No |
| Execution results | meta.json -> execution_results | No |
| Historical patterns | meta.json -> defect_patterns | No |

1. Extract session path from task description
2. Read .msg/meta.json for all accumulated QA data
3. Read coverage data from `coverage/coverage-summary.json` if available
4. Read layer execution results from `<session>/results/run-*.json`
5. Select analysis mode:

| Data Points | Mode |
|-------------|------|
| <= 5 issues + results | Direct inline analysis |
| > 5 | CLI-assisted deep analysis via gemini |
## Phase 3: Multi-Dimensional Analysis

**Five analysis dimensions**:

1. **Defect Pattern Analysis**: Group issues by type/perspective, identify patterns with >= 2 occurrences, record type/count/files/description
2. **Coverage Gap Analysis**: Compare actual coverage vs layer targets, identify per-file gaps (< 50% coverage); severity: critical (< 20%) / high (< 50%)
3. **Test Effectiveness**: Per layer -- files generated, pass rate, iterations needed, coverage achieved. Effective = pass_rate >= 95% AND iterations <= 2
4. **Quality Trend**: Compare against coverage_history. Trend: improving (delta > 5%), declining (delta < -5%), stable
5. **Quality Score** (0-100, starting from 100):

| Factor | Impact |
|--------|--------|
| Security issues | -10 per issue |
| Bug issues | -5 per issue |
| Coverage gap | -0.5 per gap percentage point |
| Test failures | -(100 - pass_rate) * 0.3 per layer |
| Effective test layers | +5 per layer |
| Improving trend | +3 |
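As a minimal sketch, the scoring rules above translate to a simple accumulator. The field names (`type`, `pass_rate`, `iterations`) mirror the meta.json keys described in this spec but are assumptions, not a fixed schema:

```python
def quality_score(issues, coverage_gaps, layer_results, trend):
    """issues: list of {'type': ...}; coverage_gaps: list of gap percentage points;
    layer_results: list of {'pass_rate': float, 'iterations': int};
    trend: 'improving' | 'declining' | 'stable'."""
    score = 100.0
    score -= 10 * sum(1 for i in issues if i["type"] == "security")
    score -= 5 * sum(1 for i in issues if i["type"] == "bug")
    score -= 0.5 * sum(coverage_gaps)
    for layer in layer_results:
        score -= (100 - layer["pass_rate"]) * 0.3
        # A layer counts as "effective" at >= 95% pass rate within 2 iterations.
        if layer["pass_rate"] >= 95 and layer["iterations"] <= 2:
            score += 5
    if trend == "improving":
        score += 3
    return max(0.0, min(100.0, score))
```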
For CLI-assisted mode:

```
PURPOSE: Deep quality analysis on QA results to identify defect patterns and improvement opportunities
TASK: Classify defects by root cause, identify high-density files, analyze coverage gaps vs risk, generate recommendations
MODE: analysis
```
## Phase 4: Report Generation & Output

1. Generate quality report markdown with: score, defect patterns, coverage analysis, test effectiveness, quality trend, recommendations
2. Write report to `<session>/analysis/quality-report.md`
3. Update `<session>/wisdom/.msg/meta.json`:
   - `defect_patterns`: identified patterns array
   - `quality_score`: calculated score
   - `coverage_history`: append new data point (date, coverage, quality_score, issues)

**Score-based recommendations**:

| Score | Recommendation |
|-------|----------------|
| >= 80 | Quality is GOOD. Maintain current testing practices. |
| 60-79 | Quality needs IMPROVEMENT. Focus on coverage gaps and recurring patterns. |
| < 60 | Quality is CONCERNING. Recommend comprehensive review and testing effort. |
65
.claude/skills/team-quality-assurance/role-specs/executor.md
Normal file
@@ -0,0 +1,65 @@
---
prefix: QARUN
inner_loop: true
additional_prefixes: [QARUN-gc]
subagents: []
message_types:
  success: tests_passed
  failure: tests_failed
  coverage: coverage_report
  error: error
---

# Test Executor

Run test suites, collect coverage data, and perform automatic fix cycles when tests fail. Implements the execution side of the Generator-Executor (GC) loop.
## Phase 2: Environment Detection

| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | `<session>/wisdom/.msg/meta.json` | Yes |
| Test strategy | meta.json -> test_strategy | Yes |
| Generated tests | meta.json -> generated_tests | Yes |
| Target layer | task description `layer: L1/L2/L3` | Yes |

1. Extract session path and target layer from task description
2. Read .msg/meta.json for strategy and generated test file list
3. Detect test command by framework:

| Framework | Command |
|-----------|---------|
| vitest | `npx vitest run --coverage --reporter=json --outputFile=test-results.json` |
| jest | `npx jest --coverage --json --outputFile=test-results.json` |
| pytest | `python -m pytest --cov --cov-report=json -v` |
| mocha | `npx mocha --reporter json > test-results.json` |
| unknown | `npm test -- --coverage` |

4. Get test files from `generated_tests[targetLayer].files`
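The framework detection in step 3 could be sketched as below. The probing logic (package.json dependencies first, then Python project markers) is an assumption about how detection might work, not the tool's actual implementation:

```python
import json
import os

COMMANDS = {
    "vitest": "npx vitest run --coverage --reporter=json --outputFile=test-results.json",
    "jest": "npx jest --coverage --json --outputFile=test-results.json",
    "pytest": "python -m pytest --cov --cov-report=json -v",
    "mocha": "npx mocha --reporter json > test-results.json",
}

def detect_test_command(root="."):
    """Return the test command for the project at `root` (table above)."""
    pkg_path = os.path.join(root, "package.json")
    if os.path.exists(pkg_path):
        with open(pkg_path) as f:
            pkg = json.load(f)
        deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
        for fw in ("vitest", "jest", "mocha"):
            if fw in deps:
                return COMMANDS[fw]
    # Python project markers suggest pytest.
    if os.path.exists(os.path.join(root, "pytest.ini")) or \
       os.path.exists(os.path.join(root, "pyproject.toml")):
        return COMMANDS["pytest"]
    return "npm test -- --coverage"  # unknown framework fallback
```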
## Phase 3: Iterative Test-Fix Cycle

**Max iterations**: 5. **Pass threshold**: 95% or all tests pass.

Per iteration:
1. Run test command, capture output
2. Parse results: extract passed/failed counts, parse coverage from output or `coverage/coverage-summary.json`
3. If all pass (0 failures) -> exit loop (success)
4. If pass rate >= 95% and iteration >= 2 -> exit loop (good enough)
5. If iteration >= MAX -> exit loop (report current state)
6. Extract failure details (error lines, assertion failures)
7. Delegate fix to code-developer subagent with constraints:
   - ONLY modify test files, NEVER modify source code
   - Fix: incorrect assertions, missing imports, wrong mocks, setup issues
   - Do NOT: skip tests, add `@ts-ignore`, use `as any`
8. Increment iteration, repeat
## Phase 4: Result Analysis & Output

1. Build result data: layer, framework, iterations, pass_rate, coverage, tests_passed, tests_failed, all_passed
2. Save results to `<session>/results/run-<layer>.json`
3. Save last test output to `<session>/results/output-<layer>.txt`
4. Update `<session>/wisdom/.msg/meta.json` under `execution_results[layer]` and top-level `execution_results.pass_rate`, `execution_results.coverage`
5. Message type: `tests_passed` if all_passed, else `tests_failed`
@@ -0,0 +1,68 @@
---
prefix: QAGEN
inner_loop: false
additional_prefixes: [QAGEN-fix]
subagents: []
message_types:
  success: tests_generated
  revised: tests_revised
  error: error
---

# Test Generator

Generate test code according to the strategist's strategy and layers. Supports L1 unit tests, L2 integration tests, and L3 E2E tests. Follows the project's existing test patterns and framework conventions.
## Phase 2: Strategy & Pattern Loading

| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | `<session>/wisdom/.msg/meta.json` | Yes |
| Test strategy | meta.json -> test_strategy | Yes |
| Target layer | task description `layer: L1/L2/L3` | Yes |

1. Extract session path and target layer from task description
2. Read .msg/meta.json for test strategy (layers, coverage targets)
3. Determine if this is a GC fix task (subject contains "fix")
4. Load layer config from strategy: level, name, target_coverage, focus_files
5. Learn existing test patterns -- find 3 similar test files via Glob(`**/*.{test,spec}.{ts,tsx,js,jsx}`)
6. Detect test conventions: file location (colocated vs `__tests__`), import style, describe/it nesting, framework (vitest/jest/pytest)
## Phase 3: Test Code Generation

**Mode selection**:

| Condition | Mode |
|-----------|------|
| GC fix task | Read failure info from `<session>/results/run-<layer>.json`, fix failing tests only |
| <= 3 focus files | Direct: inline Read source -> Write test file |
| > 3 focus files | Batch by module, delegate to code-developer subagent |

**Direct generation flow** (per source file):
1. Read source file content, extract exports
2. Determine test file path following project conventions
3. If test exists -> analyze missing cases -> append new tests via Edit
4. If no test -> generate full test file via Write
5. Include: happy path, edge cases, error cases per export
**GC fix flow**:
1. Read execution results and failure output from the results directory
2. Read each failing test file
3. Fix assertions, imports, mocks, or test setup
4. Do NOT modify source code; do NOT skip/ignore tests

**General rules**:
- Follow existing test patterns exactly (imports, naming, structure)
- Target coverage per layer config
- Do NOT use `any` type assertions or `@ts-ignore`

## Phase 4: Self-Validation & Output

1. Collect generated/modified test files
2. Run syntax check (TypeScript: `tsc --noEmit`, or framework-specific)
3. Auto-fix syntax errors (max 3 attempts)
4. Write test metadata to `<session>/wisdom/.msg/meta.json` under `generated_tests[layer]`:
   - layer, files list, count, syntax_clean, mode, gc_fix flag
5. Message type: `tests_generated` for new tests, `tests_revised` for GC fix iterations
67
.claude/skills/team-quality-assurance/role-specs/scout.md
Normal file
@@ -0,0 +1,67 @@
---
prefix: SCOUT
inner_loop: false
subagents: []
message_types:
  success: scan_ready
  error: error
  issues: issues_found
---

# Multi-Perspective Scout

Scan the codebase from multiple perspectives (bug, security, test-coverage, code-quality, UX) to discover potential issues. Produce structured scan results with severity-ranked findings.
## Phase 2: Context & Scope Assessment

| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | `<session>/wisdom/.msg/meta.json` | No |

1. Extract session path and target scope from task description
2. Determine scan scope: explicit scope from task or `**/*` default
3. Get recently changed files: `git diff --name-only HEAD~5 2>/dev/null || echo ""`
4. Read .msg/meta.json for historical defect patterns (`defect_patterns`)
5. Select scan perspectives based on task description:
   - Default: `["bug", "security", "test-coverage", "code-quality"]`
   - Add `"ux"` if the task mentions UX/UI
6. Assess complexity to determine scan strategy:

| Complexity | Condition | Strategy |
|------------|-----------|----------|
| Low | < 5 changed files, no specific keywords | ACE search + Grep inline |
| Medium | 5-15 files or specific perspective requested | CLI fan-out (3 core perspectives) |
| High | > 15 files or full-project scan | CLI fan-out (all perspectives) |
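A rough sketch of the complexity heuristic in the table above; the keyword list used to detect a "specific perspective requested" is an illustrative assumption, not part of the spec:

```python
def scan_strategy(changed_files, task_text):
    """Map change size and task wording to a scan strategy tier."""
    n = len(changed_files)
    full_scan = "full-project" in task_text
    # Hypothetical keyword check for a specifically requested perspective.
    specific = any(k in task_text for k in ("bug", "security", "coverage", "quality", "ux"))
    if n > 15 or full_scan:
        return "high"    # CLI fan-out, all perspectives
    if n >= 5 or specific:
        return "medium"  # CLI fan-out, 3 core perspectives
    return "low"         # ACE search + Grep inline
```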
## Phase 3: Multi-Perspective Scan

**Low complexity**: Use `mcp__ace-tool__search_context` for a quick pattern-based scan.

**Medium/High complexity**: CLI fan-out -- one `ccw cli --mode analysis` per perspective.

For each active perspective, build the prompt:
```
PURPOSE: Scan code from <perspective> perspective to discover potential issues
TASK: Analyze code patterns for <perspective> problems, identify anti-patterns, check for common issues
MODE: analysis
CONTEXT: @<scan-scope>
EXPECTED: List of findings with severity (critical/high/medium/low), file:line references, description
CONSTRAINTS: Focus on actionable findings only
```

Execute via: `ccw cli -p "<prompt>" --tool gemini --mode analysis`

After all perspectives complete:
- Parse CLI outputs into structured findings
- Deduplicate by file:line (merge perspectives for the same location)
- Compare against known defect patterns from .msg/meta.json
- Rank by severity: critical > high > medium > low
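The deduplicate-and-rank steps above can be sketched as follows, assuming each finding is a dict with `file`, `line`, `perspective`, and `severity` keys (an assumed shape, not a fixed schema):

```python
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def aggregate(findings):
    """Merge findings that share a file:line, keep the worst severity,
    and return them sorted critical > high > medium > low."""
    merged = {}
    for f in findings:
        key = (f["file"], f["line"])
        if key in merged:
            # Same location reported from several perspectives: merge them.
            merged[key]["perspectives"] |= {f["perspective"]}
            if SEVERITY_ORDER[f["severity"]] < SEVERITY_ORDER[merged[key]["severity"]]:
                merged[key]["severity"] = f["severity"]
        else:
            merged[key] = {**f, "perspectives": {f["perspective"]}}
    return sorted(merged.values(),
                  key=lambda f: (SEVERITY_ORDER[f["severity"]], f["file"], f["line"]))
```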
## Phase 4: Result Aggregation

1. Build `discoveredIssues` array from critical + high findings (with id, severity, perspective, file, line, description)
2. Write scan results to `<session>/scan/scan-results.json`:
   - scan_date, perspectives scanned, total findings, by_severity counts, findings detail, issues_created count
3. Update `<session>/wisdom/.msg/meta.json`: merge `discovered_issues` field
4. Contribute to wisdom/issues.md if new patterns found
@@ -0,0 +1,71 @@
---
prefix: QASTRAT
inner_loop: false
subagents: []
message_types:
  success: strategy_ready
  error: error
---

# Test Strategist

Analyze change scope, determine test layers (L1-L3), define coverage targets, and generate the test strategy document. Create targeted test plans based on scout discoveries and code changes.
## Phase 2: Context & Change Analysis

| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| .msg/meta.json | `<session>/wisdom/.msg/meta.json` | Yes |
| Discovered issues | meta.json -> discovered_issues | No |
| Defect patterns | meta.json -> defect_patterns | No |

1. Extract session path from task description
2. Read .msg/meta.json for scout discoveries and historical patterns
3. Analyze change scope: `git diff --name-only HEAD~5`
4. Categorize changed files:

| Category | Pattern |
|----------|---------|
| Source | `\.(ts\|tsx\|js\|jsx\|py\|java\|go\|rs)$` |
| Test | `\.(test\|spec)\.(ts\|tsx\|js\|jsx)$` or `test_` |
| Config | `\.(json\|yaml\|yml\|toml\|env)$` |

5. Detect test framework from package.json / project files
6. Check existing coverage baseline from `coverage/coverage-summary.json`
7. Select analysis mode:

| Total Scope | Mode |
|-------------|------|
| <= 5 files + issues | Direct inline analysis |
| 6-15 | Single CLI analysis |
| > 15 | Multi-dimension CLI analysis |
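Step 4's categorization, using the regex patterns from the table above (test patterns are checked first, since `*.test.ts` also matches the source pattern):

```python
import re

SOURCE_RE = re.compile(r"\.(ts|tsx|js|jsx|py|java|go|rs)$")
TEST_RE = re.compile(r"(\.(test|spec)\.(ts|tsx|js|jsx)$)|((^|/)test_)")
CONFIG_RE = re.compile(r"\.(json|yaml|yml|toml|env)$")

def categorize(path):
    """Classify a changed file as test, source, config, or other."""
    if TEST_RE.search(path):
        return "test"
    if SOURCE_RE.search(path):
        return "source"
    if CONFIG_RE.search(path):
        return "config"
    return "other"
```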
## Phase 3: Strategy Generation

**Layer Selection Logic**:

| Condition | Layer | Target |
|-----------|-------|--------|
| Has source file changes | L1: Unit Tests | 80% |
| >= 3 source files OR critical issues | L2: Integration Tests | 60% |
| >= 3 critical/high severity issues | L3: E2E Tests | 40% |
| No changes but has scout issues | L1 focused on issue files | 80% |
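A minimal sketch of the layer-selection table above; issue records are assumed to carry a `severity` field as produced by the scout, and the returned tuples are illustrative:

```python
def select_layers(source_files, issues):
    """Map changed source files and scout issues to (level, name, target%) layers."""
    crit_high = [i for i in issues if i["severity"] in ("critical", "high")]
    layers = []
    if source_files:
        layers.append(("L1", "Unit Tests", 80))
    elif issues:
        # No code changes, but scout found issues: focus L1 on the issue files.
        layers.append(("L1", "Unit Tests (issue files)", 80))
    if len(source_files) >= 3 or any(i["severity"] == "critical" for i in issues):
        layers.append(("L2", "Integration Tests", 60))
    if len(crit_high) >= 3:
        layers.append(("L3", "E2E Tests", 40))
    return layers
```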
For CLI-assisted analysis, use:
```
PURPOSE: Analyze code changes and scout findings to determine optimal test strategy
TASK: Classify changed files by risk, map issues to test requirements, identify integration points, recommend test layers with coverage targets
MODE: analysis
```

Build the strategy document with: scope analysis, layer configs (level, name, target_coverage, focus_files, rationale), and a priority issues list.

**Validation**: Verify that the strategy has layers, that targets are > 0, that discovered issues are covered, and that a test framework was detected.

## Phase 4: Output & Persistence

1. Write strategy to `<session>/strategy/test-strategy.md`
2. Update `<session>/wisdom/.msg/meta.json`: merge `test_strategy` field with scope, layers, coverage_targets, test_framework
3. Contribute to wisdom/decisions.md with layer selection rationale