mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-10 17:11:04 +08:00
Refactor team collaboration skills and update documentation
- Renamed `team-lifecycle-v5` to `team-lifecycle` across various documentation files for consistency.
- Updated references in code examples and usage sections to reflect the new skill name.
- Added a new command file for the `monitor` functionality in the `team-iterdev` skill, detailing the coordinator's monitoring events and task management.
- Introduced new components for dynamic pipeline visualization and session coordinates display in the frontend.
- Implemented utility functions for pipeline stage detection and status derivation based on message history.
- Enhanced the team role panel to map members to their respective pipeline roles with status indicators.
- Updated Chinese documentation to reflect the changes in skill names and descriptions.
# Command: run-fix-cycle

> Iterative test execution with automatic fixing. Run the test suite, parse the results, and delegate fixes to `code-developer` on failure, for at most 5 iterations.

## When to Use

- Phase 3 of Executor
- Test code has been generated and needs to be executed and validated
- Re-running fixed tests within the GC loop

**Trigger conditions**:

- A QARUN-* task enters the execution phase
- Generator reports that test generation is complete
- A re-execution task created by the coordinator during the GC loop

## Strategy

### Delegation Mode

**Mode**: Sequential Delegation (when fixing) / Direct (when executing)
**Agent Type**: `code-developer` (fixes only)
**Max Iterations**: 5

### Decision Logic

```javascript
// Decision made on each iteration
function shouldContinue(iteration, passRate, testsFailed) {
  if (iteration >= MAX_ITERATIONS) return false
  if (testsFailed === 0) return false // all tests passed
  if (passRate >= 95 && iteration >= 2) return false // good enough
  return true
}
```

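With `MAX_ITERATIONS = 5`, the stop conditions play out as follows (the function is repeated here so the snippet runs standalone):

```javascript
const MAX_ITERATIONS = 5

function shouldContinue(iteration, passRate, testsFailed) {
  if (iteration >= MAX_ITERATIONS) return false
  if (testsFailed === 0) return false // all tests passed
  if (passRate >= 95 && iteration >= 2) return false // good enough
  return true
}

console.log(shouldContinue(1, 60, 8))  // true: early iteration, many failures remain
console.log(shouldContinue(2, 96, 1))  // false: >= 95% pass rate after 2 iterations
console.log(shouldContinue(3, 100, 0)) // false: everything passes
console.log(shouldContinue(5, 80, 4))  // false: iteration cap reached
```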
## Execution Steps

### Step 1: Context Preparation

```javascript
// Detect the test framework and command
const strategy = sharedMemory.test_strategy || {}
const framework = strategy.test_framework || 'vitest'
const targetLayer = task.description.match(/layer:\s*(L[123])/)?.[1] || 'L1'

// Build the test command
function buildTestCommand(framework, layer) {
  const layerFilter = {
    'L1': 'unit',
    'L2': 'integration',
    'L3': 'e2e'
  }

  const commands = {
    'vitest': `npx vitest run --coverage --reporter=json --outputFile=test-results.json`,
    'jest': `npx jest --coverage --json --outputFile=test-results.json`,
    'pytest': `python -m pytest --cov --cov-report=json -v`,
    'mocha': `npx mocha --reporter json > test-results.json`
  }

  let cmd = commands[framework] || 'npm test -- --coverage'

  // Add layer filtering (if test files are organized by directory).
  // Vitest takes positional path filters; --testPathPattern is a Jest flag.
  const filter = layerFilter[layer]
  if (filter && framework === 'vitest') {
    cmd += ` ${filter}`
  }

  return cmd
}

const testCommand = buildTestCommand(framework, targetLayer)

// Get the associated generated test files
const generatedTests = sharedMemory.generated_tests?.[targetLayer]?.files || []
```

### Step 2: Execute Strategy

```javascript
let iteration = 0
const MAX_ITERATIONS = 5
let lastOutput = ''
let passRate = 0
let coverage = 0
let testsPassed = 0
let testsFailed = 0

while (iteration < MAX_ITERATIONS) {
  // ===== EXECUTE TESTS =====
  lastOutput = Bash(`${testCommand} 2>&1 || true`)

  // ===== PARSE RESULTS =====
  // Parse passed/failed counts
  const passedMatch = lastOutput.match(/(\d+)\s*(?:passed|passing)/)
  const failedMatch = lastOutput.match(/(\d+)\s*(?:failed|failing)/)
  testsPassed = passedMatch ? parseInt(passedMatch[1]) : 0
  testsFailed = failedMatch ? parseInt(failedMatch[1]) : 0
  const testsTotal = testsPassed + testsFailed

  passRate = testsTotal > 0 ? Math.round(testsPassed / testsTotal * 100) : 0

  // Parse coverage
  try {
    const coverageJson = JSON.parse(Read('coverage/coverage-summary.json'))
    coverage = coverageJson.total?.lines?.pct || 0
  } catch {
    // Fall back to parsing the raw output
    const covMatch = lastOutput.match(/(?:Lines|Stmts|All files)\s*[:|]\s*(\d+\.?\d*)%/)
    coverage = covMatch ? parseFloat(covMatch[1]) : 0
  }

  // ===== CHECK PASS =====
  if (testsFailed === 0) {
    break // all tests passed
  }

  // ===== SHOULD CONTINUE? =====
  if (!shouldContinue(iteration + 1, passRate, testsFailed)) {
    break
  }

  // ===== AUTO-FIX =====
  iteration++

  // Extract failure details
  const failureLines = lastOutput.split('\n')
    .filter(l => /FAIL|Error|AssertionError|Expected|Received|TypeError|ReferenceError/.test(l))
    .slice(0, 30)
    .join('\n')

  // Delegate the fix to code-developer
  Task({
    subagent_type: "code-developer",
    run_in_background: false,
    description: `Fix ${testsFailed} test failures (iteration ${iteration}/${MAX_ITERATIONS})`,
    prompt: `## Goal
Fix failing tests. ONLY modify test files, NEVER modify source code.

## Test Output
\`\`\`
${failureLines}
\`\`\`

## Test Files to Fix
${generatedTests.map(f => `- ${f}`).join('\n')}

## Rules
- Read each failing test file before modifying
- Fix: incorrect assertions, missing imports, wrong mocks, setup issues
- Do NOT: skip tests, add \`@ts-ignore\`, use \`as any\`, modify source code
- Keep existing test structure and naming
- If a test is fundamentally wrong about expected behavior, fix the assertion to match actual source behavior`
  })
}
```

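The result parsing inside the loop can be exercised standalone. A minimal extraction of that logic (the regexes target common vitest/jest/mocha summary lines; actual reporter output varies):

```javascript
// Extract pass/fail counts and pass rate from raw test runner output
function parseTestOutput(output) {
  const passedMatch = output.match(/(\d+)\s*(?:passed|passing)/)
  const failedMatch = output.match(/(\d+)\s*(?:failed|failing)/)
  const testsPassed = passedMatch ? parseInt(passedMatch[1]) : 0
  const testsFailed = failedMatch ? parseInt(failedMatch[1]) : 0
  const total = testsPassed + testsFailed
  return {
    testsPassed,
    testsFailed,
    passRate: total > 0 ? Math.round(testsPassed / total * 100) : 0
  }
}

console.log(parseTestOutput('Tests  18 passed | 2 failed (20)'))
// → { testsPassed: 18, testsFailed: 2, passRate: 90 }
```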
### Step 3: Result Processing

```javascript
const resultData = {
  layer: targetLayer,
  framework: framework,
  iterations: iteration,
  pass_rate: passRate,
  coverage: coverage,
  tests_passed: testsPassed,
  tests_failed: testsFailed,
  all_passed: testsFailed === 0,
  max_iterations_reached: iteration >= MAX_ITERATIONS
}

// Save execution results
Bash(`mkdir -p "${sessionFolder}/results"`)
Write(`${sessionFolder}/results/run-${targetLayer}.json`, JSON.stringify(resultData, null, 2))

// Save the last test output (key portion only)
const outputSummary = lastOutput.split('\n').slice(-30).join('\n')
Write(`${sessionFolder}/results/output-${targetLayer}.txt`, outputSummary)

// Update shared memory
sharedMemory.execution_results = sharedMemory.execution_results || {}
sharedMemory.execution_results[targetLayer] = resultData
sharedMemory.execution_results.pass_rate = passRate
sharedMemory.execution_results.coverage = coverage
Write(`${sessionFolder}/.msg/meta.json`, JSON.stringify(sharedMemory, null, 2))
```

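For reference, an L1 run that converged after two fix iterations would yield a `run-L1.json` along these lines (illustrative values only):

```json
{
  "layer": "L1",
  "framework": "vitest",
  "iterations": 2,
  "pass_rate": 100,
  "coverage": 87.5,
  "tests_passed": 42,
  "tests_failed": 0,
  "all_passed": true,
  "max_iterations_reached": false
}
```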
## Output Format

```
## Test Execution Results

### Layer: [L1|L2|L3]
### Framework: [vitest|jest|pytest]
### Status: [PASS|FAIL]

### Results
- Tests passed: [count]
- Tests failed: [count]
- Pass rate: [percent]%
- Coverage: [percent]%
- Fix iterations: [count]/[max]

### Failure Details (if any)
- [test name]: [error description]
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Test command not found | Try fallback: npm test → npx vitest → npx jest → pytest |
| Test environment broken | Report error to coordinator, suggest manual fix |
| Max iterations reached with failures | Report current state, let coordinator decide (GC loop or accept) |
| Coverage data unavailable | Report 0%, note coverage collection failure |
| Sub-agent fix introduces new failures | Revert last fix, try different approach |
| No test files to run | Report empty, notify coordinator |
| Agent/CLI failure | Retry once, then fallback to inline execution |
| Timeout (>5 min) | Report partial results, notify coordinator |

# Executor Role

Test executor. Run test suites, collect coverage data, and perform automatic fix cycles when tests fail. Implement the execution side of the Generator-Executor (GC) loop.

## Identity

- **Name**: `executor` | **Tag**: `[executor]`
- **Task Prefix**: `QARUN-*`
- **Responsibility**: Validation (test execution and fix)

## Boundaries

### MUST
- Only process `QARUN-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry the `[executor]` identifier
- Only communicate with the coordinator, via SendMessage
- Execute tests and collect coverage
- Attempt automatic fixes on failure
- Work strictly within the test execution responsibility scope

### MUST NOT
- Execute work outside this role's responsibility scope
- Generate new tests from scratch (that is the generator's responsibility)
- Modify source code (unless fixing tests themselves)
- Communicate directly with other worker roles (must go through the coordinator)
- Create tasks for other roles (TaskCreate is coordinator-exclusive)
- Omit the `[executor]` identifier in any output

---

## Toolbox

### Available Commands

| Command | File | Phase | Description |
|---------|------|-------|-------------|
| `run-fix-cycle` | [commands/run-fix-cycle.md](commands/run-fix-cycle.md) | Phase 3 | Iterative test execution and auto-fix |

### Tool Capabilities

| Tool | Type | Used By | Purpose |
|------|------|---------|---------|
| `code-developer` | subagent | run-fix-cycle.md | Test failure auto-fix |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `tests_passed` | executor -> coordinator | All tests pass | Contains coverage data |
| `tests_failed` | executor -> coordinator | Tests fail | Contains failure details and fix attempts |
| `coverage_report` | executor -> coordinator | Coverage collected | Coverage data |
| `error` | executor -> coordinator | Execution environment error | Blocking error |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: <session-id>,
  from: "executor",
  type: <message-type>,
  data: { ref: <results-file>, pass_rate, coverage, iterations }
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --session-id <session-id> --from executor --type <message-type> --json")
```
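A defensive variant of this logging step tries MCP first and drops to the CLI only on failure. A sketch in the same pseudo-tool notation as the blocks above (`logTeamMessage` is a hypothetical helper, not part of the skill):

```
function logTeamMessage(session_id, type, data) {
  try {
    mcp__ccw-tools__team_msg({ operation: "log", session_id, from: "executor", type, data })
  } catch (err) {
    // MCP unavailable -> CLI fallback
    Bash(`ccw team log --session-id ${session_id} --from executor --type ${type} --json`)
  }
}
```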
---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `QARUN-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

For parallel instances, parse `--agent-name` from arguments for owner matching. Falls back to `executor` for single-instance execution.

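The discovery filter above can be sketched as a plain predicate over the task list. A minimal sketch, assuming each task exposes `id`, `owner`, `status`, and `blocked` fields (the real TaskList schema may differ):

```javascript
// Select the next actionable QARUN-* task for this executor instance
function nextTask(tasks, agentName = 'executor') {
  return tasks.find(t =>
    t.id.startsWith('QARUN-') &&
    t.owner === agentName &&
    t.status === 'pending' &&
    !t.blocked
  )
}

const tasks = [
  { id: 'QAGEN-1', owner: 'generator', status: 'pending', blocked: false },
  { id: 'QARUN-1', owner: 'executor', status: 'pending', blocked: true },
  { id: 'QARUN-2', owner: 'executor', status: 'pending', blocked: false }
]
console.log(nextTask(tasks).id) // → QARUN-2
```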
### Phase 2: Environment Detection

**Detection steps**:

1. Extract session path from task description
2. Read shared memory for strategy and generated tests

| Input | Source | Required |
|-------|--------|----------|
| Shared memory | <session-folder>/.msg/meta.json | Yes |
| Test strategy | sharedMemory.test_strategy | Yes |
| Generated tests | sharedMemory.generated_tests | Yes |
| Target layer | task description | Yes |

3. Detect test command based on framework:

| Framework | Command Pattern |
|-----------|-----------------|
| jest | `npx jest --coverage --testPathPattern="<layer>"` |
| vitest | `npx vitest run --coverage --reporter=json` |
| pytest | `python -m pytest --cov --cov-report=json` |
| mocha | `npx mocha --reporter json` |
| unknown | `npm test -- --coverage` |

4. Get changed test files from generated_tests[targetLayer].files

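When shared memory does not name a framework, one fallback heuristic is to inspect the project's dependency map from `package.json`. A minimal Node-only sketch (it cannot detect pytest, and assumes dependency names map directly to runners):

```javascript
// Guess the test framework from a package.json dependency map
function detectFramework(deps) {
  const known = ['vitest', 'jest', 'mocha']
  return known.find(name => name in deps) || 'unknown'
}

const pkgDeps = { typescript: '^5.4.0', vitest: '^1.6.0' }
console.log(detectFramework(pkgDeps)) // → vitest
```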
### Phase 3: Execution & Fix Cycle

Delegate to `commands/run-fix-cycle.md` if available, otherwise execute inline.

**Iterative Test-Fix Cycle**:

| Step | Action |
|------|--------|
| 1 | Run test command |
| 2 | Parse results -> check pass rate |
| 3 | Pass rate >= 95% -> exit loop (success) |
| 4 | Extract failing test details |
| 5 | Delegate fix to code-developer subagent |
| 6 | Increment iteration counter |
| 7 | iteration >= MAX (5) -> exit loop (report failures) |
| 8 | Go to Step 1 |

**Fix Agent Prompt Structure**:
- Goal: Fix failing tests
- Constraint: Do NOT modify source code, only fix test files
- Input: Failure details, test file list
- Instructions: Read failing tests, fix assertions/imports/setup, do NOT skip/ignore tests

### Phase 4: Result Analysis

**Analyze test outcomes**:

| Metric | Source | Threshold |
|--------|--------|-----------|
| Pass rate | Test output parser | >= 95% |
| Coverage | Coverage tool output | Per layer target |
| Flaky tests | Compare runs | 0 flaky |

**Result Data Structure**:
- layer, iterations, pass_rate, coverage
- tests_passed, tests_failed, all_passed

Save results to `<session-folder>/results/run-<layer>.json`.

Update shared memory with the `execution_results` field.

### Phase 5: Report to Coordinator

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

Standard report flow: team_msg log -> SendMessage with `[executor]` prefix -> TaskUpdate completed -> Loop to Phase 1 for next task.

Message type selection: `tests_passed` if all_passed, else `tests_failed`.

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No QARUN-* tasks available | Idle, wait for coordinator |
| Test command fails to execute | Try fallback: `npm test`, `npx vitest run`, `pytest` |
| Max iterations reached | Report current pass rate, let coordinator decide |
| Coverage data unavailable | Report 0%, note coverage collection failure |
| Test environment broken | SendMessage error to coordinator, suggest manual fix |
| Sub-agent fix introduces new failures | Revert fix, try next failure |