mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-02-10 02:24:35 +08:00
refactor(commands/agents): replace bash/jq with ccw session commands
Replace legacy bash/jq operations with ccw session commands for better consistency and maintainability across workflow commands and agents.

Changes:
- commands/memory/docs.md: Use ccw session update/read for session ops
- commands/workflow/review.md: Replace cat/jq with ccw session read
- commands/workflow/tdd-verify.md: Replace find/jq with ccw session read
- agents/conceptual-planning-agent.md: Use ccw session read for metadata
- agents/test-fix-agent.md: Use ccw session read for context package
- skills/command-guide/reference/*: Mirror changes to skill docs
- ccw/src/commands/session.js: Fix EPIPE error when piping to jq/head

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@@ -109,7 +109,7 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
 3. **load_session_metadata**
    - Action: Load session metadata
-   - Command: bash(cat .workflow/WFS-{session}/workflow-session.json)
+   - Command: ccw session read WFS-{session} --type session
    - Output: session_metadata
 ```

@@ -155,7 +155,7 @@ When called, you receive:
 - **User Context**: Specific requirements, constraints, and expectations from user discussion
 - **Output Location**: Directory path for generated analysis files
 - **Role Hint** (optional): Suggested role or role selection guidance
-- **context-package.json** (CCW Workflow): Artifact paths catalog - extract using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
+- **context-package.json** (CCW Workflow): Artifact paths catalog - use `ccw session read {session} --type context` to get context package
 - **ASSIGNED_ROLE** (optional): Specific role assignment
 - **ANALYSIS_DIMENSIONS** (optional): Role-specific analysis dimensions

@@ -83,17 +83,18 @@ When task JSON contains implementation_approach array:
 - L1 (Unit): `*.test.*`, `*.spec.*` in `__tests__/`, `tests/unit/`
 - L2 (Integration): `tests/integration/`, `*.integration.test.*`
 - L3 (E2E): `tests/e2e/`, `*.e2e.test.*`, `cypress/`, `playwright/`
-- **context-package.json** (CCW Workflow): Extract artifact paths using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
+- **context-package.json** (CCW Workflow): Use `ccw session read {session} --type context` to get context package with artifact paths
 - Identify test commands from project configuration

 ```bash
 # Detect test framework and multi-layered commands
 if [ -f "package.json" ]; then
-  # Extract layer-specific test commands
-  LINT_CMD=$(cat package.json | jq -r '.scripts.lint // "eslint ."')
-  UNIT_CMD=$(cat package.json | jq -r '.scripts["test:unit"] // .scripts.test')
-  INTEGRATION_CMD=$(cat package.json | jq -r '.scripts["test:integration"] // ""')
-  E2E_CMD=$(cat package.json | jq -r '.scripts["test:e2e"] // ""')
+  # Extract layer-specific test commands using Read tool or jq
+  PKG_JSON=$(cat package.json)
+  LINT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts.lint // "eslint ."')
+  UNIT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:unit"] // .scripts.test')
+  INTEGRATION_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:integration"] // ""')
+  E2E_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:e2e"] // ""')
 elif [ -f "pytest.ini" ] || [ -f "setup.py" ]; then
   LINT_CMD="ruff check . || flake8 ."
   UNIT_CMD="pytest tests/unit/"
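The jq `//` alternative operator in the commands above supplies a default whenever a script entry is absent. A minimal Node sketch of the same lookup logic, using a hypothetical sample `scripts` object rather than the repository's real package.json (note that jq's `//` also falls back on `false`, which JavaScript's `??` does not):

```javascript
// Hypothetical sample data; real values would come from the project's package.json.
const pkg = { scripts: { test: "vitest run", lint: "eslint ." } };

// Mirrors: jq -r '.scripts["test:unit"] // .scripts.test'
const unitCmd = pkg.scripts["test:unit"] ?? pkg.scripts.test;
// Mirrors: jq -r '.scripts.lint // "eslint ."'
const lintCmd = pkg.scripts.lint ?? "eslint .";
// Mirrors: jq -r '.scripts["test:e2e"] // ""'
const e2eCmd = pkg.scripts["test:e2e"] ?? "";
```

Reading package.json once into `PKG_JSON` (as the patched script does) avoids four redundant `cat` invocations while keeping each jq filter unchanged.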
@@ -74,7 +74,7 @@ SlashCommand(command="/workflow:session:start --type docs --new \"{project_name}

 ```bash
 # Update workflow-session.json with docs-specific fields
-bash(jq '. + {"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' .workflow/active/{sessionId}/workflow-session.json > tmp.json && mv tmp.json .workflow/active/{sessionId}/workflow-session.json)
+ccw session update {sessionId} --type session --content '{"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}'
 ```

 ### Phase 2: Analyze Structure
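The replaced `jq '. + {...}'` invocation is a shallow merge where the right-hand object's keys win; assuming `ccw session update --content` applies a comparable merge, the semantics can be sketched in JS with illustrative sample objects (not the real workflow-session.json):

```javascript
// Illustrative session state; field names are examples, not the actual schema.
const session = { session_id: "WFS-demo", status: "active", mode: "plan" };
const patch = { mode: "full", tool: "gemini", cli_execute: false };

// jq's '. + {...}' is a shallow merge: patch keys override, others survive.
const merged = { ...session, ...patch };
```

The old pipeline needed a temp file (`> tmp.json && mv`) because jq cannot edit in place; the session command hides that bookkeeping.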
@@ -136,7 +136,8 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj

 ```bash
 # Count existing docs from doc-planning-data.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
+ccw session read WFS-docs-{timestamp} --type process --filename doc-planning-data.json --raw | jq '.existing_docs.file_list | length'
+# Or read entire process file and parse
 ```

 **Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
@@ -190,10 +191,10 @@ Large Projects (single dir >10 docs):

 ```bash
 # 1. Get top-level directories from doc-planning-data.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
+ccw session read WFS-docs-{timestamp} --type process --filename doc-planning-data.json --raw | jq -r '.top_level_dirs[]'

 # 2. Get mode from workflow-session.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
+ccw session read WFS-docs-{timestamp} --type session --raw | jq -r '.mode // "full"'

 # 3. Check for HTTP API
 bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo "NO_API")
@@ -222,7 +223,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo

 **Task ID Calculation**:
 ```bash
-group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json)
+group_count=$(ccw session read WFS-docs-{timestamp} --type process --filename doc-planning-data.json --raw | jq '.groups.count')
 readme_id=$((group_count + 1))  # Next ID after groups
 arch_id=$((group_count + 2))
 api_id=$((group_count + 3))
@@ -285,8 +286,8 @@ api_id=$((group_count + 3))
       "step": "load_precomputed_data",
       "action": "Load Phase 2 analysis and extract group directories",
       "commands": [
-        "bash(cat ${session_dir}/.process/doc-planning-data.json)",
-        "bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
+        "ccw session read ${session_id} --type process --filename doc-planning-data.json",
+        "ccw session read ${session_id} --type process --filename doc-planning-data.json --raw | jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories'"
       ],
       "output_to": "phase2_context",
       "note": "Single JSON file contains all Phase 2 analysis results"
@@ -113,13 +113,14 @@ After bash validation, the model takes control to:
 1. **Load Context**: Read completed task summaries and changed files
    ```bash
    # Load implementation summaries
-   cat .workflow/active/${sessionId}/.summaries/IMPL-*.md
+   ccw session read ${sessionId} --type summary --raw

    # Load test results (if available)
-   cat .workflow/active/${sessionId}/.summaries/TEST-FIX-*.md 2>/dev/null
+   ccw session read ${sessionId} --type summary --filename "TEST-FIX-*.md" --raw 2>/dev/null

-   # Get changed files
-   git log --since="$(cat .workflow/active/${sessionId}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u
+   # Get session created_at for git log filter
+   created_at=$(ccw session read ${sessionId} --type session --raw | jq -r .created_at)
+   git log --since="$created_at" --name-only --pretty=format: | sort -u
    ```

 2. **Perform Specialized Review**: Based on `review_type`
@@ -169,11 +170,11 @@ After bash validation, the model takes control to:
 - Verify all requirements and acceptance criteria met:
   ```bash
   # Load task requirements and acceptance criteria
-  find .workflow/active/${sessionId}/.task -name "IMPL-*.json" -exec jq -r '
+  ccw session read ${sessionId} --type task --raw | jq -r '
    "Task: " + .id + "\n" +
    "Requirements: " + (.context.requirements | join(", ")) + "\n" +
    "Acceptance: " + (.context.acceptance | join(", "))
-  ' {} \;
+  '

   # Check implementation summaries against requirements
   cd .workflow/active/${sessionId} && gemini -p "
@@ -77,18 +77,18 @@ find .workflow/active/ -name "WFS-*" -type d | head -1 | sed 's/.*\///'

 ```bash
 # Load all task JSONs
-find .workflow/active/{sessionId}/.task/ -name '*.json'
+ccw session read {sessionId} --type task

 # Extract task IDs
-find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.id' {} \;
+ccw session read {sessionId} --type task --raw | jq -r '.id'

-# Check dependencies
-find .workflow/active/{sessionId}/.task/ -name 'IMPL-*.json' -exec jq -r '.context.depends_on[]?' {} \;
-find .workflow/active/{sessionId}/.task/ -name 'REFACTOR-*.json' -exec jq -r '.context.depends_on[]?' {} \;
+# Check dependencies - read tasks and filter for IMPL/REFACTOR
+ccw session read {sessionId} --type task --task-id "IMPL-*" --raw | jq -r '.context.depends_on[]?'
+ccw session read {sessionId} --type task --task-id "REFACTOR-*" --raw | jq -r '.context.depends_on[]?'

 # Check meta fields
-find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.tdd_phase' {} \;
+ccw session read {sessionId} --type task --raw | jq -r '.meta.tdd_phase'
-find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
+ccw session read {sessionId} --type task --raw | jq -r '.meta.agent'
 ```

 **Validation**:
@@ -109,7 +109,7 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
 3. **load_session_metadata**
    - Action: Load session metadata
-   - Command: bash(cat .workflow/WFS-{session}/workflow-session.json)
+   - Command: ccw session read WFS-{session} --type session
    - Output: session_metadata
 ```

@@ -155,7 +155,7 @@ When called, you receive:
 - **User Context**: Specific requirements, constraints, and expectations from user discussion
 - **Output Location**: Directory path for generated analysis files
 - **Role Hint** (optional): Suggested role or role selection guidance
-- **context-package.json** (CCW Workflow): Artifact paths catalog - extract using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
+- **context-package.json** (CCW Workflow): Artifact paths catalog - use `ccw session read {session} --type context` to get context package
 - **ASSIGNED_ROLE** (optional): Specific role assignment
 - **ANALYSIS_DIMENSIONS** (optional): Role-specific analysis dimensions

@@ -83,17 +83,18 @@ When task JSON contains implementation_approach array:
 - L1 (Unit): `*.test.*`, `*.spec.*` in `__tests__/`, `tests/unit/`
 - L2 (Integration): `tests/integration/`, `*.integration.test.*`
 - L3 (E2E): `tests/e2e/`, `*.e2e.test.*`, `cypress/`, `playwright/`
-- **context-package.json** (CCW Workflow): Extract artifact paths using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
+- **context-package.json** (CCW Workflow): Use `ccw session read {session} --type context` to get context package with artifact paths
 - Identify test commands from project configuration

 ```bash
 # Detect test framework and multi-layered commands
 if [ -f "package.json" ]; then
-  # Extract layer-specific test commands
-  LINT_CMD=$(cat package.json | jq -r '.scripts.lint // "eslint ."')
-  UNIT_CMD=$(cat package.json | jq -r '.scripts["test:unit"] // .scripts.test')
-  INTEGRATION_CMD=$(cat package.json | jq -r '.scripts["test:integration"] // ""')
-  E2E_CMD=$(cat package.json | jq -r '.scripts["test:e2e"] // ""')
+  # Extract layer-specific test commands using Read tool or jq
+  PKG_JSON=$(cat package.json)
+  LINT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts.lint // "eslint ."')
+  UNIT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:unit"] // .scripts.test')
+  INTEGRATION_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:integration"] // ""')
+  E2E_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:e2e"] // ""')
 elif [ -f "pytest.ini" ] || [ -f "setup.py" ]; then
   LINT_CMD="ruff check . || flake8 ."
   UNIT_CMD="pytest tests/unit/"
@@ -74,7 +74,7 @@ SlashCommand(command="/workflow:session:start --type docs --new \"{project_name}

 ```bash
 # Update workflow-session.json with docs-specific fields
-bash(jq '. + {"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' .workflow/active/{sessionId}/workflow-session.json > tmp.json && mv tmp.json .workflow/active/{sessionId}/workflow-session.json)
+ccw session update {sessionId} --type session --content '{"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}'
 ```

 ### Phase 2: Analyze Structure
@@ -136,7 +136,8 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj

 ```bash
 # Count existing docs from doc-planning-data.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
+ccw session read WFS-docs-{timestamp} --type process --filename doc-planning-data.json --raw | jq '.existing_docs.file_list | length'
+# Or read entire process file and parse
 ```

 **Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
@@ -115,7 +115,7 @@ List sessions with metadata and prompt user selection:
 ```bash
 bash(for dir in .workflow/active/WFS-*/; do
   session=$(basename "$dir")
-  project=$(jq -r '.project // "Unknown"' "$dir/workflow-session.json" 2>/dev/null)
+  project=$(ccw session read "$session" --type session --raw 2>/dev/null | jq -r '.project // "Unknown"')
   total=$(grep -c "^- \[" "$dir/TODO_LIST.md" 2>/dev/null || echo "0")
   completed=$(grep -c "^- \[x\]" "$dir/TODO_LIST.md" 2>/dev/null || echo "0")
   [ "$total" -gt 0 ] && progress=$((completed * 100 / total)) || progress=0
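The progress arithmetic in the loop above (integer percentage with a zero-guard) is easy to get wrong when `TODO_LIST.md` is missing; isolated as a function, with an illustrative name not taken from the repo:

```javascript
// Same logic as: [ "$total" -gt 0 ] && progress=$((completed * 100 / total)) || progress=0
// Bash $(( )) truncates toward zero; Math.floor matches for non-negative counts.
function progressPct(completed, total) {
  return total > 0 ? Math.floor((completed * 100) / total) : 0;
}
```

Multiplying by 100 before dividing is what preserves precision under integer arithmetic in the shell version.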
@@ -152,7 +152,7 @@ Parse user input (supports: number "1", full ID "WFS-auth-system", or partial "a

 #### Step 1.3: Load Session Metadata
 ```bash
-bash(cat .workflow/active/${sessionId}/workflow-session.json)
+ccw session read ${sessionId} --type session
 ```

 **Output**: Store session metadata in memory
@@ -113,13 +113,14 @@ After bash validation, the model takes control to:
 1. **Load Context**: Read completed task summaries and changed files
    ```bash
    # Load implementation summaries
-   cat .workflow/active/${sessionId}/.summaries/IMPL-*.md
+   ccw session read ${sessionId} --type summary --raw

    # Load test results (if available)
-   cat .workflow/active/${sessionId}/.summaries/TEST-FIX-*.md 2>/dev/null
+   ccw session read ${sessionId} --type summary --filename "TEST-FIX-*.md" --raw 2>/dev/null

-   # Get changed files
-   git log --since="$(cat .workflow/active/${sessionId}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u
+   # Get session created_at for git log filter
+   created_at=$(ccw session read ${sessionId} --type session --raw | jq -r .created_at)
+   git log --since="$created_at" --name-only --pretty=format: | sort -u
    ```

 2. **Perform Specialized Review**: Based on `review_type`
@@ -169,11 +170,11 @@ After bash validation, the model takes control to:
 - Verify all requirements and acceptance criteria met:
   ```bash
   # Load task requirements and acceptance criteria
-  find .workflow/active/${sessionId}/.task -name "IMPL-*.json" -exec jq -r '
+  ccw session read ${sessionId} --type task --raw | jq -r '
    "Task: " + .id + "\n" +
    "Requirements: " + (.context.requirements | join(", ")) + "\n" +
    "Acceptance: " + (.context.acceptance | join(", "))
-  ' {} \;
+  '

   # Check implementation summaries against requirements
   cd .workflow/active/${sessionId} && gemini -p "
@@ -77,18 +77,18 @@ find .workflow/active/ -name "WFS-*" -type d | head -1 | sed 's/.*\///'

 ```bash
 # Load all task JSONs
-find .workflow/active/{sessionId}/.task/ -name '*.json'
+ccw session read {sessionId} --type task

 # Extract task IDs
-find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.id' {} \;
+ccw session read {sessionId} --type task --raw | jq -r '.id'

-# Check dependencies
-find .workflow/active/{sessionId}/.task/ -name 'IMPL-*.json' -exec jq -r '.context.depends_on[]?' {} \;
-find .workflow/active/{sessionId}/.task/ -name 'REFACTOR-*.json' -exec jq -r '.context.depends_on[]?' {} \;
+# Check dependencies - read tasks and filter for IMPL/REFACTOR
+ccw session read {sessionId} --type task --task-id "IMPL-*" --raw | jq -r '.context.depends_on[]?'
+ccw session read {sessionId} --type task --task-id "REFACTOR-*" --raw | jq -r '.context.depends_on[]?'

 # Check meta fields
-find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.tdd_phase' {} \;
+ccw session read {sessionId} --type task --raw | jq -r '.meta.tdd_phase'
-find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
+ccw session read {sessionId} --type task --raw | jq -r '.meta.agent'
 ```

 **Validation**:
@@ -7,6 +7,14 @@ import chalk from 'chalk';
 import http from 'http';
 import { executeTool } from '../tools/index.js';

+// Handle EPIPE errors gracefully (occurs when piping to head/jq that closes early)
+process.stdout.on('error', (err) => {
+  if (err.code === 'EPIPE') {
+    process.exit(0);
+  }
+  throw err;
+});
+
 /**
  * Notify dashboard of granular events (fire and forget)
  * @param {Object} data - Event data
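The EPIPE guard added above can be exercised without producing a real broken pipe by factoring out the branch; `exit` here is a stand-in for `process.exit`, and the function name is illustrative, not from the repo:

```javascript
// Factored sketch of the stdout 'error' handler introduced in session.js.
function epipeHandler(exit) {
  return (err) => {
    if (err.code === 'EPIPE') {
      // Downstream consumer (e.g. `head` or `jq` exiting early) closed the
      // pipe; this is expected, so terminate cleanly instead of crashing.
      exit(0);
      return;
    }
    // Any other stream error is genuinely unexpected: rethrow it.
    throw err;
  };
}

// Wired up as in the patch:
// process.stdout.on('error', epipeHandler((code) => process.exit(code)));
```

Node raises EPIPE on `process.stdout` as an `'error'` event, so without a listener a command like `ccw session read ... | head` can crash after `head` exits.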