refactor(commands/agents): replace bash/jq with ccw session commands

Replace legacy bash/jq operations with ccw session commands for better
consistency and maintainability across workflow commands and agents.

Changes:
- commands/memory/docs.md: Use ccw session update/read for session ops
- commands/workflow/review.md: Replace cat/jq with ccw session read
- commands/workflow/tdd-verify.md: Replace find/jq with ccw session read
- agents/conceptual-planning-agent.md: Use ccw session read for metadata
- agents/test-fix-agent.md: Use ccw session read for context package
- skills/command-guide/reference/*: Mirror changes to skill docs
- ccw/src/commands/session.js: Fix EPIPE error when piping to jq/head (see the pipeline sketch after the commit metadata below)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: catlog22
Date: 2025-12-11 14:48:02 +08:00
parent e815c3c10e
commit 0a4c205105
12 changed files with 67 additions and 53 deletions
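The session.js change is the only part of the commit whose code is not visible in the doc diffs below. The sketch that follows is a minimal illustration of the kind of pipeline involved, assembled only from commands that appear later in this diff; the actual guard added inside session.js is not shown here, so treat this as an assumed reproduction of the symptom, not the fix itself.

```bash
# `head -n 1` exits after printing one line and closes the pipe while ccw may
# still be writing; before this commit that early close surfaced as an EPIPE
# error from the ccw process. A downstream `jq` exiting early behaves the same way.
ccw session read WFS-docs-{timestamp} --type process --filename doc-planning-data.json --raw \
  | jq -r '.top_level_dirs[]' \
  | head -n 1
```

A common remedy in a Node.js CLI is to treat EPIPE on stdout as a normal early exit rather than a crash; whether session.js takes exactly that approach is an assumption here.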


@@ -74,7 +74,7 @@ SlashCommand(command="/workflow:session:start --type docs --new \"{project_name}
```bash
# Update workflow-session.json with docs-specific fields
-bash(jq '. + {"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' .workflow/active/{sessionId}/workflow-session.json > tmp.json && mv tmp.json .workflow/active/{sessionId}/workflow-session.json)
+ccw session update {sessionId} --type session --content '{"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}'
```
### Phase 2: Analyze Structure
@@ -136,7 +136,8 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
```bash
# Count existing docs from doc-planning-data.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
+ccw session read WFS-docs-{timestamp} --type process --filename doc-planning-data.json --raw | jq '.existing_docs.file_list | length'
+# Or read entire process file and parse
```
**Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
@@ -190,10 +191,10 @@ Large Projects (single dir >10 docs):
```bash
# 1. Get top-level directories from doc-planning-data.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
+ccw session read WFS-docs-{timestamp} --type process --filename doc-planning-data.json --raw | jq -r '.top_level_dirs[]'
# 2. Get mode from workflow-session.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
+ccw session read WFS-docs-{timestamp} --type session --raw | jq -r '.mode // "full"'
# 3. Check for HTTP API
bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo "NO_API")
@@ -222,7 +223,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
**Task ID Calculation**:
```bash
-group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json)
+group_count=$(ccw session read WFS-docs-{timestamp} --type process --filename doc-planning-data.json --raw | jq '.groups.count')
readme_id=$((group_count + 1)) # Next ID after groups
arch_id=$((group_count + 2))
api_id=$((group_count + 3))
@@ -285,8 +286,8 @@ api_id=$((group_count + 3))
"step": "load_precomputed_data",
"action": "Load Phase 2 analysis and extract group directories",
"commands": [
"bash(cat ${session_dir}/.process/doc-planning-data.json)",
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
"ccw session read ${session_id} --type process --filename doc-planning-data.json",
"ccw session read ${session_id} --type process --filename doc-planning-data.json --raw | jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories'"
],
"output_to": "phase2_context",
"note": "Single JSON file contains all Phase 2 analysis results"