Compare commits


18 Commits

Author SHA1 Message Date
catlog22
2efd45b0ed feat: Redesign test-gen workflow with independent session and cross-session context
Major redesign of test-gen workflow based on simplified architecture:

Core Changes:
- Creates independent test session (WFS-test-[source]) instead of modifying existing session
- Implements 4-phase execution with concept-enhanced analysis integration
- Reuses IMPL-*.json format with meta.type="test-fix" (no new task type)
- Auto-detects cross-session context via workflow-session.json metadata

4-Phase Workflow:
1. Phase 1: Create test session with workflow_type and source_session_id metadata
2. Phase 2: Auto-detect and gather cross-session context (summaries, git changes, tests)
3. Phase 3: Analyze implementation with concept-enhanced (parallel Gemini + Codex)
4. Phase 4: Generate IMPL-001.json task with test strategy from ANALYSIS_RESULTS.md
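A hypothetical sketch of the metadata this flow hinges on: Phase 1 would write `workflow_type` and `source_session_id` into the new test session's `workflow-session.json`, and Phase 2's auto-detection would read them back. The session names, the `workflow_type` value, and any fields beyond those two keys are illustrative assumptions, not quotes from the implementation.

```json
{
  "session_id": "WFS-test-user-auth",
  "workflow_type": "test-gen",
  "source_session_id": "WFS-user-auth",
  "created_at": "2025-10-03T12:00:00+08:00"
}
```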

Key Features:
- Session isolation: Test session separate from implementation session
- Parameter simplification: context-gather auto-detects via metadata (no --source-session param)
- Format reuse: IMPL-*.json with meta.type field (zero changes to execute/status tools)
- Analysis integration: concept-enhanced provides comprehensive test strategy and code targets
- Automatic cross-session: Collects source session artifacts automatically

Tool Modifications Required:
- session-start: Support test session metadata (workflow_type, source_session_id)
- context-gather: Auto-detect test session and gather from source session
- task-generate: Extract test strategy from ANALYSIS_RESULTS.md
- concept-enhanced: No changes (already supports analysis)
- execute: No changes (already dispatches by meta.agent)

Data Flow:
User → session-start (creates test session) → context-gather (auto cross-session)
→ concept-enhanced (analyzes + generates strategy) → task-generate (creates task)
→ Returns summary → User triggers /workflow:execute separately

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-03 12:04:00 +08:00
catlog22
ae77e698db docs: Add prerequisites section with required tools installation guide
Added comprehensive prerequisites section to both English and Chinese README files:
- Core CLI tools: Gemini CLI, Codex CLI, Qwen Code with GitHub links
- System utilities: ripgrep, jq with download links
- Platform-specific quick install commands (macOS, Ubuntu/Debian, Windows)
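As a rough illustration of the platform-specific quick installs for the system utilities, the commands might look like the following; the package managers and package names are common defaults and are assumptions rather than excerpts from the README.

```bash
# macOS (Homebrew)
brew install ripgrep jq

# Ubuntu/Debian
sudo apt-get update && sudo apt-get install -y ripgrep jq

# Windows (Chocolatey)
choco install ripgrep jq
```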

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-03 09:36:47 +08:00
catlog22
b945e2de55 docs: Enhance target_files documentation and fix cross-platform command compatibility
## Target Files Enhancement
- Add support for new file creation in target_files format
- Update task-generate.md with comprehensive target_files generation guide
- Update concept-enhanced.md to output code modification targets
- Add examples showing both modification (file:function:lines) and creation (file) formats

## Cross-Platform Command Fixes
- Replace ls with find commands for better Windows Git Bash compatibility
- Update workflow commands: execute.md, status.md, review.md
- Update session commands: list.md, complete.md
- Add Bash environment guidelines to context-search-strategy.md
- Document forbidden Windows commands (findstr, dir, where, etc.)
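A minimal sketch of the `ls`-to-`find` substitution described above (paths are placeholders); the same pattern appears in the execute.md hunk later in this diff.

```bash
# Before: ls-based counting, less reliable under Windows Git Bash
ls .workflow/*/.task/*.json | wc -l

# After: find-based equivalent used in the updated commands
find .workflow -path '*/.task/*.json' -type f 2>/dev/null | wc -l
```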

## Files Updated
- Core workflows: workflow-architecture.md, task-core.md
- Command docs: 9 workflow command files
- Agent docs: action-planning-agent.md, task-generate-agent.md
- Strategy docs: context-search-strategy.md

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-02 22:40:37 +08:00
catlog22
661cb5be1c chore: Bump version to v3.2.1
- Update version badges in README.md, README_CN.md, PROJECT_INTRODUCTION.md
- Add RELEASE_NOTES_v3.2.1.md with detailed documentation fix changelog
- Document workflow-session.json path corrections

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-02 22:10:01 +08:00
catlog22
94a2150836 fix: Correct workflow-session.json path references in brainstorming docs
- Fix incorrect path: .brainstorming/workflow-session.json → /workflow-session.json
- Update 8 role files: data-architect, product-manager, product-owner, scrum-master,
  subject-matter-expert, ui-designer, ux-expert, auto-parallel
- Update artifacts.md to clarify correct session file location
- Align all paths with workflow-architecture.md standard structure

Fixes session file path confusion issue. workflow-session.json should be at
session root (.workflow/WFS-{session}/), not in .brainstorming/ subdirectory.
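In concrete terms (the topic name is a placeholder), the correction moves reads of the session file from the brainstorming subdirectory to the session root:

```bash
# Incorrect (pre-fix): session file assumed inside .brainstorming/
cat .workflow/WFS-user-auth/.brainstorming/workflow-session.json

# Correct (post-fix): session file lives at the session root
cat .workflow/WFS-user-auth/workflow-session.json
```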

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-02 22:08:56 +08:00
catlog22
3067b8bda6 feat: Add .codex and .gemini agent configuration support
- Add .codex/Agent.md: Codex agent execution protocol
- Add .gemini/CLAUDE.md: Gemini agent execution protocol
- Update Install-Claude.ps1: Install .codex/.gemini to user profile (global) or target dir (path mode)
- Update Install-Claude.sh: Same installation support for bash
- Update intelligent-tools-strategy.md: Add MODE field definitions for Gemini/Qwen/Codex
- Update README.md: Add installation notes and workflow selection guide
- Update README_CN.md: Add Chinese version of installation notes and workflow guide

Installation behavior:
- Global mode: .codex and .gemini install to ~/.codex and ~/.gemini
- Path mode: .codex and .gemini install to <target-dir>/.codex and <target-dir>/.gemini
- Automatic backup of existing files before installation
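A rough sketch of the destination logic described above; the variable names and the exact backup scheme are assumptions, not code from Install-Claude.sh.

```bash
# Global mode installs to the user profile; path mode installs under the target directory
if [ "$MODE" = "global" ]; then
  DEST="$HOME"
else
  DEST="$TARGET_DIR"
fi

# Back up any existing configuration before copying new files in
[ -d "$DEST/.codex" ] && cp -r "$DEST/.codex" "$DEST/.codex.bak"
cp -r .codex .gemini "$DEST/"
```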

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-02 15:24:33 +08:00
catlog22
47973718d6 docs: Update version badges and highlights to v3.2.0
Updated version information across README files:

Version Badge Updates:
- Changed version badge from v3.1.0 to v3.2.0
- Updated both English and Chinese README files

Highlights Section Refresh:
- Updated "Latest" section to highlight v3.2.0 features
- Replaced TDD workflow announcement with agent architecture simplification
- Added "What's New in v3.2.0" bullet points

New v3.2.0 Highlights:
- 🔄 Simplified from 3 agents to 2 core agents
-  "Tests Are the Review" philosophy
- 🧪 Enhanced test-fix workflow
- 📦 Interactive installation with version selection

English (README.md):
> Latest: v3.2.0 - Simplified agent architecture with "Tests Are the Review" philosophy
> What's New in v3.2.0:
> - Simplified from 3 agents to 2 core agents
> - "Tests Are the Review" - Passing tests = approved code
> - Enhanced test-fix workflow with automatic execution and fixing
> - Interactive installation with version selection menu

Chinese (README_CN.md):
> 最新版本: v3.2.0 - 采用"测试即审查"理念简化智能体架构
> v3.2.0 版本新特性:
> - 从 3 个智能体简化为 2 个核心智能体
> - "测试即审查" - 测试通过 = 代码批准
> - 增强的测试修复工作流,支持自动执行和修复
> - 交互式安装,包含版本选择菜单

Removed References:
- Removed v3.0.0 and v3.1.0 highlights (moved to CHANGELOG)
- Cleaned up outdated version announcements

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-02 12:56:20 +08:00
catlog22
0b63465e5a docs: Update README with detailed interactive version selection menu
Enhanced installation documentation with complete menu examples:

Added Sections:
- Interactive Version Selection section with full menu display
- Real-time version detection explanation
- Detailed version options descriptions
- Pro tips for users

Menu Example Included:
- Shows actual menu layout with version numbers
- Displays release dates and commit information
- Shows all three version options with descriptions
- Includes recent version list for reference

Documentation Improvements:
- Moved from simple bullet points to full interactive example
- Added ASCII menu visualization
- Explained automatic version detection from GitHub API
- Provided clear guidance on each version option
- Added Pro Tip about default selection

Both English and Chinese Versions:
- README.md: Complete English documentation
- README_CN.md: Complete Chinese translation
- Consistent formatting and examples

Key Information Displayed:
1. Latest Stable Release
   - Version number (e.g., v3.2.0)
   - Release date with UTC timezone
   - Production-ready indicator

2. Latest Development Version
   - Branch name (main)
   - 7-character commit SHA
   - Last update timestamp
   - Feature and stability warnings

3. Specific Release Version
   - Manual version selection
   - List of recent versions
   - Use case for controlled deployments

User Benefits:
- Clear understanding of version selection process
- Visual preview of installation menu
- Informed decision making about version choice
- Reduced installation confusion

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-02 12:53:33 +08:00
catlog22
0a85e98fdb chore: Remove unused image files 2025-10-02 12:49:34 +08:00
catlog22
cdea58f32f fix: Fix Unicode display and add detailed version info to menu
Fixed Issues:
- Unicode tree characters causing display corruption
- Replaced Unicode symbols (└─) with ASCII alternatives (|-- and \--)
- Now uses ASCII-safe characters that display correctly in all terminals

Enhanced Version Information:
- Stable Release: Now shows release date
  * Version: v3.2.0
  * Released: 2025-10-02 04:27 UTC

- Development Version: Now shows commit details
  * Branch: main
  * Commit: ed1e1c4 (7-char SHA)
  * Updated: 2025-10-02 08:15 UTC

Technical Improvements:
- Query both releases and commits APIs in parallel
- Parse and format ISO 8601 timestamps
- Display dates in readable format (YYYY-MM-DD HH:MM UTC)
- Show 7-character commit SHA for brevity
- Wider separator lines for better visibility
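The sed/cut style of timestamp formatting mentioned above could look something like this; the input value mirrors the release date shown in this commit, and the exact pipeline in the script may differ.

```bash
# Turn an ISO 8601 timestamp from the GitHub API into "YYYY-MM-DD HH:MM UTC"
published_at="2025-10-02T04:27:00Z"
echo "$published_at" | sed 's/T/ /; s/Z$//' | cut -c1-16 | sed 's/$/ UTC/'
# -> 2025-10-02 04:27 UTC
```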

Menu Display Format:
===================================================
           Version Selection Menu
===================================================

1) Latest Stable Release (Recommended)
   |-- Version: v3.2.0
   |-- Released: 2025-10-02 04:27 UTC
   \-- Production-ready

2) Latest Development Version
   |-- Branch: main
   |-- Commit: ed1e1c4
   |-- Updated: 2025-10-02 08:15 UTC
   |-- Cutting-edge features
   \-- May contain experimental changes

3) Specific Release Version
   |-- Install a specific tagged release
   \-- Recent: v3.2.0, v3.1.0, v3.0.1

===================================================

Implementation Details:
- PowerShell: DateTime.Parse() for timestamp formatting
- Bash: sed/cut pipeline for date parsing (jq optional)
- Graceful degradation if API calls fail
- 5-second timeout for quick response

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-02 12:48:09 +08:00
catlog22
ed1e1c4bbf feat: Add dynamic version detection to installation menu
Enhanced version selection menu with real-time version detection:

New Features:
- Auto-detect and display latest stable version in menu
- Show branch name for development version (main)
- Display recent release versions for reference
- Improved menu layout with tree-style formatting
- Real-time GitHub API query for latest release

Menu Improvements:
- Wider separator lines for better visibility
- Tree-style indicators (└─) for sub-items
- Color-coded options (Green=Stable, Yellow=Dev, Cyan=Custom)
- Version information displayed directly in menu
- Fallback handling if API request fails

User Experience:
Before:
  1) Latest Stable Release (Recommended)
     - Production-ready version

After:
  1) Latest Stable Release (Recommended)
     └─ Version: v3.2.0
     └─ Production-ready

Implementation:
- PowerShell: Invoke-RestMethod with 5s timeout
- Bash: curl with jq (preferred) or grep/sed fallback
- Graceful degradation if network unavailable
- "Detecting latest release..." status message
- Confirmation message shows selected version
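A hedged sketch of the detection flow described above; the repository path is a placeholder, and the jq/grep-sed fallback mirrors what this commit describes rather than quoting the actual script.

```bash
# Query the latest release with a 5-second timeout; degrade gracefully on failure
echo "Detecting latest release..."
api_url="https://api.github.com/repos/<owner>/<repo>/releases/latest"
response=$(curl -fsSL --max-time 5 "$api_url" 2>/dev/null)

if command -v jq >/dev/null 2>&1; then
  latest=$(echo "$response" | jq -r '.tag_name // empty')
else
  latest=$(echo "$response" | grep '"tag_name"' | sed -E 's/.*"tag_name": *"([^"]+)".*/\1/')
fi

echo "Latest stable release: ${latest:-unknown}"
```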

Example Output:
  Detecting latest release...
  Latest stable release: v3.2.0

  ============================================
           Version Selection Menu
  ============================================

  1) Latest Stable Release (Recommended)
     └─ Version: v3.2.0
     └─ Production-ready

  2) Latest Development Version
     └─ Branch: main
     └─ Cutting-edge features

  3) Specific Release Version
     └─ Recent releases: v3.2.0, v3.1.0, v3.0.1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-02 12:43:03 +08:00
catlog22
b1a2885799 feat: Add interactive version selection menu to installation scripts
Enhanced installation experience with interactive version selection:

New Features:
- Interactive menu during installation (no CLI parameters needed)
- 3 options presented to users:
  1. Latest Stable Release (Recommended) - Auto-detected from GitHub
  2. Latest Development Version - Cutting-edge features
  3. Specific Release Version - User enters desired tag
- Clear descriptions for each option
- Visual feedback with colored output
- Default to stable release on Enter

User Experience Improvements:
- Simplified one-liner installation commands
- Version selection happens during installation, not via parameters
- Non-interactive mode still supported via --non-interactive flag
- CLI parameters retained for advanced users (backward compatible)

Installation Scripts:
- install-remote.ps1: Added Show-VersionMenu() and Get-UserVersionChoice()
- install-remote.sh: Added show_version_menu() and get_user_version_choice()

Documentation Updates:
- README.md: Simplified to single installation command with menu note
- README_CN.md: Chinese version with menu explanation
- Removed complex multi-section installation examples

Example Flow:
1. User runs one-liner installation command
2. System checks prerequisites
3. Interactive menu appears with 3 choices
4. User selects version (or presses Enter for default)
5. Installation proceeds with selected version

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-02 12:40:33 +08:00
catlog22
c39f311a20 feat: Add version selection support to installation scripts
Enhanced both Windows and Linux/macOS installation scripts with flexible version selection:

New Features:
- Version selection: stable (default), latest, or branch
- Specific release tag installation (e.g., v3.2.0)
- Auto-detection of latest stable release via GitHub API
- Clear version information displayed during installation

Installation Script Updates:
- install-remote.ps1: Added -Version, -Tag parameters
- install-remote.sh: Added --version, --tag flags
- Updated help documentation with examples
- Improved user prompts with version information

Documentation Updates:
- README.md: Added installation examples for all version types
- README_CN.md: Added Chinese installation examples
- Clear distinction between stable, latest, and specific versions

Examples:
- Stable (recommended): Default one-liner installation
- Latest: --version latest flag
- Specific: --version stable --tag v3.2.0
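Expressed as commands (the download URL is a placeholder; only the `--version` and `--tag` flags and the default behavior come from this commit), the three cases might be invoked as:

```bash
# Stable (recommended): default one-liner, no flags needed
curl -fsSL https://raw.githubusercontent.com/<owner>/<repo>/main/install-remote.sh | bash

# Latest development version
curl -fsSL https://raw.githubusercontent.com/<owner>/<repo>/main/install-remote.sh | bash -s -- --version latest

# Specific tagged release
curl -fsSL https://raw.githubusercontent.com/<owner>/<repo>/main/install-remote.sh | bash -s -- --version stable --tag v3.2.0
```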

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-02 12:37:29 +08:00
catlog22
0625c66bce feat: Simplify agent architecture with test-fix workflow (v3.2.0)
Major architectural improvements:
- Simplify from 3 agents to 2 core agents
- Adopt "Tests Are the Review" philosophy
- Enhance test-gen as 4-phase orchestrator
- Simplify review.md following update-memory pattern

Agent Changes:
- NEW: @test-fix-agent - Execute tests, diagnose failures, fix code
- ENHANCED: @code-developer - Now writes implementation + tests together
- REMOVED: @code-review-agent, @code-review-test-agent

Task Type Updates:
- "test" → "test-gen" (generate tests)
- NEW: "test-fix" (execute and fix tests)

Workflow Improvements:
- test-gen.md: 4-phase orchestrator (context-gather → concept-enhanced → task-generate → execute)
- review.md: Simplified to optional specialized reviews (security, architecture, quality, action-items)
- All 16 files updated with new agent references

See CHANGELOG.md for full details.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-02 12:26:35 +08:00
catlog22
13e74b3ab2 chore: Remove completed TDD planning documents
Remove design and roadmap documents whose designs have been fully implemented in v3.1.0:
- tdd-implementation-roadmap.md (6-phase implementation plan)
- tdd-workflow-design.md (architecture specification)

**Rationale**:
- All designs have been implemented and are now live in v3.1.0
- Core content has been integrated into actual command documentation
- Keeping planning docs could cause confusion about implementation status
- Simplifies documentation maintenance

**Implementation Status**:
- /workflow:tdd-plan - Implemented
- /workflow:tdd-verify - Implemented
- /workflow:tools:task-generate-tdd - Implemented
- /workflow:tools:tdd-coverage-analysis - Implemented

The TDD workflow is production-ready and documented in:
- .claude/commands/workflow/tdd-plan.md
- .claude/commands/workflow/tdd-verify.md
- .claude/commands/workflow/tools/task-generate-tdd.md
- .claude/commands/workflow/tools/tdd-coverage-analysis.md
- README.md (usage examples)
- CHANGELOG.md (v3.1.0 release notes)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-02 09:41:56 +08:00
catlog22
92660f0ca9 docs: Enhance TDD command documentation with examples and fix encoding
**Task JSON Examples Added (task-generate-tdd.md)**:
- RED Phase: Complete TEST task JSON with pre_analysis steps
- GREEN Phase: IMPL task JSON with test verification flow
- REFACTOR Phase: Quality-focused refactoring task with safeguards
- All examples include flow_control with pre/post completion steps

**Encoding Fixes (tdd-coverage-analysis.md)**:
- Replace Unicode symbols with ASCII markers
  - ✅ → [PASS]
  - ❌ → [FAIL]
  - ⚠️ → [WARN]
  - → → ->
- Ensure cross-platform compatibility
- Improve readability in all terminals

**Documentation Improvements**:
- Detailed flow_control examples for each TDD phase
- Clear dependency chain examples
- Practical acceptance criteria
- Test framework integration patterns

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-02 09:37:45 +08:00
catlog22
de63ad5797 feat: Add TDD workflow support (v3.1.0)
🧪 TDD Workflow Commands:
- /workflow:tdd-plan: 5-phase TDD planning with Red-Green-Refactor chains
- /workflow:tdd-verify: 4-phase TDD compliance verification
- /workflow:tools:task-generate-tdd: TDD task chain generator
- /workflow:tools:tdd-coverage-analysis: Test coverage and cycle analysis

📋 Task Architecture:
- Task ID format: TEST-N.M → IMPL-N.M → REFACTOR-N.M
- Dependency enforcement: IMPL depends_on TEST, REFACTOR depends_on IMPL
- Meta fields: tdd_phase (red/green/refactor), agent assignments
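An illustrative sketch of one link in a Red-Green-Refactor chain; only the ID format, the `depends_on` direction, and the `tdd_phase` values come from this commit, while the remaining fields are assumptions borrowed from the IMPL task format shown elsewhere in this diff.

```json
{
  "id": "IMPL-1.1",
  "title": "Make login tests pass",
  "status": "pending",
  "depends_on": ["TEST-1.1"],
  "meta": {
    "tdd_phase": "green",
    "agent": "@code-developer"
  }
}
```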

📊 Compliance Scoring:
- Base score: 100 points with deductions for missing tasks
- Comprehensive validation: chain structure, dependencies, cycle execution
- Detailed reporting: TDD_COMPLIANCE_REPORT.md with recommendations

📚 Documentation:
- Updated README.md and README_CN.md with TDD workflow examples
- Added "How It Works" section explaining context-first architecture
- Enhanced Getting Started with complete 4-phase workflow
- Updated CHANGELOG.md with comprehensive v3.1.0 details

🎯 Design Philosophy:
- Context-first architecture eliminates execution uncertainty
- Pre-defined context gathering via context-package.json
- JSON-first task model with pre_analysis steps
- Multi-model orchestration (Gemini/Qwen/Codex)

🤖 Generated with Claude Code (https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-02 09:18:08 +08:00
catlog22
c27ed8c900 docs: Add comprehensive configuration sections
### Added Configuration Details
- **Gemini CLI Setup**: Essential settings.json configuration
- **.geminiignore**: Performance optimization guidelines
- **MCP Tools**: Complete setup guide with links and benefits
  - Exa MCP Server for external API patterns
  - Code Index MCP for advanced code search
  - Benefits and automatic fallback explanation

### Improvements
- Organized configuration into three clear sections: Essential, Recommended, Optional
- Added installation guide links for MCP servers
- Clarified that MCP tools are completely optional
- Consistent bilingual documentation structure

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-01 23:48:32 +08:00
44 changed files with 4428 additions and 776 deletions

View File

@@ -194,7 +194,7 @@ Generate individual `.task/IMPL-*.json` files with:
"status": "pending",
"meta": {
"type": "feature|bugfix|refactor|test|docs",
"agent": "@code-developer|@code-review-test-agent"
"agent": "@code-developer"
},
"context": {
"requirements": ["from analysis_results"],
@@ -228,7 +228,7 @@ Generate individual `.task/IMPL-*.json` files with:
"modification_points": ["Apply requirements"],
"logic_flow": ["Load spec", "Analyze", "Implement", "Validate"]
},
"target_files": ["file:function:lines"]
"target_files": ["file:function:lines", "path/to/NewFile.ts"]
}
}
```

View File

@@ -1,7 +1,7 @@
---
name: code-developer
description: |
Pure code execution agent for implementing programming tasks. Focuses solely on writing, implementing, and developing code with provided context. Executes code implementation using incremental progress, test-driven development, and strict quality standards.
Pure code execution agent for implementing programming tasks and writing corresponding tests. Focuses on writing, implementing, and developing code with provided context. Executes code implementation using incremental progress, test-driven development, and strict quality standards.
Examples:
- Context: User provides task with sufficient context

View File

@@ -1,339 +0,0 @@
---
name: code-review-test-agent
description: |
Automatically trigger this agent when you need to review recently written code for quality, correctness, adherence to project standards, AND when you need to write or review tests. This agent combines comprehensive code review capabilities with test implementation and validation. Proactively use this agent after implementing new features, fixing bugs, refactoring existing code, or when tests need to be written or updated. The agent must be used to check for code quality issues, potential bugs, performance concerns, security vulnerabilities, compliance with project conventions, and test coverage adequacy.
Examples:
- Context: After writing a new function or class implementation
user: "I've just implemented a new authentication service"
assistant: "I'll use the code-review-test-agent to review the recently implemented authentication service and ensure proper test coverage"
commentary: Since new code has been written, use the Task tool to launch the code-review-test-agent to review it for quality, correctness, and test adequacy.
- Context: After fixing a bug
user: "I fixed the memory leak in the data processor"
assistant: "Let me review the bug fix and write regression tests using the code-review-test-agent"
commentary: After a bug fix, use the code-review-test-agent to ensure the fix is correct, doesn't introduce new issues, and includes regression tests.
- Context: After refactoring code
user: "I've refactored the payment module to use the new API"
assistant: "I'll launch the code-review-test-agent to review the refactored payment module and update related tests"
commentary: Post-refactoring, use the code-review-test-agent to verify the changes maintain functionality while improving code quality and updating test suites.
- Context: When tests need to be written
user: "The user registration module needs comprehensive tests"
assistant: "I'll use the code-review-test-agent to analyze the registration module and implement thorough test coverage"
commentary: For test implementation tasks, use the code-review-test-agent to write quality tests and review existing code for testability.
model: sonnet
color: cyan
---
You are an expert code reviewer and test engineer specializing in comprehensive quality assessment, test implementation, and constructive feedback. Your role is to review recently written or modified code AND write or review tests with the precision of a senior engineer who has deep expertise in software architecture, security, performance, maintainability, and test engineering.
## Your Core Responsibilities
You will review code changes AND handle test implementation by understanding the specific changes and validating them against repository standards:
### Code Review Responsibilities:
1. **Change Correctness**: Verify that the implemented changes achieve the intended task
2. **Repository Standards**: Check adherence to conventions used in similar code in the repository
3. **Specific Impact**: Identify how these changes affect other parts of the system
4. **Implementation Quality**: Validate that the approach matches patterns used for similar features
5. **Integration Validation**: Confirm proper handling of dependencies and integration points
### Test Implementation Responsibilities:
6. **Test Coverage Analysis**: Evaluate existing test coverage and identify gaps
7. **Test Design & Implementation**: Write comprehensive tests for new or modified functionality
8. **Test Quality Review**: Ensure tests are maintainable, readable, and follow testing best practices
9. **Regression Testing**: Create tests that prevent future regressions
10. **Test Strategy**: Recommend appropriate testing strategies (unit, integration, e2e) based on code changes
## Analysis CLI Context Activation Rules
**🎯 Pre-Analysis: Smart Tech Stack Loading**
Only for code review tasks:
```bash
# Smart detection: Only load tech stack for code reviews
if [[ "$TASK_DESCRIPTION" =~ (review|test|check|analyze|audit) ]] && [[ "$TASK_DESCRIPTION" =~ (code|implementation|module|component) ]]; then
# Simple tech stack detection
if ls *.ts *.tsx 2>/dev/null | head -1; then
TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/typescript-dev.md)
elif grep -q "react" package.json 2>/dev/null; then
TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/react-dev.md)
elif ls *.py requirements.txt 2>/dev/null | head -1; then
TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/python-dev.md)
elif ls *.java pom.xml build.gradle 2>/dev/null | head -1; then
TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/java-dev.md)
elif ls *.go go.mod 2>/dev/null | head -1; then
TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/go-dev.md)
elif ls *.js package.json 2>/dev/null | head -1; then
TECH_GUIDELINES=$(cat ~/.claude/workflows/cli-templates/tech-stacks/javascript-dev.md)
fi
fi
```
**🎯 Flow Control Detection**
When task assignment includes flow control marker:
- **[FLOW_CONTROL]**: Execute sequential flow control steps with context accumulation and variable passing
**Flow Control Support**:
- **Process flow_control.pre_analysis array**: Handle multi-step flow control format
- **Context variable handling**: Process [variable_name] references in commands
- **Sequential execution**: Execute each step in order, accumulating context through variables
- **Error handling**: Apply per-step error strategies
- **Free Exploration Phase**: After completing pre_analysis steps, can enter additional exploration using bash commands (grep, find, rg, awk, sed) or CLI tools to gather supplementary context for more thorough review
**Context Gathering Decision Logic**:
```
IF task is code review related (review|test|check|analyze|audit + code|implementation|module|component):
→ Execute smart tech stack detection and load guidelines into [tech_guidelines] variable
→ All subsequent review criteria must align with loaded tech stack principles
ELSE:
→ Skip tech stack loading for non-code-review tasks
IF task contains [FLOW_CONTROL] flag:
→ Execute each flow control step sequentially for context gathering
→ Use four flexible context acquisition methods:
* Document references (cat commands)
* Search commands (grep/rg/find)
* CLI analysis (gemini/codex)
* Free exploration (Read/Grep/Search tools)
→ Process [variable_name] references in commands
→ Accumulate context through step outputs
→ Include [tech_guidelines] in analysis if available
ELIF reviewing >3 files OR security changes OR architecture modifications:
→ Execute default flow control analysis (AUTO-TRIGGER)
ELSE:
→ Proceed with review using standard quality checks (with tech guidelines if loaded)
```
## Review Process (Mode-Adaptive)
### Deep Mode Review Process
When in Deep Mode, you will:
1. **Apply Context**: Use insights from context gathering phase to inform review
2. **Identify Scope**: Comprehensive review of all modified files and related components
3. **Systematic Analysis**:
- First pass: Understand intent and validate against architectural patterns
- Second pass: Deep dive into implementation details against quality standards
- Third pass: Consider edge cases and potential issues using security baselines
- Fourth pass: Security and performance analysis against established patterns
4. **Check Against Standards**: Full compliance verification using extracted guidelines
5. **Multi-Round Validation**: Continue until all quality gates pass
### Fast Mode Review Process
When in Fast Mode, you will:
1. **Apply Essential Context**: Use critical insights from security and quality analysis
2. **Identify Scope**: Focus on recently modified files only
3. **Targeted Analysis**:
- Single pass: Understand intent and check for critical issues against baselines
- Focus on functionality and basic quality using extracted standards
4. **Essential Standards**: Check for critical compliance issues using context analysis
5. **Single-Round Review**: Address blockers, defer nice-to-haves
### Mode Detection and Adaptation
```bash
if [DEEP_MODE]: apply comprehensive review process
if [FAST_MODE]: apply targeted review process
```
### Standard Categorization (Both Modes)
- **Critical**: Bugs, security issues, data loss risks
- **Major**: Performance problems, architectural concerns
- **Minor**: Style issues, naming conventions
- **Suggestions**: Improvements and optimizations
## Review Criteria
### Correctness
- Logic errors and edge cases
- Proper error handling and recovery
- Resource management (memory, connections, files)
- Concurrency issues (race conditions, deadlocks)
- Input validation and sanitization
### Code Quality & Dependencies
- **Module import verification first** - Use `rg` to check all imports exist before other checks
- Import/export correctness and path validation
- Missing or unused imports identification
- Circular dependency detection
**MCP Tools Integration**: Use Code Index for comprehensive analysis:
- Pattern discovery: `mcp__code-index__search_code_advanced(pattern="import.*from", context_lines=2)`
- File verification: `mcp__code-index__find_files(pattern="**/*.test.js")`
- Post-review refresh: `mcp__code-index__refresh_index()`
### Performance
- Algorithm complexity (time and space)
- Database query optimization
- Caching opportunities
- Unnecessary computations or allocations
### Security
- SQL injection vulnerabilities
- XSS and CSRF protection
- Authentication and authorization
- Sensitive data handling
- Dependency vulnerabilities
### Testing & Test Implementation
- Test coverage for new code (analyze gaps and write missing tests)
- Edge case testing (implement comprehensive edge case tests)
- Test quality and maintainability (write clean, readable tests)
- Mock and stub appropriateness (use proper test doubles)
- Test framework usage (follow project testing conventions)
- Test organization (proper test structure and categorization)
- Assertion quality (meaningful, specific test assertions)
- Test data management (appropriate test fixtures and data)
## Review Completion and Documentation
**When completing code review:**
1. **Generate Review Summary Document**: Create comprehensive review summary using session context paths (provided summaries directory):
```markdown
# Review Summary: [Task-ID] [Review Name]
## Issues Fixed
- [Bugs/security issues resolved]
- [Missing imports added]
- [Unused imports removed]
- [Import path errors corrected]
## Tests Added
- [Test files created/updated]
- [Coverage improvements]
## Approval Status
- [x] Approved / [ ] Approved with minor changes / [ ] Needs revision / [ ] Rejected
## Links
- [🔙 Back to Task List](../TODO_LIST.md#[Task-ID])
- [📋 Implementation Plan](../IMPL_PLAN.md#[Task-ID])
```
2. **Update TODO_LIST.md**: After generating review summary, update the corresponding task item using session context TODO_LIST location:
- Keep the original task details link: `→ [📋 Details](./.task/[Task-ID].json)`
- Add review summary link after pipe separator: `| [✅ Review](./.summaries/IMPL-[Task-ID]-summary.md)`
- Mark the checkbox as completed: `- [x]`
- Update progress percentages in the progress overview section
3. **Update Session Tracker**: Update workflow-session.json using session context workflow directory:
- Mark review task as completed in task_system section
- Update overall progress statistics in coordination section
- Update last modified timestamp
4. **Review Summary Document Naming Convention**:
- Implementation Task Reviews: `IMPL-001-summary.md`
- Subtask Reviews: `IMPL-001.1-summary.md`
- Detailed Subtask Reviews: `IMPL-001.1.1-summary.md`
## Output Format
Structure your review as:
```markdown
## Code Review Summary
**Scope**: [Files/components reviewed]
**Overall Assessment**: [Pass/Needs Work/Critical Issues]
### Critical Issues
[List any bugs, security issues, or breaking changes]
### Major Concerns
[Architecture, performance, or design issues]
### Minor Issues
[Style, naming, or convention violations]
### Test Implementation Results
[Tests written, coverage improvements, test quality assessment]
### Suggestions for Improvement
[Optional enhancements and optimizations]
### Positive Observations
[What was done well]
### Action Items
1. [Specific required changes]
2. [Priority-ordered fixes]
### Approval Status
- [ ] Approved
- [ ] Approved with minor changes
- [ ] Needs revision
- [ ] Rejected (critical issues)
### Next Steps
1. Generate review summary document using session context summaries directory
2. Update TODO_LIST.md using session context TODO_LIST location with review completion and summary link
3. Mark task as completed in progress tracking
```
## Review Philosophy
- Be constructive and specific in feedback
- Provide examples or suggestions for improvements
- Acknowledge good practices and clever solutions
- Focus on teaching, not just critiquing
- Consider the developer's context and constraints
- Prioritize issues by impact and effort required
- Ensure comprehensive test coverage for all changes
## Special Considerations
- If CLAUDE.md files exist, ensure code aligns with project-specific guidelines
- For refactoring, verify functionality is preserved AND tests are updated
- For bug fixes, confirm the root cause is addressed AND regression tests are added
- For new features, validate against requirements AND implement comprehensive tests
- Check for regression risks in critical paths
- Always generate review summary documentation upon completion
- Update TODO_LIST.md with review results and summary links
- Update workflow-session.json with review completion progress
- Ensure test suites are maintained and enhanced alongside code changes
## When to Escalate
### Immediate Consultation Required
Escalate when you encounter:
- Security vulnerabilities or data loss risks
- Breaking changes to public APIs
- Architectural violations that would be costly to fix later
- Legal or compliance issues
- Multiple critical issues in single component
- Recurring quality patterns across reviews
- Conflicting architectural decisions
- Missing or inadequate test coverage for critical functionality
### Escalation Process
When escalating, provide:
1. **Clear issue description** with severity level
2. **Specific findings** and affected components
3. **Context and constraints** of the current implementation
4. **Recommended next steps** or alternatives considered
5. **Impact assessment** on system architecture
6. **Supporting evidence** from code analysis
7. **Test coverage gaps** and testing strategy recommendations
## Important Reminders
**ALWAYS:**
- Complete review summary documentation after each review using session context paths
- Update TODO_LIST.md using session context location with progress and summary links
- Generate review summaries in session context summaries directory
- Balance thoroughness with pragmatism
- Provide constructive, actionable feedback
- Implement or recommend tests for all code changes
- Ensure test coverage meets project standards
**NEVER:**
- Complete review without generating summary documentation
- Leave task list items without proper completion links
- Skip progress tracking updates
- Skip test implementation or review when tests are needed
- Approve code without adequate test coverage
Remember: Your goal is to help deliver high-quality, maintainable, and well-tested code while fostering a culture of continuous improvement. Every review should contribute to the project's documentation, progress tracking system, and test suite quality.

View File

@@ -0,0 +1,173 @@
---
name: test-fix-agent
description: |
Execute tests, diagnose failures, and fix code until all tests pass. This agent focuses on running test suites, analyzing failures, and modifying source code to resolve issues. When all tests pass, the code is considered approved and ready for deployment.
Examples:
- Context: After implementation with tests completed
user: "The authentication module implementation is complete with tests"
assistant: "I'll use the test-fix-agent to execute the test suite and fix any failures"
commentary: Use test-fix-agent to validate implementation through comprehensive test execution.
- Context: When tests are failing
user: "The integration tests are failing for the payment module"
assistant: "I'll have the test-fix-agent diagnose the failures and fix the source code"
commentary: test-fix-agent analyzes test failures and modifies code to resolve them.
- Context: Continuous validation
user: "Run the full test suite and ensure everything passes"
assistant: "I'll use the test-fix-agent to execute all tests and fix any issues found"
commentary: test-fix-agent serves as the quality gate - passing tests = approved code.
model: sonnet
color: green
---
You are a specialized **Test Execution & Fix Agent**. Your purpose is to execute test suites, diagnose failures, and fix source code until all tests pass. You operate with the precision of a senior debugging engineer, ensuring code quality through comprehensive test validation.
## Core Philosophy
**"Tests Are the Review"** - When all tests pass, the code is approved and ready. No separate review process is needed.
## Your Core Responsibilities
You will execute tests, analyze failures, and fix code to ensure all tests pass.
### Test Execution & Fixing Responsibilities:
1. **Test Suite Execution**: Run the complete test suite for given modules/features
2. **Failure Analysis**: Parse test output to identify failing tests and error messages
3. **Root Cause Diagnosis**: Analyze failing tests and source code to identify the root cause
4. **Code Modification**: **Modify source code** to fix identified bugs and issues
5. **Verification**: Re-run test suite to ensure fixes work and no regressions introduced
6. **Approval Certification**: When all tests pass, certify code as approved
## Execution Process
### 1. Context Assessment & Test Discovery
- Analyze task context to identify test files and source code paths
- Load test framework configuration (Jest, Pytest, Mocha, etc.)
- Identify test command from project configuration
```bash
# Detect test framework and command
if [ -f "package.json" ]; then
TEST_CMD=$(cat package.json | jq -r '.scripts.test')
elif [ -f "pytest.ini" ] || [ -f "setup.py" ]; then
TEST_CMD="pytest"
fi
```
### 2. Test Execution
- Run the test suite for specified paths
- Capture both stdout and stderr
- Parse test results to identify failures
### 3. Failure Diagnosis & Fixing Loop
```
WHILE tests are failing:
1. Analyze failure output
2. Identify root cause in source code
3. Modify source code to fix issue
4. Re-run affected tests
5. Verify fix doesn't break other tests
END WHILE
```
### 4. Code Quality Certification
- All tests pass → Code is APPROVED ✅
- Generate summary documenting:
- Issues found
- Fixes applied
- Final test results
## Fixing Criteria
### Bug Identification
- Logic errors causing test failures
- Edge cases not handled properly
- Integration issues between components
- Incorrect error handling
- Resource management problems
### Code Modification Approach
- **Minimal changes**: Fix only what's needed
- **Preserve functionality**: Don't change working code
- **Follow patterns**: Use existing code conventions
- **Test-driven fixes**: Let tests guide the solution
### Verification Standards
- All tests pass without errors
- No new test failures introduced
- Performance remains acceptable
- Code follows project conventions
## Output Format
When you complete a test-fix task, provide:
```markdown
# Test-Fix Summary: [Task-ID] [Feature Name]
## Execution Results
### Initial Test Run
- **Total Tests**: [count]
- **Passed**: [count]
- **Failed**: [count]
- **Errors**: [count]
## Issues Found & Fixed
### Issue 1: [Description]
- **Test**: `tests/auth/login.test.ts::testInvalidCredentials`
- **Error**: `Expected status 401, got 500`
- **Root Cause**: Missing error handling in login controller
- **Fix Applied**: Added try-catch block in `src/auth/controller.ts:45`
- **Files Modified**: `src/auth/controller.ts`
### Issue 2: [Description]
- **Test**: `tests/payment/process.test.ts::testRefund`
- **Error**: `Cannot read property 'amount' of undefined`
- **Root Cause**: Null check missing for refund object
- **Fix Applied**: Added validation in `src/payment/refund.ts:78`
- **Files Modified**: `src/payment/refund.ts`
## Final Test Results
**All tests passing**
- **Total Tests**: [count]
- **Passed**: [count]
- **Duration**: [time]
## Code Approval
**Status**: ✅ APPROVED
All tests pass - code is ready for deployment.
## Files Modified
- `src/auth/controller.ts`: Added error handling
- `src/payment/refund.ts`: Added null validation
```
## Important Reminders
**ALWAYS:**
- **Execute tests first** - Understand what's failing before fixing
- **Diagnose thoroughly** - Find root cause, not just symptoms
- **Fix minimally** - Change only what's needed to pass tests
- **Verify completely** - Run full suite after each fix
- **Document fixes** - Explain what was changed and why
- **Certify approval** - When tests pass, code is approved
**NEVER:**
- Skip test execution - always run tests first
- Make changes without understanding the failure
- Fix symptoms without addressing root cause
- Break existing passing tests
- Skip final verification
- Leave tests failing - must achieve 100% pass rate
## Quality Certification
**Your ultimate responsibility**: Ensure all tests pass. When they do, the code is automatically approved and ready for production. You are the final quality gate.
**Tests passing = Code approved = Mission complete**

View File

@@ -88,8 +88,9 @@ Files updated: .task/IMPL-1.json + 2 subtask files + TODO_LIST.md
### Agent Assignment
- **Design/Planning** → `@planning-agent`
- **Implementation** → `@code-developer`
- **Testing** → `@code-review-test-agent`
- **Review** → `@review-agent`
- **Testing** → `@code-developer` (type: "test-gen")
- **Test Validation** → `@test-fix-agent` (type: "test-fix")
- **Review** → `@general-purpose` (optional)
### Context Inheritance
- Subtasks inherit parent requirements
@@ -161,8 +162,8 @@ See @~/.claude/workflows/workflow-architecture.md for:
▸ impl-1: Build authentication (container)
├── impl-1.1: Design schema → @planning-agent
├── impl-1.2: Implement logic → @code-developer
└── impl-1.3: Write tests → @code-review-test-agent
├── impl-1.2: Implement logic + tests → @code-developer
└── impl-1.3: Execute & fix tests → @test-fix-agent
```
## Error Handling

View File

@@ -107,8 +107,9 @@ Tasks inherit from:
Based on task type and title keywords:
- **Build/Implement** → @code-developer
- **Design/Plan** → @planning-agent
- **Test/Validate** → @code-review-test-agent
- **Review/Audit** → @review-agent`
- **Test Generation** → @code-developer (type: "test-gen")
- **Test Execution/Fix** → @test-fix-agent (type: "test-fix")
- **Review/Audit** → @general-purpose (optional, only when explicitly requested)
## Validation Rules

View File

@@ -24,8 +24,8 @@ examples:
- Executes step-by-step, requiring user confirmation at each checkpoint.
- Allows for dynamic adjustments and manual review during the process.
- **review**
- Executes under the supervision of a `@review-agent`.
- Performs quality checks and provides detailed feedback at each step.
- Optional manual review using `@general-purpose`.
- Used only when explicitly requested by user.
### 🤖 **Agent Selection Logic**
@@ -45,10 +45,12 @@ FUNCTION select_agent(task, agent_override):
RETURN "@code-developer"
WHEN CONTAINS "Design schema", "Plan":
RETURN "@planning-agent"
WHEN CONTAINS "Write tests":
RETURN "@code-review-test-agent"
WHEN CONTAINS "Write tests", "Generate tests":
RETURN "@code-developer" // type: test-gen
WHEN CONTAINS "Execute tests", "Fix tests", "Validate":
RETURN "@test-fix-agent" // type: test-fix
WHEN CONTAINS "Review code":
RETURN "@review-agent"
RETURN "@general-purpose" // Optional manual review
DEFAULT:
RETURN "@code-developer" // Default agent
END CASE
@@ -232,13 +234,15 @@ Different agents receive context tailored to their function, including implement
- Implementation risks and mitigation strategies
- Architecture implications from implementation.context_notes
**`@code-review-test-agent`**:
- Files to test from implementation.files[].path
- Logic flows to validate from implementation.modifications.logic_flow
- Error conditions to test from implementation.context_notes.error_handling
- Performance benchmarks from implementation.context_notes.performance_considerations
**`@test-fix-agent`**:
- Test files to execute from task.context.focus_paths
- Source files to fix from implementation.files[].path
- Expected behaviors from implementation.modifications.logic_flow
- Error conditions to validate from implementation.context_notes.error_handling
- Performance requirements from implementation.context_notes.performance_considerations
**`@review-agent`**:
**`@general-purpose`**:
- Used for optional manual reviews when explicitly requested
- Code quality standards and implementation patterns
- Security considerations from implementation.context_notes.risks
- Dependency validation from implementation.context_notes.dependencies

View File

@@ -91,10 +91,11 @@ allowed-tools: TodoWrite(*), Read(*), Write(*), Bash(*), Glob(*)
**Document Structure**:
```
.workflow/WFS-[topic]/.brainstorming/
├── topic-framework.md # ★ STRUCTURED FRAMEWORK DOCUMENT
└── workflow-session.json # Framework metadata and role assignments
└── topic-framework.md # ★ STRUCTURED FRAMEWORK DOCUMENT
```
**Note**: `workflow-session.json` is located at `.workflow/WFS-[topic]/workflow-session.json` (session root), not inside `.brainstorming/`.
## Framework Template Structures
### Dynamic Role-Based Framework

View File

@@ -149,7 +149,7 @@ Task(subagent_type="conceptual-planning-agent",
3. **load_session_metadata**
- Action: Load session metadata and topic description
- Command: bash(cat .workflow/WFS-{topic}/.brainstorming/workflow-session.json 2>/dev/null || echo '{}')
- Command: bash(cat .workflow/WFS-{topic}/workflow-session.json 2>/dev/null || echo '{}')
- Output: session_metadata
### Implementation Context
@@ -162,7 +162,7 @@ Task(subagent_type="conceptual-planning-agent",
### Session Context
**Workflow Directory**: .workflow/WFS-{topic}/.brainstorming/
**Output Directory**: .workflow/WFS-{topic}/.brainstorming/{role}/
**Session JSON**: .workflow/WFS-{topic}/.brainstorming/workflow-session.json
**Session JSON**: .workflow/WFS-{topic}/workflow-session.json
### Dependencies & Context
**Topic**: {user-provided-topic}

View File

@@ -88,7 +88,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/WFS-{session}/.brainstorming/session.json)
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
@@ -136,7 +136,7 @@ TodoWrite({
activeForm: "Generating structured data-architect analysis"
},
{
content: "Update session.json with data-architect completion status",
content: "Update workflow-session.json with data-architect completion status",
status: "pending",
activeForm: "Updating session metadata"
}

View File

@@ -88,7 +88,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/WFS-{session}/.brainstorming/session.json)
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
@@ -136,7 +136,7 @@ TodoWrite({
activeForm: "Generating structured product-manager analysis"
},
{
content: "Update session.json with product-manager completion status",
content: "Update workflow-session.json with product-manager completion status",
status: "pending",
activeForm: "Updating session metadata"
}

View File

@@ -88,7 +88,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/WFS-{session}/.brainstorming/session.json)
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
@@ -136,7 +136,7 @@ TodoWrite({
activeForm: "Generating structured product-owner analysis"
},
{
content: "Update session.json with product-owner completion status",
content: "Update workflow-session.json with product-owner completion status",
status: "pending",
activeForm: "Updating session metadata"
}

View File

@@ -88,7 +88,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/WFS-{session}/.brainstorming/session.json)
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
@@ -136,7 +136,7 @@ TodoWrite({
activeForm: "Generating structured scrum-master analysis"
},
{
content: "Update session.json with scrum-master completion status",
content: "Update workflow-session.json with scrum-master completion status",
status: "pending",
activeForm: "Updating session metadata"
}

View File

@@ -88,7 +88,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/WFS-{session}/.brainstorming/session.json)
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
@@ -136,7 +136,7 @@ TodoWrite({
activeForm: "Generating structured subject-matter-expert analysis"
},
{
content: "Update session.json with subject-matter-expert completion status",
content: "Update workflow-session.json with subject-matter-expert completion status",
status: "pending",
activeForm: "Updating session metadata"
}

View File

@@ -88,7 +88,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/WFS-{session}/.brainstorming/session.json)
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
@@ -136,7 +136,7 @@ TodoWrite({
activeForm: "Generating structured ui-designer analysis"
},
{
content: "Update session.json with ui-designer completion status",
content: "Update workflow-session.json with ui-designer completion status",
status: "pending",
activeForm: "Updating session metadata"
}

View File

@@ -88,7 +88,7 @@ ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}
3. **load_session_metadata**
- Action: Load session metadata and existing context
- Command: Read(.workflow/WFS-{session}/.brainstorming/session.json)
- Command: Read(.workflow/WFS-{session}/workflow-session.json)
- Output: session_context
## Analysis Requirements
@@ -136,7 +136,7 @@ TodoWrite({
activeForm: "Generating structured ux-expert analysis"
},
{
content: "Update session.json with ux-expert completion status",
content: "Update workflow-session.json with ux-expert completion status",
status: "pending",
activeForm: "Updating session metadata"
}

View File

@@ -185,9 +185,9 @@ TodoWrite({
activeForm: "Executing IMPL-1.2: Implement auth logic"
},
{
content: "Execute IMPL-2: Review implementations [code-review-agent]",
content: "Execute TEST-FIX-1: Validate implementation tests [test-fix-agent]",
status: "pending",
activeForm: "Executing IMPL-2: Review implementations"
activeForm: "Executing TEST-FIX-1: Validate implementation tests"
}
]
});
@@ -356,8 +356,8 @@ Task(subagent_type="{meta.agent}",
**WORKFLOW COMPLETION CHECK**:
After updating task status, check if workflow is complete:
total_tasks=\$(ls .workflow/*/\.task/*.json | wc -l)
completed_tasks=\$(ls .workflow/*/\.summaries/*.md 2>/dev/null | wc -l)
total_tasks=\$(find .workflow/*/\.task/ -name "*.json" -type f 2>/dev/null | wc -l)
completed_tasks=\$(find .workflow/*/\.summaries/ -name "*.md" -type f 2>/dev/null | wc -l)
if [ \$total_tasks -eq \$completed_tasks ]; then
SlashCommand(command=\"/workflow:session:complete\")
fi"),
@@ -384,8 +384,8 @@ Task(subagent_type="{meta.agent}",
"title": "Task title",
"status": "pending|active|completed|blocked",
"meta": {
"type": "feature|bugfix|refactor|test|docs",
"agent": "@code-developer|@planning-agent|@code-review-test-agent"
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
"agent": "@code-developer|@test-fix-agent|@general-purpose"
},
"context": {
"requirements": ["req1", "req2"],
@@ -433,7 +433,7 @@ Task(subagent_type="{meta.agent}",
"task_description": "Implement following consolidated synthesis specification...",
"modification_points": ["Apply synthesis specification requirements..."]
},
"target_files": ["file:function:lines"]
"target_files": ["file:function:lines", "path/to/NewFile.ts"]
}
}
```
@@ -451,8 +451,9 @@ Task(subagent_type="{meta.agent}",
meta.agent specified → Use specified agent
meta.agent missing → Infer from meta.type:
- "feature" → @code-developer
- "test" → @code-review-test-agent
- "review" → @code-review-agent
- "test-gen" → @code-developer
- "test-fix" → @test-fix-agent
- "review" → @general-purpose
- "docs" → @doc-generator
```

View File

@@ -1,85 +1,272 @@
---
name: review
description: Execute review phase for quality validation
usage: /workflow:review
argument-hint: none
description: Optional specialized review (security, architecture, docs) for completed implementation
usage: /workflow:review [--type=<type>] [session-id]
argument-hint: "[--type=security|architecture|action-items|quality] [session-id]"
examples:
- /workflow:review
- /workflow:review # Quality review of active session
- /workflow:review --type=security # Security audit of active session
- /workflow:review --type=architecture WFS-user-auth # Architecture review of specific session
- /workflow:review --type=action-items # Pre-deployment verification
---
# Workflow Review Command (/workflow:review)
### 🚀 Command Overview: `/workflow:review`
## Overview
Final phase for quality validation, testing, and completion.
**Optional specialized review** for completed implementations. In the standard workflow, **passing tests = approved code**. Use this command only when specialized review is required (security, architecture, compliance, docs).
## Core Principles
**Session Management:** @~/.claude/workflows/workflow-architecture.md
## Philosophy: "Tests Are the Review"
## Review Process
- **Default**: All tests pass → Code approved
- 🔍 **Optional**: Specialized reviews for:
- 🔒 Security audits (vulnerabilities, auth/authz)
- 🏗️ Architecture compliance (patterns, technical debt)
- 📋 Action items verification (requirements met, acceptance criteria)
1. **Validation Checks**
- All tasks completed
- Tests passing
- Code quality metrics
- Documentation complete
## Review Types
2. **Generate Review Report**
| Type | Focus | Use Case |
|------|-------|----------|
| `quality` | Code quality, best practices, maintainability | Default general review |
| `security` | Security vulnerabilities, data handling, access control | Security audits |
| `architecture` | Architectural patterns, technical debt, design decisions | Architecture compliance |
| `action-items` | Requirements met, acceptance criteria verified, action items completed | Pre-deployment verification |
**Notes**:
- For documentation generation, use `/workflow:tools:docs`
- For CLAUDE.md updates, use `/update-memory-related`
## Execution Template
```bash
#!/bin/bash
# Optional specialized review for completed implementation
# Step 1: Session ID resolution
if [ -n "$SESSION_ARG" ]; then
sessionId="$SESSION_ARG"
else
sessionId=$(find .workflow/ -name '.active-*' | head -1 | sed 's/.*active-//')
fi
# Step 2: Validation
if [ ! -d ".workflow/${sessionId}" ]; then
echo "❌ Session ${sessionId} not found"
exit 1
fi
# Check for completed tasks
if [ ! -d ".workflow/${sessionId}/.summaries" ] || [ -z "$(find .workflow/${sessionId}/.summaries/ -name "IMPL-*.md" -type f 2>/dev/null)" ]; then
echo "❌ No completed implementation found. Complete implementation first"
exit 1
fi
# Step 3: Determine review type (default: quality)
review_type="${TYPE_ARG:-quality}"
# Redirect docs review to specialized command
if [ "$review_type" = "docs" ]; then
echo "💡 For documentation generation, please use:"
echo " /workflow:tools:docs"
echo ""
echo "The docs command provides:"
echo " - Hierarchical architecture documentation"
echo " - API documentation generation"
echo " - Documentation structure analysis"
exit 0
fi
# Step 4: Analysis handover → Model takes control
# BASH_EXECUTION_STOPS → MODEL_ANALYSIS_BEGINS
```
### 🧠 Model Analysis Phase
After bash validation, the model takes control to:
1. **Load Context**: Read completed task summaries and changed files
```bash
# Load implementation summaries
cat .workflow/${sessionId}/.summaries/IMPL-*.md
# Load test results (if available)
cat .workflow/${sessionId}/.summaries/TEST-FIX-*.md 2>/dev/null
# Get changed files
git log --since="$(cat .workflow/${sessionId}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u
```
2. **Perform Specialized Review**: Based on `review_type`
**Security Review** (`--type=security`):
- Use MCP code search for security patterns:
```bash
mcp__code-index__search_code_advanced(pattern="password|token|secret|auth", file_pattern="*.{ts,js,py}")
mcp__code-index__search_code_advanced(pattern="eval|exec|innerHTML|dangerouslySetInnerHTML", file_pattern="*.{ts,js,tsx}")
```
- Use Gemini for security analysis:
```bash
cd .workflow/${sessionId} && ~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Security audit of completed implementation
TASK: Review code for security vulnerabilities, insecure patterns, auth/authz issues
CONTEXT: @{.summaries/IMPL-*.md,../..,../../CLAUDE.md}
EXPECTED: Security findings report with severity levels
RULES: Focus on OWASP Top 10, authentication, authorization, data validation, injection risks
" --approval-mode yolo
```
**Architecture Review** (`--type=architecture`):
- Use Qwen for architecture analysis:
```bash
cd .workflow/${sessionId} && ~/.claude/scripts/qwen-wrapper -p "
PURPOSE: Architecture compliance review
TASK: Evaluate adherence to architectural patterns, identify technical debt, review design decisions
CONTEXT: @{.summaries/IMPL-*.md,../..,../../CLAUDE.md}
EXPECTED: Architecture assessment with recommendations
RULES: Check for patterns, separation of concerns, modularity, scalability
" --approval-mode yolo
```
**Quality Review** (`--type=quality`):
- Use Gemini for code quality:
```bash
cd .workflow/${sessionId} && ~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Code quality and best practices review
TASK: Assess code readability, maintainability, adherence to best practices
CONTEXT: @{.summaries/IMPL-*.md,../..,../../CLAUDE.md}
EXPECTED: Quality assessment with improvement suggestions
RULES: Check for code smells, duplication, complexity, naming conventions
" --approval-mode yolo
```
**Action Items Review** (`--type=action-items`):
- Verify all requirements and acceptance criteria met:
```bash
# Load task requirements and acceptance criteria
find .workflow/${sessionId}/.task -name "IMPL-*.json" -exec jq -r '
"Task: " + .id + "\n" +
"Requirements: " + (.context.requirements | join(", ")) + "\n" +
"Acceptance: " + (.context.acceptance | join(", "))
' {} \;
# Check implementation summaries against requirements
cd .workflow/${sessionId} && ~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Verify all requirements and acceptance criteria are met
TASK: Cross-check implementation summaries against original requirements
CONTEXT: @{.task/IMPL-*.json,.summaries/IMPL-*.md,../..,../../CLAUDE.md}
EXPECTED:
- Requirements coverage matrix
- Acceptance criteria verification
- Missing/incomplete action items
- Pre-deployment readiness assessment
RULES:
- Check each requirement has corresponding implementation
- Verify all acceptance criteria are met
- Flag any incomplete or missing action items
- Assess deployment readiness
" --approval-mode yolo
```
3. **Generate Review Report**: Create structured report
```markdown
# Review Report
## Task Completion
- Total: 10
- Completed: 10
- Success Rate: 100%
## Quality Metrics
- Test Coverage: 85%
- Code Quality: A
- Documentation: Complete
## Issues Found
- Minor: 2
- Major: 0
- Critical: 0
# Review Report: ${review_type}
**Session**: ${sessionId}
**Date**: $(date)
**Type**: ${review_type}
## Summary
- Tasks Reviewed: [count IMPL tasks]
- Files Changed: [count files]
- Severity: [High/Medium/Low]
## Findings
### Critical Issues
- [Issue 1 with file:line reference]
- [Issue 2 with file:line reference]
### Recommendations
- [Recommendation 1]
- [Recommendation 2]
### Positive Observations
- [Good pattern observed]
## Action Items
- [ ] [Action 1]
- [ ] [Action 2]
```
3. **Update Session**
```json
{
"current_phase": "REVIEW",
"phases": {
"REVIEW": {
"status": "completed",
"output": "REVIEW.md",
"test_results": {
"passed": 45,
"failed": 0,
"coverage": 85
}
}
}
}
```
4. **Output Files**:
```bash
# Save review report
Write(.workflow/${sessionId}/REVIEW-${review_type}.md)
# Update session metadata
# (optional) Update workflow-session.json with review status
```
## Auto-fix (Default)
Auto-fix is enabled by default:
- Automatically fixes minor issues
- Runs formatters and linters
- Updates documentation
- Re-runs tests
5. **Optional: Update Memory** (if docs review or significant findings):
```bash
# If architecture or quality issues found, suggest memory update
if [ "$review_type" = "architecture" ] || [ "$review_type" = "quality" ]; then
echo "💡 Consider updating project documentation:"
echo " /update-memory-related"
fi
```
## Completion Criteria
- All tasks marked complete
- Tests passing (configurable threshold)
- No critical issues
- Documentation updated
## Usage Examples
## Output Files
- `REVIEW.md` - Review report
- `workflow-session.json` - Updated with results
- `test-results.json` - Detailed test output
```bash
# General quality review after implementation
/workflow:review
# Security audit before deployment
/workflow:review --type=security
# Architecture review for specific session
/workflow:review --type=architecture WFS-payment-integration
# Documentation review
/workflow:review --type=docs
```
## ✨ Features
- **Simple Validation**: Check session exists and has completed tasks
- **No Complex Orchestration**: Direct analysis, no multi-phase pipeline
- **Specialized Reviews**: Different prompts and tools for different review types
- **MCP Integration**: Fast code search for security and architecture patterns
- **CLI Tool Integration**: Gemini for analysis, Qwen for architecture
- **Structured Output**: Markdown reports with severity levels and action items
- **Optional Memory Update**: Suggests documentation updates for significant findings
## Integration with Workflow
```
Standard Workflow:
plan → execute → test-gen → execute ✅
Optional Review (when needed):
plan → execute → test-gen → execute → review (security/architecture/docs)
```
**When to Use**:
- Before production deployment (security review + action-items review)
- After major feature (architecture review)
- Before code freeze (quality review)
- Pre-deployment verification (action-items review)
**When NOT to Use**:
- Regular development (tests are sufficient)
- Simple bug fixes (test-fix-agent handles it)
- Minor changes (update-memory-related is enough)
## Related Commands
- `/workflow:execute` - Must complete first
- `/task:status` - Check task completion
- `/workflow:status` - View overall status
- `/workflow:execute` - Must complete implementation first
- `/workflow:test-gen` - Primary quality gate (tests)
- `/workflow:tools:docs` - Generate hierarchical documentation (use instead of `--type=docs`)
- `/update-memory-related` - Update CLAUDE.md docs after architecture findings
- `/workflow:status` - Check session status

View File

@@ -44,8 +44,8 @@ mv temp.json .workflow/WFS-session/workflow-session.json
### Step 5: Count Final Statistics
```bash
ls .workflow/WFS-session/.task/*.json 2>/dev/null | wc -l
ls .workflow/WFS-session/.summaries/*.md 2>/dev/null | wc -l
find .workflow/WFS-session/.task/ -name "*.json" -type f 2>/dev/null | wc -l
find .workflow/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
```
### Step 6: Remove Active Marker
@@ -56,12 +56,12 @@ rm .workflow/.active-WFS-session-name
## Simple Bash Commands
### Basic Operations
- **Find active session**: `ls .workflow/.active-*`
- **Find active session**: `find .workflow/ -name ".active-*" -type f`
- **Get session name**: `basename marker | sed 's/^\.active-//'`
- **Update status**: `jq '.status = "completed"' session.json > temp.json`
- **Add timestamp**: `jq '.completed_at = "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'"'`
- **Count tasks**: `ls .task/*.json | wc -l`
- **Count completed**: `ls .summaries/*.md | wc -l`
- **Count tasks**: `find .task/ -name "*.json" -type f | wc -l`
- **Count completed**: `find .summaries/ -name "*.md" -type f 2>/dev/null | wc -l`
- **Remove marker**: `rm .workflow/.active-session`
### Completion Result
@@ -92,11 +92,11 @@ Session Completion Summary:
### Error Handling
```bash
# No active session
ls .workflow/.active-* 2>/dev/null || echo "No active session found"
find .workflow/ -name ".active-*" -type f 2>/dev/null || echo "No active session found"
# Incomplete tasks
task_count=$(ls .task/*.json | wc -l)
summary_count=$(ls .summaries/*.md 2>/dev/null | wc -l)
task_count=$(find .task/ -name "*.json" -type f | wc -l)
summary_count=$(find .summaries/ -name "*.md" -type f 2>/dev/null | wc -l)
test $task_count -eq $summary_count || echo "Warning: Not all tasks completed"
```

View File

@@ -35,8 +35,8 @@ jq -r '.session_id, .status, .project' .workflow/WFS-session/workflow-session.js
### Step 4: Count Task Progress
```bash
ls .workflow/WFS-session/.task/*.json 2>/dev/null | wc -l
ls .workflow/WFS-session/.summaries/*.md 2>/dev/null | wc -l
find .workflow/WFS-session/.task/ -name "*.json" -type f 2>/dev/null | wc -l
find .workflow/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
```
### Step 5: Get Creation Time
@@ -47,11 +47,11 @@ jq -r '.created_at // "unknown"' .workflow/WFS-session/workflow-session.json
## Simple Bash Commands
### Basic Operations
- **List sessions**: `ls .workflow/WFS-*`
- **Find active**: `ls .workflow/.active-*`
- **List sessions**: `find .workflow/ -maxdepth 1 -type d -name "WFS-*"`
- **Find active**: `find .workflow/ -name ".active-*" -type f`
- **Read session data**: `jq -r '.session_id, .status' session.json`
- **Count tasks**: `ls .task/*.json | wc -l`
- **Count completed**: `ls .summaries/*.md | wc -l`
- **Count tasks**: `find .task/ -name "*.json" -type f | wc -l`
- **Count completed**: `find .summaries/ -name "*.md" -type f 2>/dev/null | wc -l`
- **Get timestamp**: `jq -r '.created_at' session.json`
## Simple Output Format

View File

@@ -25,7 +25,7 @@ Generates on-demand views from JSON task data. No synchronization needed - all v
### Step 1: Find Active Session
```bash
ls .workflow/.active-* 2>/dev/null | head -1
find .workflow/ -name ".active-*" -type f 2>/dev/null | head -1
```
### Step 2: Load Session Data
@@ -35,7 +35,7 @@ cat .workflow/WFS-session/workflow-session.json
### Step 3: Scan Task Files
```bash
ls .workflow/WFS-session/.task/*.json 2>/dev/null
find .workflow/WFS-session/.task/ -name "*.json" -type f 2>/dev/null
```
### Step 4: Generate Task Status
@@ -45,8 +45,8 @@ cat .workflow/WFS-session/.task/impl-1.json | jq -r '.status'
### Step 5: Count Task Progress
```bash
ls .workflow/WFS-session/.task/*.json | wc -l
ls .workflow/WFS-session/.summaries/*.md 2>/dev/null | wc -l
find .workflow/WFS-session/.task/ -name "*.json" -type f | wc -l
find .workflow/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
```
### Step 6: Display Overview
@@ -66,11 +66,11 @@ ls .workflow/WFS-session/.summaries/*.md 2>/dev/null | wc -l
## Simple Bash Commands
### Basic Operations
- **Find active session**: `ls .workflow/.active-*`
- **Find active session**: `find .workflow/ -name ".active-*" -type f`
- **Read session info**: `cat .workflow/session/workflow-session.json`
- **List tasks**: `ls .workflow/session/.task/*.json`
- **List tasks**: `find .workflow/session/.task/ -name "*.json" -type f`
- **Check task status**: `cat task.json | jq -r '.status'`
- **Count completed**: `ls .summaries/*.md | wc -l`
- **Count completed**: `find .summaries/ -name "*.md" -type f | wc -l`
### Task Status Check
- **pending**: Not started yet
@@ -87,8 +87,8 @@ test -f .workflow/.active-* && echo "Session active"
for f in .workflow/session/.task/*.json; do jq empty "$f" && echo "Valid: $f"; done
# Check summaries match
ls .task/*.json | wc -l
ls .summaries/*.md | wc -l
find .task/ -name "*.json" -type f | wc -l
find .summaries/ -name "*.md" -type f 2>/dev/null | wc -l
```
## Simple Output Format

View File

@@ -0,0 +1,133 @@
---
name: tdd-plan
description: Orchestrate TDD workflow planning with Red-Green-Refactor task chains
usage: /workflow:tdd-plan [--agent] <input>
argument-hint: "[--agent] \"feature description\"|file.md|ISS-001"
examples:
- /workflow:tdd-plan "Implement user authentication"
- /workflow:tdd-plan --agent requirements.md
- /workflow:tdd-plan ISS-001
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
---
# TDD Workflow Plan Command (/workflow:tdd-plan)
## Coordinator Role
**This command is a pure orchestrator**: Execute 5 slash commands in sequence, parse outputs, pass context, and ensure complete TDD workflow creation.
**Execution Modes**:
- **Manual Mode** (default): Use `/workflow:tools:task-generate-tdd`
- **Agent Mode** (`--agent`): Use `/workflow:tools:task-generate-tdd --agent`
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **No Preliminary Analysis**: Do not read files before Phase 1
3. **Parse Every Output**: Extract required data for next phase
4. **Sequential Execution**: Each phase depends on previous output
5. **Complete All Phases**: Do not return until Phase 5 completes
6. **TDD Context**: All descriptions include "TDD:" prefix
## 5-Phase Execution
### Phase 1: Session Discovery
**Command**: `/workflow:session:start --auto "TDD: [structured-description]"`
**TDD Structured Format**:
```
TDD: [Feature Name]
GOAL: [Objective]
SCOPE: [Included/excluded]
CONTEXT: [Background]
TEST_FOCUS: [Test scenarios]
```
**Parse**: Extract sessionId
### Phase 2: Context Gathering
**Command**: `/workflow:tools:context-gather --session [sessionId] "TDD: [structured-description]"`
**Parse**: Extract contextPath
### Phase 3: TDD Analysis
**Command**: `/workflow:tools:concept-enhanced --session [sessionId] --context [contextPath]`
**Parse**: Verify ANALYSIS_RESULTS.md
### Phase 4: TDD Task Generation
**Command**:
- Manual: `/workflow:tools:task-generate-tdd --session [sessionId]`
- Agent: `/workflow:tools:task-generate-tdd --session [sessionId] --agent`
**Parse**: Extract feature count, chain count, task count
**Validate**:
- TDD_PLAN.md exists
- IMPL_PLAN.md exists
- TEST-*.json, IMPL-*.json, REFACTOR-*.json exist
- TODO_LIST.md exists
### Phase 5: TDD Structure Validation
**Internal validation (no command)**
**Validate**:
1. Each feature has TEST → IMPL → REFACTOR chain
2. Dependencies: IMPL depends_on TEST, REFACTOR depends_on IMPL
3. Meta fields: tdd_phase correct ("red"/"green"/"refactor")
4. Agents: TEST uses @code-review-test-agent, IMPL/REFACTOR use @code-developer
**Return Summary**:
```
TDD Planning complete for session: [sessionId]
Features analyzed: [N]
TDD chains generated: [N]
Total tasks: [3N]
Structure:
- Feature 1: TEST-1.1 → IMPL-1.1 → REFACTOR-1.1
[...]
Plans:
- TDD Structure: .workflow/[sessionId]/TDD_PLAN.md
- Implementation: .workflow/[sessionId]/IMPL_PLAN.md
Next: /workflow:execute or /workflow:tdd-verify
```
## TodoWrite Pattern
```javascript
// Initialize
[
{content: "Execute session discovery", status: "in_progress", activeForm: "..."},
{content: "Execute context gathering", status: "pending", activeForm: "..."},
{content: "Execute TDD analysis", status: "pending", activeForm: "..."},
{content: "Execute TDD task generation", status: "pending", activeForm: "..."},
{content: "Validate TDD structure", status: "pending", activeForm: "..."}
]
// Update after each phase: mark current "completed", next "in_progress"
```
## Input Processing
Convert user input to TDD-structured format:
**Simple text** → Add TDD context
**Detailed text** → Extract components with TEST_FOCUS
**File/Issue** → Read and structure with TDD
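As a hypothetical illustration (feature name and details invented), a plain request might be expanded into the structured format like this:
```
Input: /workflow:tdd-plan "Add rate limiting to the login endpoint"

TDD: Login Rate Limiting
GOAL: Reject repeated failed login attempts from the same client
SCOPE: Login endpoint only; other API routes excluded
CONTEXT: Credential validation already exists in the auth flow
TEST_FOCUS: Lockout after repeated failures, reset window, error responses
```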
## Error Handling
- **Parsing failure**: Retry once, then report
- **Validation failure**: Report missing/invalid data
- **Command failure**: Keep phase in_progress, report error
- **TDD validation failure**: Report incomplete chains or wrong dependencies
## Related Commands
- `/workflow:plan` - Standard (non-TDD) planning
- `/workflow:execute` - Execute TDD tasks
- `/workflow:tdd-verify` - Verify TDD compliance
- `/workflow:status` - View progress

View File

@@ -0,0 +1,361 @@
---
name: tdd-verify
description: Verify TDD workflow compliance and generate quality report
usage: /workflow:tdd-verify [session-id]
argument-hint: "[WFS-session-id]"
examples:
- /workflow:tdd-verify
- /workflow:tdd-verify WFS-auth
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(gemini-wrapper:*)
---
# TDD Verification Command (/workflow:tdd-verify)
## Coordinator Role
**This command is a pure orchestrator**: Execute 4 phases to verify TDD workflow compliance, test coverage, and Red-Green-Refactor cycle execution.
## Core Responsibilities
- Verify TDD task chain structure
- Analyze test coverage
- Validate TDD cycle execution
- Generate compliance report
## 4-Phase Execution
### Phase 1: Session Discovery
**Auto-detect or use provided session**
```bash
if [ -n "$1" ]; then
  sessionId="$1"                                   # session-id provided as argument
else
  sessionId=$(find .workflow/ -name '.active-*' | head -1 | sed 's/.*active-//')
fi
```
**Extract**: sessionId
**Validation**: Session directory exists
**TodoWrite**: Mark phase 1 completed, phase 2 in_progress
---
### Phase 2: Task Chain Validation
**Validate TDD structure using bash commands**
```bash
# Load all task JSONs
find .workflow/{sessionId}/.task/ -name '*.json'
# Extract task IDs
find .workflow/{sessionId}/.task/ -name '*.json' -exec jq -r '.id' {} \;
# Check dependencies
find .workflow/{sessionId}/.task/ -name 'IMPL-*.json' -exec jq -r '.context.depends_on[]?' {} \;
find .workflow/{sessionId}/.task/ -name 'REFACTOR-*.json' -exec jq -r '.context.depends_on[]?' {} \;
# Check meta fields
find .workflow/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.tdd_phase' {} \;
find .workflow/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
```
**Validation**:
- For each feature N, verify TEST-N.M → IMPL-N.M → REFACTOR-N.M exists
- IMPL-N.M.context.depends_on includes TEST-N.M
- REFACTOR-N.M.context.depends_on includes IMPL-N.M
- TEST tasks have tdd_phase="red" and agent="@code-review-test-agent"
- IMPL/REFACTOR tasks have tdd_phase="green"/"refactor" and agent="@code-developer"
**Extract**: Chain validation report
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
---
### Phase 3: Test Execution Analysis
**Command**: `SlashCommand(command="/workflow:tools:tdd-coverage-analysis --session [sessionId]")`
**Input**: sessionId from Phase 1
**Parse Output**:
- Coverage metrics (line, branch, function percentages)
- TDD cycle verification results
- Compliance score
**Validation**:
- `.workflow/{sessionId}/.process/test-results.json` exists
- `.workflow/{sessionId}/.process/coverage-report.json` exists
- `.workflow/{sessionId}/.process/tdd-cycle-report.md` exists
**TodoWrite**: Mark phase 3 completed, phase 4 in_progress
---
### Phase 4: Compliance Report Generation
**Gemini analysis for comprehensive TDD compliance report**
```bash
cd project-root && ~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Generate TDD compliance report
TASK: Analyze TDD workflow execution and generate quality report
CONTEXT: @{.workflow/{sessionId}/.task/*.json,.workflow/{sessionId}/.summaries/*,.workflow/{sessionId}/.process/tdd-cycle-report.md}
EXPECTED:
- TDD compliance score (0-100)
- Chain completeness verification
- Test coverage analysis summary
- Quality recommendations
- Red-Green-Refactor cycle validation
- Best practices adherence assessment
RULES: Focus on TDD best practices and workflow adherence. Be specific about violations and improvements.
" > .workflow/{sessionId}/TDD_COMPLIANCE_REPORT.md
```
**Output**: TDD_COMPLIANCE_REPORT.md
**TodoWrite**: Mark phase 4 completed
**Return to User**:
```
TDD Verification Report - Session: {sessionId}
## Chain Validation
✅ Feature 1: TEST-1.1 → IMPL-1.1 → REFACTOR-1.1 (Complete)
✅ Feature 2: TEST-2.1 → IMPL-2.1 → REFACTOR-2.1 (Complete)
⚠️ Feature 3: TEST-3.1 → IMPL-3.1 (Missing REFACTOR phase)
## Test Execution
✅ All TEST tasks produced failing tests
✅ All IMPL tasks made tests pass
✅ All REFACTOR tasks maintained green tests
## Coverage Metrics
Line Coverage: {percentage}%
Branch Coverage: {percentage}%
Function Coverage: {percentage}%
## Compliance Score: {score}/100
Detailed report: .workflow/{sessionId}/TDD_COMPLIANCE_REPORT.md
Recommendations:
- Complete missing REFACTOR-3.1 task
- Consider additional edge case tests for Feature 2
- Improve test failure message clarity in Feature 1
```
## TodoWrite Pattern
```javascript
// Initialize (before Phase 1)
TodoWrite({todos: [
{"content": "Identify target session", "status": "in_progress", "activeForm": "Identifying target session"},
{"content": "Validate task chain structure", "status": "pending", "activeForm": "Validating task chain structure"},
{"content": "Analyze test execution", "status": "pending", "activeForm": "Analyzing test execution"},
{"content": "Generate compliance report", "status": "pending", "activeForm": "Generating compliance report"}
]})
// After Phase 1
TodoWrite({todos: [
{"content": "Identify target session", "status": "completed", "activeForm": "Identifying target session"},
{"content": "Validate task chain structure", "status": "in_progress", "activeForm": "Validating task chain structure"},
{"content": "Analyze test execution", "status": "pending", "activeForm": "Analyzing test execution"},
{"content": "Generate compliance report", "status": "pending", "activeForm": "Generating compliance report"}
]})
// Continue pattern for Phase 2, 3, 4...
```
## Validation Logic
### Chain Validation Algorithm
```
1. Load all task JSONs from .workflow/{sessionId}/.task/
2. Extract task IDs and group by feature number
3. For each feature:
- Check TEST-N.M exists
- Check IMPL-N.M exists
- Check REFACTOR-N.M exists (optional but recommended)
- Verify IMPL-N.M depends_on TEST-N.M
- Verify REFACTOR-N.M depends_on IMPL-N.M
- Verify meta.tdd_phase values
- Verify meta.agent assignments
4. Calculate chain completeness score
5. Report incomplete or invalid chains
```
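A minimal sketch of how the first validation steps could be scripted, assuming `jq` and the `.task/` layout described above; the actual verification is performed by the model during Phase 2, so this is illustrative only.
```bash
# Sketch: check that each TEST task has its IMPL/REFACTOR chain and dependency.
task_dir=".workflow/${sessionId}/.task"
for test_task in "$task_dir"/TEST-*.json; do
  [ -e "$test_task" ] || continue            # no TEST tasks found
  id=$(jq -r '.id' "$test_task")             # e.g. TEST-1.1
  n="${id#TEST-}"                            # chain suffix, e.g. 1.1
  impl="$task_dir/IMPL-$n.json"
  refactor="$task_dir/REFACTOR-$n.json"
  [ -f "$impl" ]     || echo "Feature $n: missing IMPL-$n"
  [ -f "$refactor" ] || echo "Feature $n: missing REFACTOR-$n (recommended)"
  if [ -f "$impl" ]; then
    jq -e --arg t "TEST-$n" '.context.depends_on // [] | index($t)' "$impl" >/dev/null \
      || echo "Feature $n: IMPL-$n does not depend on TEST-$n"
  fi
done
```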
### Compliance Scoring
```
Base Score: 100 points
Deductions:
- Missing TEST task: -30 points per feature
- Missing IMPL task: -30 points per feature
- Missing REFACTOR task: -10 points per feature
- Wrong dependency: -15 points per error
- Wrong agent: -5 points per error
- Wrong tdd_phase: -5 points per error
- Test didn't fail initially: -10 points per feature
- Tests didn't pass after IMPL: -20 points per feature
- Tests broke during REFACTOR: -15 points per feature
Final Score: Max(0, Base Score - Deductions)
```
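The arithmetic itself is simple; the sketch below assumes the per-feature violation counts (all hypothetical variable names) have already been tallied, for example by a chain-validation loop like the one above.
```bash
# Sketch: apply the deduction table to pre-computed violation counts.
base=100
deductions=$(( missing_test*30 + missing_impl*30 + missing_refactor*10 ))
deductions=$(( deductions + wrong_dependency*15 + wrong_agent*5 + wrong_phase*5 ))
deductions=$(( deductions + no_initial_failure*10 + impl_not_green*20 + refactor_broke*15 ))
score=$(( base - deductions ))
[ "$score" -lt 0 ] && score=0
echo "Compliance Score: ${score}/100"
```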
## Output Files
```
.workflow/{session-id}/
├── TDD_COMPLIANCE_REPORT.md # Comprehensive compliance report ⭐
└── .process/
├── test-results.json # From tdd-coverage-analysis
├── coverage-report.json # From tdd-coverage-analysis
└── tdd-cycle-report.md # From tdd-coverage-analysis
```
## Error Handling
### Session Discovery Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| No active session | No .active-* file | Provide session-id explicitly |
| Multiple active sessions | Multiple .active-* files | Provide session-id explicitly |
| Session not found | Invalid session-id | Check available sessions |
### Validation Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Task files missing | Incomplete planning | Run tdd-plan first |
| Invalid JSON | Corrupted task files | Regenerate tasks |
| Missing summaries | Tasks not executed | Execute tasks before verify |
### Analysis Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Coverage tool missing | No test framework | Configure testing first |
| Tests fail to run | Code errors | Fix errors before verify |
| Gemini analysis fails | Token limit / API error | Retry or reduce context |
## Integration & Usage
### Command Chain
- **Called After**: `/workflow:execute` (when TDD tasks completed)
- **Calls**: `/workflow:tools:tdd-coverage-analysis`, Gemini wrapper
- **Related**: `/workflow:tdd-plan`, `/workflow:status`
### Basic Usage
```bash
# Auto-detect active session
/workflow:tdd-verify
# Specify session
/workflow:tdd-verify WFS-auth
```
### When to Use
- After completing all TDD tasks in a workflow
- Before merging TDD workflow branch
- For TDD process quality assessment
- To identify missing TDD steps
## TDD Compliance Report Structure
```markdown
# TDD Compliance Report - {Session ID}
**Generated**: {timestamp}
**Session**: {sessionId}
**Workflow Type**: TDD
## Executive Summary
Overall Compliance Score: {score}/100
Status: {EXCELLENT | GOOD | NEEDS IMPROVEMENT | FAILED}
## Chain Analysis
### Feature 1: {Feature Name}
**Status**: ✅ Complete
**Chain**: TEST-1.1 → IMPL-1.1 → REFACTOR-1.1
- ✅ **Red Phase**: Test created and failed with clear message
- ✅ **Green Phase**: Minimal implementation made test pass
- ✅ **Refactor Phase**: Code improved, tests remained green
### Feature 2: {Feature Name}
**Status**: ⚠️ Incomplete
**Chain**: TEST-2.1 → IMPL-2.1 (Missing REFACTOR-2.1)
- ✅ **Red Phase**: Test created and failed
- ⚠️ **Green Phase**: Implementation seems over-engineered
- ❌ **Refactor Phase**: Missing
**Issues**:
- REFACTOR-2.1 task not completed
- IMPL-2.1 implementation exceeded minimal scope
[Repeat for all features]
## Test Coverage Analysis
### Coverage Metrics
- Line Coverage: {percentage}% {status}
- Branch Coverage: {percentage}% {status}
- Function Coverage: {percentage}% {status}
### Coverage Gaps
- {file}:{lines} - Uncovered error handling
- {file}:{lines} - Uncovered edge case
## TDD Cycle Validation
### Red Phase (Write Failing Test)
- ✅ {N}/{total} features had failing tests initially
- ⚠️ Feature 3: No evidence of initial test failure
### Green Phase (Make Test Pass)
- ✅ {N}/{total} implementations made tests pass
- ✅ All implementations minimal and focused
### Refactor Phase (Improve Quality)
- ⚠️ {N}/{total} features completed refactoring
- ❌ Feature 2, 4: Refactoring step skipped
## Best Practices Assessment
### Strengths
- Clear test descriptions
- Good test coverage
- Consistent naming conventions
- Well-structured code
### Areas for Improvement
- Some implementations over-engineered in Green phase
- Missing refactoring steps
- Test failure messages could be more descriptive
## Recommendations
### High Priority
1. Complete missing REFACTOR tasks (Features 2, 4)
2. Verify initial test failures for Feature 3
3. Simplify over-engineered implementations
### Medium Priority
1. Add edge case tests for Features 1, 3
2. Improve test failure message clarity
3. Increase branch coverage to >85%
### Low Priority
1. Add more descriptive test names
2. Consider parameterized tests for similar scenarios
3. Document TDD process learnings
## Conclusion
{Summary of compliance status and next steps}
```
## Related Commands
- `/workflow:tdd-plan` - Creates TDD workflow
- `/workflow:execute` - Executes TDD tasks
- `/workflow:tools:tdd-coverage-analysis` - Analyzes test coverage
- `/workflow:status` - Views workflow progress

View File

@@ -1,145 +1,487 @@
---
name: test-gen
description: Generate comprehensive test workflow based on completed implementation tasks
usage: /workflow:test-gen [session-id]
argument-hint: "WFS-session-id"
description: Create independent test-fix workflow session by analyzing completed implementation
usage: /workflow:test-gen <source-session-id>
argument-hint: "<source-session-id>"
examples:
- /workflow:test-gen
- /workflow:test-gen WFS-user-auth
- /workflow:test-gen WFS-api-refactor
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
---
# Workflow Test Generation Command
# Workflow Test Generation Command (/workflow:test-gen)
## Overview
Analyzes completed implementation sessions and generates comprehensive test requirements, then calls workflow:plan to create test workflow.
## Coordinator Role
## Usage
```bash
/workflow:test-gen # Auto-detect active session
/workflow:test-gen WFS-session-id # Analyze specific session
**This command is a pure orchestrator**: Creates an independent test-fix workflow session for validating a completed implementation. It reuses the standard planning toolchain with automatic cross-session context gathering.
**Core Principles**:
- **Session Isolation**: Creates new `WFS-test-[source]` session to keep verification separate from implementation
- **Context-First**: Prioritizes gathering code changes and summaries from source session
- **Format Reuse**: Creates standard `IMPL-*.json` task, using `meta.type: "test-fix"` for agent assignment
- **Parameter Simplification**: Tools auto-detect the test session type via metadata; no manual cross-session parameters are needed
**Execution Flow**:
1. Initialize TodoWrite → Create test session → Parse session ID
2. Gather cross-session context (automatic) → Parse context path
3. Analyze implementation with concept-enhanced → Parse ANALYSIS_RESULTS.md
4. Generate test task from analysis → Return summary
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 test session creation
2. **No Preliminary Analysis**: Do not read files or analyze before Phase 1
3. **Parse Every Output**: Extract required data from each phase for next phase
4. **Sequential Execution**: Each phase depends on previous phase's output
5. **Complete All Phases**: Do not return to user until Phase 4 completes (execution triggered separately)
6. **Track Progress**: Update TodoWrite after every phase completion
7. **Automatic Detection**: context-gather auto-detects test session and gathers source session context
## 4-Phase Execution
### Phase 1: Create Test Session
**Command**: `SlashCommand(command="/workflow:session:start --new \"Test validation for [sourceSessionId]\"")`
**Input**: `sourceSessionId` from user argument (e.g., `WFS-user-auth`)
**Expected Behavior**:
- Creates new session with pattern `WFS-test-[source-slug]` (e.g., `WFS-test-user-auth`)
- Writes metadata to `workflow-session.json`:
- `workflow_type: "test_session"`
- `source_session_id: "[sourceSessionId]"`
- Returns new session ID for subsequent phases
**Parse Output**:
- Extract: new test session ID (store as `testSessionId`)
- Pattern: `WFS-test-[slug]`
**Validation**:
- Source session `.workflow/[sourceSessionId]/` exists
- Source session has completed IMPL tasks (`.summaries/IMPL-*-summary.md`)
- New test session directory created
- Metadata includes `workflow_type` and `source_session_id`
**TodoWrite**: Mark phase 1 completed, phase 2 in_progress
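The Phase 1 checks could look roughly like the following sketch, assuming `jq` and the session layout above (session IDs shown are example values only):
```bash
# Sketch: validate source session and the newly created test session metadata.
sourceSessionId="WFS-user-auth"            # from the user argument
testSessionId="WFS-test-user-auth"         # parsed from session-start output
[ -d ".workflow/${sourceSessionId}" ] || { echo "Source session not found"; exit 1; }
find ".workflow/${sourceSessionId}/.summaries/" -name "IMPL-*-summary.md" -type f 2>/dev/null | grep -q . \
  || { echo "Source session has no completed IMPL tasks"; exit 1; }
jq -e '.workflow_type == "test_session" and .source_session_id != null' \
  ".workflow/${testSessionId}/workflow-session.json" >/dev/null \
  || echo "Test session metadata missing workflow_type/source_session_id"
```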
---
### Phase 2: Gather Cross-Session Context
**Command**: `SlashCommand(command="/workflow:tools:context-gather --session [testSessionId]")`
**Input**: `testSessionId` from Phase 1 (e.g., `WFS-test-user-auth`)
**Automatic Detection**:
- context-gather reads `.workflow/[testSessionId]/workflow-session.json`
- Detects `workflow_type: "test_session"`
- Automatically uses `source_session_id` to gather source session context
- No need for manual `--source-session` parameter
**Cross-Session Context Collection** (Automatic):
- Implementation summaries: `.workflow/[sourceSessionId]/.summaries/IMPL-*-summary.md`
- Code changes: `git log --since=[source_session_created_at]` for changed files
- Original plan: `.workflow/[sourceSessionId]/IMPL_PLAN.md`
- Test files: Discovered via MCP code-index tools
**Parse Output**:
- Extract: context package path (store as `contextPath`)
- Pattern: `.workflow/[testSessionId]/.process/context-package.json`
**Validation**:
- Context package created in test session directory
- Contains source session artifacts (summaries, changed files)
- Includes test file inventory
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
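The auto-detection that context-gather is expected to perform is shown below as an assumed sketch, purely to clarify the data flow; the tool's real implementation may differ.
```bash
# Sketch: detect a test session and resolve its source session from metadata.
meta=".workflow/${testSessionId}/workflow-session.json"
if [ "$(jq -r '.workflow_type // empty' "$meta")" = "test_session" ]; then
  sourceSessionId=$(jq -r '.source_session_id' "$meta")
  # gather context from .workflow/${sourceSessionId}/ instead of the current session
fi
```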
---
### Phase 3: Implementation Analysis
**Command**: `SlashCommand(command="/workflow:tools:concept-enhanced --session [testSessionId] --context [contextPath]")`
**Input**:
- `testSessionId` from Phase 1 (e.g., `WFS-test-user-auth`)
- `contextPath` from Phase 2 (e.g., `.workflow/WFS-test-user-auth/.process/context-package.json`)
**Analysis Focus**:
- Review implementation summaries from source session
- Identify test files and coverage gaps
- Assess test execution strategy (unit, integration, e2e)
- Determine failure diagnosis approach
- Recommend code quality improvements
**Expected Behavior**:
- Reads context-package.json with cross-session artifacts
- Executes parallel analysis (Gemini for test strategy, optional Codex for validation)
- Generates comprehensive test execution strategy
- Identifies code modification targets for test fixes
- Provides feasibility assessment for test validation
**Parse Output**:
- Verify `.workflow/[testSessionId]/.process/ANALYSIS_RESULTS.md` created
- Contains test strategy and execution plan
- Lists code modification targets (format: `file:function:lines` or `file`)
- Includes risk assessment and optimization recommendations
**Validation**:
- File `.workflow/[testSessionId]/.process/ANALYSIS_RESULTS.md` exists
- Contains complete analysis sections:
- Current State Analysis (test coverage, existing tests)
- Proposed Solution Design (test execution strategy)
- Implementation Strategy (code targets, feasibility)
- Solution Optimization (performance, quality)
- Critical Success Factors (acceptance criteria)
**TodoWrite**: Mark phase 3 completed, phase 4 in_progress
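An illustrative check that the analysis output exists and covers the expected sections (section titles taken from the list above):
```bash
# Sketch: verify ANALYSIS_RESULTS.md is present and contains the required sections.
analysis=".workflow/${testSessionId}/.process/ANALYSIS_RESULTS.md"
[ -f "$analysis" ] || { echo "ANALYSIS_RESULTS.md missing"; exit 1; }
for section in "Current State Analysis" "Proposed Solution Design" \
               "Implementation Strategy" "Solution Optimization" \
               "Critical Success Factors"; do
  grep -q "$section" "$analysis" || echo "Missing section: $section"
done
```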
---
### Phase 4: Generate Test Task
**Command**: `SlashCommand(command="/workflow:tools:task-generate --session [testSessionId]")`
**Input**: `testSessionId` from Phase 1
**Expected Behavior**:
- Reads ANALYSIS_RESULTS.md from Phase 3
- Extracts test strategy and code modification targets
- Generates `IMPL-001.json` (reusing standard format) with:
- `meta.type: "test-fix"` (enables @test-fix-agent assignment)
- `meta.agent: "@test-fix-agent"`
- `context.requirements`: Test execution requirements from analysis
- `context.focus_paths`: Test files and source files from analysis
- `context.acceptance`: All tests pass criteria
- `flow_control.pre_analysis`: Load source session summaries
- `flow_control.implementation_approach`: Test execution strategy from ANALYSIS_RESULTS.md
- `flow_control.target_files`: Code modification targets from analysis
**Parse Output**:
- Verify `.workflow/[testSessionId]/.task/IMPL-001.json` exists
- Verify `.workflow/[testSessionId]/IMPL_PLAN.md` created
- Verify `.workflow/[testSessionId]/TODO_LIST.md` created
**Validation**:
- Task JSON has `id: "IMPL-001"` and `meta.type: "test-fix"`
- IMPL_PLAN.md contains test validation strategy from ANALYSIS_RESULTS.md
- TODO_LIST.md shows IMPL-001 task
- flow_control includes code targets and test strategy
- Task is ready for /workflow:execute
**TodoWrite**: Mark phase 4 completed
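A minimal sketch of the Phase 4 checks, assuming `jq` and the file layout above:
```bash
# Sketch: confirm the generated task and planning files match expectations.
task=".workflow/${testSessionId}/.task/IMPL-001.json"
[ -f "$task" ] || { echo "IMPL-001.json missing"; exit 1; }
jq -e '.id == "IMPL-001" and .meta.type == "test-fix" and .meta.agent == "@test-fix-agent"' \
  "$task" >/dev/null || echo "Task metadata does not match expected test-fix settings"
[ -f ".workflow/${testSessionId}/IMPL_PLAN.md" ] || echo "IMPL_PLAN.md missing"
[ -f ".workflow/${testSessionId}/TODO_LIST.md" ] || echo "TODO_LIST.md missing"
```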
**Return to User**:
```
Independent test-fix workflow created successfully!
Source Session: [sourceSessionId]
Test Session: [testSessionId]
Task Created: IMPL-001 (test-fix)
Next Steps:
1. Review test plan: .workflow/[testSessionId]/IMPL_PLAN.md
2. Execute validation: /workflow:execute
3. Monitor progress: /workflow:status
The @test-fix-agent will:
- Execute all tests
- Diagnose any failures
- Fix code until tests pass
```
## Dynamic Session ID Resolution
---
The `${SESSION_ID}` variable is dynamically resolved based on:
## TodoWrite Pattern
1. **Command argument**: If session-id provided as argument, use it directly
2. **Auto-detection**: If no argument, detect from active session markers
3. **Format**: Always in format `WFS-session-name`
```javascript
// Initialize (before Phase 1)
TodoWrite({todos: [
{"content": "Create independent test session", "status": "in_progress", "activeForm": "Creating test session"},
{"content": "Gather cross-session context", "status": "pending", "activeForm": "Gathering cross-session context"},
{"content": "Analyze implementation for test strategy", "status": "pending", "activeForm": "Analyzing implementation"},
{"content": "Generate test validation task", "status": "pending", "activeForm": "Generating test validation task"}
]})
```bash
# Example resolution logic:
# If argument provided: SESSION_ID = "WFS-user-auth"
# If no argument: SESSION_ID = $(find .workflow/ -name '.active-*' | head -1 | sed 's/.*active-//')
// After Phase 1
TodoWrite({todos: [
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Gather cross-session context", "status": "in_progress", "activeForm": "Gathering cross-session context"},
{"content": "Analyze implementation for test strategy", "status": "pending", "activeForm": "Analyzing implementation"},
{"content": "Generate test validation task", "status": "pending", "activeForm": "Generating test validation task"}
]})
// After Phase 2
TodoWrite({todos: [
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Gather cross-session context", "status": "completed", "activeForm": "Gathering cross-session context"},
{"content": "Analyze implementation for test strategy", "status": "in_progress", "activeForm": "Analyzing implementation"},
{"content": "Generate test validation task", "status": "pending", "activeForm": "Generating test validation task"}
]})
// After Phase 3
TodoWrite({todos: [
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Gather cross-session context", "status": "completed", "activeForm": "Gathering cross-session context"},
{"content": "Analyze implementation for test strategy", "status": "completed", "activeForm": "Analyzing implementation"},
{"content": "Generate test validation task", "status": "in_progress", "activeForm": "Generating test validation task"}
]})
// After Phase 4
TodoWrite({todos: [
{"content": "Create independent test session", "status": "completed", "activeForm": "Creating test session"},
{"content": "Gather cross-session context", "status": "completed", "activeForm": "Gathering cross-session context"},
{"content": "Analyze implementation for test strategy", "status": "completed", "activeForm": "Analyzing implementation"},
{"content": "Generate test validation task", "status": "completed", "activeForm": "Generating test validation task"}
]})
```
## Implementation Flow
## Data Flow
### Step 1: Identify Target Session
```bash
# Auto-detect active session (if no session-id provided)
find .workflow/ -name '.active-*' | head -1 | sed 's/.*active-//'
# Use provided session-id or detected session-id
# SESSION_ID = provided argument OR detected active session
```
User: /workflow:test-gen WFS-user-auth
Phase 1: session-start --new "Test validation for WFS-user-auth"
↓ Creates: WFS-test-user-auth session
↓ Writes: workflow-session.json with workflow_type="test_session", source_session_id="WFS-user-auth"
↓ Output: testSessionId = "WFS-test-user-auth"
Phase 2: context-gather --session WFS-test-user-auth
↓ Auto-detects: test session type from workflow-session.json
↓ Auto-reads: source_session_id = "WFS-user-auth"
↓ Gathers: Cross-session context (summaries, code changes, tests)
↓ Output: .workflow/WFS-test-user-auth/.process/context-package.json
Phase 3: concept-enhanced --session WFS-test-user-auth --context context-package.json
↓ Reads: context-package.json with cross-session artifacts
↓ Executes: Parallel analysis (Gemini test strategy + optional Codex validation)
↓ Analyzes: Test coverage, execution strategy, code targets
↓ Output: .workflow/WFS-test-user-auth/.process/ANALYSIS_RESULTS.md
Phase 4: task-generate --session WFS-test-user-auth
↓ Reads: ANALYSIS_RESULTS.md with test strategy and code targets
↓ Generates: IMPL-001.json with meta.type="test-fix"
↓ Output: Task, plan, and todo files in test session
Return: Summary with next steps (user triggers /workflow:execute separately)
```
### Step 2: Get Session Start Time
```bash
cat .workflow/WFS-${SESSION_ID}/workflow-session.json | jq -r .created_at
## Session Metadata Design
**Test Session (`WFS-test-user-auth/workflow-session.json`)**:
```json
{
"session_id": "WFS-test-user-auth",
"project": "Test validation for user authentication implementation",
"status": "planning",
"created_at": "2025-10-03T12:00:00Z",
"workflow_type": "test_session",
"source_session_id": "WFS-user-auth"
}
```
### Step 3: Git Change Analysis (using session start time)
```bash
git log --since="$(cat .workflow/WFS-${SESSION_ID}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u | grep -v '^$'
## Automatic Cross-Session Context Collection
When `context-gather` detects `workflow_type: "test_session"`:
**Collected from Source Session** (`.workflow/WFS-user-auth/`):
- Implementation summaries: `.summaries/IMPL-*-summary.md`
- Code changes: `git log --since=[source_created_at] --name-only`
- Original plan: `IMPL_PLAN.md`
- Task definitions: `.task/IMPL-*.json`
**Collected from Current Project**:
- Test files: `mcp__code-index__find_files(pattern="*.test.*")`
- Test configuration: `package.json`, `jest.config.js`, etc.
- Source files: Based on changed files from git log
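The collection step could be approximated with the commands below; this is a sketch assuming `jq` and `git` (the source session path is an example), while the real gathering is done by context-gather and the MCP code-index tools.
```bash
# Sketch: collect source-session artifacts and changed files for the context package.
src=".workflow/WFS-user-auth"
created_at=$(jq -r '.created_at' "${src}/workflow-session.json")
find "${src}/.summaries/" -name "IMPL-*-summary.md" -type f 2>/dev/null
git log --since="$created_at" --name-only --pretty=format: | sort -u | grep -v '^$'
cat "${src}/IMPL_PLAN.md"
find "${src}/.task/" -name "IMPL-*.json" -type f 2>/dev/null
```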
## Task Generation Output
task-generate creates `IMPL-001.json` (reusing standard format) with:
```json
{
"id": "IMPL-001",
"title": "Execute and validate tests for [sourceSessionId]",
"status": "pending",
"meta": {
"type": "test-fix",
"agent": "@test-fix-agent"
},
"context": {
"requirements": [
"Execute complete test suite for all implemented modules",
"Diagnose and fix any test failures",
"Ensure all tests pass before completion"
],
"focus_paths": ["src/**/*.test.ts", "src/**/implementation.ts"],
"acceptance": [
"All tests pass successfully",
"No test failures or errors",
"Code is approved and ready for deployment"
],
"depends_on": [],
"artifacts": []
},
"flow_control": {
"pre_analysis": [
{
"step": "load_source_session_summaries",
"action": "Load implementation summaries from source session",
"commands": [
"bash(find .workflow/[sourceSessionId]/.summaries/ -name 'IMPL-*-summary.md' 2>/dev/null)",
"Read(.workflow/[sourceSessionId]/.summaries/IMPL-001-summary.md)"
],
"output_to": "implementation_context",
"on_error": "skip_optional"
},
{
"step": "analyze_test_files",
"action": "Identify test files and coverage",
"commands": [
"mcp__code-index__find_files(pattern=\"*.test.*\")",
"mcp__code-index__search_code_advanced(pattern=\"test|describe|it\", file_pattern=\"*.test.*\")"
],
"output_to": "test_inventory",
"on_error": "fail"
}
],
"implementation_approach": {
"task_description": "Execute tests and fix failures until all pass",
"modification_points": [
"Run test suite using detected framework",
"Parse test output to identify failures",
"Diagnose root cause of failures",
"Modify source code to fix issues",
"Re-run tests to verify fixes"
],
"logic_flow": [
"Load implementation context",
"Identify test framework and configuration",
"Execute complete test suite",
"If failures: analyze error messages",
"Fix source code based on diagnosis",
"Re-run tests",
"Repeat until all tests pass"
]
},
"target_files": ["src/**/*.test.ts", "src/**/implementation.ts"]
}
}
```
### Step 4: Filter Code Files
```bash
git log --since="$(cat .workflow/WFS-${SESSION_ID}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u | grep -E '\.(js|ts|jsx|tsx|py|java|go|rs)$'
## Error Handling
### Phase 1 Failures
- **Source session not found**: Return error "Source session [sourceSessionId] not found in .workflow/"
- **Invalid source session**: Return error "Source session [sourceSessionId] has no completed IMPL tasks"
- **Session creation failed**: Return error "Could not create test session. Check /workflow:session:start"
### Phase 2 Failures
- **Context gathering failed**: Return error "Could not gather cross-session context. Check source session artifacts"
- **No source artifacts**: Return error "Source session has no implementation summaries or git history"
### Phase 3 Failures
- **Analysis failed**: Return error "Implementation analysis failed. Check context package and ANALYSIS_RESULTS.md"
- **No test strategy**: Return error "Could not determine test execution strategy from analysis"
- **Missing code targets**: Warning only, proceed with general test task
### Phase 4 Failures
- **Task generation failed**: Retry once, then return error with details
- **Invalid task structure**: Return error with JSON validation details
## Workflow Integration
### Complete Flow Example
```
1. Implementation Phase (prior to test-gen)
/workflow:plan "Build auth system"
→ Creates WFS-auth session
→ @code-developer implements + writes tests
→ Creates IMPL-001-summary.md in WFS-auth
2. Test Generation Phase (test-gen)
/workflow:test-gen WFS-auth
Phase 1: session-start → Creates WFS-test-auth session
Phase 2: context-gather → Gathers from WFS-auth, creates context-package.json
Phase 3: concept-enhanced → Analyzes implementation, creates ANALYSIS_RESULTS.md
Phase 4: task-generate → Creates IMPL-001.json with meta.type="test-fix"
Returns: Summary with next steps
3. Test Execution Phase (user-triggered)
/workflow:execute
→ Detects active session: WFS-test-auth
→ @test-fix-agent picks up IMPL-001 (test-fix type)
→ Runs test suite
→ Diagnoses failures (if any)
→ Fixes source code
→ Re-runs tests
→ All pass → Code approved ✅
```
### Step 5: Load Session Context
```bash
cat .workflow/WFS-${SESSION_ID}/.summaries/IMPL-*-summary.md 2>/dev/null
```
### Output Files Created
### Step 6: Extract Focus Paths
```bash
find .workflow/WFS-${SESSION_ID}/.task/ -name '*.json' -exec jq -r '.context.focus_paths[]?' {} \;
```
**In Test Session** (`.workflow/WFS-test-auth/`):
- `workflow-session.json` - Contains workflow_type and source_session_id
- `.process/context-package.json` - Cross-session context from WFS-auth
- `.process/ANALYSIS_RESULTS.md` - Test strategy and code targets from concept-enhanced
- `.task/IMPL-001.json` - Test-fix task definition
- `IMPL_PLAN.md` - Test validation plan (from ANALYSIS_RESULTS.md)
- `TODO_LIST.md` - Task checklist
- `.summaries/IMPL-001-summary.md` - Created by @test-fix-agent after completion
### Step 7: Gemini Analysis and Planning Document Generation
```bash
cd project-root && ~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Analyze implementation and generate comprehensive test planning document
TASK: Review changed files and implementation context to create detailed test planning document
CONTEXT: Changed files: [changed_files], Implementation summaries: [impl_summaries], Focus paths: [focus_paths]
EXPECTED: Complete test planning document including:
- Test strategy analysis
- Critical test scenarios identification
- Edge cases and error conditions
- Test priority matrix
- Resource requirements
- Implementation approach recommendations
- Specific test cases with acceptance criteria
RULES: Generate structured markdown document suitable for workflow planning. Focus on actionable test requirements based on actual implementation changes.
" > .workflow/WFS-${SESSION_ID}/.process/GEMINI_TEST_PLAN.md
```
## Best Practices
### Step 8: Generate Combined Test Requirements Document
```bash
mkdir -p .workflow/WFS-${SESSION_ID}/.process
```
1. **Run after implementation is complete**: Ensure the source session has completed IMPL tasks and summaries
2. **Check git commits**: Implementation changes should be committed for accurate git log analysis
3. **Verify test files exist**: Source implementation should include test files
4. **Independent sessions**: Test session is separate from implementation session for clean separation
5. **Monitor execution**: Use `/workflow:status` to track test-fix progress after /workflow:execute
```bash
cat > .workflow/WFS-${SESSION_ID}/.process/TEST_REQUIREMENTS.md << 'EOF'
# Test Requirements Summary for WFS-${SESSION_ID}
## Coordinator Checklist
## Analysis Data Sources
- Git change analysis results
- Implementation summaries and context
- Gemini-generated test planning document
✅ Initialize TodoWrite before any command (4 phases)
✅ Phase 1: Create test session with source session ID
✅ Parse new test session ID from Phase 1 output
✅ Phase 2: Run context-gather (auto-detects test session, no extra params)
✅ Verify context-package.json contains cross-session artifacts
✅ Phase 3: Run concept-enhanced with session and context path
✅ Verify ANALYSIS_RESULTS.md contains test strategy and code targets
✅ Phase 4: Run task-generate to create IMPL-001.json
✅ Verify task has meta.type="test-fix" and meta.agent="@test-fix-agent"
✅ Verify flow_control includes analysis insights and code targets
✅ Update TodoWrite after each phase
✅ Return summary after Phase 4 (execution is separate user action)
## Reference Documents
- Detailed test plan: GEMINI_TEST_PLAN.md
- Implementation context: IMPL-*-summary.md files
## Required Tool Modifications
## Integration Note
This document combines analysis data with Gemini-generated planning document for comprehensive test workflow generation.
EOF
```
### `/workflow:session:start`
- Support `--new` flag for test session creation
- Auto-detect test session pattern from task description
- Write `workflow_type: "test_session"` and `source_session_id` to metadata
### Step 9: Call Workflow Plan with Gemini Planning Document
```bash
/workflow:plan .workflow/WFS-${SESSION_ID}/.process/GEMINI_TEST_PLAN.md
```
### `/workflow:tools:context-gather`
- Read session metadata to detect `workflow_type: "test_session"`
- Auto-extract `source_session_id` from metadata
- Gather cross-session context from source session artifacts
- Include git log analysis from source session creation time
## Simple Bash Commands
### `/workflow:tools:concept-enhanced`
- No changes required (already supports cross-session context analysis)
- Will automatically analyze test strategy based on context-package.json
- Generates ANALYSIS_RESULTS.md with code targets and test execution plan
### Basic Operations
- **Find active session**: `find .workflow/ -name '.active-*'`
- **Get git changes**: `git log --since='date' --name-only`
- **Filter code files**: `grep -E '\.(js|ts|py)$'`
- **Load summaries**: `cat .workflow/WFS-*/summaries/*.md`
- **Extract JSON data**: `jq -r '.context.focus_paths[]'`
- **Create directory**: `mkdir -p .workflow/session/.process`
- **Write file**: `cat > file << 'EOF'`
### `/workflow:tools:task-generate`
- Recognize test session context and generate appropriate task
- Create `IMPL-001.json` with `meta.type: "test-fix"`
- Extract test strategy from ANALYSIS_RESULTS.md
- Include code targets and cross-session references in flow_control
### Gemini CLI Integration
- **Planning command**: `~/.claude/scripts/gemini-wrapper -p "prompt" > GEMINI_TEST_PLAN.md`
- **Context loading**: Include changed files and implementation context
- **Document generation**: Creates comprehensive test planning document
- **Direct handoff**: Pass Gemini planning document to workflow:plan
## No Complex Logic
- No variables or functions
- No conditional statements
- No loops or complex pipes
- Direct bash commands only
- Gemini CLI for intelligent analysis
### `/workflow:execute`
- No changes required (already dispatches by meta.agent)
## Related Commands
- `/workflow:plan` - Called to generate test workflow
- `/workflow:execute` - Executes generated test tasks
- `/workflow:status` - Shows test workflow progress
- `/workflow:plan` - Create implementation workflow (run before test-gen)
- `/workflow:session:start` - Phase 1 tool for test session creation
- `/workflow:tools:context-gather` - Phase 2 tool for cross-session context collection
- `/workflow:tools:concept-enhanced` - Phase 3 tool for implementation analysis and test strategy
- `/workflow:tools:task-generate` - Phase 4 tool for test task creation
- `/workflow:execute` - Execute test-fix workflow (user-triggered after test-gen)
- `/workflow:status` - Check workflow progress
- `@test-fix-agent` - Agent that executes and fixes tests

View File

@@ -141,6 +141,9 @@ Advanced solution design and feasibility analysis engine with parallel CLI execu
RULES:
- Focus on SOLUTION IMPROVEMENTS and KEY DESIGN DECISIONS, NOT task planning
- Provide architectural rationale, evaluate alternatives, assess tradeoffs
- **CRITICAL**: Identify code targets - existing files as "file:function:lines", new files as "file"
- For modifications: specify exact files/functions/line ranges
- For new files: specify file path only (no function or lines)
- Do NOT create task lists, implementation steps, or code examples
- Do NOT generate any code snippets or implementation details
- **MUST write output to .workflow/{session_id}/.process/gemini-solution-design.md**
@@ -172,6 +175,8 @@ Advanced solution design and feasibility analysis engine with parallel CLI execu
RULES:
- Focus on TECHNICAL FEASIBILITY and RISK ASSESSMENT, NOT implementation planning
- Validate architectural decisions, identify potential issues, recommend optimizations
- **CRITICAL**: Verify code targets - existing files "file:function:lines", new files "file"
- Confirm exact locations for modifications, identify additional new files if needed
- Do NOT create task breakdowns, step-by-step guides, or code examples
- Do NOT generate any code snippets or implementation details
- **MUST write output to .workflow/{session_id}/.process/codex-feasibility-validation.md**
@@ -301,6 +306,39 @@ Generated ANALYSIS_RESULTS.md focuses on **solution improvements, key design dec
- **Module Dependencies**: {dependency_graph_and_order}
- **Quality Assurance**: {qa_approach_and_validation}
### Code Modification Targets
**Purpose**: Specific code locations for modification AND new files to create
**Format**:
- Existing files: `file:function:lines` (with line numbers)
- New files: `file` (no function or lines)
**Identified Targets**:
1. **Target**: `src/auth/AuthService.ts:login:45-52`
- **Type**: Modify existing
- **Modification**: Enhance error handling
- **Rationale**: Current logic lacks validation for edge cases
2. **Target**: `src/auth/PasswordReset.ts`
- **Type**: Create new file
- **Purpose**: Password reset functionality
- **Rationale**: New feature requirement
3. **Target**: `src/middleware/auth.ts:validateToken:30-45`
- **Type**: Modify existing
- **Modification**: Add token expiry check
- **Rationale**: Security requirement for JWT validation
4. **Target**: `tests/auth/PasswordReset.test.ts`
- **Type**: Create new file
- **Purpose**: Test coverage for password reset
- **Rationale**: Test requirement for new feature
**Note**:
- For new files, only specify the file path (no function or lines)
- For existing files without line numbers, use `file:function:*` format
- Task generation will refine these during the `analyze_task_patterns` step
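To make the two formats concrete, a hypothetical parser might distinguish them as follows; this is only an illustration, since the real refinement happens in the `analyze_task_patterns` step.
```bash
# Sketch: classify a target string as "modify existing" or "create new file".
parse_target() {
  case "$1" in
    *:*:*) IFS=':' read -r file func lines <<< "$1"
           echo "modify ${file} (${func}, lines ${lines})" ;;
    *)     echo "create ${1}" ;;
  esac
}
parse_target "src/auth/AuthService.ts:login:45-52"   # → modify existing
parse_target "src/auth/PasswordReset.ts"              # → create new file
```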
### Feasibility Assessment
- **Technical Complexity**: {complexity_rating_and_analysis}
- **Performance Impact**: {expected_performance_characteristics}

View File

@@ -158,7 +158,7 @@ Task(
"status": "pending",
"meta": {
"type": "feature|bugfix|refactor|test|docs",
"agent": "@code-developer|@code-review-test-agent"
"agent": "@code-developer|@test-fix-agent|@general-purpose"
},
"context": {
"requirements": ["extracted from analysis"],
@@ -212,7 +212,7 @@ Task(
"Validate against acceptance criteria"
]
},
"target_files": ["file:function:lines"]
"target_files": ["file:function:lines", "path/to/NewFile.ts"]
}
}
\`\`\`

View File

@@ -0,0 +1,368 @@
---
name: task-generate-tdd
description: Generate TDD task chains with Red-Green-Refactor dependencies
usage: /workflow:tools:task-generate-tdd --session <session_id> [--agent]
argument-hint: "--session WFS-session-id [--agent]"
examples:
- /workflow:tools:task-generate-tdd --session WFS-auth
- /workflow:tools:task-generate-tdd --session WFS-auth --agent
allowed-tools: Read(*), Write(*), Bash(gemini-wrapper:*), TodoWrite(*)
---
# TDD Task Generation Command
## Overview
Generate TDD-specific task chains from analysis results with enforced Red-Green-Refactor structure and dependencies.
## Core Philosophy
- **TDD-First**: Every feature starts with a failing test
- **Chain-Enforced**: Dependencies ensure proper TDD cycle
- **Phase-Explicit**: Each task marked with Red/Green/Refactor phase
- **Artifact-Aware**: Integrates brainstorming outputs
- **Memory-First**: Reuse loaded documents from memory
## Core Responsibilities
- Parse analysis results and identify testable features
- Generate Red-Green-Refactor task chains for each feature
- Enforce proper dependencies (TEST → IMPL → REFACTOR)
- Create TDD_PLAN.md and enhanced IMPL_PLAN.md
- Generate TODO_LIST.md with TDD phase indicators
- Update session state for TDD execution
## Execution Lifecycle
### Phase 1: Input Validation & Discovery
**⚡ Memory-First Rule**: Skip file loading if documents already in conversation memory
1. **Session Validation**
- If session metadata in memory → Skip loading
- Else: Load `.workflow/{session_id}/workflow-session.json`
2. **Analysis Results Loading**
- If ANALYSIS_RESULTS.md in memory → Skip loading
- Else: Read `.workflow/{session_id}/.process/ANALYSIS_RESULTS.md`
3. **Artifact Discovery**
- If artifact inventory in memory → Skip scanning
- Else: Scan `.workflow/{session_id}/.brainstorming/` directory
- Detect: synthesis-specification.md, topic-framework.md, role analyses
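When the documents are not already in memory, these loads reduce to a few existence checks and reads. A minimal sketch (paths follow the session layout described in this document; `SESSION_ID` is a placeholder):
```bash
SESSION_DIR=".workflow/${SESSION_ID}"
[ -f "$SESSION_DIR/workflow-session.json" ] || { echo "Session not found"; exit 1; }
[ -f "$SESSION_DIR/.process/ANALYSIS_RESULTS.md" ] || { echo "Run concept-enhanced first"; exit 1; }
# Artifact discovery: list brainstorming outputs if the directory exists
find "$SESSION_DIR/.brainstorming" -maxdepth 1 -type f -name '*.md' 2>/dev/null
```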
### Phase 2: TDD Task Analysis
#### Gemini TDD Breakdown
```bash
cd project-root && ~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Generate TDD task breakdown with Red-Green-Refactor chains
TASK: Analyze ANALYSIS_RESULTS.md and create TDD-structured task breakdown
CONTEXT: @{.workflow/{session_id}/.process/ANALYSIS_RESULTS.md,.workflow/{session_id}/.brainstorming/*}
EXPECTED:
- Feature list with testable requirements (identify 3-8 features)
- Test cases for each feature (Red phase) - specific test scenarios
- Implementation requirements (Green phase) - minimal code to pass
- Refactoring opportunities (Refactor phase) - quality improvements
- Task dependencies and execution order
- Focus paths for each phase
RULES:
- Each feature must have TEST → IMPL → REFACTOR chain
- Tests must define clear failure conditions
- Implementation must be minimal to pass tests
- Refactoring must maintain green tests
- Output structured markdown for task generation
- Maximum 10 features (30 total tasks)
" > .workflow/{session_id}/.process/TDD_TASK_BREAKDOWN.md
```
### Phase 3: TDD Task JSON Generation
#### Task Chain Structure
For each feature, generate 3 tasks with ID format:
- **TEST-N.M** (Red phase)
- **IMPL-N.M** (Green phase)
- **REFACTOR-N.M** (Refactor phase)
#### Chain Dependency Rules
- **IMPL depends_on TEST**: Cannot implement before test exists
- **REFACTOR depends_on IMPL**: Cannot refactor before implementation
- **Cross-feature dependencies**: If Feature 2 needs Feature 1, then `IMPL-2.1 depends_on ["REFACTOR-1.1"]`
#### Agent Assignment
- **TEST tasks** → `@code-review-test-agent`
- **IMPL tasks** → `@code-developer`
- **REFACTOR tasks** → `@code-developer`
#### Meta Fields
- `meta.type`: "test" | "feature" | "refactor"
- `meta.agent`: Agent for task execution
- `meta.tdd_phase`: "red" | "green" | "refactor"
#### Task JSON Examples
**RED Phase - Test Task (TEST-1.1.json)**
```json
{
"id": "TEST-1.1",
"title": "Write failing test for user authentication",
"status": "pending",
"meta": {
"type": "test",
"agent": "@code-review-test-agent",
"tdd_phase": "red"
},
"context": {
"requirements": [
"Write test case for login with valid credentials",
"Test should fail with 'AuthService not implemented' error",
"Include tests for invalid credentials and edge cases"
],
"focus_paths": ["tests/auth/login.test.ts"],
"acceptance": [
"Test file created with at least 3 test cases",
"Test runs and fails with clear error message",
"Test assertions define expected behavior"
],
"depends_on": []
},
"flow_control": {
"pre_analysis": [
{
"step": "check_test_framework",
"action": "Verify test framework is configured",
"command": "bash(npm list jest || npm list vitest)",
"output_to": "test_framework_info",
"on_error": "warn"
}
]
}
}
```
**GREEN Phase - Implementation Task (IMPL-1.1.json)**
```json
{
"id": "IMPL-1.1",
"title": "Implement user authentication to pass tests",
"status": "pending",
"meta": {
"type": "feature",
"agent": "@code-developer",
"tdd_phase": "green"
},
"context": {
"requirements": [
"Implement minimal AuthService to pass TEST-1.1",
"Handle valid and invalid credentials",
"Return appropriate success/error responses"
],
"focus_paths": ["src/auth/AuthService.ts", "tests/auth/login.test.ts"],
"acceptance": [
"All tests in TEST-1.1 pass",
"Implementation is minimal and focused",
"No over-engineering or premature optimization"
],
"depends_on": ["TEST-1.1"]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_test_requirements",
"action": "Read test specifications from TEST phase",
"command": "bash(cat .workflow/WFS-xxx/.summaries/TEST-1.1-summary.md)",
"output_to": "test_requirements",
"on_error": "fail"
},
{
"step": "verify_tests_failing",
"action": "Confirm tests are currently failing",
"command": "bash(npm test -- tests/auth/login.test.ts || echo 'Tests failing as expected')",
"output_to": "initial_test_status",
"on_error": "warn"
}
],
"post_completion": [
{
"step": "verify_tests_passing",
"action": "Confirm all tests now pass",
"command": "bash(npm test -- tests/auth/login.test.ts)",
"output_to": "final_test_status",
"on_error": "fail"
}
]
}
}
```
**REFACTOR Phase - Refactoring Task (REFACTOR-1.1.json)**
```json
{
"id": "REFACTOR-1.1",
"title": "Refactor authentication implementation",
"status": "pending",
"meta": {
"type": "refactor",
"agent": "@code-developer",
"tdd_phase": "refactor"
},
"context": {
"requirements": [
"Improve code quality while keeping tests green",
"Remove duplication in credential validation",
"Improve error handling and logging",
"Enhance code readability and maintainability"
],
"focus_paths": ["src/auth/AuthService.ts", "tests/auth/login.test.ts"],
"acceptance": [
"Code quality improved (complexity, readability)",
"All tests still pass after refactoring",
"No new functionality added",
"Duplication eliminated"
],
"depends_on": ["IMPL-1.1"]
},
"flow_control": {
"pre_analysis": [
{
"step": "verify_tests_passing",
"action": "Run tests to confirm green state before refactoring",
"command": "bash(npm test -- tests/auth/login.test.ts)",
"output_to": "test_status",
"on_error": "fail"
},
{
"step": "analyze_code_quality",
"action": "Run linter and complexity analysis",
"command": "bash(npm run lint src/auth/AuthService.ts)",
"output_to": "quality_metrics",
"on_error": "warn"
}
],
"post_completion": [
{
"step": "verify_tests_still_passing",
"action": "Confirm tests remain green after refactoring",
"command": "bash(npm test -- tests/auth/login.test.ts)",
"output_to": "final_test_status",
"on_error": "fail"
}
]
}
}
```
### Phase 4: TDD_PLAN.md Generation
Generate TDD-specific plan with:
- Feature breakdown
- Red-Green-Refactor chains
- Execution order
- TDD compliance checkpoints
### Phase 5: Enhanced IMPL_PLAN.md Generation
Generate standard IMPL_PLAN.md with TDD context reference.
### Phase 6: TODO_LIST.md Generation
Generate task list with TDD phase indicators:
```markdown
## Feature 1: {Feature Name}
- [ ] **TEST-1.1**: Write failing test (🔴 RED) → [📋](./.task/TEST-1.1.json)
- [ ] **IMPL-1.1**: Implement to pass tests (🟢 GREEN) [depends: TEST-1.1] → [📋](./.task/IMPL-1.1.json)
- [ ] **REFACTOR-1.1**: Refactor implementation (🔵 REFACTOR) [depends: IMPL-1.1] → [📋](./.task/REFACTOR-1.1.json)
```
### Phase 7: Session State Update
Update workflow-session.json with TDD metadata:
```json
{
"workflow_type": "tdd",
"feature_count": 10,
"task_count": 30,
"tdd_chains": 10
}
```
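A jq-based sketch of applying this update in place (field values are the examples above; the session path is illustrative):
```bash
# Merge TDD metadata into the existing session file
SESSION_FILE=".workflow/WFS-auth/workflow-session.json"
jq '. + {workflow_type: "tdd", feature_count: 10, task_count: 30, tdd_chains: 10}' \
  "$SESSION_FILE" > "$SESSION_FILE.tmp" && mv "$SESSION_FILE.tmp" "$SESSION_FILE"
```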
## Output Files Structure
```
.workflow/{session-id}/
├── IMPL_PLAN.md # Standard implementation plan
├── TDD_PLAN.md # TDD-specific plan ⭐ NEW
├── TODO_LIST.md # Progress tracking with TDD phases
├── .task/
│ ├── TEST-1.1.json # Red phase task
│ ├── IMPL-1.1.json # Green phase task
│ ├── REFACTOR-1.1.json # Refactor phase task
│ └── ...
└── .process/
├── ANALYSIS_RESULTS.md # Input from concept-enhanced
├── TDD_TASK_BREAKDOWN.md # Gemini TDD breakdown ⭐ NEW
└── context-package.json # Input from context-gather
```
## Validation Rules
### Chain Completeness
- Every TEST-N.M must have corresponding IMPL-N.M and REFACTOR-N.M
### Dependency Enforcement
- IMPL-N.M must have `depends_on: ["TEST-N.M"]`
- REFACTOR-N.M must have `depends_on: ["IMPL-N.M"]`
### Task Limits
- Maximum 10 features (30 tasks total)
- Flat hierarchy only
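These rules lend themselves to a mechanical check once the task JSONs exist. A hedged sketch (not an official validator; paths follow the output structure above):
```bash
SESSION_DIR=".workflow/WFS-auth"                 # illustrative session path
for test_task in "$SESSION_DIR"/.task/TEST-*.json; do
  id=$(basename "$test_task" .json)              # e.g. TEST-1.1
  n_m=${id#TEST-}
  impl="$SESSION_DIR/.task/IMPL-$n_m.json"
  refactor="$SESSION_DIR/.task/REFACTOR-$n_m.json"

  if [ -f "$impl" ]; then
    jq -e --arg dep "$id" '.context.depends_on | index($dep)' "$impl" >/dev/null \
      || echo "IMPL-$n_m is missing depends_on [\"$id\"]"
  else
    echo "Missing IMPL-$n_m for $id"
  fi

  if [ -f "$refactor" ]; then
    jq -e --arg dep "IMPL-$n_m" '.context.depends_on | index($dep)' "$refactor" >/dev/null \
      || echo "REFACTOR-$n_m is missing depends_on [\"IMPL-$n_m\"]"
  else
    echo "Missing REFACTOR-$n_m for $id"
  fi
done
```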
## Error Handling
### Input Validation Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Session not found | Invalid session ID | Verify session exists |
| Analysis missing | Incomplete planning | Run concept-enhanced first |
### TDD Generation Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Feature count exceeds 10 | Too many features | Re-scope requirements |
| Missing test framework | No test config | Configure testing first |
| Invalid chain structure | Parsing error | Fix TDD breakdown |
## Integration & Usage
### Command Chain
- **Called By**: `/workflow:tdd-plan` (Phase 4)
- **Calls**: Gemini wrapper for TDD breakdown
- **Followed By**: `/workflow:execute`, `/workflow:tdd-verify`
### Basic Usage
```bash
# Manual mode (default)
/workflow:tools:task-generate-tdd --session WFS-auth
# Agent mode (autonomous task generation)
/workflow:tools:task-generate-tdd --session WFS-auth --agent
```
### Expected Output
```
TDD task generation complete for session: WFS-auth
Features analyzed: 5
TDD chains generated: 5
Total tasks: 15 (5 TEST + 5 IMPL + 5 REFACTOR)
Structure:
- Feature 1: TEST-1.1 → IMPL-1.1 → REFACTOR-1.1
- Feature 2: TEST-2.1 → IMPL-2.1 → REFACTOR-2.1
Plans generated:
- TDD Plan: .workflow/WFS-auth/TDD_PLAN.md
- Implementation Plan: .workflow/WFS-auth/IMPL_PLAN.md
Next: /workflow:execute or /workflow:tdd-verify
```
## Related Commands
- `/workflow:tdd-plan` - Orchestrates TDD workflow planning
- `/workflow:execute` - Executes TDD tasks in order
- `/workflow:tdd-verify` - Verifies TDD compliance

View File

@@ -67,8 +67,8 @@ Generate task JSON files and IMPL_PLAN.md from analysis results with automatic a
"title": "Descriptive task name",
"status": "pending|active|completed|blocked|container",
"meta": {
"type": "feature|bugfix|refactor|test|docs",
"agent": "@code-developer|@planning-agent|@code-review-test-agent"
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
"agent": "@code-developer|@test-fix-agent|@general-purpose"
},
"context": {
"requirements": ["Clear requirement from analysis"],
@@ -142,12 +142,12 @@ Generate task JSON files and IMPL_PLAN.md from analysis results with automatic a
},
{
"step": "analyze_task_patterns",
"action": "Analyze existing code patterns",
"action": "Analyze existing code patterns and identify modification targets",
"commands": [
"bash(cd \"[focus_paths]\")",
"bash(~/.claude/scripts/gemini-wrapper -p \"PURPOSE: Analyze patterns TASK: Review '[title]' CONTEXT: [synthesis_specification] [individual_artifacts] EXPECTED: Pattern analysis RULES: Prioritize synthesis-specification.md\")"
"bash(~/.claude/scripts/gemini-wrapper -p \"PURPOSE: Identify modification targets TASK: Analyze '[title]' and locate specific files/functions/lines to modify CONTEXT: [synthesis_specification] [individual_artifacts] EXPECTED: Code locations in format 'file:function:lines' RULES: Prioritize synthesis-specification.md, identify exact modification points\")"
],
"output_to": "task_context",
"output_to": "task_context_with_targets",
"on_error": "fail"
}
],
@@ -175,8 +175,40 @@ Generate task JSON files and IMPL_PLAN.md from analysis results with automatic a
1. Parse analysis results and extract task definitions
2. Detect brainstorming artifacts with priority scoring
3. Generate task context (requirements, focus_paths, acceptance)
4. Build flow_control with artifact loading steps
5. Create individual task JSON files in `.task/`
4. **Determine modification targets**: Extract specific code locations from analysis
5. Build flow_control with artifact loading steps and target_files
6. Create individual task JSON files in `.task/`
#### Target Files Generation (Critical)
**Purpose**: Identify specific code locations for modification AND new files to create
**Source Data Priority**:
1. **ANALYSIS_RESULTS.md** - Should contain identified code locations
2. **Gemini/MCP Analysis** - From `analyze_task_patterns` step
3. **Context Package** - File references from `focus_paths`
**Format**: `["file:function:lines"]` or `["file"]` (for new files)
- `file`: Relative path from project root (e.g., `src/auth/AuthService.ts`)
- `function`: Function/method name to modify (e.g., `login`, `validateToken`) - **omit for new files**
- `lines`: Approximate line range (e.g., `45-52`, `120-135`) - **omit for new files**
**Examples**:
```json
"target_files": [
"src/auth/AuthService.ts:login:45-52",
"src/middleware/auth.ts:validateToken:30-45",
"src/auth/PasswordReset.ts",
"tests/auth/PasswordReset.test.ts",
"tests/auth.test.ts:testLogin:15-20"
]
```
**Generation Strategy**:
- **New files to create** → Use `["path/to/NewFile.ts"]` (no function or lines)
- **Existing files with specific locations** → Use `["file:function:lines"]`
- **Existing files with function only** → Use `["file:function:*"]`; resolve line ranges later via MCP/grep
- **Existing files (explore entire)** → Mark as `["file.ts:*:*"]`
- **No specific targets** → Leave empty `[]` (agent explores focus_paths)
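Generated entries can be sanity-checked against these formats with a short script. A sketch (the task file path and jq traversal are illustrative, since `target_files` placement may vary by schema):
```bash
TASK_FILE=".workflow/WFS-auth/.task/IMPL-001.json"        # illustrative task file
jq -r '.. | objects | .target_files? // empty | .[]' "$TASK_FILE" | while IFS= read -r target; do
  case "$target" in
    *:*:*) echo "modify : $target" ;;   # file:function:lines (lines may be '*')
    *:*)   echo "check  : $target (missing lines segment)" ;;
    *)     echo "create : $target" ;;   # bare path = new file
  esac
done
```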
### Phase 3: Artifact Detection & Integration

View File

@@ -0,0 +1,284 @@
---
name: tdd-coverage-analysis
description: Analyze test coverage and TDD cycle execution
usage: /workflow:tools:tdd-coverage-analysis --session <session_id>
argument-hint: "--session WFS-session-id"
examples:
- /workflow:tools:tdd-coverage-analysis --session WFS-auth
allowed-tools: Read(*), Write(*), Bash(*)
---
# TDD Coverage Analysis Command
## Overview
Analyze test coverage and verify Red-Green-Refactor cycle execution for TDD workflow validation.
## Core Responsibilities
- Extract test files from TEST tasks
- Run test suite with coverage
- Parse coverage metrics
- Verify TDD cycle execution (Red -> Green -> Refactor)
- Generate coverage and cycle reports
## Execution Lifecycle
### Phase 1: Extract Test Tasks
```bash
find .workflow/{session_id}/.task/ -name 'TEST-*.json' -exec jq -r '.context.focus_paths[]' {} \;
```
**Output**: List of test directories/files from all TEST tasks
### Phase 2: Run Test Suite
```bash
# Node.js/JavaScript
npm test -- --coverage --json > .workflow/{session_id}/.process/test-results.json
# Python
pytest --cov --json-report > .workflow/{session_id}/.process/test-results.json
# Other frameworks (detect from project)
[test_command] --coverage --json-output .workflow/{session_id}/.process/test-results.json
```
**Output**: test-results.json with coverage data
### Phase 3: Parse Coverage Data
```bash
jq '.coverage' .workflow/{session_id}/.process/test-results.json > .workflow/{session_id}/.process/coverage-report.json
```
**Extract**:
- Line coverage percentage
- Branch coverage percentage
- Function coverage percentage
- Uncovered lines/branches
### Phase 4: Verify TDD Cycle
For each TDD chain (TEST-N.M -> IMPL-N.M -> REFACTOR-N.M):
**1. Red Phase Verification**
```bash
# Check TEST task summary
cat .workflow/{session_id}/.summaries/TEST-N.M-summary.md
```
Verify:
- Tests were created
- Tests failed initially
- Failure messages were clear
**2. Green Phase Verification**
```bash
# Check IMPL task summary
cat .workflow/{session_id}/.summaries/IMPL-N.M-summary.md
```
Verify:
- Implementation was completed
- Tests now pass
- Implementation was minimal
**3. Refactor Phase Verification**
```bash
# Check REFACTOR task summary
cat .workflow/{session_id}/.summaries/REFACTOR-N.M-summary.md
```
Verify:
- Refactoring was completed
- Tests still pass
- Code quality improved
### Phase 5: Generate Analysis Report
Create `.workflow/{session_id}/.process/tdd-cycle-report.md`:
```markdown
# TDD Cycle Analysis - {Session ID}
## Coverage Metrics
- **Line Coverage**: {percentage}%
- **Branch Coverage**: {percentage}%
- **Function Coverage**: {percentage}%
## Coverage Details
### Covered
- {covered_lines} lines
- {covered_branches} branches
- {covered_functions} functions
### Uncovered
- Lines: {uncovered_line_numbers}
- Branches: {uncovered_branch_locations}
## TDD Cycle Verification
### Feature 1: {Feature Name}
**Chain**: TEST-1.1 -> IMPL-1.1 -> REFACTOR-1.1
- [PASS] **Red Phase**: Tests created and failed initially
- [PASS] **Green Phase**: Implementation made tests pass
- [PASS] **Refactor Phase**: Refactoring maintained green tests
### Feature 2: {Feature Name}
**Chain**: TEST-2.1 -> IMPL-2.1 -> REFACTOR-2.1
- [PASS] **Red Phase**: Tests created and failed initially
- [WARN] **Green Phase**: Tests pass but implementation seems over-engineered
- [PASS] **Refactor Phase**: Refactoring maintained green tests
[Repeat for all features]
## TDD Compliance Summary
- **Total Chains**: {N}
- **Complete Cycles**: {N}
- **Incomplete Cycles**: {N}
- **Compliance Score**: {score}/100
## Gaps Identified
- Feature 3: Missing initial test failure verification
- Feature 5: No refactoring step completed
## Recommendations
- Complete missing refactoring steps
- Add edge case tests for Feature 2
- Verify test failure messages are descriptive
```
## Output Files
```
.workflow/{session-id}/
└── .process/
├── test-results.json # Raw test execution results
├── coverage-report.json # Parsed coverage data
└── tdd-cycle-report.md # TDD cycle analysis
```
## Test Framework Detection
Auto-detect test framework from project:
```bash
# Check for test frameworks
if [ -f "package.json" ] && grep -q "jest\|mocha\|vitest" package.json; then
TEST_CMD="npm test -- --coverage --json"
elif [ -f "pytest.ini" ] || [ -f "setup.py" ]; then
TEST_CMD="pytest --cov --json-report"
elif [ -f "Cargo.toml" ]; then
TEST_CMD="cargo test -- --test-threads=1 --nocapture"
elif [ -f "go.mod" ]; then
TEST_CMD="go test -coverprofile=coverage.out -json ./..."
else
TEST_CMD="echo 'No supported test framework found'"
fi
```
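The detected command can then be run into the session's process directory so Phase 3 has output to parse (sketch; `SESSION_ID` is a placeholder):
```bash
mkdir -p ".workflow/${SESSION_ID}/.process"
eval "$TEST_CMD" > ".workflow/${SESSION_ID}/.process/test-results.json" || true
```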
## TDD Cycle Verification Algorithm
```
For each feature N:
1. Load TEST-N.M-summary.md
IF summary missing:
Mark: "Red phase incomplete"
SKIP to next feature
CHECK: Contains "test" AND "fail"
IF NOT found:
Mark: "Red phase verification failed"
ELSE:
Mark: "Red phase [PASS]"
2. Load IMPL-N.M-summary.md
IF summary missing:
Mark: "Green phase incomplete"
SKIP to next feature
CHECK: Contains "pass" OR "green"
IF NOT found:
Mark: "Green phase verification failed"
ELSE:
Mark: "Green phase [PASS]"
3. Load REFACTOR-N.M-summary.md
IF summary missing:
Mark: "Refactor phase incomplete"
CONTINUE (refactor is optional)
CHECK: Contains "refactor" AND "pass"
IF NOT found:
Mark: "Refactor phase verification failed"
ELSE:
Mark: "Refactor phase [PASS]"
4. Calculate chain score:
- Red + Green + Refactor all [PASS] = 100%
- Red + Green [PASS], Refactor missing = 80%
- Red [PASS], Green missing = 40%
- All missing = 0%
```
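A bash rendering of these per-chain checks, using the same keyword heuristics (sketch; summary paths assumed from the session layout):
```bash
SESSION_DIR=".workflow/WFS-auth"                 # illustrative session path
check_phase() {                                  # $1 = summary file, remaining args = required patterns
  local file="$1"; shift
  [ -f "$file" ] || { echo "missing"; return; }
  local pattern
  for pattern in "$@"; do
    grep -Eqi "$pattern" "$file" || { echo "failed"; return; }
  done
  echo "PASS"
}

for test_summary in "$SESSION_DIR"/.summaries/TEST-*-summary.md; do
  n_m=$(basename "$test_summary" -summary.md); n_m=${n_m#TEST-}
  red=$(check_phase "$test_summary" "test" "fail")
  green=$(check_phase "$SESSION_DIR/.summaries/IMPL-$n_m-summary.md" "pass|green")
  refactor=$(check_phase "$SESSION_DIR/.summaries/REFACTOR-$n_m-summary.md" "refactor" "pass")
  echo "Feature $n_m: red=$red green=$green refactor=$refactor"
done
```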
## Coverage Metrics Calculation
```bash
# Parse coverage from test-results.json
line_coverage=$(jq '.coverage.lineCoverage' test-results.json)
branch_coverage=$(jq '.coverage.branchCoverage' test-results.json)
function_coverage=$(jq '.coverage.functionCoverage' test-results.json)
# Calculate overall score
overall_score=$(echo "($line_coverage + $branch_coverage + $function_coverage) / 3" | bc)
```
## Error Handling
### Test Execution Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Test framework not found | No test config | Configure test framework first |
| Tests fail to run | Syntax errors | Fix code before analysis |
| Coverage not available | Missing coverage tool | Install coverage plugin |
### Cycle Verification Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Summary missing | Task not executed | Execute tasks before analysis |
| Invalid summary format | Corrupted file | Re-run task to regenerate |
| No test evidence | Tests not committed | Ensure tests are committed |
## Integration & Usage
### Command Chain
- **Called By**: `/workflow:tdd-verify` (Phase 3)
- **Calls**: Test framework commands (npm test, pytest, etc.)
- **Followed By**: Compliance report generation
### Basic Usage
```bash
/workflow:tools:tdd-coverage-analysis --session WFS-auth
```
### Expected Output
```
TDD Coverage Analysis complete for session: WFS-auth
## Coverage Results
Line Coverage: 87%
Branch Coverage: 82%
Function Coverage: 91%
## TDD Cycle Verification
[PASS] Feature 1: Complete (Red -> Green -> Refactor)
[PASS] Feature 2: Complete (Red -> Green -> Refactor)
[WARN] Feature 3: Incomplete (Red -> Green, missing Refactor)
Overall Compliance: 93/100
Detailed report: .workflow/WFS-auth/.process/tdd-cycle-report.md
```
## Related Commands
- `/workflow:tdd-verify` - Uses this tool for verification
- `/workflow:tools:task-generate-tdd` - Generates tasks this tool analyzes
- `/workflow:execute` - Executes tasks before analysis

View File

@@ -6,6 +6,12 @@ type: search-guideline
# Context Search Strategy
## ⚡ Execution Environment
**CRITICAL**: All commands execute in **Bash environment** (Git Bash on Windows, Bash on Linux/macOS)
**❌ Forbidden**: Windows-specific commands (`findstr`, `dir`, `where`, `type`, `copy`, `del`) - Use Bash equivalents (`grep`, `find`, `which`, `cat`, `cp`, `rm`)
## ⚡ Core Search Tools
**rg (ripgrep)**: Fast content search with regex support
@@ -18,6 +24,7 @@ type: search-guideline
- **Use find for files** - Locate files/directories by name
- **Use grep sparingly** - Only when rg unavailable
- **Use get_modules_by_depth.sh first** - MANDATORY for program architecture analysis before planning
- **Always use Bash commands** - NEVER use Windows cmd/PowerShell commands
### Quick Command Reference
```bash

View File

@@ -41,19 +41,21 @@ type: strategic-guideline
### Standard Format (REQUIRED)
```bash
# Gemini Analysis
# Gemini Analysis (full permissions)
cd [directory] && ~/.claude/scripts/gemini-wrapper -p "
PURPOSE: [clear analysis goal]
TASK: [specific analysis task]
MODE: [analysis|write]
CONTEXT: [file references and memory context]
EXPECTED: [expected output]
RULES: [template reference and constraints]
"
# Qwen Architecture & Code Generation
# Qwen Architecture Analysis (analysis only)
cd [directory] && ~/.claude/scripts/qwen-wrapper -p "
PURPOSE: [clear architecture/code goal]
TASK: [specific architecture/code task]
PURPOSE: [clear architecture goal]
TASK: [specific analysis task]
MODE: analysis
CONTEXT: [file references and memory context]
EXPECTED: [expected deliverables]
RULES: [template reference and constraints]
@@ -63,6 +65,7 @@ RULES: [template reference and constraints]
codex -C [directory] --full-auto exec "
PURPOSE: [clear development goal]
TASK: [specific development task]
MODE: [auto|write]
CONTEXT: [file references and memory context]
EXPECTED: [expected deliverables]
RULES: [template reference and constraints]
@@ -72,10 +75,26 @@ RULES: [template reference and constraints]
### Template Structure
- [ ] **PURPOSE** - Clear goal and intent
- [ ] **TASK** - Specific execution task
- [ ] **MODE** - Execution mode and permission level
- [ ] **CONTEXT** - File references and memory context from previous sessions
- [ ] **EXPECTED** - Clear expected results
- [ ] **RULES** - Template reference and constraints
### MODE Field Definition
The MODE field controls execution behavior and file permissions:
**For Gemini** (full permissions, read/write):
- `analysis` (default) - Analysis, may also generate documentation
- `write` - Create/modify files (automatically enables `--approval-mode yolo`)
**For Qwen** (analysis only):
- `analysis` (default) - Architecture analysis only, no code generation
**For Codex**:
- `auto` (default) - Autonomous development with full file operations
- `write` - Test generation and file modification
### Directory Context
Tools execute in current working directory:
- **Gemini**: `cd path/to/project && ~/.claude/scripts/gemini-wrapper -p "prompt"`
@@ -150,80 +169,92 @@ When planning any coding task, **ALWAYS** integrate CLI tools:
### Common Scenarios
```bash
# Project Analysis (in current directory)
# Gemini - Code Analysis
~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Understand codebase architecture
TASK: Analyze project structure and identify patterns
MODE: analysis
CONTEXT: @{src/**/*.ts,CLAUDE.md} Previous analysis of auth system
EXPECTED: Architecture overview and integration points
RULES: $(cat "~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt") | Focus on integration points
RULES: $(cat '~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt') | Focus on integration points
"
# Project Analysis (in different directory)
cd ../other-project && ~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Compare authentication patterns
TASK: Analyze auth implementation in related project
CONTEXT: @{src/auth/**/*} Current project context from session memory
EXPECTED: Pattern comparison and recommendations
RULES: $(cat "~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt") | Focus on architectural differences
# Gemini - Generate Documentation
~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Generate API documentation
TASK: Create comprehensive API reference from code
MODE: write
CONTEXT: @{src/api/**/*}
EXPECTED: API.md with all endpoints documented
RULES: Follow project documentation standards
"
# Architecture Design (with Qwen)
# Qwen - Architecture Analysis
cd src/auth && ~/.claude/scripts/qwen-wrapper -p "
PURPOSE: Design authentication system architecture
TASK: Create modular JWT-based auth system design
PURPOSE: Analyze authentication system architecture
TASK: Review JWT-based auth system design
MODE: analysis
CONTEXT: @{src/auth/**/*} Existing patterns and requirements
EXPECTED: Complete architecture with code scaffolding
RULES: $(cat "~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt") | Focus on modularity and security
EXPECTED: Architecture analysis report with recommendations
RULES: $(cat '~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt') | Focus on security
"
# Feature Development (in target directory)
# Codex - Feature Development
codex -C path/to/project --full-auto exec "
PURPOSE: Implement user authentication
TASK: Create JWT-based authentication system
MODE: auto
CONTEXT: @{src/auth/**/*} Database schema from session memory
EXPECTED: Complete auth module with tests
RULES: $(cat "~/.claude/workflows/cli-templates/prompts/development/feature.txt") | Follow security best practices
RULES: $(cat '~/.claude/workflows/cli-templates/prompts/development/feature.txt') | Follow security best practices
" --skip-git-repo-check -s danger-full-access
# Code Review Preparation
~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Prepare comprehensive code review
TASK: Analyze code changes and identify potential issues
CONTEXT: @{**/*.modified} Recent changes discussed in last session
EXPECTED: Review checklist and improvement suggestions
RULES: $(cat "~/.claude/workflows/cli-templates/prompts/analysis/quality.txt") | Focus on maintainability
"
# Codex - Test Generation
codex -C src/auth --full-auto exec "
PURPOSE: Increase test coverage
TASK: Generate comprehensive tests for auth module
MODE: write
CONTEXT: @{**/*.ts} Exclude existing tests
EXPECTED: Complete test suite with 80%+ coverage
RULES: Use Jest, follow existing patterns
" --skip-git-repo-check -s danger-full-access
```
## 📋 Planning Checklist
For every development task:
- [ ] **Purpose defined** - Clear goal and intent
- [ ] **Mode selected** - Execution mode and permission level determined
- [ ] **Context gathered** - File references and session memory documented
- [ ] **Gemini analysis** completed for understanding
- [ ] **Template selected** - Appropriate template chosen
- [ ] **Constraints specified** - File patterns, scope, requirements
- [ ] **Implementation approach** - Tool selection and workflow
- [ ] **Quality measures** - Testing and validation plan
- [ ] **Tool configuration** - Review `.gemini/GEMINI.md` or `.codex/AGENTS.md` if needed
## 🎯 Key Features
### Gemini
### Gemini (Full Permissions)
- **Command**: `~/.claude/scripts/gemini-wrapper`
- **Strengths**: Large context window, pattern recognition
- **Best For**: Analysis, architecture review, code exploration
- **Best For**: Analysis, documentation generation, code exploration
- **Permissions**: Read/write; MODE=write automatically enables `--approval-mode yolo`
- **Default MODE**: `analysis`
### Qwen
### Qwen (Analysis Only)
- **Command**: `~/.claude/scripts/qwen-wrapper`
- **Strengths**: Architecture analysis, code generation, implementation patterns
- **Best For**: System design, code scaffolding, architectural planning
- **Strengths**: Architecture analysis, pattern recognition
- **Best For**: System design analysis, architectural review
- **Permissions**: Analysis only; no code generation
- **Default MODE**: `analysis`
### Codex
- **Command**: `codex --full-auto exec`
- **Strengths**: Autonomous development, mathematical reasoning
- **Best For**: Implementation, testing, automation
- **Required**: `-s danger-full-access` and `--skip-git-repo-check` for development
- **Default MODE**: `auto`
### File Patterns
- All files: `@{**/*}`
@@ -250,15 +281,33 @@ For every development task:
**Example**:
```bash
# Focused analysis (preferred)
cd src/auth && ~/.claude/scripts/gemini-wrapper -p "analyze auth patterns"
# Gemini - Focused analysis
cd src/auth && ~/.claude/scripts/gemini-wrapper -p "
PURPOSE: Understand authentication patterns
TASK: Analyze auth implementation
MODE: analysis
CONTEXT: @{**/*.ts}
EXPECTED: Pattern documentation
RULES: Focus on security best practices
"
# Focused architecture (Qwen)
cd src/auth && ~/.claude/scripts/qwen-wrapper -p "design auth architecture"
# Qwen - Architecture analysis
cd src/auth && ~/.claude/scripts/qwen-wrapper -p "
PURPOSE: Analyze auth architecture
TASK: Review auth system design and patterns
MODE: analysis
CONTEXT: @{**/*}
EXPECTED: Architecture analysis report
RULES: Focus on modularity and security
"
# Focused implementation (Codex)
codex -C src/auth --full-auto exec "analyze auth implementation" --skip-git-repo-check
# Multi-scope (stay in root)
~/.claude/scripts/gemini-wrapper -p "CONTEXT: @{src/auth/**/*,src/api/**/*}"
# Codex - Implementation
codex -C src/auth --full-auto exec "
PURPOSE: Improve auth implementation
TASK: Review and enhance auth code
MODE: auto
CONTEXT: @{**/*.ts}
EXPECTED: Code improvements and fixes
RULES: Maintain backward compatibility
" --skip-git-repo-check -s danger-full-access
```

View File

@@ -13,8 +13,8 @@ All task files use this simplified 5-field schema (aligned with workflow-archite
"status": "pending|active|completed|blocked|container",
"meta": {
"type": "feature|bugfix|refactor|test|docs",
"agent": "@code-developer|@planning-agent|@code-review-test-agent"
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
"agent": "@code-developer|@action-planning-agent|@test-fix-agent|@general-purpose"
},
"context": {
@@ -49,7 +49,8 @@ All task files use this simplified 5-field schema (aligned with workflow-archite
},
"target_files": [
"src/auth/login.ts:handleLogin:75-120",
"src/middleware/auth.ts:validateToken"
"src/middleware/auth.ts:validateToken",
"src/auth/PasswordReset.ts"
]
}
}
@@ -81,7 +82,7 @@ All task files use this simplified 5-field schema (aligned with workflow-archite
**Components**:
- **pre_analysis**: Array of sequential process steps
- **implementation_approach**: Task execution strategy
- **target_files**: Specific files to modify in "file:function:lines" format
- **target_files**: Files to modify/create - existing files in `file:function:lines` format, new files as `file` only
**Step Structure**:
```json
@@ -145,17 +146,17 @@ Tasks inherit from:
## Agent Mapping
### Automatic Agent Selection
- **@code-developer**: Implementation tasks, coding
- **@planning-agent**: Design, architecture planning
- **@code-review-test-agent**: Testing, validation
- **@review-agent**: Code review, quality checks
- **@code-developer**: Implementation tasks, coding, test writing
- **@action-planning-agent**: Design, architecture planning
- **@test-fix-agent**: Test execution, failure diagnosis, code fixing
- **@general-purpose**: Optional manual review (only when explicitly requested)
### Agent Context Filtering
Each agent receives tailored context:
- **@code-developer**: Complete implementation details
- **@planning-agent**: High-level requirements, risks
- **@test-agent**: Files to test, logic flows to validate
- **@review-agent**: Quality standards, security considerations
- **@code-developer**: Complete implementation details, test requirements
- **@action-planning-agent**: High-level requirements, risks, architecture
- **@test-fix-agent**: Test execution, failure diagnosis, code fixing
- **@general-purpose**: Quality standards, security considerations (when requested)
## Deprecated Fields

View File

@@ -113,8 +113,8 @@ All task files use this unified 5-field schema with optional artifacts enhanceme
"status": "pending|active|completed|blocked|container",
"meta": {
"type": "feature|bugfix|refactor|test|docs",
"agent": "code-developer|planning-agent|code-review-test-agent"
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
"agent": "@code-developer|@action-planning-agent|@test-fix-agent|@general-purpose"
},
"context": {
@@ -175,7 +175,8 @@ All task files use this unified 5-field schema with optional artifacts enhanceme
},
"target_files": [
"src/auth/login.ts:handleLogin:75-120",
"src/middleware/auth.ts:validateToken"
"src/middleware/auth.ts:validateToken",
"src/auth/PasswordReset.ts"
]
}
}
@@ -219,7 +220,7 @@ The **flow_control** field manages task execution with two main components:
- **task_description**: Comprehensive implementation description
- **modification_points**: Specific code modification targets
- **logic_flow**: Business logic execution sequence
- **target_files**: Target file list in `file:function:lines` format
- **target_files**: Target file list - existing files in `file:function:lines` format, new files as `file` only
#### Tool Reference
**Command Types Available**:
@@ -420,10 +421,10 @@ fi
### Agent Assignment
Based on task type and title keywords:
- **Planning tasks** → @planning-agent
- **Implementation** → @code-developer
- **Testing** → @code-review-test-agent
- **Review** → @review-agent
- **Planning tasks** → @action-planning-agent
- **Implementation** → @code-developer (code + tests)
- **Test execution/fixing** → @test-fix-agent
- **Review** → @general-purpose (optional, only when explicitly requested)
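Expressed as a simple lookup over `meta.type` (illustrative sketch; actual selection also weighs title keywords):
```bash
select_agent() {                    # $1 = meta.type
  case "$1" in
    feature|bugfix|refactor|test-gen) echo "@code-developer" ;;
    test-fix)                         echo "@test-fix-agent" ;;
    review)                           echo "@general-purpose" ;;   # only when explicitly requested
    *)                                echo "@code-developer" ;;
  esac
}
```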
### Execution Context
Agents receive complete task JSON plus workflow context:

.codex/AGENTS.md Normal file
View File

@@ -0,0 +1,215 @@
# Codex Agent Execution Protocol
## Overview
**Role**: Codex - autonomous development, implementation, and testing
## Prompt Structure
**Receive prompts in this format**:
```
PURPOSE: [development goal]
TASK: [specific implementation task]
MODE: [auto|write]
CONTEXT: [file patterns]
EXPECTED: [deliverables]
RULES: [constraints and templates]
```
## Execution Requirements
### ALWAYS
- **Parse all six fields** - Understand PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES
- **Follow MODE strictly** - Respect execution boundaries
- **Study CONTEXT files** - Find 3+ similar patterns before implementing
- **Apply RULES** - Follow templates and constraints exactly
- **Test continuously** - Run tests after every change
- **Commit incrementally** - Small, working commits
- **Match project style** - Follow existing patterns exactly
- **Validate EXPECTED** - Ensure all deliverables are met
### NEVER
- **Make assumptions** - Verify with existing code
- **Ignore existing patterns** - Study before implementing
- **Skip tests** - Tests are mandatory
- **Use clever tricks** - Choose boring, obvious solutions
- **Over-engineer** - Simple solutions over complex architectures
- **Break existing code** - Ensure backward compatibility
- **Exceed 3 attempts** - Stop and reassess if blocked 3 times
## MODE Behavior
### MODE: auto (default)
**Permissions**:
- Full file operations (create/modify/delete)
- Run tests and builds
- Commit code incrementally
**Execute**:
1. Parse PURPOSE and TASK
2. Analyze CONTEXT files - find 3+ similar patterns
3. Plan implementation approach
4. Generate code following RULES and project patterns
5. Write tests alongside code
6. Run tests continuously
7. Commit working code incrementally
8. Validate all EXPECTED deliverables
9. Report results
**Use For**: Feature implementation, bug fixes, refactoring
### MODE: write
**Permissions**:
- Focused file operations
- Create/modify specific files
- Run tests for validation
**Execute**:
1. Analyze CONTEXT files
2. Make targeted changes
3. Validate tests pass
4. Report file changes
**Use For**: Test generation, documentation updates, targeted fixes
## RULES Processing
- **Parse the RULES field** to identify template content and additional constraints
- **Recognize `|` as separator** between template and additional constraints
- **ALWAYS apply all template guidelines** provided in the prompt
- **ALWAYS apply all additional constraints** specified after `|`
- **Treat all rules as mandatory** - both template and constraints must be followed
- **Failure to follow any rule** constitutes task failure
## Error Handling
### Three-Attempt Rule
**On 3rd failed attempt**:
1. **Stop execution**
2. **Report status**: What was attempted, what failed, root cause
3. **Request guidance**: Ask for clarification, suggest alternatives
### Recovery Strategies
**Syntax/Type Errors**:
1. Review and fix errors
2. Re-run tests
3. Validate build succeeds
**Runtime Errors**:
1. Analyze stack trace
2. Add error handling
3. Add tests for error cases
**Test Failures**:
1. Debug in isolation
2. Review test setup
3. Fix implementation or test
**Build Failures**:
1. Check error messages
2. Fix incrementally
3. Validate each fix
## Progress Reporting
### During Execution
```
[1/5] Analyzing existing code patterns...
[2/5] Planning implementation approach...
[3/5] Generating code...
[4/5] Writing tests...
[5/5] Running validation...
```
### On Success
```
✅ Task completed
Changes:
- Created: [files with line counts]
- Modified: [files with changes]
Validation:
✅ Tests: [count] passing
✅ Coverage: [percentage]
✅ Build: Success
Next Steps: [recommendations]
```
### On Partial Completion
```
⚠️ Task partially completed
Completed: [what worked]
Blocked: [what failed and why]
Required: [what's needed]
Recommendation: [next steps]
```
## Quality Standards
### Code Quality
- Follow project's existing patterns
- Match import style and naming conventions
- Single responsibility per function/class
- DRY (Don't Repeat Yourself)
- YAGNI (You Aren't Gonna Need It)
### Testing
- Test all public functions
- Test edge cases and error conditions
- Mock external dependencies
- Target 80%+ coverage
### Error Handling
- Proper try-catch blocks
- Clear error messages
- Graceful degradation
- Don't expose sensitive info
## Philosophy
- **Incremental progress over big bangs** - Small, testable changes
- **Learning from existing code** - Study 3+ patterns before implementing
- **Pragmatic over dogmatic** - Adapt to project reality
- **Clear intent over clever code** - Boring, obvious solutions
- **Simple over complex** - Avoid over-engineering
- **Follow existing style** - Match project patterns exactly
## Execution Checklist
### Before Implementation
- [ ] Understand PURPOSE and TASK clearly
- [ ] Review all CONTEXT files
- [ ] Find 3+ similar patterns in codebase
- [ ] Check RULES templates and constraints
- [ ] Plan implementation approach
### During Implementation
- [ ] Follow existing patterns exactly
- [ ] Write tests alongside code
- [ ] Run tests after every change
- [ ] Commit working code incrementally
- [ ] Handle errors properly
### After Implementation
- [ ] Run full test suite - all pass
- [ ] Check coverage - meets target
- [ ] Run build - succeeds
- [ ] Review EXPECTED - all deliverables met
---
**Version**: 2.0.0
**Last Updated**: 2025-10-02

.gemini/GEMINI.md Normal file
View File

@@ -0,0 +1,143 @@
# Gemini Execution Protocol
## Overview
**Role**: Gemini - code analysis and documentation generation
## Prompt Structure
**Receive prompts in this format**:
```
PURPOSE: [goal statement]
TASK: [specific task]
MODE: [analysis|write]
CONTEXT: [file patterns]
EXPECTED: [deliverables]
RULES: [constraints and templates]
```
## Execution Requirements
### ALWAYS
- **Parse all six fields** - Understand PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES
- **Follow MODE strictly** - Respect permission boundaries
- **Analyze CONTEXT files** - Read all matching patterns thoroughly
- **Apply RULES** - Follow templates and constraints exactly
- **Provide evidence** - Quote code with file:line references
- **Match EXPECTED** - Deliver exactly what's requested
### NEVER
- **Assume behavior** - Verify with actual code
- **Ignore CONTEXT** - Stay within specified file patterns
- **Skip RULES** - Templates are mandatory when provided
- **Make unsubstantiated claims** - Always back with code references
- **Deviate from MODE** - Respect read/write boundaries
## MODE Behavior
### MODE: analysis (default)
**Permissions**:
- Read all CONTEXT files
- Create/modify documentation files
**Execute**:
1. Read and analyze CONTEXT files
2. Identify patterns and issues
3. Generate insights and recommendations
4. Create documentation if needed
5. Output structured analysis
**Constraint**: Do NOT modify source code files
### MODE: write
**Permissions**:
- Full file operations
- Create/modify any files
**Execute**:
1. Read CONTEXT files
2. Perform requested file operations
3. Create/modify files as specified
4. Validate changes
5. Report file changes
## Output Format
### Standard Analysis Structure
```markdown
# Analysis: [TASK Title]
## Summary
[2-3 sentence overview]
## Key Findings
1. [Finding] - path/to/file:123
2. [Finding] - path/to/file:456
## Detailed Analysis
[Evidence-based analysis with code quotes]
## Recommendations
1. [Actionable recommendation]
2. [Actionable recommendation]
```
### Code References
Always use format: `path/to/file:line_number`
Example: "Authentication logic at `src/auth/jwt.ts:45` uses deprecated algorithm"
## RULES Processing
- **Parse the RULES field** to identify template content and additional constraints
- **Recognize `|` as separator** between template and additional constraints
- **ALWAYS apply all template guidelines** provided in the prompt
- **ALWAYS apply all additional constraints** specified after `|`
- **Treat all rules as mandatory** - both template and constraints must be followed
- **Failure to follow any rule** constitutes task failure
## Error Handling
**File Not Found**:
- Report missing files
- Continue with available files
- Note in output
**Invalid CONTEXT Pattern**:
- Report invalid pattern
- Request correction
- Do not guess
## Quality Standards
### Thoroughness
- Analyze ALL files in CONTEXT
- Check cross-file patterns
- Identify edge cases
- Quantify when possible
### Evidence-Based
- Quote relevant code
- Provide file:line references
- Link related patterns
### Actionable
- Clear recommendations
- Prioritized by impact
- Specific, not vague
## Philosophy
- **Incremental over big bangs** - Suggest small, testable changes
- **Learn from existing code** - Reference project patterns
- **Pragmatic over dogmatic** - Adapt to project reality
- **Clear over clever** - Prefer obvious solutions
- **Simple over complex** - Avoid over-engineering

View File

@@ -5,6 +5,166 @@ All notable changes to Claude Code Workflow (CCW) will be documented in this fil
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [3.2.0] - 2025-10-02
### 🔄 Test-Fix Workflow & Agent Architecture Simplification
This release simplifies the agent architecture and introduces an automated test-fix workflow based on the principle "Tests Are the Review".
#### Added
**New Agent: test-fix-agent**:
- **Purpose**: Execute tests, diagnose failures, and fix code until all tests pass
- **Philosophy**: When all tests pass, code is automatically approved (no separate review needed)
- **Responsibilities**:
- Execute complete test suite for implemented modules
- Parse test output and identify failures
- Diagnose root cause of test failures
- Modify source code to fix issues
- Re-run tests to verify fixes
- Certify code approval when all tests pass
**Enhanced test-gen Command**:
- Transforms from planning tool to workflow orchestrator
- Auto-generates TEST-FIX tasks for test-fix-agent
- Automatically executes test validation via `/workflow:execute`
- Eliminates manual planning document generation
**New Task Types**:
- `test-gen`: Test generation tasks (handled by @code-developer)
- `test-fix`: Test execution and fixing tasks (handled by @test-fix-agent)
#### Changed
**Agent Architecture Simplification**:
- **Removed**: `@code-review-agent` and `@code-review-test-agent`
- Testing now serves as the quality gate
- Passing tests = approved code
- **Enhanced**: `@code-developer` now writes implementation + tests together
- Unified generative work (code + tests)
- Maintains context continuity
- **Added**: `@general-purpose` for optional manual reviews
- Used only when explicitly requested
- Handles special cases and edge scenarios
**Task Type Updates**:
- `"test"``"test-gen"` (clearer distinction from test-fix)
- Agent mapping updated across all commands:
- `feature|bugfix|refactor|test-gen``@code-developer`
- `test-fix``@test-fix-agent`
- `review``@general-purpose` (optional)
**Workflow Changes**:
```
Old: code-developer → test-agent → code-review-agent
New: code-developer (code+tests) → test-fix-agent (execute+fix) → ✅ approved
```
#### Removed
- `@code-review-agent` - Testing serves as quality gate
- `@code-review-test-agent` - Functionality split between code-developer and test-fix-agent
- Separate review step - Tests passing = code approved
---
## [3.1.0] - 2025-10-02
### 🧪 TDD Workflow Support
This release introduces comprehensive Test-Driven Development (TDD) workflow support with Red-Green-Refactor cycle enforcement.
#### Added
**TDD Workflow Commands**:
- **`/workflow:tdd-plan`**: 5-phase TDD planning orchestrator
- Creates structured TDD workflow with TEST → IMPL → REFACTOR task chains
- Enforces Red-Green-Refactor methodology through task dependencies
- Supports both manual and agent modes (`--agent` flag)
- Validates TDD structure (chains, dependencies, meta fields)
- Outputs: `TDD_PLAN.md`, `IMPL_PLAN.md`, `TODO_LIST.md`
- **`/workflow:tdd-verify`**: 4-phase TDD compliance verification
- Validates task chain structure (TEST-N.M → IMPL-N.M → REFACTOR-N.M)
- Analyzes test coverage metrics (line, branch, function coverage)
- Verifies Red-Green-Refactor cycle execution
- Generates comprehensive compliance report with scoring (0-100)
- Outputs: `TDD_COMPLIANCE_REPORT.md`
**TDD Tool Commands**:
- **`/workflow:tools:task-generate-tdd`**: TDD task chain generator
- Uses Gemini AI to analyze requirements and create TDD breakdowns
- Generates TEST, IMPL, REFACTOR tasks with proper dependencies
- Creates task JSONs with `meta.tdd_phase` field ("red"/"green"/"refactor")
- Assigns specialized agents (`@code-review-test-agent`, `@code-developer`)
- Maximum 10 features (30 total tasks) per workflow
- **`/workflow:tools:tdd-coverage-analysis`**: Test coverage and cycle analysis
- Extracts test files from TEST tasks
- Runs test suite with coverage (supports npm, pytest, cargo, go test)
- Parses coverage metrics (line, branch, function)
- Verifies TDD cycle execution through task summaries
- Outputs: `test-results.json`, `coverage-report.json`, `tdd-cycle-report.md`
**TDD Architecture**:
- **Task ID Format**: `TEST-N.M`, `IMPL-N.M`, `REFACTOR-N.M`
- N = feature number (1-10)
- M = sub-task number (1-N)
- **Dependency System**:
- `IMPL-N.M` depends on `TEST-N.M`
- `REFACTOR-N.M` depends on `IMPL-N.M`
- Enforces execution order: Red → Green → Refactor
- **Meta Fields**:
- `meta.tdd_phase`: "red" | "green" | "refactor"
- `meta.agent`: "@code-review-test-agent" | "@code-developer"
**Compliance Scoring**:
```
Base Score: 100 points
Deductions:
- Missing TEST task: -30 points per feature
- Missing IMPL task: -30 points per feature
- Missing REFACTOR task: -10 points per feature
- Wrong dependency: -15 points per error
- Wrong agent: -5 points per error
- Wrong tdd_phase: -5 points per error
- Test didn't fail initially: -10 points per feature
- Tests didn't pass after IMPL: -20 points per feature
- Tests broke during REFACTOR: -15 points per feature
```
#### Changed
**Documentation Updates**:
- Updated README.md with TDD workflow section
- Added TDD Quick Start guide
- Updated command reference with TDD commands
- Version badge updated to v3.1.0
**Integration**:
- TDD commands work alongside standard workflow commands
- Compatible with `/workflow:execute`, `/workflow:status`, `/workflow:resume`
- Uses same session management and artifact system
#### Benefits
**TDD Best Practices**:
- ✅ Enforced test-first development through task dependencies
- ✅ Automated Red-Green-Refactor cycle verification
- ✅ Comprehensive test coverage analysis
- ✅ Quality scoring and compliance reporting
- ✅ AI-powered task breakdown with TDD focus
**Developer Experience**:
- 🚀 Quick TDD workflow creation with single command
- 📊 Detailed compliance reports with actionable recommendations
- 🔄 Seamless integration with existing workflow system
- 🧪 Multi-framework test support (Jest, Pytest, Cargo, Go)
---
## [3.0.1] - 2025-10-01
### 🔧 Command Updates

View File

@@ -156,22 +156,34 @@ function Test-Prerequisites {
Write-ColorOutput "Current version: $($PSVersionTable.PSVersion)" $ColorError
return $false
}
# Test source files exist
$sourceDir = $PSScriptRoot
$claudeDir = Join-Path $sourceDir ".claude"
$claudeMd = Join-Path $sourceDir "CLAUDE.md"
$codexDir = Join-Path $sourceDir ".codex"
$geminiDir = Join-Path $sourceDir ".gemini"
if (-not (Test-Path $claudeDir)) {
Write-ColorOutput "ERROR: .claude directory not found in $sourceDir" $ColorError
return $false
}
if (-not (Test-Path $claudeMd)) {
Write-ColorOutput "ERROR: CLAUDE.md file not found in $sourceDir" $ColorError
return $false
}
if (-not (Test-Path $codexDir)) {
Write-ColorOutput "ERROR: .codex directory not found in $sourceDir" $ColorError
return $false
}
if (-not (Test-Path $geminiDir)) {
Write-ColorOutput "ERROR: .gemini directory not found in $sourceDir" $ColorError
return $false
}
Write-ColorOutput "Prerequisites check passed" $ColorSuccess
return $true
}
@@ -617,6 +629,8 @@ function Install-Global {
$userProfile = [Environment]::GetFolderPath("UserProfile")
$globalClaudeDir = Join-Path $userProfile ".claude"
$globalClaudeMd = Join-Path $globalClaudeDir "CLAUDE.md"
$globalCodexDir = Join-Path $userProfile ".codex"
$globalGeminiDir = Join-Path $userProfile ".gemini"
Write-ColorOutput "Global installation path: $userProfile" $ColorInfo
@@ -624,12 +638,23 @@ function Install-Global {
$sourceDir = $PSScriptRoot
$sourceClaudeDir = Join-Path $sourceDir ".claude"
$sourceClaudeMd = Join-Path $sourceDir "CLAUDE.md"
$sourceCodexDir = Join-Path $sourceDir ".codex"
$sourceGeminiDir = Join-Path $sourceDir ".gemini"
# Create backup folder if needed (default behavior unless NoBackup is specified)
$backupFolder = $null
if (-not $NoBackup) {
if (Test-Path $globalClaudeDir) {
$existingFiles = Get-ChildItem $globalClaudeDir -Recurse -File -ErrorAction SilentlyContinue
if ((Test-Path $globalClaudeDir) -or (Test-Path $globalCodexDir) -or (Test-Path $globalGeminiDir)) {
$existingFiles = @()
if (Test-Path $globalClaudeDir) {
$existingFiles += Get-ChildItem $globalClaudeDir -Recurse -File -ErrorAction SilentlyContinue
}
if (Test-Path $globalCodexDir) {
$existingFiles += Get-ChildItem $globalCodexDir -Recurse -File -ErrorAction SilentlyContinue
}
if (Test-Path $globalGeminiDir) {
$existingFiles += Get-ChildItem $globalGeminiDir -Recurse -File -ErrorAction SilentlyContinue
}
if (($existingFiles -and ($existingFiles | Measure-Object).Count -gt 0)) {
$backupFolder = Get-BackupDirectory -TargetDirectory $userProfile
Write-ColorOutput "Backup folder created: $backupFolder" $ColorInfo
@@ -649,6 +674,14 @@ function Install-Global {
Write-ColorOutput "Installing CLAUDE.md to global .claude directory..." $ColorInfo
$claudeMdInstalled = Copy-FileToDestination -Source $sourceClaudeMd -Destination $globalClaudeMd -Description "CLAUDE.md" -BackupFolder $backupFolder
# Merge .codex directory contents
Write-ColorOutput "Merging .codex directory contents..." $ColorInfo
$codexMerged = Merge-DirectoryContents -Source $sourceCodexDir -Destination $globalCodexDir -Description ".codex directory contents" -BackupFolder $backupFolder
# Merge .gemini directory contents
Write-ColorOutput "Merging .gemini directory contents..." $ColorInfo
$geminiMerged = Merge-DirectoryContents -Source $sourceGeminiDir -Destination $globalGeminiDir -Description ".gemini directory contents" -BackupFolder $backupFolder
if ($backupFolder -and (Test-Path $backupFolder)) {
$backupFiles = Get-ChildItem $backupFolder -Recurse -File -ErrorAction SilentlyContinue
if (-not $backupFiles -or ($backupFiles | Measure-Object).Count -eq 0) {
@@ -679,14 +712,18 @@ function Install-Path {
$sourceDir = $PSScriptRoot
$sourceClaudeDir = Join-Path $sourceDir ".claude"
$sourceClaudeMd = Join-Path $sourceDir "CLAUDE.md"
$sourceCodexDir = Join-Path $sourceDir ".codex"
$sourceGeminiDir = Join-Path $sourceDir ".gemini"
# Local paths - only for agents, commands, output-styles
# Local paths - for agents, commands, output-styles, .codex, .gemini
$localClaudeDir = Join-Path $TargetDirectory ".claude"
$localCodexDir = Join-Path $TargetDirectory ".codex"
$localGeminiDir = Join-Path $TargetDirectory ".gemini"
# Create backup folder if needed
$backupFolder = $null
if (-not $NoBackup) {
if ((Test-Path $localClaudeDir) -or (Test-Path $globalClaudeDir)) {
if ((Test-Path $localClaudeDir) -or (Test-Path $localCodexDir) -or (Test-Path $localGeminiDir) -or (Test-Path $globalClaudeDir)) {
$backupFolder = Get-BackupDirectory -TargetDirectory $TargetDirectory
Write-ColorOutput "Backup folder created: $backupFolder" $ColorInfo
}
@@ -774,6 +811,14 @@ function Install-Path {
Write-ColorOutput "Installing CLAUDE.md to global .claude directory..." $ColorInfo
Copy-FileToDestination -Source $sourceClaudeMd -Destination $globalClaudeMd -Description "CLAUDE.md" -BackupFolder $backupFolder
# Merge .codex directory contents to local location
Write-ColorOutput "Merging .codex directory contents to local location..." $ColorInfo
$codexMerged = Merge-DirectoryContents -Source $sourceCodexDir -Destination $localCodexDir -Description ".codex directory contents" -BackupFolder $backupFolder
# Merge .gemini directory contents to local location
Write-ColorOutput "Merging .gemini directory contents to local location..." $ColorInfo
$geminiMerged = Merge-DirectoryContents -Source $sourceGeminiDir -Destination $localGeminiDir -Description ".gemini directory contents" -BackupFolder $backupFolder
if ($backupFolder -and (Test-Path $backupFolder)) {
$backupFiles = Get-ChildItem $backupFolder -Recurse -File -ErrorAction SilentlyContinue
if (-not $backupFiles -or ($backupFiles | Measure-Object).Count -eq 0) {
@@ -879,10 +924,11 @@ function Show-Summary {
if ($Mode -eq "Path") {
Write-Host " Local Path: $Path"
Write-Host " Global Path: $([Environment]::GetFolderPath('UserProfile'))"
Write-Host " Local Components: agents, commands, output-styles"
Write-Host " Local Components: agents, commands, output-styles, .codex, .gemini"
Write-Host " Global Components: workflows, scripts, python_script, etc."
} else {
Write-Host " Path: $Path"
Write-Host " Global Components: .claude, .codex, .gemini"
}
if ($NoBackup) {
@@ -896,10 +942,12 @@ function Show-Summary {
Write-Host ""
Write-ColorOutput "Next steps:" $ColorInfo
Write-Host "1. Review CLAUDE.md - Customize guidelines for your project"
Write-Host "2. Configure settings - Edit .claude/settings.local.json as needed"
Write-Host "3. Start using Claude Code with Agent workflow coordination!"
Write-Host "4. Use /workflow commands for task execution"
Write-Host "5. Use /update-memory commands for memory system management"
Write-Host "2. Review .codex/Agent.md - Codex agent execution protocol"
Write-Host "3. Review .gemini/CLAUDE.md - Gemini agent execution protocol"
Write-Host "4. Configure settings - Edit .claude/settings.local.json as needed"
Write-Host "5. Start using Claude Code with Agent workflow coordination!"
Write-Host "6. Use /workflow commands for task execution"
Write-Host "7. Use /update-memory commands for memory system management"
Write-Host ""
Write-ColorOutput "Documentation: https://github.com/catlog22/Claude-CCW" $ColorInfo

View File

@@ -97,6 +97,8 @@ function test_prerequisites() {
local script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
local claude_dir="$script_dir/.claude"
local claude_md="$script_dir/CLAUDE.md"
local codex_dir="$script_dir/.codex"
local gemini_dir="$script_dir/.gemini"
if [ ! -d "$claude_dir" ]; then
write_color "ERROR: .claude directory not found in $script_dir" "$COLOR_ERROR"
@@ -108,6 +110,16 @@ function test_prerequisites() {
return 1
fi
if [ ! -d "$codex_dir" ]; then
write_color "ERROR: .codex directory not found in $script_dir" "$COLOR_ERROR"
return 1
fi
if [ ! -d "$gemini_dir" ]; then
write_color "ERROR: .gemini directory not found in $script_dir" "$COLOR_ERROR"
return 1
fi
write_color "✓ Prerequisites check passed" "$COLOR_SUCCESS"
return 0
}
@@ -389,6 +401,8 @@ function install_global() {
local user_home="$HOME"
local global_claude_dir="${user_home}/.claude"
local global_claude_md="${global_claude_dir}/CLAUDE.md"
local global_codex_dir="${user_home}/.codex"
local global_gemini_dir="${user_home}/.gemini"
write_color "Global installation path: $user_home" "$COLOR_INFO"
@@ -396,14 +410,25 @@ function install_global() {
local script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
local source_claude_dir="${script_dir}/.claude"
local source_claude_md="${script_dir}/CLAUDE.md"
local source_codex_dir="${script_dir}/.codex"
local source_gemini_dir="${script_dir}/.gemini"
# Create backup folder if needed
local backup_folder=""
if [ "$NO_BACKUP" = false ]; then
local has_existing_files=false
if [ -d "$global_claude_dir" ] && [ "$(ls -A "$global_claude_dir" 2>/dev/null)" ]; then
backup_folder=$(get_backup_directory "$user_home")
write_color "Backup folder created: $backup_folder" "$COLOR_INFO"
has_existing_files=true
elif [ -d "$global_codex_dir" ] && [ "$(ls -A "$global_codex_dir" 2>/dev/null)" ]; then
has_existing_files=true
elif [ -d "$global_gemini_dir" ] && [ "$(ls -A "$global_gemini_dir" 2>/dev/null)" ]; then
has_existing_files=true
elif [ -f "$global_claude_md" ]; then
has_existing_files=true
fi
if [ "$has_existing_files" = true ]; then
backup_folder=$(get_backup_directory "$user_home")
write_color "Backup folder created: $backup_folder" "$COLOR_INFO"
fi
@@ -417,6 +442,14 @@ function install_global() {
write_color "Installing CLAUDE.md to global .claude directory..." "$COLOR_INFO"
copy_file_to_destination "$source_claude_md" "$global_claude_md" "CLAUDE.md" "$backup_folder"
# Merge .codex directory contents
write_color "Merging .codex directory contents..." "$COLOR_INFO"
merge_directory_contents "$source_codex_dir" "$global_codex_dir" ".codex directory contents" "$backup_folder"
# Merge .gemini directory contents
write_color "Merging .gemini directory contents..." "$COLOR_INFO"
merge_directory_contents "$source_gemini_dir" "$global_gemini_dir" ".gemini directory contents" "$backup_folder"
# Remove empty backup folder
if [ -n "$backup_folder" ] && [ -d "$backup_folder" ]; then
if [ -z "$(ls -A "$backup_folder" 2>/dev/null)" ]; then
@@ -442,14 +475,18 @@ function install_path() {
local script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
local source_claude_dir="${script_dir}/.claude"
local source_claude_md="${script_dir}/CLAUDE.md"
local source_codex_dir="${script_dir}/.codex"
local source_gemini_dir="${script_dir}/.gemini"
# Local paths
local local_claude_dir="${target_dir}/.claude"
local local_codex_dir="${target_dir}/.codex"
local local_gemini_dir="${target_dir}/.gemini"
# Create backup folder if needed
local backup_folder=""
if [ "$NO_BACKUP" = false ]; then
if [ -d "$local_claude_dir" ] || [ -d "$global_claude_dir" ]; then
if [ -d "$local_claude_dir" ] || [ -d "$local_codex_dir" ] || [ -d "$local_gemini_dir" ] || [ -d "$global_claude_dir" ]; then
backup_folder=$(get_backup_directory "$target_dir")
write_color "Backup folder created: $backup_folder" "$COLOR_INFO"
fi
@@ -530,6 +567,14 @@ function install_path() {
write_color "Installing CLAUDE.md to global .claude directory..." "$COLOR_INFO"
copy_file_to_destination "$source_claude_md" "$global_claude_md" "CLAUDE.md" "$backup_folder"
# Merge .codex directory contents to local location
write_color "Merging .codex directory contents to local location..." "$COLOR_INFO"
merge_directory_contents "$source_codex_dir" "$local_codex_dir" ".codex directory contents" "$backup_folder"
# Merge .gemini directory contents to local location
write_color "Merging .gemini directory contents to local location..." "$COLOR_INFO"
merge_directory_contents "$source_gemini_dir" "$local_gemini_dir" ".gemini directory contents" "$backup_folder"
# Remove empty backup folder
if [ -n "$backup_folder" ] && [ -d "$backup_folder" ]; then
if [ -z "$(ls -A "$backup_folder" 2>/dev/null)" ]; then
@@ -631,10 +676,11 @@ function show_summary() {
if [ "$mode" = "Path" ]; then
echo " Local Path: $path"
echo " Global Path: $HOME"
echo " Local Components: agents, commands, output-styles"
echo " Local Components: agents, commands, output-styles, .codex, .gemini"
echo " Global Components: workflows, scripts, python_script, etc."
else
echo " Path: $path"
echo " Global Components: .claude, .codex, .gemini"
fi
if [ "$NO_BACKUP" = true ]; then
@@ -648,10 +694,12 @@ function show_summary() {
echo ""
write_color "Next steps:" "$COLOR_INFO"
echo "1. Review CLAUDE.md - Customize guidelines for your project"
echo "2. Configure settings - Edit .claude/settings.local.json as needed"
echo "3. Start using Claude Code with Agent workflow coordination!"
echo "4. Use /workflow commands for task execution"
echo "5. Use /update-memory commands for memory system management"
echo "2. Review .codex/Agent.md - Codex agent execution protocol"
echo "3. Review .gemini/CLAUDE.md - Gemini agent execution protocol"
echo "4. Configure settings - Edit .claude/settings.local.json as needed"
echo "5. Start using Claude Code with Agent workflow coordination!"
echo "6. Use /workflow commands for task execution"
echo "7. Use /update-memory commands for memory system management"
echo ""
write_color "Documentation: https://github.com/catlog22/Claude-Code-Workflow" "$COLOR_INFO"

View File

@@ -1,6 +1,6 @@
# 🚀 Claude Code Workflow (CCW): 下一代多智能体软件开发自动化框架
[![Version](https://img.shields.io/badge/version-v2.1.0--experimental-blue.svg)](https://github.com/catlog22/Claude-Code-Workflow/releases)
[![Version](https://img.shields.io/badge/version-v3.2.1-blue.svg)](https://github.com/catlog22/Claude-Code-Workflow/releases)
[![MCP工具](https://img.shields.io/badge/🔧_MCP工具-实验性-orange.svg)](https://github.com/modelcontextprotocol)
[![License](https://img.shields.io/badge/license-MIT-green.svg)](LICENSE)

401
README.md
View File

@@ -2,7 +2,7 @@
<div align="center">
[![Version](https://img.shields.io/badge/version-v3.0.1-blue.svg)](https://github.com/catlog22/Claude-Code-Workflow/releases)
[![Version](https://img.shields.io/badge/version-v3.2.1-blue.svg)](https://github.com/catlog22/Claude-Code-Workflow/releases)
[![License](https://img.shields.io/badge/license-MIT-green.svg)](LICENSE)
[![Platform](https://img.shields.io/badge/platform-Windows%20%7C%20Linux%20%7C%20macOS-lightgrey.svg)]()
[![MCP Tools](https://img.shields.io/badge/🔧_MCP_Tools-Experimental-orange.svg)](https://github.com/modelcontextprotocol)
@@ -15,21 +15,27 @@
**Claude Code Workflow (CCW)** is a next-generation multi-agent automation framework that orchestrates complex software development tasks through intelligent workflow management and autonomous execution.
> **🎉 Latest: v3.0.1** - Documentation optimization and brainstorming role updates. See [CHANGELOG.md](CHANGELOG.md) for details.
> **🎉 Latest: v3.2.0** - Simplified agent architecture with "Tests Are the Review" philosophy. See [CHANGELOG.md](CHANGELOG.md) for details.
>
> **v3.0.0**: Introduced **unified CLI command structure**. The `/cli:*` commands consolidate all tool interactions (Gemini, Qwen, Codex) using a `--tool` flag for selection.
> **What's New in v3.2.0**:
> - 🔄 Simplified from 3 agents to 2 core agents (`@code-developer`, `@test-fix-agent`)
> - ✅ "Tests Are the Review" - Passing tests = approved code
> - 🧪 Enhanced test-fix workflow with automatic execution and fixing
> - 📦 Interactive installation with version selection menu
---
## ✨ Key Features
- **🤖 Multi-Agent System**: Specialized agents for planning, coding, testing, and reviewing.
- **🔄 End-to-End Workflow Automation**: From brainstorming (`/workflow:brainstorm`) to deployment.
- **🎯 JSON-First Architecture**: Uses JSON as the single source of truth for tasks, ensuring consistency.
- **🧪 Automated Test Generation**: Creates comprehensive test suites based on implementation analysis.
- **🎯 Context-First Architecture**: Pre-defined context gathering eliminates execution uncertainty and error accumulation.
- **🤖 Multi-Agent System**: Specialized agents (`@code-developer`, `@test-fix-agent`) with tech-stack awareness and automated test validation.
- **🔄 End-to-End Workflow Automation**: From brainstorming to deployment with multi-phase orchestration.
- **📋 JSON-First Task Model**: Structured task definitions with `pre_analysis` steps for deterministic execution.
- **🧪 TDD Workflow Support**: Complete Test-Driven Development with Red-Green-Refactor cycle enforcement.
- **🧠 Multi-Model Orchestration**: Leverages Gemini (analysis), Qwen (architecture), and Codex (implementation) strengths.
- **✅ Pre-execution Verification**: Validates plans with both strategic (Gemini) and technical (Codex) analysis.
- **🔧 Unified CLI**: A single, powerful `/cli:*` command set for interacting with various AI tools.
- **🧠 Smart Context Management**: Automatically manages and updates project documentation (`CLAUDE.md`).
- **📦 Smart Context Package**: `context-package.json` links tasks to relevant codebase files and external examples.
---
@@ -47,35 +53,142 @@ Invoke-Expression (Invoke-WebRequest -Uri "https://raw.githubusercontent.com/cat
bash <(curl -fsSL https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.sh)
```
### **📋 Interactive Version Selection**
After running the installation command, you'll see an interactive menu with real-time version information:
```
Detecting latest release and commits...
Latest stable: v3.2.0 (2025-10-02 04:27 UTC)
Latest commit: cdea58f (2025-10-02 08:15 UTC)
====================================================
Version Selection Menu
====================================================
1) Latest Stable Release (Recommended)
|-- Version: v3.2.0
|-- Released: 2025-10-02 04:27 UTC
\-- Production-ready
2) Latest Development Version
|-- Branch: main
|-- Commit: cdea58f
|-- Updated: 2025-10-02 08:15 UTC
|-- Cutting-edge features
\-- May contain experimental changes
3) Specific Release Version
|-- Install a specific tagged release
\-- Recent: v3.2.0, v3.1.0, v3.0.1
====================================================
Select version to install (1-3, default: 1):
```
**Version Options:**
- **Option 1 (Recommended)**: Latest stable release with verified production quality
- **Option 2**: Latest development version from main branch with newest features
- **Option 3**: Specific version tag for controlled deployments
> 💡 **Pro Tip**: The installer automatically detects and displays the latest version numbers and release dates from GitHub. Just press Enter to select the recommended stable release.
### **✅ Verify Installation**
After installation, run the following command to ensure CCW is working:
```bash
/workflow:session:list
```
> **📝 Important Notes:**
> - The installer will automatically install/update `.codex/` and `.gemini/` directories
> - **Global mode**: Installs to `~/.codex` and `~/.gemini`
> - **Path mode**: Installs to your specified directory (e.g., `project/.codex`, `project/.gemini`)
> - Existing files will be backed up automatically before installation
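For scripted or CI installs, the remote installer also accepts non-interactive flags (see its `--help` for the full list; flag names below are taken from that help text, so verify them against the version you download). A minimal sketch on Linux/macOS, reusing the installer URL above:
```bash
# Global install of the latest stable release, skipping all prompts
bash <(curl -fsSL https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.sh) \
  --global --non-interactive

# Pin a specific release and install into the current project directory (path mode)
bash <(curl -fsSL https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.sh) \
  --version stable --tag v3.2.0 --directory .
```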
---
## 🚀 Getting Started: A Simple Workflow
## 🚀 Getting Started
1. **Start a new workflow session:**
```bash
/workflow:session:start "Create a new user authentication feature"
```
### Complete Development Workflow
2. **Generate an implementation plan:**
```bash
/workflow:plan "Implement JWT-based user authentication"
```
**Phase 1: Brainstorming & Conceptual Planning**
```bash
# Multi-perspective brainstorming with role-based agents
/workflow:brainstorm:auto-parallel "Build a user authentication system"
3. **Execute the plan with AI agents:**
```bash
/workflow:execute
```
# Review and refine specific aspects (optional)
/workflow:brainstorm:ui-designer "authentication flows"
/workflow:brainstorm:synthesis # Generate consolidated specification
```
4. **Check the status:**
```bash
/workflow:status
```
**Phase 2: Action Planning**
```bash
# Create executable implementation plan
/workflow:plan "Implement JWT-based authentication system"
# OR for TDD approach
/workflow:tdd-plan "Implement authentication with test-first development"
```
**Phase 3: Execution**
```bash
# Execute tasks with AI agents
/workflow:execute
# Monitor progress
/workflow:status
```
**Phase 4: Testing & Quality Assurance**
```bash
# Generate comprehensive test suite (standard workflow)
/workflow:test-gen
/workflow:execute
# OR verify TDD compliance (TDD workflow)
/workflow:tdd-verify
```
### Quick Start for Simple Tasks
**Feature Development:**
```bash
/workflow:session:start "Add password reset feature"
/workflow:plan "Email-based password reset with token expiry"
/workflow:execute
```
**Bug Fixing:**
```bash
# Interactive analysis with CLI tools
/cli:mode:bug-index --tool gemini "Login timeout on mobile devices"
# Execute the suggested fix
/workflow:execute
```
> **💡 When to Use Which Approach?**
>
> **Use `/workflow:plan` + `/workflow:execute` for:**
> - Complex features requiring multiple modules (>3 modules)
> - Tasks with multiple subtasks (>5 subtasks)
> - Cross-cutting changes affecting architecture
> - Features requiring coordination between components
> - When you need structured planning and progress tracking
>
> **Use Claude Code directly for:**
> - Simple, focused changes (single file or module)
> - Quick bug fixes with clear solutions
> - Documentation updates
> - Code refactoring within one component
> - Straightforward feature additions
**Code Analysis:**
```bash
# Deep codebase analysis
/cli:mode:code-analysis --tool qwen "Analyze authentication module architecture"
```
---
@@ -101,10 +214,12 @@ After installation, run the following command to ensure CCW is working:
| `/workflow:session:*` | Manage development sessions (`start`, `pause`, `resume`, `list`, `switch`, `complete`). |
| `/workflow:brainstorm:*` | Use role-based agents for multi-perspective planning. |
| `/workflow:plan` | Create a detailed, executable plan from a description. |
| `/workflow:tdd-plan` | Create a Test-Driven Development workflow with Red-Green-Refactor cycles. |
| `/workflow:execute` | Execute the current workflow plan autonomously. |
| `/workflow:status` | Display the current status of the workflow. |
| `/workflow:test-gen` | Automatically generate a test plan from the implementation. |
| `/workflow:review` | Initiate a quality assurance review of the completed work. |
| `/workflow:tdd-verify` | Verify TDD compliance and generate quality report. |
| `/workflow:review` | **Optional** manual review (only use when explicitly needed - passing tests = approved code). |
### **Task & Memory Commands**
@@ -116,9 +231,46 @@ After installation, run the following command to ensure CCW is working:
---
## ⚙️ Essential Configuration
## ⚙️ Configuration
For optimal integration, configure your Gemini CLI settings by creating a `settings.json` file in `~/.gemini/`:
### **Prerequisites: Required Tools**
Before using CCW, install the following command-line tools:
#### **Core CLI Tools**
| Tool | Purpose | Installation |
|------|---------|--------------|
| **Gemini CLI** | AI analysis & documentation | `npm install -g @google/gemini-cli` ([GitHub](https://github.com/google-gemini/gemini-cli)) |
| **Codex CLI** | AI development & implementation | `npm install -g @openai/codex` ([GitHub](https://github.com/openai/codex)) |
| **Qwen Code** | AI architecture & code generation | `npm install -g @qwen-code/qwen-code` ([Docs](https://github.com/QwenLM/qwen-code)) |
#### **System Utilities**
| Tool | Purpose | Installation |
|------|---------|--------------|
| **ripgrep (rg)** | Fast code search | [Download](https://github.com/BurntSushi/ripgrep/releases) or `brew install ripgrep` (macOS), `apt install ripgrep` (Ubuntu) |
| **jq** | JSON processing | [Download](https://jqlang.github.io/jq/download/) or `brew install jq` (macOS), `apt install jq` (Ubuntu) |
**Quick Install (All Tools):**
```bash
# macOS
brew install ripgrep jq
npm install -g @google/gemini-cli @openai/codex @qwen-code/qwen-code
# Ubuntu/Debian
sudo apt install ripgrep jq
npm install -g @google/gemini-cli @openai/codex @qwen-code/qwen-code
# Windows (Chocolatey)
choco install ripgrep jq
npm install -g @google/gemini-cli @openai/codex @qwen-code/qwen-code
```
### **Essential: Gemini CLI Setup**
Configure Gemini CLI for optimal integration:
```json
// ~/.gemini/settings.json
@@ -126,7 +278,198 @@ For optimal integration, configure your Gemini CLI settings by creating a `setti
"contextFileName": "CLAUDE.md"
}
```
This ensures CCW's intelligent documentation system works seamlessly with the Gemini CLI.
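If the file does not exist yet, a quick way to create it from the shell (a sketch; merge by hand if you already keep other Gemini CLI settings in this file):
```bash
# Create ~/.gemini/settings.json pointing Gemini CLI at CLAUDE.md as its context file
mkdir -p ~/.gemini
cat > ~/.gemini/settings.json <<'EOF'
{
  "contextFileName": "CLAUDE.md"
}
EOF
```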
### **Recommended: .geminiignore**
Optimize performance by excluding unnecessary files:
```bash
# .geminiignore (in project root)
/dist/
/build/
/node_modules/
/.next/
*.tmp
*.log
/temp/
# Include important docs
!README.md
!**/CLAUDE.md
```
### **Optional: MCP Tools** *(Enhanced Analysis)*
MCP (Model Context Protocol) tools provide advanced codebase analysis. **Completely optional** - CCW works perfectly without them.
#### Available MCP Servers
| MCP Server | Purpose | Installation Guide |
|------------|---------|-------------------|
| **Exa MCP** | External API patterns & best practices | [Install Guide](https://github.com/exa-labs/exa-mcp-server) |
| **Code Index MCP** | Advanced internal code search | [Install Guide](https://github.com/johnhuang316/code-index-mcp) |
#### Benefits When Enabled
- 📊 **Faster Analysis**: Direct codebase indexing vs manual searching
- 🌐 **External Context**: Real-world API patterns and examples
- 🔍 **Advanced Search**: Pattern matching and similarity detection
- **Automatic Fallback**: Uses traditional tools when MCP unavailable
---
## 🧩 How It Works: Design Philosophy
### The Core Problem
Traditional AI coding workflows face a fundamental challenge: **execution uncertainty leads to error accumulation**.
**Example:**
```bash
# Prompt 1: "Develop XX feature"
# Prompt 2: "Review XX architecture in file Y, then develop XX feature"
```
While Prompt 1 might succeed for simple tasks, in complex workflows:
- The AI may examine different files each time
- Small deviations compound across multiple steps
- Final output drifts from the intended goal
> **CCW's Mission**: Solve the "1-to-N" problem — building upon existing codebases with precision, not just "0-to-1" greenfield development.
---
### The CCW Solution: Context-First Architecture
#### 1. **Pre-defined Context Gathering**
Instead of letting agents randomly explore, CCW uses structured context packages:
**`context-package.json`** created during planning:
```json
{
"metadata": {
"task_description": "...",
"tech_stack": {"frontend": [...], "backend": [...]},
"complexity": "high"
},
"assets": [
{
"path": "synthesis-specification.md",
"priority": "critical",
"sections": ["Backend Module Structure"]
}
],
"implementation_guidance": {
"start_with": ["Step 1", "Step 2"],
"critical_security_items": [...]
}
}
```
#### 2. **JSON-First Task Model**
Each task includes a `flow_control.pre_analysis` section:
```json
{
"id": "IMPL-1",
"flow_control": {
"pre_analysis": [
{
"step": "load_architecture",
"commands": ["Read(architecture.md)", "grep 'auth' src/"],
"output_to": "arch_context",
"on_error": "fail"
}
],
"implementation_approach": {
"modification_points": ["..."],
"logic_flow": ["..."]
},
"target_files": ["src/auth/index.ts"]
}
}
```
**Key Innovation**: The `pre_analysis` steps are **executed before implementation**, ensuring agents always have the correct context.
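As an illustration only, here is a rough sketch of how an executor loop could walk the `pre_analysis` array with `jq` (a CCW prerequisite) before touching any code. The session/task path is hypothetical, and the real agents handle `Read(...)` steps and richer `on_error` policies internally:
```bash
#!/usr/bin/env bash
# Sketch: run the shell-level pre_analysis commands of one task JSON before implementation.
set -euo pipefail                                      # a failing command aborts, mirroring on_error: "fail"
task_json=".workflow/WFS-example/.task/IMPL-1.json"    # hypothetical session/task path

jq -c '.flow_control.pre_analysis[]' "$task_json" | while read -r step; do
  echo "== pre_analysis step: $(jq -r '.step' <<<"$step") =="
  jq -r '.commands[]' <<<"$step" | while read -r cmd; do
    case "$cmd" in
      "Read("*) echo "  [agent-internal] $cmd" ;;      # Read(...) is resolved by the agent, not the shell
      *)        echo "  [shell] $cmd"; eval "$cmd" ;;  # e.g. grep 'auth' src/
    esac
  done
done
```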
#### 3. **Multi-Phase Orchestration**
CCW workflows are orchestrators that coordinate slash commands:
**Planning Phase** (`/workflow:plan`):
```
Phase 1: session:start → Create session
Phase 2: context-gather → Build context-package.json
Phase 3: concept-enhanced → CLI analysis (Gemini/Qwen)
Phase 4: task-generate → Generate task JSONs with pre_analysis
```
**Execution Phase** (`/workflow:execute`):
```
For each task:
1. Execute pre_analysis steps → Load context
2. Apply implementation_approach → Make changes
3. Validate acceptance criteria → Verify success
4. Generate summary → Track progress
```
#### 4. **Multi-Model Orchestration**
Each AI model serves its strength:
| Model | Role | Use Cases |
|-------|------|-----------|
| **Gemini** | Analysis & Understanding | Long-context analysis, architecture review, bug investigation |
| **Qwen** | Architecture & Design | System design, code generation, architectural planning |
| **Codex** | Implementation | Feature development, testing, autonomous execution |
**Example:**
```bash
# Gemini analyzes the problem space
/cli:mode:code-analysis --tool gemini "Analyze auth module"
# Qwen designs the solution
/cli:analyze --tool qwen "Design scalable auth architecture"
# Codex implements the code
/workflow:execute # Uses @code-developer with Codex
```
---
### From 0-to-1 vs 1-to-N Development
| Scenario | Traditional Workflow | CCW Approach |
|----------|---------------------|--------------|
| **Greenfield (0→1)** | ✅ Works well | ✅ Adds structured planning |
| **Feature Addition (1→2)** | ⚠️ Context uncertainty | ✅ Context-package links to existing code |
| **Bug Fixing (N→N+1)** | ⚠️ May miss related code | ✅ Pre-analysis finds dependencies |
| **Refactoring** | ⚠️ Unpredictable scope | ✅ CLI analysis + structured tasks |
---
### Key Workflows
#### **Complete Development (Brainstorm → Deploy)**
```
Brainstorm (8 roles) → Synthesis → Plan (4 phases) → Execute → Test → Review
```
#### **Quick Feature Development**
```
session:start → plan → execute → test-gen → execute
```
#### **TDD Workflow**
```
tdd-plan (TEST→IMPL→REFACTOR chains) → execute → tdd-verify
```
#### **Bug Fixing**
```
cli:mode:bug-index (analyze) → execute (fix) → test-gen (verify)
```
---

View File

@@ -2,7 +2,7 @@
<div align="center">
[![Version](https://img.shields.io/badge/version-v3.0.1-blue.svg)](https://github.com/catlog22/Claude-Code-Workflow/releases)
[![Version](https://img.shields.io/badge/version-v3.2.1-blue.svg)](https://github.com/catlog22/Claude-Code-Workflow/releases)
[![License](https://img.shields.io/badge/license-MIT-green.svg)](LICENSE)
[![Platform](https://img.shields.io/badge/platform-Windows%20%7C%20Linux%20%7C%20macOS-lightgrey.svg)]()
[![MCP工具](https://img.shields.io/badge/🔧_MCP工具-实验性-orange.svg)](https://github.com/modelcontextprotocol)
@@ -15,21 +15,27 @@
**Claude Code Workflow (CCW)** 是一个新一代的多智能体自动化开发框架,通过智能工作流管理和自主执行来协调复杂的软件开发任务。
> **🎉 最新版本: v3.0.1** - 文档优化和头脑风暴角色更新。详见 [CHANGELOG.md](CHANGELOG.md)。
> **🎉 最新版本: v3.2.0** - 采用"测试即审查"理念简化智能体架构。详见 [CHANGELOG.md](CHANGELOG.md)。
>
> **v3.0.0 版本**: 引入了**统一的 CLI 命令结构**。`/cli:*` 命令通过 `--tool` 标志整合了所有工具(Gemini, Qwen, Codex)的交互。
> **v3.2.0 版本新特性**:
> - 🔄 从 3 个智能体简化为 2 个核心智能体(`@code-developer`、`@test-fix-agent`)
> - ✅ "测试即审查" - 测试通过 = 代码批准
> - 🧪 增强的测试修复工作流,支持自动执行和修复
> - 📦 交互式安装,包含版本选择菜单
---
## ✨ 核心特性
- **🤖 多智能体系统**: 用于规划、编码、测试和审查的专用智能体
- **🔄 端到端工作流自动化**: 从头脑风暴 (`/workflow:brainstorm`) 到部署的完整流程
- **🎯 JSON 优先架构**: 使用 JSON 作为任务的唯一真实数据源,确保一致性
- **🧪 自动测试生成**: 基于实现分析创建全面的测试套件
- **🎯 上下文优先架构**: 预定义上下文收集消除执行不确定性和误差累积
- **🤖 多智能体系统**: 专用智能体(`@code-developer`、`@test-fix-agent`)具备技术栈感知能力和自动化测试验证
- **🔄 端到端工作流自动化**: 从头脑风暴到部署的多阶段编排
- **📋 JSON 优先任务模型**: 结构化任务定义,包含 `pre_analysis` 步骤实现确定性执行
- **🧪 TDD 工作流支持**: 完整的测试驱动开发,包含 Red-Green-Refactor 循环强制执行。
- **🧠 多模型编排**: 发挥 Gemini(分析)、Qwen(架构)和 Codex(实现)各自优势。
- **✅ 执行前验证**: 通过战略(Gemini)和技术(Codex)双重分析验证计划。
- **🔧 统一 CLI**: 一个强大、统一的 `/cli:*` 命令集,用于与各种 AI 工具交互。
- **🧠 智能上下文管理**: 自动管理和更新项目文档 (`CLAUDE.md`)
- **📦 智能上下文**: `context-package.json` 将任务链接到相关代码库文件和外部示例
---
@@ -47,35 +53,145 @@ Invoke-Expression (Invoke-WebRequest -Uri "https://raw.githubusercontent.com/cat
bash <(curl -fsSL https://raw.githubusercontent.com/catlog22/Claude-Code-Workflow/main/install-remote.sh)
```
### **📋 交互式版本选择**
运行安装命令后,您将看到包含实时版本信息的交互式菜单:
```
正在检测最新版本和提交...
最新稳定版: v3.2.0 (2025-10-02 04:27 UTC)
最新提交: cdea58f (2025-10-02 08:15 UTC)
====================================================
版本选择菜单
====================================================
1) 最新稳定版(推荐)
|-- 版本: v3.2.0
|-- 发布时间: 2025-10-02 04:27 UTC
\-- 生产就绪
2) 最新开发版
|-- 分支: main
|-- 提交: cdea58f
|-- 更新时间: 2025-10-02 08:15 UTC
|-- 最新功能
\-- 可能包含实验性更改
3) 指定版本
|-- 安装特定标签版本
\-- 最近版本: v3.2.0, v3.1.0, v3.0.1
====================================================
选择要安装的版本 (1-3, 默认: 1):
```
**版本选项:**
- **选项 1推荐**:经过验证的最新稳定版本,生产环境可用
- **选项 2**:来自 main 分支的最新开发版本,包含最新功能
- **选项 3**:指定版本标签,用于受控部署
> 💡 **提示**:安装程序会自动从 GitHub 检测并显示最新的版本号和发布日期。只需按 Enter 键即可选择推荐的稳定版本。
### **✅ 验证安装**
安装后,运行以下命令以确保 CCW 正常工作:
```bash
/workflow:session:list
```
> **📝 重要说明:**
> - 安装程序将自动安装/更新 `.codex/` 和 `.gemini/` 目录
> - **全局模式**:安装到 `~/.codex` 和 `~/.gemini`
> - **路径模式**:安装到指定目录(例如 `project/.codex`、`project/.gemini`)
> - 安装前会自动备份现有文件
---
## 🚀 快速入门:一个简单的工作流
## 🚀 快速入门
1. **启动一个新的工作流会话:**
```bash
/workflow:session:start "创建一个新的用户认证功能"
```
### 完整开发工作流
2. **生成一个实现计划:**
```bash
/workflow:plan "实现基于JWT的用户认证"
```
**阶段 1头脑风暴与概念规划**
```bash
# 多视角头脑风暴,使用基于角色的智能体
/workflow:brainstorm:auto-parallel "构建用户认证系统"
3. **使用 AI 智能体执行计划:**
```bash
/workflow:execute
```
# 审查和优化特定方面(可选)
/workflow:brainstorm:ui-designer "认证流程"
/workflow:brainstorm:synthesis # 生成综合规范
```
4. **检查状态:**
```bash
/workflow:status
```
**阶段 2行动规划**
```bash
# 创建可执行的实现计划
/workflow:plan "实现基于 JWT 的认证系统"
# 或使用 TDD 方法
/workflow:tdd-plan "使用测试优先开发实现认证"
```
**阶段 3执行**
```bash
# 使用 AI 智能体执行任务
/workflow:execute
# 监控进度
/workflow:status
```
**阶段 4测试与质量保证**
```bash
# 生成全面测试套件(标准工作流)
/workflow:test-gen
/workflow:execute
# 或验证 TDD 合规性TDD 工作流)
/workflow:tdd-verify
# 可选:手动审查(仅在明确需要时使用)
# /workflow:review # 测试通过 = 代码已批准
```
### 简单任务快速入门
**功能开发:**
```bash
/workflow:session:start "添加密码重置功能"
/workflow:plan "基于邮件的密码重置,带令牌过期"
/workflow:execute
```
**Bug 修复:**
```bash
# 使用 CLI 工具进行交互式分析
/cli:mode:bug-index --tool gemini "移动设备上登录超时"
# 执行建议的修复
/workflow:execute
```
> **💡 何时使用哪种方式?**
>
> **使用 `/workflow:plan` + `/workflow:execute` 适用于:**
> - 需要多个模块的复杂功能(>3 个模块)
> - 包含多个子任务的任务(>5 个子任务)
> - 影响架构的横切变更
> - 需要组件间协调的功能
> - 需要结构化规划和进度跟踪时
>
> **直接使用 Claude Code 适用于:**
> - 简单、聚焦的变更(单个文件或模块)
> - 解决方案明确的快速 bug 修复
> - 文档更新
> - 单个组件内的代码重构
> - 简单直接的功能添加
**代码分析:**
```bash
# 深度代码库分析
/cli:mode:code-analysis --tool qwen "分析认证模块架构"
```
---
@@ -101,10 +217,12 @@ bash <(curl -fsSL https://raw.githubusercontent.com/catlog22/Claude-Code-Workflo
| `/workflow:session:*` | 管理开发会话(`start`, `pause`, `resume`, `list`, `switch`, `complete`)。 |
| `/workflow:brainstorm:*` | 使用基于角色的智能体进行多视角规划。 |
| `/workflow:plan` | 从描述创建详细、可执行的计划。 |
| `/workflow:tdd-plan` | 创建测试驱动开发工作流,包含 Red-Green-Refactor 循环。 |
| `/workflow:execute` | 自主执行当前的工作流计划。 |
| `/workflow:status` | 显示工作流的当前状态。 |
| `/workflow:test-gen` | 从实现中自动生成测试计划。 |
| `/workflow:review` | 对已完成的工作启动质量保证审查。 |
| `/workflow:tdd-verify` | 验证 TDD 合规性并生成质量报告。 |
| `/workflow:review` | **可选** 手动审查(仅在明确需要时使用,测试通过即代表代码已批准)。 |
### **任务与内存命令**
@@ -116,9 +234,46 @@ bash <(curl -fsSL https://raw.githubusercontent.com/catlog22/Claude-Code-Workflo
---
## ⚙️ 核心配置
## ⚙️ 配置
为实现最佳集成,请在 `~/.gemini/` 中创建一个 `settings.json` 文件来配置您的 Gemini CLI 设置:
### **前置要求:必需工具**
在使用 CCW 之前,请安装以下命令行工具:
#### **核心 CLI 工具**
| 工具 | 用途 | 安装方式 |
|------|------|----------|
| **Gemini CLI** | AI 分析与文档生成 | `npm install -g @google/gemini-cli` ([GitHub](https://github.com/google-gemini/gemini-cli)) |
| **Codex CLI** | AI 开发与实现 | `npm install -g @openai/codex` ([GitHub](https://github.com/openai/codex)) |
| **Qwen Code** | AI 架构与代码生成 | `npm install -g @qwen-code/qwen-code` ([文档](https://github.com/QwenLM/qwen-code)) |
#### **系统实用工具**
| 工具 | 用途 | 安装方式 |
|------|------|----------|
| **ripgrep (rg)** | 快速代码搜索 | [下载](https://github.com/BurntSushi/ripgrep/releases) 或 `brew install ripgrep` (macOS), `apt install ripgrep` (Ubuntu) |
| **jq** | JSON 处理 | [下载](https://jqlang.github.io/jq/download/) 或 `brew install jq` (macOS), `apt install jq` (Ubuntu) |
**快速安装(所有工具):**
```bash
# macOS
brew install ripgrep jq
npm install -g @google/gemini-cli @openai/codex @qwen-code/qwen-code
# Ubuntu/Debian
sudo apt install ripgrep jq
npm install -g @google/gemini-cli @openai/codex @qwen-code/qwen-code
# Windows (Chocolatey)
choco install ripgrep jq
npm install -g @google/gemini-cli @openai/codex @qwen-code/qwen-code
```
### **必需: Gemini CLI 设置**
配置 Gemini CLI 以实现最佳集成:
```json
// ~/.gemini/settings.json
@@ -126,7 +281,198 @@ bash <(curl -fsSL https://raw.githubusercontent.com/catlog22/Claude-Code-Workflo
"contextFileName": "CLAUDE.md"
}
```
这确保了 CCW 的智能文档系统能与 Gemini CLI 无缝协作。
### **推荐: .geminiignore**
通过排除不必要的文件来优化性能:
```bash
# .geminiignore (在项目根目录)
/dist/
/build/
/node_modules/
/.next/
*.tmp
*.log
/temp/
# 包含重要文档
!README.md
!**/CLAUDE.md
```
### **可选: MCP 工具** *(增强分析)*
MCP (模型上下文协议) 工具提供高级代码库分析。**完全可选** - CCW 在没有它们的情况下也能完美工作。
#### 可用的 MCP 服务器
| MCP 服务器 | 用途 | 安装指南 |
|------------|------|---------|
| **Exa MCP** | 外部 API 模式和最佳实践 | [安装指南](https://github.com/exa-labs/exa-mcp-server) |
| **Code Index MCP** | 高级内部代码搜索 | [安装指南](https://github.com/johnhuang316/code-index-mcp) |
#### 启用后的好处
- 📊 **更快分析**: 直接代码库索引 vs 手动搜索
- 🌐 **外部上下文**: 真实世界的 API 模式和示例
- 🔍 **高级搜索**: 模式匹配和相似性检测
- **自动回退**: MCP 不可用时使用传统工具
---
## 🧩 工作原理:设计理念
### 核心问题
传统的 AI 编码工作流面临一个根本性挑战:**执行不确定性导致误差累积**。
**示例:**
```bash
# 提示词1:"开发XX功能"
# 提示词2:"查看XX文件中架构设计,开发XX功能"
```
虽然提示词1对简单任务可能成功,但在复杂工作流中:
- AI 每次可能检查不同的文件
- 小偏差在多个步骤中累积
- 最终输出偏离预期目标
> **CCW 的使命**:解决"1到N"的问题 — 精确地在现有代码库基础上开发,而不仅仅是"0到1"的全新项目开发。
---
### CCW 解决方案:上下文优先架构
#### 1. **预定义上下文收集**
CCW 使用结构化上下文包,而不是让智能体随机探索:
**规划阶段创建的 `context-package.json`**
```json
{
"metadata": {
"task_description": "...",
"tech_stack": {"frontend": [...], "backend": [...]},
"complexity": "high"
},
"assets": [
{
"path": "synthesis-specification.md",
"priority": "critical",
"sections": ["后端模块结构"]
}
],
"implementation_guidance": {
"start_with": ["步骤1", "步骤2"],
"critical_security_items": [...]
}
}
```
#### 2. **JSON 优先任务模型**
每个任务包含 `flow_control.pre_analysis` 部分:
```json
{
"id": "IMPL-1",
"flow_control": {
"pre_analysis": [
{
"step": "load_architecture",
"commands": ["Read(architecture.md)", "grep 'auth' src/"],
"output_to": "arch_context",
"on_error": "fail"
}
],
"implementation_approach": {
"modification_points": ["..."],
"logic_flow": ["..."]
},
"target_files": ["src/auth/index.ts"]
}
}
```
**核心创新**`pre_analysis` 步骤在实现**之前执行**,确保智能体始终拥有正确的上下文。
#### 3. **多阶段编排**
CCW 工作流是协调斜杠命令的编排器:
**规划阶段** (`/workflow:plan`)
```
阶段1: session:start → 创建会话
阶段2: context-gather → 构建 context-package.json
阶段3: concept-enhanced → CLI 分析(Gemini/Qwen)
阶段4: task-generate → 生成带 pre_analysis 的任务 JSON
```
**执行阶段** (`/workflow:execute`)
```
对于每个任务:
1. 执行 pre_analysis 步骤 → 加载上下文
2. 应用 implementation_approach → 进行更改
3. 验证验收标准 → 验证成功
4. 生成摘要 → 跟踪进度
```
#### 4. **多模型编排**
每个 AI 模型发挥各自优势:
| 模型 | 角色 | 使用场景 |
|------|------|----------|
| **Gemini** | 分析与理解 | 长上下文分析、架构审查、bug 调查 |
| **Qwen** | 架构与设计 | 系统设计、代码生成、架构规划 |
| **Codex** | 实现 | 功能开发、测试、自主执行 |
**示例:**
```bash
# Gemini 分析问题空间
/cli:mode:code-analysis --tool gemini "分析认证模块"
# Qwen 设计解决方案
/cli:analyze --tool qwen "设计可扩展的认证架构"
# Codex 实现代码
/workflow:execute # 使用带 Codex 的 @code-developer
```
---
### 0到1 vs 1到N 开发
| 场景 | 传统工作流 | CCW 方法 |
|------|-----------|----------|
| **全新项目(0→1)** | ✅ 效果良好 | ✅ 增加结构化规划 |
| **功能添加(1→2)** | ⚠️ 上下文不确定 | ✅ context-package 链接现有代码 |
| **Bug 修复(N→N+1)** | ⚠️ 可能遗漏相关代码 | ✅ pre_analysis 查找依赖 |
| **重构** | ⚠️ 范围不可预测 | ✅ CLI 分析 + 结构化任务 |
---
### 核心工作流
#### **完整开发(头脑风暴 → 部署)**
```
头脑风暴(8个角色)→ 综合 → 规划(4阶段)→ 执行 → 测试 → 审查
```
#### **快速功能开发**
```
session:start → plan → execute → test-gen → execute
```
#### **TDD 工作流**
```
tdd-plan (TEST→IMPL→REFACTOR 链) → execute → tdd-verify
```
#### **Bug 修复**
```
cli:mode:bug-index(分析)→ execute(修复)→ test-gen(验证)
```
---

142
RELEASE_NOTES_v3.2.1.md Normal file
View File

@@ -0,0 +1,142 @@
# 🔧 Claude Code Workflow (CCW) v3.2.1 Release Notes
**Release Date**: October 2, 2025
**Release Type**: Patch Release - Documentation Fix
**Repository**: https://github.com/catlog22/Claude-Code-Workflow
---
## 📋 Overview
CCW v3.2.1 is a critical documentation fix release that corrects `workflow-session.json` path references throughout the brainstorming workflow documentation. This ensures consistency with the architecture specification defined in `workflow-architecture.md`.
---
## 🐛 Bug Fixes
### **Documentation Path Corrections**
**Issue**: Documentation incorrectly referenced `workflow-session.json` inside the `.brainstorming/` subdirectory
**Impact**:
- Confusing path references in 9 brainstorming role documentation files
- Inconsistency with architectural specifications
- Potential runtime errors when commands attempt to read session metadata
**Fixed Files** (9 total):
1. `data-architect.md`
2. `product-manager.md`
3. `product-owner.md`
4. `scrum-master.md`
5. `subject-matter-expert.md`
6. `ui-designer.md`
7. `ux-expert.md`
8. `auto-parallel.md`
9. `artifacts.md`
**Corrections Applied**:
- **Incorrect**: `.workflow/WFS-{session}/.brainstorming/workflow-session.json`
- **Correct**: `.workflow/WFS-{session}/workflow-session.json`
---
## 📐 Architecture Alignment
### Confirmed Standard Structure
```
.workflow/WFS-[topic-slug]/
├── workflow-session.json # ✅ Session metadata (root level)
├── .brainstorming/ # Brainstorming artifacts subdirectory
│ └── topic-framework.md
├── IMPL_PLAN.md
├── TODO_LIST.md
└── .task/
└── IMPL-*.json
```
### Key Points
- `workflow-session.json` is **always at session root level**
- `.brainstorming/` directory contains **only** framework and analysis files
- No session metadata files inside subdirectories
---
## 📊 Changes Summary
| Category | Files Changed | Lines Modified |
|----------|--------------|----------------|
| Documentation Fixes | 9 | 19 insertions, 18 deletions |
| Path Corrections | 8 role files | 2 corrections per file |
| Structure Clarifications | 1 artifacts file | Added architectural note |
---
## ✅ Verification
**Pre-Release Checks**:
- ✅ No incorrect `.brainstorming/workflow-session.json` references
- ✅ No legacy `session.json` references
- ✅ All brainstorming roles use correct paths
- ✅ Architecture consistency verified with Gemini analysis
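These checks are easy to reproduce locally with ripgrep (already a CCW prerequisite); a quick sketch, run from the repository root:
```bash
# Expect no output: no brainstorming doc should still reference the old subdirectory path
rg -n '\.brainstorming/workflow-session\.json' .claude/commands/workflow/brainstorm/

# Remaining references should all resolve to the session root, e.g. .workflow/WFS-{session}/workflow-session.json
rg -n 'workflow-session\.json' .claude/commands/workflow/brainstorm/
```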
---
## 🔄 Upgrade Instructions
### For Existing Users
**No action required** - This is a documentation-only fix.
1. Pull the latest changes:
```bash
git pull origin main
```
2. Review updated documentation:
- `.claude/commands/workflow/brainstorm/*.md`
### For New Users
Simply clone the repository with the correct documentation:
```bash
git clone https://github.com/catlog22/Claude-Code-Workflow.git
```
---
## 📚 Related Documentation
- **Architecture Reference**: `.claude/workflows/workflow-architecture.md`
- **Brainstorming Commands**: `.claude/commands/workflow/brainstorm/`
- **Session Management**: `.claude/commands/workflow/session/`
---
## 🙏 Acknowledgments
Special thanks to the analysis tools used in this fix:
- **Gemini CLI**: Path verification and consistency checking
- **Codex**: Initial codebase analysis
- **Claude Code**: Documentation review and corrections
---
## 🔗 Links
- **Full Changelog**: [v3.2.0...v3.2.1](https://github.com/catlog22/Claude-Code-Workflow/compare/v3.2.0...v3.2.1)
- **Issues Fixed**: Documentation consistency issue
- **Previous Release**: [v3.2.0 Release Notes](RELEASE_NOTES_v3.2.0.md)
---
## 📝 Notes
- This is a **non-breaking change** - existing workflows will continue to function
- Documentation now correctly reflects the implemented architecture
- No code changes were necessary - this was purely a documentation correction
---
**Contributors**: Claude Code Development Team
**License**: MIT
**Support**: [GitHub Issues](https://github.com/catlog22/Claude-Code-Workflow/issues)

View File

@@ -1,16 +1,88 @@
#!/usr/bin/env pwsh
# Claude Code Workflow (CCW) - Remote Installation Script
# One-liner remote installation for Claude Code Workflow system
<#
.SYNOPSIS
Claude Code Workflow (CCW) - Remote Installation Script
.DESCRIPTION
One-liner remote installation for Claude Code Workflow system.
Downloads and installs CCW from GitHub with flexible version selection.
.PARAMETER Version
Installation version type:
- "stable" (default): Latest stable release tag
- "latest": Latest main branch (development version)
- "branch": Install from specific branch
.PARAMETER Tag
Specific release tag to install (e.g., "v3.2.0")
Only used when Version is "stable"
.PARAMETER Branch
Branch name to install from (default: "main")
Only used when Version is "branch"
.PARAMETER Global
Install to global user directory (~/.claude)
.PARAMETER Directory
Install to custom directory
.PARAMETER Force
Skip confirmation prompts
.PARAMETER NoBackup
Skip backup of existing installation
.PARAMETER NonInteractive
Run in non-interactive mode
.PARAMETER BackupAll
Backup all files including git-ignored files
.EXAMPLE
# Install latest stable release (recommended)
.\install-remote.ps1
.EXAMPLE
# Install specific stable version
.\install-remote.ps1 -Version stable -Tag "v3.2.0"
.EXAMPLE
# Install latest development version
.\install-remote.ps1 -Version latest
.EXAMPLE
# Install from specific branch
.\install-remote.ps1 -Version branch -Branch "feature/new-feature"
.EXAMPLE
# Install to global directory without prompts
.\install-remote.ps1 -Global -Force
.LINK
https://github.com/catlog22/Claude-Code-Workflow
#>
[CmdletBinding()]
param(
[ValidateSet("stable", "latest", "branch")]
[string]$Version = "stable",
[string]$Tag = "",
[string]$Branch = "main",
[switch]$Global,
[string]$Directory = "",
[switch]$Force,
[switch]$NoBackup,
[switch]$NonInteractive,
[switch]$BackupAll,
[string]$Branch = "main"
[switch]$BackupAll
)
# Set encoding for proper Unicode support
@@ -26,7 +98,7 @@ if ($PSVersionTable.PSVersion.Major -ge 6) {
# Script metadata
$ScriptName = "Claude Code Workflow (CCW) Remote Installer"
$Version = "2.1.1"
$InstallerVersion = "2.2.0"
# Colors for output
$ColorSuccess = "Green"
@@ -43,7 +115,7 @@ function Write-ColorOutput {
}
function Show-Header {
Write-ColorOutput "==== $ScriptName v$Version ====" $ColorInfo
Write-ColorOutput "==== $ScriptName v$InstallerVersion ====" $ColorInfo
Write-ColorOutput "========================================================" $ColorInfo
Write-Host ""
}
@@ -78,29 +150,70 @@ function Get-TempDirectory {
return $tempDir
}
function Get-LatestRelease {
try {
$apiUrl = "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest"
$response = Invoke-RestMethod -Uri $apiUrl -UseBasicParsing
return $response.tag_name
} catch {
Write-ColorOutput "WARNING: Failed to fetch latest release, using 'main' branch" $ColorWarning
return $null
}
}
function Download-Repository {
param(
[string]$TempDir,
[string]$Branch = "main"
[string]$Version = "stable",
[string]$Branch = "main",
[string]$Tag = ""
)
$repoUrl = "https://github.com/catlog22/Claude-Code-Workflow"
$zipUrl = "$repoUrl/archive/refs/heads/$Branch.zip"
# Determine download URL based on version type
if ($Version -eq "stable") {
# Download latest stable release
if ([string]::IsNullOrEmpty($Tag)) {
$latestTag = Get-LatestRelease
if ($latestTag) {
$Tag = $latestTag
} else {
# Fallback to main branch if API fails
$zipUrl = "$repoUrl/archive/refs/heads/main.zip"
$downloadType = "main branch (fallback)"
}
}
if (-not [string]::IsNullOrEmpty($Tag)) {
$zipUrl = "$repoUrl/archive/refs/tags/$Tag.zip"
$downloadType = "stable release $Tag"
}
} elseif ($Version -eq "latest") {
# Download latest main branch
$zipUrl = "$repoUrl/archive/refs/heads/main.zip"
$downloadType = "latest main branch"
} else {
# Download specific branch
$zipUrl = "$repoUrl/archive/refs/heads/$Branch.zip"
$downloadType = "branch $Branch"
}
$zipPath = Join-Path $TempDir "repo.zip"
Write-ColorOutput "Downloading from GitHub..." $ColorInfo
Write-ColorOutput "Source: $repoUrl" $ColorInfo
Write-ColorOutput "Branch: $Branch" $ColorInfo
Write-ColorOutput "Type: $downloadType" $ColorInfo
try {
# Download with progress
$progressPreference = $ProgressPreference
$ProgressPreference = 'SilentlyContinue'
Invoke-WebRequest -Uri $zipUrl -OutFile $zipPath -UseBasicParsing
$ProgressPreference = $progressPreference
if (Test-Path $zipPath) {
$fileSize = (Get-Item $zipPath).Length
Write-ColorOutput "Download complete ($([math]::Round($fileSize/1024/1024, 2)) MB)" $ColorSuccess
@@ -225,29 +338,200 @@ function Wait-ForUserConfirmation {
}
}
function Show-VersionMenu {
param(
[string]$LatestStableVersion = "Detecting...",
[string]$LatestStableDate = "",
[string]$LatestCommitId = "",
[string]$LatestCommitDate = ""
)
Write-Host ""
Write-ColorOutput "====================================================" $ColorInfo
Write-ColorOutput " Version Selection Menu" $ColorInfo
Write-ColorOutput "====================================================" $ColorInfo
Write-Host ""
# Option 1: Latest Stable
Write-ColorOutput "1) Latest Stable Release (Recommended)" $ColorSuccess
if ($LatestStableVersion -ne "Detecting..." -and $LatestStableVersion -ne "Unknown") {
Write-Host " |-- Version: $LatestStableVersion"
if ($LatestStableDate) {
Write-Host " |-- Released: $LatestStableDate"
}
Write-Host " \-- Production-ready"
} else {
Write-Host " |-- Version: Auto-detected from GitHub"
Write-Host " \-- Production-ready"
}
Write-Host ""
# Option 2: Latest Development
Write-ColorOutput "2) Latest Development Version" $ColorWarning
Write-Host " |-- Branch: main"
if ($LatestCommitId -and $LatestCommitDate) {
Write-Host " |-- Commit: $LatestCommitId"
Write-Host " |-- Updated: $LatestCommitDate"
}
Write-Host " |-- Cutting-edge features"
Write-Host " \-- May contain experimental changes"
Write-Host ""
# Option 3: Specific Version
Write-ColorOutput "3) Specific Release Version" $ColorInfo
Write-Host " |-- Install a specific tagged release"
Write-Host " \-- Recent: v3.2.0, v3.1.0, v3.0.1"
Write-Host ""
Write-ColorOutput "====================================================" $ColorInfo
Write-Host ""
}
function Get-UserVersionChoice {
if ($NonInteractive -or $Force) {
# Non-interactive mode: use default stable version
return @{
Type = "stable"
Tag = $Tag
Branch = $Branch
}
}
# Detect latest stable version and commit info
Write-ColorOutput "Detecting latest release and commits..." $ColorInfo
$latestVersion = "Unknown"
$latestStableDate = ""
$latestCommitId = ""
$latestCommitDate = ""
try {
# Get latest release info
$apiUrl = "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest"
$response = Invoke-RestMethod -Uri $apiUrl -UseBasicParsing -TimeoutSec 5
$latestVersion = $response.tag_name
# Parse and format release date
if ($response.published_at) {
$publishDate = [DateTime]::Parse($response.published_at)
$latestStableDate = $publishDate.ToString("yyyy-MM-dd HH:mm UTC")
}
Write-ColorOutput "Latest stable: $latestVersion ($latestStableDate)" $ColorSuccess
} catch {
Write-ColorOutput "Could not detect latest release" $ColorWarning
}
try {
# Get latest commit info from main branch
$commitUrl = "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main"
$commitResponse = Invoke-RestMethod -Uri $commitUrl -UseBasicParsing -TimeoutSec 5
$latestCommitId = $commitResponse.sha.Substring(0, 7)
# Parse and format commit date
if ($commitResponse.commit.committer.date) {
$commitDate = [DateTime]::Parse($commitResponse.commit.committer.date)
$latestCommitDate = $commitDate.ToString("yyyy-MM-dd HH:mm UTC")
}
Write-ColorOutput "Latest commit: $latestCommitId ($latestCommitDate)" $ColorSuccess
} catch {
Write-ColorOutput "Could not detect latest commit" $ColorWarning
}
Show-VersionMenu -LatestStableVersion $latestVersion -LatestStableDate $latestStableDate -LatestCommitId $latestCommitId -LatestCommitDate $latestCommitDate
$choice = Read-Host "Select version to install (1-3, default: 1)"
switch ($choice) {
"2" {
Write-Host ""
Write-ColorOutput "✓ Selected: Latest Development Version (main branch)" $ColorSuccess
return @{
Type = "latest"
Tag = ""
Branch = "main"
}
}
"3" {
Write-Host ""
Write-ColorOutput "Available recent releases:" $ColorInfo
Write-Host " v3.2.0, v3.1.0, v3.0.1, v3.0.0"
Write-Host ""
$tagInput = Read-Host "Enter version tag (e.g., v3.2.0)"
if ([string]::IsNullOrWhiteSpace($tagInput)) {
Write-ColorOutput "No tag specified, using latest stable" $ColorWarning
return @{
Type = "stable"
Tag = ""
Branch = "main"
}
}
Write-ColorOutput "✓ Selected: Specific Version $tagInput" $ColorSuccess
return @{
Type = "stable"
Tag = $tagInput
Branch = "main"
}
}
default {
Write-Host ""
if ($latestVersion -ne "Unknown") {
Write-ColorOutput "✓ Selected: Latest Stable Release ($latestVersion)" $ColorSuccess
} else {
Write-ColorOutput "✓ Selected: Latest Stable Release (auto-detect)" $ColorSuccess
}
return @{
Type = "stable"
Tag = ""
Branch = "main"
}
}
}
}
function Main {
Show-Header
Write-ColorOutput "This will download and install Claude Code Workflow System from GitHub." $ColorInfo
Write-Host ""
# Test prerequisites
Write-ColorOutput "Checking system requirements..." $ColorInfo
if (-not (Test-Prerequisites)) {
Wait-ForUserConfirmation "System check failed! Press any key to exit..." -ExitAfter
}
# Get version choice from user (interactive menu)
$versionChoice = Get-UserVersionChoice
$script:Version = $versionChoice.Type
$script:Tag = $versionChoice.Tag
$script:Branch = $versionChoice.Branch
# Determine version information for display
$versionInfo = switch ($Version) {
"stable" {
if ($Tag) { "Stable release: $Tag" }
else { "Latest stable release (auto-detected)" }
}
"latest" { "Latest main branch (development)" }
"branch" { "Custom branch: $Branch" }
}
# Confirm installation
if (-not $NonInteractive -and -not $Force) {
Write-Host ""
Write-ColorOutput "SECURITY NOTE:" $ColorWarning
Write-Host "- This script will download and execute Claude Code Workflow from GitHub"
Write-Host "- Repository: https://github.com/catlog22/Claude-Code-Workflow"
Write-Host "- Branch: $Branch (latest stable version)"
Write-ColorOutput "INSTALLATION DETAILS:" $ColorInfo
Write-Host "- Repository: https://github.com/catlog22/Claude-Code-Workflow"
Write-Host "- Version: $versionInfo"
Write-Host "- Features: Intelligent workflow orchestration with multi-agent coordination"
Write-Host ""
Write-ColorOutput "SECURITY NOTE:" $ColorWarning
Write-Host "- This script will download and execute code from GitHub"
Write-Host "- Please ensure you trust this source"
Write-Host ""
$choice = Read-Host "Continue with installation? (y/N)"
if ($choice -notmatch '^[Yy]') {
Write-ColorOutput "Installation cancelled" $ColorWarning
@@ -261,7 +545,7 @@ function Main {
try {
# Download repository
$zipPath = Download-Repository $tempDir $Branch
$zipPath = Download-Repository -TempDir $tempDir -Version $Version -Branch $Branch -Tag $Tag
if (-not $zipPath) {
throw "Download failed"
}

View File

@@ -6,9 +6,13 @@ set -e # Exit on error
# Script metadata
SCRIPT_NAME="Claude Code Workflow (CCW) Remote Installer"
VERSION="2.1.1"
INSTALLER_VERSION="2.2.0"
BRANCH="${BRANCH:-main}"
# Version control
VERSION_TYPE="${VERSION_TYPE:-stable}" # stable, latest, branch
TAG_VERSION=""
# Colors for output
COLOR_RESET='\033[0m'
COLOR_SUCCESS='\033[0;32m'
@@ -32,7 +36,7 @@ function write_color() {
}
function show_header() {
write_color "==== $SCRIPT_NAME v$VERSION ====" "$COLOR_INFO"
write_color "==== $SCRIPT_NAME v$INSTALLER_VERSION ====" "$COLOR_INFO"
write_color "========================================================" "$COLOR_INFO"
echo ""
}
@@ -72,17 +76,80 @@ function get_temp_directory() {
echo "$temp_dir"
}
function get_latest_release() {
local api_url="https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest"
if command -v jq &> /dev/null; then
# Use jq if available
local tag
tag=$(curl -fsSL "$api_url" 2>/dev/null | jq -r '.tag_name' 2>/dev/null)
if [ -n "$tag" ] && [ "$tag" != "null" ]; then
echo "$tag"
return 0
fi
else
# Fallback: parse JSON with grep/sed
local tag
tag=$(curl -fsSL "$api_url" 2>/dev/null | grep -o '"tag_name":\s*"[^"]*"' | sed 's/"tag_name":\s*"\([^"]*\)"/\1/')
if [ -n "$tag" ]; then
echo "$tag"
return 0
fi
fi
write_color "WARNING: Failed to fetch latest release, using 'main' branch" "$COLOR_WARNING" >&2
return 1
}
function download_repository() {
local temp_dir="$1"
local branch="${2:-main}"
local version_type="${2:-stable}"
local branch="${3:-main}"
local tag="${4:-}"
local repo_url="https://github.com/catlog22/Claude-Code-Workflow"
local zip_url="${repo_url}/archive/refs/heads/${branch}.zip"
local zip_url=""
local download_type=""
# Determine download URL based on version type
case "$version_type" in
stable)
# Download latest stable release
if [ -z "$tag" ]; then
tag=$(get_latest_release)
if [ -z "$tag" ]; then
# Fallback to main branch if API fails
zip_url="${repo_url}/archive/refs/heads/main.zip"
download_type="main branch (fallback)"
else
zip_url="${repo_url}/archive/refs/tags/${tag}.zip"
download_type="stable release $tag"
fi
else
zip_url="${repo_url}/archive/refs/tags/${tag}.zip"
download_type="stable release $tag"
fi
;;
latest)
# Download latest main branch
zip_url="${repo_url}/archive/refs/heads/main.zip"
download_type="latest main branch"
;;
branch)
# Download specific branch
zip_url="${repo_url}/archive/refs/heads/${branch}.zip"
download_type="branch $branch"
;;
*)
write_color "ERROR: Invalid version type: $version_type" "$COLOR_ERROR" >&2
return 1
;;
esac
local zip_path="${temp_dir}/repo.zip"
write_color "Downloading from GitHub..." "$COLOR_INFO" >&2
write_color "Source: $repo_url" "$COLOR_INFO" >&2
write_color "Branch: $branch" "$COLOR_INFO" >&2
write_color "URL: $zip_url" "$COLOR_INFO" >&2
write_color "Type: $download_type" "$COLOR_INFO" >&2
# Download with curl
if curl -fsSL -o "$zip_path" "$zip_url" 2>&1 >&2; then
@@ -217,6 +284,23 @@ function wait_for_user() {
function parse_arguments() {
while [[ $# -gt 0 ]]; do
case "$1" in
--version)
VERSION_TYPE="$2"
if [[ ! "$VERSION_TYPE" =~ ^(stable|latest|branch)$ ]]; then
write_color "ERROR: Invalid version type: $VERSION_TYPE" "$COLOR_ERROR"
write_color "Valid options: stable, latest, branch" "$COLOR_ERROR"
exit 1
fi
shift 2
;;
--tag)
TAG_VERSION="$2"
shift 2
;;
--branch)
BRANCH="$2"
shift 2
;;
--global)
INSTALL_GLOBAL=true
shift
@@ -241,10 +325,6 @@ function parse_arguments() {
BACKUP_ALL=true
shift
;;
--branch)
BRANCH="$2"
shift 2
;;
--help)
show_help
exit 0
@@ -260,33 +340,208 @@ function parse_arguments() {
function show_help() {
cat << EOF
$SCRIPT_NAME v$VERSION
$SCRIPT_NAME v$INSTALLER_VERSION
Usage: $0 [OPTIONS]
Options:
Version Options:
--version TYPE Version type: stable (default), latest, or branch
--tag TAG Specific release tag (e.g., v3.2.0) - for stable version
--branch BRANCH Branch name (default: main) - for branch version
Installation Options:
--global Install to global user directory (~/.claude)
--directory DIR Install to custom directory
--force Force installation without prompts
--no-backup Skip backup creation
--non-interactive Non-interactive mode (no prompts)
--backup-all Backup all files before installation
--branch BRANCH Specify GitHub branch (default: main)
--help Show this help message
Examples:
# Interactive installation
# Install latest stable release (recommended)
$0
# Install specific stable version
$0 --version stable --tag v3.2.0
# Install latest development version
$0 --version latest
# Install from specific branch
$0 --version branch --branch feature/new-feature
# Global installation without prompts
$0 --global --non-interactive
# Custom directory installation
$0 --directory /opt/claude-code-workflow
Repository: https://github.com/catlog22/Claude-Code-Workflow
EOF
}
function show_version_menu() {
local latest_version="$1"
local latest_date="$2"
local commit_id="$3"
local commit_date="$4"
echo ""
write_color "====================================================" "$COLOR_INFO"
write_color " Version Selection Menu" "$COLOR_INFO"
write_color "====================================================" "$COLOR_INFO"
echo ""
# Option 1: Latest Stable
write_color "1) Latest Stable Release (Recommended)" "$COLOR_SUCCESS"
if [ -n "$latest_version" ] && [ "$latest_version" != "Unknown" ]; then
echo " |-- Version: $latest_version"
if [ -n "$latest_date" ]; then
echo " |-- Released: $latest_date"
fi
echo " \-- Production-ready"
else
echo " |-- Version: Auto-detected from GitHub"
echo " \-- Production-ready"
fi
echo ""
# Option 2: Latest Development
write_color "2) Latest Development Version" "$COLOR_WARNING"
echo " |-- Branch: main"
if [ -n "$commit_id" ] && [ -n "$commit_date" ]; then
echo " |-- Commit: $commit_id"
echo " |-- Updated: $commit_date"
fi
echo " |-- Cutting-edge features"
echo " \-- May contain experimental changes"
echo ""
# Option 3: Specific Version
write_color "3) Specific Release Version" "$COLOR_INFO"
echo " |-- Install a specific tagged release"
echo " \-- Recent: v3.2.0, v3.1.0, v3.0.1"
echo ""
write_color "====================================================" "$COLOR_INFO"
echo ""
}
function get_user_version_choice() {
if [ "$NON_INTERACTIVE" = true ] || [ "$FORCE" = true ]; then
# Non-interactive mode: use default stable version
echo "stable"
return 0
fi
# Detect latest stable version and commit info
write_color "Detecting latest release and commits..." "$COLOR_INFO"
local latest_version="Unknown"
local latest_date=""
local commit_id=""
local commit_date=""
# Get latest release info
local release_data
release_data=$(curl -fsSL --connect-timeout 5 "https://api.github.com/repos/catlog22/Claude-Code-Workflow/releases/latest" 2>/dev/null)
if [ -n "$release_data" ]; then
if command -v jq &> /dev/null; then
latest_version=$(echo "$release_data" | jq -r '.tag_name' 2>/dev/null)
local published_at=$(echo "$release_data" | jq -r '.published_at' 2>/dev/null)
if [ -n "$published_at" ] && [ "$published_at" != "null" ]; then
# Format: 2025-10-02T04:27:21Z -> 2025-10-02 04:27 UTC
latest_date=$(echo "$published_at" | sed 's/T/ /' | sed 's/Z/ UTC/' | cut -d'.' -f1)
fi
else
latest_version=$(echo "$release_data" | grep -o '"tag_name":\s*"[^"]*"' | sed 's/"tag_name":\s*"\([^"]*\)"/\1/')
local published_at=$(echo "$release_data" | grep -o '"published_at":\s*"[^"]*"' | sed 's/"published_at":\s*"\([^"]*\)"/\1/')
if [ -n "$published_at" ]; then
latest_date=$(echo "$published_at" | sed 's/T/ /' | sed 's/Z/ UTC/' | cut -d'.' -f1)
fi
fi
fi
if [ -n "$latest_version" ] && [ "$latest_version" != "null" ] && [ "$latest_version" != "Unknown" ]; then
write_color "Latest stable: $latest_version ($latest_date)" "$COLOR_SUCCESS"
else
latest_version="Unknown"
write_color "Could not detect latest release" "$COLOR_WARNING"
fi
# Get latest commit info
local commit_data
commit_data=$(curl -fsSL --connect-timeout 5 "https://api.github.com/repos/catlog22/Claude-Code-Workflow/commits/main" 2>/dev/null)
if [ -n "$commit_data" ]; then
if command -v jq &> /dev/null; then
commit_id=$(echo "$commit_data" | jq -r '.sha' 2>/dev/null | cut -c1-7)
local committer_date=$(echo "$commit_data" | jq -r '.commit.committer.date' 2>/dev/null)
if [ -n "$committer_date" ] && [ "$committer_date" != "null" ]; then
commit_date=$(echo "$committer_date" | sed 's/T/ /' | sed 's/Z/ UTC/' | cut -d'.' -f1)
fi
else
commit_id=$(echo "$commit_data" | grep -o '"sha":\s*"[^"]*"' | head -1 | sed 's/"sha":\s*"\([^"]*\)"/\1/' | cut -c1-7)
local committer_date=$(echo "$commit_data" | grep -o '"date":\s*"[^"]*"' | head -1 | sed 's/"date":\s*"\([^"]*\)"/\1/')
if [ -n "$committer_date" ]; then
commit_date=$(echo "$committer_date" | sed 's/T/ /' | sed 's/Z/ UTC/' | cut -d'.' -f1)
fi
fi
fi
if [ -n "$commit_id" ] && [ -n "$commit_date" ]; then
write_color "Latest commit: $commit_id ($commit_date)" "$COLOR_SUCCESS"
else
write_color "Could not detect latest commit" "$COLOR_WARNING"
fi
show_version_menu "$latest_version" "$latest_date" "$commit_id" "$commit_date"
read -p "Select version to install (1-3, default: 1): " choice
case "$choice" in
2)
echo ""
write_color "✓ Selected: Latest Development Version (main branch)" "$COLOR_SUCCESS"
VERSION_TYPE="latest"
TAG_VERSION=""
BRANCH="main"
;;
3)
echo ""
write_color "Available recent releases:" "$COLOR_INFO"
echo " v3.2.0, v3.1.0, v3.0.1, v3.0.0"
echo ""
read -p "Enter version tag (e.g., v3.2.0): " tag_input
if [ -z "$tag_input" ]; then
write_color "⚠ No tag specified, using latest stable" "$COLOR_WARNING"
VERSION_TYPE="stable"
TAG_VERSION=""
else
echo ""
write_color "✓ Selected: Specific Version $tag_input" "$COLOR_SUCCESS"
VERSION_TYPE="stable"
TAG_VERSION="$tag_input"
fi
BRANCH="main"
;;
*)
echo ""
if [ "$latest_version" != "Unknown" ]; then
write_color "✓ Selected: Latest Stable Release ($latest_version)" "$COLOR_SUCCESS"
else
write_color "✓ Selected: Latest Stable Release (auto-detect)" "$COLOR_SUCCESS"
fi
VERSION_TYPE="stable"
TAG_VERSION=""
BRANCH="main"
;;
esac
}
function main() {
show_header
@@ -300,14 +555,37 @@ function main() {
exit 1
fi
# Get version choice from user (interactive menu)
get_user_version_choice
# Determine version information for display
local version_info=""
case "$VERSION_TYPE" in
stable)
if [ -n "$TAG_VERSION" ]; then
version_info="Stable release: $TAG_VERSION"
else
version_info="Latest stable release (auto-detected)"
fi
;;
latest)
version_info="Latest main branch (development)"
;;
branch)
version_info="Custom branch: $BRANCH"
;;
esac
# Confirm installation
if [ "$NON_INTERACTIVE" != true ] && [ "$FORCE" != true ]; then
echo ""
write_color "SECURITY NOTE:" "$COLOR_WARNING"
echo "- This script will download and execute Claude Code Workflow from GitHub"
write_color "INSTALLATION DETAILS:" "$COLOR_INFO"
echo "- Repository: https://github.com/catlog22/Claude-Code-Workflow"
echo "- Branch: $BRANCH (latest stable version)"
echo "- Version: $version_info"
echo "- Features: Intelligent workflow orchestration with multi-agent coordination"
echo ""
write_color "SECURITY NOTE:" "$COLOR_WARNING"
echo "- This script will download and execute code from GitHub"
echo "- Please ensure you trust this source"
echo ""
@@ -328,7 +606,7 @@ function main() {
# Download repository
local zip_path
write_color "Starting download process..." "$COLOR_INFO"
zip_path=$(download_repository "$temp_dir" "$BRANCH")
zip_path=$(download_repository "$temp_dir" "$VERSION_TYPE" "$BRANCH" "$TAG_VERSION")
local download_status=$?
if [ $download_status -eq 0 ] && [ -n "$zip_path" ] && [ -f "$zip_path" ]; then