Initial release: Claude Code Workflow (CCW) v2.0

🚀 Revolutionary AI-powered development workflow orchestration system

## 🔥 Core Innovations
- **Document-State Separation**: Markdown for planning, JSON for execution state
- **Progressive Complexity Management**: Level 0-2 adaptive workflow depth
- **5-Agent Orchestration**: Specialized AI agents with context preservation
- **Session-First Architecture**: Auto-discovery and state inheritance

## 🏗️ Key Features
- Intelligent workflow orchestration (Simple/Medium/Complex patterns)
- Real-time document-state synchronization with conflict resolution
- Hierarchical task management with 3-level JSON structure
- Gemini CLI integration with 12+ specialized templates
- Comprehensive file output generation for all workflow commands

## 📦 Installation
Remote one-liner installation:
```powershell
iex (iwr -useb https://raw.githubusercontent.com/catlog22/Claude-CCW/main/install-remote.ps1)
```

## 🎯 System Architecture
4-layer intelligent development architecture:
1. Command Layer - Smart routing and version management
2. Agent Layer - 5 specialized development agents
3. Workflow Layer - Gemini templates and task orchestration
4. Memory Layer - Distributed documentation and auto-sync

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
Author: catlog22
Date: 2025-09-07 17:39:54 +08:00
Commit: 445ac823ba
87 changed files with 19076 additions and 0 deletions

# Agent Orchestration Patterns
## Core Agent Coordination Features
- **Context Preservation**: Maintain original task context throughout Agent chain
- **Quality Gates**: Each Agent validates input and ensures output standards
- **Adaptive Complexity**: Workflow depth matches task complexity requirements
- **Iterative Improvement**: Complex workflows include multiple review-fix cycles
- **Structured Output**: Standardized Agent output formats ensure reliable coordination
- **Error Recovery**: Graceful handling of Agent coordination failures
## Workflow Implementation Patterns
### Simple Workflow Pattern
```pseudocode
Flow: TodoWrite Creation → Context → Implementation → Review
1. MANDATORY TodoWrite Creation:
- "Gather context for implementation"
- "Implement solution"
- "Review and validate code"
- "Complete task"
2. Implementation Checkpoint:
Task(code-developer): Direct implementation
Output: SUMMARY, FILES_MODIFIED, TESTS, VERIFICATION
3. Review Checkpoint:
Task(code-review-agent): Quick quality review
Output: STATUS, SCORE, ISSUES, RECOMMENDATIONS
Resume Support: Load todos + context from checkpoint
```
### Medium Workflow Pattern
```pseudocode
Flow: TodoWrite → Planning → Implementation → Review
1. MANDATORY TodoWrite Creation (5-7 todos):
- "Create implementation plan"
- "Gather context", "Implement with tests"
- "Validate", "Review", "Complete"
2. Planning Checkpoint:
Task(planning-agent): Create plan
Trigger decomposition if >3 modules or >5 subtasks
Output: PLAN_SUMMARY, STEPS, SUCCESS_CRITERIA
3. Implementation Checkpoint:
Task(code-developer): Follow plan
   Update TODO_LIST.md if decomposition exists
4. Review Checkpoint:
Task(code-review-agent): Comprehensive review
Verify against plan and decomposition
Resume Support: Full state restoration at each checkpoint
```
### Complex Workflow Pattern
```pseudocode
Flow: TodoWrite → Planning → Implementation → Review → Iterate (max 2)
1. MANDATORY TodoWrite Creation (7-10 todos):
- "Create detailed plan", "Generate decomposition docs"
- "Gather context", "Implement with testing"
- "Validate criteria", "Review", "Iterate", "Complete"
2. Planning Checkpoint:
Task(planning-agent): MANDATORY task decomposition
Generate: IMPL_PLAN.md (enhanced structure), TODO_LIST.md
Include risk assessment and quality gates
3. Implementation Checkpoint:
Task(code-developer): Follow hierarchical breakdown
   Update TODO_LIST.md for each subtask completion
4. Review & Iteration Loop (max 2 iterations):
Task(code-review-agent): Production-ready review
If CRITICAL_ISSUES found: Task(code-developer) fixes issues
Continue until no critical issues or max iterations reached
Document Validation: Verify decomposition docs generated
Resume Support: Full state + iteration tracking
```
## Workflow Characteristics by Pattern
| Pattern | Agent Chain | Quality Focus | Iteration Strategy |
|---------|-------------|---------------|--------------------|
| **Complex** | Full 3-stage + iterations | Production-ready quality | Iterate until no critical issues (max 2 rounds) |
| **Medium** | Full 3-stage single-pass | Comprehensive quality | Single thorough review |
| **Simple** | 2-stage direct | Basic quality | Quick validation |
## Task Invocation Examples
```pseudocode
# Research Task
Task(subagent_type="general-purpose",
prompt="Research authentication patterns in codebase")
# Planning Task
Task(subagent_type="planning-agent",
prompt="Plan OAuth2 implementation across API, middleware, and UI")
# Implementation Task
Task(subagent_type="code-developer",
prompt="Implement email validation function with tests")
# Review Task
Task(subagent_type="code-review-agent",
prompt="Review recently implemented authentication service")
```

# Brainstorming System Principles
## Core Philosophy
**"Diverge first, then converge"** - Generate multiple solutions from diverse perspectives, then synthesize and prioritize.
## Project Structure Establishment (MANDATORY FIRST STEP)
### Automatic Directory Creation
Before ANY agent coordination begins, the brainstorming command MUST establish the complete project structure:
1. **Create Session Directory**:
```bash
mkdir -p .workflow/WFS-[topic-slug]/.brainstorming/
```
2. **Create Agent Output Directories**:
```bash
# Create directories ONLY for selected participating agent roles
mkdir -p .workflow/WFS-[topic-slug]/.brainstorming/{selected-agent1,selected-agent2,selected-agent3}
# Example: mkdir -p .workflow/WFS-user-auth/.brainstorming/{system-architect,ui-designer,product-manager}
```
3. **Initialize Session State**:
- Create workflow-session.json with brainstorming phase tracking
- Set up document reference structure
- Establish agent coordination metadata
### Pre-Agent Verification
Before delegating to conceptual-planning-agent, VERIFY:
- [ ] Topic slug generated correctly
- [ ] All required directories exist
- [ ] workflow-session.json initialized
- [ ] Agent roles selected and corresponding directories created
## Brainstorming Modes
### Creative Mode (Default)
- **Techniques**: SCAMPER, Six Thinking Hats, wild ideas
- **Focus**: Innovation and unconventional solutions
### Analytical Mode
- **Techniques**: Root cause analysis, data-driven insights
- **Focus**: Evidence-based systematic problem-solving
### Strategic Mode
- **Techniques**: Systems thinking, scenario planning
- **Focus**: Long-term strategic positioning
## Documentation Structure
### Workflow Integration
Brainstorming sessions are integrated with the unified workflow system under `.workflow/WFS-[topic-slug]/.brainstorming/`.
**Directory Creation**: If `.workflow/WFS-[topic-slug]/` doesn't exist, create it automatically before starting brainstorming.
**Topic Slug Format**: Convert topic to lowercase with hyphens (e.g., "User Authentication System" → `WFS-user-authentication-system`)
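The slug conversion can be sketched as a small helper. This is an illustrative sketch of the stated rule (lowercase, hyphens), not the actual command implementation:

```python
import re

def topic_slug(topic: str, prefix: str = "WFS") -> str:
    """Convert a topic title to a session slug,
    e.g. "User Authentication System" -> "WFS-user-authentication-system"."""
    slug = topic.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # non-alphanumeric runs become one hyphen
    slug = slug.strip("-")                   # no leading/trailing hyphens
    return f"{prefix}-{slug}"
```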
```
.workflow/WFS-[topic-slug]/
└── .brainstorming/
├── session-summary.md # Main session documentation
├── synthesis-analysis.md # Cross-role integration
├── recommendations.md # Prioritized solutions
├── system-architect/ # Architecture perspective
│ ├── analysis.md
│ └── technical-specifications.md
├── ui-designer/ # Design perspective
│ ├── analysis.md
│ └── user-experience-plan.md
├── product-manager/ # Product perspective
│ ├── analysis.md
│ └── business-requirements.md
├── data-architect/ # Data perspective
│ ├── analysis.md
│ └── data-model-design.md
├── test-strategist/ # Testing perspective
│ ├── analysis.md
│ └── test-strategy-plan.md
├── security-expert/ # Security perspective
│ ├── analysis.md
│ └── security-assessment.md
├── user-researcher/ # User research perspective
│ ├── analysis.md
│ └── user-insights.md
├── business-analyst/ # Business analysis perspective
│ ├── analysis.md
│ └── process-optimization.md
├── innovation-lead/ # Innovation perspective
│ ├── analysis.md
│ └── future-roadmap.md
└── feature-planner/ # Feature planning perspective
├── analysis.md
└── feature-specifications.md
```
## Session Metadata
Each brainstorming session maintains metadata in `session-summary.md` header:
```markdown
# Brainstorming Session: [Topic]
**Session ID**: WFS-[topic-slug]
**Topic**: [Challenge description]
**Mode**: creative|analytical|strategic
**Perspectives**: [role1, role2, role3...]
**Facilitator**: conceptual-planning-agent
**Date**: YYYY-MM-DD
## Session Overview
[Brief session description and objectives]
```
## Quality Standards
- **Clear Structure**: Follow Explore → Ideate → Converge → Document phases
- **Diverse Perspectives**: Include multiple role viewpoints
- **Actionable Outputs**: Generate concrete next steps
- **Comprehensive Documentation**: Capture all insights and recommendations
## Unified Workflow Integration
### Document-State Separation
Following unified workflow system principles:
- **Markdown Files** → Brainstorming insights, role analyses, synthesis results
- **JSON Files** → Session state, role completion tracking, workflow coordination
- **Auto-sync** → Integration with `workflow-session.json` for seamless workflow transition
### Session Coordination
Brainstorming sessions integrate with the unified workflow system:
```json
// workflow-session.json integration
{
"session_id": "WFS-[topic-slug]",
"type": "complex", // brainstorming typically creates complex workflows
"current_phase": "PLAN", // conceptual phase
"brainstorming": {
"status": "active|completed",
"mode": "creative|analytical|strategic",
"roles_completed": ["system-architect", "ui-designer"],
"current_role": "data-architect",
"output_directory": ".workflow/WFS-[topic-slug]/.brainstorming/",
"agent_document_paths": {
"system-architect": ".workflow/WFS-[topic-slug]/.brainstorming/system-architect/",
"ui-designer": ".workflow/WFS-[topic-slug]/.brainstorming/ui-designer/",
"product-manager": ".workflow/WFS-[topic-slug]/.brainstorming/product-manager/",
"data-architect": ".workflow/WFS-[topic-slug]/.brainstorming/data-architect/"
}
}
}
```
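The state initialization can be sketched as a function that builds this structure. The field names come from the example above; the real command logic (and any `init_brainstorm_state` helper) is an assumption for illustration:

```python
def init_brainstorm_state(topic_slug: str, mode: str, roles: list[str]) -> dict:
    """Build the initial workflow-session.json content for a brainstorming session."""
    base = f".workflow/{topic_slug}/.brainstorming/"
    return {
        "session_id": topic_slug,
        "type": "complex",          # brainstorming typically creates complex workflows
        "current_phase": "PLAN",
        "brainstorming": {
            "status": "active",
            "mode": mode,
            "roles_completed": [],
            "current_role": roles[0] if roles else None,
            # One output directory per selected role only
            "agent_document_paths": {role: base + role + "/" for role in roles},
            "output_directory": base,
        },
    }
```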
### Directory Auto-Creation
Before starting brainstorming session:
```bash
# Create workflow structure and ONLY selected agent directories
mkdir -p .workflow/WFS-[topic-slug]/.brainstorming/
# Create directories for the selected agents only
selected_agents=(system-architect ui-designer product-manager)  # example selection
for agent in "${selected_agents[@]}"; do
  mkdir -p ".workflow/WFS-[topic-slug]/.brainstorming/${agent}"
done
```
### Agent Document Assignment Protocol
When coordinating with conceptual-planning-agent, ALWAYS specify exact output location:
**Correct Agent Delegation:**
```
Task(conceptual-planning-agent): "Conduct brainstorming analysis for: [topic]. Use [mode] approach. Required perspective: [role].
Load role definition using: ~/.claude/scripts/plan-executor.sh [role]
OUTPUT REQUIREMENT: Save all generated documents to: .workflow/WFS-[topic-slug]/.brainstorming/[role]/
- analysis.md (main perspective analysis)
- [role-specific-output].md (specialized deliverable)
"
```
### Brainstorming Output
The brainstorming phase produces comprehensive role-based analysis documents that serve as input for subsequent workflow phases.

# Workflow Complexity Decision Tree
## Task Classification
```
Task Type?
├── Single file/bug fix → Simple Workflow
├── Multi-file feature → Medium Workflow
├── System changes → Complex Workflow
└── Uncertain complexity → Start with Medium, escalate if needed
```
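The decision tree reads as a simple classifier. A minimal sketch, assuming illustrative task-type names; the default branch implements the "start with Medium, escalate if needed" rule:

```python
def classify_workflow(task_type: str) -> str:
    """Map a coarse task type to a workflow pattern per the decision tree."""
    mapping = {
        "single-file-fix": "simple",
        "multi-file-feature": "medium",
        "system-change": "complex",
    }
    # Uncertain complexity: start with medium and escalate if needed
    return mapping.get(task_type, "medium")
```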
## Complexity Patterns
### Always Use Simple Workflow For:
- Bug fixes in single files
- Minor UI adjustments
- Text/message updates
- Simple validation additions
- Quick documentation fixes
### Always Use Medium Workflow For:
- New feature implementations
- Multi-component changes
- API endpoint additions
- Database schema updates
- Integration implementations
### Always Use Complex Workflow For:
- Architecture changes
- Security implementations
- Performance optimizations
- Migration projects
- System integrations
- Authentication/authorization systems
## Workflow Pattern Matrix
| Task Type | Recommended Workflow | Agent Sequence | Iteration Requirements |
|-----------|---------------------|----------------|----------------------|
| Bug Fix (Simple) | Simple | code-developer → review | Minimal |
| Bug Fix (Complex) | Medium | planning → developer → review | 1 round |
| New Feature (Small) | Simple | developer → review | Minimal |
| New Feature (Large) | Medium | planning → developer → review | 1-2 rounds |
| Architecture Changes | Complex | planning → developer → review → iterate | Multiple rounds |
| Security Implementation | Complex | planning → developer → review → validate | Mandatory multiple rounds |
| Performance Optimization | Complex | planning → developer → review → test | Performance validation |
| Prototype Development | Simple | developer → minimal review | Fast |
## Progressive Complexity Strategy
```bash
# Start simple and escalate as needed
/workflow simple "initial implementation"
# If complexity emerges during development:
/workflow medium "enhance with additional requirements"
# If system-wide impact discovered:
/workflow complex "complete system integration"
```

# Conceptual Planning Agent
**Agent Definition**: See @~/.claude/agents/conceptual-planning-agent.md
**Integration Principles**: See @~/.claude/workflows/brainstorming-principles.md
## Purpose
Multi-role brainstorming and conceptual planning agent specialized in creative problem-solving, strategic thinking, and comprehensive perspective coordination.
## Core Capabilities
### Brainstorming Facilitation
- **Multi-Perspective Coordination** → Orchestrate insights from different role perspectives
- **Creative Technique Application** → Apply SCAMPER, Six Thinking Hats, and other proven methods
- **Structured Ideation** → Guide systematic idea generation and evaluation processes
- **Session Documentation** → Create comprehensive brainstorming records and summaries
### Strategic Analysis
- **Systems Thinking** → Analyze complex interdependencies and relationships
- **Scenario Planning** → Explore multiple future possibilities and outcomes
- **Strategic Framework Application** → Use established strategic analysis tools
- **Long-term Vision Development** → Create compelling future state visions
### Multi-Role Perspective Integration
- **Role-Based Analysis** → Channel different expertise areas and mental models
- **Perspective Synthesis** → Combine insights from multiple viewpoints into coherent solutions
- **Conflict Resolution** → Address tensions between different role perspectives
- **Comprehensive Coverage** → Ensure all relevant aspects are considered
## Execution Patterns
### Brainstorming Session Protocol
**Input Processing**:
```
Topic: [Challenge or opportunity description]
Mode: [creative|analytical|strategic]
Perspectives: [Selected role perspectives]
Context: [Current situation and constraints]
```
**Execution Flow**:
```
1. Challenge Analysis → Define scope, constraints, success criteria
2. Perspective Setup → Establish role contexts and viewpoints
3. Ideation Phase → Generate ideas using appropriate techniques
4. Convergence Phase → Evaluate, synthesize, prioritize solutions
5. Documentation → Create structured session records
```
### Multi-Role Perspective Execution
**Available Roles and Contexts**:
**Product Manager Perspective**:
- Focus: User needs, business value, market positioning
- Questions: What do users want? How does this create business value?
- Output: User stories, business cases, market analysis
**System Architect Perspective**:
- Focus: Technical architecture, scalability, integration
- Questions: How does this scale? What are technical constraints?
- Output: Architecture diagrams, technical requirements, system design
**UI Designer Perspective**:
- Focus: User experience, interface design, usability
- Questions: How do users interact? What's the optimal user journey?
- Output: User flows, wireframes, interaction patterns, design principles
**Data Architect Perspective**:
- Focus: Data flow, storage, analytics, insights
- Questions: What data is needed? How is it processed and analyzed?
- Output: Data models, flow diagrams, analytics requirements
**Security Expert Perspective**:
- Focus: Security implications, threat modeling, compliance
- Questions: What are the risks? How do we protect against threats?
- Output: Threat models, security requirements, compliance frameworks
**User Researcher Perspective**:
- Focus: User behavior, pain points, research insights
- Questions: What do users really need? What problems are we solving?
- Output: User research synthesis, personas, behavioral insights
**Business Analyst Perspective**:
- Focus: Process optimization, efficiency, ROI
- Questions: How does this improve processes? What's the return on investment?
- Output: Process maps, efficiency metrics, cost-benefit analysis
**Innovation Lead Perspective**:
- Focus: Emerging trends, disruptive technologies, future opportunities
- Questions: What's the innovation potential? What trends are relevant?
- Output: Technology roadmaps, trend analysis, innovation opportunities
### Creative Technique Application
**SCAMPER Method**:
- **Substitute**: What can be substituted or replaced?
- **Combine**: What can be combined or merged?
- **Adapt**: What can be adapted from elsewhere?
- **Modify**: What can be magnified, minimized, or modified?
- **Put to other uses**: How else can this be used?
- **Eliminate**: What can be removed or simplified?
- **Reverse**: What can be rearranged or reversed?
**Six Thinking Hats**:
- **White Hat**: Facts, information, data
- **Red Hat**: Emotions, feelings, intuition
- **Black Hat**: Critical judgment, caution, problems
- **Yellow Hat**: Optimism, benefits, positive thinking
- **Green Hat**: Creativity, alternatives, new ideas
- **Blue Hat**: Process control, meta-thinking
**Additional Techniques**:
- **Mind Mapping**: Visual idea exploration and connection
- **Brainstorming**: Free-flowing idea generation
- **Brainwriting**: Silent idea generation and building
- **Random Word**: Stimulus-based creative thinking
- **What If**: Scenario-based exploration
- **Assumption Challenging**: Question fundamental assumptions
### Mode-Specific Execution
**Creative Mode**:
- Emphasize divergent thinking and wild ideas
- Apply creative techniques extensively
- Encourage "what if" thinking and assumption challenging
- Focus on novel and unconventional solutions
**Analytical Mode**:
- Use structured analysis frameworks
- Apply root cause analysis and logical thinking
- Emphasize evidence-based reasoning
- Focus on systematic problem-solving
**Strategic Mode**:
- Apply strategic thinking frameworks
- Use systems thinking and long-term perspective
- Consider competitive dynamics and market forces
- Focus on strategic positioning and advantage
## Documentation Standards
### Session Summary Generation
Generate comprehensive session documentation including:
- Session metadata and configuration
- Challenge definition and scope
- Key insights and patterns
- Generated ideas with descriptions
- Perspective analysis from each role
- Evaluation and prioritization
- Recommendations and next steps
### Idea Documentation
For each significant idea, create detailed documentation:
- Concept description and core mechanism
- Multi-perspective analysis and implications
- Feasibility assessment (technical, resource, timeline)
- Impact potential (user, business, technical)
- Implementation considerations and prerequisites
- Success metrics and validation approach
- Risk assessment and mitigation strategies
### Integration Preparation
When brainstorming integrates with workflows:
- Synthesize requirements suitable for planning phase
- Prioritize solutions by feasibility and impact
- Prepare structured input for workflow systems
- Maintain traceability between brainstorming and implementation
## Output Format Standards
### Brainstorming Session Output
```
BRAINSTORMING_SUMMARY: [Comprehensive session overview]
CHALLENGE_DEFINITION: [Clear problem space definition]
KEY_INSIGHTS: [Major discoveries and patterns]
IDEA_INVENTORY: [Structured list of all generated ideas]
TOP_CONCEPTS: [5 most promising solutions with analysis]
PERSPECTIVE_SYNTHESIS: [Integration of role-based insights]
FEASIBILITY_ASSESSMENT: [Technical and resource evaluation]
IMPACT_ANALYSIS: [Expected outcomes and benefits]
RECOMMENDATIONS: [Prioritized next steps and actions]
WORKFLOW_INTEGRATION: [If applicable, workflow handoff preparation]
```
### Multi-Role Analysis Output
```
ROLE_COORDINATION: [How perspectives were integrated]
PERSPECTIVE_INSIGHTS: [Key insights from each role]
SYNTHESIS_RESULTS: [Combined perspective analysis]
CONFLICT_RESOLUTION: [How role conflicts were addressed]
COMPREHENSIVE_COVERAGE: [Confirmation all aspects considered]
```
## Quality Standards
### Effective Session Facilitation
- **Clear Structure** → Follow defined phases and maintain session flow
- **Inclusive Participation** → Ensure all perspectives are heard and valued
- **Creative Environment** → Maintain judgment-free ideation atmosphere
- **Productive Tension** → Balance creativity with practical constraints
- **Actionable Outcomes** → Generate concrete next steps and recommendations
### Perspective Integration
- **Authentic Representation** → Accurately channel each role's mental models
- **Balanced Coverage** → Give appropriate attention to all perspectives
- **Constructive Synthesis** → Combine insights into stronger solutions
- **Conflict Navigation** → Address perspective tensions constructively
- **Comprehensive Analysis** → Ensure no critical aspects are overlooked
### Documentation Quality
- **Structured Capture** → Organize insights and ideas systematically
- **Clear Communication** → Present complex ideas in accessible format
- **Decision Support** → Provide frameworks for evaluating options
- **Implementation Ready** → Prepare outputs for next development phases
- **Traceability** → Maintain clear links between ideas and analysis
## Dynamic Role Definition Loading
### Role-Based Planning Template Integration
The conceptual planning agent dynamically loads role-specific capabilities using the planning template system:
**Dynamic Role Loading Process:**
1. **Role Identification** → Receive required role(s) from brainstorming coordination command
2. **Template Loading** → Use Bash tool to execute `~/.claude/scripts/plan-executor.sh [role]`
3. **Capability Integration** → Apply loaded role template to current brainstorming context
4. **Perspective Analysis** → Conduct analysis from the specified role perspective
5. **Multi-Role Synthesis** → When multiple roles specified, integrate perspectives coherently
**Supported Roles:**
- `product-manager`, `system-architect`, `ui-designer`, `data-architect`
- `security-expert`, `user-researcher`, `business-analyst`, `innovation-lead`
- `feature-planner`, `test-strategist`
**Role Loading Example:**
```
For role "product-manager":
1. Execute: Bash(~/.claude/scripts/plan-executor.sh product-manager)
2. Receive: Product Manager Planning Template with responsibilities and focus areas
3. Apply: Template guidance to current brainstorming topic
4. Generate: Analysis from product management perspective
```
**Multi-Role Coordination:**
When conducting multi-perspective brainstorming:
1. Load each required role template sequentially
2. Apply each perspective to the brainstorming topic
3. Synthesize insights across all loaded perspectives
4. Identify convergent themes and resolve conflicts
5. Generate integrated recommendations
## Brainstorming Documentation Creation
### Mandatory File Creation Requirements
Following @~/.claude/workflows/brainstorming-principles.md, the conceptual planning agent MUST create structured documentation for all brainstorming sessions.
**Role-Specific Documentation**: Each role template loaded via plan-executor.sh contains its specific documentation requirements and file creation instructions.
### File Creation Protocol
1. **Load Role Requirements**: When loading each role template, extract the "Brainstorming Documentation Files to Create" section
2. **Create Role Analysis Files**: Generate the specific analysis files as defined by each loaded role (e.g., `product-manager-analysis.md`)
3. **Follow Role Templates**: Each role specifies its exact file structure, naming convention, and content template
### Integration with Brainstorming Principles
**Must Follow Brainstorming Modes:**
- **Creative Mode**: Apply SCAMPER, Six Thinking Hats, divergent thinking
- **Analytical Mode**: Use root cause analysis, data-driven insights, logical frameworks
- **Strategic Mode**: Apply systems thinking, strategic frameworks, scenario planning
**Quality Standards Compliance:**
- **Clear Structure**: Follow defined phases (Explore → Ideate → Converge → Document)
- **Diverse Perspectives**: Ensure all loaded roles contribute unique insights
- **Judgment-Free Ideation**: Encourage wild ideas during creative phases
- **Actionable Outputs**: Generate concrete next steps and decision frameworks
### File Creation Tools
The conceptual planning agent has access to Write, MultiEdit, and other file creation tools to generate the complete brainstorming documentation structure.
In summary, the conceptual planning agent provides comprehensive brainstorming and strategic analysis with dynamic role-based perspectives. It creates mandatory documentation following brainstorming principles and integrates fully with the planning template and workflow management systems.

# Workflow System Core Principles
## Architecture Philosophy
### Document-State Separation
**"Documents store plans, JSON manages state"**
- **Markdown Files** → Planning, requirements, task structure, implementation strategies
- **JSON Files** → Execution state, progress tracking, session metadata, dynamic changes
- **Auto-sync** → Bidirectional coordination with clear ownership rules
### Progressive Complexity
**"Minimal overhead → comprehensive structure"**
- **Simple** → Lightweight JSON + optional docs
- **Medium** → Structured planning + conditional documents
- **Complex** → Complete document suite + full coordination
### Embedded Document Logic
**"No command dependencies for document operations"**
- **Built-in** → Document splitting internal to commands
- **Trigger-based** → Auto-splitting on complexity/task thresholds
- **Maintenance** → docs:manage for manual operations only
### Command Pre-execution Protocol
**"All commands check active session for context"**
Commands automatically discover and inherit context from active sessions for seamless workflow integration.
## Fundamental Design Patterns
### Session-First Architecture
- All workflow operations inherit from active session context
- Multi-session support with single active session pattern
- Context switching preserves complete state
### Hierarchical Task Management
- JSON-based task definitions with up to 3 levels of decomposition
- Bidirectional sync between task files and visualization
- Progress tracking with dependency management
### Complexity-Driven Structure
- File structure scales automatically with task complexity
- Document generation triggered by complexity thresholds
- Progressive enhancement without breaking simple workflows
### Real-time Coordination
- TodoWrite tool provides immediate task visibility
- Persistent TODO_LIST.md maintains cross-session continuity
- Agent coordination through unified task interface
## Quality Assurance Principles
### Data Integrity
- Single source of truth for each data type
- Automatic validation and consistency checks
- Error recovery with graceful degradation
### Performance Guidelines
- Lazy loading of complex structures
- Minimal overhead for simple workflows
- Real-time updates without blocking operations
### Extensibility Rules
- Plugin architecture for specialized agents
- Template-based document generation
- Configurable complexity thresholds
---
**Core Philosophy**: Consistent, scalable workflow management that stays lightweight for basic tasks and grows into a comprehensive structure for complex projects.

# Workflow File Structure Standards
## Overview
This document defines directory layouts, file naming conventions, and progressive complexity structures for workflow sessions.
## Progressive Structure System
**Complexity → Structure Level**
File structure scales with task complexity to minimize overhead for simple tasks while providing comprehensive organization for complex workflows.
### Level 0: Minimal Structure (<5 tasks)
**Target**: Simple workflows with clear, limited scope
```
.workflow/WFS-[topic-slug]/
├── workflow-session.json # Session metadata and state
├── [.brainstorming/] # Optional brainstorming phase
├── [.chat/] # Gemini CLI interaction sessions
│ └── chat-*.md # Saved chat sessions with timestamps
├── IMPL_PLAN.md # Combined planning document
├── .summaries/ # Task completion summaries
│ └── IMPL-*.md # Individual task summaries
└── .task/
└── impl-*.json # Task definitions
```
### Level 1: Enhanced Structure (5-15 tasks)
**Target**: Medium complexity workflows with multiple phases
```
.workflow/WFS-[topic-slug]/
├── workflow-session.json # Session metadata and state
├── [.brainstorming/] # Optional brainstorming phase
├── [.chat/] # Gemini CLI interaction sessions
│ └── chat-*.md # Saved chat sessions with timestamps
├── IMPL_PLAN.md # Combined planning document
├── TODO_LIST.md # Auto-triggered progress tracking
├── .summaries/ # Task completion summaries
│ ├── IMPL-*.md # Main task summaries
│ └── IMPL-*.*.md # Subtask summaries
└── .task/
├── impl-*.json # Main task definitions
└── impl-*.*.json # Subtask definitions (up to 3 levels)
```
### Level 2: Complete Structure (>15 tasks)
**Target**: Complex workflows with extensive documentation needs
```
.workflow/WFS-[topic-slug]/
├── workflow-session.json # Session metadata and state
├── [.brainstorming/] # Optional brainstorming phase
├── [.chat/] # Gemini CLI interaction sessions
│ ├── chat-*.md # Saved chat sessions with timestamps
│ └── analysis-*.md # Comprehensive analysis results
├── IMPL_PLAN.md # Comprehensive planning document
├── TODO_LIST.md # Progress tracking and monitoring
├── .summaries/ # Task completion summaries
│ ├── IMPL-*.md # Main task summaries
│ ├── IMPL-*.*.md # Subtask summaries
│ └── IMPL-*.*.*.md # Detailed subtask summaries
└── .task/
├── impl-*.json # Task hierarchy (max 3 levels deep)
├── impl-*.*.json # Subtasks
└── impl-*.*.*.json # Detailed subtasks
```
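The three levels above suggest a direct mapping from task count to structure level. A sketch using the thresholds stated in the headings (`<5`, `5-15`, `>15`):

```python
def structure_level(task_count: int) -> int:
    """Pick the progressive structure level from the task count."""
    if task_count < 5:
        return 0   # minimal structure
    if task_count <= 15:
        return 1   # enhanced structure
    return 2       # complete structure
```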
## File Naming Conventions
### Session Identifiers
**Format**: `WFS-[topic-slug]`
- Convert topic to lowercase with hyphens (e.g., "User Auth System" → `WFS-user-auth-system`)
- Add `-NNN` suffix only if conflicts exist (e.g., `WFS-payment-integration-002`)
### Task File Naming
**Hierarchical ID Format**:
```
impl-1 # Main task
impl-1.1 # Subtask of impl-1
impl-1.1.1 # Detailed subtask of impl-1.1
impl-1.2 # Another subtask of impl-1
impl-2 # Another main task
```
**Maximum Depth**: 3 levels (impl-N.M.P)
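The ID format and depth limit can be validated with a small parser. A sketch of the stated convention; the `parse_task_id` helper itself is illustrative:

```python
import re

def parse_task_id(task_id: str) -> list[int]:
    """Validate a hierarchical task ID like 'impl-1.2.3' (max 3 levels)
    and return its numeric path."""
    m = re.fullmatch(r"impl-(\d+(?:\.\d+){0,2})", task_id)
    if not m:
        raise ValueError(f"invalid task id: {task_id}")
    return [int(part) for part in m.group(1).split(".")]
```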
### Document Naming
- `workflow-session.json` - Session state (required)
- `IMPL_PLAN.md` - Planning document (required)
- `TODO_LIST.md` - Progress tracking (auto-generated when needed)
- Chat sessions: `chat-YYYYMMDD-HHMMSS.md`
- Analysis results: `analysis-[topic].md`
- Task summaries: `IMPL-[task-id]-summary.md`
## Directory Structure Rules
### Required Directories
- `.task/` - Always present, contains JSON task definitions
- `.summaries/` - Always present, contains task completion documentation
### Optional Directories
- `.brainstorming/` - Present when brainstorming phase was used
- `.chat/` - Present when Gemini CLI sessions were saved
### Directory Permissions and Access
- All workflow directories are project-local
- Session registry at `.workflow/session_status.jsonl` (global)
- Individual sessions in `.workflow/WFS-[topic-slug]/` (session-specific)
## Document Generation Triggers
### Automatic Document Creation
**Based on complexity assessment**:
| **Complexity** | **IMPL_PLAN.md** | **TODO_LIST.md** | **Task Files** |
|----------------|------------------|------------------|----------------|
| Simple (<5 tasks) | Always | No | impl-*.json |
| Medium (5-15 tasks) | Always | Auto-trigger* | impl-*.*.json |
| Complex (>15 tasks) | Always | Always | impl-*.*.*.json |
**Auto-trigger conditions (*):**
- Tasks > 5 OR modules > 3 OR estimated effort > 4h OR complex dependencies
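The table and the auto-trigger conditions combine into a single decision, sketched here with assumed parameter names:

```python
def todo_list_required(tasks, modules=0, effort_hours=0, complex_dependencies=False):
    """Decide whether TODO_LIST.md is generated, per the complexity table above."""
    if tasks > 15:   # Complex: always
        return True
    if tasks < 5:    # Simple: never
        return False
    # Medium: auto-trigger conditions
    return tasks > 5 or modules > 3 or effort_hours > 4 or complex_dependencies
```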
### Document Template Standards
#### IMPL_PLAN.md Structure
```markdown
# Implementation Plan: [Session Topic]
## Overview
[High-level description]
## Requirements
[Functional and non-functional requirements]
## [Brainstorming Integration]
[If .brainstorming/ exists - reference analysis results]
## Implementation Strategy
[Technical approach and phases]
## Success Criteria
[Acceptance criteria and validation]
## Risk Assessment
[Potential issues and mitigation strategies]
```
#### TODO_LIST.md Structure
```markdown
# Task Progress List: [Session Topic]
## Progress Overview
- **Total Tasks**: X
- **Completed**: Y (Z%)
- **In Progress**: N
- **Pending**: M
## Implementation Tasks
### Main Tasks
- [ ] **IMPL-001**: [Task Description] → [📋 Details](./.task/impl-001.json)
- [x] **IMPL-002**: [Completed Task] → [📋 Details](./.task/impl-002.json) | [✅ Summary](./.summaries/IMPL-002-summary.md)
### Subtasks (Auto-expanded when active)
- [ ] **IMPL-001.1**: [Subtask Description] → [📋 Details](./.task/impl-001.1.json)
```
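The Progress Overview block above could be generated from task state roughly like this (a sketch; the status vocabulary is assumed, and a real implementation would read statuses from the `impl-*.json` files):

```python
from collections import Counter

def progress_overview(statuses):
    """Render the Progress Overview block from a {task_id: status} map."""
    counts = Counter(statuses.values())
    total = len(statuses)
    done = counts.get("completed", 0)
    pct = round(100 * done / total) if total else 0
    return (f"## Progress Overview\n"
            f"- **Total Tasks**: {total}\n"
            f"- **Completed**: {done} ({pct}%)\n"
            f"- **In Progress**: {counts.get('in_progress', 0)}\n"
            f"- **Pending**: {counts.get('pending', 0)}")
```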
## Chat Session Management
### Chat Directory Structure
```
.chat/
├── chat-YYYYMMDD-HHMMSS.md # Individual chat sessions with timestamps
├── analysis-[topic].md # Comprehensive analysis results
└── context-[phase].md # Phase-specific context gathering
```
### Chat Session Template
```markdown
# Chat Session: [Timestamp] - [Topic]
## Query
[Original user inquiry]
## Template Used
[Auto-selected template name and rationale]
## Context
[Files and patterns included in analysis]
## Gemini Response
[Complete response from Gemini CLI]
## Key Insights
- [Important findings]
- [Architectural insights]
- [Implementation recommendations]
## Links
- [🔙 Back to Workflow](../workflow-session.json)
- [📋 Implementation Plan](../IMPL_PLAN.md)
```
## Summary Management
### Summary Directory Structure
```
.summaries/
├── IMPL-001-summary.md # Main task summaries
├── IMPL-001.1-summary.md # Subtask summaries
└── IMPL-001.1.1-summary.md # Detailed subtask summaries
```
### Summary Template
```markdown
# Task Summary: [Task-ID] [Task Name]
## What Was Done
- [Files modified/created]
- [Functionality implemented]
- [Key changes made]
## Issues Resolved
- [Problems solved]
- [Bugs fixed]
## Links
- [🔙 Back to Task List](../TODO_LIST.md#[Task-ID])
- [📋 Implementation Plan](../IMPL_PLAN.md#[Task-ID])
```
## Brainstorming Integration
When `.brainstorming/` directory exists, documents MUST reference brainstorming results:
### In IMPL_PLAN.md
```markdown
## Brainstorming Integration
Based on multi-role analysis from `.brainstorming/`:
- **Architecture Insights**: [Reference system-architect/analysis.md]
- **User Experience Considerations**: [Reference ui-designer/analysis.md]
- **Technical Requirements**: [Reference relevant role analyses]
- **Implementation Priorities**: [Reference synthesis-analysis.md]
```
### In JSON Task Context
```json
{
"context": {
"brainstorming_refs": [
".workflow/WFS-[topic-slug]/.brainstorming/system-architect/technical-specifications.md",
".workflow/WFS-[topic-slug]/.brainstorming/ui-designer/user-experience-plan.md"
],
"requirements": ["derived from brainstorming analysis"]
}
}
```
## Quality Control
### File System Validation
- Verify directory structure matches complexity level
- Validate file naming conventions
- Check for required vs optional directories
- Ensure proper file permissions
### Cross-Reference Validation
- All document links point to existing files
- Task IDs consistent across JSON files and TODO_LIST.md
- Brainstorming references are valid when .brainstorming/ exists
- Summary links properly reference parent tasks
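The "links point to existing files" check could be sketched as follows; the helper name and regex are illustrative, and only relative `./`/`../` links are covered:

```python
import re
from pathlib import Path

# Capture relative markdown link targets, dropping any #anchor suffix
LINK_RE = re.compile(r"\[[^\]]*\]\((\.{1,2}/[^)#]+)")

def broken_links(doc_path):
    """Return relative link targets in a workflow document that do not exist on disk."""
    text = Path(doc_path).read_text(encoding="utf-8")
    return [target for target in LINK_RE.findall(text)
            if not (Path(doc_path).parent / target).exists()]
```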
### Performance Considerations
- Lazy directory creation (create only when needed)
- Efficient file structure scanning
- Minimal overhead for simple workflows
- Scalable organization for complex projects
## Error Recovery
### Missing File Scenarios
- **workflow-session.json missing**: Recreate from available documents
- **Required directories missing**: Auto-create with proper structure
- **Template files corrupted**: Regenerate from templates
- **Naming convention violations**: Auto-correct or flag for manual resolution
### Structure Consistency
- Validate structure level matches task complexity
- Auto-upgrade structure when complexity increases
- Maintain backward compatibility during transitions
- Preserve existing content during structure changes
---
**System ensures**: Consistent, scalable file organization with minimal overhead for simple tasks → comprehensive structure for complex projects

# Gemini Agent Templates Overview
**Precise, task-focused templates for actionable agent context gathering.**
## Overview
This document provides focused templates that deliver precise, actionable context for specific tasks rather than generic pattern analysis. Each template targets exact requirements, modification points, and concrete implementation guidance.
## Template Usage Guidelines
### Key Principles
1. **Task-Specific Focus**: Templates target specific tasks rather than broad analysis
2. **Actionable Output**: Provide exact file:line references and concrete guidance
3. **Repository Context**: Extract patterns specific to the actual codebase
4. **Precise Scope**: Analyze only what's needed for the immediate task
### When to Use Each Template
**Planning Agent**: Before creating implementation plans for specific features or fixes
- Use when you need to understand exact scope and modification points
- Focus on concrete deliverables rather than architectural overviews
**Code Developer**: Before implementing specific functions, classes, or features
- Use when you need exact insertion points and code structure guidance
- Focus on actionable implementation steps with line references
**Code Review**: After code has been written for a specific task
- Use when reviewing changes against repository-specific standards
- Focus on understanding what was actually implemented and how it fits
**UI Design**: Before creating or modifying specific UI components
- Use when you need component-specific patterns and design system compliance
- Focus on established design language and interaction patterns
**Memory-Gemini-Bridge**: For creating or updating CLAUDE.md files
- Use when establishing hierarchical documentation strategy
- Focus on cross-system compatibility between Claude and Gemini CLI
### Benefits of Task-Focused Approach
1. **Precision**: Get exact modification points instead of general patterns
2. **Efficiency**: 50% reduction in irrelevant analysis
3. **Actionability**: Concrete guidance with file:line references
4. **Context Relevance**: Repository-specific patterns, not generic best practices
5. **Task Alignment**: Analysis directly supports the specific work being done
### Template Customization
Customize templates by:
1. **Specific File Targeting**: Replace `[task-related-files]` with exact patterns for your task
2. **Domain Context**: Add domain-specific file patterns (auth, api, ui, etc.)
3. **Technology Focus**: Include relevant extensions (.tsx for React, .py for Python, etc.)
4. **Task Context**: Specify exact feature or component being worked on
These focused templates provide agents with precise, actionable context for specific tasks, eliminating unnecessary pattern analysis and providing concrete implementation guidance.
## Integration with Intelligent Context
All templates integrate with `gemini-intelligent-context.md` (@~/.claude/workflows/gemini-intelligent-context.md) for:
- **Smart Path Detection** - Automatic file targeting based on analysis type
- **Technology Stack Detection** - Framework and language-specific optimizations
- **Domain Context Mapping** - Intelligent domain-specific pattern matching
- **Dynamic Prompt Enhancement** - Context-aware prompt construction
For complete context detection algorithms and intelligent file targeting, see `gemini-intelligent-context.md`.

# Gemini CLI Core Guidelines
**Streamlined Gemini CLI usage guidelines with parallel execution patterns for enhanced performance.**
## 🎯 Core Command Syntax
### Basic Structure
```bash
gemini --all-files -p "@{file_patterns} analysis_prompt"
```
**Parameters**:
- `--all-files` - Include all files under the current working directory (behavior depends on where the command is run)
- `-p` - Specify prompt content
- `@{pattern}` - File reference pattern
### Parallel Execution Structure
```bash
# Execute multiple Gemini commands concurrently
(
gemini_command_1 &
gemini_command_2 &
gemini_command_3 &
wait # Synchronize all parallel processes
)
```
### ⚠️ Execution Path Dependencies
- `--all-files` loads every file under the current working directory into context memory, so results depend on the execution path
- **For folder-specific analysis**: Navigate to target folder first, then run gemini
- **If errors occur**: Remove `--all-files` and use `@folder` or `@file` in prompts instead
**Example - Analyzing a specific component folder:**
```bash
# Combined command - navigate and analyze in one line
cd src/components/ui && gemini --all-files -p "analyze component structure and patterns in this UI folder"
# For Windows systems
cd src\components\ui && gemini --all-files -p "analyze component structure and patterns in this UI folder"
# Alternative if --all-files fails
cd /project/root && gemini -p "@src/components/ui analyze UI component patterns and structure"
# Cross-platform combined command with fallback (the subshell keeps the cd local, so the fallback runs from the project root)
(cd src/components/ui && gemini --all-files -p "analyze patterns") || gemini -p "@src/components/ui analyze patterns"
```
## 📂 File Reference Rules
### Required Reference Patterns
```bash
# 1. Project guidelines files (REQUIRED)
@{CLAUDE.md,**/*CLAUDE.md}
# 2. Target analysis files
@{src/**/*,lib/**/*} # Source code
@{**/*.{ts,tsx,js,jsx}} # Specific languages
@{**/api/**/*} # Domain-related
# 3. Test files (RECOMMENDED)
@{**/*.test.*,**/*.spec.*}
```
### Domain Pattern Quick Reference
| Domain | Pattern |
|--------|---------|
| **Frontend Components** | `@{src/components/**/*,src/ui/**/*}` |
| **API Endpoints** | `@{**/api/**/*,**/routes/**/*}` |
| **Authentication** | `@{**/*auth*,**/*login*,**/*session*}` |
| **Database** | `@{**/models/**/*,**/db/**/*}` |
| **Configuration** | `@{*.config.*,**/config/**/*}` |
## ⚡ Parallel Agent Execution Guidelines
### Parallel Task Distribution Rules
**Rule 1: Module Independence Analysis**
- Before parallel execution, identify independent modules
- Group modules by dependency level
- Execute only independent modules in parallel
**Rule 2: Resource-Based Concurrency**
- Default: 3 concurrent Gemini processes
- Maximum: 5 concurrent processes (system dependent)
- Reduce if memory/CPU constraints detected
**Rule 3: Synchronization Points**
- Wait for all modules at same dependency level
- Merge results before proceeding to next level
- Global summary only after all modules complete
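Rules 1-3 amount to level-by-level execution with a concurrency cap and a barrier between levels; a minimal sketch, assuming `analyze` wraps a single Gemini invocation for one module:

```python
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENCY = 3  # default from Rule 2; raise to at most 5

def run_by_level(levels, analyze, max_workers=MAX_CONCURRENCY):
    """Execute module analyses level by level: parallel within a level,
    with a synchronization barrier between levels."""
    results = {}
    for level in levels:  # levels: list of lists of independent module names
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            for module, output in zip(level, pool.map(analyze, level)):
                results[module] = output
        # leaving the `with` block waits for the whole level before the next starts
    return results
```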
### Parallel Template Formats
**Directory-Based Parallel Execution**:
```bash
# Navigate to different directories and analyze in parallel
(
cd src/components/ui && gemini --all-files -p "@{CLAUDE.md} analyze UI patterns" &
cd src/components/forms && gemini --all-files -p "@{CLAUDE.md} analyze form patterns" &
cd src/api && gemini --all-files -p "@{CLAUDE.md} analyze API patterns" &
wait
)
```
**File Pattern Parallel Execution**:
```bash
# Use file patterns for parallel analysis (when --all-files not suitable)
gemini -p "@src/components/ui/**/* @{CLAUDE.md} analyze UI implementation" &
gemini -p "@src/components/forms/**/* @{CLAUDE.md} analyze form handling" &
gemini -p "@src/api/auth/**/* @{CLAUDE.md} analyze authentication" &
wait
```
### Standard Sequential Template
```bash
gemini --all-files -p "@{target_files} @{CLAUDE.md,**/*CLAUDE.md}
Analysis Task: [specific task description]
Required Output:
- Specific file:line references
- Executable code examples
- Clear implementation guidance"
```
### Agent-Specific Modes
```bash
# Planning Agent (navigate to project root first)
cd /path/to/project && gemini --all-files -p "@{src/**/*} @{CLAUDE.md,**/*CLAUDE.md}
Task planning analysis: [task] - Extract file modification points, implementation sequence, integration requirements"
# Code Developer (navigate to target directory first)
cd /path/to/project && gemini --all-files -p "@{**/*.{ts,js}} @{**/*test*} @{CLAUDE.md,**/*CLAUDE.md}
Code implementation guidance: [feature] - Extract code patterns, insertion points, testing requirements"
# Code Review (use @file references if --all-files fails)
gemini -p "@modified_files @related_files @CLAUDE.md
Code review: [changes] - Compare against standards, check consistency, identify risks"
# Alternative without --all-files for targeted analysis
gemini -p "@src/components @CLAUDE.md
Component analysis: [specific_component] - Extract patterns and implementation guidance"
```
## 📋 Core Principles
### 1. File Reference Principles
- **Must include**: `@{CLAUDE.md,**/*CLAUDE.md}`
- **Precise targeting**: Use specific file patterns, avoid over-inclusion
- **Logical grouping**: Combine related file patterns
### 2. Prompt Construction Principles
- **Single objective**: Each command completes one analysis task
- **Specific requirements**: Clearly specify required output format
- **Context integration**: Reference project standards and existing patterns
### 3. Output Requirements
- **File references**: Provide specific `file:line` locations
- **Code examples**: Give executable code snippets
- **Implementation guidance**: Clear next-step actions
## 🔧 Common Command Patterns
### Quick Analysis
```bash
# Architecture analysis (navigate to project root first)
cd /project/root && gemini --all-files -p "@{src/**/*} @{CLAUDE.md} system architecture and component relationships"
# Pattern detection (navigate to project root first)
cd /project/root && gemini --all-files -p "@{**/*.ts} @{CLAUDE.md} TypeScript usage patterns"
# Security review (fallback to @file if --all-files fails)
gemini -p "@**/*auth* @CLAUDE.md authentication and authorization implementation patterns"
# Folder-specific analysis (navigate to target folder)
cd /project/src/components && gemini --all-files -p "component structure and patterns analysis"
```
### Parallel Analysis Patterns
```bash
# Parallel architecture analysis by layer
(
cd src/frontend && gemini --all-files -p "analyze frontend architecture" &
cd src/backend && gemini --all-files -p "analyze backend architecture" &
cd src/database && gemini --all-files -p "analyze data layer architecture" &
wait
)
# Parallel pattern detection across modules
gemini -p "@src/components/**/*.tsx analyze React patterns" &
gemini -p "@src/api/**/*.ts analyze API patterns" &
gemini -p "@src/utils/**/*.ts analyze utility patterns" &
wait
# Parallel security review
(
gemini -p "@**/*auth* analyze authentication implementation" &
gemini -p "@**/*permission* analyze authorization patterns" &
gemini -p "@**/*crypto* analyze encryption usage" &
wait
)
```
### Integration Standards
1. **Path awareness**: Navigate to appropriate directory before using `--all-files`
2. **Fallback strategy**: Use `@file` or `@folder` references if `--all-files` errors
3. **Minimal references**: Only reference files you actually need
4. **Self-contained**: Avoid complex cross-file dependencies
5. **Focused analysis**: Use for specific analysis, not general exploration
6. **Result reuse**: Reuse analysis results when possible
### Parallel Execution Standards
1. **Dependency verification**: Ensure modules are independent before parallel execution
2. **Resource monitoring**: Check system capacity before increasing concurrency
3. **Synchronization discipline**: Always use `wait` after parallel commands
4. **Result aggregation**: Merge outputs from parallel executions properly
5. **Error isolation**: Handle failures in individual parallel tasks gracefully
6. **Performance tracking**: Monitor speedup to validate parallel benefit
## 📊 Parallel Execution Rules
### Rule-Based Parallel Coordination
**Execution Order Rules**:
1. **Level 0 (Leaf Modules)**: Execute all in parallel (max 5)
2. **Level N**: Wait for Level N-1 completion before starting
3. **Root Level**: Process only after all module levels complete
**File Partitioning Rules**:
1. **Size-based**: Split large directories into ~equal file counts
2. **Type-based**: Group by file extension for focused analysis
3. **Logic-based**: Separate by functionality (auth, api, ui, etc.)
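Size-based partitioning (Rule 1) can be approximated by striding the file list across workers; a sketch with an assumed helper name:

```python
def partition_files(files, max_workers=3):
    """Split a directory's files into ~equal chunks, one per parallel Gemini process."""
    chunks = [files[i::max_workers] for i in range(max_workers)]
    return [c for c in chunks if c]  # drop empty chunks when files < workers
```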
**Memory Management Rules**:
1. **Per-process limit**: Each Gemini process uses ~500MB-1GB
2. **Total limit**: Don't exceed 80% system memory
3. **Throttling**: Reduce parallelism if memory pressure detected
**Synchronization Rules**:
1. **Barrier sync**: All tasks at level must complete
2. **Queue sync**: Next task starts when worker available
3. **Async sync**: Collect results as they complete
### Performance Optimization Rules
**When to use parallel execution**:
- Project has >5 independent modules
- Modules have clear separation
- System has adequate resources (>8GB RAM)
- Time savings justify coordination overhead
**When to avoid parallel execution**:
- Small projects (<5 modules)
- Highly interdependent modules
- Limited system resources
- Sequential dependencies required
### Error Handling Rules
**Parallel Failure Recovery**:
1. If one parallel task fails, continue others
2. Retry failed tasks once with reduced scope
3. Fall back to sequential for persistent failures
4. Report all failures at synchronization point
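A minimal sketch of this recovery policy, using plain subprocesses in place of real Gemini invocations (scope reduction on retry is omitted):

```python
import subprocess

def run_with_recovery(commands):
    """Launch all tasks concurrently, let survivors finish if one fails,
    retry each failure once, and report all outcomes at the sync point."""
    procs = [subprocess.Popen(cmd) for cmd in commands]   # launch in parallel
    results = [p.wait() == 0 for p in procs]              # barrier: wait for all
    for i, ok in enumerate(results):
        if not ok:                                        # retry failed tasks once
            results[i] = subprocess.run(commands[i]).returncode == 0
    return results
```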
**Resource Exhaustion Handling**:
1. Detect high memory/CPU usage
2. Pause new parallel tasks
3. Wait for current tasks to complete
4. Resume with reduced concurrency
---
*Enhanced version with parallel execution patterns and coordination rules*

# Gemini Code Developer Template
**Purpose**: Locate exact modification points and provide concrete implementation guidance
## Template Structure
```bash
gemini --all-files -p "@{[target-modification-files]} @{[similar-feature-files]} @{**/*test*/**/*,**/*.test.*,**/*.spec.*}
Implementation guidance for: [specific feature/function to implement]
## Required Analysis:
1. **Exact Modification Points**:
- Find precise locations (file:line) where new code should be added
- Identify existing functions that need modification
- Locate where new imports/dependencies should be added
2. **Similar Code Examples**:
- Find existing implementations similar to what needs to be built
- Extract code patterns that should be followed
- Identify utility functions that can be reused
3. **Code Structure and Patterns**:
- How should the new code be structured based on existing patterns?
- What naming conventions are used for similar features?
- What error handling patterns should be followed?
4. **Testing Requirements**:
- Find similar test cases for reference
- Identify testing utilities and helpers available
- Determine what specific test scenarios are needed
5. **Integration and Dependencies**:
- What existing functions need to call the new code?
- Which modules need to import the new functionality?
- What configuration or setup is required?
## Output Requirements:
- **Precise insertion points**: Exact file:line locations for new code
- **Code skeleton**: Structure based on existing patterns with placeholder functions
- **Concrete examples**: Copy-paste reference code from similar features
- **Test template**: Specific test cases needed based on existing patterns
- **Integration checklist**: Exact functions/files that need to call or import new code
Focus on actionable implementation guidance with specific line references."
```
## Intelligent Usage Examples
```python
# React component implementation
def code_developer_context(user_input):
context = build_intelligent_context(
user_input="Create user profile edit component",
analysis_type="code-developer-context",
domains=['frontend', 'ui'],
tech_stack=['React', 'TypeScript', 'Tailwind']
)
return f"""
gemini --all-files -p "@{{src/components/**/*,src/pages/**/*}}
@{{**/*profile*,**/*user*,**/*form*}} @{{**/*.test.*,**/*.spec.*}}
@{{CLAUDE.md,frontend/CLAUDE.md,react/CLAUDE.md}}
Implementation guidance for: User profile edit component with form validation
- Profile form fields: name, email, bio, avatar upload
- Form validation using existing patterns
- State management integration with user context
Focus on exact insertion points and component structure based on similar forms."
"""
```
## Context Application
- Locate exact code insertion and modification points
- Follow repository-specific patterns and conventions
- Reuse existing utilities and established approaches
- Create comprehensive test coverage based on similar features
## Usage Guidelines
**Use Code Developer template when**:
- Before implementing specific functions, classes, or features
- You need exact insertion points and code structure guidance
- Focus on actionable implementation steps with line references
**Template focuses on**:
- Precise, task-focused analysis for actionable implementation
- Exact file:line references and concrete guidance
- Repository context with patterns specific to the actual codebase
- Specific scope analyzing only what's needed for the immediate task

# Gemini Code Review Template
**Purpose**: Understand specific changes and review against repository standards
## Template Structure
```bash
gemini --all-files -p "@{[modified-files]} @{[related-files]} @{[test-files-for-changes]}
Review context for recent changes:
Modified files: [list of specific files that were changed]
Original task: [what was being implemented]
## Required Analysis:
1. **Change Understanding**:
- What was the specific goal of these modifications?
- Which functions/classes were added or modified?
- How do the changes relate to the original task requirements?
2. **Repository Convention Compliance**:
- Do the changes follow naming conventions used in similar files?
- Is the code structure consistent with existing patterns?
- Are imports, error handling, and logging consistent?
3. **Impact and Integration Analysis**:
- What other code might be affected by these changes?
- Are all necessary integration points properly handled?
- Do the changes maintain backward compatibility?
4. **Test Coverage and Quality**:
- Are the specific changes properly tested?
- Do test cases cover edge cases similar to existing tests?
- Is the test structure consistent with repository patterns?
5. **Security and Performance**:
- Are there security concerns specific to these changes?
- Do the changes follow performance patterns used elsewhere?
- Are there potential bottlenecks introduced?
## Output Requirements:
- **Specific issues**: Point to exact problems with file:line references
- **Convention violations**: Compare against similar code in the repository
- **Missing coverage**: Identify untested code paths with test examples
- **Integration gaps**: List functions/modules that need updates
- **Improvement suggestions**: Provide specific code improvements based on repository patterns
Focus on change-specific review rather than generic quality assessment."
```
## Intelligent Usage Examples
```python
# Authentication system review
def code_review_context(user_input):
context = build_intelligent_context(
user_input="Review OAuth2 implementation changes",
analysis_type="code-review-context",
domains=['auth', 'security', 'api'],
tech_stack=['Node.js', 'JWT', 'Redis']
)
return f"""
gemini --all-files -p "@{{**/auth/**/*,**/middleware/*auth*}}
@{{**/oauth/**/*,**/session/**/*}} @{{**/*test*/*auth*}}
@{{CLAUDE.md,auth/CLAUDE.md,security/CLAUDE.md}}
Review context for recent OAuth2 implementation changes:
Modified files: auth/oauth-controller.js, middleware/auth-middleware.js
Original task: Implement OAuth2 authorization code flow with PKCE
Focus on security compliance and existing authentication patterns."
"""
```
## Context Application
- Review changes against repository-specific standards
- Compare implementation approach with similar features
- Validate test coverage for the specific functionality implemented
- Ensure integration points are properly handled
## Usage Guidelines
**Use Code Review template when**:
- After code has been written for a specific task
- You need to review changes against repository-specific standards
- Focus on understanding what was actually implemented and how it fits
**Template focuses on**:
- Change-specific review rather than generic quality assessment
- Specific issues with exact file:line references
- Repository context comparing against similar code
- Precise scope analyzing only what's relevant to the changes made

# Gemini Core Analysis Templates
**Comprehensive templates for core codebase analysis using Gemini CLI.**
## Overview
This document provides core analysis templates for pattern detection, architecture analysis, security assessment, performance optimization, feature tracing, quality analysis, dependencies review, and migration planning.
## Pattern Analysis
### Template Structure
```bash
gemini --all-files -p "@{file_patterns} @{claude_context}
Context: Pattern analysis targeting @{file_patterns}
Guidelines: Include CLAUDE.md standards from @{claude_context}
Analyze this codebase and identify all {target} patterns.
Focus on:
1. Implementation patterns in specified files
2. Compliance with project guidelines from CLAUDE.md
3. Best practices and anti-patterns
4. Usage frequency and distribution across modules
5. Specific examples with file:line references
Include concrete recommendations based on existing patterns."
```
### Intelligent Usage Examples
```python
# Simple pattern detection
def pattern_analysis(user_input):
context = build_intelligent_context(
user_input="React hooks usage patterns",
analysis_type="pattern",
domains=['frontend', 'state'],
tech_stack=['React', 'TypeScript']
)
return f"""
gemini --all-files -p "@{{**/*.{{jsx,tsx,js,ts}}}} @{{**/hooks/**/*,**/context/**/*}}
@{{CLAUDE.md,frontend/CLAUDE.md,react/CLAUDE.md}}
Analyze React hooks patterns in this codebase:
- Custom hooks implementation and naming conventions
- useState/useEffect usage patterns and dependencies
- Context providers and consumers
- Hook composition and reusability
- Performance considerations and optimization
- Compliance with React best practices
- Project-specific patterns from CLAUDE.md
Focus on TypeScript implementations and provide specific file:line examples."
"""
```
## Architecture Analysis
### Template Structure
```bash
gemini --all-files -p "@{module_patterns} @{claude_context}
Context: System architecture analysis at @{module_patterns}
Project structure: @{structure_patterns}
Guidelines: @{claude_context}
Examine the {target} in this application.
Analyze:
1. Component hierarchy and module organization
2. Data flow and state management patterns
3. Dependency relationships and coupling
4. Architectural patterns and design decisions
5. Integration points and boundaries
Map findings to specific files and provide architecture insights."
```
### Intelligent Usage Examples
```python
# Microservices architecture analysis
def architecture_analysis(user_input):
context = build_intelligent_context(
user_input="microservices communication patterns",
analysis_type="architecture",
domains=['api', 'backend'],
tech_stack=['Node.js', 'Docker']
)
return f"""
gemini --all-files -p "@{{**/services/**/*,**/api/**/*,**/gateway/**/*}}
@{{docker-compose*.yml,**/Dockerfile,**/*.proto}}
@{{CLAUDE.md,architecture/CLAUDE.md,services/CLAUDE.md}}
Analyze microservices architecture:
- Service boundaries and single responsibilities
- Inter-service communication patterns (REST, gRPC, events)
- API gateway configuration and routing
- Service discovery and load balancing
- Data consistency and transaction boundaries
- Deployment and orchestration patterns
- Compliance with architectural guidelines
Include service dependency graph and communication flow diagrams."
"""
```
## Security Analysis
### Template Structure
```bash
gemini --all-files -p "@{security_patterns} @{auth_patterns} @{config_patterns}
Context: Security analysis scope @{security_patterns}
Auth modules: @{auth_patterns}
Config files: @{config_patterns}
Guidelines: Security standards from @{claude_context}
Scan for {target} security vulnerabilities.
Check:
1. Authentication and authorization implementations
2. Input validation and sanitization
3. Sensitive data handling and encryption
4. Security headers and configurations
5. Third-party dependency vulnerabilities
Provide OWASP-aligned findings with severity levels and remediation steps."
```
### Intelligent Usage Examples
```python
# OAuth2 security analysis
def security_analysis(user_input):
context = build_intelligent_context(
user_input="OAuth2 authentication vulnerabilities",
analysis_type="security",
domains=['auth', 'security'],
tech_stack=['Node.js', 'JWT']
)
return f"""
gemini --all-files -p "@{{**/auth/**/*,**/oauth/**/*,**/middleware/*auth*}}
@{{**/config/**/*,.env*,**/*.pem,**/*.key}}
@{{CLAUDE.md,security/CLAUDE.md,auth/CLAUDE.md}}
Analyze OAuth2 authentication security:
- Authorization code flow implementation
- Token storage and handling security
- Client authentication and PKCE implementation
- Scope validation and privilege escalation risks
- JWT token signature verification
- Refresh token rotation and revocation
- CSRF and state parameter validation
- Redirect URI validation
Apply OWASP OAuth2 Security Cheat Sheet standards and provide specific vulnerability findings."
"""
```
## Performance Analysis
### Template Structure
```bash
gemini --all-files -p "@{performance_patterns} @{core_patterns}
Context: Performance analysis at @{performance_patterns}
Core modules: @{core_patterns}
Guidelines: Performance standards from @{claude_context}
Analyze {target} performance issues.
Examine:
1. Expensive operations and computational complexity
2. Memory usage and potential leaks
3. Database query efficiency and N+1 problems
4. Network requests and data transfer optimization
5. Rendering performance and re-render cycles
Include performance metrics and optimization recommendations."
```
### Intelligent Usage Examples
```python
# React rendering performance analysis
def performance_analysis(user_input):
context = build_intelligent_context(
user_input="React component rendering performance",
analysis_type="performance",
domains=['frontend', 'performance'],
tech_stack=['React', 'TypeScript']
)
return f"""
gemini --all-files -p "@{{src/components/**/*.{{jsx,tsx}},src/hooks/**/*}}
@{{**/context/**/*,**/store/**/*}}
@{{CLAUDE.md,performance/CLAUDE.md,react/CLAUDE.md}}
Analyze React rendering performance issues:
- Component re-render cycles and unnecessary renders
- useMemo/useCallback optimization opportunities
- Context provider optimization and value memoization
- Large list virtualization needs
- Bundle splitting and lazy loading opportunities
- State update batching and scheduling
- Memory leaks in useEffect cleanup
- Performance impact of prop drilling
Include React DevTools Profiler insights and specific optimization recommendations."
"""
```
## Feature Tracing
### Template Structure
```bash
gemini --all-files -p "@{feature_patterns} @{related_patterns}
Context: Feature implementation at @{feature_patterns}
Related modules: @{related_patterns}
Guidelines: Feature standards from @{claude_context}
Trace the implementation of {target} throughout this codebase.
Map:
1. Entry points (UI components, API endpoints)
2. Business logic and data processing
3. Database models and queries
4. State management and data flow
5. Integration points with other features
Show complete feature flow with file:line references."
```
### Intelligent Usage Examples
```python
# Payment processing feature trace
def feature_tracing(user_input):
context = build_intelligent_context(
user_input="payment processing system",
analysis_type="feature",
domains=['api', 'database', 'frontend'],
tech_stack=['Node.js', 'React', 'PostgreSQL']
)
return f"""
gemini --all-files -p "@{{**/payment/**/*,**/billing/**/*,**/stripe/**/*}}
@{{**/models/*payment*,**/models/*order*,**/api/*payment*}}
@{{src/components/*payment*,src/pages/*checkout*}}
@{{CLAUDE.md,payment/CLAUDE.md,api/CLAUDE.md}}
Trace complete payment processing implementation:
- Frontend: Payment forms, checkout flow, success/error handling
- API: Payment endpoints, validation, webhook handling
- Business Logic: Payment calculation, tax, discounts, refunds
- Database: Payment models, transaction records, audit logs
- Integration: Stripe/PayPal integration, notification systems
- Security: PCI compliance, data encryption, fraud detection
- Error Handling: Payment failures, retry logic, recovery flows
Map the entire payment flow from UI interaction to database persistence."
"""
```
## Quality Analysis
### Template Structure
```bash
gemini --all-files -p "@{quality_patterns} @{test_patterns}
Context: Code quality assessment at @{quality_patterns}
Test coverage: @{test_patterns}
Guidelines: Quality standards from @{claude_context}
Examine the code quality of {target} in this codebase.
Assess:
1. Code consistency and style compliance
2. Error handling and edge case coverage
3. Testing coverage and quality
4. Documentation completeness
5. Maintainability and refactoring opportunities
Provide actionable quality improvement recommendations with priorities."
```
### Intelligent Usage Examples
```python
# TypeScript code quality analysis
def quality_analysis(user_input):
context = build_intelligent_context(
user_input="TypeScript code quality and consistency",
analysis_type="quality",
domains=['frontend', 'testing'],
tech_stack=['TypeScript', 'React', 'Jest']
)
return f"""
gemini --all-files -p "@{{**/*.{{ts,tsx}},src/**/*}}
@{{**/*.test.{{ts,tsx}},**/*.spec.{{ts,tsx}}}}
@{{CLAUDE.md,typescript/CLAUDE.md,testing/CLAUDE.md}}
Analyze TypeScript code quality:
- Type safety: any usage, strict mode compliance, type assertions
- Interface design: proper abstractions, generic usage, utility types
- Error handling: proper error types, exception handling patterns
- Code consistency: naming conventions, file organization, imports
- Testing quality: type-safe tests, mock implementations, coverage
- Documentation: TSDoc comments, README updates, type exports
- Performance: bundle analysis, tree-shaking optimization
- Maintainability: code duplication, complexity metrics
Prioritize recommendations by impact and provide specific file:line examples."
"""
```
## Dependencies Analysis
### Template Structure
```bash
gemini --all-files -p "@{dependency_patterns} @{package_patterns}
Context: Dependency analysis at @{dependency_patterns}
Package files: @{package_patterns}
Guidelines: Dependency standards from @{claude_context}
Analyze {target} in this project.
Review:
1. Third-party library usage and necessity
2. Version consistency and update availability
3. Security vulnerabilities in dependencies
4. Bundle size impact and optimization opportunities
5. Licensing compatibility and compliance
Show dependency graph with recommendations for optimization."
```
### Intelligent Usage Examples
```python
# Node.js dependencies security analysis
def dependencies_analysis(user_input):
context = build_intelligent_context(
user_input="Node.js dependencies security vulnerabilities",
analysis_type="dependencies",
domains=['security', 'config'],
tech_stack=['Node.js', 'npm']
)
return f"""
gemini --all-files -p "@{{package*.json,yarn.lock,pnpm-lock.yaml}}
@{{**/node_modules/**/package.json}} @{{.npmrc,.yarnrc*}}
@{{CLAUDE.md,security/CLAUDE.md,dependencies/CLAUDE.md}}
Analyze Node.js dependencies for security issues:
- Vulnerability scanning: known CVEs, security advisories
- Outdated packages: major version gaps, EOL dependencies
- License compliance: GPL conflicts, commercial restrictions
- Bundle impact: largest dependencies, tree-shaking opportunities
- Maintenance status: abandoned packages, low activity projects
- Alternative recommendations: lighter alternatives, native implementations
- Development vs production: devDependency misclassification
- Version pinning: semantic versioning strategy, lock file consistency
Provide dependency upgrade roadmap with security priority rankings."
"""
```
## Migration Analysis
### Template Structure
```bash
gemini --all-files -p "@{migration_patterns} @{legacy_patterns}
Context: Migration analysis at @{migration_patterns}
Legacy code: @{legacy_patterns}
Guidelines: Migration standards from @{claude_context}
Identify {target} that could benefit from modernization.
Find:
1. Outdated patterns and deprecated APIs
2. Performance inefficiencies and technical debt
3. Security vulnerabilities in legacy code
4. Opportunities for newer language features
5. Framework upgrade paths and compatibility
Provide prioritized migration roadmap with risk assessment."
```
### Intelligent Usage Examples
```python
# React class to hooks migration analysis
def migration_analysis(user_input):
context = build_intelligent_context(
user_input="React class components to hooks migration",
analysis_type="migration",
domains=['frontend'],
tech_stack=['React', 'JavaScript', 'TypeScript']
)
return f"""
gemini --all-files -p "@{{src/components/**/*.{{jsx,js}},src/containers/**/*}}
@{{**/legacy/**/*,**/deprecated/**/*}}
@{{CLAUDE.md,react/CLAUDE.md,migration/CLAUDE.md}}
Analyze React class components for hooks migration:
- Class components: lifecycle methods, state usage, refs
- HOC patterns: higher-order components vs custom hooks
- Render props: render prop patterns vs hook alternatives
- Legacy context: old context API vs useContext
- Performance: shouldComponentUpdate vs React.memo
- Testing: enzyme vs testing-library compatibility
- Bundle size: potential size reduction after migration
- Breaking changes: prop types, default props handling
Provide migration priority matrix based on complexity and benefit."
"""
```
## Template Usage Guidelines
1. **Always use intelligent context** - Let the system generate smart file patterns
2. **Reference specific sections** - Use anchor links for modular access
3. **Validate generated patterns** - Ensure patterns match actual project structure
4. **Combine templates strategically** - Use multiple templates for comprehensive analysis
5. **Cache context results** - Reuse context analysis across multiple templates
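Guideline 5 can be as simple as memoizing the context builder so repeated template runs reuse one analysis. A minimal sketch using `functools.lru_cache`; `cached_context` and its returned dict are stand-ins for the real `build_intelligent_context` call, not part of the system:

```python
from functools import lru_cache

@lru_cache(maxsize=64)
def cached_context(user_input: str, analysis_type: str) -> dict:
    """Build context once per (input, analysis type) pair; later calls hit the cache."""
    # Stand-in for the real build_intelligent_context() call.
    return {"input": user_input, "type": analysis_type}

first = cached_context("auth flow", "security")
second = cached_context("auth flow", "security")  # served from cache, same object
```

Note the cache key is only the positional arguments, so richer inputs (project info objects) would need to be reduced to hashable values first.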
## Integration with Intelligent Context
All templates integrate with @~/.claude/workflows/gemini-intelligent-context.md for:
- **Smart Path Detection** - Automatic file targeting based on analysis type
- **Technology Stack Detection** - Framework and language-specific optimizations
- **Domain Context Mapping** - Intelligent domain-specific pattern matching
- **Dynamic Prompt Enhancement** - Context-aware prompt construction
For complete context detection algorithms and intelligent file targeting, see the dedicated intelligent context documentation.

# Gemini DMS Analysis Templates
**Specialized templates for Distributed Memory System (DMS) analysis and documentation hierarchy optimization.**
## Overview
This document provides DMS-specific analysis templates for architecture analysis, complexity assessment, content strategy, module analysis, and cross-module coordination for intelligent documentation hierarchy management.
## DMS Architecture Analysis
### Template Structure
```bash
gemini --all-files -p "@{project_patterns} @{module_patterns}
Context: DMS architecture analysis for @{project_patterns}
Module structure: @{module_patterns}
Guidelines: Project standards from @{claude_context}
Analyze {target} hierarchical documentation structure:
1. Project complexity assessment (file count, LOC, tech stack diversity)
2. Module responsibility boundaries and architectural patterns
3. Cross-module dependencies and integration points
4. Technology stack analysis and framework usage patterns
5. Content hierarchy strategy (depth 0-2 classification)
Provide intelligent classification recommendations with hierarchy mapping."
```
### Intelligent Usage Examples
```python
# Project structure analysis for DMS hierarchy
def dms_architecture_analysis(target_path):
context = build_intelligent_context(
user_input=f"DMS architecture analysis for {target_path}",
analysis_type="architecture",
domains=['dms', 'documentation'],
tech_stack=detect_project_tech_stack(target_path)
)
return f"""
gemini --all-files -p "@{{{target_path}/**/*}} @{{**/*CLAUDE.md}}
@{{CLAUDE.md,**/*CLAUDE.md,.claude/*/CLAUDE.md}}
Analyze project structure for DMS hierarchy optimization:
- Project complexity: file count, lines of code, technology diversity
- Module boundaries: logical groupings, responsibility separation
- Cross-dependencies: integration patterns, shared utilities
- Documentation needs: complexity-based hierarchy requirements
- Content differentiation: level-specific focus areas
- Classification thresholds: >3 files or >300 LOC triggers
Provide smart hierarchy recommendations with classification rationale."
"""
```
## DMS Complexity Assessment
### Template Structure
```bash
gemini --all-files -p "@{assessment_patterns} @{technology_patterns}
Context: DMS complexity assessment at @{assessment_patterns}
Technology stack: @{technology_patterns}
Guidelines: Classification rules from @{claude_context}
Evaluate {target} for intelligent DMS classification:
1. File count analysis and logical grouping assessment
2. Lines of code distribution and complexity indicators
3. Technology stack diversity and integration complexity
4. Cross-module dependencies and architectural coupling
5. Documentation requirements based on complexity metrics
Provide classification recommendations with threshold justification."
```
### Intelligent Usage Examples
```python
# Project complexity evaluation for smart classification
def dms_complexity_assessment(project_scope):
context = build_intelligent_context(
user_input=f"Complexity assessment for {project_scope}",
analysis_type="quality",
domains=['dms', 'classification'],
project_info=analyze_project_structure(project_scope)
)
return f"""
gemini --all-files -p "@{{{project_scope}}} @{{package*.json,requirements.txt,pom.xml}}
@{{CLAUDE.md,**/CLAUDE.md}}
Assess project complexity for DMS classification:
- Single-file detection: 1-2 files → consolidated documentation
- Simple project: 3-10 files, <800 LOC → minimal hierarchy
- Medium project: 11-100 files, 800-3000 LOC → selective hierarchy
- Complex project: >100 files, >3000 LOC → full hierarchy
- Technology stack: framework diversity impact on documentation needs
- Integration complexity: cross-module dependency analysis
Recommend optimal DMS structure with classification thresholds."
"""
```
## DMS Content Strategy
### Template Structure
```bash
gemini --all-files -p "@{content_patterns} @{reference_patterns}
Context: DMS content strategy for @{content_patterns}
Reference patterns: @{reference_patterns}
Guidelines: Content standards from @{claude_context}
Develop {target} content differentiation strategy:
1. Level-specific content focus and responsibility boundaries
2. Content hierarchy optimization and redundancy elimination
3. Implementation pattern identification and documentation priorities
4. Cross-level content flow and reference strategies
5. Quality standards and actionable guideline emphasis
Provide content strategy with level-specific focus areas."
```
### Intelligent Usage Examples
```python
# Content strategy for hierarchical documentation
def dms_content_strategy(hierarchy_levels):
context = build_intelligent_context(
user_input=f"Content strategy for {len(hierarchy_levels)} levels",
analysis_type="quality",
domains=['dms', 'content', 'hierarchy'],
levels_info=hierarchy_levels
)
return f"""
gemini --all-files -p "@{{**/*.{{js,ts,jsx,tsx,py,java}}}} @{{**/CLAUDE.md}}
@{{CLAUDE.md,.claude/*/CLAUDE.md}}
Develop content differentiation strategy:
- Depth 0 (Project): Architecture, tech stack, global standards
- Depth 1 (Module): Module patterns, integration, responsibilities
- Depth 2 (Implementation): Details, gotchas, specific guidelines
- Content consolidation: Merge depth 3+ content into depth 2
- Redundancy elimination: Unique focus per level
- Non-obvious priority: Essential decisions, actionable patterns
Provide level-specific content guidelines with focus differentiation."
"""
```
## DMS Module Analysis
### Template Structure
```bash
gemini --all-files -p "@{module_patterns} @{integration_patterns}
Context: DMS module analysis for @{module_patterns}
Integration context: @{integration_patterns}
Guidelines: Module standards from @{claude_context}
Analyze {target} module-specific documentation needs:
1. Module responsibility boundaries and architectural role
2. Internal implementation patterns and conventions
3. External integration points and dependency management
4. Module-specific quality standards and best practices
5. Documentation depth requirements based on complexity
Provide module documentation strategy with integration focus."
```
### Intelligent Usage Examples
```python
# Module-specific analysis for targeted documentation
def dms_module_analysis(module_path):
context = build_intelligent_context(
user_input=f"Module analysis for {module_path}",
analysis_type="architecture",
domains=['dms', 'module'],
module_info=analyze_module_structure(module_path)
)
return f"""
gemini --all-files -p "@{{{module_path}/**/*}} @{{**/*{module_path.split('/')[-1]}*}}
@{{CLAUDE.md,{module_path}/CLAUDE.md}}
Analyze module for documentation strategy:
- Module boundaries: responsibility scope, architectural role
- Implementation patterns: internal conventions, code organization
- Integration points: external dependencies, API contracts
- Quality standards: module-specific testing, validation patterns
- Complexity indicators: >3 files or >300 LOC → dedicated CLAUDE.md
- Documentation depth: implementation details vs architectural overview
Recommend module documentation approach with depth justification."
"""
```
## DMS Cross-Module Analysis
### Template Structure
```bash
gemini --all-files -p "@{cross_module_patterns} @{dependency_patterns}
Context: DMS cross-module analysis for @{cross_module_patterns}
Dependencies: @{dependency_patterns}
Guidelines: Integration standards from @{claude_context}
Analyze {target} cross-module documentation requirements:
1. Inter-module dependency mapping and communication patterns
2. Shared utility identification and documentation consolidation
3. Integration complexity assessment and documentation depth
4. Cross-cutting concern identification and hierarchy placement
5. Documentation coordination strategy across module boundaries
Provide cross-module documentation strategy with integration focus."
```
### Intelligent Usage Examples
```python
# Cross-module analysis for integration documentation
def dms_cross_module_analysis(affected_modules):
context = build_intelligent_context(
user_input=f"Cross-module analysis for {len(affected_modules)} modules",
analysis_type="architecture",
domains=['dms', 'integration', 'modules'],
modules_info=affected_modules
)
module_patterns = ','.join([f"{m}/**/*" for m in affected_modules])
return f"""
gemini --all-files -p "@{{{module_patterns}}} @{{**/shared/**/*,**/common/**/*}}
@{{CLAUDE.md,**/CLAUDE.md}}
Analyze cross-module integration for documentation:
- Dependency mapping: module interdependencies, communication flow
- Shared patterns: common utilities, cross-cutting concerns
- Integration complexity: >5 modules → enhanced coordination documentation
- Documentation coordination: avoid redundancy across module boundaries
- Hierarchy placement: integration patterns at appropriate depth levels
- Reference strategies: cross-module links and shared guideline access
Provide integration documentation strategy with coordination guidelines."
"""
```
## DMS Classification Matrix
### Project Complexity Thresholds
| Complexity Level | File Count | Lines of Code | Tech Stack | Hierarchy Strategy |
|------------------|------------|---------------|------------|-------------------|
| **Single File** | 1-2 files | <300 LOC | 1 technology | Consolidated docs |
| **Simple** | 3-10 files | 300-800 LOC | 1-2 technologies | Minimal hierarchy |
| **Medium** | 11-100 files | 800-3000 LOC | 2-3 technologies | Selective hierarchy |
| **Complex** | >100 files | >3000 LOC | >3 technologies | Full hierarchy |
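The thresholds above translate directly into a classification routine. A sketch that checks the tiers in ascending order; how exact boundary values (e.g. precisely 800 LOC) are binned is an assumption, since the matrix ranges overlap at their edges:

```python
def classify_project(file_count: int, loc: int, tech_count: int) -> str:
    """Map project metrics to a DMS hierarchy strategy per the classification matrix."""
    if file_count <= 2 and loc < 300 and tech_count <= 1:
        return "consolidated"   # Single File
    if file_count <= 10 and loc <= 800 and tech_count <= 2:
        return "minimal"        # Simple
    if file_count <= 100 and loc <= 3000 and tech_count <= 3:
        return "selective"      # Medium
    return "full"               # Complex
```

A project failing any condition of a tier falls through to the next, so an unusual shape (few files, many technologies) is pushed toward the deeper hierarchy.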
### Documentation Depth Strategy
| Depth Level | Focus Areas | Content Types | Triggers |
|-------------|-------------|---------------|----------|
| **Depth 0 (Project)** | Architecture, global standards, tech stack overview | High-level patterns, system design | Always present |
| **Depth 1 (Module)** | Module patterns, integration points, responsibilities | Interface contracts, module APIs | >3 files or >300 LOC |
| **Depth 2 (Implementation)** | Implementation details, gotchas, specific guidelines | Code patterns, edge cases | Complex modules |
## Integration with Intelligent Context
All DMS templates integrate with @~/.claude/workflows/gemini-intelligent-context.md for:
- **Smart Project Classification** - Automatic complexity assessment based on project metrics
- **Module Boundary Detection** - Intelligent identification of logical module groupings
- **Hierarchy Optimization** - Content differentiation strategies across documentation levels
- **Cross-Module Coordination** - Integration pattern analysis for documentation coordination
## Template Usage Guidelines
1. **Assess Project Complexity First** - Use complexity assessment to determine appropriate hierarchy
2. **Apply Classification Thresholds** - Follow established metrics for documentation depth decisions
3. **Coordinate Across Modules** - Use cross-module analysis for integration documentation
4. **Optimize Content Differentiation** - Ensure unique focus areas for each hierarchy level
5. **Validate Documentation Strategy** - Check hierarchy alignment with project structure
These DMS-specific templates enable intelligent documentation hierarchy management and content optimization for distributed memory systems.

# Gemini Intelligent Context System
**Smart context detection and file targeting system for Gemini CLI analysis.**
## Overview
The intelligent context system automatically resolves file paths and context based on user input, analysis type, and project structure detection, enabling precise and efficient codebase analysis.
## Smart Path Detection
### Technology Stack Detection
```python
def detect_technology_stack(project_path):
"""Detect technologies used in the project"""
indicators = {
'React': ['package.json contains react', '**/*.jsx', '**/*.tsx'],
'Vue': ['package.json contains vue', '**/*.vue'],
'Angular': ['angular.json', '**/*.component.ts'],
'Node.js': ['package.json', 'server.js', 'app.js'],
'Python': ['requirements.txt', '**/*.py', 'setup.py'],
'Java': ['pom.xml', '**/*.java', 'build.gradle'],
'TypeScript': ['tsconfig.json', '**/*.ts'],
'Express': ['package.json contains express'],
'FastAPI': ['**/*main.py', 'requirements.txt contains fastapi'],
'Spring': ['pom.xml contains spring', '**/*Application.java']
}
return analyze_indicators(indicators, project_path)
```
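The `analyze_indicators` helper referenced above is left undefined. One minimal way to implement it, assuming the project is represented as a list of relative file paths plus the manifest text; the `"… contains …"` rule parsing is an assumption about the indicator format shown:

```python
from fnmatch import fnmatch

def analyze_indicators(indicators, file_list, manifest_text=""):
    """Detect a technology when any of its indicators matches a file glob
    or a '<file> contains <keyword>' clause against the manifest text."""
    detected = []
    for tech, rules in indicators.items():
        for rule in rules:
            if " contains " in rule:
                if rule.split("contains ", 1)[1] in manifest_text:
                    detected.append(tech)
                    break
            elif any(fnmatch(path, rule) for path in file_list):
                detected.append(tech)
                break
    return detected
```

Note `fnmatch` treats `*` as matching path separators too, so `**/*.py` behaves loosely here; a stricter matcher would use `pathlib.PurePath.match`.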
### Project Structure Detection
```python
def detect_project_structure(project_path):
"""Identify common project patterns"""
patterns = {
'src_based': has_directory('src/'),
'lib_based': has_directory('lib/'),
'app_based': has_directory('app/'),
'modules_based': has_directory('modules/'),
'packages_based': has_directory('packages/'),
'microservices': has_multiple_services(),
'monorepo': has_workspaces_or_lerna()
}
return analyze_structure_patterns(patterns)
```
### Domain Context Detection
```python
def extract_domain_keywords(user_input):
"""Extract domain-specific keywords for smart targeting"""
domain_mapping = {
'auth': ['authentication', 'login', 'session', 'auth', 'oauth', 'jwt', 'token'],
'api': ['api', 'endpoint', 'route', 'controller', 'service'],
'frontend': ['component', 'ui', 'view', 'react', 'vue', 'angular'],
'backend': ['server', 'backend', 'api', 'database', 'model'],
'database': ['database', 'db', 'model', 'query', 'migration', 'schema'],
'security': ['security', 'vulnerability', 'xss', 'csrf', 'injection'],
'performance': ['performance', 'slow', 'optimization', 'bottleneck'],
'testing': ['test', 'spec', 'mock', 'unit', 'integration', 'e2e'],
'state': ['state', 'redux', 'context', 'store', 'vuex'],
'config': ['config', 'environment', 'settings', 'env']
}
return match_domains(user_input.lower(), domain_mapping)
```
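Likewise, `match_domains` can be sketched as a simple keyword scan. This uses naive substring matching (so `api` would also fire on "rapid"), which is an assumption; the real system may tokenize first:

```python
def match_domains(text, domain_mapping):
    """Return every domain whose keyword list has a hit in the (lowercased) input."""
    return [domain for domain, keywords in domain_mapping.items()
            if any(keyword in text for keyword in keywords)]
```

Returned domains preserve the mapping's insertion order, which keeps downstream pattern generation deterministic.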
## Intelligent File Targeting
### Context-Aware Path Generation
| Domain Context | Generated File Patterns |
|----------------|------------------------|
| **Authentication** | `@{**/*auth*,**/*login*,**/*session*,**/middleware/*auth*,**/guards/**/*}` |
| **API Endpoints** | `@{**/api/**/*,**/routes/**/*,**/controllers/**/*,**/handlers/**/*}` |
| **Frontend Components** | `@{src/components/**/*,src/ui/**/*,src/views/**/*,components/**/*}` |
| **Database Layer** | `@{**/models/**/*,**/db/**/*,**/migrations/**/*,**/repositories/**/*}` |
| **State Management** | `@{**/store/**/*,**/redux/**/*,**/context/**/*,**/state/**/*}` |
| **Configuration** | `@{*.config.*,**/config/**/*,.env*,**/settings/**/*}` |
| **Testing** | `@{**/*.test.*,**/*.spec.*,**/test/**/*,**/spec/**/*,**/__tests__/**/*}` |
| **Security** | `@{**/*security*,**/*auth*,**/*crypto*,**/middleware/**/*}` |
| **Performance** | `@{**/core/**/*,**/services/**/*,**/utils/**/*,**/lib/**/*}` |
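The table rows above amount to a lookup from domain to pattern string. A sketch with a few rows copied verbatim from the table; the generic `fallback` pattern is an assumption for when no domain matches:

```python
DOMAIN_PATTERNS = {
    "auth": "@{**/*auth*,**/*login*,**/*session*,**/middleware/*auth*,**/guards/**/*}",
    "api": "@{**/api/**/*,**/routes/**/*,**/controllers/**/*,**/handlers/**/*}",
    "testing": "@{**/*.test.*,**/*.spec.*,**/test/**/*,**/spec/**/*,**/__tests__/**/*}",
}

def patterns_for(domains, fallback="@{src/**/*}"):
    """Join the pattern groups for every recognized domain, or fall back to a generic scope."""
    hits = [DOMAIN_PATTERNS[d] for d in domains if d in DOMAIN_PATTERNS]
    return " ".join(hits) if hits else fallback
```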
### Technology-Specific Extensions
```python
def get_tech_extensions(technology_stack):
"""Get relevant file extensions based on detected technologies"""
extension_mapping = {
'React': ['.jsx', '.tsx', '.js', '.ts'],
'Vue': ['.vue', '.js', '.ts'],
'Angular': ['.component.ts', '.service.ts', '.module.ts'],
'Node.js': ['.js', '.ts', '.mjs'],
'Python': ['.py', '.pyx', '.pyi'],
'Java': ['.java', '.kt', '.scala'],
'TypeScript': ['.ts', '.tsx', '.d.ts'],
'CSS': ['.css', '.scss', '.sass', '.less', '.styl']
}
return build_extension_patterns(technology_stack, extension_mapping)
```
## Dynamic Context Enhancement
### Smart Prompt Construction
```python
def build_intelligent_context(user_input, analysis_type, project_info):
"""Build context-aware Gemini CLI prompt"""
# Step 1: Detect domains and technologies
domains = extract_domain_keywords(user_input)
tech_stack = project_info.technology_stack
# Step 2: Generate smart file patterns
file_patterns = generate_file_patterns(domains, tech_stack, analysis_type)
# Step 3: Include relevant CLAUDE.md contexts
claude_patterns = generate_claude_patterns(domains, project_info.structure)
# Step 4: Build context-enriched prompt
return construct_enhanced_prompt(
base_prompt=user_input,
file_patterns=file_patterns,
claude_context=claude_patterns,
analysis_focus=get_analysis_focus(analysis_type),
tech_context=tech_stack
)
```
### Context Validation and Fallbacks
```python
def validate_and_fallback_context(generated_patterns, project_path):
"""Ensure generated patterns match actual project structure"""
validated_patterns = []
for pattern in generated_patterns:
if has_matching_files(pattern, project_path):
validated_patterns.append(pattern)
else:
# Try fallback patterns
fallback = generate_fallback_pattern(pattern, project_path)
if fallback:
validated_patterns.append(fallback)
return validated_patterns or get_generic_patterns(project_path)
```
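The `has_matching_files` check used above can be backed by `glob`. A hypothetical sketch that strips the `@{...}` wrapper and tests each comma-separated glob; note stdlib `glob` does not expand inner brace groups like `{jsx,tsx}`, so those patterns would need extra handling:

```python
import glob
import os

def has_matching_files(pattern, project_path):
    """Return True if any comma-separated glob inside an @{...} pattern matches a file."""
    globs = pattern.strip("@{}").split(",")
    return any(glob.glob(os.path.join(project_path, g), recursive=True) for g in globs)
```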
## Integration Patterns
### Command Integration Examples
```python
def gather_gemini_insights(user_input, base_enhancement):
# Use intelligent context system
context = build_intelligent_context(
user_input=user_input,
analysis_type=determine_analysis_type(base_enhancement),
project_info=get_project_info()
)
# Select appropriate template
template = select_template(base_enhancement.complexity, base_enhancement.domains)
# Execute with enhanced context
return execute_template(template, context)
```
### Agent Workflow Integration
```python
# Before agent execution, collect enhanced context
def collect_enhanced_gemini_context(task_description):
domains = extract_domain_keywords(task_description)
analysis_types = determine_required_analysis(domains)
context_results = {}
for analysis_type in analysis_types:
# Use appropriate template file based on analysis type
if analysis_type.startswith('dms'):
template_path = f"workflows/gemini-dms-templates.md#{analysis_type}"
elif analysis_type in ['planning-agent-context', 'code-developer-context', 'code-review-context', 'ui-design-context']:
template_path = f"workflows/gemini-agent-templates.md#{analysis_type}"
else:
template_path = f"workflows/gemini-core-templates.md#{analysis_type}"
context_results[analysis_type] = execute_template_by_reference(
template_path,
task_description
)
return consolidate_context(context_results)
```
### Smart Template Selection
```python
def select_optimal_template(task_complexity, domains, tech_stack):
    # Dict keys must be hashable, so domain groups are stored as sorted tuples
    template_matrix = {
        ('simple', ('frontend',)): ['pattern-analysis'],
        ('medium', ('api', 'frontend')): ['pattern-analysis', 'architecture-analysis'],
        ('complex', ('auth', 'security')): ['security-analysis', 'architecture-analysis', 'quality-analysis'],
        ('critical', ('crypto', 'payment')): ['security-analysis', 'performance-analysis', 'dependencies-analysis']
    }
    return template_matrix.get((task_complexity, tuple(sorted(domains))), ['pattern-analysis'])
```
## Usage Guidelines
### Performance Optimization
1. **Scope file patterns appropriately** - Overly broad patterns slow analysis and dilute results
2. **Use technology-specific extensions** - More precise targeting improves results
3. **Implement pattern validation** - Check patterns match files before execution
4. **Consider project size** - Large projects may need pattern chunking
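Pattern chunking from guideline 4 can be a plain batching helper: split a long pattern list so each Gemini invocation stays narrowly scoped. The batch size here is an illustrative assumption, not a CLI limit:

```python
def chunk_patterns(patterns, max_per_call=5):
    """Split a pattern list into fixed-size batches, one batch per analysis call."""
    return [patterns[i:i + max_per_call] for i in range(0, len(patterns), max_per_call)]
```

Results from the per-batch calls would then be consolidated the same way `consolidate_context` merges per-analysis results above.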
### Maintenance
1. **Update templates regularly** - Keep pace with new technologies and patterns
2. **Validate anchor links** - Ensure cross-references remain accurate
3. **Test intelligent context** - Verify smart targeting works across project types
4. **Monitor template performance** - Track analysis quality and speed
This intelligent context system provides the foundation for all Gemini CLI analysis, ensuring efficient and precise codebase understanding across different commands and agents.

# Gemini Memory-Gemini-Bridge Template
**Purpose**: Comprehensive project structure analysis for hierarchical CLAUDE.md documentation generation
## Template Structure
```bash
gemini --all-files -p "@{**/*} @{CLAUDE.md,**/*CLAUDE.md} @{package*.json,requirements.txt,pom.xml,Cargo.toml}
Project structure analysis for Memory-Gemini-Bridge documentation generation:
Target: [project-name or specific scope]
## Required Analysis:
1. **Project Architecture Assessment**:
- Overall system structure and module organization
- Technology stack diversity and integration complexity
- Existing CLAUDE.md hierarchy and content gaps
- Directory structure patterns and logical groupings
2. **Documentation Hierarchy Strategy**:
- Identify optimal CLAUDE.md placement levels (root, module, sub-module)
- Determine content differentiation across hierarchy levels
- Assess complexity thresholds for documentation depth
- Map existing documentation patterns and standards
3. **Cross-System Integration Analysis**:
- Claude-Gemini compatibility requirements
- Cross-system documentation synchronization patterns
- Template and guideline reference patterns
- Memory system synchronization needs
4. **Module Responsibility Mapping**:
- Core module identification and purpose analysis
- Inter-module dependencies and integration patterns
- Component organization and architectural boundaries
- Implementation pattern consistency across modules
5. **Technology Stack Integration**:
- Framework usage patterns and configuration analysis
- Build system and development workflow patterns
- Testing architecture and quality standards
- Deployment and infrastructure considerations
## Output Requirements:
- **Hierarchy Recommendation**: Specific CLAUDE.md file structure with rationale
- **Content Strategy**: Level-specific focus areas and content differentiation
- **Template Selection**: Appropriate DMS and core templates for analysis
- **Integration Plan**: Workflow coordination with update_dms and agent systems
- **Quality Standards**: Cross-system compatibility and maintenance guidelines
Focus on architectural understanding for documentation strategy rather than implementation details."
```
## Context Application
- Generate hierarchical documentation strategy based on project structure
- Create comprehensive CLAUDE.md files with appropriate content depth
- Ensure cross-system compatibility between Claude and Gemini CLI
- Establish maintainable documentation patterns for ongoing development
## Usage Guidelines
**Use Memory-Gemini-Bridge template when**:
- Creating or updating CLAUDE.md files for Gemini CLI compatibility
- Establishing hierarchical documentation strategy for complex projects
- Synchronizing memory systems between Claude and Gemini CLI
**Template focuses on**:
- Comprehensive project structure analysis
- Documentation hierarchy strategy and content differentiation
- Cross-system integration analysis for Claude-Gemini compatibility
- Architectural understanding for documentation strategy rather than implementation details

# Gemini Planning Agent Template
**Purpose**: Identify specific task scope, affected files, and concrete implementation plan
## Template Structure
```bash
gemini --all-files -p "@{[task-related-files]} @{CLAUDE.md,**/*CLAUDE.md}
Task-specific planning analysis for: [exact task description]
## Required Analysis:
1. **Task Scope Identification**:
- What exactly needs to be built/modified/fixed?
- Which specific components, files, or modules are affected?
- What is the precise deliverable?
2. **File and Modification Mapping**:
- List exact files that need modification (with file:line references where possible)
- Identify specific functions, classes, or components to change
- Find configuration files, tests, or documentation that need updates
3. **Dependencies and Integration Points**:
- What modules/services depend on the changes?
- What external APIs, databases, or services are involved?
- Which existing functions will need to call the new code?
4. **Risk and Complexity Assessment**:
- What could break from these changes?
- Are there critical paths that need special testing?
- What rollback strategy is needed?
5. **Implementation Sequence**:
- What order should changes be made in?
- Which changes are prerequisites for others?
- What can be done in parallel?
## Output Requirements:
- **Concrete file list**: Exact files to modify with reasons
- **Specific entry points**: Functions/classes that need changes with line references
- **Clear sequence**: Step-by-step implementation order
- **Risk mitigation**: Specific testing requirements and rollback plans
- **Success criteria**: How to verify each step works
Focus on actionable, specific guidance rather than general patterns."
```
## Intelligent Usage Examples
```python
# API endpoint planning (sketch: build_intelligent_context is assumed to be
# provided by the CCW runtime and to return domain/tech-stack hints)
def planning_agent_context(user_input="Add user profile management API"):
    context = build_intelligent_context(
        user_input=user_input,
        analysis_type="planning-agent-context",
        domains=["api", "backend", "database"],
        tech_stack=["Node.js", "Express", "PostgreSQL"],
    )
    # context is folded into the final prompt by the full implementation;
    # this sketch shows only the prompt skeleton
    return f"""
gemini --all-files -p "@{{**/api/**/*,**/routes/**/*,**/controllers/**/*}}
@{{**/models/**/*,**/db/**/*}} @{{CLAUDE.md,api/CLAUDE.md,backend/CLAUDE.md}}
Task-specific planning analysis for: {user_input}
- Profile creation, update, retrieval, deletion endpoints
- User avatar upload and management
- Profile privacy settings and visibility controls
Focus on exact file modification points and implementation sequence."
"""
```
## Context Application
- Create detailed, file-specific implementation plan
- Identify exact modification points with line references
- Establish concrete success criteria for each stage
- Plan specific testing and validation steps
## Usage Guidelines
**Use Planning Agent template when**:
- Creating implementation plans for specific features or fixes
- You need to understand the exact scope and modification points
- The focus is on concrete deliverables rather than architectural overviews
**Template focuses on**:
- Task-specific analysis targeting exact requirements
- Actionable output with specific file:line references
- Repository context that extracts patterns from the actual codebase
- Precise scope limited to what the immediate task needs

# JSON-Document Coordination System
## Overview
This document provides technical implementation details for JSON file structures, synchronization mechanisms, conflict resolution, and performance optimization.
### JSON File Hierarchy
```
.workflow/WFS-[topic-slug]/
├── workflow-session.json # Master session state
├── IMPL_PLAN.md # Combined planning document
├── TODO_LIST.md # Progress tracking document
└── .task/
├── impl-1.json # Main task
├── impl-1.1.json # Level 2 subtask
├── impl-1.1.1.json # Level 3 detailed subtask
├── impl-1.2.json # Another level 2 subtask
└── impl-2.json # Another main task
```
## JSON File Structures
### 1. workflow-session.json (Master State)
```json
{
"session_id": "WFS-user-auth-system",
"project": "OAuth2 authentication system",
"type": "complex",
"status": "active",
"current_phase": "IMPLEMENT",
"directory": ".workflow/WFS-user-auth-system",
"documents": {
"IMPL_PLAN.md": {
"status": "generated",
"path": ".workflow/WFS-user-auth-system/IMPL_PLAN.md",
"last_updated": "2025-09-05T10:30:00Z",
"sync_status": "synced"
},
"TODO_LIST.md": {
"status": "generated",
"path": ".workflow/WFS-user-auth-system/TODO_LIST.md",
"last_updated": "2025-09-05T11:20:00Z",
"sync_status": "synced"
}
},
"task_system": {
"enabled": true,
"directory": ".workflow/WFS-user-auth-system/.task",
"next_main_task_id": 3,
"max_depth": 3,
"task_count": {
"total": 8,
"main_tasks": 2,
"subtasks": 6,
"pending": 3,
"active": 2,
"completed": 2,
"blocked": 1
}
},
"coordination": {
"last_sync": "2025-09-05T11:20:00Z",
"sync_conflicts": 0,
"auto_sync_enabled": true,
"manual_sync_required": false
},
"metadata": {
"created_at": "2025-09-05T10:00:00Z",
"last_updated": "2025-09-05T11:20:00Z",
"version": "2.1"
}
}
```
### 2. TODO_LIST.md (Task Registry & Display)
TODO_LIST.md serves as both the task registry and progress display:
```markdown
# Implementation Progress
## Task Status Summary
- **Total Tasks**: 5
- **Completed**: 2 (40%)
- **Active**: 2
- **Pending**: 1
## Task Hierarchy
### ☐ impl-1: Build authentication module (75% complete)
- ☑ impl-1.1: Design authentication schema (100%)
- ☑ impl-1.1.1: Create user model
- ☑ impl-1.1.2: Design JWT structure
- ☐ impl-1.2: Implement OAuth2 flow (50%)
### ☐ impl-2: Setup user management (0%)
```
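The checkbox parsing that drives registry extraction can be sketched with a small regex over this format (a minimal sketch; the actual parser presumably also tracks indentation depth for the hierarchy):

```python
import re

# Matches lines like "- ☑ impl-1.1: Design authentication schema (100%)"
# or "### ☐ impl-1: Build authentication module (75% complete)"
TODO_ITEM = re.compile(
    r"[-#\s]*([☐☑])\s+(impl-[\d.]+):\s*(.+?)(?:\s*\((\d+)%[^)]*\))?\s*$"
)

def parse_todo_list(markdown: str) -> dict:
    """Extract {task_id: {"done": bool, "title": str, "progress": int}}."""
    tasks = {}
    for line in markdown.splitlines():
        m = TODO_ITEM.match(line)
        if not m:
            continue
        box, task_id, title, pct = m.groups()
        done = box == "☑"
        tasks[task_id] = {
            "done": done,
            "title": title.strip(),
            # A checked item with no explicit percentage counts as 100%
            "progress": int(pct) if pct else (100 if done else 0),
        }
    return tasks
```

Summary lines without a checkbox (totals, headings) are skipped, so the same pass works over the whole file.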
**Task Registry Data Extracted from TODO_LIST.md:**
```json
{
  "total_tasks": 5,
  "completed": 2,
  "active": 2,
  "pending": 1,
  "last_updated": "2025-09-05T11:20:00Z"
}
```
### 3. Individual Task JSON (impl-*.json)
```json
{
"id": "impl-1.1",
"title": "Design authentication schema",
"parent_id": "impl-1",
"depth": 2,
"status": "completed",
"type": "design",
"priority": "normal",
"agent": "planning-agent",
"effort": "1h",
"subtasks": ["impl-1.1.1", "impl-1.1.2"],
"context": {
"inherited_from": "impl-1",
"requirements": ["User model schema", "JWT token design", "OAuth2 integration points"],
"scope": ["src/auth/models/*", "docs/auth-schema.md"],
"acceptance": ["Schema validates JWT tokens", "User model complete", "OAuth2 flow documented"]
},
"document_refs": {
"todo_section": "TODO_LIST.md#impl-1.1",
"todo_items": [
"TODO_LIST.md#impl-1.1",
"TODO_LIST.md#impl-1.1.1",
"TODO_LIST.md#impl-1.1.2"
],
"impl_plan_ref": "IMPL_PLAN.md#authentication-schema-design"
},
"dependencies": {
"upstream": [],
"downstream": ["impl-1.2"],
"blocking": [],
"blocked_by": [],
"parent_dependencies": ["impl-1"]
},
"execution": {
"attempts": 1,
"current_attempt": {
"started_at": "2025-09-05T10:35:00Z",
"completed_at": "2025-09-05T11:20:00Z",
"duration": "45m",
"checkpoints": ["setup", "design", "validate", "document"],
"completed_checkpoints": ["setup", "design", "validate", "document"]
},
"history": [
{
"attempt": 1,
"started_at": "2025-09-05T10:35:00Z",
"completed_at": "2025-09-05T11:20:00Z",
"result": "success",
"outputs": ["src/auth/models/User.ts", "docs/auth-schema.md"]
}
]
},
"sync_metadata": {
"last_document_sync": "2025-09-05T11:20:00Z",
"document_version": "1.2",
"sync_conflicts": [],
"pending_document_updates": []
},
"metadata": {
"created_at": "2025-09-05T10:30:00Z",
"started_at": "2025-09-05T10:35:00Z",
"completed_at": "2025-09-05T11:20:00Z",
"last_updated": "2025-09-05T11:20:00Z",
"created_by": "task:breakdown IMPL-001",
"version": "1.0"
}
}
```
## Coordination Mechanisms
### 1. Data Ownership Rules
#### Documents Own (Authoritative)
**IMPL_PLAN.md:**
- **Implementation Strategy**: Overall approach, phases, risk assessment
- **Requirements**: High-level functional requirements
- **Context**: Global project context, constraints
**TODO_LIST.md:**
- **Progress Visualization**: Task status display, completion tracking
- **Checklist Format**: Checkbox representation of task hierarchy
#### JSON Files Own (Authoritative)
- **Complete Task Definitions**: Full task context, requirements, acceptance criteria
- **Hierarchical Relationships**: Parent-child links, depth management
- **Execution State**: pending/active/completed/blocked/failed
- **Progress Data**: Percentages, timing, checkpoints
- **Agent Assignment**: Current agent, execution history
- **Dependencies**: Task relationships across all hierarchy levels
- **Session Metadata**: Timestamps, versions, attempt counts
- **Runtime State**: Current attempt, active processes
#### Shared Responsibility (Synchronized)
- **Task Status**: JSON authoritative, TODO_LIST.md displays current state
- **Progress Calculations**: Derived from JSON hierarchy, shown in TODO_LIST.md
- **Cross-References**: JSON contains document refs, documents link to relevant tasks
- **Task Hierarchy**: JSON defines structure, TODO_LIST.md visualizes it
### 2. Synchronization Events
#### Document → JSON Synchronization
**Trigger Events**:
- IMPL_PLAN.md modified (strategy/context changes)
- TODO_LIST.md checkboxes changed (manual status updates)
- Document structure changes affecting task references
**Actions**:
```javascript
// Pseudo-code for document sync process
on_document_change(document_path) {
if (document_path.includes('IMPL_PLAN.md')) {
const context_changes = parse_context_updates(document_path);
propagate_context_to_tasks(context_changes);
log_sync_event('impl_plan_to_json', document_path);
}
if (document_path.includes('TODO_LIST.md')) {
const status_changes = parse_checkbox_updates(document_path);
update_task_status_from_todos(status_changes);
recalculate_hierarchy_progress(status_changes);
update_session_progress();
log_sync_event('todo_list_to_json', document_path);
}
}
```
#### JSON → Document Synchronization
**Trigger Events**:
- Task status changed in JSON files
- New task created via decomposition
- Task hierarchy modified (parent-child relationships)
- Progress checkpoint reached
- Task completion cascading up hierarchy
**Actions**:
```javascript
// Pseudo-code for JSON sync process
on_task_change(task_id, change_type, data) {
// Update TODO_LIST.md with current task status
update_todo_list_display(task_id, data.status);
if (change_type === 'status_change' && data.new_status === 'completed') {
// Recalculate parent task progress
update_parent_progress(task_id);
check_dependency_unblocking(task_id);
}
if (change_type === 'task_decomposition') {
// Add new subtasks to TODO_LIST.md
add_subtasks_to_todo_list(data.subtasks);
update_todo_list_hierarchy(task_id, data.subtasks);
}
update_session_coordination_metadata();
log_sync_event('json_to_todo_list', task_id);
}
```
### 3. Real-Time Coordination Process
#### Automatic Sync Process
```
1. File System Watcher → Detects document changes
2. Change Parser → Extracts structured data from documents
3. Conflict Detector → Identifies synchronization conflicts
4. Sync Engine → Applies changes based on ownership rules
5. Validation → Verifies consistency across all files
6. Audit Logger → Records all sync events
```
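Steps 1–2 of the pipeline can be sketched with a dependency-free mtime snapshot (the real watcher presumably uses OS file-system events; polling is used here to keep the sketch self-contained):

```python
import os
import time

def detect_changes(paths, last_seen):
    """Compare current mtimes against a snapshot dict; return changed paths
    and update the snapshot in place."""
    changed = []
    for p in paths:
        if not os.path.exists(p):
            continue
        mtime = os.path.getmtime(p)
        if mtime > last_seen.get(p, 0):
            last_seen[p] = mtime
            changed.append(p)
    return changed

def watch_documents(paths, on_change, poll_interval=1.0):
    """Step 1 of the sync pipeline: detect changes and hand each changed
    path to the change parser via on_change."""
    snapshot = {}
    detect_changes(paths, snapshot)          # prime without firing events
    while True:
        for path in detect_changes(paths, snapshot):
            on_change(path)                  # -> change parser -> sync engine
        time.sleep(poll_interval)
```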
#### Manual Sync Triggers
```bash
# Force complete synchronization
/task:sync --all
# Sync specific task
/task:sync IMPL-001
# Validate and repair sync issues
/task:sync --validate --repair
# View sync status
/task:sync --status
```
## Conflict Resolution
### Conflict Types and Resolution
#### 1. Timestamp Conflicts
**Scenario**: Both document and JSON modified simultaneously
**Resolution**: Most recent timestamp wins, with manual review option
```json
{
"conflict_type": "timestamp",
"document_timestamp": "2025-09-05T11:20:00Z",
"json_timestamp": "2025-09-05T11:19:30Z",
"resolution": "document_wins",
"manual_review_required": false
}
```
#### 2. Data Authority Conflicts
**Scenario**: Task status changed directly in TODO_LIST.md vs JSON file
**Resolution**: Determine whether the change is an authorized checkbox update or an unauthorized direct edit
```json
{
"conflict_type": "data_authority",
"field": "task_status",
"document_value": "completed",
"json_value": "active",
"change_source": "checkbox|direct_edit",
"resolution": "checkbox_authorized|json_authority",
"action": "accept_checkbox_change|revert_document_change"
}
```
#### 3. Hierarchy Conflicts
**Scenario**: Task decomposition modified in JSON but TODO_LIST.md structure differs
**Resolution**: JSON hierarchy is authoritative, TODO_LIST.md updated
```json
{
"conflict_type": "hierarchy",
"conflict_description": "Task impl-1 subtasks differ between JSON and TODO display",
"json_subtasks": ["impl-1.1", "impl-1.2", "impl-1.3"],
"todo_display": ["impl-1.1", "impl-1.2"],
"resolution": "json_authority",
"action": "update_todo_list_structure",
"manual_validation_required": false
}
```
### Conflict Resolution Priority
1. **Data Ownership Rules**: Respect authoritative source
2. **Recent Timestamp**: When ownership is shared
3. **User Intent**: Manual resolution for complex conflicts
4. **System Consistency**: Maintain cross-file integrity
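The priority order can be sketched as a resolver (field names and the ownership table below are simplified illustrations, not the system's actual schema):

```python
# Fields owned authoritatively by one side, per the data ownership rules
OWNERSHIP = {
    "task_status": "json",
    "task_hierarchy": "json",
    "implementation_strategy": "document",
}

def resolve_conflict(field, document_value, json_value, document_ts, json_ts):
    """Return (winner, value) following the resolution priority order.
    ISO-8601 timestamps compare correctly as strings."""
    owner = OWNERSHIP.get(field)
    if owner == "json":                      # 1. data ownership rules
        return "json", json_value
    if owner == "document":
        return "document", document_value
    if document_ts != json_ts:               # 2. recent timestamp (shared fields)
        return (("document", document_value) if document_ts > json_ts
                else ("json", json_value))
    return "manual", None                    # 3. escalate to user intent
```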
## Validation and Integrity
### Consistency Checks
```bash
/task:validate --consistency
Running consistency checks:
✅ Task IDs consistent across JSON files and TODO_LIST.md
✅ Hierarchical relationships valid (parent-child links)
✅ Task depth within limits (max 3 levels)
✅ Progress calculations accurate across hierarchy
⚠️ impl-2.1 missing from TODO_LIST.md display
❌ impl-1.1 status mismatch (JSON: completed, TODO: pending)
❌ Orphaned task: impl-3.2.1 has non-existent parent
Issues found: 3
Auto-fix available: 2
Manual review required: 1
```
### Cross-Reference Validation
- Task IDs exist in all referenced documents
- Document sections referenced in JSON exist
- Progress percentages mathematically consistent
- Dependency relationships bidirectional
### Data Integrity Checks
- JSON schema validation
- Document structure validation
- Cross-file referential integrity
- Timeline consistency validation
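The hierarchy-level checks (orphan detection, depth limits, missing subtask references) can be sketched over a set of loaded task records:

```python
MAX_DEPTH = 3  # impl-N.M.P maximum

def validate_hierarchy(tasks):
    """tasks: {task_id: task_dict}. Return a list of issue strings."""
    issues = []
    for task_id, task in tasks.items():
        parent_id = task.get("parent_id")
        if parent_id is not None and parent_id not in tasks:
            issues.append(
                f"orphaned task: {task_id} has non-existent parent {parent_id}"
            )
        depth = task_id.count(".") + 1       # impl-1.2.3 -> depth 3
        if depth > MAX_DEPTH:
            issues.append(f"{task_id} exceeds maximum depth ({MAX_DEPTH} levels)")
        for sub in task.get("subtasks", []):
            if sub not in tasks:
                issues.append(f"{task_id} references missing subtask {sub}")
    return issues
```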
## Performance and Scalability
### Optimization Strategies
- **Incremental Sync**: Only sync changed sections
- **Batch Updates**: Group related changes
- **Async Processing**: Non-blocking synchronization
- **Caching**: Cache parsed document structures
### Scalability Considerations
- **File Size Limits**: Split large task sets across multiple files
- **Memory Usage**: Stream processing for large document parsing
- **I/O Optimization**: Minimize file reads/writes through batching
## Error Handling and Recovery
### Common Error Scenarios
```bash
# Document parsing error
❌ Failed to parse TODO_LIST.md
→ Syntax error in checkbox format at line 23
→ Restore from JSON task data? (y/n)
# JSON corruption
❌ Invalid JSON in impl-1.2.json
→ Reconstruct from parent task and TODO_LIST.md? (y/n)
# Hierarchy errors
❌ Circular parent-child relationship detected: impl-1.1 → impl-1.1.1 → impl-1.1
→ Break circular dependency? (y/n)
# Missing files
❌ TODO_LIST.md not found
→ Regenerate from JSON task hierarchy? (y/n)
# Depth violations
⚠️ Task impl-1.2.3.1 exceeds maximum depth (3 levels)
→ Flatten hierarchy or promote task? (flatten/promote)
```
### Recovery Mechanisms
- **Automatic Backup**: Git-based document versioning
- **Rollback Options**: Restore from previous sync point
- **Reconstruction**: Rebuild JSON from documents or vice versa
- **Partial Recovery**: Fix individual files without full reset
## Monitoring and Auditing
### Sync Event Logging
```json
{
"timestamp": "2025-09-05T11:20:00Z",
"event_type": "json_to_todo_list_sync",
"source": "impl-1.1.json",
"target": ["TODO_LIST.md"],
"changes": [
{
"type": "hierarchical_status_update",
"task_id": "impl-1.1",
"old_value": "active",
"new_value": "completed",
"propagation": {
"parent_progress": {
"task_id": "impl-1",
"old_progress": 45,
"new_progress": 67
}
}
}
],
"hierarchy_effects": [
"impl-1 progress recalculated",
"impl-1.2 unblocked due to impl-1.1 completion"
],
"conflicts": 0,
"duration_ms": 89,
"status": "success"
}
```
### Performance Metrics
- Sync frequency and duration
- Conflict rate and resolution time
- File size growth over time
- Error rate and recovery success
This JSON-document coordination system ensures reliable, consistent, and performant integration between state management and planning documentation while maintaining clear data ownership and providing robust error handling.

# Workflow Session Management Principles
## Overview
This document provides complete technical implementation details for session state management, multi-session registry, command pre-execution protocol, and recovery mechanisms.
## Multi-Session Architecture
### Session Registry System
**Lightweight Global Registry**: `.workflow/session_status.jsonl`
The system supports multiple concurrent sessions with a single active session:
```jsonl
{"id":"WFS-oauth-integration","status":"paused","description":"OAuth2 authentication implementation","created":"2025-09-07T10:00:00Z","directory":".workflow/WFS-oauth-integration"}
{"id":"WFS-user-profile","status":"active","description":"User profile feature","created":"2025-09-07T11:00:00Z","directory":".workflow/WFS-user-profile"}
{"id":"WFS-bug-fix-123","status":"completed","description":"Fix login timeout issue","created":"2025-09-06T14:00:00Z","directory":".workflow/WFS-bug-fix-123"}
```
**Registry Management**:
- **Single Active Rule**: Only one session can have `status="active"`
- **Automatic Registration**: Sessions auto-register on creation
- **Session Discovery**: Commands query registry for active session context
- **Context Inheritance**: Active session provides default workspace and documents
### Command Pre-execution Protocol
**Universal Session Awareness**: All commands automatically check for active session context before execution
```pseudo
FUNCTION execute_command(command, args):
active_session = get_active_session_from_registry()
IF active_session EXISTS:
context = load_session_context(active_session.directory)
workspace = active_session.directory
inherit_task_context(context)
ELSE:
context = create_temporary_workspace()
workspace = temporary_directory
execute_with_context(command, args, context, workspace)
END FUNCTION
```
**Protocol Benefits**:
- **Active Session Discovery**: Query `.workflow/session_status.jsonl` for active session
- **Context Inheritance**: Use active session directory and documents for command execution
- **Fallback Mode**: Commands can operate without active session (creates temporary workspace)
- **Output Location**: Active session determines where files are created/modified
- **Task Context**: Active session provides current task purpose and requirements
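Active-session discovery over the JSONL registry can be sketched as:

```python
import json

def get_active_session(registry_path=".workflow/session_status.jsonl"):
    """Return the single active session record, or None (fallback mode)."""
    try:
        with open(registry_path, encoding="utf-8") as fh:
            sessions = [json.loads(line) for line in fh if line.strip()]
    except FileNotFoundError:
        return None                          # no registry -> temporary workspace
    active = [s for s in sessions if s.get("status") == "active"]
    if len(active) > 1:                      # single-active rule violated
        raise ValueError("registry integrity error: multiple active sessions")
    return active[0] if active else None
```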
## Individual Session Tracking
All workflow state for each session managed through `workflow-session.json` with comprehensive structure:
### Session State Structure
```json
{
"session_id": "WFS-[topic-slug]",
"session_version": "2.0",
"project": "feature description",
"type": "simple|medium|complex",
"current_phase": "PLAN|IMPLEMENT|REVIEW",
"status": "active|paused|completed",
"created_at": "timestamp",
"updated_at": "timestamp",
"checkpoints": {
"plan": {
"status": "completed|in_progress|pending",
"documents": ["IMPL_PLAN.md", "TASK_BREAKDOWN.md"],
"timestamp": "timestamp"
},
"implement": {
"status": "completed|in_progress|pending",
"agents_completed": ["code-developer"],
"current_agent": "code-review-agent",
"todos": {
"total": 12,
"completed": 8,
"in_progress": 1
},
"timestamp": "timestamp"
},
"review": {
"status": "completed|in_progress|pending",
"quality_checks": {
"code_quality": "passed",
"test_coverage": "pending"
}
}
},
"context_chain": [],
"state_transitions": []
}
```
## Phase-Aware Session Management
### Conceptual/Planning Phase
- Tracks planning document generation
- Monitors task decomposition progress
- Preserves planning context and decisions
- Safe interruption at document boundaries
### Implementation Phase
- Integrates with existing TodoWrite system
- Tracks agent progression and outputs
- Maintains file modification history
- Supports multi-agent coordination
### Review Phase
- Tracks validation and quality gates
- Preserves review comments and decisions
- Maintains compliance check status
## Automatic Checkpoints
### Checkpoint Triggers
- **Planning Phase**:
- After planning document completion
- After task breakdown generation
- On user interrupt request
- **Implementation Phase**:
- After agent completion
- At TodoWrite milestones
- After significant file changes
- On phase transitions
- **Review Phase**:
- After quality check completion
- On validation milestones
- At review agent boundaries
### Checkpoint Strategy
```json
{
"save_triggers": ["agent_complete", "todo_milestone", "user_interrupt"],
"save_data": ["agent_outputs", "file_changes", "todo_state"],
"resume_logic": "skip_completed_continue_sequence"
}
```
## Cross-Phase Context Preservation
### Context Chain Maintenance
- All phase outputs preserved in session
- Context automatically transferred between phases
- Planning documents bridge PLAN → IMPLEMENT phases
- Implementation artifacts bridge IMPLEMENT → REVIEW
- Full audit trail maintained for decisions
### State Transitions
```json
{
"from": "PLAN",
"to": "IMPLEMENT",
"timestamp": "timestamp",
"trigger": "planning completion",
"handoff_data": {
"plan_path": ".workflow/WFS-[topic-slug]/IMPL_PLAN.md",
"tasks": ["task1", "task2"],
"complexity": "medium"
}
}
```
## Recovery Mechanisms
### Automatic Recovery Logic
```python
def resume_workflow():
session = load_session()
if session.current_phase == "PLAN":
resume_planning(session.checkpoints.plan)
elif session.current_phase == "IMPLEMENT":
resume_implementation(session.checkpoints.implement)
elif session.current_phase == "REVIEW":
resume_review(session.checkpoints.review)
```
### State Validation
- Verify required artifacts exist for resumption
- Check file system consistency with session state
- Validate TodoWrite synchronization
- Ensure agent context completeness
- Confirm phase prerequisites met
### Recovery Strategies
- **Complete Recovery**: Full state restoration when possible
- **Partial Recovery**: Resume with warning when some data missing
- **Graceful Degradation**: Restart phase with maximum retained context
- **Manual Intervention**: Request user guidance for complex conflicts
## Agent Integration Protocol
### Required Agent Capabilities
All agents must support:
- Checkpoint save/load functionality
- State validation for resumption
- Context preservation across interrupts
- Progress reporting to session manager
### Phase-Specific Integration
- **Planning Agents**: Auto-save planning outputs
- **Implementation Agents**: Track code changes and test results
- **Review Agents**: Preserve validation outcomes
## Error Handling
### Common Scenarios
1. **Session File Corruption**:
- Automatic backup before each save
- Rollback to last known good state
- Recovery from planning documents
2. **Version Incompatibility**:
- Automatic migration for minor versions
- Manual intervention for major changes
- Backward compatibility for essential fields
3. **Missing Dependencies**:
- Graceful handling of missing files
- Regeneration of recoverable artifacts
- Clear error messages for resolution
4. **Multi-Session Conflicts**:
- Registry integrity validation
- Active session collision detection
- Automatic session status correction
## Session Lifecycle Management
### Complete Session Lifecycle
**1. Registration Phase**
- Add session to global registry (`.workflow/session_status.jsonl`)
- Generate unique session ID in WFS-[topic-slug] format
- Create session directory structure
**2. Activation Phase**
- Set session as active (deactivates any other active session)
- Initialize session state file (`workflow-session.json`)
- Create base directory structure based on complexity level
**3. Execution Phase**
- Track progress through workflow phases (PLAN → IMPLEMENT → REVIEW)
- Maintain checkpoints at natural boundaries
- Update session state with phase transitions and progress
**4. State Management Phase**
- **Active**: Session is currently being worked on
- **Paused**: Session temporarily suspended, can be resumed
- **Completed**: Session finished successfully
**5. Session Operations**
- **Switching**: Change active session (preserves state of previous)
- **Resumption**: Intelligent recovery from saved state and checkpoints
- **Interruption**: Graceful pause with complete state preservation
### Session State Transitions
```
INACTIVE → ACTIVE → PAUSED → ACTIVE → COMPLETED
    ↑         ↓        ↓        ↑         ↓
  CREATE    PAUSE   SWITCH   RESUME   ARCHIVE
```
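The legal transitions in the diagram can be enforced with a small lookup table:

```python
VALID_TRANSITIONS = {
    "inactive":  {"active"},               # CREATE
    "active":    {"paused", "completed"},  # PAUSE / finish
    "paused":    {"active"},               # RESUME (or SWITCH back)
    "completed": set(),                    # terminal; ARCHIVE only
}

def transition(session, new_status):
    """Mutate session['status'] if the transition is legal, else raise."""
    current = session["status"]
    if new_status not in VALID_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {new_status}")
    session["status"] = new_status
    return session
```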
## Implementation Guidelines
### Testing Requirements
- Single-phase interruption/resumption
- Multi-phase workflow continuity
- Context preservation validation
- Error recovery scenarios
- Multi-session registry operations
- Session switching without data loss
- Active session inheritance in commands
- Registry integrity validation
- Version migration testing
### Success Metrics
- Zero data loss on resume or session switch
- Context continuity maintained across sessions
- No duplicate work performed
- Full workflow completion capability
- Seamless multi-session management
- Registry integrity maintained
- Commands automatically inherit active session context
- Minimal performance overhead

# Task Decomposition Integration Principles
## Overview
This document defines authoritative complexity thresholds, decomposition triggers, and decision trees for workflow complexity classification.
## Standardized Complexity Thresholds
### Simple Workflows (<5 tasks)
**Criteria**: Tasks < 5 AND modules ≤ 3 AND effort ≤ 4h
**Structure**: Minimal structure with basic task tracking
**Documents**: IMPL_PLAN.md only, no TODO_LIST.md
**Task Files**: impl-*.json (single level)
### Medium Workflows (5-15 tasks)
**Criteria**: Tasks 5-15 OR modules > 3 OR effort > 4h OR complex dependencies
**Structure**: Enhanced structure with progress tracking
**Documents**: IMPL_PLAN.md + TODO_LIST.md (auto-triggered)
**Task Files**: impl-*.*.json (up to 2 levels)
### Complex Workflows (>15 tasks)
**Criteria**: Tasks > 15 OR modules > 5 OR effort > 2 days OR multi-repository
**Structure**: Complete structure with comprehensive documentation
**Documents**: IMPL_PLAN.md + TODO_LIST.md + expanded documentation
**Task Files**: impl-*.*.*.json (up to 3 levels maximum)
## Complexity Decision Tree
### Classification Algorithm
```
START: Analyze Workflow Requirements
│
├─ tasks > 15 OR modules > 5 OR effort > 2 days OR multi-repo?
│        └─ YES → COMPLEX
│
├─ tasks ≥ 5 OR modules > 3 OR effort > 4h OR complex dependencies?
│        └─ YES → MEDIUM
│
└─ otherwise (tasks < 5 AND modules ≤ 3 AND effort ≤ 4h) → SIMPLE
```
### Decision Matrix
| **Factor** | **Simple** | **Medium** | **Complex** |
|------------|------------|------------|-------------|
| Task Count | < 5 | 5-15 | > 15 |
| Module Count | ≤ 3 | 4-5 | > 5 |
| Effort Estimate | ≤ 4h | 4h-2d | > 2d |
| Dependencies | Simple | Complex | Multi-repo |
| Repository Scope | Single | Single | Multiple |
### Threshold Priority
1. **Task Count**: Primary factor (most reliable predictor)
2. **Module Count**: Secondary factor (scope indicator)
3. **Effort Estimate**: Tertiary factor (complexity indicator)
4. **Dependencies**: Override factor (can force higher complexity)
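The thresholds and their priority order can be sketched as a classifier (effort is expressed in hours, with 2 days taken as 16 working hours):

```python
def classify_complexity(task_count, module_count, effort_hours,
                        multi_repo=False, complex_dependencies=False):
    """Return 'simple' | 'medium' | 'complex' per the decision matrix."""
    # Override factors first: multi-repo scope or any complex-tier
    # threshold forces the complex classification
    if multi_repo or task_count > 15 or module_count > 5 or effort_hours > 16:
        return "complex"
    # Any medium-tier threshold (including complex dependencies) lifts
    # the workflow out of the simple tier
    if (task_count >= 5 or module_count > 3 or effort_hours > 4
            or complex_dependencies):
        return "medium"
    return "simple"
```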
## Automatic Document Generation Rules
### Generation Matrix
| **Complexity** | **IMPL_PLAN.md** | **TODO_LIST.md** | **Task Hierarchy** | **Structure** |
|----------------|------------------|------------------|-------------------|---------------|
| Simple | Always | No | 1 level | Minimal |
| Medium | Always | Auto-trigger | 2 levels | Enhanced |
| Complex | Always | Always | 3 levels | Complete |
### Auto-trigger Conditions
**TODO_LIST.md Generation** (Medium workflows):
- Tasks ≥ 5 OR modules > 3 OR effort > 4h OR dependencies complex
**Enhanced Structure** (Medium workflows):
- Progress tracking with hierarchical task breakdown
- Cross-references between planning and implementation
- Summary generation for major tasks
**Complete Structure** (Complex workflows):
- Comprehensive documentation suite
- Multi-level task decomposition
- Full progress monitoring and audit trail
## Task System Integration
### Hierarchical Task Schema
**Maximum Depth**: 3 levels (impl-N.M.P)
**Task File Structure**: Complexity determines maximum hierarchy depth
### Progress Calculation Rules
**Simple**: Linear progress through main tasks
**Medium**: Weighted progress with subtask consideration
**Complex**: Hierarchical progress with multi-level rollup
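Multi-level rollup can be sketched recursively (equal weighting per child is assumed here; the real system may weight by effort):

```python
def task_progress(task_id, tasks):
    """Leaf progress is stored on the task; parent progress is the mean
    of its children, rolled up through the hierarchy."""
    task = tasks[task_id]
    subtasks = task.get("subtasks", [])
    if not subtasks:
        # Completed leaves without an explicit figure count as 100%
        default = 100 if task.get("status") == "completed" else 0
        return task.get("progress", default)
    return sum(task_progress(s, tasks) for s in subtasks) / len(subtasks)
```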
## Implementation Integration Rules
### Decomposition Triggers
**Automatic Decomposition Required When**:
- Task count exceeds complexity threshold (5+ for medium, 15+ for complex)
- Cross-module changes affect >3 modules
- Architecture pattern changes required
- Multi-repository impacts detected
- Complex interdependencies identified
### Direct Execution Conditions
**Skip Decomposition For**:
- Single module updates with clear boundaries
- Simple documentation changes
- Isolated bug fixes affecting <3 files
- Clear, well-defined maintenance tasks
## Validation Rules
### Complexity Classification Validation
1. **Threshold Verification**: Ensure task count, module count, and effort estimates align
2. **Override Checks**: Verify dependency complexity doesn't require higher classification
3. **Consistency Validation**: Confirm file structure matches complexity level
4. **Progress Calculation**: Validate progress tracking matches hierarchy depth
### Quality Assurance
- Decomposition depth must not exceed 3 levels (impl-N.M.P maximum)
- Task hierarchy must be consistent across JSON files and TODO_LIST.md
- Complexity classification must align with document generation rules
- Auto-trigger conditions must be properly evaluated and documented
---
**System ensures**: Consistent complexity classification with appropriate decomposition and structure scaling

# Task Management Principles
## Overview
This document provides complete technical implementation for the task system, including JSON schema, coordination rules, TodoWrite integration, and validation mechanisms.
## Unified Task JSON Schema
### Core Task Structure
All task files must conform to this schema with support for recursive decomposition:
```json
{
"id": "impl-1",
"parent_id": null,
"title": "Task title describing the work",
"type": "feature|bugfix|refactor|test|docs",
"status": "pending|active|completed|blocked|failed",
"priority": "low|normal|high|critical",
"agent": "code-developer|planning-agent|test-agent|review-agent",
"effort": "1h|2h|4h|1d|2d",
"context": {
"inherited_from": "WFS-user-auth-system",
"requirements": ["Specific requirement 1", "Specific requirement 2"],
"scope": ["src/module/*", "tests/module/*"],
"acceptance": ["Success criteria 1", "Success criteria 2"]
},
"dependencies": {
"upstream": ["impl-0"],
"downstream": ["impl-2", "impl-3"]
},
"subtasks": ["impl-1.1", "impl-1.2", "impl-1.3"],
"execution": {
"attempts": 1,
"current_attempt": {
"started_at": "2025-09-05T10:35:00Z",
"checkpoints": ["setup", "implement", "test", "validate"],
"completed_checkpoints": ["setup", "implement"]
},
"history": []
},
"metadata": {
"created_at": "2025-09-05T10:30:00Z",
"started_at": "2025-09-05T10:35:00Z",
"completed_at": "2025-09-05T13:15:00Z",
"last_updated": "2025-09-05T13:15:00Z",
"version": "1.0"
}
}
```
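A lightweight validator for the required fields and enumerations can be sketched as follows (a full implementation would presumably use JSON Schema):

```python
STATUSES = {"pending", "active", "completed", "blocked", "failed"}
TYPES = {"feature", "bugfix", "refactor", "test", "docs"}
PRIORITIES = {"low", "normal", "high", "critical"}

def validate_task(task):
    """Return a list of schema violations (empty when the task is valid)."""
    errors = []
    for field in ("id", "title", "status"):
        if field not in task:
            errors.append(f"missing required field: {field}")
    if task.get("status") not in STATUSES:
        errors.append(f"invalid status: {task.get('status')}")
    if "type" in task and task["type"] not in TYPES:
        errors.append(f"invalid type: {task['type']}")
    if "priority" in task and task["priority"] not in PRIORITIES:
        errors.append(f"invalid priority: {task['priority']}")
    if not str(task.get("id", "")).startswith("impl-"):
        errors.append("id must use the impl-N[.M[.P]] format")
    return errors
```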
### Status Enumeration
Standard status values across all systems:
- **pending**: Task created but not started
- **active**: Task currently being worked on
- **completed**: Task successfully finished
- **blocked**: Task cannot proceed due to dependencies
- **failed**: Task attempted but failed execution
### Type Classification
Standard task types:
- **feature**: New functionality implementation
- **bugfix**: Fixing existing issues
- **refactor**: Code improvement without functionality change
- **test**: Test implementation or testing tasks
- **docs**: Documentation creation or updates
### Priority Levels
Standard priority values:
- **low**: Can be deferred
- **normal**: Standard priority (default)
- **high**: Should be completed soon
- **critical**: Must be completed immediately
## TodoWrite Integration System
### TodoWrite Tool vs TODO_LIST.md File
**Clear Distinction**: TodoWrite is Claude's internal task-tracking tool; TODO_LIST.md is the persistent workflow file
**TodoWrite Tool**:
- Claude's internal task management interface
- Real-time progress tracking during execution
- Temporary state for active workflow sessions
- Used by agents and commands for immediate task coordination
**TODO_LIST.md File**:
- Persistent task list stored in workflow session directory
- Cross-referenced with JSON task files
- Maintains task hierarchy and progress visualization
- Provides audit trail and resumable task state
### Synchronization Protocol
**TodoWrite → TODO_LIST.md**:
- TodoWrite task completion triggers TODO_LIST.md checkbox updates
- TodoWrite progress reflected in TODO_LIST.md progress calculations
- TodoWrite task status changes sync to JSON task files
**TODO_LIST.md → JSON Task Files**:
- Checkbox changes in TODO_LIST.md update JSON task status
- Manual task modifications propagate to JSON files
- Progress calculations derived from JSON task completion
**JSON Task Files → TodoWrite**:
- Task creation in JSON automatically creates TodoWrite entries when session is active
- JSON status changes reflect in TodoWrite display
- Agent task assignments sync to TodoWrite coordination
### Integration Rules
1. **Session Active**: TodoWrite automatically syncs with TODO_LIST.md
2. **Session Paused**: TodoWrite state preserved in TODO_LIST.md
3. **Session Resumed**: TodoWrite reconstructed from TODO_LIST.md and JSON files
4. **Cross-Session**: TODO_LIST.md provides continuity, TodoWrite provides active session interface
## Workflow Integration Schema
### Workflow Task Summary
Workflow session contains minimal task references with hierarchical support:
```json
{
"phases": {
"IMPLEMENT": {
"tasks": ["impl-1", "impl-2", "impl-3"],
"completed_tasks": ["impl-1"],
"blocked_tasks": [],
"progress": 33,
"task_depth": 2,
"last_sync": "2025-09-05T13:15:00Z"
}
}
}
```
### Task Reference Format
Tasks referenced by hierarchical ID, full details in JSON files:
```json
{
"task_summary": {
"id": "impl-1",
"parent_id": null,
"title": "Task title",
"status": "completed",
"type": "feature",
"depth": 1,
"progress": 100
}
}
// Subtask example
{
"task_summary": {
"id": "impl-1.2.1",
"parent_id": "impl-1.2",
"title": "Detailed subtask",
"status": "active",
"type": "implementation",
"depth": 3,
"progress": 45
}
}
```
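Because the ID itself encodes the hierarchy, `parent_id` and `depth` can be derived from it and cross-checked against the stored fields; a sketch:

```python
def parse_task_id(task_id: str) -> dict:
    """Derive parent_id and depth from a hierarchical ID like 'impl-1.2.1'."""
    prefix, _, numbers = task_id.partition("-")
    parts = numbers.split(".")
    parent = f"{prefix}-{'.'.join(parts[:-1])}" if len(parts) > 1 else None
    return {"id": task_id, "parent_id": parent, "depth": len(parts)}
```

Running this over every task summary gives a cheap consistency check: the derived `parent_id` and `depth` must equal the stored values.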
## Data Ownership Rules
### JSON Task Files Own
- Complete task details and context (all levels)
- Execution history and checkpoints
- Parent-child relationships (via parent_id)
- Requirements and acceptance criteria
- Agent assignment and progress tracking
- Hierarchical decomposition state
### Workflow Session Owns
- Top-level task ID lists per phase
- Overall progress calculations
- Phase transition triggers
- Global context inheritance rules
- Task depth management (max 3 levels)
- Sync timestamps and validation
### Shared Responsibility
- Task status (JSON authoritative, TODO_LIST.md displays)
- Progress calculations (derived from JSON, shown in TODO_LIST.md)
- Hierarchical relationships (JSON defines, TODO_LIST.md visualizes)
- Dependency validation (cross-file consistency)
## Synchronization Principles
### Automatic Sync Triggers
- **Task Creation**: Add to workflow task list and TODO_LIST.md
- **Status Change**: Update TODO_LIST.md checkboxes and progress
- **Task Completion**: Update TODO_LIST.md and recalculate hierarchy progress
- **Task Decomposition**: Create child JSON files, update TODO_LIST.md structure
- **Context Update**: Propagate to child tasks in hierarchy
- **Dependency Change**: Validate across all hierarchy levels
### Sync Direction Rules
1. **JSON Task → TODO_LIST.md**: Status updates, progress changes, completion
2. **JSON Task → Workflow**: Task creation, hierarchy changes, phase completion
3. **IMPL_PLAN.md → JSON Task**: Context updates, requirement changes
4. **TODO_LIST.md → JSON Task**: Manual status changes via checkboxes
5. **Bidirectional**: Dependencies, timestamps, hierarchy validation
### Conflict Resolution
Priority order for conflicts:
1. **Most Recent**: Latest timestamp wins
2. **Task Authority**: Task files authoritative for task details
3. **Workflow Authority**: Workflow authoritative for phase management
4. **Manual Resolution**: User confirmation for complex conflicts
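The first two priority rules can be sketched as a resolver over two conflicting records; the `updated_at` field name is an assumption for illustration:

```python
from datetime import datetime

def resolve_task_conflict(json_record: dict, todo_record: dict) -> dict:
    """Latest timestamp wins; JSON task files break ties as the task authority."""
    t_json = datetime.fromisoformat(json_record["updated_at"])
    t_todo = datetime.fromisoformat(todo_record["updated_at"])
    if t_todo > t_json:
        return todo_record
    return json_record  # equal or newer: task files are authoritative for details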
### Data Integrity Checks
- **ID Consistency**: All task IDs exist across JSON files and TODO_LIST.md
- **Hierarchy Validation**: Parent-child relationships are bidirectional and valid
- **Depth Limits**: No task exceeds 3 levels deep (impl-N.M.P max)
- **Status Validation**: Status values match enumeration across all files
- **Dependency Validation**: All dependencies exist and respect hierarchy
- **Progress Accuracy**: Calculated progress matches hierarchical task completion
- **Timestamp Ordering**: Created ≤ Started ≤ Completed across hierarchy
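Several of these checks can run as one validation pass over an in-memory `{task_id: task}` map; this sketch covers only the depth limit and bidirectional hierarchy rules:

```python
def check_integrity(tasks: dict) -> list:
    """Return violations of depth and parent-child consistency rules."""
    errors = []
    for tid, task in tasks.items():
        if tid.count(".") > 2:  # impl-N.M.P is the deepest legal form
            errors.append(f"{tid}: exceeds 3-level depth limit")
        parent = task.get("parent_id")
        if parent is not None:
            if parent not in tasks:
                errors.append(f"{tid}: parent {parent} missing")
            elif tid not in tasks[parent].get("subtasks", []):
                errors.append(f"{tid}: not listed in {parent}.subtasks")
    return errors
```

An empty result means the hierarchy rules hold; each error string names the offending task for repair or manual resolution.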
## Hierarchical Task Decomposition
### Decomposition Rules
**Maximum Depth**: 3 levels (impl-N.M.P)
- **Level 1** (impl-N): Main implementation tasks
- **Level 2** (impl-N.M): Subtasks with specific focus areas
- **Level 3** (impl-N.M.P): Detailed implementation steps
### ID Format Standards
```
impl-1 # Main task
impl-1.1 # Subtask of impl-1
impl-1.1.1 # Detailed subtask of impl-1.1
impl-1.2 # Another subtask of impl-1
impl-2 # Another main task
```
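These formats reduce to a single regular expression, which is enough to reject both malformed IDs and depth violations at creation time:

```python
import re

# impl-N, impl-N.M, or impl-N.M.P (at most two dotted segments)
TASK_ID = re.compile(r"^impl-\d+(?:\.\d+){0,2}$")

def valid_task_id(task_id: str) -> bool:
    return TASK_ID.fullmatch(task_id) is not None
```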
### Parent-Child Relationships
```json
// Parent task (impl-1.json)
{
"id": "impl-1",
"parent_id": null,
"subtasks": ["impl-1.1", "impl-1.2", "impl-1.3"]
}
// Child task (impl-1.1.json)
{
"id": "impl-1.1",
"parent_id": "impl-1",
"subtasks": ["impl-1.1.1", "impl-1.1.2"]
}
// Grandchild task (impl-1.1.1.json)
{
"id": "impl-1.1.1",
"parent_id": "impl-1.1",
"subtasks": [] // Leaf node - no further decomposition
}
```
### Progress Calculation
- **Leaf tasks**: Progress based on execution checkpoints
- **Container tasks**: Progress = average of all subtask progress
- **Workflow progress**: Weighted average of all top-level tasks
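The container rule is naturally recursive; a sketch, assuming each task dict carries `subtasks` and leaf tasks carry a stored `progress` value:

```python
def task_progress(task_id: str, tasks: dict) -> float:
    """Leaf tasks report stored progress; containers average their subtasks."""
    subtasks = tasks[task_id].get("subtasks", [])
    if not subtasks:
        return tasks[task_id].get("progress", 0)
    return sum(task_progress(s, tasks) for s in subtasks) / len(subtasks)
```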
### Status Propagation Rules
- **Child → Parent**: Parent cannot be "completed" until all children complete
- **Parent → Child**: Parent "blocked" status may propagate to children
- **Sibling Independence**: Subtasks at same level operate independently
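The child → parent gate amounts to a completion check over direct children before a parent status change is accepted:

```python
def can_complete_parent(task_id: str, tasks: dict) -> bool:
    """A parent may be marked completed only when every child is completed."""
    return all(
        tasks[child]["status"] == "completed"
        for child in tasks[task_id].get("subtasks", [])
    )
```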
## Context Management
### Context Inheritance Chain
```
Workflow Context
↓ (inherits requirements, constraints)
Task Context
↓ (distributes scope, specific requirements)
Subtask Context
```
### Context Distribution Rules
- **Requirements**: Flow from workflow to tasks
- **Scope**: Refined at each level (workflow → task → subtask)
- **Constraints**: Apply globally from workflow
- **Acceptance Criteria**: Specific to each task level
### Dynamic Context Updates
- Changes in workflow context propagate to tasks
- Task-specific context remains isolated
- Subtask context inherits from parent task
- Context versioning tracks changes
## Agent Integration
### Agent Assignment Logic
Based on task type and complexity:
- **Planning tasks** → planning-agent
- **Implementation** → code-developer
- **Testing** → test-agent
- **Documentation** → docs-agent
- **Review** → review-agent
### Execution Context Preparation
```json
{
"execution_context": {
"task": {
      "id": "impl-1",
"requirements": ["from task context"],
"scope": ["from task context"]
},
"workflow": {
"session": "WFS-2025-001",
"phase": "IMPLEMENT",
"global_context": ["from workflow"]
},
"agent": {
"type": "code-developer",
"capabilities": ["coding", "testing"],
"context_optimizations": ["code_patterns"]
}
}
}
```
## Error Handling
### Common Error Scenarios
1. **JSON Task File Missing**: Recreate from TODO_LIST.md or parent task data
2. **Status Mismatch**: JSON files are authoritative, update TODO_LIST.md
3. **Hierarchy Broken**: Reconstruct parent-child relationships from IDs
4. **Invalid Dependencies**: Validate across all hierarchy levels
5. **Schema Version Mismatch**: Migrate to current hierarchical schema
6. **Orphaned Tasks**: Clean up or reassign to proper parent/workflow
7. **Depth Violation**: Flatten excessive hierarchy to 3 levels max
### Recovery Strategies
- **Automatic Recovery**: For common, well-defined conflicts
- **Validation Warnings**: For non-critical inconsistencies
- **Manual Intervention**: For complex or ambiguous conflicts
- **Graceful Degradation**: Continue with best available data
### Validation Rules
- All task IDs must be unique and follow impl-N[.M[.P]] format
- Hierarchical IDs must have valid parent relationships
- Maximum depth of 3 levels (impl-N.M.P)
- Status values must be from defined enumeration
- Dependencies must reference existing tasks at appropriate levels
- Parent tasks cannot be completed until all subtasks complete
- Timestamps must be chronologically ordered
- Required fields cannot be null or empty
## Implementation Guidelines
### File Organization
```
.task/
├── impl-1.json # Main task
├── impl-1.1.json # Level 2 subtask
├── impl-1.1.1.json # Level 3 detailed subtask
├── impl-1.2.json # Level 2 subtask
└── impl-2.json # Another main task
.workflow/WFS-[topic-slug]/
├── workflow-session.json # Master session
├── IMPL_PLAN.md # Planning document
└── TODO_LIST.md # Progress tracking and task registry
```
### Performance Considerations
- **Lazy Loading**: Load task details only when needed
- **Batch Operations**: Group sync operations for efficiency
- **Incremental Updates**: Only sync changed data
- **Cache Management**: Cache frequently accessed task data
### Testing Requirements
- Schema validation for all task operations
- Sync consistency across workflow/task boundaries
- Error recovery scenario testing
- Performance testing with multiple tasks
- Concurrent access handling
### Success Metrics
- Zero data loss during sync operations
- Consistent task status across systems
- Fast task operations (< 100ms for single task)
- Reliable error recovery
- Complete audit trail of changes

# TodoWrite-Workflow Coordination Rules
## Overview
This document defines the complete coordination system between Claude's TodoWrite tool and the workflow persistence layer (TODO_LIST.md and JSON task files).
## TodoWrite Tool Architecture
### Tool Purpose and Scope
**TodoWrite Tool**:
- Claude's internal task coordination interface
- Real-time progress tracking during active sessions
- Agent coordination and status management
- Immediate task visibility for execution context
**NOT for**:
- Long-term task persistence (that's JSON task files)
- Cross-session task continuity (that's TODO_LIST.md)
- Historical task audit trails (that's workflow summaries)
## Core Coordination Principles
### Execution Order Rules
1. **Create TodoWrite FIRST** - Before any agent coordination begins
2. **Real-time Updates** - Agents update todo status during execution
3. **Progress Tracking** - Maintain visible workflow state throughout
4. **Single Active Rule** - Only one todo in_progress at any time
5. **Completion Gates** - Mark completed only when truly finished
6. **Persistence Sync** - TodoWrite changes trigger workflow file updates
### Integration Architecture
```
TodoWrite Tool (Claude Internal)
↕ Real-time sync
TODO_LIST.md (Workflow Persistence)
↕ Bidirectional updates
JSON Task Files (Detailed State)
↕ Status propagation
Workflow Session (Master State)
```
## Mandatory TodoWrite Creation
### Pre-execution Requirements
Every workflow execution MUST create TodoWrite entries before agent coordination begins.
**Workflow Initialization**:
1. Analyze workflow complexity
2. Create appropriate TodoWrite pattern based on complexity
3. Initialize TODO_LIST.md file if complexity warrants it
4. Begin agent coordination with TodoWrite context
### Agent Handoff Protocol
**Agent → TodoWrite**:
- Agents receive TodoWrite context on initialization
- Agents update todo status in real-time during execution
- Agents mark completion only when truly finished
- Agents create new todos when discovering additional work
## TodoWrite Patterns by Complexity
### Simple Workflows (3-4 todos)
**Pattern**: Linear execution with minimal tracking
```
1. [pending] Context gathering
2. [pending] Solution implementation
3. [pending] Code review and validation
4. [pending] Task completion
```
**Coordination**: Direct TodoWrite → JSON files (no TODO_LIST.md)
### Medium Workflows (5-7 todos)
**Pattern**: Structured execution with progress tracking
```
1. [pending] Implementation planning
2. [pending] Context gathering
3. [pending] Implementation with testing
4. [pending] Functionality validation
5. [pending] Code quality review
6. [pending] Task completion
```
**Coordination**: TodoWrite ↔ TODO_LIST.md ↔ JSON files
### Complex Workflows (7-10 todos)
**Pattern**: Comprehensive execution with full documentation
```
1. [pending] Detailed planning
2. [pending] Documentation generation
3. [pending] Context and dependency gathering
4. [pending] Comprehensive implementation
5. [pending] Acceptance criteria validation
6. [pending] Thorough review process
7. [pending] Feedback iteration
8. [pending] Task completion
```
**Coordination**: Full three-way sync with audit trails
## Synchronization Protocols
### TodoWrite → TODO_LIST.md Sync
**Trigger Events**:
- Todo status change (pending → in_progress → completed)
- Todo creation during workflow execution
- Todo blocking/unblocking status changes
- Progress milestone achievement
**Sync Actions**:
- Update TODO_LIST.md checkbox states
- Recalculate progress percentages
- Update task status summaries
- Propagate completion timestamps
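Recalculating the progress percentage after a checkbox flip can work directly from the markdown text, using the standard `- [ ]`/`- [x]` checkbox syntax:

```python
import re

def todo_list_progress(todo_md: str) -> int:
    """Percent of checked boxes across all checkbox lines in TODO_LIST.md."""
    boxes = re.findall(r"^\s*- \[([ x])\]", todo_md, re.MULTILINE)
    return round(100 * boxes.count("x") / len(boxes)) if boxes else 0
```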
### TODO_LIST.md → TodoWrite Sync
**Trigger Events**:
- Manual checkbox modification in TODO_LIST.md
- External task status updates
- Workflow resumption from paused state
- Cross-session task inheritance
**Sync Actions**:
- Reconstruct TodoWrite state from TODO_LIST.md
- Initialize appropriate todo patterns
- Restore progress tracking context
- Re-establish agent coordination context
### Bidirectional JSON Integration
**TodoWrite → JSON**:
- Task completion triggers JSON status updates
- Progress checkpoints sync to JSON execution state
- Agent assignments propagate to JSON context
**JSON → TodoWrite**:
- JSON task creation generates TodoWrite entries
- JSON status changes reflect in TodoWrite display
- JSON dependency updates trigger TodoWrite coordination
## State Management Rules
### Session Lifecycle Integration
**Active Session**:
- TodoWrite automatically syncs with workflow session
- Real-time updates propagate to persistent files
- Progress tracking maintains workflow continuity
**Session Pause**:
- TodoWrite state preserved in TODO_LIST.md
- JSON files maintain detailed task context
- Workflow session tracks overall progress
**Session Resume**:
- TodoWrite reconstructed from TODO_LIST.md + JSON files
- Previous progress state fully restored
- Agent context re-established from preserved state
**Session Switch**:
- Current TodoWrite state saved to workflow files
- New session TodoWrite initialized from target session files
- Seamless context switching without data loss
### Progress Calculation Rules
**Simple Workflows**: Progress = completed todos / total todos
**Medium Workflows**: Progress = weighted completion across todo + subtask hierarchy
**Complex Workflows**: Progress = multi-level rollup with checkpoint weighting
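The complex-workflow rollup with checkpoint weighting might look like the following, where the `(progress, weight)` pairs are hypothetical inputs produced by the hierarchy:

```python
def weighted_progress(items: list) -> float:
    """Multi-level rollup: each entry is (progress_percent, weight)."""
    total = sum(weight for _, weight in items)
    if total == 0:
        return 0.0
    return sum(progress * weight for progress, weight in items) / total
```

A heavily weighted checkpoint dominates the rollup, which is the intended behavior for long-running implementation todos.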
### Blocking and Dependency Management
**Todo Blocking**:
- Blocked todos tracked with resolution requirements
- Upstream dependencies prevent todo activation
- Dependency resolution automatically unblocks downstream todos
**Cross-Todo Dependencies**:
- TodoWrite enforces dependency order
- JSON files maintain dependency graph
- TODO_LIST.md visualizes dependency relationships
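Dependency resolution that unblocks downstream todos can be sketched as a scan over the dependency graph, assuming each task dict carries `status` and `dependencies`:

```python
def newly_unblocked(tasks: dict) -> list:
    """IDs of blocked tasks whose every dependency is now completed."""
    return [
        tid for tid, task in tasks.items()
        if task["status"] == "blocked"
        and all(tasks[d]["status"] == "completed" for d in task.get("dependencies", []))
    ]
```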
## Error Handling and Recovery
### TodoWrite State Corruption
**Recovery Strategy**:
1. Attempt to reconstruct from TODO_LIST.md
2. Fallback to JSON file task status
3. Last resort: regenerate from workflow session state
4. Manual intervention if all sources inconsistent
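The four-step fallback reads as a priority chain; the `(name, loader)` pairs below are placeholders for the actual TODO_LIST.md, JSON, and session loaders:

```python
def recover_todowrite(sources):
    """Try each (name, loader) recovery source in priority order."""
    for name, load in sources:
        try:
            state = load()
        except (OSError, ValueError):
            continue  # source unreadable or inconsistent, fall through
        if state:
            return name, state
    raise RuntimeError("manual intervention required: all sources failed")
```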
### Sync Conflict Resolution
**Priority Order**:
1. **TodoWrite** (most recent user interaction)
2. **JSON Files** (authoritative task state)
3. **TODO_LIST.md** (persistence layer)
4. **Manual Resolution** (complex conflicts)
### Validation Rules
- Todo IDs must map to valid JSON task IDs
- Todo status must be consistent across all coordination layers
- Progress calculations must align with actual task completion
- Single active todo rule must be enforced at all times
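The single-active rule is the simplest of these to check mechanically before any sync is committed:

```python
def single_active_ok(todos: list) -> bool:
    """At most one todo may be in_progress at any time."""
    return sum(t["status"] == "in_progress" for t in todos) <= 1
```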
## Integration with Specialized Systems
### Task Management Integration
**Hierarchical Support**: TodoWrite flattens task hierarchy for execution view
**Status Synchronization**: Bidirectional sync with JSON task status
### Session Management Integration
**Multi-Session Support**: TodoWrite aware of active session context
**Context Switching**: Seamless integration with session switching
### Complexity Classification Integration
**Pattern Selection**: TodoWrite patterns match complexity classification
**Auto-scaling**: TodoWrite patterns adapt to workflow complexity changes
## Quality Assurance
### Mandatory Validation Checks
- TodoWrite entries exist before agent coordination
- Single active todo rule maintained throughout execution
- Progress tracking accuracy across all coordination layers
- Completion gates properly validated before marking tasks complete
- Sync consistency across TodoWrite, TODO_LIST.md, and JSON files
### Performance Requirements
- TodoWrite updates must be real-time (< 100ms response)
- Sync operations must complete within 500ms
- Progress calculation must be immediate
- Context switching must preserve full state
---
**System ensures**: Seamless coordination between TodoWrite tool interface and persistent workflow state with real-time progress tracking and reliable state management

# Workflow System Architecture
## Overview
**Foundation**: @./core-principles.md
This document defines the technical system architecture, component relationships, and coordination mechanisms that implement the core workflow principles.
## System Components
### Session Management
**Multi-Session Architecture**: Supports concurrent sessions with single active session pattern
**Registry System**: Global registry tracks all sessions, commands inherit active session context
**State Management**: Individual session state with phase-aware progress tracking
**Technical Details**: @./session-management-principles.md
### File Structure System
**Progressive Structure**: Scales from minimal structure for simple tasks to comprehensive organization for complex workflows
**Complexity Levels**: Three levels (0-2) with automatic structure generation based on task count and scope
**Standard Templates**: Consistent directory layouts and file naming across all complexity levels
**Technical Details**: @./file-structure-standards.md
### Chat and Summary Management
**Interaction Documentation**: Gemini CLI sessions automatically saved and cross-referenced with planning documents
**Task Summaries**: Comprehensive documentation of completed work with cross-referencing to implementation plans
**Integration**: Chat insights inform planning, summaries provide audit trail
**Technical Details**: @./file-structure-standards.md
### Task Management System
**Hierarchical Task Schema**: JSON-based task definitions with up to 3 levels of decomposition
**State Coordination**: Bidirectional sync between JSON task files, TODO_LIST.md, and workflow session
**Agent Integration**: Agent assignment based on task type with context preparation
**Progress Tracking**: Real-time progress calculation with dependency management
**Technical Details**: @./task-management-principles.md
### Document Generation Rules
**Complexity-Based Generation**: Automatic document creation based on task count, scope, and complexity
**Progressive Templates**: Standard document templates that scale with workflow complexity
**Auto-trigger Logic**: Conditional document generation based on predefined thresholds
**Technical Details**: @./task-decomposition-integration.md
### Brainstorming Integration
**Context Preservation**: Multi-role brainstorming analysis automatically integrated into planning documents
**Cross-Referencing**: Task context includes references to relevant brainstorming insights
**Synthesis Integration**: Planning documents synthesize brainstorming outputs into actionable strategies
**Technical Details**: @./file-structure-standards.md
## Coordination System
### Data Ownership and Synchronization
**Clear Ownership**: Each document type owns specific data with defined synchronization rules
**Bidirectional Sync**: Automatic synchronization between JSON task files, TODO_LIST.md, and planning documents
**Conflict Resolution**: Prioritized resolution system based on ownership, timestamps, and consistency
**Technical Details**: @./task-management-principles.md
## Command Integration
### Embedded Workflow Logic
**Workflow Commands**: Session management, planning, and implementation with embedded document generation
**Task Commands**: Task creation, breakdown, execution, and status with automatic synchronization
**Manual Tools**: Maintenance operations for edge cases and manual intervention
**Technical Details**: See individual command documentation
## Implementation Flow
**Workflow Phases**: Session initialization → [Optional brainstorming] → Planning → Implementation → Review
**Progressive Complexity**: Structure and documentation automatically scale with task complexity
**Cross-Integration**: Real-time synchronization across all system components
## Quality Control
**Auto-Validation**: Task ID consistency, document references, progress calculations, cross-file integrity
**Error Recovery**: Automatic recovery strategies with manual fallback for complex conflicts
**Data Integrity**: Comprehensive validation and consistency checks across all workflow components
## Architecture Integration
This document provides the technical architecture framework. For complete system documentation, see:
**📋 Complete Documentation**: @./workflow-overview.md
For specialized implementation details:
- **Session Management**: @./session-management-principles.md
- **Task System**: @./task-management-principles.md
- **Complexity Rules**: @./task-decomposition-integration.md
- **File Structure**: @./file-structure-standards.md
- **TodoWrite Integration**: @./todowrite-coordination-rules.md
---
**Architecture ensures**: Technical framework supporting core principles with scalable component coordination