Workflow Architecture
Overview
This document defines the complete workflow system architecture using a JSON-only data model, marker-based session management, and unified file structure with dynamic task decomposition.
Key Design Decisions
- JSON files are the single source of truth - All markdown documents are read-only generated views
- Marker files for session tracking - Ultra-simple active session management
- Unified file structure definition - Same structure template for all workflows, created on-demand
- Dynamic task decomposition - Subtasks created as needed during execution
- On-demand file creation - Directories and files created only when required
- Agent-agnostic task definitions - Complete context preserved for autonomous execution
Session Management
Active Session Marker System
Ultra-Simple Active Tracking: .workflow/.active-[session-name]
.workflow/
├── WFS-oauth-integration/ # Session directory (paused)
├── WFS-user-profile/ # Session directory (paused)
├── WFS-bug-fix-123/ # Session directory (completed)
└── .active-WFS-user-profile # Marker file (indicates active session)
Marker File Benefits:
- Zero Parsing: File existence check is atomic and instant
- Atomic Operations: File creation/deletion is naturally atomic
- Visual Discovery: `ls .workflow/.active-*` shows the active session immediately
- Simple Switching: Delete old marker + create new marker = session switch
Session Operations
Detect Active Session
active_session=$(find .workflow -name ".active-*" | head -1)
if [ -n "$active_session" ]; then
session_name=$(basename "$active_session" | sed 's/^\.active-//')
echo "Active session: $session_name"
fi
Switch Session
find .workflow -name ".active-*" -delete && touch .workflow/.active-WFS-new-feature
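A slightly safer variant, sketched below, verifies that the target session directory exists before moving the marker; the `switch_session` helper name is illustrative, not part of the defined tooling:

```bash
# Switch the active marker to another session, refusing unknown targets (illustrative sketch)
switch_session() {
  local target="$1"    # e.g. WFS-new-feature
  if [ ! -d ".workflow/$target" ]; then
    echo "✗ No such session directory: .workflow/$target" >&2
    return 1
  fi
  find .workflow -name ".active-*" -delete
  touch ".workflow/.active-$target"
  echo "Active session: $target"
}

switch_session WFS-new-feature
```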
Individual Session Tracking
Each session directory contains workflow-session.json:
{
"session_id": "WFS-[topic-slug]",
"project": "feature description",
"type": "simple|medium|complex",
"current_phase": "PLAN|IMPLEMENT|REVIEW",
"status": "active|paused|completed",
"progress": {
"completed_phases": ["PLAN"],
"current_tasks": ["IMPL-1", "IMPL-2"]
}
}
Data Model
JSON-Only Architecture
JSON files (.task/IMPL-*.json) are the only authoritative source of task state. All markdown documents are read-only generated views.
- Task State: Stored exclusively in JSON files
- Documents: Generated on-demand from JSON data
- No Synchronization: Eliminates bidirectional sync complexity
- Performance: Direct JSON access without parsing overhead
Hierarchical Task System
Maximum Depth: 2 levels (IMPL-N.M format)
IMPL-1 # Main task
IMPL-1.1 # Subtask of IMPL-1 (dynamically created)
IMPL-1.2 # Another subtask of IMPL-1
IMPL-2 # Another main task
IMPL-2.1 # Subtask of IMPL-2 (dynamically created)
Task Status Rules:
- Container tasks: Parent tasks with subtasks (cannot be directly executed)
- Leaf tasks: Only these can be executed directly
- Status inheritance: Parent status derived from subtask completion
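As a sketch of how status inheritance could be computed from the JSON files, assuming execution from within the session directory (the `derive_parent_status` helper is illustrative, not part of the defined tooling):

```bash
# Derive a container task's effective status from its subtasks (illustrative sketch)
derive_parent_status() {
  local parent_id="$1"                    # e.g. IMPL-1
  local subtasks total n_completed
  subtasks=$(ls .task/"$parent_id".*.json 2>/dev/null)
  if [ -z "$subtasks" ]; then
    # No subtasks: the task is a leaf and its own status stands
    jq -r '.status' ".task/$parent_id.json"
    return
  fi
  total=$(echo "$subtasks" | wc -l)
  n_completed=$(echo "$subtasks" | xargs jq -r '.status' | grep -c '^completed$')
  if [ "$n_completed" -eq "$total" ]; then echo "completed"; else echo "active"; fi
}

derive_parent_status IMPL-1
```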
Task JSON Schema
All task files use this simplified 6-field schema:
{
"id": "IMPL-1.2",
"title": "Implement JWT authentication",
"status": "pending|active|completed|blocked|container",
"meta": {
"type": "feature|bugfix|refactor|test|docs",
"agent": "code-developer|planning-agent|code-review-test-agent"
},
"context": {
"requirements": ["JWT authentication", "OAuth2 support"],
"focus_paths": ["src/auth", "tests/auth", "config/auth.json"],
"acceptance": ["JWT validation works", "OAuth flow complete"],
"parent": "IMPL-1",
"depends_on": ["IMPL-1.1"],
"inherited": {
"from": "IMPL-1",
"context": ["Authentication system design completed"]
},
"shared_context": {
"auth_strategy": "JWT with refresh tokens"
}
},
"flow_control": {
"pre_analysis": [
{
"step": "...",
"action": "...",
"command": "..."
}
],
"implementation_approach": {
"task_description": "Implement comprehensive JWT authentication system with secure token management and validation middleware. Reference [inherited.context] from parent task [parent] for architectural consistency. Apply [shared_context.auth_strategy] across authentication modules. Focus implementation on [focus_paths] directories following established patterns.",
"modification_points": [
"Add JWT token generation in login handler (src/auth/login.ts:handleLogin:75-120) following [shared_context.auth_strategy]",
"Implement token validation middleware (src/middleware/auth.ts:validateToken) referencing [inherited.context] design patterns",
"Add refresh token mechanism for session management using [shared_context] token strategy",
"Update user authentication flow to support JWT tokens in [focus_paths] modules"
],
"logic_flow": [
"User login request → validate credentials → generate JWT token using [shared_context.auth_strategy] → store refresh token",
"Protected route access → extract JWT from headers → validate token against [inherited.context] schema → allow/deny access",
"Token expiry handling → use refresh token following [shared_context] strategy → generate new JWT → continue session",
"Logout process → invalidate refresh token → clear client-side tokens in [focus_paths] components"
]
},
"target_files": [
"src/auth/login.ts:handleLogin:75-120",
"src/middleware/auth.ts:validateToken"
]
}
}
Focus Paths Field Details
The focus_paths field within context specifies concrete project paths relevant to the task implementation:
Focus Paths Format
- Array of strings: `["folder1", "folder2", "specific_file.ts"]`
- Concrete paths: Use actual directory/file names without wildcards
- Mixed types: Can include both directories and specific files
- Relative paths: From project root (e.g., `src/auth`, not `./src/auth`)
Path Selection Strategy
- Directories: Include relevant module directories (e.g., `src/auth`, `tests/auth`)
- Specific files: Include files explicitly mentioned in requirements (e.g., `config/auth.json`)
- Avoid wildcards: Use concrete paths discovered via `get_modules_by_depth.sh`
- Focus scope: Only include paths directly related to task implementation
Examples
// Authentication system task
"focus_paths": ["src/auth", "tests/auth", "config/auth.json", "src/middleware/auth.ts"]
// UI component task
"focus_paths": ["src/components/Button", "src/styles", "tests/components"]
// Database migration task
"focus_paths": ["migrations", "src/models", "config/database.json"]
Flow Control Field Details
The flow_control field serves as a universal process manager for task execution with comprehensive flow orchestration:
pre_analysis - Flexible Context Gathering
Core Purpose: A pure context-gathering phase that prepares sufficient information for implementation.
Four Flexible Acquisition Methods:
- Document reference: `bash(cat <relevant doc path>)` - Read project documentation directly
- Search commands: `bash(grep/rg/find ...)` - Flexibly search for code patterns
- CLI analysis: `bash(gemini/codex analysis commands)` - Use deep analysis for understanding
- Free exploration: The agent autonomously uses Read, Grep, Search and other tools
Design Principles:
- Free step count: 1-N steps, determined by requirement complexity
- Flexible command combination: Any bash pipeline combination is supported
- Just enough: Do not over-engineer; gather sufficient context and stop
Basic Step Structure:
- step: Step identifier
- action: Step description
- command: Command to execute (supports variable references)
- output_to: Variable name that stores the result (optional)
- on_error: Error handling strategy
Context Flow Management
- Variable Accumulation: Each step can reference outputs from previous steps via `[variable_name]`
- Context Inheritance: Steps can use dependency summaries and parent task context
- Pipeline Processing: Results flow sequentially through the analysis chain
Variable Reference Format
- Context Variables: Use `[variable_name]` to reference step outputs
- Task Properties: Use `[depends_on]`, `[focus_paths]` to reference task JSON properties
- Bash Compatibility: Avoids conflicts with bash `${}` variable expansion
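For illustration, a minimal two-step fragment in which the second command consumes the first step's output through `[variable_name]`; the step names and search pattern are hypothetical:

```json
"pre_analysis": [
  {
    "step": "find_auth_modules",
    "action": "List files that mention authentication",
    "command": "bash(rg -l 'auth' src/ | head -5)",
    "output_to": "auth_files"
  },
  {
    "step": "inspect_auth_modules",
    "action": "Read the files discovered in the previous step",
    "command": "bash(cat [auth_files])"
  }
]
```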
Path Reference Format
- Session-Specific: Use `.workflow/WFS-[session-id]/` for commands within the active session context
- Cross-Session: Use `.workflow/*/` only when accessing multiple sessions (rare)
- Relative Paths: Use `.summaries/` when executing from within the session directory
Supported Context Acquisition Methods
- Document Retrieval: `bash(cat)` - Read CLAUDE.md, README and other documentation
- Pattern Search: `bash(rg)`, `bash(grep)`, `bash(find)` - Search for code patterns
- Deep Analysis: `bash(gemini-wrapper)` - Understand existing architecture and patterns
- Autonomous Exploration: Flexible combination of Read, Grep, Search and other tools
- Pipeline Combinations: Any bash command pipeline, supporting complex information processing
Flexible Command Combination Examples
Basic Search:
rg "auth|login" --type ts # Search authentication-related code
find . -name "*test*" | head # Find test files
cat CLAUDE.md # Read project specifications
Dependency Task Summary References:
# Reference completed task summary documents
cat .workflow/WFS-[session-id]/.summaries/IMPL-1.1-summary.md
# Batch read dependency task summaries
for dep in IMPL-1.1 IMPL-1.2; do
cat .workflow/WFS-[session-id]/.summaries/${dep}-summary.md 2>/dev/null
done
Combined Analysis:
rg "interface.*Auth" | head -20 # Find interface definitions
find src -name "*.ts" | xargs rg -l "login" # Find related files
gemini-wrapper -p "analyze existing auth patterns" # Deep analysis
Flexibility Principles:
- Combine commands based on task requirements, don't mechanically copy examples
- Simple tasks may only need 1-2 search commands
- Complex tasks can design multi-step context accumulation
- Take full advantage of pipeline command composition
- Reference prerequisite task summaries as context where relevant
Error Handling Strategies
- skip_optional: Continue execution, step result is empty
- fail: Stop execution, mark task as failed
- retry_once: Retry step once, then fail if still unsuccessful
- manual_intervention: Pause execution for manual review
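A sketch of how these strategies attach to individual steps; the file paths and step names are illustrative:

```json
"pre_analysis": [
  {
    "step": "load_optional_notes",
    "action": "Read design notes if present",
    "command": "bash(cat docs/auth-design.md)",
    "output_to": "design_notes",
    "on_error": "skip_optional"
  },
  {
    "step": "read_project_rules",
    "action": "Read project specifications",
    "command": "bash(cat CLAUDE.md)",
    "output_to": "project_rules",
    "on_error": "retry_once"
  }
]
```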
Flexible Design Examples
Simple Tasks (1-2 step context gathering):
"pre_analysis": [
{
"step": "check_patterns",
"action": "Search existing patterns",
"command": "bash(rg 'login|auth' src/ | head -10)"
}
]
Medium Tasks (Multi-step context accumulation):
"pre_analysis": [
{
"step": "load_dependencies",
"action": "Read dependency task summaries",
"command": "bash(cat .workflow/WFS-[session-id]/.summaries/IMPL-1.1-summary.md)",
"output_to": "dependency_context"
},
{
"step": "read_docs",
"action": "Read project specifications",
"command": "bash(cat CLAUDE.md)",
"output_to": "project_rules"
},
{
"step": "find_similar",
"action": "Find similar implementations",
"command": "bash(rg 'class.*Service' src/)",
"output_to": "existing_services"
}
]
Important: These are reference patterns only; design flexibly based on the actual task requirements rather than copying them mechanically.
implementation_approach - Implementation Method Definition
Core Purpose: Describe the concrete implementation strategy and methods, based on the context gathered during pre_analysis
Four Design Principles:
- Context Driven: Fully utilize all context information collected during pre_analysis phase
- Pattern Following: Implement based on discovered existing code patterns and project conventions
- Precise Targeting: Clearly specify files, functions and code locations that need modification
- Clear Logic: Describe clear implementation flow and business logic
Field Descriptions:
- task_description: Comprehensive description, referencing context variables and inherited information
- modification_points: Specific code modification points, precise to files and functions
- logic_flow: Business logic flow, describing implementation execution sequence
- target_files: Target file list, in `file:function:lines` format
Benefits of Flow Control
- Universal Process Manager: Handles any type of analysis or implementation flow
- Context Accumulation: Builds comprehensive context through step chain
- Error Recovery: Granular error handling at step level
- Command Flexibility: Supports any executable command or agent
- Dependency Integration: Automatic loading of prerequisite task results
File Structure
Unified File Structure
All workflows use the same file structure definition regardless of complexity. Directories and files are created on-demand as needed, not all at once during initialization.
Complete Structure Reference
.workflow/WFS-[topic-slug]/
├── workflow-session.json # Session metadata and state (REQUIRED)
├── [.brainstorming/] # Optional brainstorming phase (created when needed)
├── [.chat/] # CLI interaction sessions (created when analysis is run)
│ ├── chat-*.md # Saved chat sessions
│ └── analysis-*.md # Analysis results
├── IMPL_PLAN.md # Planning document (REQUIRED)
├── TODO_LIST.md # Progress tracking (REQUIRED)
├── [.summaries/] # Task completion summaries (created when tasks complete)
│ ├── IMPL-*.md # Main task summaries
│ └── IMPL-*.*.md # Subtask summaries
└── .task/ # Task definitions (REQUIRED)
├── IMPL-*.json # Main task definitions
└── IMPL-*.*.json # Subtask definitions (created dynamically)
Creation Strategy
- Initial Setup: Create only `workflow-session.json`, `IMPL_PLAN.md`, `TODO_LIST.md`, and the `.task/` directory
- On-Demand Creation: Other directories created when first needed:
  - `.brainstorming/` → When the brainstorming phase is initiated
  - `.chat/` → When CLI analysis commands are executed
  - `.summaries/` → When the first task is completed
- Dynamic Files: Subtask JSON files created during task decomposition
File Naming Conventions
Session Identifiers
Format: WFS-[topic-slug]
WFS Prefix Meaning:
- `WFS` = WorkFlow Session
- Identifies directories as workflow session containers
- Distinguishes workflow sessions from other project directories
Naming Rules:
- Convert the topic to lowercase with hyphens (e.g., "User Auth System" → `WFS-user-auth-system`)
- Add an `-NNN` suffix only if conflicts exist (e.g., `WFS-payment-integration-002`)
- Maximum length: 50 characters including the `WFS-` prefix
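A minimal sketch of deriving a session identifier under these rules; the `make_session_id` helper name is illustrative and conflict suffixing is not handled:

```bash
# Derive a WFS session id from a free-form topic (illustrative sketch)
make_session_id() {
  local topic="$1"
  local slug
  # lowercase, replace non-alphanumerics with hyphens, trim leading/trailing hyphens
  slug=$(echo "$topic" | tr '[:upper:]' '[:lower:]' | sed -E 's/[^a-z0-9]+/-/g; s/^-+//; s/-+$//')
  local id="WFS-${slug}"
  # enforce the 50-character limit including the WFS- prefix
  echo "${id:0:50}"
}

make_session_id "User Auth System"   # → WFS-user-auth-system
```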
Document Naming
- `workflow-session.json` - Session state (required)
- `IMPL_PLAN.md` - Planning document (required)
- `TODO_LIST.md` - Progress tracking (auto-generated when needed)
- Chat sessions: `chat-analysis-*.md`
- Task summaries: `IMPL-[task-id]-summary.md`
Document Templates
IMPL_PLAN.md Template
Generated based on task complexity and requirements. Contains overview, requirements, and task structure.
Notes for Future Tasks: [Any important considerations, limitations, or follow-up items]
Summary Document Purpose:
- Context Inheritance: Provides structured context for dependent tasks
- Integration Guidance: Offers clear integration points and usage instructions
- Quality Assurance: Documents testing and validation performed
- Decision History: Preserves rationale for implementation choices
- Dependency Chain: Enables automatic context accumulation through task dependencies
TODO_LIST.md Template
# Tasks: [Session Topic]
## Task Progress
▸ **IMPL-001**: [Main Task Group] → [📋](./.task/IMPL-001.json)
- [ ] **IMPL-001.1**: [Subtask] → [📋](./.task/IMPL-001.1.json)
- [x] **IMPL-001.2**: [Subtask] → [📋](./.task/IMPL-001.2.json) | [✅](./.summaries/IMPL-001.2.md)
- [x] **IMPL-002**: [Simple Task] → [📋](./.task/IMPL-002.json) | [✅](./.summaries/IMPL-002.md)
▸ **IMPL-003**: [Main Task Group] → [📋](./.task/IMPL-003.json)
- [ ] **IMPL-003.1**: [Subtask] → [📋](./.task/IMPL-003.1.json)
- [ ] **IMPL-003.2**: [Subtask] → [📋](./.task/IMPL-003.2.json)
## Status Legend
- `▸` = Container task (has subtasks)
- `- [ ]` = Pending leaf task
- `- [x]` = Completed leaf task
- Maximum 2 levels: Main tasks and subtasks only
## Notes
[Optional notes]
Agent Integration
Agent Assignment
Based on task type and title keywords:
- Planning tasks → planning-agent
- Implementation → code-developer
- Testing → code-review-test-agent
- Review → review-agent
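A sketch of how this assignment could be resolved from a task's `meta.type` and title; the `assign_agent` helper and the exact keyword-to-agent mapping are illustrative assumptions:

```bash
# Resolve the executing agent from a task JSON file (illustrative sketch)
assign_agent() {
  local task_file="$1"
  local type title
  type=$(jq -r '.meta.type' "$task_file")
  title=$(jq -r '.title' "$task_file")
  # Title keywords take precedence over the declared type
  case "$title" in
    *[Rr]eview*) echo "review-agent"; return ;;
    *[Pp]lan*)   echo "planning-agent"; return ;;
  esac
  case "$type" in
    test)                         echo "code-review-test-agent" ;;
    feature|bugfix|refactor|docs) echo "code-developer" ;;
    *)                            echo "planning-agent" ;;
  esac
}

assign_agent .task/IMPL-1.json
```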
Execution Context
Agents receive complete task JSON plus workflow context:
{
"task": { /* complete task JSON */ },
"workflow": {
"session": "WFS-user-auth",
"phase": "IMPLEMENT"
}
}
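A sketch of how this payload could be assembled with `jq` before handing off to an agent; the output path and phase value are illustrative:

```bash
# Wrap a task definition with workflow context for agent hand-off (illustrative sketch)
session="WFS-user-auth"
task_file=".workflow/${session}/.task/IMPL-1.json"

jq -n \
  --slurpfile task "$task_file" \
  --arg session "$session" \
  --arg phase "IMPLEMENT" \
  '{task: ($task[0]), workflow: {session: $session, phase: $phase}}' \
  > /tmp/agent-context.json
```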
Data Operations
Session Initialization
# Create minimal required structure
mkdir -p .workflow/WFS-topic-slug/.task
echo '{"session_id":"WFS-topic-slug",...}' > .workflow/WFS-topic-slug/workflow-session.json
echo '# Implementation Plan' > .workflow/WFS-topic-slug/IMPL_PLAN.md
echo '# Tasks' > .workflow/WFS-topic-slug/TODO_LIST.md
Task Creation
echo '{"id":"IMPL-1","title":"New task",...}' > .task/IMPL-1.json
Directory Creation (On-Demand)
# Create directories only when needed
mkdir -p .brainstorming # When brainstorming is initiated
mkdir -p .chat # When analysis commands are run
mkdir -p .summaries # When first task completes
Task Updates
jq '.status = "active"' .task/IMPL-1.json > temp && mv temp .task/IMPL-1.json
Document Generation
# Generate TODO_LIST.md from current JSON state
generate_todo_list_from_json .task/
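A minimal sketch of what such a generator could look like; it ignores subtask indentation and ordering, and the real implementation may differ:

```bash
# Regenerate TODO_LIST.md from task JSON files (illustrative sketch)
generate_todo_list_from_json() {
  local task_dir="$1"
  {
    echo "# Tasks"
    echo
    for f in "$task_dir"/IMPL-*.json; do
      local id title status
      id=$(jq -r '.id' "$f")
      title=$(jq -r '.title' "$f")
      status=$(jq -r '.status' "$f")
      case "$status" in
        container) echo "▸ **${id}**: ${title} → [📋](./.task/${id}.json)" ;;
        completed) echo "- [x] **${id}**: ${title} → [📋](./.task/${id}.json)" ;;
        *)         echo "- [ ] **${id}**: ${title} → [📋](./.task/${id}.json)" ;;
      esac
    done
  } > TODO_LIST.md
}

generate_todo_list_from_json .task/
```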
Complexity Classification
Task Complexity Rules
Complexity is determined by task count and decomposition needs:
| Complexity | Task Count | Hierarchy Depth | Decomposition Behavior |
|---|---|---|---|
| Simple | <5 tasks | 1 level (IMPL-N) | Direct execution, minimal decomposition |
| Medium | 5-15 tasks | 2 levels (IMPL-N.M) | Moderate decomposition, context coordination |
| Complex | >15 tasks | 2 levels (IMPL-N.M) | Frequent decomposition, multi-agent orchestration |
Simple Workflows
Characteristics: Direct implementation tasks with clear, limited scope
- Examples: Bug fixes, small feature additions, configuration changes
- Task Decomposition: Usually single-level tasks, minimal breakdown needed
- Agent Coordination: Direct execution without complex orchestration
Medium Workflows
Characteristics: Feature implementation requiring moderate task breakdown
- Examples: New features, API endpoints with integration, database schema changes
- Task Decomposition: Two-level hierarchy when decomposition is needed
- Agent Coordination: Context coordination between related tasks
Complex Workflows
Characteristics: System-wide changes requiring detailed decomposition
- Examples: Major features, architecture refactoring, security implementations, multi-service deployments
- Task Decomposition: Frequent use of two-level hierarchy with dynamic subtask creation
- Agent Coordination: Multi-agent orchestration with deep context analysis
Automatic Assessment & Upgrades
- During Creation: System evaluates requirements and assigns complexity
- During Execution: Can upgrade (Simple→Medium→Complex) but never downgrade
- Override Allowed: Users can specify higher complexity manually
Validation and Error Handling
Task Integrity Rules
- ID Uniqueness: All task IDs must be unique
- Hierarchical Format: Must follow IMPL-N[.M] pattern (maximum 2 levels)
- Parent References: All parent IDs must exist as JSON files
- Depth Limits: Maximum 2 levels deep
- Status Consistency: Status values from defined enumeration
- Required Fields: All 6 core fields must be present (id, title, status, meta, context, flow_control)
- Focus Paths Structure: context.focus_paths array must contain valid project paths
- Flow Control Format: flow_control.pre_analysis must be array with step, action, command fields
- Dependency Integrity: All context.depends_on task IDs must exist as JSON files
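A sketch of how a few of these rules could be spot-checked against a session's `.task/` directory; the `validate_tasks` helper and its messages are illustrative:

```bash
# Spot-check task integrity rules inside a session's .task/ directory (illustrative sketch)
validate_tasks() {
  local task_dir="$1"
  # ID uniqueness
  jq -r '.id' "$task_dir"/IMPL-*.json | sort | uniq -d | while read -r dup; do
    echo "✗ duplicate task id: $dup"
  done
  for f in "$task_dir"/IMPL-*.json; do
    local id parent
    id=$(jq -r '.id' "$f")
    # Hierarchical format: IMPL-N or IMPL-N.M only
    [[ "$id" =~ ^IMPL-[0-9]+(\.[0-9]+)?$ ]] || echo "✗ $id: invalid id format"
    # Parent and dependency references must exist as JSON files
    parent=$(jq -r '.context.parent // empty' "$f")
    [ -n "$parent" ] && [ ! -f "$task_dir/$parent.json" ] && echo "✗ $id: missing parent $parent"
    for dep in $(jq -r '.context.depends_on[]?' "$f"); do
      [ -f "$task_dir/$dep.json" ] || echo "✗ $id: missing dependency $dep"
    done
  done
}

validate_tasks .workflow/WFS-user-auth/.task
```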
Session Consistency Checks
# Validate active session integrity
active_marker=$(find .workflow -name ".active-*" | head -1)
if [ -n "$active_marker" ]; then
session_name=$(basename "$active_marker" | sed 's/^\.active-//')
session_dir=".workflow/$session_name"
if [ ! -d "$session_dir" ]; then
echo "⚠️ Orphaned active marker, removing..."
rm "$active_marker"
fi
fi
Recovery Strategies
- Missing Session Directory: Remove orphaned active marker
- Multiple Active Markers: Keep newest, remove others
- Corrupted Session File: Recreate from template
- Broken Task Hierarchy: Reconstruct parent-child relationships
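For the multiple-marker case, a sketch of "keep newest, remove others"; it assumes GNU find and is illustrative only:

```bash
# Keep only the most recently created active marker (illustrative sketch, GNU find)
markers=$(find .workflow -maxdepth 1 -name ".active-*" -printf '%T@ %p\n' | sort -rn | awk '{print $2}')
if [ -n "$markers" ]; then
  newest=$(echo "$markers" | head -1)
  echo "$markers" | tail -n +2 | while read -r stale; do
    echo "⚠️ Removing stale active marker: $stale"
    rm -f "$stale"
  done
  echo "Active session: $(basename "$newest" | sed 's/^\.active-//')"
fi
```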