Comprehensive update to eliminate all old path references and implement transactional archival in session:complete.
## Part 1: Complete Path Migration (43 files)
### Files Updated
- action-plan-verify.md
- session/list.md, session/resume.md
- tdd-verify.md, tdd-plan.md, test-*.md
- All brainstorm/*.md files (13 files)
- All tools/*.md files (10 files)
- All ui-design/*.md files (10 files)
- lite-plan.md, lite-execute.md, plan.md
### Path Changes
- `.workflow/.active-*` → `find .workflow/sessions/ -name "WFS-*" -type d`
- `.workflow/WFS-{session}` → `.workflow/sessions/WFS-{session}`
- `.workflow/.archives/` → `.workflow/archives/`
- Removed all marker file operations (touch/rm .active-*)
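
In practice this means commands discover the active session by scanning the sessions directory rather than reading a marker file. A minimal sketch of the new lookup, assuming at most one active `WFS-*` directory exists at a time (the `-maxdepth 1` and `head -n 1` guards are illustrative additions, not part of the migrated commands):

```bash
# Old (removed): the active session was recorded in a .active-* marker file.
# New: the active session is whichever WFS-* directory still lives under
# .workflow/sessions/ (completed sessions move to .workflow/archives/).
session_dir=$(find .workflow/sessions/ -maxdepth 1 -name "WFS-*" -type d | head -n 1)

if [ -z "$session_dir" ]; then
    echo "No active session found" >&2
    exit 1
fi

session_id=$(basename "$session_dir")
```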
### Verification
- ✅ 0 references to .active-* markers remain
- ✅ 0 references to direct .workflow/WFS-* paths
- ✅ 0 references to .workflow/.archives/ (dot prefix)
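
These counts are reproducible with a recursive grep; a sketch, assuming the command files live under a `commands/` directory (the actual layout may differ):

```bash
# Counts files that still contain each legacy pattern; all three should print 0.
grep -rl '\.active-' commands/ | wc -l
grep -rl '\.workflow/WFS-' commands/ | wc -l
grep -rl '\.workflow/\.archives/' commands/ | wc -l
```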
## Part 2: Transactional Archival Enhancement
### session/complete.md - Complete Rewrite
**New Four-Phase Architecture**:
1. **Phase 1: Pre-Archival Preparation**
- Find active session in .workflow/sessions/
- Check for existing .archiving marker (resume detection)
- Create .archiving marker to prevent concurrent operations
2. **Phase 2: Agent Analysis (In-Place)**
- Agent processes session WHILE STILL in sessions/ directory
- Pure analysis - no file moves or manifest updates
- Returns complete metadata package for atomic commit
- Failure here → session remains active, safe to retry
3. **Phase 3: Atomic Commit**
- Only executes if Phase 2 succeeds
- Move session to archives/
- Update manifest.json
- Remove .archiving marker
- All-or-nothing guarantee
4. **Phase 4: Project Registry Update**
- Extract feature metadata and update project.json
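
A condensed shell sketch of how the four phases compose; `agent_analyze`, `update_manifest`, and `update_project_registry` are hypothetical helpers standing in for the delegated steps, and the manifest and registry paths are likewise assumptions:

```bash
# Phase 1: locate the active session and claim it with an .archiving marker.
session_dir=$(find .workflow/sessions/ -maxdepth 1 -name "WFS-*" -type d | head -n 1)
[ -n "$session_dir" ] || { echo "no active session" >&2; exit 1; }
[ -f "$session_dir/.archiving" ] && echo "resuming interrupted archival"
touch "$session_dir/.archiving"

# Phase 2: analyze in place. A failure here leaves the session untouched
# in sessions/, so the operation is safe to retry.
metadata=$(agent_analyze "$session_dir") || exit 1             # hypothetical

# Phase 3: commit - move, record, clear marker. Only reached on success.
mv "$session_dir" .workflow/archives/
archived=".workflow/archives/$(basename "$session_dir")"
update_manifest .workflow/archives/manifest.json "$metadata"   # hypothetical
rm "$archived/.archiving"

# Phase 4: propagate feature metadata to the project registry.
update_project_registry project.json "$metadata"               # hypothetical
```

Note that the move/manifest/marker sequence only approximates an atomic commit at the filesystem level; the surviving `.archiving` marker is what lets a resumed run detect and finish a partially committed archive.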
**Key Improvements**:
- **Agent-first, move-second**: Prevents inconsistent states
- **Transactional guarantees**: All-or-nothing file operations
- **Resume capability**: .archiving marker enables safe retry
- **Error recovery**: Comprehensive failure handling documented
**Old Design Flaw (Fixed)**:
- Old: Move first → Agent processes → If agent fails, session moved but metadata incomplete
- New: Agent processes → If success, atomic commit → Guaranteed consistency
## Benefits
1. **Consistency**: All commands use identical path patterns
2. **Robustness**: Transactional archival prevents data loss
3. **Maintainability**: Single source of truth for directory structure
4. **Recoverability**: Failed operations can be safely retried
5. **Alignment**: Closer to OpenSpec's clean three-layer model
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
| name | description | argument-hint | allowed-tools |
|---|---|---|---|
| data-architect | Generate or update data-architect/analysis.md addressing guidance-specification discussion points from the data architecture perspective | optional topic - uses existing framework if available | Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*) |
# 📊 Data Architect Analysis Generator

## Purpose

Specialized command for generating data-architect/analysis.md that addresses guidance-specification.md discussion points from the data architecture perspective. Creates or updates the role-specific analysis with framework references.
## Core Function
- Framework-based Analysis: Address each discussion point in guidance-specification.md
- Data Architecture Focus: Data models, pipelines, governance, and analytics perspective
- Update Mechanism: Create new or update existing analysis.md
- Agent Delegation: Use conceptual-planning-agent for analysis generation
## Analysis Scope
- Data Model Design: Efficient and scalable data models and schemas
- Data Flow Design: Data collection, processing, and storage workflows
- Data Quality Management: Data accuracy, completeness, and consistency
- Analytics and Insights: Data analysis and business intelligence solutions
## Role Boundaries & Responsibilities

### What This Role OWNS (Canonical Data Model - Source of Truth)
- Canonical Data Model: The authoritative, system-wide data schema representing domain entities and relationships
- Entity-Relationship Design: Defining entities, attributes, relationships, and constraints
- Data Normalization & Optimization: Ensuring data integrity, reducing redundancy, and optimizing storage
- Database Schema Design: Physical database structures, indexes, partitioning strategies
- Data Pipeline Architecture: ETL/ELT processes, data warehousing, and analytics pipelines
- Data Governance: Data quality standards, retention policies, and compliance requirements
### What This Role DOES NOT Own (Defers to Other Roles)
- API Data Contracts: Public-facing request/response schemas exposed by APIs → Defers to API Designer
- System Integration Patterns: How services communicate at the macro level → Defers to System Architect
- UI Data Presentation: How data is displayed to users → Defers to UI Designer
### Handoff Points
- TO API Designer: Provides canonical data model that API Designer translates into public API data contracts (as projection/view)
- TO System Architect: Provides data flow requirements and storage constraints to inform system design
- FROM System Architect: Receives system-level integration requirements and scalability constraints
## ⚙️ Execution Protocol
### Phase 1: Session & Framework Detection

```
# Check active session and framework
CHECK: find .workflow/sessions/ -name "WFS-*" -type d
IF active_session EXISTS:
    session_id = get_active_session()
    brainstorm_dir = .workflow/sessions/WFS-{session}/.brainstorming/
    CHECK: brainstorm_dir/guidance-specification.md
    IF EXISTS:
        framework_mode = true
        load_framework = true
    ELSE:
        IF topic_provided:
            framework_mode = false  # Create analysis without framework
        ELSE:
            ERROR: "No framework found and no topic provided"
```
### Phase 2: Analysis Mode Detection

```
# Determine execution mode
IF framework_mode == true:
    mode = "framework_based_analysis"
    topic_ref = load_framework_topic()
    discussion_points = extract_framework_points()
ELSE:
    mode = "standalone_analysis"
    topic_ref = provided_topic
    discussion_points = generate_basic_structure()
```
### Phase 3: Agent Execution with Flow Control

#### Framework-Based Analysis Generation

```
Task(conceptual-planning-agent): "
[FLOW_CONTROL]
Execute data-architect analysis for existing topic framework

## Context Loading
ASSIGNED_ROLE: data-architect
OUTPUT_LOCATION: .workflow/sessions/WFS-{session}/.brainstorming/data-architect/
ANALYSIS_MODE: {framework_mode ? "framework_based" : "standalone"}

## Flow Control Steps
1. **load_topic_framework**
   - Action: Load structured topic discussion framework
   - Command: Read(.workflow/sessions/WFS-{session}/.brainstorming/guidance-specification.md)
   - Output: topic_framework_content
2. **load_role_template**
   - Action: Load data-architect planning template
   - Command: bash($(cat ~/.claude/workflows/cli-templates/planning-roles/data-architect.md))
   - Output: role_template_guidelines
3. **load_session_metadata**
   - Action: Load session metadata and existing context
   - Command: Read(.workflow/sessions/WFS-{session}/workflow-session.json)
   - Output: session_context

## Analysis Requirements
**Framework Reference**: Address all discussion points in guidance-specification.md from the data architecture perspective
**Role Focus**: Data models, pipelines, governance, analytics platforms
**Structured Approach**: Create analysis.md addressing framework discussion points
**Template Integration**: Apply role template guidelines within framework structure

## Expected Deliverables
1. **analysis.md**: Comprehensive data architecture analysis addressing all framework discussion points
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis

## Completion Criteria
- Address each discussion point from guidance-specification.md with data architecture expertise
- Provide data model designs, pipeline architectures, and governance strategies
- Include scalability, performance, and quality considerations
- Reference framework document using @ notation for integration
"
```
## 📋 TodoWrite Integration

### Workflow Progress Tracking

```javascript
TodoWrite({
  todos: [
    {
      content: "Detect active session and locate topic framework",
      status: "in_progress",
      activeForm: "Detecting session and framework"
    },
    {
      content: "Load guidance-specification.md and session metadata for context",
      status: "pending",
      activeForm: "Loading framework and session context"
    },
    {
      content: "Execute data-architect analysis using conceptual-planning-agent with FLOW_CONTROL",
      status: "pending",
      activeForm: "Executing data-architect framework analysis"
    },
    {
      content: "Generate analysis.md addressing all framework discussion points",
      status: "pending",
      activeForm: "Generating structured data-architect analysis"
    },
    {
      content: "Update workflow-session.json with data-architect completion status",
      status: "pending",
      activeForm: "Updating session metadata"
    }
  ]
});
```
## 📊 Output Structure

### Framework-Based Analysis

```
.workflow/sessions/WFS-{session}/.brainstorming/data-architect/
└── analysis.md   # Structured analysis addressing guidance-specification.md discussion points
```
### Analysis Document Structure

```markdown
# Data Architect Analysis: [Topic from Framework]

## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: Data Architecture perspective

## Discussion Points Analysis
[Address each point from guidance-specification.md with data architecture expertise]

### Core Requirements (from framework)
[Data architecture perspective on requirements]

### Technical Considerations (from framework)
[Data model, pipeline, and storage considerations]

### User Experience Factors (from framework)
[Data access patterns and analytics user experience]

### Implementation Challenges (from framework)
[Data migration, quality, and governance challenges]

### Success Metrics (from framework)
[Data quality metrics and analytics success criteria]

## Data Architecture Specific Recommendations
[Role-specific data architecture recommendations and solutions]

---
*Generated by data-architect analysis addressing structured framework*
```
## 🔄 Session Integration

### Completion Status Update

```json
{
  "data_architect": {
    "status": "completed",
    "framework_addressed": true,
    "output_location": ".workflow/sessions/WFS-{session}/.brainstorming/data-architect/analysis.md",
    "framework_reference": "@../guidance-specification.md"
  }
}
```
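
The merge into workflow-session.json can be done with any JSON tool; a hypothetical `jq` invocation:

```bash
session_file=".workflow/sessions/WFS-${session_id}/workflow-session.json"

# Write the data_architect completion block, replacing the file atomically.
jq '.data_architect = {
      status: "completed",
      framework_addressed: true,
      output_location: ".workflow/sessions/WFS-{session}/.brainstorming/data-architect/analysis.md",
      framework_reference: "@../guidance-specification.md"
    }' "$session_file" > "$session_file.tmp" && mv "$session_file.tmp" "$session_file"
```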
### Integration Points
- Framework Reference: @../guidance-specification.md for structured discussion points
- Cross-Role Synthesis: Data architecture insights available for synthesis-report.md integration
- Agent Autonomy: Independent execution with framework guidance