cleanup: Remove unused cli-templates/commands directory and outdated files

- Deleted unused command template files (context-analysis.md, folder-analysis.md, parallel-execution.md)
- Removed outdated WORKFLOW_SYSTEM_UPGRADE.md and codexcli.md files
- No references found in current workspace
- Streamlined template structure

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
catlog22
2025-09-10 23:05:22 +08:00
parent 7ea75d102f
commit 6a7b187587
6 changed files with 2 additions and 1241 deletions

View File

@@ -10,19 +10,10 @@ description: Core coordination principles for multi-agent development workflows
### TodoWrite Coordination Rules
1. **TodoWrite FIRST**: Always create TodoWrite entries *before* agent execution begins.
2. **Real-time Updates**: Status must be marked `in_progress` or `completed` as work happens.
1. **TodoWrite FIRST**: Always create TodoWrite entries *before* complex task execution begins.
3. **Agent Coordination**: Each agent is responsible for updating the status of its assigned todo.
4. **Progress Visibility**: Provides clear workflow state visibility to stakeholders.
5. **Single Active**: Only one todo should be `in_progress` at any given time.
6. **Checkpoint Safety**: State is saved automatically after each agent completes its work.
7. **Interrupt/Resume**: The system must support full state preservation and restoration.
## Context Management
### Gemini Context Protocol
For all Gemini CLI usage, command syntax, and integration guidelines:
@~/.claude/workflows/gemini-unified.md
For all Codex CLI usage, command syntax, and integration guidelines:
@~/.claude/workflows/codex-unified.md

View File

@@ -1,269 +0,0 @@
# Context Analysis Command Templates
**Complete command examples for context acquisition**
## Full Project Context Acquisition
### Basic Project Context
```bash
# Acquire the full project context
cd /project/root && gemini --all-files -p "@{CLAUDE.md,**/*CLAUDE.md}
Extract comprehensive project context for agent coordination:
1. Implementation patterns and coding standards
2. Available utilities and shared libraries
3. Architecture decisions and design principles
4. Integration points and module dependencies
5. Testing strategies and quality standards
Output: Context package with patterns, utilities, standards, integration points"
```
### Tech-Stack-Specific Context
```bash
# React project context
cd /project/root && gemini --all-files -p "@{src/components/**/*,src/hooks/**/*} @{CLAUDE.md}
React application context analysis:
1. Component patterns and composition strategies
2. Hook usage patterns and state management
3. Styling approaches and design system
4. Testing patterns and coverage strategies
5. Performance optimization techniques
Output: React development context with specific patterns"
# Node.js API context
cd /project/root && gemini --all-files -p "@{**/api/**/*,**/routes/**/*,**/services/**/*} @{CLAUDE.md}
Node.js API context analysis:
1. Route organization and endpoint patterns
2. Middleware usage and request handling
3. Service layer architecture and patterns
4. Database integration and data access
5. Error handling and validation strategies
Output: API development context with integration patterns"
```
## Domain-Specific Context
### Authentication System Context
```bash
# Authentication and security context
gemini -p "@{**/*auth*,**/*login*,**/*session*,**/*security*} @{CLAUDE.md}
Authentication and security context analysis:
1. Authentication mechanisms and flow patterns
2. Authorization and permission management
3. Session management and token handling
4. Security middleware and protection layers
5. Encryption and data protection methods
Output: Security implementation context with patterns"
```
### Data Layer Context
```bash
# Database and model context
gemini -p "@{**/models/**/*,**/db/**/*,**/migrations/**/*} @{CLAUDE.md}
Database and data layer context analysis:
1. Data model patterns and relationships
2. Query patterns and optimization strategies
3. Migration patterns and schema evolution
4. Database connection and transaction handling
5. Data validation and integrity patterns
Output: Data layer context with implementation patterns"
```
## Parallel Context Acquisition
### Multi-Layer Parallel Analysis
```bash
# Acquire context in parallel, per architecture layer
(
cd src/frontend && gemini --all-files -p "@{CLAUDE.md} Frontend layer context analysis" &
cd src/backend && gemini --all-files -p "@{CLAUDE.md} Backend layer context analysis" &
cd src/database && gemini --all-files -p "@{CLAUDE.md} Data layer context analysis" &
wait
)
```
### Cross-Domain Parallel Analysis
```bash
# Acquire context in parallel, per functional domain
(
gemini -p "@{**/*auth*,**/*login*} @{CLAUDE.md} Authentication context" &
gemini -p "@{**/api/**/*,**/routes/**/*} @{CLAUDE.md} API endpoint context" &
gemini -p "@{**/components/**/*,**/ui/**/*} @{CLAUDE.md} UI component context" &
gemini -p "@{**/*.test.*,**/*.spec.*} @{CLAUDE.md} Testing strategy context" &
wait
)
```
## Template Inclusion Examples
### Using Prompt Templates
```bash
# Basic template inclusion
gemini -p "@{src/**/*} $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)"
# Combine multiple templates (heredoc delimiter left unquoted so the nested substitutions expand)
gemini -p "@{src/**/*} $(cat <<EOF
$(cat ~/.claude/workflows/cli-templates/prompts/analysis/architecture.txt)
Additional focus:
$(cat ~/.claude/workflows/cli-templates/prompts/analysis/quality.txt)
EOF
)"
```
### Conditional Template Selection
```bash
# Select a template dynamically based on project characteristics
if [ -f "package.json" ] && grep -q "react" package.json; then
TEMPLATE="~/.claude/workflows/cli-templates/prompts/tech/react-component.txt"
elif [ -f "requirements.txt" ]; then
TEMPLATE="~/.claude/workflows/cli-templates/prompts/tech/python-api.txt"
else
TEMPLATE="~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt"
fi
gemini -p "@{src/**/*} @{CLAUDE.md} $(cat $TEMPLATE)"
```
## Error Handling and Fallbacks
### Context Acquisition with Fallback
```bash
# Smart fallback strategy
get_context_with_fallback() {
    local target_dir="$1"
    local analysis_type="${2:-general}"
    # Strategy 1: directory navigation + --all-files
    if cd "$target_dir" 2>/dev/null; then
        echo "Using directory navigation approach..."
        if gemini --all-files -p "@{CLAUDE.md} $analysis_type context analysis"; then
            cd - > /dev/null
            return 0
        fi
        cd - > /dev/null
    fi
    # Strategy 2: file pattern matching
    echo "Fallback to pattern matching..."
    if gemini -p "@{$target_dir/**/*} @{CLAUDE.md} $analysis_type context analysis"; then
        return 0
    fi
    # Strategy 3: simplest generic pattern
    echo "Using generic fallback..."
    gemini -p "@{**/*} @{CLAUDE.md} $analysis_type context analysis"
}
# Usage example
get_context_with_fallback "src/components" "component"
```
### Resource-Aware Execution
```bash
# Detect system resources and adjust the execution strategy
smart_context_analysis() {
    local estimated_files
    estimated_files=$(find . -type f \( -name "*.js" -o -name "*.ts" -o -name "*.jsx" -o -name "*.tsx" \) | wc -l)
    if [ "$estimated_files" -gt 1000 ]; then
        echo "Large codebase detected ($estimated_files files). Using focused analysis..."
        # Chunked execution
        gemini -p "@{src/components/**/*.{jsx,tsx}} @{CLAUDE.md} Component patterns" &
        gemini -p "@{src/services/**/*.{js,ts}} @{CLAUDE.md} Service patterns" &
        gemini -p "@{src/utils/**/*.{js,ts}} @{CLAUDE.md} Utility patterns" &
        wait
    else
        echo "Standard analysis for manageable codebase..."
        cd /project/root && gemini --all-files -p "@{CLAUDE.md} Comprehensive context analysis"
    fi
}
```
## Result Processing and Integration
### Parsing Context Results
```bash
# Parse and structure context results
parse_context_results() {
    local results_file="$1"
    echo "## Context Analysis Summary"
    echo "Generated: $(date)"
    echo ""
    # Extract key patterns
    echo "### Key Patterns Found:"
    grep -E "Pattern:|pattern:" "$results_file" | sed 's/^/- /'
    echo ""
    # Extract utilities and libraries
    echo "### Available Utilities:"
    grep -E "Utility:|utility:|Library:|library:" "$results_file" | sed 's/^/- /'
    echo ""
    # Extract integration points
    echo "### Integration Points:"
    grep -E "Integration:|integration:|API:|api:" "$results_file" | sed 's/^/- /'
    echo ""
}
```
### Context Caching
```bash
# Cache context results for reuse (results are read from stdin)
cache_context_results() {
    local project_signature="$(pwd | md5sum | cut -d' ' -f1)"
    local cache_dir="$HOME/.cache/gemini-context"
    local cache_file="$cache_dir/$project_signature.context"
    mkdir -p "$cache_dir"
    echo "# Context Cache - $(date)" > "$cache_file"
    echo "# Project: $(pwd)" >> "$cache_file"
    echo "" >> "$cache_file"
    # Append the piped-in context results, e.g.: gemini -p "..." | cache_context_results
    cat >> "$cache_file"
}
```
## Performance Optimization Examples
### Memory-Optimized Execution
```bash
# Memory-aware context acquisition
memory_optimized_context() {
    local available_memory
    # Memory detection on Linux
    if command -v free >/dev/null 2>&1; then
        available_memory=$(free -m | awk 'NR==2{print $7}')
        if [ "$available_memory" -lt 1000 ]; then
            echo "Low memory mode: Using selective patterns"
            # Analyze key files only
            gemini -p "@{src/**/*.{js,ts,jsx,tsx}} @{CLAUDE.md} Core patterns only" --timeout=30
        else
            echo "Standard memory mode: Full analysis"
            cd /project/root && gemini --all-files -p "@{CLAUDE.md} Complete context analysis"
        fi
    else
        echo "Memory detection unavailable, using standard mode"
        cd /project/root && gemini --all-files -p "@{CLAUDE.md} Standard context analysis"
    fi
}
```
These command templates provide complete, directly executable examples for context acquisition, covering projects of varying types, sizes, and complexity.

View File

@@ -1,410 +0,0 @@
# Folder-Specific Analysis Command Templates
**Complete analysis command examples for specific folders**
## Component Folder Analysis
### React Component Analysis
```bash
# Standard React components directory analysis
cd src/components && gemini --all-files -p "@{CLAUDE.md}
React components architecture analysis:
1. Component composition patterns and prop design
2. State management strategies (local state vs context vs external)
3. Styling approaches and CSS-in-JS usage patterns
4. Testing strategies and component coverage
5. Performance optimization patterns (memoization, lazy loading)
Output: Component development guidelines with specific patterns and best practices"
# Component analysis with fallback
analyze_components() {
    if [ -d "src/components" ]; then
        cd src/components && gemini --all-files -p "@{CLAUDE.md} Component analysis"
    elif [ -d "components" ]; then
        cd components && gemini --all-files -p "@{CLAUDE.md} Component analysis"
    else
        gemini -p "@{**/components/**/*,**/ui/**/*} @{CLAUDE.md} Component analysis"
    fi
}
```
### Vue Component Analysis
```bash
# Vue single-file component analysis
cd src/components && gemini --all-files -p "@{CLAUDE.md}
Vue component architecture analysis:
1. Single File Component structure and organization
2. Composition API vs Options API usage patterns
3. Props, emits, and component communication patterns
4. Scoped styling and CSS module usage
5. Component testing with Vue Test Utils patterns
Focus on Vue 3 composition patterns and modern development practices."
```
## API Folder Analysis
### RESTful API Analysis
```bash
# API routes and controllers analysis
cd src/api && gemini --all-files -p "@{CLAUDE.md}
RESTful API architecture analysis:
1. Route organization and endpoint design patterns
2. Controller structure and request handling patterns
3. Middleware usage for authentication, validation, and error handling
4. Response formatting and error handling strategies
5. API versioning and backward compatibility approaches
Output: API development guidelines with routing patterns and best practices"
# Express.js-specific analysis
cd routes && gemini --all-files -p "@{CLAUDE.md}
Express.js routing patterns analysis:
1. Route definition and organization strategies
2. Middleware chain design and error propagation
3. Parameter validation and sanitization patterns
4. Authentication and authorization middleware integration
5. Response handling and status code conventions
Focus on Express.js specific patterns and Node.js best practices."
```
### GraphQL API Analysis
```bash
# GraphQL resolver analysis
cd src/graphql && gemini --all-files -p "@{CLAUDE.md}
GraphQL API architecture analysis:
1. Schema design and type definition patterns
2. Resolver implementation and data fetching strategies
3. Query complexity analysis and performance optimization
4. Authentication and authorization in GraphQL context
5. Error handling and custom scalar implementations
Focus on GraphQL-specific patterns and performance considerations."
```
## Service Layer Analysis
### Business Service Analysis
```bash
# Service layer architecture analysis
cd src/services && gemini --all-files -p "@{CLAUDE.md}
Business services architecture analysis:
1. Service layer organization and responsibility separation
2. Domain logic implementation and business rule patterns
3. External service integration and API communication
4. Transaction management and data consistency patterns
5. Service composition and orchestration strategies
Output: Service layer guidelines with business logic patterns and integration approaches"
# Microservice analysis
analyze_microservices() {
    local services=($(find services -maxdepth 1 -type d -not -name services))
    for service in "${services[@]}"; do
        echo "Analyzing service: $service"
        cd "$service" && gemini --all-files -p "@{CLAUDE.md}
Microservice analysis for $(basename $service):
1. Service boundaries and responsibility definition
2. Inter-service communication patterns
3. Data persistence and consistency strategies
4. Service configuration and environment management
5. Monitoring and health check implementations
Focus on microservice-specific patterns and distributed system concerns."
        cd - > /dev/null
    done
}
```
## Data Layer Analysis
### Data Model Analysis
```bash
# Database model analysis
cd src/models && gemini --all-files -p "@{CLAUDE.md}
Data model architecture analysis:
1. Entity relationship design and database schema patterns
2. ORM usage patterns and query optimization strategies
3. Data validation and integrity constraint implementations
4. Migration strategies and schema evolution patterns
5. Database connection management and transaction handling
Output: Data modeling guidelines with ORM patterns and database best practices"
# Prisma-specific analysis
cd prisma && gemini --all-files -p "@{CLAUDE.md}
Prisma ORM integration analysis:
1. Schema definition and model relationship patterns
2. Query patterns and performance optimization with Prisma
3. Migration management and database versioning
4. Type generation and client usage patterns
5. Advanced features usage (middleware, custom types)
Focus on Prisma-specific patterns and TypeScript integration."
```
### Data Access Layer Analysis
```bash
# Repository pattern analysis
cd src/repositories && gemini --all-files -p "@{CLAUDE.md}
Repository pattern implementation analysis:
1. Repository interface design and abstraction patterns
2. Data access optimization and caching strategies
3. Query builder usage and dynamic query construction
4. Transaction management across repository boundaries
5. Testing strategies for data access layer
Focus on repository pattern best practices and data access optimization."
```
## Tooling and Configuration Analysis
### Build Configuration Analysis
```bash
# Build tool configuration analysis
gemini -p "@{webpack.config.*,vite.config.*,rollup.config.*} @{CLAUDE.md}
Build configuration analysis:
1. Build tool setup and optimization strategies
2. Asset processing and bundling patterns
3. Development vs production configuration differences
4. Plugin configuration and custom build steps
5. Performance optimization and bundle analysis
Focus on build optimization and development workflow improvements."
# package.json and dependency analysis
gemini -p "@{package.json,package-lock.json,yarn.lock} @{CLAUDE.md}
Package management and dependency analysis:
1. Dependency organization and version management strategies
2. Script definitions and development workflow automation
3. Peer dependency handling and version compatibility
4. Security considerations and dependency auditing
5. Package size optimization and tree-shaking opportunities
Output: Dependency management guidelines and optimization recommendations."
```
### Test Directory Analysis
```bash
# Testing strategy analysis
cd tests && gemini --all-files -p "@{CLAUDE.md}
Testing strategy and implementation analysis:
1. Test organization and structure patterns
2. Unit testing approaches and coverage strategies
3. Integration testing patterns and mock usage
4. End-to-end testing implementation and tooling
5. Test performance and maintainability considerations
Output: Testing guidelines with patterns for different testing levels"
# Jest configuration and testing patterns
cd __tests__ && gemini --all-files -p "@{CLAUDE.md}
Jest testing patterns analysis:
1. Test suite organization and naming conventions
2. Mock strategies and dependency isolation
3. Async testing patterns and promise handling
4. Snapshot testing usage and maintenance
5. Custom matchers and testing utilities
Focus on Jest-specific patterns and JavaScript/TypeScript testing best practices."
```
## Styling and Asset Analysis
### CSS Architecture Analysis
```bash
# Styling architecture analysis
cd src/styles && gemini --all-files -p "@{CLAUDE.md}
CSS architecture and styling patterns analysis:
1. CSS organization methodologies (BEM, SMACSS, etc.)
2. Preprocessor usage and mixin/variable patterns
3. Component-scoped styling and CSS-in-JS approaches
4. Responsive design patterns and breakpoint management
5. Performance optimization and critical CSS strategies
Output: Styling guidelines with organization patterns and best practices"
# Tailwind CSS analysis
gemini -p "@{tailwind.config.*,**/*.css} @{CLAUDE.md}
Tailwind CSS implementation analysis:
1. Configuration customization and theme extension
2. Utility class usage patterns and component composition
3. Custom component creation with @apply directives
4. Purging strategies and bundle size optimization
5. Design system implementation with Tailwind
Focus on Tailwind-specific patterns and utility-first methodology."
```
### Static Asset Analysis
```bash
# Asset management analysis
cd src/assets && gemini --all-files -p "@{CLAUDE.md}
Static asset management analysis:
1. Asset organization and naming conventions
2. Image optimization and format selection strategies
3. Icon management and sprite generation patterns
4. Font loading and performance optimization
5. Asset versioning and cache management
Focus on performance optimization and asset delivery strategies."
```
## Smart Folder Detection
### Automatic Folder Detection and Analysis
```bash
# Detect the project structure and run the matching analyses
auto_folder_analysis() {
    echo "Detecting project structure..."
    # Detect frontend framework
    if [ -d "src/components" ]; then
        echo "Found React/Vue components directory"
        cd src/components && gemini --all-files -p "@{CLAUDE.md} Component architecture analysis"
        cd - > /dev/null
    fi
    # Detect API structure
    if [ -d "src/api" ] || [ -d "api" ] || [ -d "routes" ]; then
        echo "Found API directory structure"
        api_dir=$(find . -maxdepth 2 -type d \( -name "api" -o -name "routes" \) | head -1)
        cd "$api_dir" && gemini --all-files -p "@{CLAUDE.md} API architecture analysis"
        cd - > /dev/null
    fi
    # Detect service layer
    if [ -d "src/services" ] || [ -d "services" ]; then
        echo "Found services directory"
        service_dir=$(find . -maxdepth 2 -type d -name "services" | head -1)
        cd "$service_dir" && gemini --all-files -p "@{CLAUDE.md} Service layer analysis"
        cd - > /dev/null
    fi
    # Detect data layer
    if [ -d "src/models" ] || [ -d "models" ] || [ -d "src/db" ]; then
        echo "Found data layer directory"
        data_dir=$(find . -maxdepth 2 -type d \( -name "models" -o -name "db" \) | head -1)
        cd "$data_dir" && gemini --all-files -p "@{CLAUDE.md} Data layer analysis"
        cd - > /dev/null
    fi
}
```
### Parallel Folder Analysis
```bash
# Parallel analysis of multiple folders
parallel_folder_analysis() {
    local folders=("$@")
    local pids=()
    for folder in "${folders[@]}"; do
        if [ -d "$folder" ]; then
            (
                echo "Analyzing folder: $folder"
                cd "$folder" && gemini --all-files -p "@{CLAUDE.md}
Folder-specific analysis for $folder:
1. Directory organization and file structure patterns
2. Code patterns and architectural decisions
3. Integration points and external dependencies
4. Testing strategies and quality standards
5. Performance considerations and optimizations
Focus on folder-specific patterns and best practices."
            ) &
            pids+=($!)
        fi
    done
    # Wait for all analyses to complete
    for pid in "${pids[@]}"; do
        wait "$pid"
    done
}
# Usage example
parallel_folder_analysis "src/components" "src/services" "src/api" "src/models"
```
## Conditional Analysis and Optimization
### Size-Based Analysis Strategy
```bash
# Choose an analysis strategy based on folder size
smart_folder_analysis() {
    local folder="$1"
    local file_count=$(find "$folder" -type f | wc -l)
    echo "Analyzing folder: $folder ($file_count files)"
    if [ "$file_count" -gt 100 ]; then
        echo "Large folder detected, using selective analysis"
        # Large folder: analyze by file-type groups, entering the folder once in a subshell
        (
            cd "$folder" || return
            gemini -p "@{**/*.{js,ts,jsx,tsx}} @{CLAUDE.md} JavaScript/TypeScript patterns"
            gemini -p "@{**/*.{css,scss,sass}} @{CLAUDE.md} Styling patterns"
            gemini -p "@{**/*.{json,yaml,yml}} @{CLAUDE.md} Configuration patterns"
        )
    elif [ "$file_count" -gt 20 ]; then
        echo "Medium folder, using standard analysis"
        (cd "$folder" && gemini --all-files -p "@{CLAUDE.md} Comprehensive folder analysis")
    else
        echo "Small folder, using detailed analysis"
        (cd "$folder" && gemini --all-files -p "@{CLAUDE.md} Detailed patterns and implementation analysis")
    fi
}
```
### Incremental Analysis Strategy
```bash
# Analyze only folders modified since a given commit
incremental_folder_analysis() {
    local base_commit="${1:-HEAD~1}"
    echo "Finding modified folders since $base_commit"
    # Collect the modified folders
    local modified_folders=($(git diff --name-only "$base_commit" | xargs -I {} dirname {} | sort -u))
    for folder in "${modified_folders[@]}"; do
        if [ -d "$folder" ]; then
            echo "Analyzing modified folder: $folder"
            cd "$folder" && gemini --all-files -p "@{CLAUDE.md}
Incremental analysis for recently modified folder:
1. Recent changes impact on existing patterns
2. New patterns introduced and their consistency
3. Integration effects on related components
4. Testing coverage for modified functionality
5. Performance implications of recent changes
Focus on change impact and pattern evolution."
            cd - > /dev/null
        fi
    done
}
```
These folder-specific analysis templates provide dedicated strategies for different kinds of project directories, from component libraries to API layers and from data models to configuration management, ensuring that each directory type receives the most suitable form of analysis.

View File

@@ -1,390 +0,0 @@
# Parallel Execution Command Templates
**Complete command examples for parallel execution patterns**
## Basic Parallel Execution Patterns
### Standard Parallel Structure
```bash
# Basic parallel execution template
(
    command1 &
    command2 &
    command3 &
    wait # Wait for all parallel processes to finish
)
```
### Resource-Limited Parallel Execution
```bash
# Limit the number of parallel processes (assumes a `commands` array of command strings)
MAX_PARALLEL=3
parallel_count=0
for cmd in "${commands[@]}"; do
    eval "$cmd" &
    ((parallel_count++))
    # Wait when the concurrency limit is reached
    if ((parallel_count >= MAX_PARALLEL)); then
        wait
        parallel_count=0
    fi
done
wait # Wait for any remaining processes
```
## Parallel by Architecture Layer
### Frontend/Backend Split in Parallel
```bash
# Frontend and backend architecture analyzed in parallel
(
cd src/frontend && gemini --all-files -p "@{CLAUDE.md} Frontend architecture and patterns analysis" &
cd src/backend && gemini --all-files -p "@{CLAUDE.md} Backend services and API patterns analysis" &
cd src/shared && gemini --all-files -p "@{CLAUDE.md} Shared utilities and common patterns analysis" &
wait
)
```
### Three-Tier Architecture in Parallel
```bash
# Presentation, business, and data layers analyzed in parallel
(
gemini -p "@{src/views/**/*,src/components/**/*} @{CLAUDE.md} Presentation layer analysis" &
gemini -p "@{src/services/**/*,src/business/**/*} @{CLAUDE.md} Business logic layer analysis" &
gemini -p "@{src/models/**/*,src/db/**/*} @{CLAUDE.md} Data access layer analysis" &
wait
)
```
### Microservice Architecture in Parallel
```bash
# Microservices analyzed in parallel
(
cd services/user-service && gemini --all-files -p "@{CLAUDE.md} User service patterns and architecture" &
cd services/order-service && gemini --all-files -p "@{CLAUDE.md} Order service patterns and architecture" &
cd services/payment-service && gemini --all-files -p "@{CLAUDE.md} Payment service patterns and architecture" &
cd services/notification-service && gemini --all-files -p "@{CLAUDE.md} Notification service patterns and architecture" &
wait
)
```
## Parallel by Functional Domain
### Core Features in Parallel
```bash
# Core business features analyzed in parallel
(
gemini -p "@{**/*auth*,**/*login*,**/*session*} @{CLAUDE.md} Authentication and session management analysis" &
gemini -p "@{**/api/**/*,**/routes/**/*,**/controllers/**/*} @{CLAUDE.md} API endpoints and routing analysis" &
gemini -p "@{**/components/**/*,**/ui/**/*,**/views/**/*} @{CLAUDE.md} UI components and interface analysis" &
gemini -p "@{**/models/**/*,**/entities/**/*,**/schemas/**/*} @{CLAUDE.md} Data models and schema analysis" &
wait
)
```
### Cross-Cutting Concerns in Parallel
```bash
# Cross-cutting concerns analyzed in parallel
(
gemini -p "@{**/*security*,**/*crypto*,**/*auth*} @{CLAUDE.md} Security and encryption patterns analysis" &
gemini -p "@{**/*log*,**/*monitor*,**/*track*} @{CLAUDE.md} Logging and monitoring patterns analysis" &
gemini -p "@{**/*cache*,**/*redis*,**/*memory*} @{CLAUDE.md} Caching and performance patterns analysis" &
gemini -p "@{**/*test*,**/*spec*,**/*mock*} @{CLAUDE.md} Testing strategies and patterns analysis" &
wait
)
```
## Parallel by Technology Stack
### Full-Stack Analysis in Parallel
```bash
# Multiple technology stacks analyzed in parallel
(
gemini -p "@{**/*.{js,jsx,ts,tsx}} @{CLAUDE.md} JavaScript/TypeScript patterns and usage analysis" &
gemini -p "@{**/*.{css,scss,sass,less}} @{CLAUDE.md} Styling patterns and CSS architecture analysis" &
gemini -p "@{**/*.{py,pyx}} @{CLAUDE.md} Python code patterns and implementation analysis" &
gemini -p "@{**/*.{sql,migration}} @{CLAUDE.md} Database schema and migration patterns analysis" &
wait
)
```
### Framework-Specific Parallel Analysis
```bash
# React ecosystem analyzed in parallel
(
gemini -p "@{src/components/**/*.{jsx,tsx}} @{CLAUDE.md} React component patterns and composition analysis" &
gemini -p "@{src/hooks/**/*.{js,ts}} @{CLAUDE.md} Custom hooks patterns and usage analysis" &
gemini -p "@{src/context/**/*.{js,ts,jsx,tsx}} @{CLAUDE.md} Context API usage and state management analysis" &
gemini -p "@{**/*.stories.{js,jsx,ts,tsx}} @{CLAUDE.md} Storybook stories and component documentation analysis" &
wait
)
```
## Parallel by Project Scale
### Chunked Parallelism for Large Projects
```bash
# Large project analyzed in parallel, module by module
analyze_large_project() {
    local modules=("auth" "user" "product" "order" "payment" "notification")
    local pids=()
    for module in "${modules[@]}"; do
        (
            echo "Analyzing module: $module"
            gemini -p "@{src/$module/**/*,lib/$module/**/*} @{CLAUDE.md}
Module-specific analysis for $module:
1. Module architecture and organization patterns
2. Internal API and interface definitions
3. Integration points with other modules
4. Testing strategies and coverage
5. Performance considerations and optimizations
Focus on module-specific patterns and integration points."
        ) &
        pids+=($!)
        # Limit the degree of parallelism
        if [ ${#pids[@]} -ge 3 ]; then
            wait "${pids[0]}"
            pids=("${pids[@]:1}") # Drop the finished PID
        fi
    done
    # Wait for all remaining processes
    for pid in "${pids[@]}"; do
        wait "$pid"
    done
}
```
### Enterprise-Scale Parallel Strategy
```bash
# Enterprise project analyzed in parallel, phase by phase
enterprise_parallel_analysis() {
    # Phase 1: Core architecture analysis
    echo "Phase 1: Core Architecture Analysis"
    (
        gemini -p "@{src/core/**/*,lib/core/**/*} @{CLAUDE.md} Core architecture and foundation patterns" &
        gemini -p "@{config/**/*,*.config.*} @{CLAUDE.md} Configuration management and environment setup" &
        gemini -p "@{docs/**/*,README*,CHANGELOG*} @{CLAUDE.md} Documentation structure and project information" &
        wait
    )
    # Phase 2: Business module analysis
    echo "Phase 2: Business Module Analysis"
    (
        gemini -p "@{src/modules/**/*} @{CLAUDE.md} Business modules and domain logic analysis" &
        gemini -p "@{src/services/**/*} @{CLAUDE.md} Service layer and business services analysis" &
        gemini -p "@{src/repositories/**/*} @{CLAUDE.md} Data access and repository patterns analysis" &
        wait
    )
    # Phase 3: Infrastructure analysis
    echo "Phase 3: Infrastructure Analysis"
    (
        gemini -p "@{infrastructure/**/*,deploy/**/*} @{CLAUDE.md} Infrastructure and deployment patterns" &
        gemini -p "@{scripts/**/*,tools/**/*} @{CLAUDE.md} Build scripts and development tools analysis" &
        gemini -p "@{tests/**/*,**/*.test.*} @{CLAUDE.md} Testing infrastructure and strategies analysis" &
        wait
    )
}
```
## Smart Parallel Scheduling
### Dependency-Aware Parallel Execution
```bash
# Smart parallel scheduling based on module dependencies
dependency_aware_parallel() {
    local -A dependencies=(
        ["core"]=""
        ["utils"]="core"
        ["services"]="core,utils"
        ["api"]="services"
        ["ui"]="services"
        ["tests"]="api,ui"
    )
    local -A completed=()
    local -A running=()
    while [ ${#completed[@]} -lt ${#dependencies[@]} ]; do
        for module in "${!dependencies[@]}"; do
            # Skip modules that are already done or currently running
            [[ ${completed[$module]} ]] && continue
            [[ ${running[$module]} ]] && continue
            # Check whether all dependencies are done
            local deps="${dependencies[$module]}"
            local can_start=true
            if [[ -n "$deps" ]]; then
                IFS=',' read -ra dep_array <<< "$deps"
                for dep in "${dep_array[@]}"; do
                    [[ ! ${completed[$dep]} ]] && can_start=false && break
                done
            fi
            # Start the module analysis
            if $can_start; then
                echo "Starting analysis for module: $module"
                (
                    gemini -p "@{src/$module/**/*} @{CLAUDE.md} Module $module analysis"
                    echo "completed:$module"
                ) &
                running[$module]=$!
            fi
        done
        # Check for finished processes
        for module in "${!running[@]}"; do
            if ! kill -0 "${running[$module]}" 2>/dev/null; then
                completed[$module]=true
                unset "running[$module]"
                echo "Module $module analysis completed"
            fi
        done
        sleep 1
    done
}
```
### Resource-Adaptive Parallelism
```bash
# Adaptive parallelism based on available system resources
adaptive_parallel_execution() {
    local available_memory
    available_memory=$(free -m 2>/dev/null | awk 'NR==2{print $7}')
    available_memory=${available_memory:-4000}
    local cpu_cores=$(nproc 2>/dev/null || echo 4)
    # Compute the optimal parallelism from available resources
    local max_parallel
    if [ "$available_memory" -lt 2000 ]; then
        max_parallel=2
    elif [ "$available_memory" -lt 4000 ]; then
        max_parallel=3
    else
        max_parallel=$((cpu_cores > 4 ? 4 : cpu_cores))
    fi
    echo "Adaptive parallel execution: $max_parallel concurrent processes"
    local commands=(
        "gemini -p '@{src/components/**/*} @{CLAUDE.md} Component analysis'"
        "gemini -p '@{src/services/**/*} @{CLAUDE.md} Service analysis'"
        "gemini -p '@{src/utils/**/*} @{CLAUDE.md} Utility analysis'"
        "gemini -p '@{src/api/**/*} @{CLAUDE.md} API analysis'"
        "gemini -p '@{src/models/**/*} @{CLAUDE.md} Model analysis'"
    )
    local active_jobs=0
    for cmd in "${commands[@]}"; do
        eval "$cmd" &
        ((active_jobs++))
        # Wait when the parallel limit is reached
        if [ $active_jobs -ge $max_parallel ]; then
            wait
            active_jobs=0
        fi
    done
    wait # Wait for all remaining tasks to finish
}
```
## Error Handling and Monitoring
### Error Handling for Parallel Execution
```bash
# Parallel execution with error handling
robust_parallel_execution() {
    local commands=("$@")
    local pids=()
    local results=()
    # Launch all parallel tasks
    for i in "${!commands[@]}"; do
        (
            echo "Starting task $i: ${commands[$i]}"
            if eval "${commands[$i]}"; then
                echo "SUCCESS:$i"
            else
                echo "FAILED:$i"
                exit 1 # Propagate failure so the parent's wait can detect it
            fi
        ) &
        pids+=($!)
    done
    # Wait for all tasks and collect results
    for i in "${!pids[@]}"; do
        if wait "${pids[$i]}"; then
            results+=("Task $i: SUCCESS")
        else
            results+=("Task $i: FAILED")
            echo "Task $i failed, attempting retry..."
            # Simple retry mechanism
            if eval "${commands[$i]}"; then
                results[-1]="Task $i: SUCCESS (retry)"
            else
                results[-1]="Task $i: FAILED (retry failed)"
            fi
        fi
    done
    # Print an execution summary
    echo "Parallel execution summary:"
    for result in "${results[@]}"; do
        echo "  $result"
    done
}
```
### Real-Time Progress Monitoring
```bash
# Parallel execution with progress monitoring
monitored_parallel_execution() {
    local total_tasks=$#
    echo "Starting $total_tasks parallel tasks..."
    for cmd in "$@"; do
        (
            if eval "$cmd"; then
                echo "COMPLETED:$(date): $cmd"
            else
                echo "FAILED:$(date): $cmd"
            fi
        ) &
    done
    # Poll the running background jobs and report progress
    local remaining
    while true; do
        remaining=$(jobs -r | wc -l)
        echo "Progress: Completed: $((total_tasks - remaining)), Remaining: $remaining"
        [ "$remaining" -eq 0 ] && break
        sleep 5
    done
    wait
    echo "All parallel tasks completed."
}
```
These parallel execution templates provide strategies for a wide range of scenarios, from simple parallel runs to complex dependency-aware scheduling and resource-adaptive execution.

View File

@@ -1,152 +0,0 @@
# Workflow System Architecture Refactoring - Upgrade Report
> **Version**: 2025-09-08
> **Scope**: Workflow core architecture, documentation system, data model
> **Impact level**: Major architecture upgrade
## 🎯 Refactoring Overview
This refactoring transforms a complex, redundancy-prone, document-driven system into a modern workflow architecture that is **data-centric, rule-driven, and highly consistent**. The system was comprehensively optimized by introducing three core principles.
### Core Changes
- **JSON-only data model**: Eliminates data synchronization issues entirely
- **Marker-file session management**: Enables millisecond-level session operations
- **Progressive complexity system**: Adaptive structure that scales from simple to complex
- **Documentation consolidation**: Reduced from 22 documents to 17, removing redundancy
## 📊 Quantified Improvement Metrics
| Item | Before | After | Improvement |
|---------|--------|--------|----------|
| **Document count** | 22 | 17 | **23% reduction** |
| **Session switch speed** | Required config parsing | <1 ms atomic operation | **95% faster** |
| **Data consistency** | Sync conflicts possible | 100% consistent | **Raised to 100%** |
| **Maintenance cost** | Complex sync logic | No synchronization needed | **40-50% lower** |
| **Learning curve** | Steep onboarding | Progressive learning | **50% shorter** |
| **Development efficiency** | Manual management | Automated workflows | **30-40% higher** |
## 🏗️ Architecture Changes in Detail
### 1. Core File Architecture
#### New Unified Files
- **`system-architecture.md`** - Architecture overview and navigation hub
- **`data-model.md`** - Unified JSON-only data specification
- **`complexity-rules.md`** - Standardized complexity classification rules
#### Consolidation Strategy
```
Before: scattered rule definitions → After: centralized authoritative specifications
├── core-principles.md (consolidated)
├── unified-workflow-system-principles.md (consolidated)
├── task-management-principles.md (consolidated)
├── task-decomposition-integration.md (consolidated)
├── complexity-decision-tree.md (consolidated)
├── todowrite-coordination-rules.md (deleted)
└── json-document-coordination-system.md (consolidated)
```
### 2. JSON-Only Data Model
#### Fundamental Changes
- **Single source of truth**: The `.task/impl-*.json` files are the sole authoritative state store
- **Read-only views**: All Markdown documents become dynamically generated read-only views
- **Zero sync overhead**: Data synchronization complexity is eliminated entirely
#### Unified 8-Field Schema
```json
{
  "id": "impl-1",
  "title": "Task title",
  "status": "pending|active|completed|blocked|container",
  "type": "feature|bugfix|refactor|test|docs",
  "agent": "code-developer",
  "context": { "requirements": [], "scope": [], "acceptance": [] },
  "relations": { "parent": null, "subtasks": [], "dependencies": [] },
  "execution": { "attempts": 0, "last_attempt": null },
  "meta": { "created": "ISO-8601", "updated": "ISO-8601" }
}
```
### 3. Marker-File Session Management
#### High-Performance Design
- **Marker file**: `.workflow/.active-[session-name]`
- **Atomic operations**: Instant switching via `rm` and `touch` (see the sketch below)
- **Self-healing**: Automatically detects and resolves marker-file conflicts
- **Visibility**: `ls .workflow/.active-*` shows the active session directly
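The switching mechanism can be sketched in a few lines of shell; the helper and session name below are illustrative rather than part of the shipped tooling:

```bash
# Minimal sketch of marker-file session switching (helper and session names are illustrative)
switch_session() {
    local session="$1"
    rm -f .workflow/.active-*            # clear any existing marker
    touch ".workflow/.active-$session"   # activate the new session
}

switch_session "feature-login"
ls .workflow/.active-*                   # shows the currently active session
```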
### 4. Progressive Complexity System
#### Unified Classification Criteria
| Complexity | Task count | Hierarchy depth | File structure | Orchestration mode |
|--------|----------|----------|----------|----------|
| **Simple** | <5 | 1 level | Minimal structure | Direct execution |
| **Medium** | 5-15 | 2 levels | Enhanced structure | Context coordination |
| **Complex** | >15 | 3 levels | Full structure | Multi-agent orchestration |
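As a rough illustration, the classification could be derived from the number of task files in `.task/`; the helper name, and the assumption that each `impl-*.json` file holds one task, are not from the report:

```bash
# Sketch: classify workflow complexity from the impl-*.json task count
# (assumes one task per file; thresholds follow the table above)
classify_complexity() {
    local task_count
    task_count=$(ls .task/impl-*.json 2>/dev/null | wc -l)
    if [ "$task_count" -lt 5 ]; then
        echo "simple"
    elif [ "$task_count" -le 15 ]; then
        echo "medium"
    else
        echo "complex"
    fi
}
```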
## 🔧 Commands Directory Optimization
### Reference Minimization Strategy
Adopt the principle of "minimum necessary references" to avoid excessive dependencies:
```bash
# Before: potential circular references and redundant dependencies
/commands/task-create.md → system-architecture.md → all dependencies
# After: precise references
/commands/task-create.md → data-model.md (task management only)
/commands/context.md → data-model.md (data source only)
/commands/enhance-prompt.md → gemini-cli-guidelines.md (Gemini only)
```
### Optimization Effects
- **Decoupling**: Each command depends only on the specifications it directly needs
- **Maintainability**: The impact scope of specification changes is explicit and controllable
- **Performance**: Reduces unnecessary document loading and parsing
## 🚀 System Advantages
### 1. Improved Maintainability
- **Unified specifications**: Each concept has exactly one authoritative definition
- **No conflicts**: Rule conflicts and overlapping concepts are eliminated
- **Traceability**: Every change has a clearly defined impact scope
### 2. Improved Development Efficiency
- **Fast onboarding**: New developers can learn top-down, starting from `system-architecture.md`
- **Automation**: File structure, document generation, and agent orchestration are fully automated
- **No waiting**: Millisecond-level session management and state queries
### 3. Improved System Stability
- **Data integrity**: The JSON-only model rules out inconsistent state
- **Predictability**: Unified complexity criteria make system behavior highly predictable
- **Fault tolerance**: Session management is self-healing
## 📋 Migration Guide
### Impact on Existing Workflows
1. **Compatibility**: Existing `.task/*.json` files remain fully compatible
2. **Session management**: Sessions must be reactivated (via marker files)
3. **Document references**: References in Commands have been updated automatically
### Developer Adaptation
1. **Learning path**: `system-architecture.md` → specific specification documents
2. **Data operations**: Work directly with the JSON files; Markdown is no longer maintained by hand (see the sketch below)
3. **Session operations**: Manage sessions with marker files
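For instance, a status change could be written straight to the task file; the use of `jq` here is an assumption, and the field names follow the 8-field schema above:

```bash
# Sketch: mark a task as completed directly in its JSON file (jq usage is an assumption)
jq '.status = "completed" | .meta.updated = (now | todate)' .task/impl-1.json > tmp.json \
  && mv tmp.json .task/impl-1.json
```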
## 🎉 Summary
This refactoring is not just a technical architecture upgrade; it is an evolution of the workflow system's philosophy:
- **From document-driven to data-driven**: JSON becomes the single source of truth
- **From complex to simple**: Progressive complexity adapts to the needs of different scenarios
- **From scattered to unified**: A centralized specification system ensures consistency
- **From manual to automated**: Comprehensive automation reduces manual intervention
The new architecture lays a solid foundation for future extension and optimization, and will significantly improve the team's development efficiency and the system's maintainability.
---
**Upgrade completed**: 2025-09-08
**Document version**: v2.0
**Architecture owner**: Claude Code System

View File

@@ -1,9 +0,0 @@
codex exec "...": non-interactive "automation mode", e.g. codex exec "explain utils.ts"
codex --full-auto "create the fanciest todo-list app"
@ for file search
Typing @ triggers a fuzzy filename search over the workspace root. Use Up/Down to move through the results, and press Tab or Enter to replace the @ with the selected path. Press Esc to cancel the search.
--cd/-C flag
Sometimes it is not convenient to cd to the directory you want Codex to use as the "working root" before running Codex. Fortunately, codex supports a --cd option so you can specify whatever folder you want. You can confirm that Codex is honoring --cd by double-checking the workdir it reports in the TUI at the start of a new session.
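A hedged example of pointing Codex at a different working root; combining --cd with exec, and the path shown, are assumptions rather than documented usage:

```bash
# Illustrative only: run a non-interactive Codex command against another project directory
codex --cd ~/projects/my-app exec "explain utils.ts"
```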