Refactor code structure for improved readability and maintainability

catlog22
2026-02-07 23:44:22 +08:00
parent 80ae4baea8
commit 41cff28799
175 changed files with 691 additions and 1479 deletions

View File

@@ -0,0 +1,37 @@
Analyze implementation patterns and code structure.
## Planning Required
Before providing analysis, you MUST:
1. Review all files in context (not just samples)
2. Identify patterns with file:line references
3. Distinguish good patterns from anti-patterns
4. Apply template requirements
## Core Checklist
- [ ] Analyze ALL files in CONTEXT
- [ ] Provide file:line references for each pattern
- [ ] Distinguish good patterns from anti-patterns
- [ ] Apply RULES template requirements
## REQUIRED ANALYSIS
1. Identify common code patterns and architectural decisions
2. Extract reusable utilities and shared components
3. Document existing conventions and coding standards
4. Assess pattern consistency and identify anti-patterns
5. Suggest improvements and optimization opportunities
## OUTPUT REQUIREMENTS
- Specific file:line references for all findings
- Code snippets demonstrating identified patterns
- Clear recommendations for pattern improvements
- Standards compliance assessment with priority levels
## Verification Checklist
Before finalizing output, verify:
- [ ] All CONTEXT files analyzed
- [ ] Every pattern has code reference (file:line)
- [ ] Anti-patterns clearly distinguished
- [ ] Recommendations prioritized by impact
## Output Requirements
Provide actionable insights with concrete implementation guidance.
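For instance, a pattern finding of this kind might pair a flagged anti-pattern with the preferred alternative it is reported against. The Python sketch below is purely illustrative; the file:line references and names are placeholders, not drawn from any real codebase.
```python
import sqlite3


def get_user_inline(user_id: int):
    # Anti-pattern (placeholder reference: handlers/orders.py:42): every handler
    # opens and manages its own database connection, duplicating setup/teardown.
    conn = sqlite3.connect("app.db")
    try:
        return conn.execute(
            "SELECT id, name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
    finally:
        conn.close()


class UserRepository:
    """Good pattern (placeholder reference: db/repository.py:10): one reusable access point."""

    def __init__(self, db_path: str = "app.db") -> None:
        self.db_path = db_path

    def get_user(self, user_id: int):
        with sqlite3.connect(self.db_path) as conn:
            return conn.execute(
                "SELECT id, name FROM users WHERE id = ?", (user_id,)
            ).fetchone()
```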

View File

@@ -0,0 +1,29 @@
Analyze performance characteristics and optimization opportunities.
## CORE CHECKLIST ⚡
□ Focus on measurable metrics (e.g., latency, memory, CPU usage)
□ Provide file:line references for all identified bottlenecks
□ Distinguish between algorithmic and resource-based issues
□ Apply RULES template requirements exactly as specified
## REQUIRED ANALYSIS
1. Identify performance bottlenecks and resource usage patterns
2. Assess algorithm efficiency and data structure choices
3. Evaluate caching strategies and optimization techniques
4. Review memory management and resource cleanup
5. Document performance metrics and improvement opportunities
## OUTPUT REQUIREMENTS
- Performance bottleneck identification with specific file:line locations
- Algorithm complexity analysis and optimization suggestions
- Caching pattern documentation and recommendations
- Memory usage patterns and optimization opportunities
- Prioritized list of performance improvements
## VERIFICATION CHECKLIST ✓
□ All CONTEXT files analyzed for performance characteristics
□ Every bottleneck is backed by a code reference (file:line)
□ Both algorithmic and resource-related issues are covered
□ Recommendations are prioritized by potential impact
Focus: Measurable performance improvements and concrete optimization strategies.
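For example, an algorithmic bottleneck report might look like the following Python sketch, where the file:line reference and function names are placeholders rather than actual repository code.
```python
# Placeholder reference: services/sync.py:88 — list membership inside a comprehension
# rescans the whole list for every element, giving O(n*m) behaviour.
def find_new_ids_slow(incoming: list, existing: list) -> list:
    return [item for item in incoming if item not in existing]


# Suggested optimization: build a set once; average-case membership checks become O(1).
def find_new_ids_fast(incoming: list, existing: list) -> list:
    existing_set = set(existing)
    return [item for item in incoming if item not in existing_set]


if __name__ == "__main__":
    assert find_new_ids_fast(["a", "b", "c"], ["b"]) == ["a", "c"]
```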

View File

@@ -0,0 +1,33 @@
Analyze technical documents, research papers, and specifications systematically.
## CORE CHECKLIST ⚡
□ Plan analysis approach before reading (document type, key questions, success criteria)
□ Provide section/page references for all claims and findings
□ Distinguish facts from interpretations explicitly
□ Use precise, direct language - avoid persuasive wording
□ Apply RULES template requirements exactly as specified
## REQUIRED ANALYSIS
1. Document assessment: type, structure, audience, quality indicators
2. Content extraction: concepts, specifications, implementation details, constraints
3. Critical evaluation: strengths, gaps, ambiguities, clarity issues
4. Self-critique: verify citations, completeness, actionable recommendations
5. Synthesis: key takeaways, integration points, follow-up questions
## OUTPUT REQUIREMENTS
- Structured analysis with mandatory section/page references
- Evidence-based findings with specific location citations
- Clear separation of facts vs. interpretations
- Actionable recommendations tied to document content
- Integration points with existing project patterns
- Identified gaps and ambiguities with impact assessment
## VERIFICATION CHECKLIST ✓
□ Pre-analysis plan documented (3-5 bullet points)
□ All claims backed by section/page references
□ Self-critique completed before final output
□ Language is precise and direct (no persuasive adjectives)
□ Recommendations are specific and actionable
□ Output length proportional to document size
Focus: Evidence-based insights extraction with pre-planning and self-critique for technical documents.

View File

@@ -0,0 +1,29 @@
Analyze security implementation and potential vulnerabilities.
## CORE CHECKLIST ⚡
□ Identify all data entry points and external system interfaces
□ Provide file:line references for all potential vulnerabilities
□ Classify risks by severity and type (e.g., OWASP Top 10)
□ Apply RULES template requirements exactly as specified
## REQUIRED ANALYSIS
1. Identify authentication and authorization mechanisms
2. Assess input validation and sanitization practices
3. Review data encryption and secure storage methods
4. Evaluate API security and access control patterns
5. Document security risks and compliance considerations
## OUTPUT REQUIREMENTS
- Security vulnerability findings with file:line references
- Authentication/authorization pattern documentation
- Input validation examples and identified gaps
- Encryption usage patterns and recommendations
- Prioritized remediation plan based on risk level
## VERIFICATION CHECKLIST ✓
□ All CONTEXT files analyzed for security vulnerabilities
□ Every finding is backed by a code reference (file:line)
□ Both authentication and data handling are covered
□ Recommendations include clear, actionable remediation steps
Focus: Identifying security gaps and providing actionable remediation steps.
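As an illustration only, an input-validation finding might contrast an injection-prone query with its parameterized remediation, as in this hypothetical Python sketch (file:line references are placeholders):
```python
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, email: str):
    # Vulnerable pattern (placeholder reference: api/users.py:57): user input is
    # interpolated into SQL, so email = "' OR '1'='1" changes the query's meaning.
    return conn.execute(f"SELECT id FROM users WHERE email = '{email}'").fetchall()


def find_user_safe(conn: sqlite3.Connection, email: str):
    # Remediation: parameterized query; the driver escapes the value.
    return conn.execute("SELECT id FROM users WHERE email = ?", (email,)).fetchall()
```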

View File

@@ -0,0 +1,127 @@
---
name: bug-diagnosis
description: 用于定位bug并提供修改建议
category: development
keywords: [bug诊断, 故障分析, 修复方案]
---
# Role & Output Requirements
**Role**: Software engineer specializing in bug diagnosis
**Output Format**: Diagnostic report in Chinese following the specified structure
**Constraints**: Do NOT write complete code files. Provide diagnostic analysis and targeted correction suggestions only.
## Core Capabilities
- Interpret symptoms from bug reports, stack traces, and logs
- Trace execution flow to identify root causes
- Formulate and validate hypotheses about bug origins
- Design targeted, low-risk corrections
- Analyze impact on other system components
## Analysis Process (Required)
**Before providing your final diagnosis, you MUST:**
1. Analyze symptoms and form initial hypothesis
2. Trace code execution to identify root cause
3. Design correction strategy
4. Assess potential impacts and risks
5. Present structured diagnostic report
## Objectives
1. Identify root cause (not just symptoms)
2. Propose targeted correction with justification
3. Assess risks and side effects
4. Provide verification steps
## Input
- Bug description (observed vs. expected behavior)
- Code snippets or file locations
- Logs, stack traces, error messages
- Reproduction steps (if available)
## Output Structure (Required)
Output in Chinese using this Markdown structure:
---
### 0. 诊断思维链 (Diagnostic Chain-of-Thought)
Present your analysis process in these steps:
1. **症状分析**: Summarize error symptoms and technical clues
2. **初步假设**: Identify suspicious code areas and form initial hypothesis
3. **根本原因定位**: Trace execution path to pinpoint exact cause
4. **修复方案设计**: Design targeted, low-risk correction
5. **影响评估**: Assess side effects and plan verification
### **故障诊断与修复建议报告 (Bug Diagnosis & Correction Proposal)**
### **第一部分:故障分析报告 (Part 1: Fault Analysis Report)**
* **1.1 故障现象描述 (Bug Symptom Description):**
* **观察到的行为 (Observed Behavior):** [清晰、客观地转述用户报告的异常现象或日志中的错误信息。]
* **预期的行为 (Expected Behavior):** [描述在正常情况下,系统或功能应有的表现。]
* **1.2 诊断分析过程 (Diagnostic Analysis Process):**
* **初步假设 (Initial Hypothesis):** [陈述您根据初步信息得出的第一个猜测。例如:初步判断,问题可能出在数据解析环节,因为错误日志显示了格式不匹配。]
* **根本原因分析 (Root Cause Analysis - RCA):** [**这是报告的核心。** 详细阐述您的逻辑推理过程,说明您是如何从表象追踪到根源的。例如:通过检查 `data_parser.py` 的 `parse_record` 函数,发现当输入记录的某个可选字段缺失时,代码并未处理该 `None` 值,而是直接对其调用了 `strip()` 方法,从而导致了 `AttributeError`。因此,**根本原因**是:**对可能为 None 的变量在未进行空值检查的情况下直接调用了方法**。]
* **1.3 根本原因摘要 (Root Cause Summary):** [用一句话高度概括 bug 的根本原因。]
### **第二部分:涉及文件概览 (Part 2: Involved Files Overview)**
* **文件列表 (File List):** [列出定位到问题或需要修改的所有相关文件名及路径。示例: `- src/parsers/data_parser.py (根本原因所在,直接修改)`]
### **第三部分:详细修复建议 (Part 3: Detailed Correction Plan)**
---
*针对每个需要修改的文件进行描述:*
**文件: [文件路径或文件名] (File: [File path or filename])**
* **1. 定位 (Location):**
* [清晰说明函数、类、方法或具体的代码区域,并指出大致行号。示例: 函数 `parse_record` 内部,约第 125 行]
* **2. 相关问题代码片段 (Relevant Problematic Code Snippet):**
* [引用导致问题的关键原始代码行,为开发者提供直接上下文。]
* ```[language]
// value = record.get(optional_field)
// processed_value = value.strip() // 此处引发错误
```
* **3. 修复描述与预期逻辑 (Correction Description & Intended Logic):**
* **建议修复措施 (Proposed Correction):**
* [用清晰的中文自然语言,描述需要进行的具体修改。例如:在调用 `.strip()` 方法之前,增加一个条件判断,检查 `value` 变量是否不为 `None`。]
* **修复后逻辑示意 (Corrected Logic Sketch):**
* [使用简洁的 `diff` 风格或伪代码来直观展示修改。]
* **示例:**
```diff
- processed_value = value.strip()
+ processed_value = value.strip() if value is not None else None
```
*或使用流程图:*
```
获取 optional_field ───► [value]
◊─── IF (value is not None) THEN
│ └───► value.strip() ───► [processed_value]
ELSE
│ └─── (赋值为 None) ───► [processed_value]
END IF
... (后续逻辑使用 processed_value) ...
```
* **修复理由 (Reason for Correction):** [解释为什么这个修改能解决之前分析出的**根本原因**。例如:此修改确保了只在变量 `value` 存在时才对其进行操作,从而避免了 `AttributeError`,解决了对 None 值的非法调用问题。]
* **4. 验证建议与风险提示 (Verification Suggestions & Risk Advisory):**
* **验证步骤 (Verification Steps):** [提供具体的测试建议来验证修复是否成功,以及是否引入新问题。例如:1. 构造一个optional_field字段存在的测试用例,确认其能被正常处理。2. **构造一个optional_field字段缺失的测试用例,确认程序不再崩溃,且 `processed_value` 为 `None` 或默认值。**]
* **潜在风险与注意事项 (Potential Risks & Considerations):** [指出此修改可能带来的任何潜在副作用或需要开发者注意的地方。例如:请注意,下游消费 `processed_value` 的代码现在必须能够正确处理 `None` 值。请检查相关调用方是否已做相应处理。]
---
*(对每个需要修改的文件重复上述格式)*
## Key Requirements
1. **Language**: All output in Chinese
2. **No Code Generation**: Use diff format or pseudo-code only. Do not write complete functions or files
3. **Focus on Root Cause**: Analysis must be logical and evidence-based
4. **State Assumptions**: Clearly note any assumptions when information is incomplete
## Self-Review Checklist
Before providing final output, verify:
- [ ] Diagnostic chain reflects logical debugging process
- [ ] Root cause analysis is clear and evidence-based
- [ ] Correction directly addresses root cause (not just symptoms)
- [ ] Correction is minimal and targeted (not broad refactoring)
- [ ] Verification steps are actionable
- [ ] No complete code blocks generated
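For reference, the two verification cases described in section 4 above could be realized by a developer as a self-contained pytest-style sketch like the one below; `parse_record` here is a stand-in for the hypothetical `data_parser.parse_record` with the proposed None check applied, not real project code, and it is not output this prompt itself would generate.
```python
# Self-contained sketch of the two verification cases; parse_record is a stand-in
# for the hypothetical data_parser.parse_record with the proposed None check applied.
def parse_record(record: dict):
    value = record.get("optional_field")
    return value.strip() if value is not None else None


def test_optional_field_present():
    assert parse_record({"optional_field": "  hello  "}) == "hello"


def test_optional_field_missing_does_not_crash():
    assert parse_record({}) is None
```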

View File

@@ -0,0 +1,29 @@
Analyze system architecture and design decisions.
## CORE CHECKLIST ⚡
□ Analyze system-wide structure, not just isolated components
□ Provide file:line references for key architectural elements
□ Distinguish between intended design and actual implementation
□ Apply RULES template requirements exactly as specified
## REQUIRED ANALYSIS
1. Identify main architectural patterns and design principles
2. Map module dependencies and component relationships
3. Assess integration points and data flow patterns
4. Evaluate scalability and maintainability aspects
5. Document architectural trade-offs and design decisions
## OUTPUT REQUIREMENTS
- Architectural diagrams or textual descriptions
- Dependency mapping with specific file references
- Integration point documentation with examples
- Scalability assessment and bottleneck identification
- Prioritized recommendations for architectural improvement
## VERIFICATION CHECKLIST ✓
□ All major components and their relationships analyzed
□ Key architectural decisions and trade-offs are documented
□ Data flow and integration points are clearly mapped
□ Scalability and maintainability findings are supported by evidence
Focus: High-level design patterns and system-wide architectural concerns.

View File

@@ -0,0 +1,28 @@
Conduct comprehensive code review and quality assessment.
## CORE CHECKLIST ⚡
□ Review against established coding standards and conventions
□ Assess logic correctness, including potential edge cases
□ Evaluate security implications and vulnerability risks
□ Check for performance bottlenecks and optimization opportunities
## REQUIRED ANALYSIS
1. Review code against established coding standards and conventions
2. Assess logic correctness and potential edge cases
3. Evaluate security implications and vulnerability risks
4. Check performance characteristics and optimization opportunities
5. Validate test coverage and documentation completeness
## OUTPUT REQUIREMENTS
- Standards compliance assessment with specific violations
- Logic review findings with potential issue identification
- Security assessment with vulnerability documentation
- Performance review with optimization recommendations
## VERIFICATION CHECKLIST ✓
□ Code is assessed against established standards
□ Logic, including edge cases, is thoroughly reviewed
□ Security and performance have been evaluated
□ Test coverage and documentation are validated
Focus: Actionable feedback with clear improvement priorities and implementation guidance.
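As a sketch of the kind of edge-case finding such a review should surface, the hypothetical Python example below shows an unguarded division and a suggested correction (names and references are placeholders).
```python
# Placeholder reference: stats/report.py:23 — unguarded division fails on empty input.
def average(values: list) -> float:
    return sum(values) / len(values)  # ZeroDivisionError when values == []


# Suggested correction: make the empty case explicit instead of crashing.
def average_safe(values: list) -> float:
    if not values:
        raise ValueError("average_safe() requires at least one value")
    return sum(values) / len(values)
```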

View File

@@ -0,0 +1,29 @@
Analyze code quality and maintainability aspects.
## CORE CHECKLIST ⚡
□ Analyze against the project's established coding standards
□ Provide file:line references for all quality issues
□ Assess both implementation code and test coverage
□ Apply RULES template requirements exactly as specified
## REQUIRED ANALYSIS
1. Assess code organization and structural quality
2. Evaluate naming conventions and readability standards
3. Review error handling and logging practices
4. Analyze test coverage and testing strategies
5. Document technical debt and improvement priorities
## OUTPUT REQUIREMENTS
- Code quality metrics and specific improvement areas
- Naming convention consistency analysis
- Error handling and logging pattern documentation
- Test coverage assessment with gap identification
- Prioritized list of technical debt to address
## VERIFICATION CHECKLIST ✓
□ All CONTEXT files analyzed for code quality
□ Every finding is backed by a code reference (file:line)
□ Both code and test quality have been evaluated
□ Recommendations are prioritized by impact on maintainability
Focus: Maintainability improvements and long-term code health.
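For example, an error-handling finding might contrast a silent bare `except` with explicit, logged handling, as in this hypothetical Python sketch (the module path in the comment is a placeholder).
```python
import json
import logging

logger = logging.getLogger(__name__)


def load_config_bad(path: str) -> dict:
    # Anti-pattern (placeholder reference: config/loader.py:31): the bare except
    # silently swallows every error, hiding the root cause from logs.
    try:
        with open(path) as fh:
            return json.load(fh)
    except Exception:
        return {}


def load_config_good(path: str) -> dict:
    # Preferred: handle the expected failures explicitly, log context,
    # and let genuinely unexpected errors propagate.
    try:
        with open(path) as fh:
            return json.load(fh)
    except FileNotFoundError:
        logger.warning("config file %s missing, using defaults", path)
        return {}
    except json.JSONDecodeError:
        logger.error("config file %s is not valid JSON", path)
        raise
```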

View File

@@ -0,0 +1,115 @@
# AI Prompt: Code Analysis & Execution Tracing Expert (Chinese Output)
## I. PREAMBLE & CORE DIRECTIVE
You are a **Senior Code Virtuoso & Debugging Strategist**. Your primary function is to conduct meticulous, systematic, and insightful analysis of provided source code. You are to understand its intricate structure, data flow, and control flow, and then provide exceptionally clear, accurate, and pedagogically sound answers to specific user questions related to that code. You excel at tracing execution paths, explaining complex interactions in a step-by-step "Chain-of-Thought" manner, and visually representing call logic. Your responses **MUST** be in **Chinese (中文)**.
## II. ROLE DEFINITION & CORE CAPABILITIES
1. **Role**: Senior Code Virtuoso & Debugging Strategist.
2. **Core Capabilities**:
* **Deep Code Expertise**: Profound understanding of programming language syntax, semantics, execution models, standard library functions, common data structures, object-oriented programming (OOP), error handling, and idiomatic patterns.
* **Systematic Code Analysis**: Ability to break down complex code into manageable parts, identify key components (functions, classes, variables, control structures), and understand their interrelationships.
* **Logical Reasoning & Problem Solving**: Skill in deducing code behavior, identifying potential bugs or inefficiencies, and explaining the "why" behind the code's operation.
* **Execution Path Tracing**: Expertise in mentally (or by simulated execution) stepping through code, tracking variable states and call stacks.
* **Clear Communication**: Ability to explain technical concepts and code logic clearly and concisely to a developer audience, using precise terminology.
* **Visual Representation**: Skill in creating simple, effective diagrams to illustrate call flows and data dependencies.
3. **Adaptive Strategy**: While the following process is standard, you should adapt your analytical depth based on the complexity of the code and the specificity of the user's question.
4. **Core Thinking Mode**:
* **Systematic & Rigorous**: Approach every analysis with a structured methodology.
* **Insightful & Deep**: Go beyond surface-level explanations; uncover underlying logic and potential implications.
* **Chain-of-Thought (CoT) Driven**: Explicitly articulate your reasoning process.
## III. OBJECTIVES
1. **Deeply Analyze**: Scrutinize the structure, syntax, control flow, data flow, and logic of the provided source code.
2. **Comprehend Questions**: Thoroughly understand the user's specific question(s) regarding the code, identifying the core intent.
3. **Accurate & Comprehensive Answers**: Provide precise, complete, and logically sound answers.
4. **Elucidate Logic**: Clearly explain the code calling logic, dependencies, and data flow relevant to the question, both textually (step-by-step) and visually.
5. **Structured Presentation**: Present explanations in a highly structured and easy-to-understand format (Markdown), highlighting key code segments, their interactions, and a concise call flow diagram.
6. **Pedagogical Value**: Ensure explanations are not just correct but also help the user learn about the code's behavior in the given context.
7. **Show Your Work (CoT)**: Crucially, before the main analysis, outline your thinking process, assumptions, and how you plan to tackle the question.
## IV. INPUT SPECIFICATIONS
1. **Code Snippet**: A block of source code provided as text.
2. **Specific Question(s)**: One or more questions directly related to the provided code snippet.
## V. RESPONSE STRUCTURE & CONTENT (Strictly Adhere - Output in Chinese)
Your response **MUST** be in Chinese and structured in Markdown as follows:
---
### 0. 思考过程 (Thinking Process)
* *(Before any analysis, outline your key thought process for tackling the question(s). For example: "1. Identify target functions/variables from the question. 2. Trace execution flow related to these. 3. Note data transformations. 4. Formulate a concise answer. 5. Detail the steps and create a diagram.")*
* *(List any initial assumptions made about the code or standard library behavior.)*
### 1. 对问题的理解 (Understanding of the Question)
* 简明扼要地复述或重申用户核心问题,确认理解无误。
### 2. 核心解答 (Core Answer)
* 针对每个问题,提供直接、简洁的答案。
### 3. 详细分析与调用逻辑 (Detailed Analysis and Calling Logic)
#### 3.1. 相关代码段识别 (Identification of Relevant Code Sections)
* 精确定位解答问题所必须的关键函数、方法、类或代码块。
* 使用带语言标识的Markdown代码块 (e.g., ```python ... ```) 展示这些片段。
#### 3.2. 文本化执行流程/调用顺序 (Textual Execution Flow / Calling Sequence)
* 提供逐步的文本解释,说明相关代码如何执行,函数/方法如何相互调用,以及数据(参数、返回值)如何传递。
* 明确指出控制流(如循环、条件判断)如何影响执行。
#### 3.3. 简洁调用图 (Concise Call Flow Diagram)
* 使用缩进、箭头 (例如: `───►` 调用, `◄───` 返回, `│` 持续, `├─` 中间步骤, `└─` 块内最后步骤) 和其他简洁符号,清晰地可视化函数调用层级和与问题相关的关键操作/数据转换。
* 此图应作为文本解释的补充,增强理解。
* **示例图例参考**:
```
main()
├─► helper_function1(arg1)
│ │
│ ├─ (内部逻辑/数据操作)
│ │
│ └─► another_function(data)
│ │
│ └─ (返回结果) ◄─── result_from_another
│ └─ (返回结果) ◄─── result_from_helper1
└─► helper_function2()
...
```
#### 3.4. 详细数据传递与状态变化 (Detailed Data Passing and State Changes)
* 结合调用图,详细说明具体数据值(参数、返回值、关键变量)如何在函数/方法间传递,以及在与问题相关的执行过程中变量状态如何变化。
* 关注特定语言的数据传递机制 (e.g., pass-by-value, pass-by-reference).
#### 3.5. 逻辑解释 (Logical Explanation)
* 解释为什么代码会这样运行,将其与用户的具体问题联系起来,并结合编程语言特性进行说明。
### 4. 总结 (Summary - 复杂问题推荐)
* 根据详细分析,简要总结关键发现或问题的答案。
---
## VI. STYLE & TONE (Chinese Output)
* **Professional & Technical**: Maintain a formal, expert tone.
* **Analytical & Pedagogical**: Focus on insightful analysis and clear explanations.
* **Precise Terminology**: Use correct technical terms.
* **Clarity & Structure**: Employ lists, bullet points, Markdown code blocks, and the specified diagramming symbols for maximum clarity.
* **Helpful & Informative**: The goal is to assist and educate.
## VII. CONSTRAINTS & PROHIBITED BEHAVIORS
1. **Confine Analysis**: Your analysis MUST be strictly confined to the provided code snippet.
2. **Standard Library Assumption**: Assume standard library functions behave as documented unless their implementation is part of the provided code.
3. **No External Knowledge**: Do not use external knowledge beyond standard libraries unless explicitly provided in the context.
4. **No Speculation**: Avoid speculative answers. If information is insufficient to provide a definitive answer based *solely* on the provided code, clearly state what information is missing.
5. **No Generic Tutorials**: Do not provide generic tutorials or explanations of basic syntax unless it's directly essential for explaining the specific behavior in the provided code relevant to the user's question.
6. **Focus on Code Context**: Always frame explanations within the context of the specific implementation and behavior.
## VIII. SELF-CORRECTION / REFLECTION
* Before finalizing your response, review it to ensure:
* All parts of the user's question(s) have been addressed.
* The analysis is accurate and logically sound.
* The textual explanation and the call flow diagram are consistent and mutually reinforcing.
* The language used is precise, clear, and professional (Chinese).
* All formatting requirements have been met.
* The "Thinking Process" (CoT) is clearly articulated.

View File

@@ -0,0 +1,55 @@
Debug and resolve issues systematically in the codebase.
## CORE CHECKLIST ⚡
□ Identify and reproduce the issue completely before fixing
□ Perform root cause analysis (not just symptom treatment)
□ Provide file:line references for all changes
□ Add tests to prevent regression of this specific issue
## IMPLEMENTATION PHASES
### Issue Analysis Phase
1. Identify and reproduce the reported issue
2. Analyze error logs and stack traces
3. Study code flow and identify potential failure points
4. Review recent changes that might have introduced the issue
### Investigation Phase
1. Add strategic logging and debugging statements
2. Use debugging tools and profilers as appropriate
3. Test with different input conditions and edge cases
4. Isolate the root cause through systematic elimination
### Root Cause Analysis
1. Document the exact cause of the issue
2. Identify contributing factors and conditions
3. Assess impact scope and affected functionality
4. Determine if similar issues exist elsewhere
### Resolution Phase
1. Implement minimal, targeted fix for the root cause
2. Ensure fix doesn't introduce new issues or regressions
3. Add proper error handling and validation
4. Include defensive programming measures
### Prevention Phase
1. Add tests to prevent regression of this issue
2. Improve error messages and logging
3. Add monitoring or alerts for early detection
4. Document lessons learned and prevention strategies
## OUTPUT REQUIREMENTS
- Detailed root cause analysis with file:line references
- Exact code changes made to resolve the issue
- New tests added to prevent regression
- Debugging process documentation and lessons learned
- Impact assessment and affected functionality
## VERIFICATION CHECKLIST ✓
□ Root cause identified and documented (not just symptoms)
□ Minimal fix applied without introducing new issues
□ Tests added to prevent this specific regression
□ Similar issues checked and addressed if found
□ Prevention measures documented
Focus: Systematic root cause resolution with regression prevention.
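As an example of the "strategic logging" step in the Investigation Phase, a hypothetical instrumented function might look like the following Python sketch (names and thresholds are placeholders).
```python
import logging

logger = logging.getLogger(__name__)


def process_order(order: dict) -> float:
    # Strategic logging around the suspected failure point: capture the inputs and
    # the intermediate total so the failing condition can be isolated from the logs.
    logger.debug("process_order called with keys=%s", sorted(order))
    items = order.get("items", [])
    total = sum(item["price"] * item["qty"] for item in items)
    logger.debug("computed total=%s for %d items", total, len(items))
    if total < 0:
        logger.warning("negative total %s for order id=%s", total, order.get("id"))
    return total
```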

View File

@@ -0,0 +1,70 @@
Create comprehensive tests for the codebase.
## Planning Required
Before creating tests, you MUST:
1. Analyze existing test coverage and identify gaps
2. Study testing frameworks and conventions used
3. Plan test strategy covering unit, integration, and e2e
4. Design test data management approach
## Core Checklist
- [ ] Analyze coverage gaps
- [ ] Follow testing frameworks and conventions
- [ ] Include unit, integration, and e2e tests
- [ ] Ensure tests are reliable and deterministic
## IMPLEMENTATION PHASES
### Test Strategy Phase
1. Analyze existing test coverage and identify gaps
2. Study codebase architecture and critical paths
3. Identify edge cases and error scenarios
4. Review testing frameworks and conventions used
### Unit Testing Phase
1. Write tests for individual functions and methods
2. Test all branches and conditional logic
3. Cover edge cases and boundary conditions
4. Mock external dependencies appropriately
### Integration Testing Phase
1. Test interactions between components and modules
2. Verify API endpoints and data flow
3. Test database operations and transactions
4. Validate external service integrations
### End-to-End Testing Phase
1. Test complete user workflows and scenarios
2. Verify critical business logic and processes
3. Test error handling and recovery mechanisms
4. Validate performance under load
### Quality Assurance
1. Ensure tests are reliable and deterministic
2. Make tests readable and maintainable
3. Add proper test documentation and comments
4. Follow testing best practices and conventions
### Test Data Management
1. Create realistic test data and fixtures
2. Ensure test isolation and cleanup
3. Use factories or builders for complex objects
4. Handle sensitive data appropriately in tests
## OUTPUT REQUIREMENTS
- Comprehensive test suite with high coverage
- Performance benchmarks where relevant
- Testing strategy and conventions documentation
- Test coverage metrics and quality improvements
- File:line references for tested code
## Verification Checklist
Before finalizing, verify:
- [ ] Coverage gaps filled
- [ ] All test types included
- [ ] Tests are reliable (no flaky tests)
- [ ] Test data properly managed
- [ ] Conventions followed
## Focus
High-quality, reliable test suite with comprehensive coverage.
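For instance, a deterministic unit test that mocks an external dependency and uses a small data factory might look like this hypothetical Python sketch; the function under test is inlined so the example is self-contained.
```python
from unittest.mock import Mock


def fetch_display_name(client, user_id: str) -> str:
    # Code under test, inlined so the sketch is self-contained; `client` wraps an
    # external user service in the real system.
    user = client.get_user(user_id)
    return user["name"].strip().title()


def make_user(**overrides) -> dict:
    # Small factory keeps test data realistic while letting each test state only
    # the fields it cares about.
    user = {"id": "usr_1", "name": "  ada lovelace "}
    user.update(overrides)
    return user


def test_fetch_display_name_normalizes_whitespace_and_case():
    client = Mock()
    client.get_user.return_value = make_user()
    assert fetch_display_name(client, "usr_1") == "Ada Lovelace"
    client.get_user.assert_called_once_with("usr_1")
```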

View File

@@ -0,0 +1,55 @@
Create a reusable component following project conventions and best practices.
## CORE CHECKLIST ⚡
□ Analyze existing component patterns BEFORE implementing
□ Follow established naming conventions and prop patterns
□ Include comprehensive tests (unit + visual + accessibility)
□ Provide complete TypeScript types and documentation
## IMPLEMENTATION PHASES
### Design Phase
1. Analyze existing component patterns and structures
2. Identify reusable design principles and styling approaches
3. Review component hierarchy and prop patterns
4. Study existing component documentation and usage
### Development Phase
1. Create component with proper TypeScript interfaces
2. Implement following established naming conventions
3. Add appropriate default props and validation
4. Include comprehensive prop documentation
### Styling Phase
1. Follow existing styling methodology (CSS modules, styled-components, etc.)
2. Ensure responsive design principles
3. Add proper theming support if applicable
4. Include accessibility considerations (ARIA, keyboard navigation)
### Testing Phase
1. Write component tests covering all props and states
2. Test accessibility compliance
3. Add visual regression tests if applicable
4. Test component in different contexts and layouts
### Documentation Phase
1. Create usage examples and code snippets
2. Document all props and their purposes
3. Include accessibility guidelines
4. Add integration examples with other components
## OUTPUT REQUIREMENTS
- Complete component implementation with TypeScript types
- Usage examples and integration patterns
- Component API documentation and best practices
- Test suite with accessibility validation
- File:line references for pattern sources
## VERIFICATION CHECKLIST ✓
□ Implementation follows existing component patterns
□ Complete TypeScript types and prop documentation
□ Comprehensive tests (unit + visual + accessibility)
□ Accessibility compliance (ARIA, keyboard navigation)
□ Usage examples and integration documented
Focus: Production-ready reusable component with comprehensive documentation and testing.

View File

@@ -0,0 +1,58 @@
Implement a new feature following project conventions and best practices.
## Planning Required
Before implementing, you MUST:
1. Study existing code patterns and conventions
2. Review project architecture and design principles
3. Plan implementation with error handling and tests
4. Document integration points and dependencies
## Core Checklist
- [ ] Study existing code patterns first
- [ ] Follow project conventions and architecture
- [ ] Include comprehensive tests
- [ ] Provide file:line references
## IMPLEMENTATION PHASES
### Analysis Phase
1. Study existing code patterns and conventions
2. Identify similar features and their implementation approaches
3. Review project architecture and design principles
4. Understand dependencies and integration points
### Implementation Phase
1. Create feature following established patterns
2. Implement with proper error handling and validation
3. Add comprehensive logging for debugging
4. Follow security best practices
### Integration Phase
1. Ensure seamless integration with existing systems
2. Update configuration files as needed
3. Add proper TypeScript types and interfaces
4. Update documentation and comments
### Testing Phase
1. Write unit tests covering edge cases
2. Add integration tests for feature workflows
3. Verify error scenarios are properly handled
4. Test performance and security implications
## OUTPUT REQUIREMENTS
- File:line references for all changes
- Code examples demonstrating key patterns
- Explanation of architectural decisions made
- Documentation of new dependencies or configurations
- Test coverage summary
## Verification Checklist
Before finalizing, verify:
- [ ] Follows existing patterns
- [ ] Complete test coverage
- [ ] Documentation updated
- [ ] No breaking changes
- [ ] Security and performance validated
## Focus
Production-ready implementation with comprehensive testing and documentation.
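As a rough illustration of the Implementation Phase goals (validation, error handling, logging), a hypothetical feature function might look like the following Python sketch; the names and the email rule are assumptions, not project conventions.
```python
import logging
import re

logger = logging.getLogger(__name__)

# Simplified email rule for illustration only; real projects may have stricter rules.
_EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")


class ValidationError(ValueError):
    """Raised when user-supplied input fails validation."""


def register_subscriber(email: str, store: dict) -> bool:
    """Validate input, guard against duplicates, and log the outcome."""
    if not _EMAIL_RE.match(email):
        raise ValidationError(f"invalid email address: {email!r}")
    if email in store:
        logger.info("subscriber %s already registered, skipping", email)
        return False
    store[email] = True
    logger.info("registered new subscriber %s", email)
    return True
```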

View File

@@ -0,0 +1,55 @@
Refactor existing code to improve quality, performance, or maintainability.
## CORE CHECKLIST ⚡
□ Preserve existing functionality (no behavioral changes unless specified)
□ Ensure all existing tests continue to pass
□ Plan incremental changes (avoid big-bang refactoring)
□ Provide file:line references for all modifications
## IMPLEMENTATION PHASES
### Analysis Phase
1. Identify code smells and technical debt
2. Analyze performance bottlenecks and inefficiencies
3. Review code complexity and maintainability metrics
4. Study existing test coverage and identify gaps
### Planning Phase
1. Create refactoring strategy preserving existing functionality
2. Identify breaking changes and migration paths
3. Plan incremental refactoring steps
4. Consider backward compatibility requirements
### Refactoring Phase
1. Apply SOLID principles and design patterns
2. Improve code readability and documentation
3. Optimize performance while maintaining functionality
4. Reduce code duplication and improve reusability
### Validation Phase
1. Ensure all existing tests continue to pass
2. Add new tests for improved code coverage
3. Verify performance improvements with benchmarks
4. Test edge cases and error scenarios
### Migration Phase
1. Update dependent code to use refactored interfaces
2. Update documentation and usage examples
3. Provide migration guides for breaking changes
4. Add deprecation warnings for old interfaces
## OUTPUT REQUIREMENTS
- Before/after code comparisons with file:line references
- Performance improvements documented with benchmarks
- Migration instructions for breaking changes
- Updated test coverage and quality metrics
- Technical debt reduction summary
## VERIFICATION CHECKLIST ✓
□ All existing tests pass (functionality preserved)
□ New tests added for improved coverage
□ Performance verified with benchmarks (if applicable)
□ Backward compatibility maintained or migration provided
□ Documentation updated with refactoring changes
Focus: Incremental quality improvement while preserving functionality.
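For example, a small behavior-preserving refactoring might extract duplicated formatting logic into a shared helper, as in this hypothetical Python sketch (the before-state and file references in the comments are placeholders).
```python
# Before (duplicated in two hypothetical call sites, e.g. billing/invoice.py:40 and :65):
#     line = f"{item['name']:<20}{item['qty']:>5}{item['qty'] * item['price']:>10.2f}"
# After: the shared formatting is extracted into one helper; callers keep identical
# output, and the helper can now be covered by a focused unit test.

def format_line(name: str, qty: int, unit_price: float) -> str:
    return f"{name:<20}{qty:>5}{qty * unit_price:>10.2f}"


def render_invoice(items: list) -> str:
    return "\n".join(format_line(i["name"], i["qty"], i["price"]) for i in items)


if __name__ == "__main__":
    print(render_invoice([{"name": "Widget", "qty": 2, "price": 3.5}]))
```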

View File

@@ -0,0 +1,15 @@
Generate comprehensive API documentation for code or HTTP services.
## CORE CHECKLIST ⚡
□ Include only sections relevant to the project type (Code API vs. HTTP API)
□ Provide complete and runnable examples for HTTP APIs
□ Use signatures-only for Code API documentation (no implementation)
□ Document all public-facing APIs, not internal ones
## UNIFIED API DOCUMENTATION TEMPLATE
This template supports both **Code API** (for libraries/modules) and **HTTP API** (for web services). Include only the sections relevant to your project type.
---
...(content truncated)...

View File

@@ -0,0 +1,27 @@
Generate a navigation README for directories that contain only subdirectories.
## CORE CHECKLIST ⚡
□ Keep the content brief and act as an index
□ Use one-line descriptions for each module
□ Ensure all mentioned modules link to their respective READMEs
□ Use scannable formats like tables and lists
## REQUIRED CONTENT
1. **Overview**: Brief description of the directory's purpose.
2. **Directory Structure**: A tree view of subdirectories with one-line descriptions.
3. **Module Quick Reference**: A table with links, purposes, and key features.
4. **How to Navigate**: Guidance on which module to explore for specific needs.
5. **Module Relationships (Optional)**: A simple diagram showing dependencies.
## OUTPUT REQUIREMENTS
- A scannable index for navigating subdirectories.
- Links to each submodule's detailed documentation.
- A clear, high-level overview of the directory's contents.
## VERIFICATION CHECKLIST ✓
□ The generated README is brief and serves as a scannable index
□ All submodules are linked correctly
□ Descriptions are concise and clear
□ The structure follows the required content outline
Focus: Creating a clear and concise navigation hub for parent directories.

View File

@@ -0,0 +1,49 @@
Generate module documentation focused on understanding and usage.
## Planning Required
Before providing documentation, you MUST:
1. Understand what the module does and why it exists
2. Review existing documentation to avoid duplication
3. Prepare practical usage examples
4. Identify module boundaries and dependencies
## Core Checklist
- [ ] Explain WHAT, WHY, and HOW
- [ ] Reference API.md instead of duplicating signatures
- [ ] Include practical usage examples
- [ ] Define module boundaries and dependencies
## DOCUMENTATION STRUCTURE
### 1. Purpose
- **What**: Clearly state what this module is responsible for.
- **Why**: Explain the problem it solves.
- **Boundaries**: Define what is in and out of scope.
### 2. Core Concepts
- Explain key concepts, patterns, or abstractions.
### 3. Usage Scenarios
- Provide 2-4 common use cases with code examples.
### 4. Dependencies
- List internal and external dependencies with explanations.
### 5. Configuration
- Document environment variables and configuration options.
### 6. Testing
- Explain how to run tests for the module.
### 7. Common Issues
- List common problems and their solutions.
## Verification Checklist
Before finalizing output, verify:
- [ ] Module purpose, scope, and boundaries are clear
- [ ] Core concepts are explained
- [ ] Usage examples are practical and realistic
- [ ] Dependencies and configuration are documented
## Focus
Explain module purpose and usage, not just API details.

View File

@@ -0,0 +1,41 @@
Generate comprehensive architecture documentation for the entire project.
## CORE CHECKLIST ⚡
□ Synthesize information from all modules; do not duplicate content
□ Maintain a system-level perspective, focusing on module interactions
□ Use visual aids (like ASCII diagrams) to clarify structure
□ Explain the WHY behind architectural decisions
## DOCUMENTATION STRUCTURE
### 1. System Overview
- Architectural Style, Core Principles, and Technology Stack.
### 2. System Structure
- Visual representation of the system's layers or components.
### 3. Module Map
- A table listing all modules, their layers, responsibilities, and dependencies.
### 4. Module Interactions
- Describe key data flows and show a dependency graph.
### 5. Design Patterns
- Document key architectural patterns used across the project.
### 6. Aggregated API Overview
- A high-level summary of all public APIs, grouped by category.
### 7. Data Flow
- Describe the typical request lifecycle or event flow.
### 8. Security and Scalability
- Overview of security measures and scalability considerations.
## VERIFICATION CHECKLIST ✓
□ The documentation provides a cohesive, system-level view
□ Module interactions and dependencies are clearly illustrated
□ The rationale behind major design patterns and decisions is explained
□ The document synthesizes, rather than duplicates, module-level details
Focus: Providing a holistic, system-level understanding of the project architecture.

View File

@@ -0,0 +1,35 @@
Generate practical, end-to-end examples demonstrating core project usage.
## CORE CHECKLIST ⚡
□ Provide complete, runnable code for every example
□ Focus on realistic, real-world scenarios, not trivial cases
□ Explain the flow and how different modules interact
□ Include expected output to verify correctness
## EXAMPLES STRUCTURE
### 1. Introduction
- Overview of the examples and any prerequisites.
### 2. Quick Start Example
- The simplest possible working example to verify setup.
### 3. Core Use Cases
- 3-5 complete examples for common scenarios with code, output, and explanations.
### 4. Advanced & Integration Examples
- Showcase more complex scenarios or integrations with external systems.
### 5. Testing Examples
- Show how to test code that uses the project.
### 6. Best Practices & Troubleshooting
- Demonstrate recommended patterns and provide solutions to common issues.
## VERIFICATION CHECKLIST ✓
□ All examples are complete, runnable, and tested
□ Scenarios are realistic and demonstrate key project features
□ Explanations clarify module interactions and data flow
□ Best practices and error handling are demonstrated
Focus: Helping users accomplish common tasks through complete, practical examples.

View File

@@ -0,0 +1,35 @@
Generate a comprehensive project-level README documentation.
## CORE CHECKLIST ⚡
□ Clearly state the project's purpose and target audience
□ Provide clear, runnable instructions for getting started
□ Outline the development workflow and coding standards
□ Offer a high-level overview of the project structure and architecture
## README STRUCTURE
### 1. Overview
- Purpose, Target Audience, and Key Features.
### 2. System Architecture
- Architectural Style, Core Components, Tech Stack, and Design Principles.
### 3. Getting Started
- Prerequisites, Installation, Configuration, and Running the Project.
### 4. Development Workflow
- Branching Strategy, Coding Standards, Testing, and Build/Deployment.
### 5. Project Structure
- A high-level tree view of the main directories.
### 6. Navigation
- Links to more detailed documentation (modules, API, architecture).
## VERIFICATION CHECKLIST ✓
□ The project's purpose and value proposition are clear
□ A new developer can successfully set up and run the project
□ The development process and standards are well-defined
□ The README provides clear navigation to other key documents
Focus: Providing a central entry point for new users and developers to understand and run the project.

View File

@@ -0,0 +1,266 @@
生成符合 RESTful 规范的完整 Swagger/OpenAPI API 文档。
## 核心检查清单 ⚡
□ 严格遵循 RESTful API 设计规范
□ 每个接口必须包含功能描述、请求方法、URL路径、参数说明
□ 必须包含全局 Security 配置(Authorization: Bearer Token)
□ 使用中文命名目录,保持层级清晰
□ 每个字段需注明:类型、是否必填、示例值、说明
□ 包含成功和失败的响应示例
□ 标注接口版本和最后更新时间
## OpenAPI 规范结构
### 1. 文档信息 (info)
```yaml
openapi: 3.0.3
info:
  title: {项目名称} API
  description: |
    {项目描述}
    ## 认证方式
    所有需要认证的接口必须在请求头中携带 Bearer Token
    ```
    Authorization: Bearer <your-token>
    ```
  version: "1.0.0"
  contact:
    name: API 支持
    email: api-support@example.com
  license:
    name: MIT
```
### 2. 服务器配置 (servers)
```yaml
servers:
  - url: https://api.example.com/v1
    description: 生产环境
  - url: https://staging-api.example.com/v1
    description: 测试环境
  - url: http://localhost:3000/v1
    description: 开发环境
```
### 3. 全局安全配置 (security)
```yaml
components:
  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
      bearerFormat: JWT
      description: |
        JWT Token 认证
        获取方式:调用 POST /auth/login 接口
        有效期:24小时
        刷新:调用 POST /auth/refresh 接口
security:
  - bearerAuth: []
```
### 4. 接口路径规范 (paths)
```yaml
paths:
  /users:
    get:
      tags:
        - 用户管理
      summary: 获取用户列表
      description: |
        分页获取系统用户列表,支持按状态、角色筛选。
        **适用环境**: 开发、测试、生产
        **前置条件**: 需要管理员权限
      operationId: listUsers
      security:
        - bearerAuth: []
      parameters:
        - name: page
          in: query
          required: false
          schema:
            type: integer
            default: 1
            minimum: 1
          description: 页码,从1开始
          example: 1
        - name: limit
          in: query
          required: false
          schema:
            type: integer
            default: 20
            minimum: 1
            maximum: 100
          description: 每页数量
          example: 20
      responses:
        '200':
          description: 成功获取用户列表
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/UserListResponse'
              example:
                code: 0
                message: success
                data:
                  items:
                    - id: "usr_123"
                      email: "user@example.com"
                      name: "张三"
                  total: 100
                  page: 1
                  limit: 20
        '401':
          $ref: '#/components/responses/UnauthorizedError'
        '403':
          $ref: '#/components/responses/ForbiddenError'
```
### 5. 数据模型规范 (schemas)
```yaml
components:
  schemas:
    # 基础响应结构
    BaseResponse:
      type: object
      required:
        - code
        - message
        - timestamp
      properties:
        code:
          type: integer
          description: 业务状态码,0表示成功
          example: 0
        message:
          type: string
          description: 响应消息
          example: success
        timestamp:
          type: string
          format: date-time
          description: 响应时间戳
          example: "2025-01-01T12:00:00Z"
    # 错误响应
    ErrorResponse:
      type: object
      required:
        - code
        - message
      properties:
        code:
          type: string
          description: 错误码
          example: "AUTH_001"
        message:
          type: string
          description: 错误信息
          example: "Token 无效或已过期"
        details:
          type: object
          description: 错误详情
          additionalProperties: true
```
### 6. 统一响应定义 (responses)
```yaml
components:
  responses:
    UnauthorizedError:
      description: 认证失败
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ErrorResponse'
          example:
            code: "AUTH_001"
            message: "Token 无效或已过期"
    ForbiddenError:
      description: 权限不足
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ErrorResponse'
          example:
            code: "AUTH_003"
            message: "权限不足,需要管理员角色"
    NotFoundError:
      description: 资源不存在
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ErrorResponse'
          example:
            code: "BIZ_002"
            message: "资源不存在"
    ValidationError:
      description: 参数验证失败
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/ErrorResponse'
          example:
            code: "PARAM_001"
            message: "参数格式错误"
            details:
              field: "email"
              reason: "邮箱格式不正确"
```
## 接口文档必填项
每个接口必须包含:
1. **基本信息**
- tags: 所属模块(中文)
- summary: 一句话描述
- description: 详细说明(含适用环境、前置条件)
- operationId: 唯一操作标识
2. **安全配置**
- security: 认证要求
3. **参数定义**
- name: 参数名
- in: 位置 (path/query/header/cookie)
- required: 是否必填
- schema: 类型定义(含 default, minimum, maximum)
- description: 参数说明
- example: 示例值
4. **响应定义**
- 200: 成功响应(含完整示例)
- 400: 参数错误
- 401: 认证失败
- 403: 权限不足
- 404: 资源不存在(如适用)
- 500: 服务器错误
5. **版本信息**
- x-version: 接口版本
- x-updated: 最后更新时间
## 错误码规范
| 前缀 | 类别 | HTTP状态码 | 说明 |
|------|------|------------|------|
| AUTH_ | 认证错误 | 401/403 | 身份验证相关 |
| PARAM_ | 参数错误 | 400 | 请求参数验证 |
| BIZ_ | 业务错误 | 409/422 | 业务逻辑相关 |
| SYS_ | 系统错误 | 500/503 | 服务器异常 |
## RESTful 设计规范
1. **URL 命名**: 使用复数名词,小写,连字符分隔
2. **HTTP 方法**: GET(查询)、POST(创建)、PUT(更新)、DELETE(删除)、PATCH(部分更新)
3. **状态码**: 正确使用 2xx/3xx/4xx/5xx
4. **分页**: 使用 page/limit 或 offset/limit
5. **筛选**: 使用查询参数
6. **版本**: URL 路径 (/v1/) 或 请求头
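作为补充,下面给出一个假设性的 Python 校验脚本草稿,用于检查生成的 OpenAPI 文件是否覆盖上文"接口文档必填项"中要求的响应码;文件名 `openapi.yaml` 与校验范围均为占位假设,需按实际项目调整(依赖 PyYAML)。
```python
# 假设性校验草稿:检查每个接口是否定义了必填响应码(依赖 PyYAML)。
import yaml

REQUIRED = {"200", "400", "401", "403", "500"}
HTTP_METHODS = {"get", "post", "put", "delete", "patch", "head", "options"}

with open("openapi.yaml", encoding="utf-8") as fh:  # 文件名为占位
    spec = yaml.safe_load(fh)

for path, operations in (spec.get("paths") or {}).items():
    for method, op in operations.items():
        if method.lower() not in HTTP_METHODS:
            continue  # 跳过 parameters 等路径级字段
        missing = REQUIRED - set(op.get("responses", {}))
        if missing:
            print(f"{method.upper()} {path} 缺少响应定义: {sorted(missing)}")
```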

View File

@@ -0,0 +1,165 @@
Create or update CLAUDE.md documentation using unified module/file template.
## ⚠️ FILE NAMING RULE (CRITICAL)
- Target file: MUST be named exactly `CLAUDE.md` in the current directory
- NEVER create files like `ToolSidebar.CLAUDE.md` or `[filename].CLAUDE.md`
- ALWAYS use the fixed name: `CLAUDE.md`
## CORE CHECKLIST ⚡
□ MUST create/update file named exactly 'CLAUDE.md' (not variants)
□ MUST include all 6 sections: Purpose, Structure, Components, Dependencies, Integration, Implementation
□ For code files: Document all public/exported APIs with complete parameter details
□ For folders: Reference subdirectory CLAUDE.md files instead of duplicating
□ Provide method signatures with parameter types, descriptions, defaults, and return values
□ Distinguish internal dependencies from external libraries
□ Apply RULES template requirements exactly as specified
## DOCUMENTATION REQUIREMENTS
### Analysis Strategy
- **Folders/Modules**: Analyze directory structure, sub-modules, and architectural patterns
- **Code Files**: Analyze classes, functions, interfaces, and implementation details
### Required Sections (ALL 6 MUST BE INCLUDED)
#### 1. Purpose and Scope
- Clear description of what this module/file does
- Main responsibilities and boundaries
- Role within the larger system
#### 2. Structure Overview
**For Folders**: Directory organization, sub-module categorization, architectural layout
**For Code Files**: File organization (imports, exports), class hierarchy, function grouping
#### 3. Key Components
**For Folders**: Sub-modules, major components, entry points, public interfaces
**For Code Files**:
- Core classes with descriptions and responsibilities
- Key methods with complete signatures:
```
methodName(param1: Type1, param2: Type2): ReturnType
- Purpose: [what it does]
- Parameters:
• param1 (Type1): [description] [default: value]
• param2 (Type2): [description] [optional]
- Returns: (ReturnType) [description]
- Throws: [exception types and conditions]
- Example: [usage example for complex methods]
```
- Important interfaces/types
- Exported APIs
#### 4. Dependencies
**Internal Dependencies**: Other modules/files within project (with purpose)
**External Dependencies**: Third-party libraries and frameworks (with purpose, version constraints if critical)
#### 5. Integration Points
- How this connects to other parts
- Public APIs or interfaces exposed
- Data flow patterns (input → processing → output)
- Event handling or callbacks
- Extension points for customization
#### 6. Implementation Notes
- Key design patterns used (e.g., Singleton, Factory, Observer)
- Important technical decisions and rationale
- Configuration requirements or environment variables
- Performance considerations
- Security considerations if applicable
- Known limitations or caveats
## OUTPUT REQUIREMENTS
### File Naming (CRITICAL)
- **Output file**: MUST be named exactly `CLAUDE.md` in the current directory
- **Examples of WRONG naming**: `ToolSidebar.CLAUDE.md`, `index.CLAUDE.md`, `utils.CLAUDE.md`
- **Correct naming**: `CLAUDE.md` (always, for all directories)
### Template Structure
```markdown
# [Module/File Name]
## Purpose and Scope
[Clear description of responsibilities and role]
## Structure Overview
[Directory structure for modules OR File organization for code files]
## Key Components
### [Component/Class Name]
- Description: [Brief description]
- Responsibilities: [What it does]
- Key Methods:
#### `methodName(param1: Type1, param2?: Type2): ReturnType`
- Purpose: [what this method does]
- Parameters:
• param1 (Type1): [description]
• param2 (Type2): [description] [optional] [default: value]
- Returns: (ReturnType) [description]
- Throws: [exceptions if applicable]
## Dependencies
### Internal Dependencies
- `[module/file path]` - [purpose]
### External Dependencies
- `[library name]` - [purpose and usage]
## Integration Points
### Public APIs
- `[API/function signature]` - [description]
### Data Flow
[How data flows through this module/file]
## Implementation Notes
### Design Patterns
- [Pattern name]: [Usage and rationale]
### Technical Decisions
- [Decision]: [Rationale]
### Considerations
- Performance: [notes]
- Security: [notes]
- Limitations: [notes]
```
### Documentation Style
- **Concise but complete** - Focus on "what" and "why", not detailed "how"
- **Code signatures** for key APIs - Include all parameter details
- **Parameters**: Name, type, description, optional/default indicators, constraints
- **Return values**: Type, description, special conditions (null, undefined, empty)
- **Evidence-based** - Reference related documentation when appropriate
- **Examples** for complex methods - Show usage patterns
### Content Restrictions (STRICTLY AVOID)
- ❌ Duplicating content from other CLAUDE.md files (reference instead)
- ❌ Overly detailed code explanations (code should be self-documenting)
- ❌ Complete code listings (use signatures and descriptions)
- ❌ Version-specific details unless critical
- ❌ Documenting every single function (focus on public/exported APIs)
### Method Documentation Rules
- **Public/Exported methods**: MUST document with full parameter details
- **Private/Internal methods**: Only document if complex or critical
- **Parameters**: MUST include type, description, constraints, defaults
- **Return values**: MUST document type and description
- **Exceptions**: Document all thrown errors
### Special Instructions
- If analyzing folder with existing subdirectory CLAUDE.md files → reference them
- For code files → prioritize exported/public APIs
- Keep dependency lists focused on direct dependencies (not transitive)
- Update existing CLAUDE.md files rather than creating duplicate sections
## VERIFICATION CHECKLIST ✓
□ Output file is named exactly 'CLAUDE.md' (not [filename].CLAUDE.md)
□ All 6 required sections included (Purpose, Structure, Components, Dependencies, Integration, Implementation)
□ All public/exported APIs documented with complete signatures
□ Parameters documented with types, descriptions, and defaults
□ References used instead of duplicating subdirectory documentation
□ Internal vs external dependencies clearly distinguished
□ Examples provided for non-trivial methods
Focus: Comprehensive yet concise documentation covering all essential aspects without redundancy.

View File

@@ -0,0 +1,30 @@
Create detailed task breakdown and implementation planning.
## CORE CHECKLIST ⚡
□ Break down tasks into manageable subtasks (3-8 hours each)
□ Identify all dependencies and execution sequence
□ Provide realistic effort estimates with buffer
□ Document risks for each task
## REQUIRED ANALYSIS
1. Break down complex tasks into manageable subtasks
2. Identify dependencies and execution sequence requirements
3. Estimate effort and resource requirements for each task
4. Map task relationships and critical path analysis
5. Document risks and mitigation strategies
## OUTPUT REQUIREMENTS
- Hierarchical task breakdown with specific deliverables
- Dependency mapping and execution sequence
- Effort estimation with confidence levels
- Resource allocation and skill requirements
- Risk assessment and mitigation plans for each task
## VERIFICATION CHECKLIST ✓
□ All tasks broken down to manageable size (3-8 hours)
□ Dependencies mapped and critical path identified
□ Effort estimates realistic with buffer included
□ Every task has clear deliverable defined
□ Risks documented with mitigation strategies
Focus: Actionable task planning with clear deliverables, dependencies, and realistic timelines.

View File

@@ -0,0 +1,28 @@
Guide component implementation and development patterns.
## CORE CHECKLIST ⚡
□ Define component interface and API requirements clearly
□ Identify reusable patterns and composition strategies
□ Plan state management and data flow before implementation
□ Design a comprehensive testing and validation approach
## REQUIRED ANALYSIS
1. Define component interface and API requirements
2. Identify reusable patterns and composition strategies
3. Plan state management and data flow implementation
4. Design component testing and validation approach
5. Document integration points and usage examples
## OUTPUT REQUIREMENTS
- Component specification with clear interface definition
- Implementation patterns and best practices
- State management strategy and data flow design
- Testing approach and validation criteria
## VERIFICATION CHECKLIST ✓
□ Component specification includes a clear interface definition
□ Reusable implementation patterns and best practices are documented
□ State management and data flow design is clear and robust
□ A thorough testing and validation approach is defined
Focus: Reusable, maintainable component design with clear usage patterns.

View File

@@ -0,0 +1,127 @@
Conduct comprehensive concept evaluation to assess feasibility, identify risks, and provide optimization recommendations.
## CORE CHECKLIST ⚡
□ Evaluate all 6 dimensions: Conceptual, Architectural, Technical, Resource, Risk, Dependency
□ Provide quantified assessment scores (1-5 scale)
□ Classify risks by severity (LOW/MEDIUM/HIGH/CRITICAL)
□ Include specific, actionable recommendations
□ Integrate session context and existing patterns
## EVALUATION DIMENSIONS
### 1. Conceptual Integrity
- Design Coherence: Logical component connections
- Requirement Completeness: All requirements identified
- Scope Clarity: Defined and bounded scope
- Success Criteria: Measurable metrics established
### 2. Architectural Soundness
- System Integration: Fit with existing architecture
- Design Patterns: Appropriate pattern usage
- Modularity: Maintainable structure
- Scalability: Future requirement capacity
### 3. Technical Feasibility
- Implementation Complexity: Difficulty level assessment
- Technology Maturity: Stable, supported technologies
- Skill Requirements: Team expertise availability
- Infrastructure Needs: Required changes/additions
### 4. Resource Assessment
- Development Time: Realistic estimation
- Team Resources: Size and skill composition
- Budget Impact: Financial implications
- Opportunity Cost: Delayed initiatives
### 5. Risk Identification
- Technical Risks: Limitations, complexity, unknowns
- Business Risks: Market timing, adoption, impact
- Integration Risks: Compatibility challenges
- Resource Risks: Availability, skills, timeline
### 6. Dependency Analysis
- External Dependencies: Third-party services/tools
- Internal Dependencies: Systems, teams, resources
- Temporal Dependencies: Sequence and timing
- Critical Path: Essential blocking dependencies
## ASSESSMENT METHODOLOGY
**Scoring Scale** (1-5):
- 5 - Excellent: Minimal risk, well-defined, highly feasible
- 4 - Good: Low risk, mostly clear, feasible
- 3 - Average: Moderate risk, needs clarification
- 2 - Poor: High risk, major changes required
- 1 - Critical: Very high risk, fundamental problems
**Risk Levels**:
- LOW: Minor issues, easily addressable
- MEDIUM: Manageable challenges
- HIGH: Significant concerns, major mitigation needed
- CRITICAL: Fundamental viability threats
**Optimization Priorities**:
- CRITICAL: Must address before planning
- IMPORTANT: Should address for optimal outcomes
- OPTIONAL: Nice-to-have improvements
## OUTPUT REQUIREMENTS
### Evaluation Summary
```markdown
## Overall Assessment
- Feasibility Score: X/5
- Risk Level: LOW/MEDIUM/HIGH/CRITICAL
- Recommendation: PROCEED/PROCEED_WITH_MODIFICATIONS/RECONSIDER/REJECT
## Dimension Scores
- Conceptual Integrity: X/5
- Architectural Soundness: X/5
- Technical Feasibility: X/5
- Resource Assessment: X/5
- Risk Profile: X/5
- Dependency Complexity: X/5
```
### Detailed Analysis
For each dimension:
1. Assessment: Current state evaluation
2. Strengths: What works well
3. Concerns: Issues and risks
4. Recommendations: Specific improvements
### Risk Matrix
| Risk Category | Level | Impact | Mitigation Strategy |
|---------------|-------|--------|---------------------|
| Technical | HIGH | Delays | Proof of concept |
| Resource | MED | Budget | Phased approach |
### Optimization Roadmap
1. CRITICAL: [Issue] - [Recommendation] - [Impact]
2. IMPORTANT: [Issue] - [Recommendation] - [Impact]
3. OPTIONAL: [Issue] - [Recommendation] - [Impact]
## CONTEXT INTEGRATION
**Session Memory**: Reference current conversation, decisions, patterns from session history
**Existing Patterns**: Identify similar implementations, evaluate success/failure, leverage proven approaches
**Architectural Alignment**: Ensure consistency, consider evolution, apply standards
**Business Context**: Strategic fit, user impact, competitive advantage, timeline alignment
## PROJECT TYPE CONSIDERATIONS
**Innovation Projects**: Higher risk tolerance, learning focus, phased approach
**Critical Business**: Lower risk tolerance, reliability focus, comprehensive mitigation
**Integration Projects**: Compatibility focus, minimal disruption, rollback strategies
**Greenfield Projects**: Architectural innovation, scalability, technology standardization
## VERIFICATION CHECKLIST ✓
□ All 6 evaluation dimensions thoroughly assessed with scores
□ Risk matrix completed with mitigation strategies
□ Optimization recommendations prioritized (CRITICAL/IMPORTANT/OPTIONAL)
□ Integration with existing systems evaluated
□ Resource requirements and timeline implications identified
□ Success criteria and validation metrics defined
□ Next steps and decision points outlined
Focus: Actionable insights to improve concept quality and reduce implementation risks.

View File

@@ -0,0 +1,109 @@
# 软件架构规划模板
## Role & Output Requirements
**Role**: Software architect specializing in technical planning
**Output Format**: Modification plan in Chinese following the specified structure
**Constraints**: Do NOT write or generate code. Provide planning and strategy only.
## Core Capabilities
- Understand complex codebases (structure, patterns, dependencies, data flow)
- Analyze requirements and translate to technical objectives
- Apply software design principles (SOLID, DRY, KISS, design patterns)
- Assess impacts, dependencies, and risks
- Create step-by-step modification plans
## Planning Process (Required)
**Before providing your final plan, you MUST:**
1. Analyze requirements and identify technical objectives
2. Explore existing code structure and patterns
3. Identify modification points and formulate strategy
4. Assess dependencies and risks
5. Present structured modification plan
## Objectives
1. Understand context (code, requirements, project background)
2. Analyze relevant code sections and their relationships
3. Create step-by-step modification plan (what, where, why, how)
4. Illustrate intended logic using call flow diagrams
5. Provide implementation context (variables, dependencies, side effects)
## Input
- Code snippets or file locations
- Modification requirements and goals
- Project context (if available)
## Output Structure (Required)
Output in Chinese using this Markdown structure:
---
### 0. 思考过程与规划策略 (Thinking Process & Planning Strategy)
Present your planning process in these steps:
1. **需求解析**: Break down requirements and clarify core objectives
2. **代码结构勘探**: Analyze current code structure and logic flow
3. **核心修改点识别**: Identify modification points and formulate strategy
4. **依赖与风险评估**: Assess dependencies and risks
5. **规划文档组织**: Organize planning document
### **代码修改规划方案 (Code Modification Plan)**
### **第一部分:需求分析与规划总览 (Part 1: Requirements Analysis & Planning Overview)**
* **1.1 用户原始需求结构化解析 (Structured Analysis of User's Original Requirements):**
* [将用户的原始需求拆解成一个或多个清晰、独立、可操作的要点列表。每个要点都是一个明确的目标。]
* **- 需求点 A:** [描述第一个具体需求]
* **- 需求点 B:** [描述第二个具体需求]
* **- ...**
* **1.2 技术实现目标与高级策略 (Technical Implementation Goals & High-Level Strategy):**
* [基于上述需求分析,将其转化为具体的、可衡量的技术目标。并简述为达成这些目标将采用的整体技术思路或架构策略。例如:为实现【需求点A】,我们需要在 `ServiceA` 中引入一个新的处理流程。为实现【需求点B】,我们将重构 `ModuleB` 的数据验证逻辑,以提高其扩展性。]
### **第二部分:涉及文件概览 (Part 2: Involved Files Overview)**
* **文件列表 (File List):** [列出所有识别出的相关文件名(若路径已知/可推断,请包含路径)。不仅包括直接修改的文件,也包括提供关键上下文或可能受间接影响的文件。示例: `- src/core/module_a.py (直接修改)`, `- src/utils/helpers.py (依赖项,可能受影响)`, `- configs/settings.json (配置参考)`]
### **第三部分:详细修改计划 (Part 3: Detailed Modification Plan)**
---
*针对每个需要直接修改的文件进行描述:*
**文件: [文件路径或文件名] (File: [File path or filename])**
* **1. 位置 (Location):**
* [清晰说明函数、类、方法或具体的代码区域,如果可能,指出大致行号范围。示例: 函数 `calculate_total_price` 内部,约第 75-80 行]
* **1.bis 相关原始代码片段 (Relevant Original Code Snippet):**
* [**在此处引用需要修改或在其附近进行修改的、最相关的几行原始代码。** 这为开发者提供了直接的上下文。如果代码未提供,则注明相关代码未提供,根据描述进行规划。]
* ```[language]
// 引用相关的1-5行原始代码
```
* **2. 修改描述与预期逻辑 (Modification Description & Intended Logic):**
* **当前状态简述 (Brief Current State):** [可选,如果有助于理解变更,简述当前位置代码的核心功能。]
* **拟议修改点 (Proposed Changes):**
* [分步骤详细描述需要进行的逻辑更改。用清晰的中文自然语言解释 *什么* 需要被改变或添加。]
* **预期逻辑与数据流示意 (Intended Logic and Data Flow Sketch):**
* [使用简洁调用图的风格,描述此修改点引入或改变后的 *预期* 控制流程和关键数据传递。]
* [**图例参考**: `───►` 调用/流程转向, `◄───` 返回/结果, `◊───` 条件分支, `ループ` 循环块, `[数据]` 表示关键数据, `// 注释` ]
* **修改理由 (Reason for Modification):** [解释 *为什么* 这个修改是必要的,并明确关联到 **第一部分** 中解析出的某个【需求点】或【技术目标】。]
* **预期结果 (Intended Outcome):** [描述此修改完成后,该代码段预期的行为或产出。]
* **3. 必要上下文与注意事项 (Necessary Context & Considerations):**
* [提及实施者在进行此特定更改时必须了解的关键变量、数据结构、已有函数的依赖关系、新引入的依赖。]
* [**重点指出**潜在的连锁反应、对其他模块的可能影响、性能考量、错误处理、事务性、并发问题或数据完整性等重要风险点。]
---
*(对每个需要修改的文件重复上述格式)*
## Key Requirements
1. **Language**: All output in Chinese
2. **No Code Generation**: Do not write actual code. Provide descriptive modification plan only
3. **Focus**: Detail what and why. Use logic sketches to illustrate how
4. **Completeness**: State assumptions clearly when information is incomplete
## Self-Review Checklist
Before providing final output, verify:
- [ ] Thinking process outlines structured analytical approach
- [ ] All requirements addressed in the plan
- [ ] Plan is logical, actionable, and detailed
- [ ] Modification reasons link back to requirements
- [ ] Context and risks are highlighted
- [ ] No actual code generated

View File

@@ -0,0 +1,30 @@
Plan system migration and modernization strategies.
## CORE CHECKLIST ⚡
□ Assess current system completely before planning migration
□ Plan incremental migration (avoid big-bang approach)
□ Include rollback plan for every migration step
□ Provide file:line references for all affected code
## REQUIRED ANALYSIS
1. Assess current system architecture and migration requirements
2. Identify migration paths and transformation strategies
3. Plan data migration and system cutover procedures
4. Evaluate compatibility and integration challenges
5. Document rollback plans and risk mitigation strategies
## OUTPUT REQUIREMENTS
- Migration strategy with step-by-step execution plan
- Data migration procedures and validation checkpoints
- Compatibility assessment with file:line references
- Risk analysis and rollback procedures for each phase
- Testing strategy for migration validation
## VERIFICATION CHECKLIST ✓
□ Migration planned in incremental phases (not big-bang)
□ Every phase has rollback plan documented
□ Data migration validated with checkpoints
□ Compatibility issues identified and mitigated
□ Testing strategy covers all migration phases
Focus: Low-risk incremental migration with comprehensive fallback options.
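As an optional illustration (assuming TypeScript; the shape and the file reference are invented, not a prescribed schema), a phased plan with per-phase rollback might be recorded like this:

```typescript
// Hypothetical sketch: one incremental migration phase with rollback and a validation checkpoint.
interface MigrationPhase {
  name: string;                 // e.g. "Phase 1: introduce adapter layer"
  affectedCode: string[];       // file:line references for this phase
  dataMigrationSteps: string[]; // ordered, independently verifiable steps
  validationCheckpoint: string; // how success of this phase is confirmed
  rollback: string[];           // ordered steps that revert this phase only
}

const plan: MigrationPhase[] = [
  {
    name: "Phase 1: introduce adapter layer (no behavior change)",
    affectedCode: ["src/db/client.ts:12-40"], // illustrative path
    dataMigrationSteps: [],
    validationCheckpoint: "Existing integration tests pass unchanged",
    rollback: ["Remove adapter module", "Restore direct client imports"],
  },
];
```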

View File

@@ -0,0 +1,122 @@
# Rule Template: API Rules (Backend/Fullstack Only)
## Variables
- {TECH_STACK_NAME}: Tech stack display name
- {FILE_EXT}: File extension pattern
- {API_FRAMEWORK}: API framework (Express, FastAPI, etc)
## Output Format
```markdown
---
paths:
- "**/api/**/*.{FILE_EXT}"
- "**/routes/**/*.{FILE_EXT}"
- "**/endpoints/**/*.{FILE_EXT}"
- "**/controllers/**/*.{FILE_EXT}"
- "**/handlers/**/*.{FILE_EXT}"
---
# {TECH_STACK_NAME} API Rules
## Endpoint Design
[REST/GraphQL conventions from Exa research]
### URL Structure
- Resource naming (plural nouns)
- Nesting depth limits
- Query parameter conventions
- Version prefixing
### HTTP Methods
- GET: Read operations
- POST: Create operations
- PUT/PATCH: Update operations
- DELETE: Remove operations
### Status Codes
- 2xx: Success responses
- 4xx: Client errors
- 5xx: Server errors
## Request Validation
[Input validation patterns]
### Schema Validation
```{lang}
// Example validation schema
```
### Required Fields
- Validation approach
- Error messages format
- Sanitization rules
## Response Format
[Standard response structures]
### Success Response
```json
{
"data": {},
"meta": {}
}
```
### Pagination
```json
{
"data": [],
"pagination": {
"page": 1,
"limit": 20,
"total": 100
}
}
```
## Error Responses
[Error handling for APIs]
### Error Format
```json
{
"error": {
"code": "ERROR_CODE",
"message": "Human readable message",
"details": {}
}
}
```
### Common Error Codes
- VALIDATION_ERROR
- NOT_FOUND
- UNAUTHORIZED
- FORBIDDEN
## Authentication & Authorization
[Auth patterns]
- Token handling
- Permission checks
- Rate limiting
## Documentation
[API documentation standards]
- OpenAPI/Swagger
- Inline documentation
- Example requests/responses
```
## Content Guidelines
- Focus on API-specific patterns
- Include request/response examples
- Cover security considerations
- Reference framework conventions
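To make the error envelope above concrete, here is a minimal sketch assuming an Express + TypeScript stack (the `AppError` class and the `INTERNAL_ERROR` fallback code are illustrative additions, not part of the template):

```typescript
import express from "express";

// Illustrative error type carrying the fields of the documented envelope.
class AppError extends Error {
  constructor(
    public code: "VALIDATION_ERROR" | "NOT_FOUND" | "UNAUTHORIZED" | "FORBIDDEN",
    message: string,
    public status: number,
    public details: Record<string, unknown> = {},
  ) {
    super(message);
  }
}

const app = express();

// Centralized handler emitting { error: { code, message, details } }.
app.use((err: unknown, _req: express.Request, res: express.Response, _next: express.NextFunction) => {
  if (err instanceof AppError) {
    res.status(err.status).json({ error: { code: err.code, message: err.message, details: err.details } });
    return;
  }
  res.status(500).json({ error: { code: "INTERNAL_ERROR", message: "Unexpected error", details: {} } });
});
```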

View File

@@ -0,0 +1,122 @@
# Rule Template: Component Rules (Frontend/Fullstack Only)
## Variables
- {TECH_STACK_NAME}: Tech stack display name
- {FILE_EXT}: File extension pattern
- {UI_FRAMEWORK}: UI framework (React, Vue, etc)
## Output Format
```markdown
---
paths:
- "**/components/**/*.{FILE_EXT}"
- "**/ui/**/*.{FILE_EXT}"
- "**/views/**/*.{FILE_EXT}"
- "**/pages/**/*.{FILE_EXT}"
---
# {TECH_STACK_NAME} Component Rules
## Component Structure
[Organization patterns from Exa research]
### File Organization
```
components/
├── common/ # Shared components
├── features/ # Feature-specific
├── layout/ # Layout components
└── ui/ # Base UI elements
```
### Component Template
```{lang}
// Standard component structure
```
### Naming Conventions
- PascalCase for components
- Descriptive names
- Prefix conventions (if any)
## Props & State
[State management guidelines]
### Props Definition
```{lang}
// Props type/interface example
```
### Props Best Practices
- Required vs optional
- Default values
- Prop validation
- Prop naming
### Local State
- When to use local state
- State initialization
- State updates
### Shared State
- State management approach
- Context usage
- Store patterns
## Styling
[CSS/styling conventions]
### Approach
- [CSS Modules/Styled Components/Tailwind/etc]
### Style Organization
```{lang}
// Style example
```
### Naming Conventions
- Class naming (BEM, etc)
- CSS variable usage
- Theme integration
## Accessibility
[A11y requirements]
### Essential Requirements
- Semantic HTML
- ARIA labels
- Keyboard navigation
- Focus management
### Testing A11y
- Automated checks
- Manual testing
- Screen reader testing
## Performance
[Performance guidelines]
### Optimization Patterns
- Memoization
- Lazy loading
- Code splitting
- Virtual lists
### Avoiding Re-renders
- When to memoize
- Callback optimization
- State structure
```
## Content Guidelines
- Focus on component-specific patterns
- Include framework-specific examples
- Cover accessibility requirements
- Address performance considerations
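A hedged example of the props typing, naming, and memoization guidance above, assuming React with TypeScript (component and prop names are invented):

```tsx
import React from "react";

// Illustrative typed props: required, optional, and defaulted values.
interface UserCardProps {
  name: string;                       // required
  avatarUrl?: string;                 // optional
  onSelect?: (name: string) => void;  // optional callback
}

// PascalCase name; memoized to skip re-renders when props are unchanged.
const UserCard = React.memo(function UserCard({
  name,
  avatarUrl = "/default-avatar.png",
  onSelect,
}: UserCardProps) {
  return (
    <button type="button" onClick={() => onSelect?.(name)}>
      <img src={avatarUrl} alt={`${name} avatar`} />
      <span>{name}</span>
    </button>
  );
});

export default UserCard;
```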

View File

@@ -0,0 +1,89 @@
# Rule Template: Configuration Rules
## Variables
- {TECH_STACK_NAME}: Tech stack display name
- {CONFIG_FILES}: List of config file patterns
## Output Format
```markdown
---
paths:
- "*.config.*"
- ".*rc"
- ".*rc.{js,json,yaml,yml}"
- "package.json"
- "tsconfig*.json"
- "pyproject.toml"
- "Cargo.toml"
- "go.mod"
- ".env*"
---
# {TECH_STACK_NAME} Configuration Rules
## Project Setup
[Configuration guidelines from Exa research]
### Essential Config Files
- [List primary config files]
- [Purpose of each]
### Recommended Structure
```
project/
├── [config files]
├── src/
└── tests/
```
## Tooling
[Linters, formatters, bundlers]
### Linting
- Tool: [ESLint/Pylint/etc]
- Config file: [.eslintrc/pyproject.toml/etc]
- Key rules to enable
### Formatting
- Tool: [Prettier/Black/etc]
- Integration with editor
- Pre-commit hooks
### Build Tools
- Bundler: [Webpack/Vite/etc]
- Build configuration
- Optimization settings
## Environment
[Environment management]
### Environment Variables
- Naming conventions
- Required vs optional
- Secret handling
- .env file structure
### Development vs Production
- Environment-specific configs
- Feature flags
- Debug settings
## Dependencies
[Dependency management]
- Lock file usage
- Version pinning strategy
- Security updates
- Peer dependencies
```
## Content Guidelines
- Focus on config file best practices
- Include security considerations
- Cover development workflow setup
- Mention CI/CD integration where relevant
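A minimal sketch of the environment-variable guidance (required vs optional, secrets supplied externally), assuming Node.js with TypeScript; the variable names are examples only:

```typescript
// Illustrative config loader: required variables fail fast, optional ones get safe defaults.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}

export const config = {
  databaseUrl: requireEnv("DATABASE_URL"),    // required; provided via .env or secret store
  logLevel: process.env.LOG_LEVEL ?? "info",  // optional with a safe default
  isProduction: process.env.NODE_ENV === "production",
};
```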

View File

@@ -0,0 +1,60 @@
# Rule Template: Core Principles
## Variables
- {TECH_STACK_NAME}: Tech stack display name
- {FILE_EXT}: File extension pattern
## Output Format
```markdown
---
paths: "**/*.{FILE_EXT}"
---
# {TECH_STACK_NAME} Core Principles
## Philosophy
[Synthesize core philosophy from Exa research]
- Key paradigms and mental models
- Design philosophy
- Community conventions
## Naming Conventions
[Language-specific naming rules]
- Variables and functions
- Classes and types
- Files and directories
- Constants and enums
## Code Organization
[Structure and module guidelines]
- File structure patterns
- Module boundaries
- Import organization
- Dependency management
## Type Safety
[Type system best practices - if applicable]
- Type annotation guidelines
- Generic usage patterns
- Type inference vs explicit types
- Null/undefined handling
## Documentation
[Documentation standards]
- Comment style
- JSDoc/docstring format
- README conventions
```
## Content Guidelines
- Focus on universal principles that apply to ALL files
- Keep rules actionable and specific
- Include rationale for each rule
- Reference official style guides where applicable

View File

@@ -0,0 +1,70 @@
# Rule Template: Implementation Patterns
## Variables
- {TECH_STACK_NAME}: Tech stack display name
- {FILE_EXT}: File extension pattern
## Output Format
```markdown
---
paths: "src/**/*.{FILE_EXT}"
---
# {TECH_STACK_NAME} Implementation Patterns
## Common Patterns
[With code examples from Exa research]
### Pattern 1: [Name]
```{lang}
// Example code
```
**When to use**: [Context]
**Benefits**: [Why this pattern]
### Pattern 2: [Name]
...
## Anti-Patterns to Avoid
[Common mistakes with examples]
### Anti-Pattern 1: [Name]
```{lang}
// Bad example
```
**Problem**: [Why it's bad]
**Solution**: [Better approach]
## Error Handling
[Error handling conventions]
- Error types and hierarchy
- Try-catch patterns
- Error propagation
- Logging practices
## Async Patterns
[Asynchronous code conventions - if applicable]
- Promise handling
- Async/await usage
- Concurrency patterns
- Error handling in async code
## State Management
[State handling patterns]
- Local state patterns
- Shared state approaches
- Immutability practices
```
## Content Guidelines
- Focus on source code implementation
- Provide concrete code examples
- Show both good and bad patterns
- Include context for when to apply each pattern
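As a hedged example of the good/anti-pattern contrast this template asks for (TypeScript; the error-handling scenario is invented):

```typescript
// Anti-pattern (illustrative): swallowing the error hides failures from callers.
async function loadUserBad(id: string): Promise<unknown | null> {
  try {
    const res = await fetch(`/api/users/${id}`);
    return await res.json();
  } catch {
    return null; // caller cannot distinguish "not found" from "request failed"
  }
}

// Preferred pattern (illustrative): propagate errors with context so callers can decide.
async function loadUser(id: string): Promise<unknown> {
  const res = await fetch(`/api/users/${id}`);
  if (!res.ok) {
    throw new Error(`Failed to load user ${id}: HTTP ${res.status}`);
  }
  return res.json();
}
```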

View File

@@ -0,0 +1,81 @@
# Rule Template: Testing Rules
## Variables
- {TECH_STACK_NAME}: Tech stack display name
- {FILE_EXT}: File extension pattern
- {TEST_FRAMEWORK}: Primary testing framework
## Output Format
```markdown
---
paths:
- "**/*.{test,spec}.{FILE_EXT}"
- "tests/**/*.{FILE_EXT}"
- "__tests__/**/*.{FILE_EXT}"
- "**/test_*.{FILE_EXT}"
- "**/*_test.{FILE_EXT}"
---
# {TECH_STACK_NAME} Testing Rules
## Testing Framework
[Recommended frameworks from Exa research]
- Primary: {TEST_FRAMEWORK}
- Assertion library
- Mocking library
- Coverage tool
## Test Structure
[Organization patterns]
### File Naming
- Unit tests: `*.test.{ext}` or `*.spec.{ext}`
- Integration tests: `*.integration.test.{ext}`
- E2E tests: `*.e2e.test.{ext}`
### Test Organization
```{lang}
describe('[Component/Module]', () => {
describe('[method/feature]', () => {
it('should [expected behavior]', () => {
// Arrange
// Act
// Assert
});
});
});
```
## Mocking & Fixtures
[Best practices]
- Mock creation patterns
- Fixture organization
- Test data factories
- Cleanup strategies
## Assertions
[Assertion patterns]
- Common assertions
- Custom matchers
- Async assertions
- Error assertions
## Coverage Requirements
[Coverage guidelines]
- Minimum coverage thresholds
- What to cover vs skip
- Coverage report interpretation
```
## Content Guidelines
- Include framework-specific patterns
- Show test structure examples
- Cover both unit and integration testing
- Include async testing patterns
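A small hedged example of the structure, mocking, and async-assertion guidance above, assuming a Jest-style setup (the module under test is hypothetical):

```typescript
import { beforeEach, describe, expect, it, jest } from "@jest/globals";
import { loadUser } from "../src/users"; // hypothetical module under test

describe("loadUser", () => {
  beforeEach(() => {
    jest.restoreAllMocks(); // cleanup between tests
  });

  it("returns the parsed user on success", async () => {
    // Arrange: mock the HTTP dependency
    jest.spyOn(globalThis, "fetch").mockResolvedValue(
      new Response(JSON.stringify({ id: "42", name: "Ada" })),
    );
    // Act
    const user = await loadUser("42");
    // Assert
    expect(user).toEqual({ id: "42", name: "Ada" });
  });

  it("rejects with an error on HTTP failure", async () => {
    jest.spyOn(globalThis, "fetch").mockResolvedValue(new Response(null, { status: 500 }));
    await expect(loadUser("42")).rejects.toThrow("HTTP 500");
  });
});
```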

View File

@@ -0,0 +1,89 @@
# Tech Stack Rules Generation Agent Prompt
## Context Variables
- {TECH_STACK_NAME}: Normalized tech stack name (e.g., "typescript-react")
- {PRIMARY_LANG}: Primary language (e.g., "typescript")
- {FILE_EXT}: File extension pattern (e.g., "{ts,tsx}")
- {FRAMEWORK_TYPE}: frontend | backend | fullstack | library
- {COMPONENTS}: Array of tech components
- {OUTPUT_DIR}: .claude/rules/tech/{TECH_STACK_NAME}/
## Agent Instructions
Generate path-conditional rules for Claude Code automatic loading.
### Step 1: Execute Exa Research
Run 4-6 parallel queries based on tech stack:
**Base Queries** (always execute):
```
mcp__exa__get_code_context_exa(query: "{PRIMARY_LANG} best practices principles 2025", tokensNum: 8000)
mcp__exa__get_code_context_exa(query: "{PRIMARY_LANG} implementation patterns examples", tokensNum: 7000)
mcp__exa__get_code_context_exa(query: "{PRIMARY_LANG} testing strategies conventions", tokensNum: 5000)
mcp__exa__web_search_exa(query: "{PRIMARY_LANG} configuration setup 2025", numResults: 5)
```
**Component Queries** (for each framework in COMPONENTS):
```
mcp__exa__get_code_context_exa(query: "{PRIMARY_LANG} {component} integration patterns", tokensNum: 5000)
```
### Step 2: Read Rule Templates
Read each template file before generating content:
```
Read(~/.ccw/workflows/cli-templates/prompts/rules/rule-core.txt)
Read(~/.ccw/workflows/cli-templates/prompts/rules/rule-patterns.txt)
Read(~/.ccw/workflows/cli-templates/prompts/rules/rule-testing.txt)
Read(~/.ccw/workflows/cli-templates/prompts/rules/rule-config.txt)
Read(~/.ccw/workflows/cli-templates/prompts/rules/rule-api.txt) # Only if backend/fullstack
Read(~/.ccw/workflows/cli-templates/prompts/rules/rule-components.txt) # Only if frontend/fullstack
```
### Step 3: Generate Rule Files
Create directory and write files:
```bash
mkdir -p "{OUTPUT_DIR}"
```
**Always Generate**:
- core.md (from rule-core.txt template)
- patterns.md (from rule-patterns.txt template)
- testing.md (from rule-testing.txt template)
- config.md (from rule-config.txt template)
**Conditional**:
- api.md: Only if FRAMEWORK_TYPE == 'backend' or 'fullstack'
- components.md: Only if FRAMEWORK_TYPE == 'frontend' or 'fullstack'
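For clarity, the always/conditional selection above amounts to the following (TypeScript sketch; the helper is illustrative, the actual step applies the templates read in Step 2):

```typescript
// Illustrative sketch of Step 3's file selection.
type FrameworkType = "frontend" | "backend" | "fullstack" | "library";

function ruleFilesFor(frameworkType: FrameworkType): string[] {
  const files = ["core.md", "patterns.md", "testing.md", "config.md"]; // always generated
  if (frameworkType === "backend" || frameworkType === "fullstack") files.push("api.md");
  if (frameworkType === "frontend" || frameworkType === "fullstack") files.push("components.md");
  return files;
}
```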
### Step 4: Write Metadata
```json
{
"tech_stack": "{TECH_STACK_NAME}",
"primary_lang": "{PRIMARY_LANG}",
"file_ext": "{FILE_EXT}",
"framework_type": "{FRAMEWORK_TYPE}",
"components": ["{COMPONENTS}"],
"generated_at": "{ISO_TIMESTAMP}",
"source": "exa-research",
"files_generated": ["core.md", "patterns.md", "testing.md", "config.md", ...]
}
```
### Step 5: Report Completion
Provide summary:
- Files created with their path patterns
- Exa queries executed (count)
- Sources consulted (count)
## Critical Requirements
1. Every .md file MUST start with `paths` YAML frontmatter
2. Use {FILE_EXT} consistently across all rule files
3. Synthesize Exa research into actionable rules
4. Include code examples from Exa sources
5. Keep each file focused on its specific domain

View File

@@ -0,0 +1,359 @@
Template for generating tech stack module documentation files
## Purpose
Guide agent to create modular tech stack documentation from Exa research results.
## File Location
`.claude/skills/{tech_stack_name}/*.md`
## Module Structure
Each module should include:
- **Frontmatter**: YAML with module name and tech stack
- **Main Sections**: Clear headings with hierarchical organization
- **Code Examples**: Real examples from Exa research
- **Best Practices**: Do's and don'ts sections
- **References**: Attribution to Exa sources
---
## Module 1: principles.md (~3K tokens)
**Purpose**: Core concepts, philosophies, and fundamental principles
**Frontmatter**:
```yaml
---
module: principles
tech_stack: {tech_stack_name}
description: Core concepts and philosophies
tokens: ~3000
---
```
**Structure**:
```markdown
# {Tech} Principles
## Core Concepts
- Fundamental principle 1
- Fundamental principle 2
- Key philosophy
## Design Philosophy
- Approach to problem-solving
- Architectural principles
- Core values
## Key Features
- Feature 1: Description
- Feature 2: Description
## When to Use
- Use case scenarios
- Best fit situations
## References
- Source 1 from Exa
- Source 2 from Exa
```
---
## Module 2: patterns.md (~5K tokens)
**Purpose**: Implementation patterns with code examples
**Frontmatter**:
```yaml
---
module: patterns
tech_stack: {tech_stack_name}
description: Implementation patterns with examples
tokens: ~5000
---
```
**Structure**:
```markdown
# {Tech} Patterns
## Common Patterns
### Pattern 1: {Name}
**Use Case**: When to use this pattern
**Implementation**:
\`\`\`{language}
// Code example from Exa
\`\`\`
**Benefits**: Why use this pattern
### Pattern 2: {Name}
[Same structure]
## Architectural Patterns
- Pattern descriptions
- Code examples
## Component Patterns
- Reusable component structures
- Integration examples
## References
- Exa sources with pattern examples
```
---
## Module 3: practices.md (~4K tokens)
**Purpose**: Best practices, anti-patterns, pitfalls
**Frontmatter**:
```yaml
---
module: practices
tech_stack: {tech_stack_name}
description: Best practices and anti-patterns
tokens: ~4000
---
```
**Structure**:
```markdown
# {Tech} Best Practices
## Do's
✅ **Practice 1**: Description
- Rationale
- Example scenario
✅ **Practice 2**: Description
## Don'ts
❌ **Anti-pattern 1**: What to avoid
- Why it's problematic
- Better alternative
❌ **Anti-pattern 2**: What to avoid
## Common Pitfalls
1. **Pitfall 1**: Description and solution
2. **Pitfall 2**: Description and solution
## Performance Considerations
- Optimization techniques
- Common bottlenecks
## Security Best Practices
- Security considerations
- Common vulnerabilities
## References
- Exa sources for best practices
```
---
## Module 4: testing.md (~3K tokens)
**Purpose**: Testing strategies, frameworks, and examples
**Frontmatter**:
```yaml
---
module: testing
tech_stack: {tech_stack_name}
description: Testing strategies and frameworks
tokens: ~3000
---
```
**Structure**:
```markdown
# {Tech} Testing
## Testing Strategies
- Unit testing approach
- Integration testing approach
- E2E testing approach
## Testing Frameworks
### Framework 1
- Setup
- Basic usage
- Example:
\`\`\`{language}
// Test example from Exa
\`\`\`
## Test Patterns
- Common test patterns
- Mock strategies
- Assertion best practices
## Coverage Recommendations
- What to test
- Coverage targets
## References
- Exa sources for testing examples
```
---
## Module 5: config.md (~3K tokens)
**Purpose**: Setup, configuration, and tooling
**Frontmatter**:
```yaml
---
module: config
tech_stack: {tech_stack_name}
description: Setup, configuration, and tooling
tokens: ~3000
---
```
**Structure**:
```markdown
# {Tech} Configuration
## Installation
\`\`\`bash
# Installation commands
\`\`\`
## Basic Configuration
\`\`\`{config-format}
// Configuration example from Exa
\`\`\`
## Common Configurations
### Development
- Dev config setup
- Hot reload configuration
### Production
- Production optimizations
- Build configurations
## Tooling
- Recommended tools
- IDE/Editor setup
- Linters and formatters
## Environment Setup
- Environment variables
- Config file structure
## References
- Exa sources for configuration
```
---
## Module 6: frameworks.md (~4K tokens) [CONDITIONAL]
**Purpose**: Framework integration patterns (only for composite tech stacks)
**Condition**: Only generate if `is_composite = true`
**Frontmatter**:
```yaml
---
module: frameworks
tech_stack: {tech_stack_name}
description: Framework integration patterns
tokens: ~4000
conditional: composite_only
---
```
**Structure**:
```markdown
# {Main Tech} + {Framework} Integration
## Integration Overview
- How {main_tech} works with {framework}
- Architecture considerations
## Setup
\`\`\`bash
# Integration setup commands
\`\`\`
## Integration Patterns
### Pattern 1: {Name}
\`\`\`{language}
// Integration example from Exa
\`\`\`
## Best Practices
- Integration best practices
- Common pitfalls
## Examples
- Real-world integration examples
- Code samples from Exa
## References
- Exa sources for integration patterns
```
---
## Metadata File: metadata.json
**Purpose**: Store generation metadata and research summary
**Structure**:
```json
{
"tech_stack_name": "typescript-react-nextjs",
"components": ["typescript", "react", "nextjs"],
"is_composite": true,
"generated_at": "2025-11-04T22:00:00Z",
"source": "exa-research",
"research_summary": {
"total_queries": 6,
"total_sources": 25,
"query_list": [
"typescript core principles best practices 2025",
"react common patterns architecture examples",
"nextjs configuration setup tooling 2025",
"testing strategies",
"react nextjs integration",
"typescript react integration"
]
}
}
```
---
## Generation Guidelines
### Content Synthesis from Exa
- Extract relevant code examples from Exa results
- Synthesize information from multiple sources
- Maintain technical accuracy
- Cite sources in References section
### Formatting Rules
- Use clear markdown headers
- Include code fences with language specification
- Use emoji for Do's (✅) and Don'ts (❌)
- Keep token estimates accurate
### Error Handling
- If Exa query fails, note in References section
- If insufficient data, mark section as "Limited research available"
- Handle missing components gracefully
### Token Distribution
- Total budget: ~22K tokens for 6 modules
- Adjust module size based on content availability
- Prioritize quality over hitting exact token counts

View File

@@ -0,0 +1,185 @@
Template for generating tech stack SKILL.md index file
## Purpose
Create main SKILL package index with module references and loading recommendations.
## File Location
`.claude/skills/{tech_stack_name}/SKILL.md`
## Template Structure
```markdown
---
name: {TECH_STACK_NAME}
description: {MAIN_TECH} development guidelines from industry standards (Exa research)
version: 1.0.0
generated: {ISO_TIMESTAMP}
source: exa-research
---
# {TechStackTitle} SKILL Package
## Overview
{Brief 1-2 sentence description of the tech stack and purpose of this SKILL package}
**Primary Technology**: {MAIN_TECH}
{IF_COMPOSITE}**Frameworks**: {COMPONENT_LIST}{/IF_COMPOSITE}
## Modular Documentation
### Core Understanding (~8K tokens)
- [Principles](./principles.md) - Core concepts and philosophies
- [Patterns](./patterns.md) - Implementation patterns with examples
### Practical Guidance (~7K tokens)
- [Best Practices](./practices.md) - Do's, don'ts, anti-patterns
- [Testing](./testing.md) - Testing strategies and frameworks
### Configuration & Integration (~7K tokens)
- [Configuration](./config.md) - Setup, tooling, configuration
{IF_COMPOSITE}- [Frameworks](./frameworks.md) - Integration patterns{/IF_COMPOSITE}
## Loading Recommendations
### Quick Reference (~7K tokens)
Load for quick consultation on core concepts:
- principles.md
- practices.md
**Use When**: Need quick reminder of best practices or core principles
### Implementation Focus (~8K tokens)
Load for active development work:
- patterns.md
- config.md
**Use When**: Writing code, setting up projects, implementing features
### Complete Package (~22K tokens)
Load all modules for comprehensive understanding:
- All 5-6 modules
**Use When**: Learning tech stack, architecture reviews, comprehensive reference
## Usage
**Load this SKILL when**:
- Starting new {TECH_STACK} projects
- Reviewing {TECH_STACK} code
- Learning {TECH_STACK} best practices
- Implementing {TECH_STACK} patterns
- Troubleshooting {TECH_STACK} issues
**Auto-triggers on**:
- Keywords: {TECH_KEYWORDS}
- File types: {FILE_EXTENSIONS}
## Research Metadata
- **Generated**: {ISO_TIMESTAMP}
- **Source**: Exa Research (web search + code context)
- **Queries Executed**: {QUERY_COUNT}
- **Sources Consulted**: {SOURCE_COUNT}
- **Research Quality**: {QUALITY_INDICATOR}
## Tech Stack Components
**Primary**: {MAIN_TECH} - {MAIN_TECH_DESCRIPTION}
{IF_COMPOSITE}
**Additional Frameworks**:
{FOR_EACH_COMPONENT}
- **{COMPONENT_NAME}**: {COMPONENT_DESCRIPTION}
{/FOR_EACH_COMPONENT}
{/IF_COMPOSITE}
## Version History
- **v1.0.0** ({DATE}): Initial SKILL package generated from Exa research
---
## Developer Notes
This SKILL package was auto-generated using:
- `/memory:tech-research` command
- Exa AI research APIs (mcp__exa__get_code_context_exa, mcp__exa__web_search_exa)
- Token limit: ~5K per module, ~22K total
To regenerate:
```bash
/memory:tech-research "{tech_stack_name}" --regenerate
```
```
---
## Variable Substitution Guide
### Required Variables
- `{TECH_STACK_NAME}`: Lowercase hyphenated name (e.g., "typescript-react-nextjs")
- `{TechStackTitle}`: Title case display name (e.g., "TypeScript React Next.js")
- `{MAIN_TECH}`: Primary technology (e.g., "TypeScript")
- `{ISO_TIMESTAMP}`: ISO 8601 timestamp (e.g., "2025-11-04T22:00:00Z")
- `{QUERY_COUNT}`: Number of Exa queries executed (e.g., 6)
- `{SOURCE_COUNT}`: Total sources consulted (e.g., 25)
### Conditional Variables
- `{IF_COMPOSITE}...{/IF_COMPOSITE}`: Only include if tech stack has multiple components
- `{COMPONENT_LIST}`: Comma-separated list of framework names
- `{FOR_EACH_COMPONENT}...{/FOR_EACH_COMPONENT}`: Loop through components
### Optional Variables
- `{MAIN_TECH_DESCRIPTION}`: One-line description of primary tech
- `{COMPONENT_DESCRIPTION}`: One-line description per component
- `{TECH_KEYWORDS}`: Comma-separated trigger keywords
- `{FILE_EXTENSIONS}`: File extensions (e.g., ".ts, .tsx, .jsx")
- `{QUALITY_INDICATOR}`: Research quality metric (e.g., "High", "Medium")
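A hedged sketch of how the markers above might be applied (TypeScript; the function and regexes are illustrative, not a prescribed implementation):

```typescript
// Illustrative renderer: keep/drop {IF_COMPOSITE} blocks, expand component loops, substitute {VARS}.
function renderSkillTemplate(
  template: string,
  vars: Record<string, string>,
  components: { name: string; description: string }[],
  isComposite: boolean,
): string {
  let out = template
    // Keep or drop conditional sections.
    .replace(/\{IF_COMPOSITE\}([\s\S]*?)\{\/IF_COMPOSITE\}/g, isComposite ? "$1" : "")
    // Expand the per-component loop body once per component.
    .replace(/\{FOR_EACH_COMPONENT\}([\s\S]*?)\{\/FOR_EACH_COMPONENT\}/g, (_match: string, body: string) =>
      components
        .map(c =>
          body
            .replace(/\{COMPONENT_NAME\}/g, c.name)
            .replace(/\{COMPONENT_DESCRIPTION\}/g, c.description),
        )
        .join(""),
    );
  // Plain substitution for the remaining {PLACEHOLDER} markers.
  for (const [key, value] of Object.entries(vars)) {
    out = out.replace(new RegExp(`\\{${key}\\}`, "g"), value);
  }
  return out;
}
```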
---
## Generation Instructions
### Step 1: Read metadata.json
Extract values for variables from metadata.json generated during module creation.
### Step 2: Determine composite status
- Single tech: Omit {IF_COMPOSITE} sections
- Composite: Include frameworks section and integration module reference
### Step 3: Calculate token estimates
- Verify module files exist
- Adjust token estimates based on actual file sizes
- Update loading recommendation estimates
### Step 4: Generate descriptions
- **Overview**: Brief description of tech stack purpose
- **Main tech description**: One-liner for primary technology
- **Component descriptions**: One-liner per additional framework
### Step 5: Build keyword lists
- Extract common keywords from tech stack name
- Add file extensions relevant to tech stack
- Include framework-specific triggers
### Step 6: Format timestamps
- Use ISO 8601 format for all timestamps
- Include timezone (UTC recommended)
### Step 7: Write SKILL.md
- Apply template with all substitutions
- Validate markdown formatting
- Verify all relative paths work
---
## Validation Checklist
- [ ] All module files exist and are referenced
- [ ] Token estimates are reasonably accurate
- [ ] Conditional sections handled correctly (composite vs single)
- [ ] Timestamps in ISO 8601 format
- [ ] All relative paths use ./ prefix
- [ ] Metadata section matches metadata.json
- [ ] Loading recommendations align with actual module sizes
- [ ] Usage section includes relevant trigger keywords

View File

@@ -0,0 +1,38 @@
PURPOSE: Generate comprehensive multi-layer test enhancement suggestions
- Success: Cover L0-L3 layers with focus on API, integration, and error scenarios
- Scope: Files with coverage gaps identified in TEST_ANALYSIS_RESULTS.md
- Goal: Provide specific, actionable test case suggestions that increase coverage completeness
TASK:
• L1 (Unit Tests): Suggest edge cases, boundary conditions, error paths, state transitions
• L2.1 (Integration): Suggest module interaction patterns, dependency injection scenarios
• L2.2 (API Contracts): Suggest request/response test cases, validation, status codes, error responses
• L2.4 (External APIs): Suggest mock strategies, failure scenarios, timeout handling, retry logic
• L2.5 (Failure Modes): Suggest exception hierarchies, error propagation, recovery strategies
• Cross-cutting: Suggest performance test cases, security considerations
MODE: analysis
CONTEXT: @.workflow/active/{test-session-id}/.process/TEST_ANALYSIS_RESULTS.md
Memory: Project type, test framework, existing test patterns, coverage gaps
EXPECTED: Markdown report with structured test enhancement suggestions organized by:
1. File-level test requirements (per file needing tests)
2. Layer-specific test cases (L1, L2.1, L2.2, L2.4, L2.5)
3. Each suggestion includes:
- Test type and layer (e.g., "L2.2 API Contract Test")
- Specific test case description (e.g., "POST /api/users - Invalid email format")
- Expected behavior (e.g., "Returns 400 with validation error message")
- Dependencies/mocks needed (e.g., "Mock email service")
- Success criteria (e.g., "Status 400, error.field === 'email'")
4. Test ordering/dependencies (which tests should run first)
5. Integration test strategies (how components interact)
6. Error scenario matrix (all failure modes covered)
CONSTRAINTS:
- Focus on identified coverage gaps from TEST_ANALYSIS_RESULTS.md
- Prioritize API tests, integration tests, and error scenarios
- No code generation - suggestions only with sufficient detail for implementation
- Consider project conventions and existing test patterns
- Each suggestion should be actionable and specific (not generic)
- Output format: Markdown with clear section headers
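For reference only (the prompt's own output remains suggestions, not code), a single suggestion entry from the EXPECTED format above could be modeled like this; field names mirror item 3 and are otherwise assumptions:

```typescript
// Illustrative shape of one enhancement suggestion.
interface TestSuggestion {
  layer: "L1" | "L2.1" | "L2.2" | "L2.4" | "L2.5" | "cross-cutting";
  testType: string;         // e.g. "L2.2 API Contract Test"
  description: string;      // e.g. "POST /api/users - Invalid email format"
  expectedBehavior: string; // e.g. "Returns 400 with validation error message"
  dependencies: string[];   // e.g. ["Mock email service"]
  successCriteria: string;  // e.g. "Status 400, error.field === 'email'"
}
```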

View File

@@ -0,0 +1,179 @@
PURPOSE: Analyze test coverage gaps and design comprehensive test generation strategy
TASK:
• Read test-context-package.json to understand coverage gaps and framework
• Study implementation context from source session summaries
• Analyze existing test patterns and conventions
• Design test requirements for missing coverage
• Generate actionable test generation strategy
MODE: analysis
CONTEXT: @test-context-package.json @../../../{source-session-id}/.summaries/*.md
EXPECTED: Comprehensive test analysis document (gemini-test-analysis.md) with test requirements, scenarios, and generation strategy
RULES:
- Focus on test requirements and strategy, NOT code generation
- Study existing test patterns for consistency
- Prioritize critical business logic tests
- Specify clear test scenarios and coverage targets
- Identify all dependencies requiring mocks
- Output ONLY test analysis and generation strategy
## ANALYSIS REQUIREMENTS
### 1. Implementation Understanding
- Load all implementation summaries from source session
- Understand implemented features, APIs, and business logic
- Extract key functions, classes, and modules
- Identify integration points and dependencies
### 2. Existing Test Pattern Analysis
- Study existing test files for patterns and conventions
- Identify test structure (describe/it, test suites, fixtures)
- Analyze assertion patterns and mocking strategies
- Extract test setup/teardown patterns
### 3. Coverage Gap Assessment
For each file in missing_tests[], analyze:
- File purpose and functionality
- Public APIs requiring test coverage
- Critical paths and edge cases
- Integration points requiring tests
- Priority: high (core logic), medium (utilities), low (helpers)
### 4. Test Requirements Specification
For each missing test file, specify:
- Test scope: What needs to be tested
- Test scenarios: Happy path, error cases, edge cases, integration
- Test data: Required fixtures, mocks, test data
- Dependencies: External services, databases, APIs to mock
- Coverage targets: Functions/methods requiring tests
### 5. Test Generation Strategy
- Determine test generation approach for each file
- Identify reusable test patterns from existing tests
- Plan test data and fixture requirements
- Define mocking strategy for dependencies
- Specify expected test file structure
## EXPECTED OUTPUT FORMAT
Write comprehensive analysis to gemini-test-analysis.md:
# Test Generation Analysis
## 1. Implementation Context Summary
- **Source Session**: {source_session_id}
- **Implemented Features**: {feature_summary}
- **Changed Files**: {list_of_implementation_files}
- **Tech Stack**: {technologies_used}
## 2. Test Coverage Assessment
- **Existing Tests**: {count} files
- **Missing Tests**: {count} files
- **Coverage Percentage**: {percentage}%
- **Priority Breakdown**:
- High Priority: {count} files (core business logic)
- Medium Priority: {count} files (utilities, helpers)
- Low Priority: {count} files (configuration, constants)
## 3. Existing Test Pattern Analysis
- **Test Framework**: {framework_name_and_version}
- **File Naming Convention**: {pattern}
- **Test Structure**: {describe_it_or_other}
- **Assertion Style**: {expect_assert_should}
- **Mocking Strategy**: {mocking_framework_and_patterns}
- **Setup/Teardown**: {beforeEach_afterEach_patterns}
- **Test Data**: {fixtures_factories_builders}
## 4. Test Requirements by File
### File: {implementation_file_path}
**Test File**: {suggested_test_file_path}
**Priority**: {high|medium|low}
#### Scope
- {description_of_what_needs_testing}
#### Test Scenarios
1. **Happy Path Tests**
- {scenario_1}
- {scenario_2}
2. **Error Handling Tests**
- {error_scenario_1}
- {error_scenario_2}
3. **Edge Case Tests**
- {edge_case_1}
- {edge_case_2}
4. **Integration Tests** (if applicable)
- {integration_scenario_1}
- {integration_scenario_2}
#### Test Data & Fixtures
- {required_test_data}
- {required_mocks}
- {required_fixtures}
#### Dependencies to Mock
- {external_service_1}
- {external_service_2}
#### Coverage Targets
- Function: {function_name} - {test_requirements}
- Function: {function_name} - {test_requirements}
---
[Repeat for each missing test file]
---
## 5. Test Generation Strategy
### Overall Approach
- {strategy_description}
### Test Generation Order
1. {file_1} - {rationale}
2. {file_2} - {rationale}
3. {file_3} - {rationale}
### Reusable Patterns
- {pattern_1_from_existing_tests}
- {pattern_2_from_existing_tests}
### Test Data Strategy
- {approach_to_test_data_and_fixtures}
### Mocking Strategy
- {approach_to_mocking_dependencies}
### Quality Criteria
- Code coverage target: {percentage}%
- Test scenarios per function: {count}
- Integration test coverage: {approach}
## 6. Implementation Targets
**Purpose**: Identify new test files to create
**Format**: New test files only (no existing files to modify)
**Test Files to Create**:
1. **Target**: `tests/auth/TokenValidator.test.ts`
- **Type**: Create new test file
- **Purpose**: Test TokenValidator class
- **Scenarios**: 15 test cases covering validation logic, error handling, edge cases
- **Dependencies**: Mock JWT library, test fixtures for tokens
2. **Target**: `tests/middleware/errorHandler.test.ts`
- **Type**: Create new test file
- **Purpose**: Test error handling middleware
- **Scenarios**: 8 test cases for different error types and response formats
- **Dependencies**: Mock Express req/res/next, error fixtures
[List all test files to create]
## 7. Success Metrics
- **Test Coverage Goal**: {target_percentage}%
- **Test Quality**: All scenarios covered (happy, error, edge, integration)
- **Convention Compliance**: Follow existing test patterns
- **Maintainability**: Clear test descriptions, reusable fixtures

View File

@@ -0,0 +1,95 @@
# AI Prompt: Universal Creative Exploration Template (Chinese Output)
## I. CORE DIRECTIVE
You are an **Innovative Problem-Solving Catalyst**. Approach tasks with creative thinking, explore multiple solution paths, and generate novel approaches while maintaining practical viability. Responses **MUST** be in **Chinese (中文)**.
## II. CORE CAPABILITIES & THINKING MODE
**Capabilities**: Divergent thinking, pattern recognition, creative synthesis, constraint reframing, rapid prototyping, contextual adaptation, elegant simplicity, future-oriented design
**Thinking Mode**: Exploratory & open-minded, adaptive & flexible, synthesis-driven, innovation-focused
## III. EXPLORATION PHASES
**Divergent Phase**: Multiple perspectives, analogies & metaphors, constraint questioning, alternative approaches (3+), what-if scenarios
**Convergent Phase**: Pattern integration, practical viability, elegant simplicity, context optimization, future proofing
**Validation Phase**: Constraint compliance, risk assessment, proof of concept, iterative refinement, documentation
## IV. QUALITY STANDARDS
**Innovation**: Novelty, elegance, effectiveness, flexibility, insight, viability
**Process**: Exploration breadth, synthesis quality, justification clarity, alternatives documented
## V. RESPONSE STRUCTURE (Output in Chinese)
---
### 0. 创造性思考过程 (Creative Thinking Process)
* **问题重构**: 从多个角度理解和重新定义问题
* **灵感来源**: 识别可借鉴的模式、类比或跨领域经验
* **可能性空间**: 探索不同解决方案的可能性
* **约束与自由**: 识别硬性约束与创新空间
* **综合策略**: 规划如何整合不同思路形成最优方案
### 1. 问题深度理解 (Deep Problem Understanding)
* **表层需求**: 明确的功能和非功能需求
* **隐含目标**: 未明说但重要的用户期望和体验目标
* **约束条件**: 必须遵守的技术和业务约束
* **机会空间**: 可以创新和优化的领域
### 2. 多角度探索 (Multi-Perspective Exploration)
* **视角1-3**: 从不同角度分析问题,每个视角包含:核心洞察、解决思路、优势与局限
### 3. 跨领域类比 (Cross-Domain Analogies)
* **类比1-2**: 相关领域或模式,说明如何应用到当前问题
### 4. 候选方案生成 (Solution Candidates - 2-3个)
* **方案A/B/C**: 每个方案包含:核心思路、关键特点、优势、潜在挑战、适用场景
### 5. 方案综合与优化 (Solution Synthesis & Optimization)
* **选择理由**: 为什么选择或综合某些方案
* **综合策略**: 如何结合不同方案的优点
* **简化优化**: 如何使方案更简洁优雅
* **创新点**: 方案的独特价值和创新之处
### 6. 实施细节与代码 (Implementation Details & Code)
* **架构设计**: 清晰的结构设计
* **核心实现**: 关键功能的实现
* **扩展点**: 预留的扩展和定制接口
* **优雅之处**: 设计中的巧妙和优雅元素
### 7. 验证与迭代 (Validation & Iteration)
* **快速验证**: 如何快速验证核心假设
* **迭代路径**: 从MVP到完整方案的演进路径
* **反馈机制**: 如何收集反馈并改进
* **风险应对**: 主要风险和应对策略
### 8. 替代方案记录 (Alternative Approaches)
* **未采纳方案**: 列出其他考虑过的方案
* **未来可能性**: 可能在未来更合适的方案
* **经验教训**: 从探索过程中学到的洞察
### 9. 总结与展望 (Summary & Future Vision)
* **核心价值**: 该方案的核心价值和创新点
* **关键决策**: 重要的设计决策及其理由
* **扩展可能**: 未来可能的扩展方向
* **开放性**: 留下的开放问题和探索空间
---
## VI. STYLE & CONSTRAINTS
**Style**: Exploratory & open, insightful & thoughtful, enthusiastic & positive, clear & inspiring, balanced & practical
**Core Constraints**:
- Practical viability - innovation must be implementable
- Constraint awareness - creativity within boundaries
- Value-driven - innovation must add real value
- Simplicity preference - elegant solutions over complex ones
- Documentation - alternative approaches must be recorded
- Justification - creative choices must be reasoned
- Iterative mindset - embrace refinement and evolution
## VII. CREATIVE THINKING TECHNIQUES (Optional)
**Toolkit**: First principles thinking, inversion, constraint removal, analogical reasoning, combination & synthesis, abstraction ladder, pattern languages, what-if scenarios
**Application**: Select 2-3 techniques, apply during divergent phase, document insights, inform synthesis
## VIII. FINAL VALIDATION
Before finalizing, verify: Multiple perspectives explored, 2-3 distinct approaches considered, cross-domain analogies identified, solution elegance pursued, practical constraints respected, innovation points articulated, alternatives documented, implementation viable, future extensibility considered, rationale clear

View File

@@ -0,0 +1,92 @@
# AI Prompt: Universal Rigorous Execution Template (Chinese Output)
## I. CORE DIRECTIVE
You are a **Precision-Driven Expert System**. Execute tasks with rigorous accuracy, systematic validation, and adherence to standards. Responses **MUST** be in **Chinese (中文)**.
## II. CORE CAPABILITIES & THINKING MODE
**Capabilities**: Systematic methodology, specification adherence, validation & verification, edge case handling, error prevention, formal reasoning, documentation excellence, quality assurance
**Thinking Mode**: Rigorous & methodical, defensive & cautious, standards-driven, traceable & auditable
## III. EXECUTION CHECKLIST
**Before Starting**: Clarify requirements, identify standards, plan validation, assess risks
**During Execution**: Follow patterns, validate continuously, handle edge cases, maintain consistency, document decisions
**After Completion**: Comprehensive testing, code review, backward compatibility, documentation update
## IV. QUALITY STANDARDS
**Code**: Correctness, robustness, maintainability, performance, security, testability
**Process**: Repeatability, traceability, reversibility, incremental progress
## V. RESPONSE STRUCTURE (Output in Chinese)
---
### 0. 规范性思考过程 (Rigorous Thinking Process)
* **任务理解**: 明确任务目标、范围和约束条件
* **标准识别**: 确定适用的规范、最佳实践和质量标准
* **风险分析**: 识别潜在问题、边界条件和失败模式
* **验证计划**: 定义成功标准和验证检查点
* **执行策略**: 制定系统化、可追溯的实施方案
### 1. 需求分析与验证 (Requirement Analysis & Validation)
* **核心需求**: 列出所有明确的功能和非功能需求
* **隐式约束**: 识别未明确说明但必须遵守的约束
* **边界条件**: 明确输入范围、特殊情况和异常场景
* **验证标准**: 定义可测试的成功标准
### 2. 标准与模式分析 (Standards & Pattern Analysis)
* **适用标准**: 列出相关编码规范、设计模式、最佳实践
* **现有模式**: 识别项目中类似的成功实现
* **依赖关系**: 分析与现有代码的集成点和依赖
* **兼容性要求**: 确保向后兼容和接口稳定性
### 3. 详细实施方案 (Detailed Implementation Plan)
* **分解步骤**: 将任务分解为小的、可验证的步骤
* **关键决策**: 记录所有重要的技术决策及其理由
* **边界处理**: 说明如何处理边界条件和错误情况
* **验证点**: 在每个步骤设置验证检查点
### 4. 实施细节与代码 (Implementation Details & Code)
* **核心逻辑**: 实现主要功能,确保正确性
* **错误处理**: 完善的异常捕获和错误处理
* **输入验证**: 严格的输入校验和边界检查
* **代码注释**: 关键逻辑的清晰注释说明
### 5. 测试与验证 (Testing & Validation)
* **单元测试**: 覆盖所有主要功能和边界条件
* **集成测试**: 验证与现有系统的集成
* **边界测试**: 测试极端情况和异常输入
* **回归测试**: 确保未破坏现有功能
### 6. 质量检查清单 (Quality Checklist)
- [ ] 功能完整性: 所有需求都已实现
- [ ] 规范遵循: 符合代码规范和最佳实践
- [ ] 边界处理: 所有边界条件都已处理
- [ ] 错误处理: 完善的异常处理机制
- [ ] 向后兼容: 未破坏现有功能
- [ ] 文档完整: 代码注释和文档齐全
- [ ] 测试覆盖: 全面的测试覆盖
- [ ] 性能优化: 符合性能要求
### 7. 总结与建议 (Summary & Recommendations)
* **实施总结**: 简要总结完成的工作
* **关键决策**: 重申重要技术决策
* **后续建议**: 提出改进和优化建议
* **风险提示**: 指出需要关注的潜在问题
---
## VI. STYLE & CONSTRAINTS
**Style**: Formal & professional, precise & unambiguous, evidence-based, defensive & cautious, structured & systematic
**Core Constraints**:
- Zero tolerance for errors - correctness is paramount
- Standards compliance - follow established conventions
- Complete validation - all assumptions must be validated
- Comprehensive testing - all paths must be tested
- Full documentation - all decisions must be documented
- Backward compatibility - existing functionality is sacred
- No shortcuts - quality cannot be compromised
## VII. FINAL VALIDATION
Before finalizing, verify: All requirements addressed, edge cases handled, standards followed, decisions documented, code tested, documentation complete, backward compatibility maintained, quality standards met

View File

@@ -0,0 +1,28 @@
Assess the technical feasibility of a workflow implementation plan.
## CORE CHECKLIST ⚡
□ Evaluate implementation complexity and required skills
□ Validate all technical dependencies and prerequisites
□ Assess the proposed code structure and integration patterns
□ Verify the completeness of the testing strategy
## REQUIRED TECHNICAL ANALYSIS
1. **Implementation Complexity**: Evaluate code difficulty and required skills.
2. **Technical Dependencies**: Review libraries, versions, and build systems.
3. **Code Structure**: Assess file organization, naming, and modularity.
4. **Testing Completeness**: Evaluate test coverage, types, and gaps.
5. **Execution Readiness**: Validate control flow, context, and file targets.
## OUTPUT REQUIREMENTS
- **Technical Assessment Report**: Grades for implementation, complexity, and quality.
- **Detailed Technical Findings**: Blocking issues, performance concerns, and improvements.
- **Implementation Recommendations**: Prerequisites, best practices, and refactoring.
- **Risk Mitigation**: Technical, dependency, integration, and quality risks.
## VERIFICATION CHECKLIST ✓
□ Implementation complexity and feasibility have been thoroughly evaluated
□ All technical dependencies and prerequisites are validated
□ The proposed code structure aligns with project standards
□ The testing plan is complete and adequate for the proposed changes
Focus: Technical execution details, code quality concerns, and implementation feasibility.

View File

@@ -0,0 +1,28 @@
Cross-validate strategic (Gemini) and technical (Codex) assessments.
## CORE CHECKLIST ⚡
□ Identify both consensus and conflict between the two analyses
□ Synthesize a unified risk profile and recommendation set
□ Resolve conflicting suggestions with a balanced approach
□ Frame final decisions as clear choices for the user
## REQUIRED CROSS-VALIDATION ANALYSIS
1. **Consensus Identification**: Find where both analyses agree.
2. **Conflict Resolution**: Analyze and resolve discrepancies.
3. **Risk Level Synthesis**: Combine risk assessments into a single profile.
4. **Recommendation Integration**: Synthesize recommendations into a unified plan.
5. **Quality Assurance Framework**: Establish combined quality metrics.
## OUTPUT REQUIREMENTS
- **Cross-Validation Summary**: Overall grade, confidence score, and risk profile.
- **Synthesis Report**: Consensus areas, conflict areas, and integrated recommendations.
- **User Approval Framework**: A clear breakdown of changes for user approval.
- **Modification Categories**: Classify changes by type (e.g., Task Structure, Technical).
## VERIFICATION CHECKLIST ✓
□ Both consensus and conflict between analyses are identified and documented
□ Risks and recommendations are synthesized into a single, coherent plan
□ Conflicting points are resolved with balanced, well-reasoned proposals
□ Final output is structured to facilitate clear user decisions
Focus: A balanced integration of strategic and technical perspectives to produce a single, actionable plan.

View File

@@ -0,0 +1,27 @@
Validate the strategic and architectural soundness of a workflow implementation plan.
## CORE CHECKLIST ⚡
□ Evaluate the plan against high-level system architecture
□ Assess the logic of the task breakdown and dependencies
□ Verify alignment with stated business objectives and success criteria
□ Identify strategic risks, not just low-level technical ones
## REQUIRED STRATEGIC ANALYSIS
1. **Architectural Soundness**: Assess design coherence and pattern consistency.
2. **Task Decomposition Logic**: Review task breakdown, granularity, and completeness.
3. **Dependency Coherence**: Analyze task interdependencies and logical flow.
4. **Business Alignment**: Validate against business objectives and requirements.
5. **Strategic Risk Identification**: Identify architectural, resource, and timeline risks.
## OUTPUT REQUIREMENTS
- **Strategic Assessment Report**: Grades for architecture, decomposition, and business alignment.
- **Detailed Recommendations**: Critical issues, improvements, and alternative approaches.
- **Action Items**: A prioritized list of changes (Immediate, Short-term, Long-term).
## VERIFICATION CHECKLIST ✓
□ The plan's architectural soundness has been thoroughly assessed
□ Task decomposition and dependencies are logical and coherent
□ The plan is confirmed to be in alignment with business goals
□ Strategic risks are identified with clear recommendations
Focus: High-level strategic concerns, business alignment, and long-term architectural implications.

View File

@@ -0,0 +1,224 @@
Generate ANALYSIS_RESULTS.md with comprehensive solution design and technical analysis.
## OUTPUT FILE STRUCTURE
### Required Sections
```markdown
# Technical Analysis & Solution Design
## Executive Summary
- **Analysis Focus**: {core_problem_or_improvement_area}
- **Analysis Timestamp**: {timestamp}
- **Tools Used**: {analysis_tools}
- **Overall Assessment**: {feasibility_score}/5 - {recommendation_status}
---
## 1. Current State Analysis
### Architecture Overview
- **Existing Patterns**: {key_architectural_patterns}
- **Code Structure**: {current_codebase_organization}
- **Integration Points**: {system_integration_touchpoints}
- **Technical Debt Areas**: {identified_debt_with_impact}
### Compatibility & Dependencies
- **Framework Alignment**: {framework_compatibility_assessment}
- **Dependency Analysis**: {critical_dependencies_and_risks}
- **Migration Considerations**: {backward_compatibility_concerns}
### Critical Findings
- **Strengths**: {what_works_well}
- **Gaps**: {missing_capabilities_or_issues}
- **Risks**: {identified_technical_and_business_risks}
---
## 2. Proposed Solution Design
### Core Architecture Principles
- **Design Philosophy**: {key_design_principles}
- **Architectural Approach**: {chosen_architectural_pattern_with_rationale}
- **Scalability Strategy**: {how_solution_scales}
### System Design
- **Component Architecture**: {high_level_component_design}
- **Data Flow**: {data_flow_patterns_and_state_management}
- **API Design**: {interface_contracts_and_specifications}
- **Integration Strategy**: {how_components_integrate}
### Key Design Decisions
1. **Decision**: {critical_design_choice}
- **Rationale**: {why_this_approach}
- **Alternatives Considered**: {other_options_and_tradeoffs}
- **Impact**: {implications_on_architecture}
2. **Decision**: {another_critical_choice}
- **Rationale**: {reasoning}
- **Alternatives Considered**: {tradeoffs}
- **Impact**: {consequences}
### Technical Specifications
- **Technology Stack**: {chosen_technologies_with_justification}
- **Code Organization**: {module_structure_and_patterns}
- **Testing Strategy**: {testing_approach_and_coverage}
- **Performance Targets**: {performance_requirements_and_benchmarks}
---
## 3. Implementation Strategy
### Development Approach
- **Core Implementation Pattern**: {primary_implementation_strategy}
- **Module Dependencies**: {dependency_graph_and_order}
- **Quality Assurance**: {qa_approach_and_validation}
### Code Modification Targets
**Purpose**: Specific code locations for modification AND new files to create
**Identified Targets**:
1. **Target**: `src/module/File.ts:function:45-52`
- **Type**: Modify existing
- **Modification**: {what_to_change}
- **Rationale**: {why_change_needed}
2. **Target**: `src/module/NewFile.ts`
- **Type**: Create new file
- **Purpose**: {file_purpose}
- **Rationale**: {why_new_file_needed}
**Format Rules**:
- Existing files: `file:function:lines` (with line numbers)
- New files: `file` (no function or lines)
- Unknown lines: `file:function:*`
### Feasibility Assessment
- **Technical Complexity**: {complexity_rating_and_analysis}
- **Performance Impact**: {expected_performance_characteristics}
- **Resource Requirements**: {development_resources_needed}
- **Maintenance Burden**: {ongoing_maintenance_considerations}
### Risk Mitigation
- **Technical Risks**: {implementation_risks_and_mitigation}
- **Integration Risks**: {compatibility_challenges_and_solutions}
- **Performance Risks**: {performance_concerns_and_strategies}
- **Security Risks**: {security_vulnerabilities_and_controls}
---
## 4. Solution Optimization
### Performance Optimization
- **Optimization Strategies**: {key_performance_improvements}
- **Caching Strategy**: {caching_approach_and_invalidation}
- **Resource Management**: {resource_utilization_optimization}
- **Bottleneck Mitigation**: {identified_bottlenecks_and_solutions}
### Security Enhancements
- **Security Model**: {authentication_authorization_approach}
- **Data Protection**: {data_security_and_encryption}
- **Vulnerability Mitigation**: {known_vulnerabilities_and_controls}
- **Compliance**: {regulatory_and_compliance_considerations}
### Code Quality
- **Code Standards**: {coding_conventions_and_patterns}
- **Testing Coverage**: {test_strategy_and_coverage_goals}
- **Documentation**: {documentation_requirements}
- **Maintainability**: {maintainability_practices}
---
## 5. Critical Success Factors
### Technical Requirements
- **Must Have**: {essential_technical_capabilities}
- **Should Have**: {important_but_not_critical_features}
- **Nice to Have**: {optional_enhancements}
### Quality Metrics
- **Performance Benchmarks**: {measurable_performance_targets}
- **Code Quality Standards**: {quality_metrics_and_thresholds}
- **Test Coverage Goals**: {testing_coverage_requirements}
- **Security Standards**: {security_compliance_requirements}
### Success Validation
- **Acceptance Criteria**: {how_to_validate_success}
- **Testing Strategy**: {validation_testing_approach}
- **Monitoring Plan**: {production_monitoring_strategy}
- **Rollback Plan**: {failure_recovery_strategy}
---
## 6. Analysis Confidence & Recommendations
### Assessment Scores
- **Conceptual Integrity**: {score}/5 - {brief_assessment}
- **Architectural Soundness**: {score}/5 - {brief_assessment}
- **Technical Feasibility**: {score}/5 - {brief_assessment}
- **Implementation Readiness**: {score}/5 - {brief_assessment}
- **Overall Confidence**: {overall_score}/5
### Final Recommendation
**Status**: {PROCEED|PROCEED_WITH_MODIFICATIONS|RECONSIDER|REJECT}
**Rationale**: {clear_explanation_of_recommendation}
**Critical Prerequisites**: {what_must_be_resolved_before_proceeding}
---
## 7. Reference Information
### Tool Analysis Summary
- **Gemini Insights**: {key_architectural_and_pattern_insights}
- **Codex Validation**: {technical_feasibility_and_implementation_notes}
- **Consensus Points**: {agreements_between_tools}
- **Conflicting Views**: {disagreements_and_resolution}
### Context & Resources
- **Analysis Context**: {context_package_reference}
- **Documentation References**: {relevant_documentation}
- **Related Patterns**: {similar_implementations_in_codebase}
- **External Resources**: {external_references_and_best_practices}
```
## CONTENT REQUIREMENTS
### Analysis Priority Sources
1. **PRIMARY**: Individual role analysis.md files (system-architect, ui-designer, etc.) - technical details, ADRs, decision context
2. **SECONDARY**: role analysis documents - multi-perspective requirements and design specs
3. **REFERENCE**: guidance-specification.md - discussion context
### Focus Areas
- **SOLUTION IMPROVEMENTS**: How to enhance current design
- **KEY DESIGN DECISIONS**: Critical choices with rationale, alternatives, tradeoffs
- **CRITICAL INSIGHTS**: Non-obvious findings, risks, opportunities
- **OPTIMIZATION**: Performance, security, code quality recommendations
### Exclusions
- ❌ Task lists or implementation steps
- ❌ Code examples or snippets
- ❌ Project management timelines
- ❌ Resource allocation details
## OUTPUT VALIDATION
### Completeness Checklist
□ All 7 sections present with content
□ Executive Summary with feasibility score
□ Current State Analysis with findings
□ Solution Design with 2+ key decisions
□ Implementation Strategy with code targets
□ Optimization recommendations in 3 areas
□ Confidence scores with final recommendation
□ Reference information included
### Quality Standards
□ Design decisions include rationale and alternatives
□ Code targets specify file:function:lines format
□ Risk assessment with mitigation strategies
□ Quantified scores (X/5) for all assessments
□ Clear PROCEED/RECONSIDER/REJECT recommendation
Focus: Solution-focused technical analysis emphasizing design decisions and critical insights.

View File

@@ -0,0 +1,176 @@
Validate technical feasibility and identify implementation risks for proposed solution design.
## CORE CHECKLIST ⚡
□ Read context-package.json and gemini-solution-design.md
□ Assess complexity, validate technology choices
□ Evaluate performance and security implications
□ Focus on TECHNICAL FEASIBILITY and RISK ASSESSMENT
□ Write output to specified .workflow/active/{session_id}/.process/ path
## PREREQUISITE ANALYSIS
### Required Input Files
1. **context-package.json**: Task requirements, source files, tech stack
2. **gemini-solution-design.md**: Proposed solution design and architecture
3. **workflow-session.json**: Session state and context
4. **CLAUDE.md**: Project standards and conventions
### Analysis Dependencies
- Review Gemini's proposed solution design
- Validate against actual codebase capabilities
- Assess implementation complexity realistically
- Identify gaps between design and execution
## REQUIRED VALIDATION
### 1. Feasibility Assessment
- **Complexity Rating**: Rate technical complexity (1-5 scale)
- 1: Trivial - straightforward implementation
- 2: Simple - well-known patterns
- 3: Moderate - some challenges
- 4: Complex - significant challenges
- 5: Very Complex - high risk, major unknowns
- **Resource Requirements**: Estimate development effort
- Development time (hours/days/weeks)
- Required expertise level
- Infrastructure needs
- **Technology Compatibility**: Validate proposed tech stack
- Framework version compatibility
- Library maturity and support
- Integration with existing systems
### 2. Risk Analysis
- **Implementation Risks**: Technical challenges and blockers
- Unknown implementation patterns
- Missing capabilities or APIs
- Breaking changes to existing code
- **Integration Challenges**: System integration concerns
- Data format compatibility
- API contract changes
- Dependency conflicts
- **Performance Concerns**: Performance and scalability risks
- Resource consumption (CPU, memory, I/O)
- Latency and throughput impact
- Caching and optimization needs
- **Security Concerns**: Security vulnerabilities and threats
- Authentication/authorization gaps
- Data exposure risks
- Compliance violations
### 3. Implementation Validation
- **Development Approach**: Validate proposed implementation strategy
- Verify module dependency order
- Assess incremental development feasibility
- Evaluate testing approach
- **Quality Standards**: Validate quality requirements
- Test coverage achievability
- Performance benchmark realism
- Documentation completeness
- **Maintenance Implications**: Long-term sustainability
- Code maintainability assessment
- Technical debt evaluation
- Evolution and extensibility
### 4. Code Target Verification
Review Gemini's proposed code targets:
- **Validate existing targets**: Confirm file:function:lines exist
- **Assess new file targets**: Evaluate necessity and placement
- **Identify missing targets**: Suggest additional modification points
- **Refine target specifications**: Provide more precise line numbers if possible
### 5. Recommendations
- **Must-Have Requirements**: Critical requirements for success
- **Optimization Opportunities**: Performance and quality improvements
- **Security Controls**: Essential security measures
- **Risk Mitigation**: Strategies to reduce identified risks
## OUTPUT REQUIREMENTS
### Output File
**Path**: `.workflow/active/{session_id}/.process/codex-feasibility-validation.md`
**Format**: Follow structure from `~/.ccw/workflows/cli-templates/prompts/workflow/analysis-results-structure.txt`
### Required Sections
Focus on these sections from the template:
- Executive Summary (with Codex perspective)
- Current State Analysis (validation findings)
- Implementation Strategy (feasibility assessment)
- Solution Optimization (risk mitigation)
- Confidence Scores (technical feasibility focus)
### Content Guidelines
- ✅ Focus on technical feasibility and risk assessment
- ✅ Verify code targets from Gemini's design
- ✅ Provide concrete risk mitigation strategies
- ✅ Quantify complexity and effort estimates
- ❌ Do NOT create task breakdowns
- ❌ Do NOT provide step-by-step implementation guides
- ❌ Do NOT include code examples
## VALIDATION METHODOLOGY
### Complexity Scoring
Rate each aspect on 1-5 scale:
- Technical Complexity
- Integration Complexity
- Performance Risk
- Security Risk
- Maintenance Burden
### Risk Classification
- **LOW**: Minor issues, easily addressable
- **MEDIUM**: Manageable challenges with clear mitigation
- **HIGH**: Significant concerns requiring major mitigation
- **CRITICAL**: Fundamental viability threats
### Feasibility Judgment
- **PROCEED**: Technically feasible with acceptable risk
- **PROCEED_WITH_MODIFICATIONS**: Feasible but needs adjustments
- **RECONSIDER**: High risk, major changes needed
- **REJECT**: Not feasible with current approach
## CONTEXT INTEGRATION
### Gemini Analysis Integration
- Review proposed architecture and design decisions
- Validate assumptions and technology choices
- Cross-check code targets against actual codebase
- Assess realism of performance targets
### Codebase Reality Check
- Verify existing code capabilities
- Identify actual technical constraints
- Assess team skill compatibility
- Evaluate infrastructure readiness
### Session Context
- Consider session history and previous decisions
- Align with project architecture standards
- Respect existing patterns and conventions
## EXECUTION MODE
**Mode**: Analysis with write permission for output file
**CLI Tool**: Codex with --skip-git-repo-check -s danger-full-access
**Timeout**: 60-90 minutes for complex tasks
**Output**: Single file codex-feasibility-validation.md
**Trigger**: Only for complex tasks (>6 modules)
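For orientation, a minimal invocation sketch is shown below, assuming the Codex command pattern documented in the CLI execute-mode task schema later in these templates; the session id, working directory, and prompt wording are illustrative only, not the authoritative orchestration.
```bash
# Hypothetical launch of the feasibility-validation step (flags follow the Codex
# pattern used elsewhere in these templates; real values come from the workflow).
SESSION_ID="WFS-example-session"                      # illustrative session id
PROC_DIR=".workflow/active/${SESSION_ID}/.process"
codex -C . --full-auto exec \
  "PURPOSE: Validate technical feasibility of the proposed solution design \
   TASK: Assess complexity, risks, and code targets from gemini-solution-design.md \
   MODE: analysis \
   CONTEXT: @${PROC_DIR}/context-package.json @${PROC_DIR}/gemini-solution-design.md \
   EXPECTED: ${PROC_DIR}/codex-feasibility-validation.md \
   RULES: Follow the validation methodology in this prompt" \
  --skip-git-repo-check -s danger-full-access
```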
## VERIFICATION CHECKLIST ✓
□ context-package.json and gemini-solution-design.md read
□ Complexity rated on 1-5 scale with justification
□ All risk categories assessed (technical, integration, performance, security)
□ Code targets verified and refined
□ Risk mitigation strategies provided
□ Resource requirements estimated
□ Final feasibility judgment (PROCEED/RECONSIDER/REJECT)
□ Output written to .workflow/active/{session_id}/.process/codex-feasibility-validation.md
Focus: Technical feasibility validation with realistic risk assessment and mitigation strategies.

View File

@@ -0,0 +1,131 @@
Analyze and design optimal solution with comprehensive architecture evaluation and design decisions.
## CORE CHECKLIST ⚡
□ Read context-package.json to understand task requirements, source files, tech stack
□ Analyze current architecture patterns and code structure
□ Propose solution design with key decisions and rationale
□ Focus on SOLUTION IMPROVEMENTS and KEY DESIGN DECISIONS
□ Write output to specified .workflow/active/{session_id}/.process/ path
## ANALYSIS PRIORITY
### Source Hierarchy
1. **PRIMARY**: Individual role analysis.md files (system-architect, ui-designer, data-architect, etc.)
- Technical details and implementation considerations
- Architecture Decision Records (ADRs)
- Design decision context and rationale
2. **SECONDARY**: role analysis documents
- Integrated requirements across roles
- Cross-role alignment and dependencies
- Unified feature specifications
3. **REFERENCE**: guidance-specification.md
- Discussion context and background
- Initial problem framing
## REQUIRED ANALYSIS
### 1. Current State Assessment
- Identify existing architectural patterns and code structure
- Map integration points and dependencies
- Evaluate technical debt and pain points
- Assess framework compatibility and constraints
### 2. Solution Design
- Propose core architecture principles and approach
- Design component architecture and data flow
- Specify API contracts and integration strategy
- Define technology stack with justification
### 3. Key Design Decisions
For each critical decision:
- **Decision**: What is being decided
- **Rationale**: Why this approach
- **Alternatives Considered**: Other options and their tradeoffs
- **Impact**: Implications on architecture, performance, maintainability
Minimum 2 key decisions required.
### 4. Code Modification Targets
Identify specific code locations for changes:
- **Existing files**: `file:function:lines` format (e.g., `src/auth/login.ts:validateUser:45-52`)
- **New files**: `file` only (e.g., `src/auth/PasswordReset.ts`)
- **Unknown lines**: `file:function:*` (e.g., `src/auth/service.ts:refreshToken:*`)
For each target:
- Type: Modify existing | Create new
- Modification/Purpose: What changes needed
- Rationale: Why this target
### 5. Critical Insights
- Strengths: What works well in current/proposed design
- Gaps: Missing capabilities or concerns
- Risks: Technical, integration, performance, security
- Optimization Opportunities: Performance, security, code quality
### 6. Feasibility Assessment
- Technical Complexity: Rating and analysis
- Performance Impact: Expected characteristics
- Resource Requirements: Development effort
- Maintenance Burden: Ongoing considerations
## OUTPUT REQUIREMENTS
### Output File
**Path**: `.workflow/active/{session_id}/.process/gemini-solution-design.md`
**Format**: Follow structure from `~/.ccw/workflows/cli-templates/prompts/workflow/analysis-results-structure.txt`
### Required Sections
- Executive Summary with feasibility score
- Current State Analysis
- Proposed Solution Design with 2+ key decisions
- Implementation Strategy with code targets
- Solution Optimization (performance, security, quality)
- Critical Success Factors
- Confidence Scores with recommendation
### Content Guidelines
- ✅ Focus on solution improvements and key design decisions
- ✅ Include rationale, alternatives, and tradeoffs for decisions
- ✅ Provide specific code targets in correct format
- ✅ Quantify assessments with scores (X/5)
- ❌ Do NOT create task lists or implementation steps
- ❌ Do NOT include code examples or snippets
- ❌ Do NOT create project management timelines
## CONTEXT INTEGRATION
### Session Context
- Load context-package.json for task requirements
- Reference workflow-session.json for session state
- Review CLAUDE.md for project standards
### Brainstorm Context
If brainstorming artifacts exist:
- Prioritize individual role analysis.md files
- Use role analysis documents for integrated view
- Reference guidance-specification.md for context
### Codebase Context
- Identify similar patterns in existing code
- Evaluate success/failure of current approaches
- Ensure consistency with project architecture
## EXECUTION MODE
**Mode**: Analysis with write permission for output file
**CLI Tool**: Gemini wrapper with --approval-mode yolo
**Timeout**: 40-60 minutes based on complexity
**Output**: Single file gemini-solution-design.md
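As a rough, non-authoritative sketch, this step might be launched as follows, reusing the Gemini command pattern shown in the CLI execute-mode templates; the session id and prompt text are illustrative.
```bash
# Hypothetical launch of the solution-design step; the orchestrator normally builds this call.
SESSION_ID="WFS-example-session"                      # illustrative session id
PROC_DIR=".workflow/active/${SESSION_ID}/.process"
gemini --approval-mode yolo \
  "PURPOSE: Design the optimal solution with architecture evaluation \
   TASK: Produce gemini-solution-design.md covering all required sections above \
   MODE: write \
   CONTEXT: @${PROC_DIR}/context-package.json @CLAUDE.md \
   EXPECTED: ${PROC_DIR}/gemini-solution-design.md \
   RULES: Focus on design decisions and rationale; no task lists or code snippets"
```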
## VERIFICATION CHECKLIST ✓
□ context-package.json read and analyzed
□ All 7 required sections present in output
□ 2+ key design decisions with rationale and alternatives
□ Code targets specified in correct format
□ Feasibility scores provided (X/5)
□ Final recommendation (PROCEED/RECONSIDER/REJECT)
□ Output written to .workflow/active/{session_id}/.process/gemini-solution-design.md
Focus: Comprehensive solution design emphasizing architecture decisions and critical insights.

View File

@@ -0,0 +1,286 @@
IMPL_PLAN.md Template - Implementation Plan Document Structure
## Document Frontmatter
```yaml
---
identifier: WFS-{session-id}
source: "User requirements" | "File: path" | "Issue: ISS-001"
analysis: .workflow/active/{session-id}/.process/ANALYSIS_RESULTS.md
artifacts: .workflow/active/{session-id}/.brainstorming/
context_package: .workflow/active/{session-id}/.process/context-package.json # CCW smart context
workflow_type: "standard | tdd | design" # Indicates execution model
verification_history: # CCW quality gates
concept_verify: "passed | skipped | pending"
action_plan_verify: "pending"
phase_progression: "brainstorm → context → analysis → concept_verify → planning" # CCW workflow phases
---
```
## Document Structure
# Implementation Plan: {Project Title}
## 1. Summary
Core requirements, objectives, technical approach summary (2-3 paragraphs max).
**Core Objectives**:
- [Key objective 1]
- [Key objective 2]
**Technical Approach**:
- [High-level approach]
## 2. Context Analysis
### CCW Workflow Context
**Phase Progression**:
- ✅ Phase 1: Brainstorming (role analyses generated)
- ✅ Phase 2: Context Gathering (context-package.json: {N} files, {M} modules analyzed)
- ✅ Phase 3: Enhanced Analysis (ANALYSIS_RESULTS.md: Gemini/Qwen/Codex parallel insights)
- ✅ Phase 4: Concept Verification ({X} clarifications answered, role analyses updated | skipped)
- ⏳ Phase 5: Action Planning (current phase - generating IMPL_PLAN.md)
**Quality Gates**:
- concept-verify: ✅ Passed (0 ambiguities remaining) | ⏭️ Skipped (user decision) | ⏳ Pending
- plan-verify: ⏳ Pending (recommended before /workflow:execute)
**Context Package Summary**:
- **Focus Paths**: {list key directories from context-package.json}
- **Key Files**: {list primary files for modification}
- **Module Depth Analysis**: {from get_modules_by_depth.sh output}
- **Smart Context**: {total file count} files, {module count} modules, {dependency count} dependencies identified
### Project Profile
- **Type**: Greenfield/Enhancement/Refactor
- **Scale**: User count, data volume, complexity
- **Tech Stack**: Primary technologies
- **Timeline**: Duration and milestones
### Module Structure
```
[Directory tree showing key modules]
```
### Dependencies
**Primary**: [Core libraries and frameworks]
**APIs**: [External services]
**Development**: [Testing, linting, CI/CD tools]
### Patterns & Conventions
- **Architecture**: [Key patterns like DI, Event-Driven]
- **Component Design**: [Design patterns]
- **State Management**: [State strategy]
- **Code Style**: [Naming, TypeScript coverage]
## 3. Brainstorming Artifacts Reference
### Artifact Usage Strategy
**Primary Reference (role analyses)**:
- **What**: Role-specific analyses from brainstorming providing multi-perspective insights
- **When**: Every task references relevant role analyses for requirements and design decisions
- **How**: Extract requirements, architecture decisions, UI/UX patterns from applicable role documents
- **Priority**: Collective authoritative source - multiple role perspectives provide comprehensive coverage
- **CCW Value**: Maintains role-specific expertise while enabling cross-role integration during planning
**Context Intelligence (context-package.json)**:
- **What**: Smart context gathered by CCW's context-gather phase
- **Content**: Focus paths, dependency graph, existing patterns, module structure
- **Usage**: Tasks load this via `flow_control.preparatory_steps` for environment setup
- **CCW Value**: Automated intelligent context discovery replacing manual file exploration
**Technical Analysis (ANALYSIS_RESULTS.md)**:
- **What**: Gemini/Qwen/Codex parallel analysis results
- **Content**: Optimization strategies, risk assessment, architecture review, implementation patterns
- **Usage**: Referenced in task planning for technical guidance and risk mitigation
- **CCW Value**: Multi-model parallel analysis providing comprehensive technical intelligence
### Integrated Specifications (Highest Priority)
- **role analyses**: Comprehensive implementation blueprint
- Contains: Architecture design, UI/UX guidelines, functional/non-functional requirements, implementation roadmap, risk assessment
### Supporting Artifacts (Reference)
- **guidance-specification.md**: Role-specific discussion points and analysis framework
- **system-architect/analysis.md**: Detailed architecture specifications
- **ui-designer/analysis.md**: Layout and component specifications
- **product-manager/analysis.md**: Product vision and user stories
**Artifact Priority in Development**:
1. role analyses (primary reference for all tasks)
2. context-package.json (smart context for execution environment)
3. ANALYSIS_RESULTS.md (technical analysis and optimization strategies)
4. Role-specific analyses (fallback for detailed specifications)
## 4. Implementation Strategy
### Execution Strategy
**Execution Model**: [Sequential | Parallel | Phased | TDD Cycles]
**Rationale**: [Why this execution model fits the project]
**Parallelization Opportunities**:
- [List independent workstreams]
**Serialization Requirements**:
- [List critical dependencies]
### Architectural Approach
**Key Architecture Decisions**:
- [ADR references from role analyses]
- [Justification for architecture patterns]
**Integration Strategy**:
- [How modules communicate]
- [State management approach]
### Key Dependencies
**Task Dependency Graph**:
```
[High-level dependency visualization]
```
**Critical Path**: [Identify bottleneck tasks]
### Testing Strategy
**Testing Approach**:
- Unit testing: [Tools, scope]
- Integration testing: [Key integration points]
- E2E testing: [Critical user flows]
**Coverage Targets**:
- Lines: ≥70%
- Functions: ≥70%
- Branches: ≥65%
**Quality Gates**:
- [CI/CD gates]
- [Performance budgets]
## 5. Task Breakdown Summary
### Task Count
**{N} tasks** (flat hierarchy | two-level hierarchy, sequential | parallel execution)
### Task Structure
- **IMPL-1**: [Main task title]
- **IMPL-2**: [Main task title]
...
### Complexity Assessment
- **High**: [List with rationale]
- **Medium**: [List]
- **Low**: [List]
### Dependencies
[Reference Section 4.3 for dependency graph]
**Parallelization Opportunities**:
- [Specific task groups that can run in parallel]
## 6. Implementation Plan (Detailed Phased Breakdown)
### Execution Strategy
**Phase 1 (Weeks 1-2): [Phase Name]**
- **Tasks**: IMPL-1, IMPL-2
- **Deliverables**:
- [Specific deliverable 1]
- [Specific deliverable 2]
- **Success Criteria**:
- [Measurable criterion]
**Phase 2 (Weeks 3-N): [Phase Name]**
...
### Resource Requirements
**Development Team**:
- [Team composition and skills]
**External Dependencies**:
- [Third-party services, APIs]
**Infrastructure**:
- [Development, staging, production environments]
## 7. Risk Assessment & Mitigation
| Risk | Impact | Probability | Mitigation Strategy | Owner |
|------|--------|-------------|---------------------|-------|
| [Risk description] | High/Med/Low | High/Med/Low | [Strategy] | [Role] |
**Critical Risks** (High impact + High probability):
- [Risk 1]: [Detailed mitigation plan]
**Monitoring Strategy**:
- [How risks will be monitored]
## 8. Success Criteria
**Functional Completeness**:
- [ ] All requirements from role analyses implemented
- [ ] All acceptance criteria from task.json files met
**Technical Quality**:
- [ ] Test coverage ≥70%
- [ ] Bundle size within budget
- [ ] Performance targets met
**Operational Readiness**:
- [ ] CI/CD pipeline operational
- [ ] Monitoring and logging configured
- [ ] Documentation complete
**Business Metrics**:
- [ ] [Key business metrics from role analyses]
## Template Usage Guidelines
### When Generating IMPL_PLAN.md
1. **Fill Frontmatter Variables**:
- Replace {session-id} with actual session ID
- Set workflow_type based on planning phase
- Update verification_history based on concept-verify results
2. **Populate CCW Workflow Context**:
- Extract file/module counts from context-package.json
- Document phase progression based on completed workflow steps
- Update quality gate status (passed/skipped/pending)
3. **Extract from Analysis Results**:
- Core objectives from ANALYSIS_RESULTS.md
- Technical approach and architecture decisions
- Risk assessment and mitigation strategies
4. **Reference Brainstorming Artifacts**:
- List detected artifacts with correct paths
- Document artifact priority and usage strategy
- Map artifacts to specific tasks based on domain
5. **Define Implementation Strategy**:
- Choose execution model (sequential/parallel/phased)
- Identify parallelization opportunities
- Document critical path and dependencies
6. **Break Down Tasks**:
- List all task IDs and titles
- Assess complexity (high/medium/low)
- Create dependency graph visualization
7. **Set Success Criteria**:
- Extract from role analyses
- Include measurable metrics
- Define quality gates
### Validation Checklist
Before finalizing IMPL_PLAN.md:
- [ ] All frontmatter fields populated correctly
- [ ] CCW workflow context reflects actual phase progression
- [ ] Brainstorming artifacts correctly referenced
- [ ] Task breakdown matches generated task JSONs
- [ ] Dependencies are acyclic and logical
- [ ] Success criteria are measurable
- [ ] Risk assessment includes mitigation strategies
- [ ] All {placeholder} variables replaced with actual values

View File

@@ -0,0 +1,172 @@
# SKILL.md Index Generation Context
## Description Field Requirements
When generating final aggregated output, remember to prepare data for SKILL.md description field:
**Required Data Points**:
- Project root path (to be obtained via git command)
- Use cases: "continuing development", "analyzing past implementations", "learning from workflow history"
- Trigger phrase: "especially when no relevant context exists in memory"
**Description Format**:
```
Progressive workflow development history (located at {project_root}).
Load this SKILL when continuing development, analyzing past implementations,
or learning from workflow history, especially when no relevant context exists in memory.
```
---
You are aggregating workflow session history to generate a progressive SKILL package.
## Your Task
Analyze archived workflow sessions and aggregate:
1. **Lessons Learned** - Successes, challenges, and watch patterns
2. **Conflict Patterns** - Recurring conflicts and resolutions
3. **Implementation Summaries** - Key outcomes by functional domain
## Input Data
You will receive:
- Session metadata (session_id, description, tags, metrics)
- Lessons from each session (successes, challenges, watch_patterns)
- IMPL_PLAN summaries
- Context package metadata (keywords, tech_stack, complexity)
## Output Requirements
### 1. Aggregated Lessons
**Successes by Category**:
- Group successful patterns by functional domain (auth, testing, performance, etc.)
- Identify practices that succeeded across multiple sessions
- Mark best practices (success in 3+ sessions)
**Challenges by Severity**:
- HIGH: Blocked development for >4 hours OR repeated in 3+ sessions
- MEDIUM: Required significant rework OR repeated in 2 sessions
- LOW: Minor issues resolved quickly
**Watch Patterns**:
- Identify patterns mentioned in 2+ sessions
- Prioritize by frequency and severity
- Mark CRITICAL patterns (appeared in 3+ sessions with HIGH severity)
**Format**:
```json
{
"successes_by_category": {
"auth": ["JWT implementation with refresh tokens (3 sessions)", ...],
"testing": ["TDD reduced bugs by 60% (2 sessions)", ...]
},
"challenges_by_severity": {
"high": [
{
"challenge": "Token refresh edge cases",
"sessions": ["WFS-user-auth", "WFS-jwt-refresh"],
"frequency": 2
}
],
"medium": [...],
"low": [...]
},
"watch_patterns": [
{
"pattern": "Token concurrency issues",
"frequency": 3,
"severity": "CRITICAL",
"sessions": ["WFS-user-auth", "WFS-jwt-refresh", "WFS-oauth"]
}
]
}
```
### 2. Conflict Patterns
**Analysis**:
- Group conflicts by type (architecture, dependencies, testing, performance)
- Identify recurring patterns (same conflict in different sessions)
- Link successful resolutions to specific sessions
**Format**:
```json
{
"architecture": [
{
"pattern": "Multiple authentication strategies conflict",
"description": "Different auth methods (JWT, OAuth, session) cause integration issues",
"sessions": ["WFS-user-auth", "WFS-oauth"],
"resolution": "Unified auth interface with strategy pattern",
"code_impact": ["src/auth/interface.ts", "src/auth/jwt.ts", "src/auth/oauth.ts"],
"frequency": 2,
"severity": "high"
}
],
"dependencies": [...],
"testing": [...],
"performance": [...]
}
```
### 3. Implementation Summary
**By Functional Domain**:
- Group sessions by primary tag/domain
- Summarize key accomplishments
- Link to context packages and plans
**Format**:
```json
{
"auth": {
"session_count": 3,
"sessions": [
{
"session_id": "WFS-user-auth",
"description": "JWT authentication implementation",
"key_outcomes": [
"JWT token generation and validation",
"Refresh token mechanism",
"Secure password hashing with bcrypt"
],
"context_package": ".workflow/.archives/WFS-user-auth/.process/context-package.json",
"metrics": {"task_count": 5, "success_rate": 100, "duration_hours": 4.5}
}
],
"cumulative_metrics": {
"total_tasks": 15,
"avg_success_rate": 95,
"total_hours": 12.5
}
},
"payment": {...},
"ui": {...}
}
```
## Analysis Guidelines
1. **Identify Patterns**: Look for recurring themes across sessions
2. **Prioritize by Impact**: Focus on high-frequency, high-impact patterns
3. **Link Sessions**: Connect related sessions (same domain, similar challenges)
4. **Extract Wisdom**: Surface actionable insights from lessons learned
5. **Maintain Context**: Keep references to original sessions and files
## Quality Criteria
- ✅ All sessions processed and categorized
- ✅ Patterns identified and frequency counted
- ✅ Severity levels assigned based on impact
- ✅ Resolutions linked to specific sessions
- ✅ Output is valid JSON with no missing fields
- ✅ References (paths) are accurate and complete
## Important Notes
- **NO hallucination**: Only aggregate data from provided sessions
- **Preserve detail**: Keep specific session references for traceability
- **Smart grouping**: Group similar patterns even if wording differs slightly
- **Frequency matters**: Prioritize patterns that appear in multiple sessions
- **Context preservation**: Keep context package paths for on-demand loading

View File

@@ -0,0 +1,94 @@
Template for generating conflict-patterns.md
## Purpose
Document recurring conflict patterns across workflow sessions with resolutions.
## File Location
`.claude/skills/workflow-progress/conflict-patterns.md`
## Update Strategy
- **Incremental mode**: Add new conflicts, update frequency counters for existing patterns
- **Full mode**: Regenerate entire conflict analysis from all sessions
## Structure
```markdown
# Workflow Conflict Patterns
## Architecture Conflicts
### {Conflict_Pattern_Title}
**Pattern**: {concise_pattern_description}
**Sessions**: {session_id_1}, {session_id_2}
**Resolution**: {resolution_strategy}
**Code Impact**:
- Modified: {file_path_1}, {file_path_2}
- Added: {file_path_3}
- Tests: {test_file_path}
**Frequency**: {count} sessions
**Severity**: {high|medium|low}
---
## Dependency Conflicts
### {Conflict_Pattern_Title}
**Pattern**: {concise_pattern_description}
**Sessions**: {session_id_list}
**Resolution**: {resolution_strategy}
**Package Changes**:
- Updated: {package_name}@{version}
- Locked: {dependency_name}
**Frequency**: {count} sessions
**Severity**: {high|medium|low}
---
## Testing Conflicts
### {Conflict_Pattern_Title}
...
---
## Performance Conflicts
### {Conflict_Pattern_Title}
...
```
## Data Sources
- IMPL_PLAN summaries: `.workflow/.archives/{session_id}/IMPL_PLAN.md`
- Context packages: `.workflow/.archives/{session_id}/.process/context-package.json` (reference only)
- Session lessons: `manifest.json` -> `archives[].lessons.challenges`
## Conflict Identification (Use CCW CLI)
**Command Pattern**:
```bash
ccw cli -p "
PURPOSE: Identify conflict patterns from workflow sessions
TASK: • Extract conflicts from IMPL_PLAN and lessons • Group by type (architecture/dependencies/testing/performance) • Identify recurring patterns (same conflict in different sessions) • Link resolutions to specific sessions
MODE: analysis
CONTEXT: @.workflow/.archives/*/IMPL_PLAN.md @.workflow/.archives/manifest.json
EXPECTED: Conflict patterns with frequency and resolution
CONSTRAINTS: analysis=READ-ONLY
" --tool gemini --mode analysis --rule workflow-skill-aggregation --cd .workflow/.archives
```
**Pattern Grouping**:
- **Architecture**: Design conflicts, incompatible strategies, interface mismatches
- **Dependencies**: Version conflicts, library incompatibilities, package issues
- **Testing**: Mock data inconsistencies, test environment issues, coverage gaps
- **Performance**: Bottlenecks, optimization conflicts, resource issues
## Formatting Rules
- Sort by frequency within each category
- Include code impact for traceability
- Mark high-frequency patterns (3+ sessions) as "RECURRING"
- Keep resolution descriptions actionable
- Use relative paths for file references

View File

@@ -0,0 +1,224 @@
Template for generating SKILL.md (index file)
## Purpose
Create main SKILL package index with progressive loading structure and session references.
## File Location
`.claude/skills/workflow-progress/SKILL.md`
## Update Strategy
- **Always regenerated**: This file is always updated with latest session count, domains, dates
## Structure
```markdown
---
name: workflow-progress
description: Progressive workflow development history (located at {project_root}). Load this SKILL when continuing development, analyzing past implementations, or learning from workflow history, especially when no relevant context exists in memory.
version: {semantic_version}
---
# Workflow Progress SKILL Package
## Documentation: `../../../.workflow/.archives/`
**Total Sessions**: {session_count}
**Functional Domains**: {domain_list}
**Date Range**: {earliest_date} - {latest_date}
## Progressive Loading
### Level 0: Quick Overview (~2K tokens)
- [Sessions Timeline](sessions-timeline.md#recent-sessions-last-5) - Recent 5 sessions
- [Top Conflict Patterns](conflict-patterns.md#top-patterns) - Top 3 recurring conflicts
- Quick reference for last completed work
**Use Case**: Quick context refresh before starting new task
### Level 1: Core History (~8K tokens)
- [Sessions Timeline](sessions-timeline.md) - Recent 10 sessions with details
- [Lessons Learned](lessons-learned.md#best-practices) - Success patterns by category
- [Conflict Patterns](conflict-patterns.md) - Known conflict types and resolutions
- Context package references (metadata only)
**Use Case**: Understanding recent development patterns and avoiding known pitfalls
### Level 2: Complete History (~25K tokens)
- All archived sessions with metadata
- Full lessons learned (successes, challenges, watch patterns)
- Complete conflict analysis with resolutions
- IMPL_PLAN summaries from all sessions
- Context package paths for on-demand loading
**Use Case**: Comprehensive review before major refactoring or architecture changes
### Level 3: Deep Dive (~40K tokens)
- Full IMPL_PLAN.md and TODO_LIST.md from all sessions
- Detailed task completion summaries
- Cross-session dependency analysis
- Direct context package file references
**Use Case**: Investigating specific implementation details or debugging historical decisions
---
## Quick Access
### Recent Sessions
{list of 5 most recent sessions with one-line descriptions}
### By Domain
- **{Domain_1}**: {count} sessions
- **{Domain_2}**: {count} sessions
- **{Domain_3}**: {count} sessions
### Top Watch Patterns
1. {most_frequent_watch_pattern}
2. {second_most_frequent}
3. {third_most_frequent}
---
## Session Index
### {Domain_Category} Sessions
- [{session_id}](../../../.workflow/.archives/{session_id}/) - {one_line_description} ({date})
- Context: [context-package.json](../../../.workflow/.archives/{session_id}/.process/context-package.json)
- Plan: [IMPL_PLAN.md](../../../.workflow/.archives/{session_id}/IMPL_PLAN.md)
- Tags: {tag1}, {tag2}, {tag3}
---
## Usage Examples
### Loading Quick Context
```markdown
Load Level 0 from workflow-progress SKILL for overview of recent work
```
### Investigating {Domain} History
```markdown
Load Level 2 from workflow-progress SKILL, filter by "{domain}" tag
```
### Full Historical Analysis
```markdown
Load Level 3 from workflow-progress SKILL for complete development history
```
```
## Data Sources
- Manifest: `.workflow/.archives/manifest.json`
- All session metadata from manifest entries
## Generation Rules
- Version format: `{major}.{minor}.{patch}` (increment patch for each update)
- Domain list: Extract unique tags from all sessions, sort by frequency
- Date range: Find earliest and latest archived_at timestamps
- Token estimates: Approximate based on content length
- Use relative paths (../../../.workflow/.archives/) for session references
## Formatting Rules
- Keep descriptions concise
- Sort sessions by date (newest first)
- Group sessions by primary tag
- Include only top 5 recent sessions in Quick Access
- Include top 3 watch patterns
---
## Variable Substitution Guide
### Required Variables
- `{project_root}`: Absolute project path from git root (e.g., "/d/Claude_dms3")
- `{semantic_version}`: Version string (e.g., "1.0.0", increment patch for each update)
- `{session_count}`: Total number of archived sessions
- `{domain_list}`: Comma-separated unique tags sorted by frequency
- `{earliest_date}`: Earliest session archived_at timestamp
- `{latest_date}`: Most recent session archived_at timestamp
### Generated Variables
- `{one_line_description}`: Extract from session description (first sentence, max 80 chars)
- `{domain_category}`: Primary tag from session metadata
- `{most_frequent_watch_pattern}`: Top recurring watch pattern across sessions
- `{date}`: Session archived_at in YYYY-MM-DD format
### Description Field Generation
**Format Template**:
```
Progressive workflow development history (located at {project_root}).
Load this SKILL when continuing development, analyzing past implementations,
or learning from workflow history, especially when no relevant context exists in memory.
```
**Generation Rules**:
1. **Project Root**: Use `git rev-parse --show-toplevel` to get absolute path
2. **Use Cases**: ALWAYS include these trigger phrases:
- "continuing development" (开发延续)
- "analyzing past implementations" (分析历史)
- "learning from workflow history" (学习历史)
3. **Trigger Optimization**: MUST include "especially when no relevant context exists in memory"
4. **Path Format**: Use forward slashes for cross-platform compatibility (e.g., "/d/project")
**Why This Matters**:
- **Auto-loading precision**: The path reference ensures Claude loads the correct project's SKILL
- **Context awareness**: The "when no relevant context exists" clause prevents redundant loading
- **Action coverage**: Three use cases cover all workflow scenarios
---
## Generation Instructions
### Step 1: Get Project Root
```bash
git rev-parse --show-toplevel # Returns: /d/Claude_dms3
```
### Step 2: Read Manifest
```bash
cat .workflow/.archives/manifest.json
```
Extract:
- Total session count
- All session tags (for domain list)
- Date range (earliest/latest archived_at)
### Step 3: Aggregate Session Data
- Count sessions per domain
- Extract top 5 recent sessions
- Identify top 3 watch patterns from lessons
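As a non-authoritative sketch, the aggregation in Step 3 could be approximated with jq, assuming `manifest.json` holds an `archives` array whose entries carry `session_id`, `tags`, `archived_at`, and `lessons.watch_patterns` as plain strings:
```bash
# Illustrative aggregation only; exact manifest field names may differ per project.
MANIFEST=".workflow/.archives/manifest.json"
# Sessions per domain (tag), sorted by frequency
jq -r '.archives[].tags[]' "$MANIFEST" | sort | uniq -c | sort -rn
# Five most recent sessions (newest first)
jq -r '.archives | sort_by(.archived_at) | reverse | .[:5][] | "\(.session_id)\t\(.archived_at)"' "$MANIFEST"
# Top 3 watch patterns across sessions
jq -r '.archives[].lessons.watch_patterns[]' "$MANIFEST" | sort | uniq -c | sort -rn | head -3
```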
### Step 4: Generate Description
Apply format template with project_root from Step 1.
### Step 5: Calculate Version
- Read existing SKILL.md version (if exists)
- Increment patch version (e.g., 1.0.5 → 1.0.6)
- Use 1.0.0 for new SKILL package
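A minimal bash sketch of this version bump, assuming the existing SKILL.md keeps a `version: X.Y.Z` line in its frontmatter as shown in the structure above:
```bash
# Illustrative patch-version increment; falls back to 1.0.0 for a new SKILL package.
SKILL=".claude/skills/workflow-progress/SKILL.md"
if [ -f "$SKILL" ]; then
  current=$(grep -m1 '^version:' "$SKILL" | awk '{print $2}')
  next=$(echo "$current" | awk -F. '{printf "%d.%d.%d", $1, $2, $3 + 1}')
else
  next="1.0.0"
fi
echo "New SKILL version: $next"
```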
### Step 6: Build Progressive Loading Sections
- Level 0: Recent 5 sessions + Top 3 conflicts
- Level 1: Recent 10 sessions + Best practices
- Level 2: All sessions + Full lessons + Full conflicts
- Level 3: Include IMPL_PLAN and TODO_LIST references
### Step 7: Write SKILL.md
- Apply all variable substitutions
- Use relative paths: `../../../.workflow/.archives/`
- Validate all referenced files exist
---
## Validation Checklist
- [ ] `{project_root}` uses absolute path with forward slashes
- [ ] Description includes all three use cases
- [ ] Description includes trigger optimization phrase
- [ ] Version incremented correctly
- [ ] All session references use relative paths
- [ ] Domain list sorted by frequency
- [ ] Date range matches manifest
- [ ] Quick Access section has exactly 5 recent sessions
- [ ] Top Watch Patterns section has exactly 3 items
- [ ] All referenced files exist in archives
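For the final checklist item, a rough existence check, assuming links in SKILL.md use the relative `../../../.workflow/.archives/` prefix described above:
```bash
# Illustrative link check; reports any referenced archive path that does not exist on disk.
SKILL_DIR=".claude/skills/workflow-progress"
grep -o '\.\./\.\./\.\./\.workflow/\.archives/[^)]*' "$SKILL_DIR/SKILL.md" | sort -u \
  | while read -r rel; do
      [ -e "$SKILL_DIR/$rel" ] || echo "missing: $rel"
    done
```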

View File

@@ -0,0 +1,94 @@
Template for generating lessons-learned.md
## Purpose
Aggregate lessons learned from workflow sessions, categorized by functional domain and severity.
## File Location
`.claude/skills/workflow-progress/lessons-learned.md`
## Update Strategy
- **Incremental mode**: Merge new session lessons into existing categories, update frequencies
- **Full mode**: Regenerate entire lessons document from all sessions
## Structure
```markdown
# Workflow Lessons Learned
## Best Practices (Successes)
### {Domain_Category}
- {success_pattern_1} (sessions: {session_id_1}, {session_id_2})
- {success_pattern_2} (sessions: {session_id_3})
### {Domain_Category_2}
...
---
## Known Challenges
### High Priority
- **{challenge_title}**: {description}
- Affected sessions: {session_id_1}, {session_id_2}
- Resolution: {resolution_strategy}
### Medium Priority
- **{challenge_title}**: {description}
- Affected sessions: {session_id_3}
- Resolution: {resolution_strategy}
### Low Priority
...
---
## Watch Patterns
### Critical (3+ sessions)
1. **{pattern_name}**: {description}
- Frequency: {count} sessions
- Affected: {session_list}
- Mitigation: {mitigation_strategy}
### High Priority (2 sessions)
...
### Normal (1 session)
...
```
## Data Sources
- Lessons: `manifest.json` -> `archives[].lessons.{successes|challenges|watch_patterns}`
- Session metadata: `.workflow/.archives/{session_id}/workflow-session.json`
## Aggregation Rules (Use CCW CLI)
**Command Pattern**:
```bash
ccw cli -p "
PURPOSE: Aggregate workflow lessons from session data
TASK: • Group successes by functional domain • Categorize challenges by severity (HIGH/MEDIUM/LOW) • Identify watch patterns with frequency >= 2 • Mark CRITICAL patterns (3+ sessions)
MODE: analysis
CONTEXT: @.workflow/.archives/manifest.json
EXPECTED: Aggregated lessons with frequency counts
CONSTRAINTS: analysis=READ-ONLY
" --tool gemini --mode analysis --rule workflow-skill-aggregation --cd .workflow/.archives
```
**Severity Classification**:
- **HIGH**: Blocked development >4 hours OR repeated in 3+ sessions
- **MEDIUM**: Required significant rework OR repeated in 2 sessions
- **LOW**: Minor issues resolved quickly
**Pattern Identification**:
- Successes in 3+ sessions → "Best Practices"
- Challenges repeated 2+ times → "Known Issues"
- Watch patterns frequency >= 2 → "High Priority Warnings"
- Watch patterns frequency >= 3 → "CRITICAL"
## Formatting Rules
- Sort by frequency (most common first)
- Include session references for traceability
- Use bold for challenge titles
- Keep descriptions concise but actionable

View File

@@ -0,0 +1,53 @@
Template for generating sessions-timeline.md
## Purpose
Create or update chronological timeline of workflow sessions with functional domain grouping.
## File Location
`.claude/skills/workflow-progress/sessions-timeline.md`
## Update Strategy
- **Incremental mode**: Append new session to timeline, keep existing content
- **Full mode**: Regenerate entire timeline from all sessions
## Structure
```markdown
# Workflow Sessions Timeline
## Recent Sessions (Last 5)
### {session_id} ({archived_date})
**Description**: {description}
**Tags**: {tag1}, {tag2}, {tag3}
**Metrics**: {task_count} tasks, {success_rate}% success, {duration_hours} hours
**Context Package**: [{session_id}/context-package.json](../../../.workflow/.archives/{session_id}/.process/context-package.json)
**Key Outcomes**:
- ✅ {success_item_1}
- ✅ {success_item_2}
- ⚠️ Watch: {watch_pattern}
---
## By Functional Domain
### {Domain_Name} ({count} sessions)
- {session_id_1} ({date}) - {one_line_description}
- {session_id_2} ({date}) - {one_line_description}
### {Domain_Name_2} ({count} sessions)
...
```
## Data Sources
- Session metadata: `.workflow/.archives/{session_id}/workflow-session.json`
- Manifest entry: `.workflow/.archives/manifest.json`
- Lessons: `manifest.json` -> `archives[].lessons`
## Formatting Rules
- Sort recent sessions by archived_at (newest first)
- Group by functional domain using tags
- Use relative paths for context package links
- Use ✅ for successes, ⚠️ for watch patterns
- Keep descriptions concise (one line)

View File

@@ -0,0 +1,123 @@
Task JSON Schema - Agent Mode (No Command Field)
## Schema Structure
```json
{
"id": "IMPL-N[.M]",
"title": "Descriptive task name",
"status": "pending",
"context_package_path": "{context_package_path}",
"meta": {
"type": "feature|bugfix|refactor|test|docs",
"agent": "@code-developer|@test-fix-agent|@universal-executor"
},
"context": {
"requirements": ["extracted from analysis"],
"focus_paths": ["src/paths"],
"acceptance": ["measurable criteria"],
"depends_on": ["IMPL-N"],
"artifacts": [
{
"type": "synthesis_specification",
"path": "{synthesis_spec_path}",
"priority": "highest",
"usage": "Primary requirement source - use for consolidated requirements and cross-role alignment"
},
{
"type": "role_analysis",
"path": "{role_analysis_path}",
"priority": "high",
"usage": "Technical/design/business details from specific roles. Common roles: system-architect (ADRs, APIs, caching), ui-designer (design tokens, layouts), product-manager (user stories, metrics)"
}
]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_role_analyses_specification",
"action": "Load consolidated role analyses",
"commands": [
"Read({synthesis_spec_path})"
],
"output_to": "synthesis_specification",
"on_error": "fail"
},
{
"step": "load_context_package",
"action": "Load context package for project structure",
"commands": [
"Read({context_package_path})"
],
"output_to": "context_pkg",
"on_error": "fail"
},
{
"step": "local_codebase_exploration",
"action": "Explore codebase using local search",
"commands": [
"bash(rg '^(function|class|interface).*{keyword}' --type ts -n --max-count 15)",
"bash(find . -name '*{keyword}*' -type f | grep -v node_modules | head -10)"
],
"output_to": "codebase_structure",
"on_error": "skip_optional"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Implement task following role analyses",
"description": "Implement '{title}' following [synthesis_specification] requirements and [context_pkg] patterns. Use role analyses as primary source, consult artifacts for technical details.",
"modification_points": [
"Apply consolidated requirements from role analyses",
"Follow technical guidelines from synthesis",
"Consult artifacts for implementation details when needed",
"Integrate with existing patterns"
],
"logic_flow": [
"Load role analyses and context package",
"Analyze existing patterns from [codebase_structure]",
"Implement following specification",
"Consult artifacts for technical details when needed",
"Validate against acceptance criteria"
],
"depends_on": [],
"output": "implementation"
}
],
"target_files": ["file:function:lines", "path/to/NewFile.ts"]
}
}
```
## Key Features - Agent Mode
**Execution Model**: Agent interprets `modification_points` and `logic_flow` to execute autonomously
**No Command Field**: Steps in `implementation_approach` do NOT include `command` field
**Context Loading**: Context loaded via `pre_analysis` steps, available as variables (e.g., [synthesis_specification], [context_pkg])
**Agent Execution**:
- Agent reads modification_points and logic_flow
- Agent performs implementation autonomously
- Agent validates against acceptance criteria
## Field Descriptions
**implementation_approach**: Array of step objects (NO command field)
- **step**: Sequential step number
- **title**: Step description
- **description**: Detailed instructions with variable references
- **modification_points**: Specific code modifications to apply
- **logic_flow**: Business logic execution sequence
- **depends_on**: Step dependencies (empty array for independent steps)
- **output**: Expected deliverable variable name
## Usage Guidelines
1. **Load Context**: Use pre_analysis to load synthesis, context package, and explore codebase
2. **Reference Variables**: Use [variable_name] to reference outputs from pre_analysis steps
3. **Clear Instructions**: Provide detailed modification_points and logic_flow for agent
4. **No Commands**: Never add command field to implementation_approach steps
5. **Agent Autonomy**: Let agent interpret and execute based on provided instructions

View File

@@ -0,0 +1,182 @@
Task JSON Schema - CLI Execute Mode (With Command Field)
## Schema Structure
```json
{
"id": "IMPL-N[.M]",
"title": "Descriptive task name",
"status": "pending",
"context_package_path": "{context_package_path}",
"meta": {
"type": "feature|bugfix|refactor|test|docs",
"agent": "@code-developer|@test-fix-agent|@universal-executor"
},
"context": {
"requirements": ["extracted from analysis"],
"focus_paths": ["src/paths"],
"acceptance": ["measurable criteria"],
"depends_on": ["IMPL-N"],
"artifacts": [
{
"type": "synthesis_specification",
"path": "{synthesis_spec_path}",
"priority": "highest",
"usage": "Primary requirement source - use for consolidated requirements and cross-role alignment"
},
{
"type": "role_analysis",
"path": "{role_analysis_path}",
"priority": "high",
"usage": "Technical/design/business details from specific roles"
}
]
},
"flow_control": {
"pre_analysis": [
{
"step": "load_synthesis_specification",
"action": "Load consolidated synthesis specification",
"commands": [
"Read({synthesis_spec_path})"
],
"output_to": "synthesis_specification",
"on_error": "fail"
},
{
"step": "load_context_package",
"action": "Load context package",
"commands": [
"Read({context_package_path})"
],
"output_to": "context_pkg",
"on_error": "fail"
},
{
"step": "local_codebase_exploration",
"action": "Explore codebase using local search",
"commands": [
"bash(rg '^(function|class|interface).*{keyword}' --type ts -n --max-count 15)",
"bash(find . -name '*{keyword}*' -type f | grep -v node_modules | head -10)"
],
"output_to": "codebase_structure",
"on_error": "skip_optional"
}
],
"implementation_approach": [
{
"step": 1,
"title": "Implement task with Codex",
"description": "Implement '{title}' using Codex CLI tool",
"command": "bash(codex -C {focus_path} --full-auto exec \"PURPOSE: {purpose} TASK: {task_description} MODE: auto CONTEXT: @{{synthesis_spec_path}} @{{context_package_path}} EXPECTED: {expected_output} RULES: Follow synthesis specification\" --skip-git-repo-check -s danger-full-access)",
"modification_points": [
"Create/modify implementation files",
"Follow synthesis specification requirements",
"Integrate with existing patterns"
],
"logic_flow": [
"Codex loads context package and synthesis",
"Codex implements according to specification",
"Codex validates against acceptance criteria"
],
"depends_on": [],
"output": "implementation"
}
],
"target_files": ["file:function:lines", "path/to/NewFile.ts"]
}
}
```
## Multi-Step Example (Complex Task with Resume)
```json
{
"id": "IMPL-002",
"title": "Implement RBAC system",
"flow_control": {
"implementation_approach": [
{
"step": 1,
"title": "Create RBAC models",
"description": "Create role and permission data models",
"command": "bash(codex -C src/models --full-auto exec \"PURPOSE: Create RBAC models TASK: Define role and permission models MODE: auto CONTEXT: @{{synthesis_spec_path}} @{{context_package_path}} EXPECTED: Models with migrations RULES: Follow synthesis spec\" --skip-git-repo-check -s danger-full-access)",
"modification_points": ["Define role model", "Define permission model"],
"logic_flow": ["Design schema", "Implement models", "Generate migrations"],
"depends_on": [],
"output": "rbac_models"
},
{
"step": 2,
"title": "Implement RBAC middleware",
"description": "Create route protection middleware",
"command": "bash(codex --full-auto exec \"PURPOSE: Create RBAC middleware TASK: Route protection middleware MODE: auto CONTEXT: RBAC models from step 1 EXPECTED: Middleware for route protection RULES: Use session patterns\" resume --last --skip-git-repo-check -s danger-full-access)",
"modification_points": ["Create permission checker", "Add route decorators"],
"logic_flow": ["Check user role", "Validate permissions", "Allow/deny access"],
"depends_on": [1],
"output": "rbac_middleware"
}
]
}
}
```
## Key Features - CLI Execute Mode
**Execution Model**: Commands in `command` field execute steps directly
**Command Field Required**: Every step in `implementation_approach` MUST include `command` field
**Context Delivery**: Context provided via CONTEXT field in command prompt using `@{path}` syntax
**Multi-Step Support**:
- First step: Full context with `-C directory` and complete CONTEXT field
- Subsequent steps: Use `resume --last` to maintain session continuity
- Step dependencies: Use `depends_on` array to specify step order
## Command Templates
### Single-Step Codex Command
```bash
bash(codex -C {focus_path} --full-auto exec "PURPOSE: {purpose} TASK: {task} MODE: auto CONTEXT: @{{synthesis_spec_path}} @{{context_package_path}} EXPECTED: {expected} RULES: {rules}" --skip-git-repo-check -s danger-full-access)
```
### Multi-Step Codex with Resume
```bash
# First step
bash(codex -C {path} --full-auto exec "..." --skip-git-repo-check -s danger-full-access)
# Subsequent steps
bash(codex --full-auto exec "..." resume --last --skip-git-repo-check -s danger-full-access)
```
### Gemini/Qwen Commands (Analysis/Documentation)
```bash
bash(gemini "PURPOSE: {purpose} TASK: {task} MODE: analysis CONTEXT: @{synthesis_spec_path} EXPECTED: {expected} RULES: {rules}")
# With write permission
bash(gemini --approval-mode yolo "PURPOSE: {purpose} TASK: {task} MODE: write CONTEXT: @{context} EXPECTED: {expected} RULES: {rules}")
```
## Field Descriptions
**implementation_approach**: Array of step objects (WITH command field)
- **step**: Sequential step number
- **title**: Step description
- **description**: Brief step description
- **command**: Complete CLI command to execute the step
- **modification_points**: Specific code modifications (for reference)
- **logic_flow**: Execution sequence (for reference)
- **depends_on**: Step dependencies (array of step numbers, empty for independent)
- **output**: Expected deliverable variable name
## Usage Guidelines
1. **Always Include Command**: Every step MUST have a `command` field
2. **Context via CONTEXT Field**: Provide context using `@{path}` syntax in command prompt
3. **First Step Full Context**: First step should include `-C directory` and full context package
4. **Resume for Continuity**: Use `resume --last` for subsequent steps in same task
5. **Step Dependencies**: Use `depends_on: [1, 2]` to specify execution order
6. **Parameter Position**:
- Codex: `--skip-git-repo-check -s danger-full-access` at END
- Gemini/Qwen: `--approval-mode yolo` BEFORE the prompt
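For reference, a fully substituted instance of the single-step Codex template above; the paths and task wording are hypothetical and only illustrate parameter positions:
```bash
# Hypothetical, fully substituted example; real values come from the task JSON.
codex -C src/auth --full-auto exec \
  "PURPOSE: Add password reset flow \
   TASK: Implement reset-token issuance and validation \
   MODE: auto \
   CONTEXT: @.workflow/active/WFS-example/.brainstorming/synthesis-specification.md @.workflow/active/WFS-example/.process/context-package.json \
   EXPECTED: Working reset flow with unit tests \
   RULES: Follow synthesis specification" \
  --skip-git-repo-check -s danger-full-access
```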