refactor: simplify document analysis template and update strategy guide

Changes:
- Simplify 02-analyze-technical-document.txt to match other template formats
- Reduce from 280 lines to 33 lines with concise checklist structure
- Update intelligent-tools-strategy.md to include document analysis
- Add document-analysis to Quick Decision Matrix and Task-Template Matrix

Template now follows standard format:
- Brief description + CORE CHECKLIST + REQUIRED ANALYSIS
- OUTPUT REQUIREMENTS + VERIFICATION CHECKLIST + Focus statement
- Maintains all key requirements: pre-planning, evidence-based findings, self-critique
Author: Claude
Date: 2025-11-20 10:34:36 +00:00
Commit: a62bbd6a7f (parent 837bee79c7)
2 changed files with 37 additions and 280 deletions

prompts/analysis/02-analyze-technical-document.txt (View File)

@@ -1,279 +1,33 @@
Analyze technical documents, research papers, and specifications systematically.

## CORE CHECKLIST ⚡
□ Plan analysis approach before reading (document type, key questions, success criteria)
□ Provide section/page references for all claims and findings
□ Distinguish facts from interpretations explicitly
□ Use precise, direct language - avoid persuasive wording
□ Apply RULES template requirements exactly as specified

## REQUIRED ANALYSIS
1. Document assessment: type, structure, audience, quality indicators
2. Content extraction: concepts, specifications, implementation details, constraints
3. Critical evaluation: strengths, gaps, ambiguities, clarity issues
4. Self-critique: verify citations, completeness, actionable recommendations
5. Synthesis: key takeaways, integration points, follow-up questions

## OUTPUT REQUIREMENTS
- Structured analysis with mandatory section/page references
- Evidence-based findings with specific location citations
- Clear separation of facts vs. interpretations
- Actionable recommendations tied to document content
- Integration points with existing project patterns
- Identified gaps and ambiguities with impact assessment

## VERIFICATION CHECKLIST ✓
□ Pre-analysis plan documented (3-5 bullet points)
□ All claims backed by section/page references
□ Self-critique completed before final output
□ Language is precise and direct (no persuasive adjectives)
□ Recommendations are specific and actionable
□ Output length proportional to document size

Focus: Evidence-based insights extraction with pre-planning and self-critique for technical documents.

PURPOSE: Extract key insights, concepts, and actionable information from technical documents and research papers
TASK: Systematically analyze technical document to identify core concepts, specifications, and integration points
MODE: analysis
CONTEXT: {document_files} {related_documentation}
EXPECTED: Structured analysis report with evidence-based insights, critical assessment, and actionable recommendations
RULES: |
## Role Definition
You are a technical document analyst. Your task is to extract, assess, and synthesize information from technical documents with precision and clarity.
## Behavioral Constraints
- Use precise, direct language - eliminate persuasive or embellished wording
- Cite specific sections/pages/line numbers for all claims
- Distinguish explicitly between facts and interpretations
- Highlight assumptions and uncertainties without hedging excessively
- Focus on actionable insights over general observations
## Analysis Protocol
### Phase 1: Pre-Analysis Planning (Required First Step)
Before analyzing content, plan your approach:
1. Document Classification:
- Identify document type (README, API docs, research paper, specification, tutorial, architecture)
- Determine primary purpose and target audience
- Assess document scope and expected depth
2. Analysis Strategy:
- Define key questions this analysis should answer
- Identify critical sections requiring deep focus
- Plan reading order (linear vs. selective)
- Anticipate potential gaps or ambiguities
3. Success Criteria:
- What insights must be extracted?
- What level of detail is appropriate?
- What integration points with existing project?
**Output**: Brief analysis plan (3-5 bullet points) before proceeding
### Phase 2: Initial Assessment
- Document structure and organization quality
- Completeness indicators (table of contents, index, references)
- Target audience and prerequisite knowledge
- Version/date information and currency
- Overall quality assessment (clarity, coherence, technical accuracy)
### Phase 3: Content Extraction
Extract with section/page references:
1. **Core Concepts and Definitions**
- Fundamental concepts introduced
- Technical terminology and definitions
- Conceptual models or frameworks
2. **Technical Specifications**
- APIs, interfaces, protocols
- Data structures and schemas
- Algorithms or methodologies
- Configuration parameters
- Performance characteristics
3. **Implementation Details**
- Step-by-step procedures
- Code examples and patterns
- Integration requirements
- Dependencies and prerequisites
- Environment setup
4. **Constraints and Limitations**
- Scope boundaries
- Known issues or caveats
- Platform or version restrictions
- Performance limitations
### Phase 4: Critical Analysis
Evaluate document quality:
1. **Strengths**:
- Clear explanations with specific examples
- Comprehensive coverage with evidence
- Well-structured with good flow
2. **Gaps and Ambiguities**:
- Missing information (specify what)
- Unclear sections (identify location)
- Contradictions or inconsistencies
- Outdated or deprecated content
3. **Clarity Assessment**:
- Jargon usage appropriateness
- Example quality and relevance
- Diagram/visualization effectiveness
- Accessibility for target audience
### Phase 5: Self-Critique (Required Before Final Output)
Before providing final analysis, critique your work:
1. Verification:
- Have I cited sources for all claims?
- Are my interpretations clearly distinguished from facts?
- Have I avoided persuasive language?
- Are recommendations specific and actionable?
2. Completeness:
- Did I address all analysis objectives?
- Are there obvious gaps in my analysis?
- Have I considered alternative interpretations?
3. Quality:
- Is the output concise without losing critical detail?
- Are findings prioritized appropriately?
- Will this enable actionable decisions?
**Output**: Brief self-assessment (2-3 sentences) + refinements before final submission
### Phase 6: Synthesis and Output
Generate structured output:
## Output Format (Mandatory Structure)
```markdown
# Document Analysis: [Document Title]
## Analysis Plan
[Brief 3-5 bullet plan developed in Phase 1]
## Document Overview
- **Type**: [README|API Docs|Research Paper|Specification|Tutorial|Architecture]
- **Purpose**: [Primary goal in 1 sentence]
- **Scope**: [Coverage boundaries]
- **Audience**: [Target readers and prerequisite knowledge]
- **Currency**: [Version/date, assessment of recency]
- **Quality**: [High|Medium|Low] - [Specific rationale with examples]
## Core Findings
### Concepts and Definitions
1. **[Concept Name]** (Section X.Y, Page Z)
- Definition: [Precise definition from document]
- Significance: [Why this matters]
- Context: [How it relates to other concepts]
2. **[Concept Name]** (Section X.Y)
- [Follow same structure]
### Technical Specifications
- **[Specification Area]** (Section X.Y)
- Detail: [Precise specification with parameters]
- Requirements: [What's needed to implement]
- Constraints: [Limitations or restrictions]
### Implementation Guidance
1. **[Implementation Aspect]** (Section X.Y)
- Procedure: [Step-by-step or key approach]
- Dependencies: [Required components]
- Example: [Reference to code example if available]
## Critical Assessment
### Strengths
- **[Strength Category]**: [Specific example with location]
- Evidence: [Quote or reference]
- Impact: [Why this is valuable]
### Gaps and Ambiguities
- **[Gap Description]** (Expected in Section X, missing)
- Impact: [What's unclear or unavailable]
- Consequence: [How this affects usage/implementation]
- **[Ambiguity Description]** (Section X.Y, Line Z)
- Issue: [What's unclear]
- Alternative Interpretations: [Possible meanings]
### Clarity and Accessibility
- **Positive**: [What's well-explained]
- **Needs Improvement**: [What's confusing with suggestions]
## Synthesis
### Key Takeaways (Prioritized)
1. **[Primary Insight]**
- Implication: [What this means for implementation/usage]
- Evidence: [Supporting sections/data]
2. **[Secondary Insight]**
- [Follow same structure]
3. **[Tertiary Insight]**
- [Follow same structure]
### Actionable Recommendations
1. **[Specific Action]**
- Context: [When/why to do this]
- Approach: [How to execute]
- Reference: [Document section supporting this]
2. **[Specific Action]**
- [Follow same structure]
### Integration with Existing Project
- **Alignment**: [How document findings match existing patterns]
- Example: [Specific code pattern + document section]
- **Conflicts**: [Where findings contradict current implementation]
- Recommendation: [How to resolve]
- **Opportunities**: [New capabilities or improvements enabled]
### Unanswered Questions
1. **[Question requiring clarification]**
- Why it matters: [Impact of ambiguity]
- Where to investigate: [Suggested resources]
2. **[Question for further research]**
- [Follow same structure]
## Cross-References
- **Related Documents**: [List with paths and relevance]
- **External References**: [Key citations with URLs/identifiers]
- **Code Examples**: [Locations in codebase if applicable]
- **Dependencies**: [Other docs to read for full context]
## Self-Critique
[2-3 sentences assessing analysis completeness, potential blind spots, and confidence level]
```
## Output Constraints
- **Length Control**:
- Overview: 100-200 words
- Each finding: 50-100 words
- Total: Proportional to document size (1-page doc → 500 words; 20-page doc → 2000 words)
- **Precision Requirements**:
- Every claim → section/page reference
- Every recommendation → supporting evidence
- Every assessment → specific examples
- Avoid: "seems", "appears", "might" (use "is unclear", "document states", "evidence suggests")
- **Prohibited Language**:
- No persuasive adjectives ("amazing", "excellent", "poor")
- No unsupported generalizations ("always", "never", "obviously")
- No hedging without reason ("perhaps", "maybe", "possibly")
- Use: "Section 3.2 indicates...", "The document specifies...", "No evidence provided for..."
## Quality Checklist (Verify Before Output)
- [ ] Analysis plan documented
- [ ] All sections have location references
- [ ] Facts separated from interpretations
- [ ] Language is precise and direct
- [ ] Recommendations are actionable and specific
- [ ] Gaps and ambiguities explicitly noted
- [ ] Integration with project considered
- [ ] Self-critique completed
- [ ] Output length appropriate for document size
- [ ] No persuasive or embellished language

intelligent-tools-strategy.md (View File)

@@ -65,6 +65,7 @@ codex -C [dir] --full-auto exec "[prompt]" [--skip-git-repo-check -s danger-full
| Architecture Planning | Gemini → Qwen | analysis | `planning/01-plan-architecture-design.txt` |
| Code Pattern Analysis | Gemini → Qwen | analysis | `analysis/02-analyze-code-patterns.txt` |
| Architecture Review | Gemini → Qwen | analysis | `analysis/02-review-architecture.txt` |
| Document Analysis | Gemini → Qwen | analysis | `analysis/02-analyze-technical-document.txt` |
| Feature Implementation | Codex | auto | `development/02-implement-feature.txt` |
| Component Development | Codex | auto | `development/02-implement-component-ui.txt` |
| Test Generation | Codex | write | `development/02-generate-tests.txt` |
@@ -519,13 +520,14 @@ When no specific template matches your task requirements, use one of these unive
**Available Templates**:
```
prompts/
├── universal/                            # ← Universal fallback templates
│   ├── 00-universal-rigorous-style.txt   # Precision & standards-driven
│   └── 00-universal-creative-style.txt   # Innovation & exploration-focused
├── analysis/
│   ├── 01-trace-code-execution.txt
│   ├── 01-diagnose-bug-root-cause.txt
│   ├── 02-analyze-code-patterns.txt
│   ├── 02-analyze-technical-document.txt
│   ├── 02-review-architecture.txt
│   ├── 02-review-code-quality.txt
│   ├── 03-analyze-performance.txt
@@ -556,6 +558,7 @@ prompts/
| Execution Tracing | Gemini (Qwen fallback) | `analysis/01-trace-code-execution.txt` |
| Bug Diagnosis | Gemini (Qwen fallback) | `analysis/01-diagnose-bug-root-cause.txt` |
| Code Pattern Analysis | Gemini (Qwen fallback) | `analysis/02-analyze-code-patterns.txt` |
| Document Analysis | Gemini (Qwen fallback) | `analysis/02-analyze-technical-document.txt` |
| Architecture Review | Gemini (Qwen fallback) | `analysis/02-review-architecture.txt` |
| Code Review | Gemini (Qwen fallback) | `analysis/02-review-code-quality.txt` |
| Performance Analysis | Gemini (Qwen fallback) | `analysis/03-analyze-performance.txt` |
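
For orientation, a minimal sketch of how the new document-analysis template could be assembled into an analysis prompt for the Gemini → Qwen chain listed above. The document path, the CONTEXT layout, and the `gemini -p` invocation are assumptions for illustration; this commit does not define the wrapper, so adapt the final call to whatever the tool-invocation section of this guide specifies.

```bash
# Hypothetical sketch only: assemble a prompt from the new template.
# TEMPLATE follows the prompts/ tree above; DOC is an invented example path.
TEMPLATE="prompts/analysis/02-analyze-technical-document.txt"
DOC="docs/example-spec.md"

# Concatenate the template with the document contents as the analysis context.
PROMPT="$(cat "$TEMPLATE")

CONTEXT:
$(cat "$DOC")"

# Hand the assembled prompt to the primary analysis tool (the -p flag is an
# assumption about the local Gemini CLI; use the equivalent Qwen call as fallback).
gemini -p "$PROMPT"
```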