This commit is contained in:
catlog22
2025-11-20 18:51:24 +08:00
17 changed files with 2194 additions and 208 deletions

View File

@@ -0,0 +1,126 @@
---
name: document-analysis
description: Read-only technical document/paper analysis using Gemini/Qwen/Codex with systematic comprehension template for insights extraction
argument-hint: "[--tool codex|gemini|qwen] [--enhance] [--cd path] document path or topic"
allowed-tools: SlashCommand(*), Bash(*), Task(*), Read(*)
---
# CLI Mode: Document Analysis (/cli:mode:document-analysis)
## Purpose
Systematic analysis of technical documents, research papers, API documentation, and technical specifications.
**Tool Selection**:
- **gemini** (default) - Best for document comprehension and structure analysis
- **qwen** - Fallback when Gemini unavailable
- **codex** - Alternative for complex technical documents
**Key Feature**: `--cd` flag for directory-scoped document discovery
## Parameters
- `--tool <gemini|qwen|codex>` - Tool selection (default: gemini)
- `--enhance` - Enhance analysis target with `/enhance-prompt`
- `--cd "path"` - Target directory for document search
- `<document-path-or-topic>` (Required) - File path or topic description
## Tool Usage
**Gemini** (Primary):
```bash
/cli:mode:document-analysis "README.md"
/cli:mode:document-analysis --tool gemini "analyze API documentation"
```
**Qwen** (Fallback):
```bash
/cli:mode:document-analysis --tool qwen "docs/architecture.md"
```
**Codex** (Alternative):
```bash
/cli:mode:document-analysis --tool codex "research paper in docs/"
```
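**Directory-Scoped** (`--cd`; paths below are illustrative):
```bash
/cli:mode:document-analysis --cd "docs/" "architecture overview"
/cli:mode:document-analysis --tool gemini --cd "packages/core" --enhance "summarize the API documentation"
```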
## Execution Flow
Uses **cli-execution-agent** for automated document analysis:
```javascript
Task(
subagent_type="cli-execution-agent",
description="Systematic document comprehension and insights extraction",
prompt=`
Task: ${document_path_or_topic}
Mode: document-analysis
Tool: ${tool_flag || 'gemini'}
Directory: ${cd_path || '.'}
Enhance: ${enhance_flag}
Template: ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-technical-document.txt
Execute systematic document analysis:
1. Document Discovery:
- Locate target document(s) via path or topic keywords
- Identify document type (README, API docs, research paper, spec, tutorial)
- Detect document format (Markdown, PDF, plain text, reStructuredText)
- Discover related documents (references, appendices, examples)
- Use MCP/ripgrep for comprehensive file discovery
2. Pre-Analysis Planning (Required):
- Determine document structure (sections, hierarchy, flow)
- Identify key components (abstract, methodology, implementation details)
- Map dependencies and cross-references
- Assess document scope and complexity
- Plan analysis approach based on document type
3. CLI Command Construction:
- Tool: ${tool_flag || 'gemini'} (qwen fallback, codex for complex docs)
- Directory: cd ${cd_path || '.'} &&
- Context: @{document_paths} + @CLAUDE.md + related files
- Mode: analysis (read-only)
- Template: analysis/02-analyze-technical-document.txt
4. Analysis Execution:
- Apply 6-field template structure (PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES)
- Execute multi-phase analysis protocol with pre-planning
- Perform self-critique before final output
- Generate structured report with evidence-based insights
5. Output Generation:
- Comprehensive document analysis report
- Structured insights with section references
- Critical assessment with evidence
- Actionable recommendations
- Save to .workflow/active/WFS-[id]/.chat/doc-analysis-[timestamp].md (or .scratchpad/)
`
)
```
## Core Rules
- **Read-only**: Analyzes documents, does NOT modify files
- **Evidence-based**: All claims must reference specific sections/pages
- **Pre-planning**: Requires analysis approach planning before execution
- **Precise language**: Direct, accurate wording - no persuasive embellishment
- **Output**: `.workflow/active/WFS-[id]/.chat/doc-analysis-[timestamp].md` (or `.scratchpad/` if no session)
## Document Types Supported
| Type | Focus Areas | Key Outputs |
|------|-------------|-------------|
| README | Purpose, setup, usage | Integration steps, quick-start guide |
| API Documentation | Endpoints, parameters, responses | API usage patterns, integration points |
| Research Paper | Methodology, findings, validity | Applicable techniques, implementation feasibility |
| Specification | Requirements, standards, constraints | Compliance checklist, implementation requirements |
| Tutorial | Learning path, examples, exercises | Key concepts, practical applications |
| Architecture Docs | System design, components, patterns | Design decisions, integration points, trade-offs |
## Best Practices
1. **Scope Definition**: Clearly define what aspects to analyze before starting
2. **Layered Reading**: Structure/Overview → Details → Critical Analysis → Synthesis
3. **Evidence Trail**: Track section references for all extracted information
4. **Gap Identification**: Note missing information or unclear sections explicitly
5. **Actionable Output**: Focus on insights that inform decisions or actions

View File

@@ -93,7 +93,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null | xargs cat 2>/dev/null; fi)
```
**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/phase2-analysis.json` with structure:
**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/doc-planning-data.json` with structure:
```json
{
@@ -122,7 +122,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
**Then** use **Edit tool** to update `workflow-session.json` adding analysis field.
**Output**: Single `phase2-analysis.json` with all analysis data (no temp files or Python scripts).
**Output**: Single `doc-planning-data.json` with all analysis data (no temp files or Python scripts).
**Auto-skipped**: Tests (`**/test/**`, `**/*.test.*`), Build (`**/node_modules/**`, `**/dist/**`), Config (root-level files), Vendor directories.
@@ -131,8 +131,8 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
**Commands**:
```bash
# Count existing docs from phase2-analysis.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq '.existing_docs.file_list | length')
# Count existing docs from doc-planning-data.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
```
**Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
@@ -186,8 +186,8 @@ Large Projects (single dir >10 docs):
**Commands**:
```bash
# 1. Get top-level directories from phase2-analysis.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq -r '.top_level_dirs[]')
# 1. Get top-level directories from doc-planning-data.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
# 2. Get mode from workflow-session.json
bash(cat .workflow/active/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
@@ -205,7 +205,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
- If total ≤10 docs: create group
- If total >10 docs: split to 1 dir/group or subdivide
- If single dir >10 docs: split by subdirectories
3. Use **Edit tool** to update `phase2-analysis.json` adding groups field:
3. Use **Edit tool** to update `doc-planning-data.json` adding groups field:
```json
"groups": {
"count": 3,
@@ -219,7 +219,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
**Task ID Calculation**:
```bash
group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/phase2-analysis.json)
group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json)
readme_id=$((group_count + 1)) # Next ID after groups
arch_id=$((group_count + 2))
api_id=$((group_count + 3))
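# e.g. group_count=3 → readme_id=4, arch_id=5, api_id=6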
@@ -241,7 +241,7 @@ api_id=$((group_count + 3))
**Generation Process**:
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
2. Read group assignments from phase2-analysis.json
2. Read group assignments from doc-planning-data.json
3. Generate Level 1 tasks (IMPL-001 to IMPL-N, one per group)
4. Generate Level 2+ tasks if mode=full (README, ARCHITECTURE, HTTP API)
@@ -266,14 +266,14 @@ api_id=$((group_count + 3))
},
"context": {
"requirements": [
"Process directories from group ${group_number} in phase2-analysis.json",
"Process directories from group ${group_number} in doc-planning-data.json",
"Generate docs to .workflow/docs/${project_name}/ (mirrored structure)",
"Code folders: API.md + README.md; Navigation folders: README.md only",
"Use pre-analyzed data from Phase 2 (no redundant analysis)"
],
"focus_paths": ["${group_dirs_from_json}"],
"precomputed_data": {
"phase2_analysis": "${session_dir}/.process/phase2-analysis.json"
"phase2_analysis": "${session_dir}/.process/doc-planning-data.json"
}
},
"flow_control": {
@@ -282,8 +282,8 @@ api_id=$((group_count + 3))
"step": "load_precomputed_data",
"action": "Load Phase 2 analysis and extract group directories",
"commands": [
"bash(cat ${session_dir}/.process/phase2-analysis.json)",
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/phase2-analysis.json)"
"bash(cat ${session_dir}/.process/doc-planning-data.json)",
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
],
"output_to": "phase2_context",
"note": "Single JSON file contains all Phase 2 analysis results"
@@ -328,7 +328,7 @@ api_id=$((group_count + 3))
{
"step": 2,
"title": "Batch generate documentation via CLI",
"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/phase2-analysis.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
"depends_on": [1],
"output": "generated_docs"
}
@@ -468,7 +468,7 @@ api_id=$((group_count + 3))
├── IMPL_PLAN.md
├── TODO_LIST.md
├── .process/
│ └── phase2-analysis.json # All Phase 2 analysis data (replaces 7+ files)
│ └── doc-planning-data.json # All Phase 2 analysis data (replaces 7+ files)
└── .task/
├── IMPL-001.json # Small: all modules | Large: group 1
├── IMPL-00N.json # (Large only: groups 2-N)
@@ -477,7 +477,7 @@ api_id=$((group_count + 3))
└── IMPL-{N+3}.json # HTTP API (optional)
```
**phase2-analysis.json Structure**:
**doc-planning-data.json Structure**:
```json
{
"metadata": {

View File

@@ -1,15 +1,16 @@
---
name: workflow:status
description: Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view
argument-hint: "[optional: --project|task-id|--validate]"
argument-hint: "[optional: --project|task-id|--validate|--dashboard]"
---
# Workflow Status Command (/workflow:status)
## Overview
Generates on-demand views from project and session data. Supports two modes:
Generates on-demand views from project and session data. Supports multiple modes:
1. **Project Overview** (`--project`): Shows completed features and project statistics
2. **Workflow Tasks** (default): Shows current session task progress
3. **HTML Dashboard** (`--dashboard`): Generates interactive HTML task board with active and archived sessions
No synchronization needed - all views are calculated from current JSON state.
@@ -19,6 +20,7 @@ No synchronization needed - all views are calculated from current JSON state.
/workflow:status --project # Show project-level feature registry
/workflow:status impl-1 # Show specific task details
/workflow:status --validate # Validate workflow integrity
/workflow:status --dashboard # Generate HTML dashboard board
```
## Implementation Flow
@@ -192,4 +194,135 @@ find .workflow/active/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null |
## Completed Tasks
- [COMPLETED] impl-0: Setup completed
```
## Dashboard Mode (HTML Board)
### Step 1: Check for --dashboard flag
```bash
# If --dashboard flag present → Execute Dashboard Mode
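# Minimal sketch (assumes the command's raw arguments arrive in $ARGUMENTS):
case "$ARGUMENTS" in *--dashboard*) DASHBOARD_MODE=1 ;; esac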
```
### Step 2: Collect Workflow Data
**Collect Active Sessions**:
```bash
# Find all active sessions
find .workflow/active/ -name "WFS-*" -type d 2>/dev/null
# For each active session, read metadata and tasks
for session in $(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null); do
cat "$session/workflow-session.json"
find "$session/.task/" -name "*.json" -type f 2>/dev/null
done
```
**Collect Archived Sessions**:
```bash
# Find all archived sessions
find .workflow/archives/ -name "WFS-*" -type d 2>/dev/null
# Read manifest if exists
cat .workflow/archives/manifest.json 2>/dev/null
# For each archived session, read metadata
for archive in $(find .workflow/archives/ -name "WFS-*" -type d 2>/dev/null); do
cat "$archive/workflow-session.json" 2>/dev/null
# Count completed tasks
find "$archive/.task/" -name "*.json" -type f 2>/dev/null | wc -l
done
```
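A minimal `jq` sketch for pulling only the dashboard-relevant fields from each archived session (assumes the keys shown in Step 3 exist in `workflow-session.json`):
```bash
jq '{session_id, project, archived_at: (.completed_at // .archived_at)}' "$archive/workflow-session.json" 2>/dev/null
```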
### Step 3: Process and Structure Data
**Build data structure for dashboard**:
```javascript
const dashboardData = {
activeSessions: [],
archivedSessions: [],
generatedAt: new Date().toISOString()
};
// Process active sessions (activeSessionDirs = directories found in Step 2)
for (const sessionDir of activeSessionDirs) {
  const sessionData = JSON.parse(Read(`${sessionDir}/workflow-session.json`));
  const tasks = [];
  // Load all tasks for this session
  for (const taskFile of find(`${sessionDir}/.task/*.json`)) {
    const taskData = JSON.parse(Read(taskFile));
    tasks.push({
      task_id: taskData.task_id,
      title: taskData.title,
      status: taskData.status,
      type: taskData.type
    });
  }
  dashboardData.activeSessions.push({
    session_id: sessionData.session_id,
    project: sessionData.project,
    status: sessionData.status,
    created_at: sessionData.created_at || sessionData.initialized_at,
    tasks: tasks
  });
}
// Process archived sessions (archivedSessionDirs = directories found in Step 2)
for (const sessionDir of archivedSessionDirs) {
  const sessionData = JSON.parse(Read(`${sessionDir}/workflow-session.json`));
  const taskCount = bash(`find ${sessionDir}/.task/ -name "*.json" 2>/dev/null | wc -l`);
  dashboardData.archivedSessions.push({
    session_id: sessionData.session_id,
    project: sessionData.project,
    archived_at: sessionData.completed_at || sessionData.archived_at,
    taskCount: parseInt(taskCount, 10),
    archive_path: sessionDir
  });
}
```
### Step 4: Generate HTML from Template
**Load template and inject data**:
```javascript
// Read the HTML template
const template = Read("~/.claude/templates/workflow-dashboard.html");
// Prepare data for injection
const dataJson = JSON.stringify(dashboardData, null, 2);
// Replace placeholder with actual data
const htmlContent = template.replace('{{WORKFLOW_DATA}}', dataJson);
// Ensure .workflow directory exists
bash(mkdir -p .workflow);
```
### Step 5: Write HTML File
```javascript
// Write the generated HTML to .workflow/dashboard.html
Write({
  file_path: ".workflow/dashboard.html",
  content: htmlContent
})
```
### Step 6: Display Success Message
```markdown
Dashboard generated successfully!
Location: .workflow/dashboard.html
Open in browser:
file://$(pwd)/.workflow/dashboard.html
Features:
- 📊 Active sessions overview
- 📦 Archived sessions history
- 🔍 Search and filter
- 📈 Progress tracking
- 🎨 Dark/light theme
Refresh data: Re-run /workflow:status --dashboard
```

View File

@@ -89,7 +89,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null | xargs cat 2>/dev/null; fi)
```
**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/phase2-analysis.json` with structure:
**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/doc-planning-data.json` with structure:
```json
{
@@ -118,7 +118,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
**Then** use **Edit tool** to update `workflow-session.json` adding analysis field.
**Output**: Single `phase2-analysis.json` with all analysis data (no temp files or Python scripts).
**Output**: Single `doc-planning-data.json` with all analysis data (no temp files or Python scripts).
**Auto-skipped**: Tests (`**/test/**`, `**/*.test.*`), Build (`**/node_modules/**`, `**/dist/**`), Config (root-level files), Vendor directories.
@@ -127,8 +127,8 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
**Commands**:
```bash
# Count existing docs from phase2-analysis.json
bash(cat .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq '.existing_docs.file_list | length')
# Count existing docs from doc-planning-data.json
bash(cat .workflow/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
```
**Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
@@ -182,8 +182,8 @@ Large Projects (single dir >10 docs):
**Commands**:
```bash
# 1. Get top-level directories from phase2-analysis.json
bash(cat .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq -r '.top_level_dirs[]')
# 1. Get top-level directories from doc-planning-data.json
bash(cat .workflow/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
# 2. Get mode from workflow-session.json
bash(cat .workflow/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
@@ -201,7 +201,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
- If total ≤10 docs: create group
- If total >10 docs: split to 1 dir/group or subdivide
- If single dir >10 docs: split by subdirectories
3. Use **Edit tool** to update `phase2-analysis.json` adding groups field:
3. Use **Edit tool** to update `doc-planning-data.json` adding groups field:
```json
"groups": {
"count": 3,
@@ -215,7 +215,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
**Task ID Calculation**:
```bash
group_count=$(jq '.groups.count' .workflow/WFS-docs-{timestamp}/.process/phase2-analysis.json)
group_count=$(jq '.groups.count' .workflow/WFS-docs-{timestamp}/.process/doc-planning-data.json)
readme_id=$((group_count + 1)) # Next ID after groups
arch_id=$((group_count + 2))
api_id=$((group_count + 3))
@@ -237,7 +237,7 @@ api_id=$((group_count + 3))
**Generation Process**:
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
2. Read group assignments from phase2-analysis.json
2. Read group assignments from doc-planning-data.json
3. Generate Level 1 tasks (IMPL-001 to IMPL-N, one per group)
4. Generate Level 2+ tasks if mode=full (README, ARCHITECTURE, HTTP API)
@@ -262,14 +262,14 @@ api_id=$((group_count + 3))
},
"context": {
"requirements": [
"Process directories from group ${group_number} in phase2-analysis.json",
"Process directories from group ${group_number} in doc-planning-data.json",
"Generate docs to .workflow/docs/${project_name}/ (mirrored structure)",
"Code folders: API.md + README.md; Navigation folders: README.md only",
"Use pre-analyzed data from Phase 2 (no redundant analysis)"
],
"focus_paths": ["${group_dirs_from_json}"],
"precomputed_data": {
"phase2_analysis": "${session_dir}/.process/phase2-analysis.json"
"phase2_analysis": "${session_dir}/.process/doc-planning-data.json"
}
},
"flow_control": {
@@ -278,8 +278,8 @@ api_id=$((group_count + 3))
"step": "load_precomputed_data",
"action": "Load Phase 2 analysis and extract group directories",
"commands": [
"bash(cat ${session_dir}/.process/phase2-analysis.json)",
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/phase2-analysis.json)"
"bash(cat ${session_dir}/.process/doc-planning-data.json)",
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
],
"output_to": "phase2_context",
"note": "Single JSON file contains all Phase 2 analysis results"
@@ -324,7 +324,7 @@ api_id=$((group_count + 3))
{
"step": 2,
"title": "Batch generate documentation via CLI",
"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/phase2-analysis.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
"depends_on": [1],
"output": "generated_docs"
}
@@ -464,7 +464,7 @@ api_id=$((group_count + 3))
├── IMPL_PLAN.md
├── TODO_LIST.md
├── .process/
│ └── phase2-analysis.json # All Phase 2 analysis data (replaces 7+ files)
│ └── doc-planning-data.json # All Phase 2 analysis data (replaces 7+ files)
└── .task/
├── IMPL-001.json # Small: all modules | Large: group 1
├── IMPL-00N.json # (Large only: groups 2-N)
@@ -473,7 +473,7 @@ api_id=$((group_count + 3))
└── IMPL-{N+3}.json # HTTP API (optional)
```
**phase2-analysis.json Structure**:
**doc-planning-data.json Structure**:
```json
{
"metadata": {

View File

@@ -0,0 +1,664 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Workflow Dashboard - Task Board</title>
<style>
:root {
--bg-primary: #f5f7fa;
--bg-secondary: #ffffff;
--bg-card: #ffffff;
--text-primary: #1a202c;
--text-secondary: #718096;
--border-color: #e2e8f0;
--accent-color: #4299e1;
--success-color: #48bb78;
--warning-color: #ed8936;
--danger-color: #f56565;
--shadow: 0 1px 3px 0 rgba(0, 0, 0, 0.1), 0 1px 2px 0 rgba(0, 0, 0, 0.06);
--shadow-lg: 0 10px 15px -3px rgba(0, 0, 0, 0.1), 0 4px 6px -2px rgba(0, 0, 0, 0.05);
}
[data-theme="dark"] {
--bg-primary: #1a202c;
--bg-secondary: #2d3748;
--bg-card: #2d3748;
--text-primary: #f7fafc;
--text-secondary: #a0aec0;
--border-color: #4a5568;
--shadow: 0 1px 3px 0 rgba(0, 0, 0, 0.3), 0 1px 2px 0 rgba(0, 0, 0, 0.2);
--shadow-lg: 0 10px 15px -3px rgba(0, 0, 0, 0.3), 0 4px 6px -2px rgba(0, 0, 0, 0.2);
}
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;
background-color: var(--bg-primary);
color: var(--text-primary);
line-height: 1.6;
transition: background-color 0.3s, color 0.3s;
}
.container {
max-width: 1400px;
margin: 0 auto;
padding: 20px;
}
header {
background-color: var(--bg-secondary);
box-shadow: var(--shadow);
padding: 20px;
margin-bottom: 30px;
border-radius: 8px;
}
h1 {
font-size: 2rem;
margin-bottom: 10px;
color: var(--accent-color);
}
.header-controls {
display: flex;
gap: 15px;
flex-wrap: wrap;
align-items: center;
margin-top: 15px;
}
.search-box {
flex: 1;
min-width: 250px;
position: relative;
}
.search-box input {
width: 100%;
padding: 10px 15px;
border: 1px solid var(--border-color);
border-radius: 6px;
background-color: var(--bg-primary);
color: var(--text-primary);
font-size: 0.95rem;
}
.filter-group {
display: flex;
gap: 10px;
flex-wrap: wrap;
}
.btn {
padding: 10px 20px;
border: none;
border-radius: 6px;
cursor: pointer;
font-size: 0.9rem;
font-weight: 500;
transition: all 0.2s;
background-color: var(--bg-card);
color: var(--text-primary);
border: 1px solid var(--border-color);
}
.btn:hover {
transform: translateY(-1px);
box-shadow: var(--shadow);
}
.btn.active {
background-color: var(--accent-color);
color: white;
border-color: var(--accent-color);
}
.stats-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
gap: 20px;
margin-bottom: 30px;
}
.stat-card {
background-color: var(--bg-card);
padding: 20px;
border-radius: 8px;
box-shadow: var(--shadow);
transition: transform 0.2s;
}
.stat-card:hover {
transform: translateY(-2px);
box-shadow: var(--shadow-lg);
}
.stat-value {
font-size: 2rem;
font-weight: bold;
color: var(--accent-color);
}
.stat-label {
color: var(--text-secondary);
font-size: 0.9rem;
margin-top: 5px;
}
.section {
margin-bottom: 40px;
}
.section-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 20px;
}
.section-title {
font-size: 1.5rem;
font-weight: 600;
}
.sessions-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(350px, 1fr));
gap: 20px;
}
.session-card {
background-color: var(--bg-card);
border-radius: 8px;
box-shadow: var(--shadow);
padding: 20px;
transition: all 0.3s;
}
.session-card:hover {
transform: translateY(-4px);
box-shadow: var(--shadow-lg);
}
.session-header {
display: flex;
justify-content: space-between;
align-items: start;
margin-bottom: 15px;
}
.session-title {
font-size: 1.2rem;
font-weight: 600;
color: var(--text-primary);
margin-bottom: 5px;
}
.session-status {
padding: 4px 12px;
border-radius: 12px;
font-size: 0.75rem;
font-weight: 600;
text-transform: uppercase;
}
.status-active {
background-color: #c6f6d5;
color: #22543d;
}
.status-archived {
background-color: #e2e8f0;
color: #4a5568;
}
[data-theme="dark"] .status-active {
background-color: #22543d;
color: #c6f6d5;
}
[data-theme="dark"] .status-archived {
background-color: #4a5568;
color: #e2e8f0;
}
.session-meta {
display: flex;
gap: 15px;
font-size: 0.85rem;
color: var(--text-secondary);
margin-bottom: 15px;
}
.progress-bar {
width: 100%;
height: 8px;
background-color: var(--bg-primary);
border-radius: 4px;
overflow: hidden;
margin: 15px 0;
}
.progress-fill {
height: 100%;
background: linear-gradient(90deg, var(--accent-color), var(--success-color));
transition: width 0.3s;
}
.tasks-list {
margin-top: 15px;
}
.task-item {
display: flex;
align-items: center;
padding: 10px;
margin-bottom: 8px;
background-color: var(--bg-primary);
border-radius: 6px;
border-left: 3px solid var(--border-color);
transition: all 0.2s;
}
.task-item:hover {
transform: translateX(4px);
}
.task-item.completed {
border-left-color: var(--success-color);
opacity: 0.8;
}
.task-item.in_progress {
border-left-color: var(--warning-color);
}
.task-item.pending {
border-left-color: var(--text-secondary);
}
.task-checkbox {
width: 20px;
height: 20px;
border-radius: 50%;
border: 2px solid var(--border-color);
margin-right: 12px;
display: flex;
align-items: center;
justify-content: center;
flex-shrink: 0;
}
.task-item.completed .task-checkbox {
background-color: var(--success-color);
border-color: var(--success-color);
}
.task-item.completed .task-checkbox::after {
content: '✓';
color: white;
font-size: 0.8rem;
font-weight: bold;
}
.task-item.in_progress .task-checkbox {
border-color: var(--warning-color);
background-color: var(--warning-color);
}
.task-item.in_progress .task-checkbox::after {
content: '⟳';
color: white;
font-size: 0.9rem;
}
.task-title {
flex: 1;
font-size: 0.9rem;
}
.task-id {
font-size: 0.75rem;
color: var(--text-secondary);
font-family: monospace;
margin-left: 10px;
}
.empty-state {
text-align: center;
padding: 60px 20px;
color: var(--text-secondary);
}
.empty-state-icon {
font-size: 4rem;
margin-bottom: 20px;
opacity: 0.5;
}
.theme-toggle {
position: fixed;
bottom: 30px;
right: 30px;
width: 60px;
height: 60px;
border-radius: 50%;
background-color: var(--accent-color);
color: white;
border: none;
cursor: pointer;
font-size: 1.5rem;
box-shadow: var(--shadow-lg);
transition: all 0.3s;
z-index: 1000;
}
.theme-toggle:hover {
transform: scale(1.1);
}
@media (max-width: 768px) {
.sessions-grid {
grid-template-columns: 1fr;
}
.stats-grid {
grid-template-columns: repeat(2, 1fr);
}
h1 {
font-size: 1.5rem;
}
.header-controls {
flex-direction: column;
align-items: stretch;
}
.search-box {
width: 100%;
}
}
.badge {
display: inline-block;
padding: 2px 8px;
border-radius: 4px;
font-size: 0.75rem;
font-weight: 500;
margin-left: 8px;
}
.badge-count {
background-color: var(--accent-color);
color: white;
}
.session-footer {
margin-top: 15px;
padding-top: 15px;
border-top: 1px solid var(--border-color);
font-size: 0.85rem;
color: var(--text-secondary);
}
</style>
</head>
<body>
<div class="container">
<header>
<h1>🚀 Workflow Dashboard</h1>
<p style="color: var(--text-secondary);">Task Board - Active and Archived Sessions</p>
<div class="header-controls">
<div class="search-box">
<input type="text" id="searchInput" placeholder="🔍 Search tasks or sessions..." />
</div>
<div class="filter-group">
<button class="btn active" data-filter="all">All</button>
<button class="btn" data-filter="active">Active</button>
<button class="btn" data-filter="archived">Archived</button>
</div>
</div>
</header>
<div class="stats-grid">
<div class="stat-card">
<div class="stat-value" id="totalSessions">0</div>
<div class="stat-label">Total Sessions</div>
</div>
<div class="stat-card">
<div class="stat-value" id="activeSessionsCount">0</div>
<div class="stat-label">Active Sessions</div>
</div>
<div class="stat-card">
<div class="stat-value" id="totalTasks">0</div>
<div class="stat-label">Total Tasks</div>
</div>
<div class="stat-card">
<div class="stat-value" id="completedTasks">0</div>
<div class="stat-label">Completed Tasks</div>
</div>
</div>
<div class="section" id="activeSectionContainer">
<div class="section-header">
<h2 class="section-title">📋 Active Sessions</h2>
</div>
<div class="sessions-grid" id="activeSessions"></div>
</div>
<div class="section" id="archivedSectionContainer">
<div class="section-header">
<h2 class="section-title">📦 Archived Sessions</h2>
</div>
<div class="sessions-grid" id="archivedSessions"></div>
</div>
</div>
<button class="theme-toggle" id="themeToggle">🌙</button>
<script>
// Workflow data will be injected here
const workflowData = {{WORKFLOW_DATA}};
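        // Injected shape (illustrative): { activeSessions: [...], archivedSessions: [...], generatedAt: "<ISO timestamp>" }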
// Theme management
function initTheme() {
const savedTheme = localStorage.getItem('theme') || 'light';
document.documentElement.setAttribute('data-theme', savedTheme);
updateThemeIcon(savedTheme);
}
function toggleTheme() {
const currentTheme = document.documentElement.getAttribute('data-theme');
const newTheme = currentTheme === 'dark' ? 'light' : 'dark';
document.documentElement.setAttribute('data-theme', newTheme);
localStorage.setItem('theme', newTheme);
updateThemeIcon(newTheme);
}
function updateThemeIcon(theme) {
document.getElementById('themeToggle').textContent = theme === 'dark' ? '☀️' : '🌙';
}
// Statistics calculation
function updateStatistics() {
const stats = {
totalSessions: workflowData.activeSessions.length + workflowData.archivedSessions.length,
activeSessions: workflowData.activeSessions.length,
totalTasks: 0,
completedTasks: 0
};
workflowData.activeSessions.forEach(session => {
stats.totalTasks += session.tasks.length;
stats.completedTasks += session.tasks.filter(t => t.status === 'completed').length;
});
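        // Archived sessions expose only a task count, so they are treated as fully completed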
workflowData.archivedSessions.forEach(session => {
stats.totalTasks += session.taskCount || 0;
stats.completedTasks += session.taskCount || 0;
});
document.getElementById('totalSessions').textContent = stats.totalSessions;
document.getElementById('activeSessionsCount').textContent = stats.activeSessions;
document.getElementById('totalTasks').textContent = stats.totalTasks;
document.getElementById('completedTasks').textContent = stats.completedTasks;
}
// Render session card
function createSessionCard(session, isActive) {
const card = document.createElement('div');
card.className = 'session-card';
card.dataset.sessionType = isActive ? 'active' : 'archived';
const completedTasks = isActive
? session.tasks.filter(t => t.status === 'completed').length
: (session.taskCount || 0);
const totalTasks = isActive ? session.tasks.length : (session.taskCount || 0);
const progress = totalTasks > 0 ? (completedTasks / totalTasks * 100) : 0;
let tasksHtml = '';
if (isActive && session.tasks.length > 0) {
tasksHtml = `
<div class="tasks-list">
${session.tasks.map(task => `
<div class="task-item ${task.status}">
<div class="task-checkbox"></div>
<div class="task-title">${task.title || 'Untitled Task'}</div>
<span class="task-id">${task.task_id || ''}</span>
</div>
`).join('')}
</div>
`;
}
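        // Note: session/task fields are interpolated into innerHTML unescaped; data is local, trusted JSON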
card.innerHTML = `
<div class="session-header">
<div>
<h3 class="session-title">${session.session_id || 'Unknown Session'}</h3>
<div style="color: var(--text-secondary); font-size: 0.9rem; margin-top: 5px;">
${session.project || ''}
</div>
</div>
<span class="session-status ${isActive ? 'status-active' : 'status-archived'}">
${isActive ? 'Active' : 'Archived'}
</span>
</div>
<div class="session-meta">
<span>📅 ${session.created_at || session.archived_at || 'N/A'}</span>
<span>📊 ${completedTasks}/${totalTasks} tasks</span>
</div>
${totalTasks > 0 ? `
<div class="progress-bar">
<div class="progress-fill" style="width: ${progress}%"></div>
</div>
<div style="text-align: center; font-size: 0.85rem; color: var(--text-secondary);">
${Math.round(progress)}% Complete
</div>
` : ''}
${tasksHtml}
${!isActive && session.archive_path ? `
<div class="session-footer">
📁 Archive: ${session.archive_path}
</div>
` : ''}
`;
return card;
}
// Render all sessions
function renderSessions(filter = 'all') {
const activeContainer = document.getElementById('activeSessions');
const archivedContainer = document.getElementById('archivedSessions');
activeContainer.innerHTML = '';
archivedContainer.innerHTML = '';
if (filter === 'all' || filter === 'active') {
if (workflowData.activeSessions.length === 0) {
activeContainer.innerHTML = `
<div class="empty-state">
<div class="empty-state-icon">📭</div>
<p>No active sessions</p>
</div>
`;
} else {
workflowData.activeSessions.forEach(session => {
activeContainer.appendChild(createSessionCard(session, true));
});
}
}
if (filter === 'all' || filter === 'archived') {
if (workflowData.archivedSessions.length === 0) {
archivedContainer.innerHTML = `
<div class="empty-state">
<div class="empty-state-icon">📦</div>
<p>No archived sessions</p>
</div>
`;
} else {
workflowData.archivedSessions.forEach(session => {
archivedContainer.appendChild(createSessionCard(session, false));
});
}
}
// Show/hide sections
document.getElementById('activeSectionContainer').style.display =
(filter === 'all' || filter === 'active') ? 'block' : 'none';
document.getElementById('archivedSectionContainer').style.display =
(filter === 'all' || filter === 'archived') ? 'block' : 'none';
}
// Search functionality
function setupSearch() {
const searchInput = document.getElementById('searchInput');
searchInput.addEventListener('input', (e) => {
const query = e.target.value.toLowerCase();
const cards = document.querySelectorAll('.session-card');
cards.forEach(card => {
const text = card.textContent.toLowerCase();
card.style.display = text.includes(query) ? 'block' : 'none';
});
});
}
// Filter functionality
function setupFilters() {
const filterButtons = document.querySelectorAll('[data-filter]');
filterButtons.forEach(btn => {
btn.addEventListener('click', () => {
filterButtons.forEach(b => b.classList.remove('active'));
btn.classList.add('active');
renderSessions(btn.dataset.filter);
});
});
}
// Initialize
document.addEventListener('DOMContentLoaded', () => {
initTheme();
updateStatistics();
renderSessions();
setupSearch();
setupFilters();
document.getElementById('themeToggle').addEventListener('click', toggleTheme);
});
</script>
</body>
</html>

View File

@@ -5,27 +5,22 @@ description: Product backlog management, user story creation, and feature priori
# Product Owner Planning Template
You are a **Product Owner** specializing in product backlog management, user story creation, and feature prioritization.
## Role & Scope
## Your Role & Responsibilities
**Role**: Product Owner
**Focus**: Product backlog management, user story definition, stakeholder alignment, value delivery
**Excluded**: Team management, technical implementation, detailed system design
**Primary Focus**: Product backlog management, user story definition, stakeholder alignment, and value delivery
**Core Responsibilities**:
- Product backlog creation and prioritization
- User story writing with acceptance criteria
- Stakeholder engagement and requirement gathering
- Feature value assessment and ROI analysis
- Release planning and roadmap management
- Sprint goal definition and commitment
- Acceptance testing and definition of done
**Does NOT Include**: Team management, technical implementation, detailed system design
## Planning Process (Required)
Before providing planning document, you MUST:
1. Analyze product vision and stakeholder needs
2. Define backlog structure and prioritization framework
3. Create user stories with acceptance criteria
4. Plan releases and define success metrics
5. Present structured planning document
## Planning Document Structure
Generate a comprehensive Product Owner planning document with the following structure:
### 1. Product Vision & Strategy
- **Product Vision**: Long-term product goals and target outcomes
- **Value Proposition**: User value and business benefits

View File

@@ -5,55 +5,52 @@ category: development
keywords: [bug diagnosis, fault analysis, fix proposal]
---
# AI Persona & Core Mission
# Role & Output Requirements
You are a **资深软件工程师 & 故障诊断专家 (Senior Software Engineer & Fault Diagnosis Expert)**. Your mission is to meticulously analyze user-provided bug reports, logs, and code snippets to perform a forensic-level investigation. Your goal is to pinpoint the precise root cause of the bug and then propose a targeted, robust, and minimally invasive correction plan. **Critically, you will *not* write complete, ready-to-use code files. Your output is a diagnostic report and a clear, actionable correction suggestion, articulated in professional Chinese.** You are an expert at logical deduction, tracing execution flows, and anticipating the side effects of any proposed fix.
**Role**: Software engineer specializing in bug diagnosis
**Output Format**: Diagnostic report in Chinese following the specified structure
**Constraints**: Do NOT write complete code files. Provide diagnostic analysis and targeted correction suggestions only.
## II. ROLE DEFINITION & CORE CAPABILITIES
1. **Role**: Senior Software Engineer & Fault Diagnosis Expert.
2. **Core Capabilities**:
* **Symptom Interpretation**: Deconstructing bug reports, stack traces, logs, and user descriptions into concrete technical observations.
* **Logical Deduction & Root Cause Analysis**: Masterfully applying deductive reasoning to trace symptoms back to their fundamental cause, moving from "what is happening" to "why it's happening".
* **Code Traversal & Execution Flow Analysis**: Mentally (or schematically) tracing code paths, state changes, and data transformations to identify logical flaws.
* **Hypothesis Formulation & Validation**: Formulating plausible hypotheses about the bug's origin and systematically validating or refuting them based on the provided evidence.
* **Targeted Solution Design**: Proposing precise, effective, and low-risk code corrections rather than broad refactoring.
* **Impact Analysis**: Foreseeing the potential ripple effects or unintended consequences of a proposed fix on other parts of the system.
* **Clear Technical Communication (Chinese)**: Articulating complex diagnostic processes and correction plans in clear, unambiguous Chinese for a developer audience.
## Core Capabilities
- Interpret symptoms from bug reports, stack traces, and logs
- Trace execution flow to identify root causes
- Formulate and validate hypotheses about bug origins
- Design targeted, low-risk corrections
- Analyze impact on other system components
3. **Core Thinking Mode**:
* **Detective-like & Methodical**: Start with the evidence (symptoms), follow the clues (code paths), identify the suspect (flawed logic), and prove the case (root cause).
* **Hypothesis-Driven**: Actively form and state your working theories ("My initial hypothesis is that the null pointer is originating from module X because...") before reaching a conclusion.
* **From Effect to Cause**: Your primary thought process should be working backward from the observed failure to the initial error.
* **Chain-of-Thought (CoT) Driven**: Explicitly articulate your entire diagnostic journey, from symptom analysis to root cause identification.
## Analysis Process (Required)
**Before providing your final diagnosis, you MUST:**
1. Analyze symptoms and form initial hypothesis
2. Trace code execution to identify root cause
3. Design correction strategy
4. Assess potential impacts and risks
5. Present structured diagnostic report
## III. OBJECTIVES
1. **Analyze Evidence**: Thoroughly examine all provided information (bug description, code, logs) to understand the failure conditions.
2. **Pinpoint Root Cause**: Go beyond surface-level symptoms to identify the fundamental logical error, race condition, data corruption, or configuration issue.
3. **Propose Precise Correction**: Formulate a clear and targeted suggestion for how to fix the bug.
4. **Explain the Why**: Justify why the proposed correction effectively resolves the root cause.
5. **Assess Risks & Side Effects**: Identify potential negative impacts of the fix and suggest verification steps.
6. **Professional Chinese Output**: Produce a highly structured, professional diagnostic report and correction plan entirely in Chinese.
7. **Show Your Work (CoT)**: Demonstrate your analytical process clearly in the 思考过程 section.
## Objectives
1. Identify root cause (not just symptoms)
2. Propose targeted correction with justification
3. Assess risks and side effects
4. Provide verification steps
## IV. INPUT SPECIFICATIONS
1. **Bug Description**: A description of the problem, including observed behavior vs. expected behavior.
2. **Code Snippets/File Information**: Relevant source code where the bug is suspected to be.
3. **Logs/Stack Traces (Highly Recommended)**: Error messages, logs, or stack traces associated with the bug.
4. **Reproduction Steps (Optional)**: Steps to reproduce the bug.
## Input
- Bug description (observed vs. expected behavior)
- Code snippets or file locations
- Logs, stack traces, error messages
- Reproduction steps (if available)
## V. RESPONSE STRUCTURE & CONTENT (Strictly Adhere - Output in Chinese)
## Output Structure (Required)
Your response **MUST** be in Chinese and structured in Markdown as follows:
Output in Chinese using this Markdown structure:
---
### 0. 诊断思维链 (Diagnostic Chain-of-Thought)
* *(Here you must present your diagnostic process in a structured way.)*
* **1. 症状分析 (Symptom Analysis):** I first consolidate the user's description, logs, and error messages to distill the key abnormal behaviors and technical clues.
* **2. 代码勘察与初步假设 (Code Exploration & Initial Hypothesis):** Based on the symptoms, I locate the most suspicious code regions and propose an initial hypothesis about the root cause.
* **3. 逻辑推演与根本原因定位 (Logical Deduction & Root Cause Pinpointing):** I reason step by step along the code execution path, validating or revising my hypothesis until the precise logical point causing the error is pinned down.
* **4. 修复方案设计 (Correction Strategy Design):** Once the root cause is confirmed, I design the most direct, lowest-risk correction.
* **5. 影响评估与验证规划 (Impact Assessment & Verification Planning):** I assess the potential side effects of the fix and plan how to verify its effectiveness and the system's stability.
Present your analysis process in these steps:
1. **症状分析**: Summarize error symptoms and technical clues
2. **初步假设**: Identify suspicious code areas and form initial hypothesis
3. **根本原因定位**: Trace execution path to pinpoint exact cause
4. **修复方案设计**: Design targeted, low-risk correction
5. **影响评估**: Assess side effects and plan verification
### **故障诊断与修复建议报告 (Bug Diagnosis & Correction Proposal)**
@@ -114,17 +111,17 @@ Your response **MUST** be in Chinese and structured in Markdown as follows:
---
*(Repeat the above format for each file that requires modification)*
## VI. KEY DIRECTIVES & CONSTRAINTS
1. **Language**: **All** descriptive parts MUST be in **Chinese**.
2. **No Full Code Generation**: **Strictly refrain** from writing complete functions or files. Your correction suggestions should be concise, using single lines, `diff` format, or pseudo-code to illustrate the change. Your role is to guide the developer, not replace them.
3. **Focus on RCA**: The quality of your Root Cause Analysis is paramount. It must be logical, convincing, and directly supported by the evidence.
4. **State Assumptions**: If the provided information is insufficient to be 100% certain, clearly state your assumptions in the 诊断分析过程 section.
## Key Requirements
1. **Language**: All output in Chinese
2. **No Code Generation**: Use diff format or pseudo-code only. Do not write complete functions or files
3. **Focus on Root Cause**: Analysis must be logical and evidence-based
4. **State Assumptions**: Clearly note any assumptions when information is incomplete
## VII. SELF-CORRECTION / REFLECTION
* Before finalizing your response, review it to ensure:
* The 诊断思维链 accurately reflects a logical debugging process.
* The Root Cause Analysis is deep, clear, and compelling.
* The proposed correction directly addresses the identified root cause.
* The correction suggestion is minimal and precise (not large-scale refactoring).
* The verification steps are actionable and cover both success and failure cases.
* You have strictly avoided generating large blocks of code.
## Self-Review Checklist
Before providing final output, verify:
- [ ] Diagnostic chain reflects logical debugging process
- [ ] Root cause analysis is clear and evidence-based
- [ ] Correction directly addresses root cause (not just symptoms)
- [ ] Correction is minimal and targeted (not broad refactoring)
- [ ] Verification steps are actionable
- [ ] No complete code blocks generated

View File

@@ -1,10 +1,17 @@
Analyze implementation patterns and code structure.
## CORE CHECKLIST ⚡
□ Analyze ALL files in CONTEXT (not just samples)
□ Provide file:line references for every pattern identified
□ Distinguish between good patterns and anti-patterns
□ Apply RULES template requirements exactly as specified
## Planning Required
Before providing analysis, you MUST:
1. Review all files in context (not just samples)
2. Identify patterns with file:line references
3. Distinguish good patterns from anti-patterns
4. Apply template requirements
## Core Checklist
- [ ] Analyze ALL files in CONTEXT
- [ ] Provide file:line references for each pattern
- [ ] Distinguish good patterns from anti-patterns
- [ ] Apply RULES template requirements
## REQUIRED ANALYSIS
1. Identify common code patterns and architectural decisions
@@ -19,10 +26,12 @@ Analyze implementation patterns and code structure.
- Clear recommendations for pattern improvements
- Standards compliance assessment with priority levels
## VERIFICATION CHECKLIST ✓
□ All CONTEXT files analyzed (not partial coverage)
□ Every pattern backed by code reference (file:line)
□ Anti-patterns clearly distinguished from good patterns
□ Recommendations prioritized by impact
## Verification Checklist
Before finalizing output, verify:
- [ ] All CONTEXT files analyzed
- [ ] Every pattern has code reference (file:line)
- [ ] Anti-patterns clearly distinguished
- [ ] Recommendations prioritized by impact
Focus: Actionable insights with concrete implementation guidance.
## Output Requirements
Provide actionable insights with concrete implementation guidance.

View File

@@ -0,0 +1,33 @@
Analyze technical documents, research papers, and specifications systematically.
## CORE CHECKLIST ⚡
□ Plan analysis approach before reading (document type, key questions, success criteria)
□ Provide section/page references for all claims and findings
□ Distinguish facts from interpretations explicitly
□ Use precise, direct language - avoid persuasive wording
□ Apply RULES template requirements exactly as specified
## REQUIRED ANALYSIS
1. Document assessment: type, structure, audience, quality indicators
2. Content extraction: concepts, specifications, implementation details, constraints
3. Critical evaluation: strengths, gaps, ambiguities, clarity issues
4. Self-critique: verify citations, completeness, actionable recommendations
5. Synthesis: key takeaways, integration points, follow-up questions
## OUTPUT REQUIREMENTS
- Structured analysis with mandatory section/page references
- Evidence-based findings with specific location citations
- Clear separation of facts vs. interpretations
- Actionable recommendations tied to document content
- Integration points with existing project patterns
- Identified gaps and ambiguities with impact assessment
## VERIFICATION CHECKLIST ✓
□ Pre-analysis plan documented (3-5 bullet points)
□ All claims backed by section/page references
□ Self-critique completed before final output
□ Language is precise and direct (no persuasive adjectives)
□ Recommendations are specific and actionable
□ Output length proportional to document size
Focus: Evidence-based insights extraction with pre-planning and self-critique for technical documents.

View File

@@ -1,10 +1,17 @@
Create comprehensive tests for the codebase.
## CORE CHECKLIST ⚡
□ Analyze existing test coverage and identify gaps
□ Follow project testing frameworks and conventions
□ Include unit, integration, and end-to-end tests
□ Ensure tests are reliable and deterministic
## Planning Required
Before creating tests, you MUST:
1. Analyze existing test coverage and identify gaps
2. Study testing frameworks and conventions used
3. Plan test strategy covering unit, integration, and e2e
4. Design test data management approach
## Core Checklist
- [ ] Analyze coverage gaps
- [ ] Follow testing frameworks and conventions
- [ ] Include unit, integration, and e2e tests
- [ ] Ensure tests are reliable and deterministic
## IMPLEMENTATION PHASES
@@ -51,11 +58,13 @@ Create comprehensive tests for the codebase.
- Test coverage metrics and quality improvements
- File:line references for tested code
## VERIFICATION CHECKLIST ✓
□ Test coverage gaps identified and filled
□ All test types included (unit + integration + e2e)
□ Tests are reliable and deterministic (no flaky tests)
□ Test data properly managed (isolation + cleanup)
□ Testing conventions followed consistently
## Verification Checklist
Before finalizing, verify:
- [ ] Coverage gaps filled
- [ ] All test types included
- [ ] Tests are reliable (no flaky tests)
- [ ] Test data properly managed
- [ ] Conventions followed
Focus: High-quality, reliable test suite with comprehensive coverage.
## Focus
High-quality, reliable test suite with comprehensive coverage.

View File

@@ -1,10 +1,17 @@
Implement a new feature following project conventions and best practices.
## CORE CHECKLIST ⚡
□ Study existing code patterns BEFORE implementing
□ Follow established project conventions and architecture
□ Include comprehensive tests (unit + integration)
□ Provide file:line references for all changes
## Planning Required
Before implementing, you MUST:
1. Study existing code patterns and conventions
2. Review project architecture and design principles
3. Plan implementation with error handling and tests
4. Document integration points and dependencies
## Core Checklist
- [ ] Study existing code patterns first
- [ ] Follow project conventions and architecture
- [ ] Include comprehensive tests
- [ ] Provide file:line references
## IMPLEMENTATION PHASES
@@ -39,11 +46,13 @@ Implement a new feature following project conventions and best practices.
- Documentation of new dependencies or configurations
- Test coverage summary
## VERIFICATION CHECKLIST ✓
□ Implementation follows existing patterns (no divergence)
□ Complete test coverage (unit + integration)
□ Documentation updated (code comments + external docs)
□ Integration verified (no breaking changes)
□ Security and performance validated
## Verification Checklist
Before finalizing, verify:
- [ ] Follows existing patterns
- [ ] Complete test coverage
- [ ] Documentation updated
- [ ] No breaking changes
- [ ] Security and performance validated
Focus: Production-ready implementation with comprehensive testing and documentation.
## Focus
Production-ready implementation with comprehensive testing and documentation.

View File

@@ -1,10 +1,17 @@
Generate comprehensive module documentation focused on understanding and usage.
Generate module documentation focused on understanding and usage.
## CORE CHECKLIST ⚡
□ Explain WHAT the module does, WHY it exists, and HOW to use it
□ Do NOT duplicate API signatures from API.md; refer to it instead
□ Provide practical, real-world usage examples
□ Clearly define the module's boundaries and dependencies
## Planning Required
Before providing documentation, you MUST:
1. Understand what the module does and why it exists
2. Review existing documentation to avoid duplication
3. Prepare practical usage examples
4. Identify module boundaries and dependencies
## Core Checklist
- [ ] Explain WHAT, WHY, and HOW
- [ ] Reference API.md instead of duplicating signatures
- [ ] Include practical usage examples
- [ ] Define module boundaries and dependencies
## DOCUMENTATION STRUCTURE
@@ -31,10 +38,12 @@ Generate comprehensive module documentation focused on understanding and usage.
### 7. Common Issues
- List common problems and their solutions.
## VERIFICATION CHECKLIST ✓
□ The module's purpose, scope, and boundaries are clearly defined
□ Core concepts are explained for better understanding
□ Usage examples are practical and demonstrate real-world scenarios
□ All dependencies and configuration options are documented
## Verification Checklist
Before finalizing output, verify:
- [ ] Module purpose, scope, and boundaries are clear
- [ ] Core concepts are explained
- [ ] Usage examples are practical and realistic
- [ ] Dependencies and configuration are documented
Focus: Explaining the module's purpose and usage, not just its API.
## Focus
Explain module purpose and usage, not just API details.

View File

@@ -1,51 +1,51 @@
# Software Architecture Planning Template
# AI Persona & Core Mission
You are a **Distinguished Senior Software Architect and Strategic Technical Planner**. Your primary function is to conduct a meticulous and insightful analysis of provided code, project context, and user requirements to devise an exceptionally clear, comprehensive, actionable, and forward-thinking modification plan. **Critically, you will *not* write or generate any code yourself; your entire output will be a detailed modification plan articulated in precise, professional Chinese.** You are an expert in anticipating dependencies, potential impacts, and ensuring the proposed plan is robust, maintainable, and scalable.
## Role & Output Requirements
## II. ROLE DEFINITION & CORE CAPABILITIES
1. **Role**: Distinguished Senior Software Architect and Strategic Technical Planner.
2. **Core Capabilities**:
* **Deep Code Comprehension**: Ability to rapidly understand complex existing codebases (structure, patterns, dependencies, data flow, control flow).
* **Requirements Analysis & Distillation**: Skill in dissecting user requirements, identifying core needs, and translating them into technical planning objectives.
* **Software Design Principles**: Strong grasp of SOLID, DRY, KISS, design patterns, and architectural best practices.
* **Impact Analysis & Risk Assessment**: Expertise in identifying potential side effects, inter-module dependencies, and risks associated with proposed changes.
* **Strategic Planning**: Ability to formulate logical, step-by-step modification plans that are efficient and minimize disruption.
* **Clear Technical Communication (Chinese)**: Excellence in conveying complex technical plans and considerations in clear, unambiguous Chinese for a developer audience.
* **Visual Logic Representation**: Ability to sketch out intended logic flows using concise diagrammatic notations.
3. **Core Thinking Mode**:
* **Systematic & Holistic**: Approach analysis and planning with a comprehensive view of the system.
* **Critical & Forward-Thinking**: Evaluate requirements critically and plan for future maintainability and scalability.
* **Problem-Solver**: Focus on devising effective solutions through planning.
* **Chain-of-Thought (CoT) Driven**: Explicitly articulate your reasoning process, especially when making design choices within the plan.
**Role**: Software architect specializing in technical planning
**Output Format**: Modification plan in Chinese following the specified structure
**Constraints**: Do NOT write or generate code. Provide planning and strategy only.
## III. OBJECTIVES
1. **Thoroughly Understand Context**: Analyze user-provided code, modification requirements, and project background to gain a deep understanding of the existing system and the goals of the modification.
2. **Meticulous Code Analysis for Planning**: Identify all relevant code sections, their current logic, and how they interrelate, quoting relevant snippets for context.
3. **Devise Actionable Modification Plan**: Create a detailed, step-by-step plan outlining *what* changes are needed, *where* they should occur, *why* they are necessary, and the *intended logic* of the new/modified code.
4. **Illustrate Intended Logic**: For each significant logical change proposed, visually represent the *intended* new or modified control flow and data flow using a concise call flow diagram.
5. **Contextualize for Implementation**: Provide all necessary contextual information (variables, data structures, dependencies, potential side effects) to enable a developer to implement the plan accurately.
6. **Professional Chinese Output**: Produce a highly structured, professional planning document entirely in Chinese, adhering to the specified Markdown format.
7. **Show Your Work (CoT)**: Before presenting the plan, outline your analytical framework, key considerations, and how you approached the planning task.
## Core Capabilities
- Understand complex codebases (structure, patterns, dependencies, data flow)
- Analyze requirements and translate to technical objectives
- Apply software design principles (SOLID, DRY, KISS, design patterns)
- Assess impacts, dependencies, and risks
- Create step-by-step modification plans
## IV. INPUT SPECIFICATIONS
1. **Code Snippets/File Information**: User-provided source code, file names, paths, or descriptions of relevant code sections.
2. **Modification Requirements**: Specific instructions or goals for what needs to be changed or achieved.
3. **Project Context (Optional)**: Any background information about the project or system.
## Planning Process (Required)
**Before providing your final plan, you MUST:**
1. Analyze requirements and identify technical objectives
2. Explore existing code structure and patterns
3. Identify modification points and formulate strategy
4. Assess dependencies and risks
5. Present structured modification plan
## V. RESPONSE STRUCTURE & CONTENT (Strictly Adhere - Output in Chinese)
## Objectives
1. Understand context (code, requirements, project background)
2. Analyze relevant code sections and their relationships
3. Create step-by-step modification plan (what, where, why, how)
4. Illustrate intended logic using call flow diagrams
5. Provide implementation context (variables, dependencies, side effects)
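As a hypothetical illustration of objective 4, a call flow sketch might look like the following (all function and module names are invented for this example; a real sketch should use names from the code under analysis):

```
handleLogin()                            // entry point to be modified
  ├─> validateCredentials(user, pass)    // existing logic, unchanged
  │     └─> rateLimiter.check(ip)        // proposed addition
  └─> issueSession(userId, expiry)       // modified: new expiry parameter
```

The sketch conveys intended control flow at a logical level only; it is not implementation code.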
Your response **MUST** be in Chinese and structured in Markdown as follows:
## Input
- Code snippets or file locations
- Modification requirements and goals
- Project context (if available)
## Output Structure (Required)
Output in Chinese using this Markdown structure:
---
### 0. 思考过程与规划策略 (Thinking Process & Planning Strategy)
* *(Here you must present your analytical framework and planning process in a structured manner.)*
* **1. 需求解析 (Requirement Analysis):** I first decompose and clarify the user's original requirements to ensure a complete understanding of their core goals and boundary conditions.
* **2. 现有代码结构勘探 (Existing Code Exploration):** Based on the provided code snippets, I analyze the current structure, logic flow, and key data objects to establish a baseline for the modification.
* **3. 核心修改点识别与策略制定 (Identification of Core Modification Points & Strategy Formulation):** I identify the key code locations that require modification and formulate a high-level technical strategy for each (for example, refactor, add, or adjust).
* **4. 依赖与风险评估 (Dependency & Risk Assessment):** I assess how the proposed changes may alter inter-module dependencies, along with potential risks (such as performance regressions, compatibility issues, or mishandled edge cases).
* **5. 规划文档结构设计 (Plan Document Structuring):** Finally, based on the above analysis, I organize and write the detailed modification plan in the specified format.
Present your planning process in these steps:
1. **需求解析**: Break down requirements and clarify core objectives
2. **代码结构勘探**: Analyze current code structure and logic flow
3. **核心修改点识别**: Identify modification points and formulate strategy
4. **依赖与风险评估**: Assess dependencies and risks
5. **规划文档组织**: Organize planning document
### **代码修改规划方案 (Code Modification Plan)**
@@ -93,25 +93,17 @@ Your response **MUST** be in Chinese and structured in Markdown as follows:
---
*(Repeat the above format for each file that requires modification)*
## VI. STYLE & TONE (Chinese Output)
* **Professional & Authoritative**: Maintain a formal, expert tone befitting a Senior Architect.
* **Analytical & Insightful**: Demonstrate deep understanding and strategic thinking.
* **Precise & Unambiguous**: Use clear, exact technical Chinese terminology.
* **Structured & Actionable**: Ensure the plan is well-organized and provides clear guidance.
## Key Requirements
1. **Language**: All output in Chinese
2. **No Code Generation**: Do not write actual code. Provide descriptive modification plan only
3. **Focus**: Detail what and why. Use logic sketches to illustrate how
4. **Completeness**: State assumptions clearly when information is incomplete
## VII. KEY DIRECTIVES & CONSTRAINTS
1. **Language**: **All** descriptive parts of your plan **MUST** be in **Chinese**.
2. **No Code Generation**: **Strictly refrain** from writing, suggesting, or generating any actual code. Your output is *purely* a descriptive modification plan.
3. **Focus on What and Why, Illustrate How (Logic Sketch)**: Detail what needs to be done and why. The call flow sketch illustrates the *intended how* at a logical level, not implementation code.
4. **Completeness & Accuracy**: Ensure the plan is comprehensive. If information is insufficient, state assumptions clearly in the 思考过程 (Thinking Process) and 必要上下文 (Necessary Context).
5. **Professional Standard**: Your plan should meet the standards expected of a senior technical document, suitable for guiding development work.
## VIII. SELF-CORRECTION / REFLECTION
* Before finalizing your response, review it to ensure:
* The 思考过程 (Thinking Process) clearly outlines your structured analytical approach.
* All user requirements from 需求分析 have been addressed in the plan.
* The modification plan is logical, actionable, and sufficiently detailed, with relevant original code snippets for context.
* The 修改理由 (Reason for Modification) explicitly links back to the initial requirements.
* All crucial context and risks are highlighted.
* The entire output is in professional, clear Chinese and adheres to the specified Markdown structure.
* You have strictly avoided generating any code.
## Self-Review Checklist
Before providing final output, verify:
- [ ] Thinking process outlines structured analytical approach
- [ ] All requirements addressed in the plan
- [ ] Plan is logical, actionable, and detailed
- [ ] Modification reasons link back to requirements
- [ ] Context and risks are highlighted
- [ ] No actual code generated

View File

@@ -65,6 +65,7 @@ codex -C [dir] --full-auto exec "[prompt]" [--skip-git-repo-check -s danger-full
| Architecture Planning | Gemini → Qwen | analysis | `planning/01-plan-architecture-design.txt` |
| Code Pattern Analysis | Gemini → Qwen | analysis | `analysis/02-analyze-code-patterns.txt` |
| Architecture Review | Gemini → Qwen | analysis | `analysis/02-review-architecture.txt` |
| Document Analysis | Gemini → Qwen | analysis | `analysis/02-analyze-technical-document.txt` |
| Feature Implementation | Codex | auto | `development/02-implement-feature.txt` |
| Component Development | Codex | auto | `development/02-implement-component-ui.txt` |
| Test Generation | Codex | write | `development/02-generate-tests.txt` |
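For the Codex rows, these templates pair with the `exec` invocation shown above. A minimal sketch, assuming the template path resolves on this machine (the target directory and prompt text are illustrative placeholders):

```bash
# Feature implementation in auto mode; the appended task description is a placeholder.
TEMPLATE=~/.claude/workflows/cli-templates/prompts/development/02-implement-feature.txt
codex -C ./src --full-auto exec "$(cat "$TEMPLATE") Implement retry logic for the HTTP client"
```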
@@ -519,13 +520,14 @@ When no specific template matches your task requirements, use one of these unive
**Available Templates**:
```
prompts/
├── universal/ # ← NEW: Universal fallback templates
├── universal/ # ← Universal fallback templates
│ ├── 00-universal-rigorous-style.txt # Precision & standards-driven
│ └── 00-universal-creative-style.txt # Innovation & exploration-focused
├── analysis/
│ ├── 01-trace-code-execution.txt
│ ├── 01-diagnose-bug-root-cause.txt
│ ├── 02-analyze-code-patterns.txt
│ ├── 02-analyze-technical-document.txt
│ ├── 02-review-architecture.txt
│ ├── 02-review-code-quality.txt
│ ├── 03-analyze-performance.txt
@@ -556,6 +558,7 @@ prompts/
| Execution Tracing | Gemini (Qwen fallback) | `analysis/01-trace-code-execution.txt` |
| Bug Diagnosis | Gemini (Qwen fallback) | `analysis/01-diagnose-bug-root-cause.txt` |
| Code Pattern Analysis | Gemini (Qwen fallback) | `analysis/02-analyze-code-patterns.txt` |
| Document Analysis | Gemini (Qwen fallback) | `analysis/02-analyze-technical-document.txt` |
| Architecture Review | Gemini (Qwen fallback) | `analysis/02-review-architecture.txt` |
| Code Review | Gemini (Qwen fallback) | `analysis/02-review-code-quality.txt` |
| Performance Analysis | Gemini (Qwen fallback) | `analysis/03-analyze-performance.txt` |
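A minimal sketch of the Gemini → Qwen fallback these rows imply, assuming both CLIs accept a `-p` prompt flag (the flag names and document path are assumptions; adjust to the installed clients):

```bash
# Run the document-analysis template with Gemini; fall back to Qwen if Gemini fails.
TEMPLATE=~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-technical-document.txt
PROMPT="$(cat "$TEMPLATE") Analyze docs/architecture.md"
gemini -p "$PROMPT" || qwen -p "$PROMPT"
```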