catlog22
2025-11-20 18:51:24 +08:00
17 changed files with 2194 additions and 208 deletions

View File

@@ -0,0 +1,126 @@
---
name: document-analysis
description: Read-only technical document/paper analysis using Gemini/Qwen/Codex with a systematic comprehension template for insight extraction
argument-hint: "[--tool codex|gemini|qwen] [--enhance] [--cd path] document path or topic"
allowed-tools: SlashCommand(*), Bash(*), Task(*), Read(*)
---
# CLI Mode: Document Analysis (/cli:mode:document-analysis)
## Purpose
Systematic analysis of technical documents, research papers, API documentation, and technical specifications.
**Tool Selection**:
- **gemini** (default) - Best for document comprehension and structure analysis
- **qwen** - Fallback when Gemini unavailable
- **codex** - Alternative for complex technical documents
**Key Feature**: `--cd` flag for directory-scoped document discovery
## Parameters
- `--tool <gemini|qwen|codex>` - Tool selection (default: gemini)
- `--enhance` - Enhance analysis target with `/enhance-prompt`
- `--cd "path"` - Target directory for document search
- `<document-path-or-topic>` (Required) - File path or topic description
## Tool Usage
**Gemini** (Primary):
```bash
/cli:mode:document-analysis "README.md"
/cli:mode:document-analysis --tool gemini "analyze API documentation"
```
**Qwen** (Fallback):
```bash
/cli:mode:document-analysis --tool qwen "docs/architecture.md"
```
**Codex** (Alternative):
```bash
/cli:mode:document-analysis --tool codex "research paper in docs/"
```
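The `--cd` and `--enhance` flags compose with any tool choice. For example:
```bash
# Scope discovery to a directory and enhance the analysis target first
/cli:mode:document-analysis --tool gemini --cd "docs/api" --enhance "summarize authentication flow"
```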
## Execution Flow
Uses **cli-execution-agent** for automated document analysis:
```javascript
Task(
subagent_type="cli-execution-agent",
description="Systematic document comprehension and insights extraction",
prompt=`
Task: ${document_path_or_topic}
Mode: document-analysis
Tool: ${tool_flag || 'gemini'}
Directory: ${cd_path || '.'}
Enhance: ${enhance_flag}
Template: ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-technical-document.txt
Execute systematic document analysis:
1. Document Discovery:
- Locate target document(s) via path or topic keywords
- Identify document type (README, API docs, research paper, spec, tutorial)
- Detect document format (Markdown, PDF, plain text, reStructuredText)
- Discover related documents (references, appendices, examples)
- Use MCP/ripgrep for comprehensive file discovery
2. Pre-Analysis Planning (Required):
- Determine document structure (sections, hierarchy, flow)
- Identify key components (abstract, methodology, implementation details)
- Map dependencies and cross-references
- Assess document scope and complexity
- Plan analysis approach based on document type
3. CLI Command Construction:
- Tool: ${tool_flag || 'gemini'} (qwen fallback, codex for complex docs)
- Directory: cd ${cd_path || '.'} &&
- Context: @{document_paths} + @CLAUDE.md + related files
- Mode: analysis (read-only)
- Template: analysis/02-analyze-technical-document.txt
4. Analysis Execution:
- Apply 6-field template structure (PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES)
- Execute multi-phase analysis protocol with pre-planning
- Perform self-critique before final output
- Generate structured report with evidence-based insights
5. Output Generation:
- Comprehensive document analysis report
- Structured insights with section references
- Critical assessment with evidence
- Actionable recommendations
- Save to .workflow/active/WFS-[id]/.chat/doc-analysis-[timestamp].md (or .scratchpad/)
`
)
```
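For orientation, the command assembled in step 3 might look like the following sketch (the target file and prompt contents are illustrative; the `-p` prompt flag mirrors the gemini invocations used elsewhere in these workflows, not a verified CLI contract):
```bash
# Illustrative only: a read-only Gemini run built from the 6-field template
cd docs && gemini -p "PURPOSE: Understand the system described in architecture.md
TASK: Extract components, data flow, and integration points with section references
MODE: analysis (read-only)
CONTEXT: @architecture.md @CLAUDE.md
EXPECTED: Structured report with evidence-based insights and explicit gaps
RULES: Cite a section for every claim; do not modify files"
```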
## Core Rules
- **Read-only**: Analyzes documents, does NOT modify files
- **Evidence-based**: All claims must reference specific sections/pages
- **Pre-planning**: Requires analysis approach planning before execution
- **Precise language**: Direct, accurate wording - no persuasive embellishment
- **Output**: `.workflow/active/WFS-[id]/.chat/doc-analysis-[timestamp].md` (or `.scratchpad/` if no session)
## Document Types Supported
| Type | Focus Areas | Key Outputs |
|------|-------------|-------------|
| README | Purpose, setup, usage | Integration steps, quick-start guide |
| API Documentation | Endpoints, parameters, responses | API usage patterns, integration points |
| Research Paper | Methodology, findings, validity | Applicable techniques, implementation feasibility |
| Specification | Requirements, standards, constraints | Compliance checklist, implementation requirements |
| Tutorial | Learning path, examples, exercises | Key concepts, practical applications |
| Architecture Docs | System design, components, patterns | Design decisions, integration points, trade-offs |
## Best Practices
1. **Scope Definition**: Clearly define what aspects to analyze before starting
2. **Layered Reading**: Structure/Overview → Details → Critical Analysis → Synthesis
3. **Evidence Trail**: Track section references for all extracted information
4. **Gap Identification**: Note missing information or unclear sections explicitly
5. **Actionable Output**: Focus on insights that inform decisions or actions

View File

@@ -93,7 +93,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null | xargs cat 2>/dev/null; fi)
```
-**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/phase2-analysis.json` with structure:
+**Data Processing**: Parse bash outputs, calculate statistics, use **Write tool** to create `${session_dir}/.process/doc-planning-data.json` with structure:
```json
{
@@ -122,7 +122,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
**Then** use **Edit tool** to update `workflow-session.json` adding analysis field.
-**Output**: Single `phase2-analysis.json` with all analysis data (no temp files or Python scripts).
+**Output**: Single `doc-planning-data.json` with all analysis data (no temp files or Python scripts).
**Auto-skipped**: Tests (`**/test/**`, `**/*.test.*`), Build (`**/node_modules/**`, `**/dist/**`), Config (root-level files), Vendor directories.
@@ -131,8 +131,8 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
**Commands**:
```bash
-# Count existing docs from phase2-analysis.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq '.existing_docs.file_list | length')
+# Count existing docs from doc-planning-data.json
+bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
```
**Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
@@ -186,8 +186,8 @@ Large Projects (single dir >10 docs):
**Commands**:
```bash
-# 1. Get top-level directories from phase2-analysis.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/.process/phase2-analysis.json | jq -r '.top_level_dirs[]')
+# 1. Get top-level directories from doc-planning-data.json
+bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
# 2. Get mode from workflow-session.json
bash(cat .workflow/active/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
@@ -205,7 +205,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
- If total ≤10 docs: create group
- If total >10 docs: split to 1 dir/group or subdivide
- If single dir >10 docs: split by subdirectories
-3. Use **Edit tool** to update `phase2-analysis.json` adding groups field:
+3. Use **Edit tool** to update `doc-planning-data.json` adding groups field:
```json
"groups": {
"count": 3,
@@ -219,7 +219,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
**Task ID Calculation**:
```bash
-group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/phase2-analysis.json)
+group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json)
readme_id=$((group_count + 1)) # Next ID after groups
arch_id=$((group_count + 2))
api_id=$((group_count + 3))
@@ -241,7 +241,7 @@ api_id=$((group_count + 3))
**Generation Process**:
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
-2. Read group assignments from phase2-analysis.json
+2. Read group assignments from doc-planning-data.json
3. Generate Level 1 tasks (IMPL-001 to IMPL-N, one per group)
4. Generate Level 2+ tasks if mode=full (README, ARCHITECTURE, HTTP API)
@@ -266,14 +266,14 @@ api_id=$((group_count + 3))
},
"context": {
"requirements": [
"Process directories from group ${group_number} in phase2-analysis.json",
"Process directories from group ${group_number} in doc-planning-data.json",
"Generate docs to .workflow/docs/${project_name}/ (mirrored structure)",
"Code folders: API.md + README.md; Navigation folders: README.md only",
"Use pre-analyzed data from Phase 2 (no redundant analysis)"
],
"focus_paths": ["${group_dirs_from_json}"],
"precomputed_data": {
"phase2_analysis": "${session_dir}/.process/phase2-analysis.json"
"phase2_analysis": "${session_dir}/.process/doc-planning-data.json"
}
},
"flow_control": {
@@ -282,8 +282,8 @@ api_id=$((group_count + 3))
"step": "load_precomputed_data",
"action": "Load Phase 2 analysis and extract group directories",
"commands": [
"bash(cat ${session_dir}/.process/phase2-analysis.json)",
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/phase2-analysis.json)"
"bash(cat ${session_dir}/.process/doc-planning-data.json)",
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
],
"output_to": "phase2_context",
"note": "Single JSON file contains all Phase 2 analysis results"
@@ -328,7 +328,7 @@ api_id=$((group_count + 3))
{
"step": 2,
"title": "Batch generate documentation via CLI",
"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/phase2-analysis.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
"depends_on": [1],
"output": "generated_docs"
}
@@ -468,7 +468,7 @@ api_id=$((group_count + 3))
├── IMPL_PLAN.md
├── TODO_LIST.md
├── .process/
-│   └── phase2-analysis.json     # All Phase 2 analysis data (replaces 7+ files)
+│   └── doc-planning-data.json   # All Phase 2 analysis data (replaces 7+ files)
└── .task/
├── IMPL-001.json # Small: all modules | Large: group 1
├── IMPL-00N.json # (Large only: groups 2-N)
@@ -477,7 +477,7 @@ api_id=$((group_count + 3))
└── IMPL-{N+3}.json # HTTP API (optional)
```
-**phase2-analysis.json Structure**:
+**doc-planning-data.json Structure**:
```json
{
"metadata": {

View File

@@ -1,15 +1,16 @@
---
name: workflow:status
description: Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view
-argument-hint: "[optional: --project|task-id|--validate]"
+argument-hint: "[optional: --project|task-id|--validate|--dashboard]"
---
# Workflow Status Command (/workflow:status)
## Overview
-Generates on-demand views from project and session data. Supports two modes:
+Generates on-demand views from project and session data. Supports multiple modes:
1. **Project Overview** (`--project`): Shows completed features and project statistics
2. **Workflow Tasks** (default): Shows current session task progress
3. **HTML Dashboard** (`--dashboard`): Generates interactive HTML task board with active and archived sessions
No synchronization needed - all views are calculated from current JSON state.
@@ -19,6 +20,7 @@ No synchronization needed - all views are calculated from current JSON state.
/workflow:status --project # Show project-level feature registry
/workflow:status impl-1 # Show specific task details
/workflow:status --validate # Validate workflow integrity
/workflow:status --dashboard # Generate HTML dashboard board
```
## Implementation Flow
@@ -192,4 +194,135 @@ find .workflow/active/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null |
## Completed Tasks
- [COMPLETED] impl-0: Setup completed
```
## Dashboard Mode (HTML Board)
### Step 1: Check for --dashboard flag
```bash
# If --dashboard flag present → Execute Dashboard Mode
```
### Step 2: Collect Workflow Data
**Collect Active Sessions**:
```bash
# Find all active sessions
find .workflow/active/ -name "WFS-*" -type d 2>/dev/null
# For each active session, read metadata and tasks
for session in $(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null); do
cat "$session/workflow-session.json"
find "$session/.task/" -name "*.json" -type f 2>/dev/null
done
```
**Collect Archived Sessions**:
```bash
# Find all archived sessions
find .workflow/archives/ -name "WFS-*" -type d 2>/dev/null
# Read manifest if exists
cat .workflow/archives/manifest.json 2>/dev/null
# For each archived session, read metadata
for archive in $(find .workflow/archives/ -name "WFS-*" -type d 2>/dev/null); do
cat "$archive/workflow-session.json" 2>/dev/null
# Count completed tasks
find "$archive/.task/" -name "*.json" -type f 2>/dev/null | wc -l
done
```
### Step 3: Process and Structure Data
**Build data structure for dashboard**:
```javascript
const dashboardData = {
activeSessions: [],
archivedSessions: [],
generatedAt: new Date().toISOString()
};
// Process active sessions (session directories discovered in Step 2)
for (const active_session of active_sessions) {
  const sessionData = JSON.parse(Read(`${active_session}/workflow-session.json`));
  const tasks = [];

  // Load all tasks for this session
  for (const task_file of find(`${active_session}/.task/*.json`)) {
    const taskData = JSON.parse(Read(task_file));
    tasks.push({
      task_id: taskData.task_id,
      title: taskData.title,
      status: taskData.status,
      type: taskData.type
    });
  }

  dashboardData.activeSessions.push({
    session_id: sessionData.session_id,
    project: sessionData.project,
    status: sessionData.status,
    created_at: sessionData.created_at || sessionData.initialized_at,
    tasks: tasks
  });
}

// Process archived sessions
for (const archived_session of archived_sessions) {
  const sessionData = JSON.parse(Read(`${archived_session}/workflow-session.json`));
  const taskCount = bash(`find ${archived_session}/.task -name "*.json" -type f | wc -l`);

  dashboardData.archivedSessions.push({
    session_id: sessionData.session_id,
    project: sessionData.project,
    archived_at: sessionData.completed_at || sessionData.archived_at,
    taskCount: parseInt(taskCount, 10),
    archive_path: archived_session
  });
}
```
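For reference, the assembled structure might look like this (all values illustrative):
```json
{
  "activeSessions": [
    {
      "session_id": "WFS-docs-20251120",
      "project": "example-project",
      "status": "active",
      "created_at": "2025-11-20T10:00:00Z",
      "tasks": [
        { "task_id": "IMPL-001", "title": "Generate module docs", "status": "in_progress", "type": "implementation" }
      ]
    }
  ],
  "archivedSessions": [
    {
      "session_id": "WFS-docs-20251101",
      "project": "example-project",
      "archived_at": "2025-11-05T16:30:00Z",
      "taskCount": 4,
      "archive_path": ".workflow/archives/WFS-docs-20251101"
    }
  ],
  "generatedAt": "2025-11-20T10:51:24.000Z"
}
```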
### Step 4: Generate HTML from Template
**Load template and inject data**:
```javascript
// Read the HTML template
const template = Read("~/.claude/templates/workflow-dashboard.html");
// Prepare data for injection
const dataJson = JSON.stringify(dashboardData, null, 2);
// Replace placeholder with actual data
const htmlContent = template.replace('{{WORKFLOW_DATA}}', dataJson);
// Ensure .workflow directory exists
bash(mkdir -p .workflow);
```
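The template file itself is not part of this commit; a minimal sketch of the placeholder contract it is assumed to follow (a `{{WORKFLOW_DATA}}` token inside a script tag) could be:
```html
<!-- Hypothetical excerpt of ~/.claude/templates/workflow-dashboard.html -->
<script>
  // {{WORKFLOW_DATA}} is replaced with the serialized dashboardData JSON
  const workflowData = {{WORKFLOW_DATA}};
  renderDashboard(workflowData); // assumed entry point of the template's own script
</script>
```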
### Step 5: Write HTML File
```javascript
// Write the generated HTML to .workflow/dashboard.html
Write({
file_path: ".workflow/dashboard.html",
content: htmlContent
})
```
### Step 6: Display Success Message
```markdown
Dashboard generated successfully!
Location: .workflow/dashboard.html
Open in browser:
file://$(pwd)/.workflow/dashboard.html
Features:
- 📊 Active sessions overview
- 📦 Archived sessions history
- 🔍 Search and filter
- 📈 Progress tracking
- 🎨 Dark/light theme
Refresh data: Re-run /workflow:status --dashboard
```