Compare commits


26 Commits

Author SHA1 Message Date
catlog22
67a578450c chore: Bump version to 6.3.52 2026-01-29 21:22:49 +08:00
catlog22
d5199ad2d4 Add merge-plans-with-file and quick-plan-with-file prompts for enhanced planning workflows
- Introduced merge-plans-with-file prompt for aggregating multiple planning artifacts, resolving conflicts, and synthesizing a unified execution plan.
- Implemented detailed execution process including discovery, normalization, conflict detection, resolution, and synthesis phases.
- Added quick-plan-with-file prompt for rapid planning with minimal documentation, utilizing multi-agent analysis for actionable outputs.
- Both prompts support various input formats and provide structured outputs for execution and documentation.
2026-01-29 21:21:29 +08:00
catlog22
f3c773a81e Refactor Codex Issue Plan-Execute Skill Documentation and CLI Options
- Deleted obsolete INDEX.md and OPTIMIZATION_SUMMARY.md files, consolidating documentation for improved clarity and organization.
- Removed skipGitRepoCheck option from CLI execution parameters to streamline command usage.
- Updated CLI executor utilities to automatically skip git repository checks, allowing execution in non-git directories.
- Enhanced documentation with new ARCHITECTURE.md and INDEX.md files for better navigation and understanding of the system architecture.
- Created CONTENT_MIGRATION_REPORT.md to verify zero content loss during the consolidation process.
2026-01-29 20:39:12 +08:00
catlog22
875b1f19bd feat: Complete Codex Issue Plan-Execute Skill v2.0 optimization
- Added OPTIMIZATION_SUMMARY.md documenting the optimization process and results in detail
- Added README_OPTIMIZATION.md outlining the optimized file structure and key metrics
- Created specs/agent-roles.md consolidating the role definitions of the Planning Agent and the Execution Agent
- Merged multiple prompt files to reduce duplicated content and optimize token usage
- Added ARCHITECTURE.md and INDEX.md to provide system architecture and documentation navigation
- Added CONTENT_MIGRATION_REPORT.md to verify complete, zero-loss content migration
- Updated file references for backward compatibility and added deprecation notices
2026-01-29 20:37:30 +08:00
catlog22
c08f5382d3 fix: Improve command detail modal tab handling and error reporting 2026-01-29 17:48:06 +08:00
catlog22
21d764127f Add command relationships, essential commands, and validation script
- Introduced `command-relationships.json` to define internal calls, next steps, and prerequisites for various workflows.
- Created `essential-commands.json` to document key commands, their descriptions, arguments, and usage scenarios.
- Added `validate-help.py` script to check for the existence of source files referenced in command definitions, ensuring all necessary files are present.
2026-01-29 17:29:37 +08:00
catlog22
860dbdab56 fix: Unify execution IDs between broadcast events and session storage
- Pass generated executionId to cliExecutorTool.execute as id parameter
- Ensures CLI_EXECUTION_STARTED broadcast uses same ID as saved session
- Fixes "Conversation not found" errors when querying by broadcast ID
- Add DEBUG logging for executionId tracking

This resolves the mismatch where:
  - Broadcast event used ID from Date.now() at broadcast time
  - Session saved used different ID from Date.now() at completion time
  - Now all use the same ID generated at cli.ts:868

Changes:
- cli.ts:868 - executionId generated once
- cli.ts:1001 - pass executionId to execute() as id parameter
- cli-executor-core.ts automatically uses passed id as conversationId
2026-01-29 16:59:00 +08:00
catlog22
113dce55c5 fix: Auto-detect JSON Lines output format for Codex CLI
Problem: Codex CLI uses --json flag to output JSONL events, but executor was using plain text parser. This prevented proper parsing of structured events, breaking session creation.

Root cause: buildCommand() added --json flag for Codex but never communicated this to the output parser. Result: JSONL events treated as raw text → session markers lost.

Solution:
- Extend buildCommand() to return outputFormat
- Auto-detect 'json-lines' when tool is 'codex'
- Use auto-detected format in executeCliTool()
- Properly parse structured events and extract session data

Files modified:
- ccw/src/tools/cli-executor-utils.ts: Add output format auto-detection
- ccw/src/tools/cli-executor-core.ts: Use auto-detected format for parser
- ccw/src/commands/cli.ts: Add debug instrumentation

Verified:
- Codex outputs valid JSONL (confirmed via direct test)
- CLI_EXECUTION_STARTED events broadcast correctly
- Issue was downstream in output parsing, not event transmission
2026-01-29 16:38:30 +08:00
catlog22
0b791c03cf fix: Resolve API path resolution for document loading
- Fixed source paths in command.json: change ../../../ to ../../
  (sources are relative to .claude/skills/ccw-help/, need 2 levels to reach .claude/)
- Rewrote help-routes.ts /api/help/command-content endpoint:
  - Use resolve() to properly handle ../ sequences in paths
  - Resolve paths against commandJsonDir (where command.json is located)
  - Maintain security checks to prevent path traversal
- Verified all document paths now resolve correctly to .claude/commands/*

This fixes the 404 errors when loading command documentation in Help page.
2026-01-29 16:29:10 +08:00
catlog22
bbc94fb73a chore: Update ccw-help command index with all 73 commands
- Regenerated by analyze_commands.py
- Now includes all workflow, issue, memory, cli, and general commands
- Updated to version 3.0.0 with 73 commands and 19 agents
- Full index sync with file system definitions
2026-01-29 16:00:29 +08:00
catlog22
f5e435f791 feat: Optimize ccw-help skill with user-prompted update mechanism
- Add auto-update.py script for simple index regeneration
- Update SKILL.md with clear update instructions
- Simplify update mechanism: prompt user on skill execution
- Support both automatic and manual update workflows
- Clean version 2.3.0 metadata in command.json
2026-01-29 15:58:51 +08:00
catlog22
86d5be8288 feat: Enhance CCW help system with new command orchestration and dashboard features 2026-01-29 15:43:07 +08:00
catlog22
9762445876 refactor: Convert skill-generator from Chinese to English and remove emoji icons
- Convert all markdown files from Chinese to English
- Remove all emoji/icon decorations (🔧📋⚙️🏁🔍📚)
- Update all section headers, descriptions, and documentation
- Keep all content logic, structure, code examples unchanged
- Maintain template variables and file paths as-is

Files converted (9 files total):
- SKILL.md: Output structure comments
- templates/skill-md.md: All Chinese descriptions and comments
- specs/reference-docs-spec.md: All section headers and explanations
- phases/01-requirements-discovery.md through 05-validation.md (5 files)
- specs/execution-modes.md, skill-requirements.md, cli-integration.md, scripting-integration.md (4 files)
- templates/sequential-phase.md, autonomous-orchestrator.md, autonomous-action.md, code-analysis-action.md, llm-action.md, script-template.md (6 files)

All 16 files in skill-generator are now fully in English.
2026-01-29 15:42:46 +08:00
catlog22
b791c09476 docs: Add reference-docs-spec and optimize skill-generator for proper document organization
- Create specs/reference-docs-spec.md with comprehensive guidelines for phase-based reference document organization
- Update skill-generator's Mandatory Prerequisites to include new reference-docs-spec
- Refactor skill-md.md template to generate phase-based reference tables with 'When to Use' guidance
- Add generateReferenceTable() function to automatically create structured reference sections
- Replace flat template reference lists with phase-based navigation
- Update skill-generator's own SKILL.md to demonstrate correct reference documentation pattern
- Ensure all generated skills will have clear document usage timing and context
2026-01-29 15:28:21 +08:00
catlog22
26283e7a5a docs: Optimize reference documents with phase-based guidance and usage timing 2026-01-29 15:24:38 +08:00
catlog22
1040459fef docs: Add comprehensive summary of unified-execute-with-file implementation 2026-01-29 15:23:41 +08:00
catlog22
0fe8c18a82 docs: Add comparison guide between Claude and Codex unified-execute versions 2026-01-29 15:22:24 +08:00
catlog22
0086413f95 feat: Add Codex unified-execute-with-file prompt
- Create codex version of unified-execute-with-file command
- Supports universal execution of planning/brainstorm/analysis output
- Coordinates multi-agents with smart dependency management
- Features parallel/sequential execution modes
- Unified event logging as single source of truth (execution-events.md)
- Agent context passing through previous execution history
- Knowledge chain: each agent reads full history of prior executions

Codex-specific adaptations:
- Use $VARIABLE format for argument substitution
- Simplified header configuration (description + argument-hint)
- Plan format agnostic parsing (IMPL_PLAN.md, synthesis.json, conclusions.json, debug recommendations)
- Multi-wave execution orchestration
- Dynamic artifact location handling

Execution flow:
1. Parse and validate plan from $PLAN_PATH
2. Extract and normalize tasks with dependencies
3. Create execution session (.workflow/.execution/{sessionId}/)
4. Group tasks into execution waves (topological sort)
5. Execute waves sequentially, tasks within wave execute in parallel
6. Unified event logging: execution-events.md (SINGLE SOURCE OF TRUTH)
7. Each agent reads previous executions for context
8. Final statistics and completion report
2026-01-29 15:21:40 +08:00
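Step 4 of the execution flow above (grouping tasks into execution waves via topological sort) can be sketched generically; this is an illustration of the described behavior, not the command's actual code:

```javascript
// Group tasks into waves: a task joins a wave once all of its
// dependencies have been placed in an earlier wave.
function groupIntoWaves(tasks) {
  // tasks: [{ id, dependencies: [ids] }]
  const remaining = new Map(tasks.map(t => [t.id, new Set(t.dependencies)]));
  const waves = [];
  while (remaining.size > 0) {
    const wave = [...remaining.entries()]
      .filter(([, deps]) => [...deps].every(d => !remaining.has(d)))
      .map(([id]) => id);
    if (wave.length === 0) throw new Error('Dependency cycle detected');
    waves.push(wave);
    for (const id of wave) remaining.delete(id);
  }
  return waves;
}
```

Waves execute sequentially while tasks within a wave can run in parallel, matching step 5.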
catlog22
8ff698ae73 refactor: Optimize unified-execute-with-file command documentation
- Consolidate Phase 3 (Progress Tracking) from 205+ to 30 lines by merging redundant explanations of execution-events.md format
- Merge error handling logic from separate handleTaskFailure function into executeTask catch block
- Remove duplicate Execution Document Template section (identical to Step 1.2)
- Consolidate Phase 4 (Completion & Summary) from 90+ to 40 lines
- Overall reduction: 1094 → 807 lines (26% reduction) while preserving all technical information

Key improvements:
- Single source of truth for execution state (execution-events.md)
- Clearer knowledge chain explanation between agents
- More concise yet complete Phase documentation
- Unified event logging format is now prominently featured
2026-01-29 15:19:40 +08:00
catlog22
8cdd6a8b5f Add execution and planning agent prompts, specifications, and quality standards
- Created execution agent prompt for issue execution with detailed deliverables and validation criteria.
- Developed planning agent prompt to analyze issues and generate structured solution plans.
- Introduced issue handling specifications outlining the workflow and issue structure.
- Established quality standards for evaluating completeness, consistency, correctness, and clarity of solutions.
- Defined solution schema specification detailing the required structure and validation rules for solutions.
- Documented subagent roles and responsibilities, emphasizing the dual-agent strategy for improved workflow efficiency.
2026-01-29 15:15:42 +08:00
catlog22
b86a8afd8b feat: Add unified execution engine documentation with multi-task coordination and incremental execution 2026-01-29 15:14:56 +08:00
catlog22
53bd5a6d4b feat: Add custom prompts documentation explaining how to create and manage reusable prompts 2026-01-29 11:30:29 +08:00
catlog22
3a7bbe0e42 feat: Optimize Codex prompt commands parameter flexibility
- Enhanced 14 commands with flexible parameter support
- Standardized argument formats across all commands
- Added English parameter descriptions for clarity
- Maintained backward compatibility

Commands optimized:
- analyze-with-file: Added --depth, --max-iterations
- brainstorm-with-file: Added --perspectives, --max-ideas, --focus
- debug-with-file: Added --scope, --focus, --depth
- issue-execute: Unified format, added --skip-tests, --skip-build, --dry-run
- lite-plan-a/b/c: Added depth and execution control flags
- execute: Added --parallel, --filter, --skip-tests
- brainstorm-to-cycle: Unified to --session format, added --launch
- lite-fix: Added --hotfix, --severity, --scope
- clean: Added --focus, --target, --confirm
- lite-execute: Unified --plan format, added execution control
- compact: Added --description, --tags, --force
- issue-new: Complete flexible parameter support

Unchanged (already optimal):
- issue-plan, issue-discover, issue-queue, issue-discover-by-prompt
2026-01-29 11:29:39 +08:00
catlog22
04a84f9893 feat: Simplify issue creation documentation by removing examples and clarifying title 2026-01-29 10:51:42 +08:00
catlog22
11638facf7 feat: Add --to-file option to ccw cli for saving output to files
Adds support for saving CLI execution output directly to files with the following features:
- Support for relative paths: --to-file output.txt
- Support for nested directories: --to-file results/analysis/output.txt (auto-creates directories)
- Support for absolute paths: --to-file /tmp/output.txt or --to-file D:/results/output.txt
- Works in both streaming and non-streaming modes
- Automatically creates parent directories if they don't exist
- Proper error handling with user-friendly messages
- Shows file save location in completion feedback

Implementation details:
- Updated CLI option parser in ccw/src/cli.ts
- Added toFile parameter to CliExecOptions interface
- Implemented file saving logic in execAction() for both streaming and non-streaming modes
- Updated HTTP API endpoint /api/cli/execute to support toFile parameter
- All changes are backward compatible

Testing:
- Tested with relative paths (single and nested directories)
- Tested with absolute paths (Windows and Unix style)
- Tested with streaming mode
- All tests passed successfully
2026-01-29 09:48:30 +08:00
catlog22
4d93ffb06c feat: Add migration handling for Codex old reference format in CLI manager 2026-01-28 23:37:46 +08:00
80 changed files with 15251 additions and 2914 deletions


@@ -413,5 +413,4 @@ function parseMarkdownBody(body) {
 ## Related Commands
 - `/issue:plan` - Plan solution for issue
-- `/issue:plan` - Plan solution for issue

.claude/commands/view.md (new file, 367 lines)

@@ -0,0 +1,367 @@
---
name: ccw view
description: Dashboard - Open CCW workflow dashboard for managing tasks and sessions
category: general
---
# CCW View Command
Open the CCW workflow dashboard for visualizing and managing project tasks, sessions, and workflow execution status.
## Description
`ccw view` launches an interactive web dashboard that provides:
- **Workflow Overview**: Visualize current workflow status and command chain execution
- **Session Management**: View and manage active workflow sessions
- **Task Tracking**: Monitor TODO items and task progress
- **Workspace Switching**: Switch between different project workspaces
- **Real-time Updates**: Live updates of command execution and status
## Usage
```bash
# Open dashboard for current workspace
ccw view
# Specify workspace path
ccw view --path /path/to/workspace
# Custom port (default: 3456)
ccw view --port 3000
# Bind to specific host
ccw view --host 0.0.0.0 --port 3456
# Open without launching browser (the dashboard URL is printed instead)
ccw view --no-browser
```
## Options
| Option | Default | Description |
|--------|---------|-------------|
| `--path <path>` | Current directory | Workspace path to display |
| `--port <port>` | 3456 | Server port for dashboard |
| `--host <host>` | 127.0.0.1 | Server host/bind address |
| `--no-browser` | false | Don't launch browser automatically |
| `-h, --help` | - | Show help message |
## Features
### Dashboard Sections
#### 1. **Workflow Overview**
- Current workflow status
- Command chain visualization (with Minimum Execution Units marked)
- Live progress tracking
- Error alerts
#### 2. **Session Management**
- List active sessions by type (workflow, review, tdd)
- Session details (created time, last activity, session ID)
- Quick actions (resume, pause, complete)
- Session logs/history
#### 3. **Task Tracking**
- TODO list with status indicators
- Progress percentage
- Task grouping by workflow stage
- Quick inline task updates
#### 4. **Workspace Switcher**
- Browse available workspaces
- Switch context with one click
- Recent workspaces list
#### 5. **Command History**
- Recent commands executed
- Execution time and status
- Quick re-run options
### Keyboard Shortcuts
| Shortcut | Action |
|----------|--------|
| `R` | Refresh dashboard |
| `Cmd/Ctrl + J` | Jump to session search |
| `Cmd/Ctrl + K` | Open command palette |
| `?` | Show help |
## Multi-Instance Support
The dashboard supports multiple concurrent instances:
```bash
# Terminal 1: Workspace A on port 3456
ccw view --path ~/projects/workspace-a
# Terminal 2: Workspace B on port 3457
ccw view --path ~/projects/workspace-b --port 3457
# Switching workspaces on the same port
ccw view --path ~/projects/workspace-c # Auto-switches existing server
```
When the server is already running and you execute `ccw view` with a different path:
1. Detects running server on the port
2. Sends workspace switch request
3. Updates dashboard to new workspace
4. Opens browser with updated context
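From a Node 18+ client, the switch request in step 2 might look like the sketch below; the endpoint path and payload shape are assumptions inferred from the "switch-path request" wording, not a documented API:

```javascript
// Hypothetical client for the workspace switch-path handshake.
// The /api/switch-path endpoint name and body shape are assumptions.
async function switchWorkspace(path, port = 3456) {
  const res = await fetch(`http://localhost:${port}/api/switch-path`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ path }),
  });
  return res.ok;
}
```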
## Server Lifecycle
### Startup
```
ccw view
├─ Check if server running on port
│ ├─ If yes: Send switch-path request
│ └─ If no: Start new server
├─ Launch browser (unless --no-browser)
└─ Display dashboard URL
```
### Running
The dashboard server continues running until:
- User explicitly stops it (Ctrl+C)
- All connections close after timeout
- System shutdown
### Multiple Workspaces
Switching to a different workspace keeps the same server instance:
```
Server State Before: workspace-a on port 3456
ccw view --path ~/projects/workspace-b
Server State After: workspace-b on port 3456 (same instance)
```
## Environment Variables
```bash
# Set default port
export CCW_VIEW_PORT=4000
ccw view # Uses port 4000
# Set default host
export CCW_VIEW_HOST=localhost
ccw view --port 3456 # Binds to localhost:3456
# Disable browser launch by default
export CCW_VIEW_NO_BROWSER=true
ccw view # Won't auto-launch browser
```
## Integration with CCW Workflows
The dashboard is fully integrated with CCW commands:
### Viewing Workflow Progress
```bash
# Start a workflow
ccw "Add user authentication"
# In another terminal, view progress
ccw view # Shows execution progress in real-time
```
### Session Management from Dashboard
- Start new session: Click "New Session" button
- Resume paused session: Sessions list → Resume button
- View session logs: Click session name
- Complete session: Sessions list → Complete button
### Real-time Command Execution
- View active command chain execution
- Watch command transition through Minimum Execution Units
- See error alerts and recovery options
- View command output logs
## Troubleshooting
### Port Already in Use
```bash
# Use different port
ccw view --port 3457
# Or kill existing server
lsof -i :3456 # Find process
kill -9 <pid> # Kill it
ccw view # Start fresh
```
### Dashboard Not Loading
```bash
# Try without browser
ccw view --no-browser
# Check server logs
tail -f ~/.ccw/logs/dashboard.log
# Verify network access
curl http://localhost:3456/api/health
```
### Workspace Path Not Found
```bash
# Use full absolute path
ccw view --path "$(pwd)"
# Or specify explicit path
ccw view --path ~/projects/my-project
```
## Related Commands
- **`/ccw`** - Main workflow orchestrator
- **`/workflow:session:list`** - List workflow sessions
- **`/workflow:session:resume`** - Resume paused session
- **`/memory:compact`** - Compact session memory for dashboard display
## Examples
### Basic Dashboard View
```bash
cd ~/projects/my-app
ccw view
# → Launches http://localhost:3456 in browser
```
### Network-Accessible Dashboard
```bash
# Allow remote access
ccw view --host 0.0.0.0 --port 3000
# → Dashboard accessible at http://machine-ip:3000
```
### Multiple Workspaces on Different Ports
```bash
# Terminal 1: Main project
ccw view --path ~/projects/main --port 3456
# Terminal 2: Side project
ccw view --path ~/projects/side --port 3457
# View both simultaneously
# → http://localhost:3456 (main)
# → http://localhost:3457 (side)
```
### Headless Dashboard
```bash
# Run dashboard without browser
ccw view --port 3000 --no-browser
echo "Dashboard available at http://localhost:3000"
# Share URL with team
# Can be proxied through nginx/port forwarding
```
### Environment-Based Configuration
```bash
# Script for CI/CD
export CCW_VIEW_HOST=0.0.0.0
export CCW_VIEW_PORT=8080
ccw view --path /workspace
# → Dashboard accessible on port 8080 to all interfaces
```
## Dashboard Pages
### Overview Page (`/`)
- Current workflow status
- Active sessions summary
- Recent commands
- System health indicators
### Sessions Page (`/sessions`)
- All sessions (grouped by type)
- Session details and metadata
- Session logs viewer
- Quick actions (resume/complete)
### Tasks Page (`/tasks`)
- Current TODO items
- Progress tracking
- Inline task editing
- Workflow history
### Workspace Page (`/workspace`)
- Current workspace info
- Available workspaces
- Workspace switcher
- Workspace settings
### Settings Page (`/settings`)
- Port configuration
- Theme preferences
- Auto-refresh settings
- Export settings
## Server Health Monitoring
The dashboard includes health monitoring:
```bash
# Check health endpoint
curl http://localhost:3456/api/health
# → { "status": "ok", "uptime": 12345 }
# Monitor metrics
curl http://localhost:3456/api/metrics
# → { "sessions": 3, "tasks": 15, "lastUpdate": "2025-01-29T10:30:00Z" }
```
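A script can wait for the server to come up before using the dashboard; a minimal sketch for Node 18+ (global `fetch`), assuming only the `/api/health` endpoint documented above:

```javascript
// Poll /api/health until the dashboard server responds or retries run out.
async function waitForDashboard(port = 3456, retries = 10, delayMs = 500) {
  for (let i = 0; i < retries; i++) {
    try {
      const res = await fetch(`http://localhost:${port}/api/health`);
      if (res.ok) return await res.json(); // e.g. { status: "ok", uptime: ... }
    } catch {
      // Server not up yet; fall through to retry.
    }
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  throw new Error(`Dashboard not reachable on port ${port}`);
}
```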
## Advanced Usage
### Custom Port with Dynamic Discovery
```bash
# Find next available port (find-available-port is a placeholder for your own helper)
available_port=$(find-available-port 3456)
ccw view --port $available_port
# Display in CI/CD
echo "Dashboard: http://localhost:$available_port"
```
### Dashboard Behind Proxy
```bash
# Configure nginx reverse proxy
# Proxy http://proxy.example.com/dashboard → http://localhost:3456
ccw view --host 127.0.0.1 --port 3456
# Access via proxy
# http://proxy.example.com/dashboard
```
### Session Export from Dashboard
- View → Sessions → Export JSON
- Exports session metadata and progress
- Useful for record-keeping and reporting
## See Also
- **CCW Commands**: `/ccw` - Auto workflow orchestration
- **Session Management**: `/workflow:session:start`, `/workflow:session:list`
- **Task Tracking**: `TodoWrite` tool for programmatic task management
- **Workflow Status**: `/workflow:status` for CLI-based status view


@@ -0,0 +1,807 @@
---
name: merge-plans-with-file
description: Merge multiple planning/brainstorm/analysis outputs, resolve conflicts, and synthesize unified plan. Designed for multi-team input aggregation and final plan crystallization
argument-hint: "[-y|--yes] [-r|--rule consensus|priority|hierarchy] \"plan or topic name\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-resolve conflicts using specified rule (consensus/priority/hierarchy), minimal user prompts.
# Workflow Merge-Plans-With-File Command (/workflow:merge-plans-with-file)
## Overview
Plan aggregation and conflict resolution workflow. Takes multiple planning artifacts (brainstorm conclusions, analysis recommendations, quick-plans, implementation plans) and synthesizes them into a unified, conflict-resolved execution plan.
**Core workflow**: Load Sources → Parse Plans → Conflict Analysis → Arbitration → Unified Plan
**Key features**:
- **Multi-Source Support**: Brainstorm, analysis, quick-plan, IMPL_PLAN, task JSONs
- **Parallel Conflict Detection**: Identify all contradictions across input plans
- **Conflict Resolution**: Consensus, priority-based, or hierarchical resolution modes
- **Unified Synthesis**: Single authoritative plan from multiple perspectives
- **Decision Tracking**: Full audit trail of conflicts and resolutions
- **Resumable**: Save intermediate states, refine resolutions
## Usage
```bash
/workflow:merge-plans-with-file [FLAGS] <PLAN_NAME_OR_PATTERN>
# Flags
-y, --yes Auto-resolve conflicts using rule, skip confirmations
-r, --rule <rule> Conflict resolution rule: consensus (default) | priority | hierarchy
-o, --output <path> Output directory (default: .workflow/.merged/{name})
# Arguments
<plan-name-or-pattern> Plan name or glob pattern to identify input files/sessions
Examples: "auth-module", "*.analysis-*.json", "PLAN-*"
# Examples
/workflow:merge-plans-with-file "authentication" # Auto-detect all auth-related plans
/workflow:merge-plans-with-file -y -r priority "payment-system" # Auto-resolve with priority rule
/workflow:merge-plans-with-file -r hierarchy "feature-complete" # Use hierarchy rule (requires user ranking)
```
## Execution Process
```
Discovery & Loading:
├─ Search for planning artifacts matching pattern
├─ Load all synthesis.json, conclusions.json, IMPL_PLAN.md
├─ Parse each into normalized task/plan structure
└─ Validate data completeness
Session Initialization:
├─ Create .workflow/.merged/{sessionId}/
├─ Initialize merge.md with plan summary
├─ Index all source plans
└─ Extract planning metadata and constraints
Phase 1: Plan Normalization
├─ Convert all formats to common task representation
├─ Extract tasks, dependencies, effort, risks
├─ Identify plan scope and boundaries
├─ Validate no duplicate tasks
└─ Aggregate recommendations from each plan
Phase 2: Conflict Detection (Parallel)
├─ Architecture conflicts: different design approaches
├─ Task conflicts: overlapping responsibilities or different priorities
├─ Effort conflicts: vastly different estimates
├─ Risk assessment conflicts: different risk levels
├─ Scope conflicts: different feature inclusions
└─ Generate conflict matrix with severity levels
Phase 3: Consensus Building / Arbitration
├─ For each conflict, analyze source rationale
├─ Apply resolution rule (consensus/priority/hierarchy)
├─ Escalate unresolvable conflicts to user (unless --yes)
├─ Document decision rationale
└─ Generate resolutions.json
Phase 4: Plan Synthesis
├─ Merge task lists (remove duplicates, combine insights)
├─ Integrate dependencies from all sources
├─ Consolidate effort and risk estimates
├─ Generate unified execution sequence
├─ Create final unified plan
└─ Output ready for execution
Output:
├─ .workflow/.merged/{sessionId}/merge.md (merge process & decisions)
├─ .workflow/.merged/{sessionId}/source-index.json (input sources)
├─ .workflow/.merged/{sessionId}/conflicts.json (conflict matrix)
├─ .workflow/.merged/{sessionId}/resolutions.json (how conflicts were resolved)
├─ .workflow/.merged/{sessionId}/unified-plan.json (final merged plan)
└─ .workflow/.merged/{sessionId}/unified-plan.md (execution-ready markdown)
```
## Implementation
### Phase 1: Plan Discovery & Loading
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
// Parse arguments
const planPattern = "$PLAN_NAME_OR_PATTERN"
const resolutionRule = $ARGUMENTS.match(/--rule\s+(\w+)/)?.[1] || 'consensus'
const isAutoMode = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
// Generate session ID
const mergeSlug = planPattern.toLowerCase()
.replace(/[*?]/g, '-')
.replace(/[^a-z0-9\u4e00-\u9fa5-]+/g, '-')
.substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `MERGE-${mergeSlug}-${dateStr}`
const sessionFolder = `.workflow/.merged/${sessionId}`
bash(`mkdir -p ${sessionFolder}`)
// Discover all relevant planning artifacts
const discoveryPaths = [
`.workflow/.brainstorm/*/${planPattern}*/synthesis.json`,
`.workflow/.analysis/*/${planPattern}*/conclusions.json`,
`.workflow/.planning/*/${planPattern}*/synthesis.json`,
`.workflow/.plan/${planPattern}*IMPL_PLAN.md`,
`.workflow/*/${planPattern}*.json`
]
const sourcePlans = []
for (const pattern of discoveryPaths) {
const matches = glob(pattern)
for (const path of matches) {
try {
const content = Read(path)
const plan = parsePlanFile(path, content)
if (plan && plan.tasks?.length > 0) {
sourcePlans.push({
source_path: path,
source_type: identifySourceType(path),
plan: plan,
loaded_at: getUtc8ISOString()
})
}
} catch (e) {
console.warn(`Failed to load plan from ${path}: ${e.message}`)
}
}
}
if (sourcePlans.length === 0) {
console.error(`
## Error: No Plans Found
Pattern: ${planPattern}
Searched locations:
${discoveryPaths.join('\n')}
Available plans in .workflow/:
`)
bash(`find .workflow -name "*.json" -o -name "*PLAN.md" | head -20`)
return { status: 'error', message: 'No plans found' }
}
console.log(`
## Plans Discovered
Total: ${sourcePlans.length}
${sourcePlans.map(sp => `- ${sp.source_type}: ${sp.source_path}`).join('\n')}
`)
```
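The helpers `parsePlanFile` and `identifySourceType` are referenced above but not defined in this excerpt. A minimal sketch of the latter, inferred from the discovery paths listed in the code (the mapping itself is an assumption):

```javascript
// Map an artifact path to a source type based on where it was discovered.
// Inferred from the discoveryPaths patterns; not the command's actual code.
function identifySourceType(path) {
  if (path.includes('.brainstorm/')) return 'brainstorm';
  if (path.includes('.analysis/')) return 'analysis';
  if (path.includes('.planning/')) return 'quick-plan';
  if (path.endsWith('IMPL_PLAN.md')) return 'impl-plan';
  return 'generic-json';
}
```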
---
### Phase 2: Plan Normalization
```javascript
// Normalize all plans to common format
const normalizedPlans = sourcePlans.map((sourcePlan, idx) => {
const plan = sourcePlan.plan
const tasks = plan.tasks || []
return {
index: idx,
source: sourcePlan.source_path,
source_type: sourcePlan.source_type,
metadata: {
title: plan.title || `Plan ${idx + 1}`,
topic: plan.topic || plan.planning_topic || 'unknown',
timestamp: plan.completed || plan.timestamp || sourcePlan.loaded_at,
source_ideas: plan.top_ideas?.length || 0,
complexity: plan.complexity_level || 'unknown'
},
// Normalized tasks
tasks: tasks.map(task => ({
id: task.id || `T${idx}-${task.title?.substring(0, 20)}`,
title: task.title || task.content,
description: task.description || '',
type: task.type || inferType(task),
priority: task.priority || 'normal',
// Effort estimation
effort: {
estimated: task.estimated_duration || task.effort_estimate || 'unknown',
from_plan: idx
},
// Risk assessment
risk: {
level: task.risk_level || 'medium',
from_plan: idx
},
// Dependencies
dependencies: task.dependencies || [],
// Source tracking
source_plan_index: idx,
original_id: task.id,
// Quality tracking
success_criteria: task.success_criteria || [],
challenges: task.challenges || []
}))
}
})
// Save source index
const sourceIndex = {
session_id: sessionId,
merge_timestamp: getUtc8ISOString(),
pattern: planPattern,
total_source_plans: sourcePlans.length,
sources: normalizedPlans.map(p => ({
index: p.index,
source_path: p.source,
source_type: p.source_type,
topic: p.metadata.topic,
task_count: p.tasks.length
}))
}
Write(`${sessionFolder}/source-index.json`, JSON.stringify(sourceIndex, null, 2))
```
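`inferType` is used in the normalization above but not defined in this excerpt; one plausible sketch guesses a task type from keywords in its title (the keyword mapping is an assumption):

```javascript
// Guess a task type from its title/content when none is provided.
function inferType(task) {
  const text = String(task.title || task.content || '').toLowerCase();
  if (text.includes('test') || text.includes('verif')) return 'test';
  if (text.includes('fix') || text.includes('bug')) return 'fix';
  if (text.includes('doc')) return 'docs';
  return 'feature';
}
```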
---
### Phase 3: Conflict Detection
```javascript
// Detect conflicts across plans
const conflictDetector = {
// Architecture conflicts
architectureConflicts: [],
// Task conflicts (duplicates, different scope)
taskConflicts: [],
// Effort conflicts
effortConflicts: [],
// Risk assessment conflicts
riskConflicts: [],
// Scope conflicts
scopeConflicts: [],
// Priority conflicts
priorityConflicts: []
}
// Algorithm 1: Detect similar tasks across plans
const allTasks = normalizedPlans.flatMap(p => p.tasks)
const taskGroups = groupSimilarTasks(allTasks)
for (const group of taskGroups) {
if (group.tasks.length > 1) {
// Same task appears in multiple plans
const efforts = group.tasks.map(t => t.effort.estimated)
const effortVariance = calculateVariance(efforts)
if (effortVariance > 0.5) {
// Significant difference in effort estimates
conflictDetector.effortConflicts.push({
task_group: group.title,
conflicting_tasks: group.tasks.map((t, i) => ({
id: t.id,
from_plan: t.source_plan_index,
effort: t.effort.estimated
})),
variance: effortVariance,
severity: 'high'
})
}
// Check for scope differences
const scopeDifferences = analyzeScopeDifferences(group.tasks)
if (scopeDifferences.length > 0) {
conflictDetector.taskConflicts.push({
task_group: group.title,
scope_differences: scopeDifferences,
severity: 'medium'
})
}
}
}
// Algorithm 2: Architecture conflicts
const architectures = normalizedPlans.map(p => p.metadata.complexity)
if (new Set(architectures).size > 1) {
conflictDetector.architectureConflicts.push({
different_approaches: true,
complexity_levels: architectures.map((a, i) => ({
plan: i,
complexity: a
})),
severity: 'high'
})
}
// Algorithm 3: Risk assessment conflicts
const riskLevels = allTasks.map(t => ({ task: t.id, risk: t.risk.level }))
const taskRisks = {}
for (const tr of riskLevels) {
if (!taskRisks[tr.task]) taskRisks[tr.task] = []
taskRisks[tr.task].push(tr.risk)
}
for (const [task, risks] of Object.entries(taskRisks)) {
if (new Set(risks).size > 1) {
conflictDetector.riskConflicts.push({
task: task,
conflicting_risk_levels: risks,
severity: 'medium'
})
}
}
// Save conflicts
Write(`${sessionFolder}/conflicts.json`, JSON.stringify(conflictDetector, null, 2))
```
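The `calculateVariance` helper above is not defined in this document; a minimal sketch, assuming it returns the squared coefficient of variation so the `0.5` threshold is scale-independent (works the same for hours or days):

```javascript
// Hypothetical helper for the effort-conflict check above.
// Returns variance normalized by the squared mean, so the
// 0.5 threshold does not depend on the estimate unit.
function calculateVariance(efforts) {
  if (efforts.length < 2) return 0
  const mean = efforts.reduce((a, b) => a + b, 0) / efforts.length
  if (mean === 0) return 0
  const variance = efforts
    .map(e => (e - mean) ** 2)
    .reduce((a, b) => a + b, 0) / efforts.length
  return variance / (mean * mean)
}

// Estimates of 4h, 8h, and 24h across three plans
console.log(calculateVariance([4, 8, 24]))  // ≈ 0.52 → flagged as high-severity conflict
```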
---
### Phase 4: Conflict Resolution
```javascript
// Resolve conflicts based on selected rule
const resolutions = {
resolution_rule: resolutionRule,
timestamp: getUtc8ISOString(),
effort_resolutions: [],
architecture_resolutions: [],
risk_resolutions: [],
scope_resolutions: [],
priority_resolutions: []
}
// Resolution Strategy 1: Consensus
if (resolutionRule === 'consensus') {
for (const conflict of conflictDetector.effortConflicts) {
// Use median or average
const efforts = conflict.conflicting_tasks.map(t => parseEffort(t.effort))
const resolved_effort = calculateMedian(efforts)
resolutions.effort_resolutions.push({
conflict: conflict.task_group,
original_estimates: efforts,
resolved_estimate: resolved_effort,
method: 'consensus-median',
rationale: 'Used median of all estimates'
})
}
}
// Resolution Strategy 2: Priority-Based
else if (resolutionRule === 'priority') {
// Use the estimate from the highest-priority source (the first plan listed)
for (const conflict of conflictDetector.effortConflicts) {
const highestPriority = conflict.conflicting_tasks[0] // First plan has priority
resolutions.effort_resolutions.push({
conflict: conflict.task_group,
conflicting_estimates: conflict.conflicting_tasks.map(t => t.effort),
resolved_estimate: highestPriority.effort,
selected_from_plan: highestPriority.from_plan,
method: 'priority-based',
rationale: `Selected estimate from plan ${highestPriority.from_plan} (highest priority)`
})
}
}
// Resolution Strategy 3: Hierarchy (requires user ranking)
else if (resolutionRule === 'hierarchy') {
if (!isAutoMode) {
// Ask user to rank plan importance
const planRanking = AskUserQuestion({
questions: [{
question: "Rank these plans by importance (most important first):",
header: "Plan Ranking",
multiSelect: false,
options: normalizedPlans.slice(0, 5).map(p => ({
label: `Plan ${p.index}: ${p.metadata.title.substring(0, 40)}`,
description: `${p.tasks.length} tasks, complexity: ${p.metadata.complexity}`
}))
}]
})
// Apply hierarchy
const hierarchy = extractHierarchy(planRanking)
for (const conflict of conflictDetector.effortConflicts) {
const topPriorityTask = conflict.conflicting_tasks
.sort((a, b) => hierarchy[a.from_plan] - hierarchy[b.from_plan])[0]
resolutions.effort_resolutions.push({
conflict: conflict.task_group,
resolved_estimate: topPriorityTask.effort,
selected_from_plan: topPriorityTask.from_plan,
method: 'hierarchy-based',
rationale: `Selected from highest-ranked plan`
})
}
}
}
Write(`${sessionFolder}/resolutions.json`, JSON.stringify(resolutions, null, 2))
```
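The consensus strategy relies on `parseEffort` and `calculateMedian`, which are not shown here. A sketch under the assumption that effort estimates look like `"4h"` or `"2d"` (with 1d = 8h); adjust the parsing to your actual effort schema:

```javascript
// Hypothetical parseEffort: converts "4h" / "2d" style strings to hours.
function parseEffort(effort) {
  const m = String(effort).trim().match(/^(\d+(?:\.\d+)?)\s*([hd])$/i)
  if (!m) return Number(effort) || 0  // fall back to bare numbers
  const value = parseFloat(m[1])
  return m[2].toLowerCase() === 'd' ? value * 8 : value
}

// Median of the parsed estimates (average of middle two for even counts).
function calculateMedian(values) {
  const sorted = [...values].sort((a, b) => a - b)
  const mid = Math.floor(sorted.length / 2)
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2
}

console.log(calculateMedian(['4h', '1d', '16h'].map(parseEffort)))  // 8
```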
---
### Phase 5: Plan Synthesis
```javascript
// Merge all tasks into unified plan
const unifiedTasks = []
const processedTaskIds = new Set()
for (const task of allTasks) {
const taskKey = generateTaskKey(task)
if (processedTaskIds.has(taskKey)) {
// Task already added, skip
continue
}
processedTaskIds.add(taskKey)
// Apply resolution if this task has conflicts.
// Note: resolutions store the conflict's task_group as the key, so
// generateTaskKey must produce the same key groupSimilarTasks used.
let resolvedTask = { ...task }
const effortResolution = resolutions.effort_resolutions
.find(r => r.conflict === taskKey)
if (effortResolution) {
resolvedTask.effort.estimated = effortResolution.resolved_estimate
resolvedTask.effort.resolution_method = effortResolution.method
}
unifiedTasks.push({
id: taskKey,
title: task.title,
description: task.description,
type: task.type,
priority: task.priority,
effort: resolvedTask.effort,
risk: task.risk,
dependencies: task.dependencies,
success_criteria: [...new Set([
...task.success_criteria,
...findRelatedTasks(task, allTasks)
.flatMap(t => t.success_criteria)
])],
challenges: [...new Set([
...task.challenges,
...findRelatedTasks(task, allTasks)
.flatMap(t => t.challenges)
])],
source_plans: [
...new Set(allTasks
.filter(t => generateTaskKey(t) === taskKey)
.map(t => t.source_plan_index))
]
})
}
// Generate execution sequence
const executionSequence = topologicalSort(unifiedTasks)
const criticalPath = identifyCriticalPath(unifiedTasks, executionSequence)
// Final unified plan
const unifiedPlan = {
session_id: sessionId,
merge_timestamp: getUtc8ISOString(),
summary: {
total_source_plans: normalizedPlans.length,
original_tasks_total: allTasks.length,
merged_tasks: unifiedTasks.length,
conflicts_resolved: Object.values(resolutions).filter(Array.isArray).flat().length,
resolution_rule: resolutionRule
},
merged_metadata: {
topics: [...new Set(normalizedPlans.map(p => p.metadata.topic))],
average_complexity: calculateAverage(normalizedPlans.map(p => parseComplexity(p.metadata.complexity))),
combined_scope: estimateScope(unifiedTasks)
},
tasks: unifiedTasks,
execution_sequence: executionSequence,
critical_path: criticalPath,
risks: aggregateRisks(unifiedTasks),
success_criteria: aggregateSuccessCriteria(unifiedTasks),
audit_trail: {
source_plans: normalizedPlans.length,
conflicts_detected: Object.values(conflictDetector).flat().length,
conflicts_resolved: Object.values(resolutions).filter(Array.isArray).flat().length,
resolution_method: resolutionRule
}
}
Write(`${sessionFolder}/unified-plan.json`, JSON.stringify(unifiedPlan, null, 2))
```
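`topologicalSort` is referenced above but not defined; a minimal sketch using Kahn's algorithm, assuming each task's `dependencies` lists ids of other tasks in the same array (unknown ids are ignored) and throwing on cycles, which matches the circular-dependency case in the Error Handling table:

```javascript
// Kahn's algorithm over the unified task list.
function topologicalSort(tasks) {
  const ids = new Set(tasks.map(t => t.id))
  const indegree = new Map(tasks.map(t => [t.id, 0]))
  const dependents = new Map(tasks.map(t => [t.id, []]))
  for (const t of tasks) {
    for (const dep of t.dependencies.filter(d => ids.has(d))) {
      indegree.set(t.id, indegree.get(t.id) + 1)
      dependents.get(dep).push(t.id)
    }
  }
  const queue = tasks.filter(t => indegree.get(t.id) === 0).map(t => t.id)
  const order = []
  while (queue.length) {
    const id = queue.shift()
    order.push(id)
    for (const next of dependents.get(id)) {
      indegree.set(next, indegree.get(next) - 1)
      if (indegree.get(next) === 0) queue.push(next)
    }
  }
  // Leftover tasks mean a cycle — surface it instead of silently dropping tasks
  if (order.length !== tasks.length) throw new Error('Circular dependencies detected')
  return order
}

const order = topologicalSort([
  { id: 'deploy', dependencies: ['api', 'ui'] },
  { id: 'api', dependencies: ['schema'] },
  { id: 'ui', dependencies: ['api'] },
  { id: 'schema', dependencies: [] }
])
console.log(order)  // ['schema', 'api', 'ui', 'deploy']
```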
---
### Phase 6: Generate Execution Plan
```markdown
# Merged Planning Session
**Session ID**: ${sessionId}
**Pattern**: ${planPattern}
**Created**: ${getUtc8ISOString()}
---
## Merge Summary
**Source Plans**: ${unifiedPlan.summary.total_source_plans}
**Original Tasks**: ${unifiedPlan.summary.original_tasks_total}
**Merged Tasks**: ${unifiedPlan.summary.merged_tasks}
**Tasks Deduplicated**: ${unifiedPlan.summary.original_tasks_total - unifiedPlan.summary.merged_tasks}
**Conflicts Resolved**: ${unifiedPlan.summary.conflicts_resolved}
**Resolution Method**: ${unifiedPlan.summary.resolution_rule}
---
## Merged Plan Overview
**Topics**: ${unifiedPlan.merged_metadata.topics.join(', ')}
**Combined Complexity**: ${unifiedPlan.merged_metadata.average_complexity}
**Total Scope**: ${unifiedPlan.merged_metadata.combined_scope}
---
## Unified Task List
${unifiedPlan.tasks.map((task, i) => `
${i+1}. **${task.id}: ${task.title}**
- Type: ${task.type}
- Effort: ${task.effort.estimated}
- Risk: ${task.risk.level}
- Source Plans: ${task.source_plans.join(', ')}
- ${task.description}
`).join('\n')}
---
## Execution Sequence
**Critical Path**: ${unifiedPlan.critical_path.join(' → ')}
**Execution Order**:
${unifiedPlan.execution_sequence.map((id, i) => `${i+1}. ${id}`).join('\n')}
---
## Conflict Resolution Report
**Total Conflicts**: ${unifiedPlan.summary.conflicts_resolved}
**Resolved Conflicts**:
${Object.entries(resolutions).filter(([, items]) => Array.isArray(items)).flatMap(([key, items]) =>
  items.slice(0, 3).map(item => `
- ${key.replace(/_/g, ' ')}: ${item.rationale || item.method}
`)
).join('\n')}
**Full Report**: See \`conflicts.json\` and \`resolutions.json\`
---
## Risks & Considerations
**Aggregated Risks**:
${unifiedPlan.risks.slice(0, 5).map(r => `- **${r.title}**: ${r.mitigation}`).join('\n')}
**Combined Success Criteria**:
${unifiedPlan.success_criteria.slice(0, 5).map(c => `- ${c}`).join('\n')}
---
## Next Steps
### Option 1: Direct Execution
Execute merged plan with unified-execute-with-file:
\`\`\`
/workflow:unified-execute-with-file -p ${sessionFolder}/unified-plan.json
\`\`\`
### Option 2: Detailed Planning
Create detailed IMPL_PLAN from merged plan:
\`\`\`
/workflow:plan "Based on merged plan from ${planPattern}"
\`\`\`
### Option 3: Review Conflicts
Review detailed conflict analysis:
\`\`\`
cat ${sessionFolder}/resolutions.json
\`\`\`
---
## Artifacts
- **source-index.json** - All input plans and sources
- **conflicts.json** - Conflict detection results
- **resolutions.json** - How each conflict was resolved
- **unified-plan.json** - Merged plan data structure (for execution)
- **unified-plan.md** - This document (human-readable)
```
---
## Session Folder Structure
```
.workflow/.merged/{sessionId}/
├── merge.md # Merge process and decisions
├── source-index.json # All input plan sources
├── conflicts.json # Detected conflicts
├── resolutions.json # Conflict resolutions applied
├── unified-plan.json # Merged plan (machine-parseable, for execution)
└── unified-plan.md # Execution-ready plan (human-readable)
```
---
## Resolution Rules
### Rule 1: Consensus (default)
- Use median or average of conflicting estimates
- Good for: Multiple similar perspectives
- Tradeoff: May miss important minority viewpoints
### Rule 2: Priority-Based
- The first plan has the highest priority; subsequent plans serve as fallbacks
- Good for: Clear ranking of plan sources
- Tradeoff: Discards valuable alternative perspectives
### Rule 3: Hierarchy
- User explicitly ranks importance of each plan
- Good for: Mixed-source plans (engineering + product + leadership)
- Tradeoff: Requires user input
---
## Input Format Support
| Source Type | Detection | Parsing | Notes |
|-------------|-----------|---------|-------|
| **Brainstorm** | `.brainstorm/*/synthesis.json` | Top ideas → tasks | Ideas converted to work items |
| **Analysis** | `.analysis/*/conclusions.json` | Recommendations → tasks | Recommendations prioritized |
| **Quick-Plan** | `.planning/*/synthesis.json` | Direct task list | Already normalized |
| **IMPL_PLAN** | `*IMPL_PLAN.md` | Markdown → tasks | Parsed from markdown structure |
| **Task JSON** | `.json` with `tasks` key | Direct mapping | Requires standard schema |
---
## Error Handling
| Situation | Action |
|-----------|--------|
| No plans found | Suggest search terms, list available plans |
| Incompatible formats | Skip unsupported format, continue with others |
| Circular dependencies | Alert user, suggest manual review |
| Unresolvable conflicts | Require user decision (unless `--yes` plus an explicit resolution rule) |
| Contradictory recommendations | Document both options for user consideration |
---
## Usage Patterns
### Pattern 1: Merge Multiple Brainstorms
```bash
/workflow:merge-plans-with-file "authentication" -y -r consensus
# → Finds all brainstorm sessions with "auth"
# → Merges top ideas into unified task list
# → Uses consensus method for conflicts
```
### Pattern 2: Synthesize Team Input
```bash
/workflow:merge-plans-with-file "payment-integration" -r hierarchy
# → Loads plans from different team members
# → Asks for ranking by importance
# → Applies hierarchy-based conflict resolution
```
### Pattern 3: Bridge Planning Phases
```bash
/workflow:merge-plans-with-file "user-auth" -f analysis
# → Takes analysis conclusions
# → Merges with existing quick-plans
# → Produces execution-ready plan
```
---
## Advanced: Custom Conflict Resolution
For complex conflict scenarios, create custom resolution script:
```
.workflow/.merged/{sessionId}/
└── custom-resolutions.js (optional)
- Define custom conflict resolution logic
- Applied after automatic resolution
- Override specific decisions
```
---
## Best Practices
1. **Before merging**:
- Ensure all source plans have same quality level
- Verify plans address same scope/topic
- Document any special considerations
2. **During merging**:
- Review conflict matrix (conflicts.json)
- Understand resolution rationale (resolutions.json)
- Challenge assumptions if results seem odd
3. **After merging**:
- Validate unified plan makes sense
- Review critical path
- Ensure no important details lost
- Execute or iterate if needed
---
## Integration with Other Workflows
```
Multiple Brainstorms / Analyses
├─ brainstorm-with-file (session 1)
├─ brainstorm-with-file (session 2)
├─ analyze-with-file (session 3)
merge-plans-with-file ◄──── This workflow
unified-plan.json
├─ /workflow:unified-execute-with-file (direct execution)
├─ /workflow:plan (detailed planning)
└─ /workflow:quick-plan-with-file (refinement)
```
---
## Comparison: When to Use Which Merge Rule
| Rule | Use When | Pros | Cons |
|------|----------|------|------|
| **Consensus** | Similar-quality inputs | Fair, balanced | May miss extremes |
| **Priority** | Clear hierarchy | Simple, predictable | May bias to first input |
| **Hierarchy** | Mixed stakeholders | Respects importance | Requires user ranking |
---
**Ready to execute**: Run `/workflow:merge-plans-with-file` to start merging plans!

---
name: quick-plan-with-file
description: Multi-agent rapid planning with minimal documentation, conflict resolution, and actionable synthesis. Designed as a lightweight planning supplement between brainstorm and full implementation planning
argument-hint: "[-y|--yes] [-c|--continue] [-f|--from <type>] \"planning topic or task description\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm planning decisions, use aggressive parallelization, minimal user interaction.
# Workflow Quick-Plan-With-File Command (/workflow:quick-plan-with-file)
## Overview
Multi-agent rapid planning workflow with **minimal documentation overhead**. Coordinates parallel agent analysis, synthesizes conflicting perspectives into actionable decisions, and generates a lightweight implementation-ready plan.
**Core workflow**: Parse Input → Parallel Analysis → Conflict Resolution → Plan Synthesis → Output
**Key features**:
- **Plan Format Agnostic**: Consumes brainstorm conclusions, analysis recommendations, or raw task descriptions
- **Minimal Docs**: Single `plan.md` (no lengthy brainstorm.md or discussion.md)
- **Parallel Multi-Agent**: 3-4 concurrent agent perspectives (architecture, implementation, validation, risk)
- **Conflict Resolution**: Automatic conflict detection and resolution via synthesis agent
- **Actionable Output**: Direct task breakdown ready for execution
- **Session Resumable**: Continue if interrupted, checkpoint at each phase
## Usage
```bash
/workflow:quick-plan-with-file [FLAGS] <PLANNING_TOPIC>
# Flags
-y, --yes Auto-confirm decisions, use defaults
-c, --continue Continue existing session (auto-detected)
-f, --from <type> Input source type: brainstorm|analysis|task|raw
# Arguments
<planning-topic> Planning topic, task, or reference to planning artifact
# Examples
/workflow:quick-plan-with-file "Implement a distributed cache layer with Redis and in-memory backends"
/workflow:quick-plan-with-file --continue "cache layer planning" # Continue
/workflow:quick-plan-with-file -y -f analysis "Generate an implementation plan from analysis conclusions" # Auto mode
/workflow:quick-plan-with-file --from brainstorm BS-rate-limiting-2025-01-28 # From artifact
```
## Execution Process
```
Input Validation & Loading:
├─ Parse input (topic | artifact reference)
├─ Load artifact if referenced (synthesis.json | conclusions.json | etc.)
├─ Extract key constraints and requirements
└─ Initialize session folder and plan.md
Session Initialization:
├─ Create .workflow/.planning/{sessionId}/
├─ Initialize plan.md with input summary
├─ Parse existing output (if --from artifact)
└─ Define planning dimensions & focus areas
Phase 1: Parallel Multi-Agent Analysis (concurrent)
├─ Agent 1 (Architecture): High-level design & decomposition
├─ Agent 2 (Implementation): Technical approach & feasibility
├─ Agent 3 (Validation): Risk analysis & edge cases
├─ Agent 4 (Decision): Recommendations & tradeoffs
└─ Aggregate findings into perspectives.json
Phase 2: Conflict Detection & Resolution
├─ Analyze agent perspectives for contradictions
├─ Identify critical decision points
├─ Generate synthesis via arbitration agent
├─ Document conflicts and resolutions
└─ Update plan.md with decisive recommendations
Phase 3: Plan Synthesis
├─ Consolidate all insights
├─ Generate actionable task breakdown
├─ Create execution strategy
├─ Document assumptions & risks
└─ Generate synthesis.md with ready-to-execute tasks
Output:
├─ .workflow/.planning/{sessionId}/plan.md (minimal, actionable)
├─ .workflow/.planning/{sessionId}/perspectives.json (agent findings)
├─ .workflow/.planning/{sessionId}/conflicts.json (decision points)
├─ .workflow/.planning/{sessionId}/synthesis.md (task breakdown)
└─ Optional: Feed to /workflow:unified-execute-with-file
```
## Implementation
### Session Setup & Input Loading
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
// Parse arguments
const planningTopic = "$PLANNING_TOPIC"
const inputType = $ARGUMENTS.match(/(?:-f|--from)\s+(\w+)/)?.[1] || 'raw'
const isAutoMode = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const isContinue = $ARGUMENTS.includes('--continue') || $ARGUMENTS.includes('-c')
// Auto-detect artifact if referenced
let artifact = null
let artifactContent = null
if (inputType === 'brainstorm' || planningTopic.startsWith('BS-')) {
const refSessionId = planningTopic
const synthesisPath = `.workflow/.brainstorm/${refSessionId}/synthesis.json`
if (fs.existsSync(synthesisPath)) {
artifact = { type: 'brainstorm', path: synthesisPath }
artifactContent = JSON.parse(Read(synthesisPath))
}
} else if (inputType === 'analysis' || planningTopic.startsWith('ANL-')) {
const refSessionId = planningTopic
const conclusionsPath = `.workflow/.analysis/${refSessionId}/conclusions.json`
if (fs.existsSync(conclusionsPath)) {
artifact = { type: 'analysis', path: conclusionsPath }
artifactContent = JSON.parse(Read(conclusionsPath))
}
}
// Generate session ID
const planSlug = planningTopic.toLowerCase()
.replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
.substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `PLAN-${planSlug}-${dateStr}`
const sessionFolder = `.workflow/.planning/${sessionId}`
// Session mode detection
const sessionExists = fs.existsSync(sessionFolder)
const hasPlan = sessionExists && fs.existsSync(`${sessionFolder}/plan.md`)
const mode = (hasPlan || isContinue) ? 'continue' : 'new'
if (!sessionExists) {
bash(`mkdir -p ${sessionFolder}`)
}
```
---
### Phase 1: Initialize plan.md (Minimal)
```markdown
# Quick Planning Session
**Session ID**: ${sessionId}
**Topic**: ${planningTopic}
**Started**: ${getUtc8ISOString()}
**Mode**: ${mode}
---
## Input Context
${artifact ? `
**Source**: ${artifact.type} artifact
**Path**: ${artifact.path}
**Artifact Summary**:
${artifact.type === 'brainstorm' ? `
- Topic: ${artifactContent.topic}
- Top Ideas: ${artifactContent.top_ideas?.length || 0}
- Key Insights: ${artifactContent.key_insights?.slice(0, 2).join(', ') || 'N/A'}
` : artifact.type === 'analysis' ? `
- Topic: ${artifactContent.topic}
- Key Conclusions: ${artifactContent.key_conclusions?.length || 0}
- Recommendations: ${artifactContent.recommendations?.length || 0}
` : ''}
` : `
**User Input**: ${planningTopic}
`}
---
## Planning Dimensions
*To be populated after agent analysis*
---
## Key Decisions
*Conflict resolution and recommendations - to be populated*
---
## Implementation Plan
*Task breakdown - to be populated after synthesis*
---
## Progress
- [ ] Multi-agent analysis
- [ ] Conflict detection
- [ ] Plan synthesis
- [ ] Ready for execution
```
---
### Phase 2: Parallel Multi-Agent Analysis
```javascript
const analysisPrompt = artifact
? `Convert ${artifact.type} artifact to planning requirements and execute parallel analysis`
: `Create planning breakdown for: ${planningTopic}`
// Prepare context for agents
const agentContext = {
topic: planningTopic,
artifact: artifact ? {
type: artifact.type,
summary: extractArtifactSummary(artifactContent)
} : null,
planning_focus: determineFocusAreas(planningTopic),
constraints: extractConstraints(planningTopic, artifactContent)
}
// Agent 1: Architecture & Design
const archPromise = Bash({
command: `ccw cli -p "
PURPOSE: Architecture & high-level design planning for '${planningTopic}'
Success: Clear component decomposition, interface design, and data flow
TASK:
• Decompose problem into major components/modules
• Identify architectural patterns and integration points
• Design interfaces and data models
• Assess scalability and maintainability implications
• Propose architectural approach with rationale
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Component decomposition (box diagram in text)
- Module interfaces and responsibilities
- Data flow between components
- Architectural patterns applied
- Scalability assessment (1-5 rating)
- Risks from architectural perspective
CONSTRAINTS: Focus on long-term maintainability
" --tool gemini --mode analysis`,
run_in_background: true
})
// Agent 2: Implementation & Feasibility
const implPromise = Bash({
command: `ccw cli -p "
PURPOSE: Implementation approach & technical feasibility for '${planningTopic}'
Success: Concrete implementation strategy with realistic resource estimates
TASK:
• Evaluate technical feasibility of approach
• Identify required technologies and dependencies
• Estimate effort: high/medium/low + rationale
• Suggest implementation phases and milestones
• Highlight technical blockers or challenges
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Technology stack recommendation
- Implementation complexity: high|medium|low with justification
- Estimated effort breakdown (analysis/design/coding/testing/deployment)
- Key technical decisions with tradeoffs
- Potential blockers and mitigations
- Suggested implementation phases
- Reusable components or libraries
CONSTRAINTS: Realistic with current tech stack
" --tool codex --mode analysis`,
run_in_background: true
})
// Agent 3: Risk & Validation
const riskPromise = Bash({
command: `ccw cli -p "
PURPOSE: Risk analysis and validation strategy for '${planningTopic}'
Success: Comprehensive risk matrix with testing strategy
TASK:
• Identify technical risks and failure scenarios
• Assess business/timeline risks
• Define validation/testing strategy
• Suggest monitoring and observability requirements
• Rate overall risk level (low/medium/high)
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Risk matrix (likelihood × impact, 1-5 each)
- Top 3 technical risks with mitigations
- Top 3 timeline/resource risks with mitigations
- Testing strategy (unit/integration/e2e/performance)
- Deployment strategy and rollback plan
- Monitoring/observability requirements
- Overall risk rating with confidence (low/medium/high)
CONSTRAINTS: Be realistic, not pessimistic
" --tool claude --mode analysis`,
run_in_background: true
})
// Agent 4: Decisions & Recommendations
const decisionPromise = Bash({
command: `ccw cli -p "
PURPOSE: Strategic decisions and execution recommendations for '${planningTopic}'
Success: Clear recommended approach with tradeoff analysis
TASK:
• Synthesize all considerations into recommendations
• Clearly identify critical decision points
• Outline key tradeoffs (speed vs quality, scope vs timeline, etc.)
• Propose go/no-go decision criteria
• Suggest execution strategy and sequencing
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Primary recommendation with strong rationale
- Alternative approaches with pros/cons
- 2-3 critical decision points with recommended choices
- Key tradeoffs and what we're optimizing for
- Success metrics and go/no-go criteria
- Suggested execution sequencing
- Resource requirements and dependencies
CONSTRAINTS: Focus on actionable decisions, not analysis
" --tool gemini --mode analysis`,
run_in_background: true
})
// Wait for all parallel analyses
const [archResult, implResult, riskResult, decisionResult] = await Promise.all([
archPromise, implPromise, riskPromise, decisionPromise
])
```
---
### Phase 3: Aggregate Perspectives
```javascript
// Parse and structure agent findings
const perspectives = {
session_id: sessionId,
timestamp: getUtc8ISOString(),
topic: planningTopic,
source_artifact: artifact?.type || 'raw',
architecture: {
source: 'gemini (design)',
components: extractComponents(archResult),
interfaces: extractInterfaces(archResult),
patterns: extractPatterns(archResult),
scalability_rating: extractRating(archResult, 'scalability'),
risks_from_design: extractRisks(archResult)
},
implementation: {
source: 'codex (pragmatic)',
technology_stack: extractStack(implResult),
complexity: extractComplexity(implResult),
effort_breakdown: extractEffort(implResult),
blockers: extractBlockers(implResult),
phases: extractPhases(implResult)
},
validation: {
source: 'claude (systematic)',
risk_matrix: extractRiskMatrix(riskResult),
top_risks: extractTopRisks(riskResult),
testing_strategy: extractTestingStrategy(riskResult),
deployment_strategy: extractDeploymentStrategy(riskResult),
monitoring_requirements: extractMonitoring(riskResult),
overall_risk_rating: extractRiskRating(riskResult)
},
recommendation: {
source: 'gemini (synthesis)',
primary_approach: extractPrimaryApproach(decisionResult),
alternatives: extractAlternatives(decisionResult),
critical_decisions: extractDecisions(decisionResult),
tradeoffs: extractTradeoffs(decisionResult),
success_criteria: extractCriteria(decisionResult),
execution_sequence: extractSequence(decisionResult)
},
analysis_timestamp: getUtc8ISOString()
}
Write(`${sessionFolder}/perspectives.json`, JSON.stringify(perspectives, null, 2))
```
---
### Phase 4: Conflict Detection & Resolution
```javascript
// Analyze for conflicts and contradictions
const conflicts = detectConflicts({
arch_vs_impl: compareArchitectureAndImplementation(perspectives),
design_vs_risk: compareDesignAndRisk(perspectives),
effort_vs_scope: compareEffortAndScope(perspectives),
timeline_implications: extractTimingConflicts(perspectives)
})
// If conflicts exist, invoke arbitration agent
if (conflicts.critical.length > 0) {
const arbitrationResult = await Bash({
command: `ccw cli -p "
PURPOSE: Resolve planning conflicts and generate unified recommendation
Input: ${JSON.stringify(conflicts, null, 2)}
TASK:
• Review all conflicts presented
• Recommend resolution for each critical conflict
• Explain tradeoff choices
• Identify what we're optimizing for (speed/quality/risk/resource)
• Generate unified execution strategy
MODE: analysis
EXPECTED:
- For each conflict: recommended resolution + rationale
- Unified optimization criteria (what matters most?)
- Final recommendation with confidence level
- Any unresolved tensions that need user input
CONSTRAINTS: Be decisive, not fence-sitting
" --tool gemini --mode analysis`,
run_in_background: false
})
const conflictResolution = {
detected_conflicts: conflicts,
arbitration_result: arbitrationResult,
timestamp: getUtc8ISOString()
}
Write(`${sessionFolder}/conflicts.json`, JSON.stringify(conflictResolution, null, 2))
}
```
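`detectConflicts` and its comparators are left abstract above. An illustrative sketch of one comparator it might use, flagging the case where the implementation agent rates the work low complexity while the validation agent rates overall risk high (field names follow the `perspectives` structure from Phase 3; the two-step gap threshold is an assumption):

```javascript
// One possible comparator for detectConflicts(): complexity vs risk.
function compareComplexityAndRisk(perspectives) {
  const rank = { low: 1, medium: 2, high: 3 }
  const complexity = rank[perspectives.implementation.complexity] || 2
  const risk = rank[perspectives.validation.overall_risk_rating] || 2
  // A two-step gap (e.g. low complexity but high risk) is suspicious
  if (Math.abs(complexity - risk) >= 2) {
    return [{
      type: 'complexity-vs-risk',
      detail: `complexity=${perspectives.implementation.complexity}, risk=${perspectives.validation.overall_risk_rating}`,
      severity: 'critical'
    }]
  }
  return []
}

const demo = compareComplexityAndRisk({
  implementation: { complexity: 'low' },
  validation: { overall_risk_rating: 'high' }
})
console.log(demo.length)  // 1 → sent to the arbitration agent
```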
---
### Phase 5: Plan Synthesis & Task Breakdown
```javascript
const synthesisPrompt = `
Given the planning context:
- Topic: ${planningTopic}
- Architecture: ${perspectives.architecture.components.map(c => c.name).join(', ')}
- Implementation Complexity: ${perspectives.implementation.complexity}
- Timeline Risk: ${perspectives.validation.overall_risk_rating}
- Primary Recommendation: ${perspectives.recommendation.primary_approach.summary}
Generate a minimal but complete implementation plan with:
1. Task breakdown (5-8 major tasks)
2. Dependencies between tasks
3. For each task: what needs to be done, why, and key considerations
4. Success criteria for the entire effort
5. Known risks and mitigation strategies
Output as structured task list ready for execution.
`
const synthesisResult = await Bash({
command: `ccw cli -p "${synthesisPrompt}" --tool gemini --mode analysis`,
run_in_background: false
})
// Parse synthesis and generate task breakdown
const tasks = parseTaskBreakdown(synthesisResult)
const synthesis = {
session_id: sessionId,
planning_topic: planningTopic,
completed: getUtc8ISOString(),
// Summary
executive_summary: perspectives.recommendation.primary_approach.summary,
optimization_focus: extractOptimizationFocus(perspectives),
// Architecture
architecture_approach: perspectives.architecture.patterns[0] || 'TBD',
key_components: perspectives.architecture.components.slice(0, 5),
// Implementation
technology_stack: perspectives.implementation.technology_stack,
complexity_level: perspectives.implementation.complexity,
estimated_effort: perspectives.implementation.effort_breakdown,
// Risks & Validation
top_risks: perspectives.validation.top_risks.slice(0, 3),
testing_approach: perspectives.validation.testing_strategy,
// Execution
phases: perspectives.implementation.phases,
critical_path_tasks: extractCriticalPath(tasks),
total_tasks: tasks.length,
// Task breakdown (ready for unified-execute-with-file)
tasks: tasks.map(task => ({
id: task.id,
title: task.title,
description: task.description,
type: task.type,
dependencies: task.dependencies,
effort_estimate: task.effort,
success_criteria: task.criteria
}))
}
Write(`${sessionFolder}/synthesis.md`, formatSynthesisMarkdown(synthesis))
Write(`${sessionFolder}/synthesis.json`, JSON.stringify(synthesis, null, 2))
```
---
### Phase 6: Update plan.md with Results
```markdown
# Quick Planning Session
**Session ID**: ${sessionId}
**Topic**: ${planningTopic}
**Started**: ${startTime}
**Completed**: ${completionTime}
---
## Executive Summary
${synthesis.executive_summary}
**Optimization Focus**: ${synthesis.optimization_focus}
**Complexity**: ${synthesis.complexity_level}
**Estimated Effort**: ${formatEffort(synthesis.estimated_effort)}
---
## Architecture
**Primary Pattern**: ${synthesis.architecture_approach}
**Key Components**:
${synthesis.key_components.map((c, i) => `${i+1}. ${c.name}: ${c.responsibility}`).join('\n')}
---
## Implementation Strategy
**Technology Stack**:
${synthesis.technology_stack.map(t => `- ${t}`).join('\n')}
**Phases**:
${synthesis.phases.map((p, i) => `${i+1}. ${p.name} (${p.effort})`).join('\n')}
---
## Risk Assessment
**Overall Risk Level**: ${perspectives.validation.overall_risk_rating}
**Top 3 Risks**:
${synthesis.top_risks.map((r, i) => `
${i+1}. **${r.title}** (Impact: ${r.impact})
- Mitigation: ${r.mitigation}
`).join('\n')}
**Testing Approach**: ${synthesis.testing_approach}
---
## Execution Plan
**Total Tasks**: ${synthesis.total_tasks}
**Critical Path**: ${synthesis.critical_path_tasks.map(t => t.id).join(' → ')}
### Task Breakdown
${synthesis.tasks.map((task, i) => `
${i+1}. **${task.id}: ${task.title}** (Effort: ${task.effort_estimate})
- ${task.description}
- Depends on: ${task.dependencies.join(', ') || 'none'}
- Success: ${task.success_criteria}
`).join('\n')}
---
## Next Steps
**Recommended**: Execute with \`/workflow:unified-execute-with-file\` using:
\`\`\`
/workflow:unified-execute-with-file -p ${sessionFolder}/synthesis.json
\`\`\`
---
## Artifacts
- **Perspectives**: ${sessionFolder}/perspectives.json (all agent findings)
- **Conflicts**: ${sessionFolder}/conflicts.json (decision points and resolutions)
- **Synthesis**: ${sessionFolder}/synthesis.json (task breakdown for execution)
```
---
## Session Folder Structure
```
.workflow/.planning/{sessionId}/
├── plan.md # Minimal, actionable planning doc
├── perspectives.json # Multi-agent findings (architecture, impl, risk, decision)
├── conflicts.json # Detected conflicts and resolutions (if any)
├── synthesis.json # Task breakdown ready for execution
└── synthesis.md # Human-readable execution plan
```
---
## Multi-Agent Roles
| Agent | Focus | Input | Output |
|-------|-------|-------|--------|
| **Gemini (Design)** | Architecture & design patterns | Topic + constraints | Components, interfaces, patterns, scalability |
| **Codex (Pragmatic)** | Implementation reality | Topic + architecture | Tech stack, effort, phases, blockers |
| **Claude (Validation)** | Risk & testing | Architecture + impl | Risk matrix, test strategy, monitoring |
| **Gemini (Decision)** | Synthesis & strategy | All findings | Recommendations, tradeoffs, execution plan |
---
## Conflict Resolution Strategy
**Auto-Resolution for conflicts**:
1. **Architecture vs Implementation**: Recommend design-for-feasibility approach
2. **Scope vs Timeline**: Prioritize critical path, defer nice-to-haves
3. **Quality vs Speed**: Suggest iterative approach (MVP + iterations)
4. **Resource vs Effort**: Identify parallelizable tasks
**Require User Input for**:
- Strategic choices (which feature to prioritize?)
- Tool/technology decisions with strong team preferences
- Budget/resource constraints not stated in planning topic
---
## Continue & Resume
```bash
/workflow:quick-plan-with-file --continue "planning-topic"
```
When continuing:
1. Load existing plan.md and perspectives.json
2. Identify what's incomplete
3. Re-run affected agents (if planning has changed)
4. Update plan.md with new findings
5. Generate updated synthesis.json
---
## Integration Flow
```
Input Source:
├─ Raw task description
├─ Brainstorm synthesis.json
└─ Analysis conclusions.json
/workflow:quick-plan-with-file
plan.md + synthesis.json
/workflow:unified-execute-with-file
Implementation
```
---
## Usage Patterns
### Pattern 1: Quick Planning from Task
```bash
# User has a task, needs rapid multi-perspective plan
/workflow:quick-plan-with-file -y "Implement a real-time notification system with push and WebSocket support"
# → Creates plan in ~5 minutes
# → Ready for execution
```
### Pattern 2: Convert Brainstorm to Executable Plan
```bash
# User completed brainstorm, wants to convert top idea to executable plan
/workflow:quick-plan-with-file --from brainstorm BS-notifications-2025-01-28
# → Reads synthesis.json from brainstorm
# → Generates implementation plan
# → Ready for unified-execute-with-file
```
### Pattern 3: From Analysis to Implementation
```bash
# Analysis completed, now need execution plan
/workflow:quick-plan-with-file --from analysis ANL-auth-architecture-2025-01-28
# → Reads conclusions.json from analysis
# → Generates planning with recommendations
# → Output task breakdown
```
### Pattern 4: Planning with Interactive Conflict Resolution
```bash
# Full planning with user involvement in decision-making
/workflow:quick-plan-with-file "New payment flow integration"
# → Without -y flag
# → After conflict detection, asks user about tradeoffs
# → Generates plan based on user preferences
```
---
## Comparison with Other Workflows
| Feature | brainstorm | analyze | quick-plan | plan |
|---------|-----------|---------|-----------|------|
| **Purpose** | Ideation | Investigation | Lightweight planning | Detailed planning |
| **Multi-agent** | 3 perspectives | 2 CLI + explore | 4 concurrent agents | N/A (single) |
| **Documentation** | Extensive | Extensive | Minimal | Standard |
| **Output** | Ideas + synthesis | Conclusions | Executable tasks | IMPL_PLAN |
| **Typical Duration** | 30-60 min | 20-30 min | 5-10 min | 15-20 min |
| **User Interaction** | High (multi-round) | High (Q&A) | Low (decisions) | Medium |
---
## Error Handling
| Situation | Action |
|-----------|--------|
| Agents conflict on approach | Arbitration agent decides, document in conflicts.json |
| Missing critical files | Continue with available context, note limitations |
| Insufficient task breakdown | Ask user for planning focus areas |
| Effort estimate too high | Suggest MVP approach or phasing |
| Unclear requirements | Ask clarifying questions via AskUserQuestion |
| Agent timeout | Use last successful result, note partial analysis |
---
## Best Practices
1. **Use when**:
- You have clarity on WHAT but not HOW
- Need rapid multi-perspective planning
- Converting brainstorm/analysis into execution
- Want minimal planning overhead
2. **Avoid when**:
- Requirements are highly ambiguous (use brainstorm instead)
- Need deep investigation (use analyze instead)
- Want extensive planning document (use plan instead)
- No tech stack clarity (use analyze first)
3. **For best results**:
- Provide complete task/requirement description
- Include constraints and success criteria
- Specify preferences (speed vs quality vs risk)
- Review conflicts.json and make conscious tradeoff decisions
---
## Next Steps After Planning
### Feed to Execution
```bash
/workflow:unified-execute-with-file -p .workflow/.planning/{sessionId}/synthesis.json
```
### Detailed Planning if Needed
```bash
/workflow:plan "Based on quick-plan recommendations..."
```
### Continuous Refinement
```bash
/workflow:quick-plan-with-file --continue "{topic}" # Update plan with new constraints
```


@@ -0,0 +1,807 @@
---
name: unified-execute-with-file
description: Universal execution engine for consuming any planning/brainstorm/analysis output with minimal progress tracking, multi-agent coordination, and incremental execution
argument-hint: "[-y|--yes] [-p|--plan <path>] [-m|--mode sequential|parallel] [\"execution context or task name\"]"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
## Auto Mode
When `--yes` or `-y`: Auto-confirm execution decisions, use default parallel strategy where possible.
# Workflow Unified-Execute-With-File Command (/workflow:unified-execute-with-file)
## Overview
Universal execution engine that consumes **any** planning/brainstorm/analysis output and executes it with minimal progress tracking. Coordinates multiple agents (subagents or CLI tools), handles dependencies, and maintains execution timeline in a single minimal document.
**Core workflow**: Load Plan → Parse Tasks → Coordinate Agents → Execute → Track Progress → Verify
**Key features**:
- **Plan Format Agnostic**: Consumes IMPL_PLAN.md, brainstorm.md, analysis conclusions, debug resolutions
- **execution.md**: Single source of truth for progress, execution timeline, and results
- **Multi-Agent Orchestration**: Parallel execution where possible, sequential where needed
- **Incremental Execution**: Resume from failure point, no re-execution of completed tasks
- **Dependency Management**: Automatic topological sort and wait strategy
- **Real-Time Progress**: TodoWrite integration for live task status
## Usage
```bash
/workflow:unified-execute-with-file [FLAGS] [EXECUTION_CONTEXT]
# Flags
-y, --yes Auto-confirm execution decisions, use defaults
-p, --plan <path> Explicitly specify plan file (auto-detected if omitted)
-m, --mode <mode> Execution strategy: sequential (strict order) | parallel (smart dependencies)
# Arguments
[execution-context] Optional: Task category, module name, or execution focus (for filtering/priority)
# Examples
/workflow:unified-execute-with-file # Auto-detect and execute latest plan
/workflow:unified-execute-with-file -p .workflow/plans/auth-plan.md # Execute specific plan
/workflow:unified-execute-with-file -y "auth module" # Auto-execute with context focus
/workflow:unified-execute-with-file -m sequential "payment feature" # Sequential execution
```
## Execution Process
```
Plan Detection:
├─ Check for IMPL_PLAN.md or task JSON files in .workflow/
├─ Or use explicit --plan path
├─ Or auto-detect from git branch/issue context
└─ Load plan metadata and task definitions
Session Initialization:
├─ Create .workflow/.execution/{sessionId}/
├─ Initialize execution.md with plan summary
├─ Parse all tasks, identify dependencies
├─ Determine execution strategy (parallel/sequential)
└─ Initialize progress tracking
Pre-Execution Validation:
├─ Check task feasibility (required files exist, tools available)
├─ Validate dependency graph (detect cycles)
├─ Ask user to confirm execution (unless --yes)
└─ Display execution plan and timeline estimate
Task Execution Loop (Parallel/Sequential):
├─ Select next executable tasks (dependencies satisfied)
├─ Launch agents in parallel (if strategy=parallel)
├─ Monitor execution, wait for completion
├─ Capture outputs, log results
├─ Update execution.md with progress
├─ Mark tasks complete/failed
└─ Repeat until all done or max failures reached
Error Handling:
├─ Task failure → Ask user: retry|skip|abort
├─ Dependency failure → Auto-skip dependent tasks
├─ Output conflict → Ask for resolution
└─ Timeout → Mark as timeout, continue or escalate
Completion:
├─ Mark session complete
├─ Summarize execution results in execution.md
├─ Generate completion report (statistics, failures, recommendations)
└─ Offer follow-up: review|debug|enhance
Output:
├─ .workflow/.execution/{sessionId}/execution.md (plan and overall status)
├─ .workflow/.execution/{sessionId}/execution-events.md (SINGLE SOURCE OF TRUTH - all task executions)
└─ Generated files in project directories (src/*, tests/*, docs/*, etc.)
```
## Implementation
### Session Setup & Plan Detection
```javascript
// UTC+8 wall-clock timestamp; replace the misleading trailing "Z" with an explicit offset
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString().replace('Z', '+08:00')
// Plan detection strategy
let planPath = $ARGUMENTS.match(/--plan\s+(\S+)/)?.[1]
if (!planPath) {
// Auto-detect: check recent workflow artifacts
const candidates = [
'.workflow/.plan/IMPL_PLAN.md',
'.workflow/plans/IMPL_PLAN.md',
'.workflow/IMPL_PLAN.md',
]
// Find most recent plan
planPath = findMostRecentPlan(candidates)
if (!planPath) {
// Check for task JSONs
const taskJsons = glob('.workflow/**/*.json').filter(f => f.includes('IMPL-') || f.includes('task'))
if (taskJsons.length > 0) {
planPath = taskJsons[0] // Primary task
}
}
}
if (!planPath) {
AskUserQuestion({
questions: [{
question: "No execution plan found. How should it be located?",
header: "Plan Source",
multiSelect: false,
options: [
{ label: "Browse files", description: "Pick from the .workflow directory" },
{ label: "Use most recent plan", description: "Infer from git commit messages" },
{ label: "Enter path manually", description: "Specify the plan file path directly" }
]
}]
})
}
// Parse plan and extract tasks
const planContent = Read(planPath)
const plan = parsePlan(planContent, planPath) // Format-agnostic parser
const executionId = `EXEC-${plan.slug}-${getUtc8ISOString().substring(0, 10)}-${randomId(4)}`
const executionFolder = `.workflow/.execution/${executionId}`
const executionPath = `${executionFolder}/execution.md`
const eventLogPath = `${executionFolder}/execution-events.md`
bash(`mkdir -p ${executionFolder}`)
```
---
## Plan Format Parsers
Support multiple plan sources:
```javascript
function parsePlan(content, filePath) {
const ext = filePath.split('.').pop()
if (filePath.includes('IMPL_PLAN')) {
return parseImplPlan(content) // From /workflow:plan
} else if (filePath.includes('brainstorm')) {
return parseBrainstormPlan(content) // From /workflow:brainstorm-with-file
} else if (filePath.includes('synthesis')) {
return parseSynthesisPlan(content) // From /workflow:brainstorm-with-file synthesis.json
} else if (filePath.includes('conclusions')) {
return parseConclusionsPlan(content) // From /workflow:analyze-with-file conclusions.json
} else if (filePath.endsWith('.json') && content.includes('tasks')) {
return parseTaskJson(content) // Direct task JSON
}
throw new Error(`Unsupported plan format: ${filePath}`)
}
// IMPL_PLAN.md parser
function parseImplPlan(content) {
// Extract:
// - Overview/summary
// - Phase sections
// - Task list with dependencies
// - Critical files
// - Execution order
return {
type: 'impl-plan',
title: extractSection(content, 'Overview'),
phases: extractPhases(content),
tasks: extractTasks(content),
criticalFiles: extractCriticalFiles(content),
estimatedDuration: extractEstimate(content)
}
}
// Brainstorm synthesis.json parser
function parseSynthesisPlan(content) {
const synthesis = JSON.parse(content)
return {
type: 'brainstorm-synthesis',
title: synthesis.topic,
ideas: synthesis.top_ideas,
tasks: synthesis.top_ideas.map(idea => ({
id: `IDEA-${slugify(idea.title)}`,
type: 'investigation',
title: idea.title,
description: idea.description,
dependencies: [],
agent_type: 'cli-execution-agent',
prompt: `Implement: ${idea.title}\n${idea.description}`,
expected_output: idea.next_steps
})),
recommendations: synthesis.recommendations
}
}
```
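The synthesis parser above calls `slugify`, which is not defined here. A minimal sketch of it, applied to a sample payload shaped like the fields the parser expects (the sample data is illustrative, not from a real session):

```javascript
// Minimal slugify sketch: lowercase, runs of non-alphanumerics collapsed
// to single hyphens, leading/trailing hyphens trimmed.
function slugify(s) {
  return s.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-|-$/g, '')
}

// Sample synthesis payload (hypothetical) and the idea-to-task mapping
const synthesis = {
  topic: 'User Notifications',
  top_ideas: [
    { title: 'WebSocket Push', description: 'Realtime channel', next_steps: ['spike'] }
  ]
}

const tasks = synthesis.top_ideas.map(idea => ({
  id: `IDEA-${slugify(idea.title)}`,
  type: 'investigation',
  title: idea.title,
  dependencies: [],
  prompt: `Implement: ${idea.title}\n${idea.description}`
}))
```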
---
### Phase 1: Plan Loading & Validation
**Step 1.1: Parse Plan and Extract Tasks**
```javascript
const tasks = plan.tasks || parseTasksFromContent(plan)
// Normalize task structure
const normalizedTasks = tasks.map(task => ({
id: task.id || `TASK-${generateId()}`,
title: task.title || task.content,
description: task.description || task.activeForm,
type: task.type || inferTaskType(task), // 'code', 'test', 'doc', 'analysis', 'integration'
agent_type: task.agent_type || selectBestAgent(task),
dependencies: task.dependencies || [],
// Execution parameters
prompt: task.prompt || task.description,
files_to_modify: task.files_to_modify || [],
expected_output: task.expected_output || [],
// Metadata
priority: task.priority || 'normal',
parallel_safe: task.parallel_safe !== false,
estimated_duration: task.estimated_duration || null,
// Status tracking
status: 'pending',
attempts: 0,
max_retries: 2
}))
// Validate and detect issues
const validation = {
cycles: detectDependencyCycles(normalizedTasks),
missing_dependencies: findMissingDependencies(normalizedTasks),
file_conflicts: detectOutputConflicts(normalizedTasks),
warnings: []
}
if (validation.cycles.length > 0) {
throw new Error(`Circular dependencies detected: ${validation.cycles.join(', ')}`)
}
```
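`detectDependencyCycles` is assumed above; one way to sketch it is a depth-first search with in-progress/done marking, returning the ids that sit on at least one cycle:

```javascript
// Sketch of detectDependencyCycles: DFS with gray/black coloring.
// A back edge to an "in progress" node means every id from its first
// occurrence on the current stack is part of a cycle.
function detectDependencyCycles(tasks) {
  const deps = new Map(tasks.map(t => [t.id, t.dependencies || []]))
  const state = new Map() // undefined = unvisited, 1 = in progress, 2 = done
  const inCycle = new Set()

  function visit(id, stack) {
    if (state.get(id) === 2) return
    if (state.get(id) === 1) {
      stack.slice(stack.indexOf(id)).forEach(n => inCycle.add(n))
      return
    }
    state.set(id, 1)
    stack.push(id)
    for (const d of deps.get(id) || []) visit(d, stack)
    stack.pop()
    state.set(id, 2)
  }

  for (const t of tasks) visit(t.id, [])
  return [...inCycle]
}
```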
**Step 1.2: Create execution.md**
```markdown
# Execution Progress
**Execution ID**: ${executionId}
**Plan Source**: ${planPath}
**Started**: ${getUtc8ISOString()}
**Mode**: ${executionMode}
**Plan Summary**:
- Title: ${plan.title}
- Total Tasks: ${tasks.length}
- Phases: ${plan.phases?.length || 'N/A'}
---
## Execution Plan
### Task Overview
| Task ID | Title | Type | Agent | Dependencies | Status |
|---------|-------|------|-------|--------------|--------|
${normalizedTasks.map(t => `| ${t.id} | ${t.title} | ${t.type} | ${t.agent_type} | ${t.dependencies.join(',')} | ${t.status} |`).join('\n')}
### Dependency Graph
\`\`\`
${generateDependencyGraph(normalizedTasks)}
\`\`\`
### Execution Strategy
- **Mode**: ${executionMode}
- **Parallelization**: ${calculateParallel(normalizedTasks)}
- **Estimated Duration**: ${estimateTotalDuration(normalizedTasks)}
---
## Execution Timeline
*Updates as execution progresses*
---
## Current Status
${executionStatus()}
```
**Step 1.3: Pre-Execution Confirmation**
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y') // also read by executeTask below
if (!autoYes) {
AskUserQuestion({
questions: [{
question: `About to execute ${normalizedTasks.length} tasks, mode: ${executionMode}\n\nKey tasks:\n${normalizedTasks.slice(0, 3).map(t => `${t.id}: ${t.title}`).join('\n')}\n\nProceed?`,
header: "Confirmation",
multiSelect: false,
options: [
{ label: "Start execution", description: "Execute as planned" },
{ label: "Adjust parameters", description: "Modify execution parameters" },
{ label: "View details", description: "Show the full task list" },
{ label: "Cancel", description: "Exit without executing" }
]
}]
})
}
```
---
## Phase 2: Execution Orchestration
**Step 2.1: Determine Execution Order**
```javascript
// Topological sort
const executionOrder = topologicalSort(normalizedTasks)
// For parallel mode, group tasks into waves
let executionWaves = []
if (executionMode === 'parallel') {
executionWaves = groupIntoWaves(executionOrder, /* parallelLimit */ 3)
} else {
executionWaves = executionOrder.map(task => [task])
}
// Log execution plan to execution.md
// execution-events.md will track actual progress as tasks execute
```
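`topologicalSort` is assumed above; a sketch using Kahn's algorithm (emit tasks whose dependencies have all been emitted, assuming the earlier cycle check has passed):

```javascript
// Kahn's algorithm sketch: repeatedly emit every task whose dependencies
// are already emitted; if no task is ready, the graph has a cycle.
function topologicalSort(tasks) {
  const emitted = new Set()
  const order = []
  const remaining = [...tasks]
  while (remaining.length > 0) {
    const ready = remaining.filter(t =>
      (t.dependencies || []).every(d => emitted.has(d))
    )
    if (ready.length === 0) throw new Error('Cycle detected')
    for (const t of ready) {
      order.push(t)
      emitted.add(t.id)
      remaining.splice(remaining.indexOf(t), 1)
    }
  }
  return order
}
```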
**Step 2.2: Execute Task Waves**
```javascript
let completedCount = 0
let failedCount = 0
const results = {}
for (let waveIndex = 0; waveIndex < executionWaves.length; waveIndex++) {
const wave = executionWaves[waveIndex]
console.log(`\n=== Wave ${waveIndex + 1}/${executionWaves.length} ===`)
console.log(`Tasks: ${wave.map(t => t.id).join(', ')}`)
// Launch tasks in parallel
const taskPromises = wave.map(task => executeTask(task, executionFolder))
// Wait for wave completion
const waveResults = await Promise.allSettled(taskPromises)
// Process results
for (let i = 0; i < waveResults.length; i++) {
const result = waveResults[i]
const task = wave[i]
if (result.status === 'fulfilled') {
results[task.id] = result.value
if (result.value.success) {
completedCount++
task.status = 'completed'
console.log(`${task.id}: Completed`)
} else if (result.value.retry) {
console.log(`⚠️ ${task.id}: Will retry`)
task.status = 'pending'
} else {
console.log(`${task.id}: Failed`)
}
} else {
console.log(`${task.id}: Execution error`)
}
// Progress is tracked in execution-events.md (appended by executeTask)
}
// Update execution.md summary
appendExecutionTimeline(executionPath, waveIndex + 1, wave, waveResults)
}
```
**Step 2.3: Execute Individual Task with Unified Event Logging**
```javascript
async function executeTask(task, executionFolder) {
const eventLogPath = `${executionFolder}/execution-events.md`
const startTime = Date.now()
let agent = 'unknown' // declared outside try so the failure entry below can reference it
try {
// Read previous execution events for context
let previousEvents = ''
if (fs.existsSync(eventLogPath)) {
previousEvents = Read(eventLogPath)
}
// Select agent based on task type
agent = selectAgent(task.agent_type)
// Build execution context including previous agent outputs
const executionContext = `
## Previous Agent Executions (for reference)
${previousEvents}
---
## Current Task: ${task.id}
**Title**: ${task.title}
**Agent**: ${agent}
**Time**: ${getUtc8ISOString()}
### Description
${task.description}
### Context
- Modified Files: ${task.files_to_modify.join(', ')}
- Expected Output: ${task.expected_output.join(', ')}
- Previous Artifacts: [list any artifacts from previous tasks]
### Requirements
${task.requirements || 'Follow the plan'}
### Constraints
${task.constraints || 'No breaking changes'}
`
// Execute based on agent type
let result
if (agent === 'code-developer' || agent === 'tdd-developer') {
// Code implementation
result = await Task({
subagent_type: agent,
description: `Execute: ${task.title}`,
prompt: executionContext,
run_in_background: false
})
} else if (agent === 'cli-execution-agent' || agent === 'universal-executor') {
// CLI-based execution
result = await Bash({
command: `ccw cli -p "${escapeQuotes(executionContext)}" --tool gemini --mode analysis`,
run_in_background: false
})
} else if (agent === 'test-fix-agent') {
// Test execution and fixing
result = await Task({
subagent_type: 'test-fix-agent',
description: `Execute Tests: ${task.title}`,
prompt: executionContext,
run_in_background: false
})
} else {
// Generic task execution
result = await Task({
subagent_type: 'universal-executor',
description: task.title,
prompt: executionContext,
run_in_background: false
})
}
// Capture artifacts (code, tests, docs generated by this task)
const artifacts = captureArtifacts(task, executionFolder)
// Append to unified execution events log
const eventEntry = `
## Task ${task.id} - COMPLETED ✅
**Timestamp**: ${getUtc8ISOString()}
**Duration**: ${calculateDuration(startTime)}ms
**Agent**: ${agent}
### Execution Summary
${generateSummary(result)}
### Key Outputs
${formatOutputs(result)}
### Generated Artifacts
${artifacts.map(a => `- **${a.type}**: \`${a.path}\` (${a.size})`).join('\n')}
### Notes for Next Agent
${generateNotesForNextAgent(result, task)}
---
`
appendToEventLog(eventLogPath, eventEntry)
return {
success: true,
task_id: task.id,
output: result,
artifacts: artifacts,
duration: calculateDuration(startTime)
}
} catch (error) {
// Append failure event to unified log
const failureEntry = `
## Task ${task.id} - FAILED ❌
**Timestamp**: ${getUtc8ISOString()}
**Duration**: ${calculateDuration(startTime)}ms
**Agent**: ${agent}
**Error**: ${error.message}
### Error Details
\`\`\`
${error.stack}
\`\`\`
### Recovery Notes for Next Attempt
${generateRecoveryNotes(error, task)}
---
`
appendToEventLog(eventLogPath, failureEntry)
// Handle failure: retry, skip, or abort
task.attempts++
if (task.attempts < task.max_retries && autoYes) {
console.log(`⚠️ ${task.id}: Failed, retrying (${task.attempts}/${task.max_retries})`)
return { success: false, task_id: task.id, error: error.message, retry: true, duration: calculateDuration(startTime) }
} else if (task.attempts >= task.max_retries && !autoYes) {
const decision = AskUserQuestion({
questions: [{
question: `Task failed: ${task.id}\nError: ${error.message}`,
header: "Decision",
multiSelect: false,
options: [
{ label: "Retry", description: "Re-run this task" },
{ label: "Skip", description: "Skip this task and continue with the next" },
{ label: "Abort", description: "Stop the entire execution" }
]
}]
})
if (decision === 'retry') {
task.attempts = 0
return { success: false, task_id: task.id, error: error.message, retry: true, duration: calculateDuration(startTime) }
} else if (decision === 'skip') {
task.status = 'skipped'
skipDependentTasks(task.id, normalizedTasks)
} else {
throw new Error('Execution aborted by user')
}
} else {
task.status = 'failed'
skipDependentTasks(task.id, normalizedTasks)
}
return {
success: false,
task_id: task.id,
error: error.message,
duration: calculateDuration(startTime)
}
}
}
// Helper function to append to unified event log
function appendToEventLog(logPath, eventEntry) {
if (fs.existsSync(logPath)) {
const currentContent = Read(logPath)
Write(logPath, currentContent + eventEntry)
} else {
Write(logPath, eventEntry)
}
}
```
---
## Phase 3: Progress Tracking & Event Logging
The `execution-events.md` file is the **single source of truth** for all agent executions:
- Each agent **reads** previous execution events for context
- **Executes** its task (with full knowledge of what was done before)
- **Writes** its execution event (success or failure) in markdown format
- Next agent **reads** all previous events, creating a "knowledge chain"
**Event log format** (appended entry):
```markdown
## Task {id} - {STATUS} {emoji}
**Timestamp**: {time}
**Duration**: {ms}
**Agent**: {type}
### Execution Summary
{What was done}
### Generated Artifacts
- `src/types/auth.ts` (2.3KB)
### Notes for Next Agent
- Key decisions made
- Potential issues
- Ready for: TASK-003
```
---
## Phase 4: Completion & Summary
After all tasks complete or max failures reached:
1. **Collect results**: Count completed/failed/skipped tasks
2. **Update execution.md**: Add "Execution Completed" section with statistics
3. **execution-events.md**: Already contains all detailed execution records
```javascript
const statistics = {
total_tasks: normalizedTasks.length,
completed: normalizedTasks.filter(t => t.status === 'completed').length,
failed: normalizedTasks.filter(t => t.status === 'failed').length,
skipped: normalizedTasks.filter(t => t.status === 'skipped').length,
success_rate: (completedCount / normalizedTasks.length * 100).toFixed(1)
}
// Update execution.md with final status
appendExecutionSummary(executionPath, statistics)
```
**Post-Completion Options** (unless --yes):
```javascript
AskUserQuestion({
questions: [{
question: "Execution complete. Any follow-up needed?",
header: "Next Steps",
multiSelect: true,
options: [
{ label: "View details", description: "Show the full execution log" },
{ label: "Debug failures", description: "Debug the failed tasks" },
{ label: "Optimize execution", description: "Analyze suggestions for improving execution" },
{ label: "Done", description: "No follow-up needed" }
]
}]
})
```
---
## Session Folder Structure
```
.workflow/.execution/{sessionId}/
├── execution.md # Execution plan and overall status
└── execution-events.md # 📋 Unified execution log (all agents) - SINGLE SOURCE OF TRUTH
# This is both human-readable AND machine-parseable
# Generated files go directly to project directories (not into execution folder)
# E.g. TASK-001 generates: src/types/auth.ts (not artifacts/src/types/auth.ts)
# execution-events.md records the actual project paths
```
**Key Concept**:
- **execution-events.md** is the **single source of truth** for execution state
- Human-readable: Clear markdown format with task summaries
- Machine-parseable: Status indicators (✅/❌/⏳) and structured sections
- Progress tracking: Read task count by parsing status indicators
- No redundancy: One unified log for all purposes
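Because the log is machine-parseable, progress can be recovered by counting the status headers — a sketch, assuming the `## Task <id> - <STATUS>` header format shown in the event entries above (a retried task appears more than once; a fuller implementation would keep only the last entry per id):

```javascript
// Sketch: derive progress counts from execution-events.md by matching
// the "## Task <id> - <STATUS>" headers appended for each execution.
function parseProgress(eventLog) {
  const counts = { completed: 0, failed: 0 }
  for (const line of eventLog.split('\n')) {
    const m = line.match(/^## Task (\S+) - (COMPLETED|FAILED)/)
    if (!m) continue
    if (m[2] === 'COMPLETED') counts.completed++
    else counts.failed++
  }
  return counts
}
```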
---
## Agent Selection Strategy
```javascript
function selectBestAgent(task) {
if (task.type === 'code' || task.type === 'implementation') {
return task.includes_tests ? 'tdd-developer' : 'code-developer'
} else if (task.type === 'test' || task.type === 'test-fix') {
return 'test-fix-agent'
} else if (task.type === 'doc' || task.type === 'documentation') {
return 'doc-generator'
} else if (task.type === 'analysis' || task.type === 'investigation') {
return 'cli-execution-agent'
} else if (task.type === 'debug') {
return 'debug-explore-agent'
} else {
return 'universal-executor'
}
}
```
## Parallelization Rules
```javascript
function calculateParallel(tasks) {
// Group tasks into execution waves
// Constraints:
// - Tasks with same file modifications must be sequential
// - Tasks with dependencies must wait
// - Max 3 parallel tasks per wave (resource constraint)
const waves = []
const completed = new Set()
while (completed.size < tasks.length) {
const available = tasks.filter(t =>
!completed.has(t.id) &&
t.dependencies.every(d => completed.has(d))
)
if (available.length === 0) break
// Check for file conflicts
const noConflict = []
const modifiedFiles = new Set()
for (const task of available) {
const conflicts = task.files_to_modify.some(f => modifiedFiles.has(f))
if (!conflicts && noConflict.length < 3) {
noConflict.push(task)
task.files_to_modify.forEach(f => modifiedFiles.add(f))
}
// Conflicting or over-limit tasks stay pending and are picked up in a later wave
}
if (noConflict.length > 0) {
waves.push(noConflict)
noConflict.forEach(t => completed.add(t.id))
}
}
return waves
}
```
## Error Handling & Recovery
| Situation | Action |
|-----------|--------|
| Task timeout | Mark as timeout, ask user: retry/skip/abort |
| Missing dependency | Auto-skip dependent tasks, log warning |
| File conflict | Detect before execution, ask for resolution |
| Output mismatch | Validate against expected_output, flag for review |
| Agent unavailable | Fallback to universal-executor |
| Execution interrupted | Support resume with `/workflow:unified-execute-with-file --continue` |
## Usage Recommendations
Use `/workflow:unified-execute-with-file` when:
- Executing any planning document (IMPL_PLAN.md, brainstorm conclusions, analysis recommendations)
- Multiple tasks with dependencies need orchestration
- Want minimal progress tracking without clutter
- Need to handle failures gracefully and resume
- Want to parallelize where possible but ensure correctness
Use for consuming output from:
- `/workflow:plan` → IMPL_PLAN.md
- `/workflow:brainstorm-with-file` → synthesis.json → execution
- `/workflow:analyze-with-file` → conclusions.json → execution
- `/workflow:debug-with-file` → recommendations → execution
- `/workflow:lite-plan` → task JSONs → execution
## Session Resume
```bash
/workflow:unified-execute-with-file --continue # Resume last execution
/workflow:unified-execute-with-file --continue EXEC-xxx-2025-01-27-abcd # Resume specific
```
When resuming:
1. Load execution.md and execution-events.md
2. Parse execution-events.md to identify completed/failed/skipped tasks
3. Recalculate remaining dependencies
4. Resume from first incomplete task
5. Append to execution-events.md with "Resumed from [sessionId]" note


@@ -11,8 +11,8 @@ CCW 命令帮助系统,提供命令搜索、推荐、文档查看功能。
## Trigger Conditions
- 关键词: "ccw-help", "ccw-issue", "帮助", "命令", "怎么用"
- 场景: 询问命令用法、搜索命令、请求下一步建议
- 关键词: "ccw-help", "ccw-issue", "帮助", "命令", "怎么用", "ccw 怎么用", "工作流"
- 场景: 询问命令用法、搜索命令、请求下一步建议、询问任务应该用哪个工作流
## Operation Modes
@@ -50,7 +50,35 @@ CCW 命令帮助系统,提供命令搜索、推荐、文档查看功能。
1. Query `essential_commands` array
2. Guide appropriate workflow entry point
### Mode 5: Issue Reporting
### Mode 5: CCW Command Orchestration
**Triggers**: "ccw ", "自动工作流", "自动选择工作流", "帮我规划"
**Process**:
1. Analyze user intent (task type, complexity, clarity)
2. Auto-select workflow level (1-4 or Issue)
3. Build command chain based on workflow
4. Get user confirmation
5. Execute chain with TODO tracking
**Supported Workflows**:
- **Level 1** (Lite-Lite-Lite): Ultra-simple quick tasks
- **Level 2** (Rapid/Hotfix): Bug fixes, simple features, documentation
- **Level 2.5** (Rapid-to-Issue): Bridge from quick planning to issue workflow
- **Level 3** (Coupled): Complex features with planning, execution, review, tests
- **Level 3 Variants**:
- TDD workflows (test-first development)
- Test-fix workflows (debug failing tests)
- Review workflows (code review and fixes)
- UI design workflows
- **Level 4** (Full): Exploratory tasks with brainstorming
- **With-File Workflows**: Documented exploration with multi-CLI collaboration
- `brainstorm-with-file`: Multi-perspective ideation
- `debug-with-file`: Hypothesis-driven debugging
- `analyze-with-file`: Collaborative analysis
- **Issue Workflow**: Batch issue discovery, planning, queueing, execution
### Mode 6: Issue Reporting
**Triggers**: "ccw-issue", "报告 bug"
@@ -84,28 +112,60 @@ Single source of truth: **[command.json](command.json)**
## Slash Commands
```bash
/ccw-help # 通用帮助入口
/ccw-help search <keyword> # 搜索命令
/ccw-help next <command> # 获取下一步建议
/ccw-issue # 问题报告
/ccw "task description" # Auto-select workflow and execute
/ccw-help # General help entry
/ccw-help search <keyword> # Search commands
/ccw-help next <command> # Get next step suggestions
/ccw-issue # Issue reporting
```
### CCW Command Examples
```bash
/ccw "Add user authentication" # → auto-select level 2-3
/ccw "Fix memory leak in WebSocket" # → auto-select bugfix workflow
/ccw "Implement with TDD" # → detect TDD, use tdd-plan → execute → tdd-verify
/ccw "头脑风暴: 用户通知系统" # → detect brainstorm, use brainstorm-with-file
/ccw "深度调试: 系统随机崩溃" # → detect debug-file, use debug-with-file
/ccw "协作分析: 认证架构设计" # → detect analyze-file, use analyze-with-file
```
## Maintenance
### Update Index
### Update Mechanism
CCW-Help skill supports manual updates through user confirmation dialog.
#### How to Update
**Option 1: When executing the skill, user will be prompted:**
```
Would you like to update CCW-Help command index?
- Yes: Run auto-update and regenerate command.json
- No: Use current index
```
**Option 2: Manual update**
```bash
cd D:/Claude_dms3/.claude/skills/ccw-help
python scripts/analyze_commands.py
python scripts/auto-update.py
```
脚本功能:扫描 commands/ agents/ 目录,生成统一的 command.json
This runs `analyze_commands.py` to scan commands/ and agents/ directories and regenerate `command.json`.
#### Update Scripts
- **`auto-update.py`**: Simple wrapper that runs analyze_commands.py
- **`analyze_commands.py`**: Scans directories and generates command index
## Statistics
- **Commands**: 88+
- **Commands**: 50+
- **Agents**: 16
- **Essential**: 10 核心命令
- **Workflows**: 6 main levels + 3 with-file variants
- **Essential**: 10 core commands
## Core Principle

File diff suppressed because it is too large


@@ -0,0 +1,97 @@
[
{
"name": "action-planning-agent",
"description": "|",
"source": "../../../agents/action-planning-agent.md"
},
{
"name": "cli-discuss-agent",
"description": "|",
"source": "../../../agents/cli-discuss-agent.md"
},
{
"name": "cli-execution-agent",
"description": "|",
"source": "../../../agents/cli-execution-agent.md"
},
{
"name": "cli-explore-agent",
"description": "|",
"source": "../../../agents/cli-explore-agent.md"
},
{
"name": "cli-lite-planning-agent",
"description": "|",
"source": "../../../agents/cli-lite-planning-agent.md"
},
{
"name": "cli-planning-agent",
"description": "|",
"source": "../../../agents/cli-planning-agent.md"
},
{
"name": "code-developer",
"description": "|",
"source": "../../../agents/code-developer.md"
},
{
"name": "conceptual-planning-agent",
"description": "|",
"source": "../../../agents/conceptual-planning-agent.md"
},
{
"name": "context-search-agent",
"description": "|",
"source": "../../../agents/context-search-agent.md"
},
{
"name": "debug-explore-agent",
"description": "|",
"source": "../../../agents/debug-explore-agent.md"
},
{
"name": "doc-generator",
"description": "|",
"source": "../../../agents/doc-generator.md"
},
{
"name": "issue-plan-agent",
"description": "|",
"source": "../../../agents/issue-plan-agent.md"
},
{
"name": "issue-queue-agent",
"description": "|",
"source": "../../../agents/issue-queue-agent.md"
},
{
"name": "memory-bridge",
"description": "Execute complex project documentation updates using script coordination",
"source": "../../../agents/memory-bridge.md"
},
{
"name": "tdd-developer",
"description": "|",
"source": "../../../agents/tdd-developer.md"
},
{
"name": "test-context-search-agent",
"description": "|",
"source": "../../../agents/test-context-search-agent.md"
},
{
"name": "test-fix-agent",
"description": "|",
"source": "../../../agents/test-fix-agent.md"
},
{
"name": "ui-design-agent",
"description": "|",
"source": "../../../agents/ui-design-agent.md"
},
{
"name": "universal-executor",
"description": "|",
"source": "../../../agents/universal-executor.md"
}
]


@@ -0,0 +1,805 @@
[
{
"name": "ccw-coordinator",
"command": "/ccw-coordinator",
"description": "Command orchestration tool - analyze requirements, recommend chain, execute sequentially with state persistence",
"arguments": "[task description]",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/ccw-coordinator.md"
},
{
"name": "ccw-debug",
"command": "/ccw-debug",
"description": "Aggregated debug command - combines debugging diagnostics and test verification in a synergistic workflow supporting cli-quick / debug-first / test-first / bidirectional-verification modes",
"arguments": "[--mode cli|debug|test|bidirectional] [--yes|-y] [--hotfix] \\\"bug description or error message\\\"",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/ccw-debug.md"
},
{
"name": "ccw",
"command": "/ccw",
"description": "Main workflow orchestrator - analyze intent, select workflow, execute command chain in main process",
"arguments": "\\\"task description\\\"",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/ccw.md"
},
{
"name": "cli-init",
"command": "/cli:cli-init",
"description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
"arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
"category": "cli",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/cli/cli-init.md"
},
{
"name": "codex-review",
"command": "/cli:codex-review",
"description": "Interactive code review using Codex CLI via ccw endpoint with configurable review target, model, and custom instructions",
"arguments": "[--uncommitted|--base <branch>|--commit <sha>] [--model <model>] [--title <title>] [prompt]",
"category": "cli",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/cli/codex-review.md"
},
{
"name": "convert-to-plan",
"command": "/issue:convert-to-plan",
"description": "Convert planning artifacts (lite-plan, workflow session, markdown) to issue solutions",
"arguments": "[-y|--yes] [--issue <id>] [--supplement] <SOURCE>",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/convert-to-plan.md"
},
{
"name": "issue:discover-by-prompt",
"command": "/issue:discover-by-prompt",
"description": "Discover issues from user prompt with Gemini-planned iterative multi-agent exploration. Uses ACE semantic search for context gathering and supports cross-module comparison (e.g., frontend vs backend API contracts).",
"arguments": "[-y|--yes] <prompt> [--scope=src/**] [--depth=standard|deep] [--max-iterations=5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover-by-prompt.md"
},
{
"name": "issue:discover",
"command": "/issue:discover",
"description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
"arguments": "[-y|--yes] <path-pattern> [--perspectives=bug,ux,...] [--external]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover.md"
},
{
"name": "execute",
"command": "/issue:execute",
"description": "Execute queue with DAG-based parallel orchestration (one commit per solution)",
"arguments": "[-y|--yes] --queue <queue-id> [--worktree [<existing-path>]]",
"category": "issue",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/issue/execute.md"
},
{
"name": "from-brainstorm",
"command": "/issue:from-brainstorm",
"description": "Convert brainstorm session ideas into issue with executable solution for parallel-dev-cycle",
"arguments": "SESSION=\\\"<session-id>\\\" [--idea=<index>] [--auto] [-y|--yes]",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/from-brainstorm.md"
},
{
"name": "new",
"command": "/issue:new",
"description": "Create structured issue from GitHub URL or text description",
"arguments": "[-y|--yes] <github-url | text-description> [--priority 1-5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/new.md"
},
{
"name": "plan",
"command": "/issue:plan",
"description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
"arguments": "[-y|--yes] --all-pending <issue-id>[,<issue-id>,...] [--batch-size 3]",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/plan.md"
},
{
"name": "queue",
"command": "/issue:queue",
"description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
"arguments": "[-y|--yes] [--queues <n>] [--issue <id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/queue.md"
},
{
"name": "compact",
"command": "/memory:compact",
"description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
"arguments": "[optional: session description]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/compact.md"
},
{
"name": "docs-full-cli",
"command": "/memory:docs-full-cli",
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[path] [--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-full-cli.md"
},
{
"name": "docs-related-cli",
"command": "/memory:docs-related-cli",
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
"arguments": "[--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-related-cli.md"
},
{
"name": "load",
"command": "/memory:load",
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
"arguments": "[--tool gemini|qwen] \\\"task context description\\\"",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load.md"
},
{
"name": "style-skill-memory",
"command": "/memory:style-skill-memory",
"description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
"arguments": "[package-name] [--regenerate]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/style-skill-memory.md"
},
{
"name": "tips",
"command": "/memory:tips",
"description": "Quick note-taking command to capture ideas, snippets, reminders, and insights for later reference",
"arguments": "<note content> [--tag <tag1,tag2>] [--context <context>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/tips.md"
},
{
"name": "update-full",
"command": "/memory:update-full",
"description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-full.md"
},
{
"name": "update-related",
"command": "/memory:update-related",
"description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
"arguments": "[--tool gemini|qwen|codex]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-related.md"
},
{
"name": "ccw view",
"command": "/ccw view",
"description": "Dashboard - Open CCW workflow dashboard for managing tasks and sessions",
"arguments": "",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/view.md"
},
{
"name": "analyze-with-file",
"command": "/workflow:analyze-with-file",
"description": "Interactive collaborative analysis with documented discussions, CLI-assisted exploration, and evolving understanding",
"arguments": "[-y|--yes] [-c|--continue] \\\"topic or question\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Beginner",
"source": "../../../commands/workflow/analyze-with-file.md"
},
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "[-y|--yes] topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "auto-parallel",
"command": "/workflow:brainstorm:auto-parallel",
"description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
"arguments": "[-y|--yes] topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
},
{
"name": "role-analysis",
"command": "/workflow:brainstorm:role-analysis",
"description": "Unified role-specific analysis generation with interactive context gathering and incremental updates",
"arguments": "[role-name] [--session session-id] [--update] [--include-questions] [--skip-questions]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/role-analysis.md"
},
{
"name": "synthesis",
"command": "/workflow:brainstorm:synthesis",
"description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
"arguments": "[-y|--yes] [optional: --session session-id]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/synthesis.md"
},
{
"name": "brainstorm-with-file",
"command": "/workflow:brainstorm-with-file",
"description": "Interactive brainstorming with multi-CLI collaboration, idea expansion, and documented thought evolution",
"arguments": "[-y|--yes] [-c|--continue] [-m|--mode creative|structured] \\\"idea or topic\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm-with-file.md"
},
{
"name": "clean",
"command": "/workflow:clean",
"description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
"arguments": "[-y|--yes] [--dry-run] [\\\"focus area\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/clean.md"
},
{
"name": "debug-with-file",
"command": "/workflow:debug-with-file",
"description": "Interactive hypothesis-driven debugging with documented exploration, understanding evolution, and Gemini-assisted correction",
"arguments": "[-y|--yes] \\\"bug description or error message\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/debug-with-file.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[-y|--yes] [--resume-session=\\\"session-id\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "init",
"command": "/workflow:init",
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
"arguments": "[--regenerate]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init.md"
},
{
"name": "lite-execute",
"command": "/workflow:lite-execute",
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
"arguments": "[-y|--yes] [--in-memory] [\\\"task description\\\"|file-path]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-execute.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[-y|--yes] [--hotfix] \\\"bug description or issue reference\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "workflow:lite-lite-lite",
"command": "/workflow:lite-lite-lite",
"description": "Ultra-lightweight multi-tool analysis and direct execution. No artifacts for simple tasks; auto-creates planning docs in .workflow/.scratchpad/ for complex tasks. Auto tool selection based on task analysis, user-driven iteration via AskUser.",
"arguments": "[-y|--yes] <task description>",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-lite-lite.md"
},
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and execution handoff to lite-execute after user confirmation",
"arguments": "[-y|--yes] [-e|--explore] \\\"task description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "workflow:multi-cli-plan",
"command": "/workflow:multi-cli-plan",
"description": "Multi-CLI collaborative planning workflow with ACE context gathering and iterative cross-verification. Uses cli-discuss-agent for Gemini+Codex+Claude analysis to converge on optimal execution plan.",
"arguments": "[-y|--yes] <task description> [--max-rounds=3] [--tools=gemini,codex] [--mode=parallel|serial]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/multi-cli-plan.md"
},
{
"name": "plan-verify",
"command": "/workflow:plan-verify",
"description": "Perform READ-ONLY verification analysis between IMPL_PLAN.md, task JSONs, and brainstorming artifacts. Generates structured report with quality gate recommendation. Does NOT modify any files.",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan-verify.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "[-y|--yes] \\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "replan",
"command": "/workflow:replan",
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
"arguments": "[-y|--yes] [--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/replan.md"
},
{
"name": "review-cycle-fix",
"command": "/workflow:review-cycle-fix",
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-cycle-fix.md"
},
{
"name": "review-module-cycle",
"command": "/workflow:review-module-cycle",
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-module-cycle.md"
},
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "review",
"command": "/workflow:review",
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
"arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review.md"
},
{
"name": "complete",
"command": "/workflow:session:complete",
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
"arguments": "[-y|--yes] [--detailed]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/complete.md"
},
{
"name": "list",
"command": "/workflow:session:list",
"description": "List all workflow sessions with status filtering, shows session metadata and progress information",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/workflow/session/list.md"
},
{
"name": "resume",
"command": "/workflow:session:resume",
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/resume.md"
},
{
"name": "solidify",
"command": "/workflow:session:solidify",
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
"arguments": "[-y|--yes] [--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\\"",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/solidify.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
"arguments": "\\\"feature description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-plan.md"
},
{
"name": "tdd-verify",
"command": "/workflow:tdd-verify",
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles. Generates quality report with coverage analysis and quality gate recommendation. Orchestrates sub-commands for comprehensive validation.",
"arguments": "[optional: --session WFS-session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-verify.md"
},
{
"name": "test-cycle-execute",
"command": "/workflow:test-cycle-execute",
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-cycle-execute.md"
},
{
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-fix-gen.md"
},
{
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
"arguments": "source-session-id",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-gen.md"
},
{
"name": "conflict-resolution",
"command": "/workflow:tools:conflict-resolution",
"description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
"arguments": "[-y|--yes] --session WFS-session-id --context path/to/context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/conflict-resolution.md"
},
{
"name": "gather",
"command": "/workflow:tools:gather",
"description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
"arguments": "--session WFS-session-id \\\"task description\\\"",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/context-gather.md"
},
{
"name": "task-generate-agent",
"command": "/workflow:tools:task-generate-agent",
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
"arguments": "[-y|--yes] --session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-agent.md"
},
{
"name": "task-generate-tdd",
"command": "/workflow:tools:task-generate-tdd",
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
"arguments": "[-y|--yes] --session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-tdd.md"
},
{
"name": "tdd-coverage-analysis",
"command": "/workflow:tools:tdd-coverage-analysis",
"description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
},
{
"name": "test-concept-enhanced",
"command": "/workflow:tools:test-concept-enhanced",
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
},
{
"name": "test-context-gather",
"command": "/workflow:tools:test-context-gather",
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-context-gather.md"
},
{
"name": "test-task-generate",
"command": "/workflow:tools:test-task-generate",
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-task-generate.md"
},
{
"name": "animation-extract",
"command": "/workflow:ui-design:animation-extract",
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
"arguments": "[-y|--yes] [--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/animation-extract.md"
},
{
"name": "workflow:ui-design:codify-style",
"command": "/workflow:ui-design:codify-style",
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/codify-style.md"
},
{
"name": "design-sync",
"command": "/workflow:ui-design:design-sync",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/design-sync.md"
},
{
"name": "explore-auto",
"command": "/workflow:ui-design:explore-auto",
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/explore-auto.md"
},
{
"name": "generate",
"command": "/workflow:ui-design:generate",
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
"arguments": "[--design-id <id>] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/generate.md"
},
{
"name": "imitate-auto",
"command": "/workflow:ui-design:imitate-auto",
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
"arguments": "[--input \"<value>\"] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
},
{
"name": "workflow:ui-design:import-from-code",
"command": "/workflow:ui-design:import-from-code",
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/import-from-code.md"
},
{
"name": "layout-extract",
"command": "/workflow:ui-design:layout-extract",
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[-y|--yes] [--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/layout-extract.md"
},
{
"name": "workflow:ui-design:reference-page-generator",
"command": "/workflow:ui-design:reference-page-generator",
"description": "Generate multi-component reference pages and documentation from design run extraction",
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
},
{
"name": "style-extract",
"command": "/workflow:ui-design:style-extract",
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[-y|--yes] [--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/style-extract.md"
},
{
"name": "unified-execute-with-file",
"command": "/workflow:unified-execute-with-file",
"description": "Universal execution engine for consuming any planning/brainstorm/analysis output with minimal progress tracking, multi-agent coordination, and incremental execution",
"arguments": "[-y|--yes] [-p|--plan <path>] [-m|--mode sequential|parallel] [\\\"execution context or task name\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/unified-execute-with-file.md"
}
]
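
Several entries in this catalog embed escaped quotes (`\"`) inside their `arguments` strings, and a truncated closing escape silently produces an invalid or misleading usage line. A minimal validation sketch (the entry schema below is inferred from the file; the function name is hypothetical) that flags entries with missing keys or unbalanced escaped quotes:

```python
# Validate command-catalog entries: every entry must carry the full
# key set, and escaped quotes in "arguments" must come in pairs.
REQUIRED_KEYS = {"name", "command", "description", "arguments",
                 "category", "subcategory", "usage_scenario",
                 "difficulty", "source"}

def validate_entries(entries):
    """Return (command, problem) pairs for malformed catalog entries."""
    problems = []
    for entry in entries:
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            problems.append((entry.get("command", "?"),
                             f"missing keys: {sorted(missing)}"))
            continue
        # After JSON parsing, each escaped quote is the two-character
        # sequence \" - an odd count means a truncated closing escape.
        if entry["arguments"].count('\\"') % 2 != 0:
            problems.append((entry["command"],
                             'unbalanced \\" in arguments'))
    return problems

# Example: the second entry has an opening \" but no closing one.
sample = [
    {"name": "plan", "command": "/workflow:plan",
     "description": "d", "arguments": '[-y|--yes] \\"text\\"|file.md',
     "category": "workflow", "subcategory": None,
     "usage_scenario": "planning", "difficulty": "Intermediate",
     "source": "plan.md"},
    {"name": "ccw", "command": "/ccw",
     "description": "d", "arguments": '\\"task description',
     "category": "general", "subcategory": None,
     "usage_scenario": "general", "difficulty": "Intermediate",
     "source": "ccw.md"},
]
for cmd, problem in validate_entries(sample):
    print(f"{cmd}: {problem}")
```

Running this against the categorized file would catch the truncated `arguments` strings (e.g. those ending in a lone `\`) before they reach the command-palette UI.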


@@ -0,0 +1,833 @@
{
"general": {
"_root": [
{
"name": "ccw-coordinator",
"command": "/ccw-coordinator",
"description": "Command orchestration tool - analyze requirements, recommend chain, execute sequentially with state persistence",
"arguments": "[task description]",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/ccw-coordinator.md"
},
{
"name": "ccw-debug",
"command": "/ccw-debug",
"description": "Aggregated debug command - combines debugging diagnostics and test verification in a synergistic workflow supporting cli-quick / debug-first / test-first / bidirectional-verification modes",
"arguments": "[--mode cli|debug|test|bidirectional] [--yes|-y] [--hotfix] \\\"bug description or error message\\\"",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/ccw-debug.md"
},
{
"name": "ccw",
"command": "/ccw",
"description": "Main workflow orchestrator - analyze intent, select workflow, execute command chain in main process",
"arguments": "\\\"task description\\\"",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/ccw.md"
},
{
"name": "ccw view",
"command": "/ccw view",
"description": "Dashboard - Open CCW workflow dashboard for managing tasks and sessions",
"arguments": "",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/view.md"
}
]
},
"cli": {
"_root": [
{
"name": "cli-init",
"command": "/cli:cli-init",
"description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
"arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
"category": "cli",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/cli/cli-init.md"
},
{
"name": "codex-review",
"command": "/cli:codex-review",
"description": "Interactive code review using Codex CLI via ccw endpoint with configurable review target, model, and custom instructions",
"arguments": "[--uncommitted|--base <branch>|--commit <sha>] [--model <model>] [--title <title>] [prompt]",
"category": "cli",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/cli/codex-review.md"
}
]
},
"issue": {
"_root": [
{
"name": "convert-to-plan",
"command": "/issue:convert-to-plan",
"description": "Convert planning artifacts (lite-plan, workflow session, markdown) to issue solutions",
"arguments": "[-y|--yes] [--issue <id>] [--supplement] <SOURCE>",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/convert-to-plan.md"
},
{
"name": "issue:discover-by-prompt",
"command": "/issue:discover-by-prompt",
"description": "Discover issues from user prompt with Gemini-planned iterative multi-agent exploration. Uses ACE semantic search for context gathering and supports cross-module comparison (e.g., frontend vs backend API contracts).",
"arguments": "[-y|--yes] <prompt> [--scope=src/**] [--depth=standard|deep] [--max-iterations=5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover-by-prompt.md"
},
{
"name": "issue:discover",
"command": "/issue:discover",
"description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
"arguments": "[-y|--yes] <path-pattern> [--perspectives=bug,ux,...] [--external]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover.md"
},
{
"name": "execute",
"command": "/issue:execute",
"description": "Execute queue with DAG-based parallel orchestration (one commit per solution)",
"arguments": "[-y|--yes] --queue <queue-id> [--worktree [<existing-path>]]",
"category": "issue",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/issue/execute.md"
},
{
"name": "from-brainstorm",
"command": "/issue:from-brainstorm",
"description": "Convert brainstorm session ideas into issue with executable solution for parallel-dev-cycle",
"arguments": "SESSION=\\\"<session-id>\\\" [--idea=<index>] [--auto] [-y|--yes]",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/from-brainstorm.md"
},
{
"name": "new",
"command": "/issue:new",
"description": "Create structured issue from GitHub URL or text description",
"arguments": "[-y|--yes] <github-url | text-description> [--priority 1-5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/new.md"
},
{
"name": "plan",
"command": "/issue:plan",
"description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
"arguments": "[-y|--yes] --all-pending <issue-id>[,<issue-id>,...] [--batch-size 3]",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/plan.md"
},
{
"name": "queue",
"command": "/issue:queue",
"description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
"arguments": "[-y|--yes] [--queues <n>] [--issue <id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/queue.md"
}
]
},
"memory": {
"_root": [
{
"name": "compact",
"command": "/memory:compact",
"description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
"arguments": "[optional: session description]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/compact.md"
},
{
"name": "docs-full-cli",
"command": "/memory:docs-full-cli",
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[path] [--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-full-cli.md"
},
{
"name": "docs-related-cli",
"command": "/memory:docs-related-cli",
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
"arguments": "[--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-related-cli.md"
},
{
"name": "load",
"command": "/memory:load",
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
"arguments": "[--tool gemini|qwen] \\\"task context description\\\"",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load.md"
},
{
"name": "style-skill-memory",
"command": "/memory:style-skill-memory",
"description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
"arguments": "[package-name] [--regenerate]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/style-skill-memory.md"
},
{
"name": "tips",
"command": "/memory:tips",
"description": "Quick note-taking command to capture ideas, snippets, reminders, and insights for later reference",
"arguments": "<note content> [--tag <tag1,tag2>] [--context <context>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/tips.md"
},
{
"name": "update-full",
"command": "/memory:update-full",
"description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-full.md"
},
{
"name": "update-related",
"command": "/memory:update-related",
"description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
"arguments": "[--tool gemini|qwen|codex]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-related.md"
}
]
},
"workflow": {
"_root": [
{
"name": "analyze-with-file",
"command": "/workflow:analyze-with-file",
"description": "Interactive collaborative analysis with documented discussions, CLI-assisted exploration, and evolving understanding",
"arguments": "[-y|--yes] [-c|--continue] \\\"topic or question\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Beginner",
"source": "../../../commands/workflow/analyze-with-file.md"
},
{
"name": "brainstorm-with-file",
"command": "/workflow:brainstorm-with-file",
"description": "Interactive brainstorming with multi-CLI collaboration, idea expansion, and documented thought evolution",
"arguments": "[-y|--yes] [-c|--continue] [-m|--mode creative|structured] \\\"idea or topic\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm-with-file.md"
},
{
"name": "clean",
"command": "/workflow:clean",
"description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
"arguments": "[-y|--yes] [--dry-run] [\\\"focus area\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/clean.md"
},
{
"name": "debug-with-file",
"command": "/workflow:debug-with-file",
"description": "Interactive hypothesis-driven debugging with documented exploration, understanding evolution, and Gemini-assisted correction",
"arguments": "[-y|--yes] \\\"bug description or error message\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/debug-with-file.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[-y|--yes] [--resume-session=\\\"session-id\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "init",
"command": "/workflow:init",
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
"arguments": "[--regenerate]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init.md"
},
{
"name": "lite-execute",
"command": "/workflow:lite-execute",
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
"arguments": "[-y|--yes] [--in-memory] [\\\"task description\\\"|file-path]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-execute.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[-y|--yes] [--hotfix] \\\"bug description or issue reference\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "workflow:lite-lite-lite",
"command": "/workflow:lite-lite-lite",
"description": "Ultra-lightweight multi-tool analysis and direct execution. No artifacts for simple tasks; auto-creates planning docs in .workflow/.scratchpad/ for complex tasks. Auto tool selection based on task analysis, user-driven iteration via AskUser.",
"arguments": "[-y|--yes] <task description>",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-lite-lite.md"
},
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and handoff to lite-execute after user confirmation",
"arguments": "[-y|--yes] [-e|--explore] \\\"task description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "workflow:multi-cli-plan",
"command": "/workflow:multi-cli-plan",
"description": "Multi-CLI collaborative planning workflow with ACE context gathering and iterative cross-verification. Uses cli-discuss-agent for Gemini+Codex+Claude analysis to converge on optimal execution plan.",
"arguments": "[-y|--yes] <task description> [--max-rounds=3] [--tools=gemini,codex] [--mode=parallel|serial]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/multi-cli-plan.md"
},
{
"name": "plan-verify",
"command": "/workflow:plan-verify",
"description": "Perform READ-ONLY verification analysis between IMPL_PLAN.md, task JSONs, and brainstorming artifacts. Generates structured report with quality gate recommendation. Does NOT modify any files.",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan-verify.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "[-y|--yes] \\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "replan",
"command": "/workflow:replan",
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
"arguments": "[-y|--yes] [--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/replan.md"
},
{
"name": "review-cycle-fix",
"command": "/workflow:review-cycle-fix",
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-cycle-fix.md"
},
{
"name": "review-module-cycle",
"command": "/workflow:review-module-cycle",
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-module-cycle.md"
},
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "review",
"command": "/workflow:review",
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
"arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
"arguments": "\\\"feature description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-plan.md"
},
{
"name": "tdd-verify",
"command": "/workflow:tdd-verify",
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles. Generates quality report with coverage analysis and quality gate recommendation. Orchestrates sub-commands for comprehensive validation.",
"arguments": "[optional: --session WFS-session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-verify.md"
},
{
"name": "test-cycle-execute",
"command": "/workflow:test-cycle-execute",
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-cycle-execute.md"
},
{
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-fix-gen.md"
},
{
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
"arguments": "source-session-id",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-gen.md"
},
{
"name": "unified-execute-with-file",
"command": "/workflow:unified-execute-with-file",
"description": "Universal execution engine for consuming any planning/brainstorm/analysis output with minimal progress tracking, multi-agent coordination, and incremental execution",
"arguments": "[-y|--yes] [-p|--plan <path>] [-m|--mode sequential|parallel] [\\\"execution context or task name\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/unified-execute-with-file.md"
}
],
"brainstorm": [
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "[-y|--yes] topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "auto-parallel",
"command": "/workflow:brainstorm:auto-parallel",
"description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
"arguments": "[-y|--yes] topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
},
{
"name": "role-analysis",
"command": "/workflow:brainstorm:role-analysis",
"description": "Unified role-specific analysis generation with interactive context gathering and incremental updates",
"arguments": "[role-name] [--session session-id] [--update] [--include-questions] [--skip-questions]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/role-analysis.md"
},
{
"name": "synthesis",
"command": "/workflow:brainstorm:synthesis",
"description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
"arguments": "[-y|--yes] [optional: --session session-id]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/synthesis.md"
}
],
"session": [
{
"name": "complete",
"command": "/workflow:session:complete",
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
"arguments": "[-y|--yes] [--detailed]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/complete.md"
},
{
"name": "list",
"command": "/workflow:session:list",
"description": "List all workflow sessions with status filtering, shows session metadata and progress information",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/workflow/session/list.md"
},
{
"name": "resume",
"command": "/workflow:session:resume",
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/resume.md"
},
{
"name": "solidify",
"command": "/workflow:session:solidify",
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
"arguments": "[-y|--yes] [--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\\"",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/solidify.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
}
],
"tools": [
{
"name": "conflict-resolution",
"command": "/workflow:tools:conflict-resolution",
"description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
"arguments": "[-y|--yes] --session WFS-session-id --context path/to/context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/conflict-resolution.md"
},
{
"name": "gather",
"command": "/workflow:tools:gather",
"description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
"arguments": "--session WFS-session-id \\\"task description\\\"",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/context-gather.md"
},
{
"name": "task-generate-agent",
"command": "/workflow:tools:task-generate-agent",
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
"arguments": "[-y|--yes] --session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-agent.md"
},
{
"name": "task-generate-tdd",
"command": "/workflow:tools:task-generate-tdd",
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
"arguments": "[-y|--yes] --session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-tdd.md"
},
{
"name": "tdd-coverage-analysis",
"command": "/workflow:tools:tdd-coverage-analysis",
"description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
},
{
"name": "test-concept-enhanced",
"command": "/workflow:tools:test-concept-enhanced",
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
},
{
"name": "test-context-gather",
"command": "/workflow:tools:test-context-gather",
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-context-gather.md"
},
{
"name": "test-task-generate",
"command": "/workflow:tools:test-task-generate",
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-task-generate.md"
}
],
"ui-design": [
{
"name": "animation-extract",
"command": "/workflow:ui-design:animation-extract",
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
"arguments": "[-y|--yes] [--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/animation-extract.md"
},
{
"name": "workflow:ui-design:codify-style",
"command": "/workflow:ui-design:codify-style",
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/codify-style.md"
},
{
"name": "design-sync",
"command": "/workflow:ui-design:design-sync",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/design-sync.md"
},
{
"name": "explore-auto",
"command": "/workflow:ui-design:explore-auto",
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/explore-auto.md"
},
{
"name": "generate",
"command": "/workflow:ui-design:generate",
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
"arguments": "[--design-id <id>] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/generate.md"
},
{
"name": "imitate-auto",
"command": "/workflow:ui-design:imitate-auto",
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
"arguments": "[--input \"<value>\"] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
},
{
"name": "workflow:ui-design:import-from-code",
"command": "/workflow:ui-design:import-from-code",
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/import-from-code.md"
},
{
"name": "layout-extract",
"command": "/workflow:ui-design:layout-extract",
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[-y|--yes] [--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/layout-extract.md"
},
{
"name": "workflow:ui-design:reference-page-generator",
"command": "/workflow:ui-design:reference-page-generator",
"description": "Generate multi-component reference pages and documentation from design run extraction",
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
},
{
"name": "style-extract",
"command": "/workflow:ui-design:style-extract",
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[-y|--yes] [--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/style-extract.md"
}
]
}
}

@@ -0,0 +1,819 @@
{
"general": [
{
"name": "ccw-coordinator",
"command": "/ccw-coordinator",
"description": "Command orchestration tool - analyze requirements, recommend chain, execute sequentially with state persistence",
"arguments": "[task description]",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/ccw-coordinator.md"
},
{
"name": "ccw-debug",
"command": "/ccw-debug",
"description": "Aggregated debug command - combines debugging diagnostics and test verification in a synergistic workflow supporting cli-quick / debug-first / test-first / bidirectional-verification modes",
"arguments": "[--mode cli|debug|test|bidirectional] [--yes|-y] [--hotfix] \\\"bug description or error message\\\"",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/ccw-debug.md"
},
{
"name": "ccw",
"command": "/ccw",
"description": "Main workflow orchestrator - analyze intent, select workflow, execute command chain in main process",
"arguments": "\\\"task description\\\"",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/ccw.md"
},
{
"name": "cli-init",
"command": "/cli:cli-init",
"description": "Generate .gemini/ and .qwen/ config directories with settings.json and ignore files based on workspace technology detection",
"arguments": "[--tool gemini|qwen|all] [--output path] [--preview]",
"category": "cli",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/cli/cli-init.md"
},
{
"name": "issue:discover-by-prompt",
"command": "/issue:discover-by-prompt",
"description": "Discover issues from user prompt with Gemini-planned iterative multi-agent exploration. Uses ACE semantic search for context gathering and supports cross-module comparison (e.g., frontend vs backend API contracts).",
"arguments": "[-y|--yes] <prompt> [--scope=src/**] [--depth=standard|deep] [--max-iterations=5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover-by-prompt.md"
},
{
"name": "issue:discover",
"command": "/issue:discover",
"description": "Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for security and best-practices perspectives.",
"arguments": "[-y|--yes] <path-pattern> [--perspectives=bug,ux,...] [--external]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/discover.md"
},
{
"name": "new",
"command": "/issue:new",
"description": "Create structured issue from GitHub URL or text description",
"arguments": "[-y|--yes] <github-url | text-description> [--priority 1-5]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/new.md"
},
{
"name": "queue",
"command": "/issue:queue",
"description": "Form execution queue from bound solutions using issue-queue-agent (solution-level)",
"arguments": "[-y|--yes] [--queues <n>] [--issue <id>]",
"category": "issue",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/issue/queue.md"
},
{
"name": "compact",
"command": "/memory:compact",
"description": "Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool",
"arguments": "[optional: session description]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/compact.md"
},
{
"name": "load",
"command": "/memory:load",
"description": "Delegate to universal-executor agent to analyze project via Gemini/Qwen CLI and return JSON core content package for task context",
"arguments": "[--tool gemini|qwen] \\\"task context description\\\"",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/load.md"
},
{
"name": "tips",
"command": "/memory:tips",
"description": "Quick note-taking command to capture ideas, snippets, reminders, and insights for later reference",
"arguments": "<note content> [--tag <tag1,tag2>] [--context <context>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/tips.md"
},
{
"name": "update-full",
"command": "/memory:update-full",
"description": "Update all CLAUDE.md files using layer-based execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[--tool gemini|qwen|codex] [--path <directory>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-full.md"
},
{
"name": "update-related",
"command": "/memory:update-related",
"description": "Update CLAUDE.md for git-changed modules using batched agent execution (4 modules/agent) with gemini→qwen→codex fallback, <15 modules uses direct execution",
"arguments": "[--tool gemini|qwen|codex]",
"category": "memory",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/memory/update-related.md"
},
{
"name": "ccw view",
"command": "/ccw view",
"description": "Dashboard - Open CCW workflow dashboard for managing tasks and sessions",
"arguments": "",
"category": "general",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/view.md"
},
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "[-y|--yes] topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "auto-parallel",
"command": "/workflow:brainstorm:auto-parallel",
"description": "Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives",
"arguments": "[-y|--yes] topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/auto-parallel.md"
},
{
"name": "role-analysis",
"command": "/workflow:brainstorm:role-analysis",
"description": "Unified role-specific analysis generation with interactive context gathering and incremental updates",
"arguments": "[role-name] [--session session-id] [--update] [--include-questions] [--skip-questions]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/role-analysis.md"
},
{
"name": "synthesis",
"command": "/workflow:brainstorm:synthesis",
"description": "Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent",
"arguments": "[-y|--yes] [optional: --session session-id]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/brainstorm/synthesis.md"
},
{
"name": "clean",
"command": "/workflow:clean",
"description": "Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution",
"arguments": "[-y|--yes] [--dry-run] [\\\"focus area\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/clean.md"
},
{
"name": "debug-with-file",
"command": "/workflow:debug-with-file",
"description": "Interactive hypothesis-driven debugging with documented exploration, understanding evolution, and Gemini-assisted correction",
"arguments": "[-y|--yes] \\\"bug description or error message\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/debug-with-file.md"
},
{
"name": "init",
"command": "/workflow:init",
"description": "Initialize project-level state with intelligent project analysis using cli-explore-agent",
"arguments": "[--regenerate]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/init.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[-y|--yes] [--hotfix] \\\"bug description or issue reference\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "workflow:lite-lite-lite",
"command": "/workflow:lite-lite-lite",
"description": "Ultra-lightweight multi-tool analysis and direct execution. No artifacts for simple tasks; auto-creates planning docs in .workflow/.scratchpad/ for complex tasks. Auto tool selection based on task analysis, user-driven iteration via AskUser.",
"arguments": "[-y|--yes] <task description>",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-lite-lite.md"
},
{
"name": "list",
"command": "/workflow:session:list",
"description": "List all workflow sessions with status filtering, shows session metadata and progress information",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Beginner",
"source": "../../../commands/workflow/session/list.md"
},
{
"name": "solidify",
"command": "/workflow:session:solidify",
"description": "Crystallize session learnings and user-defined constraints into permanent project guidelines",
"arguments": "[-y|--yes] [--type <convention|constraint|learning>] [--category <category>] \\\"rule or insight\\\"",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/solidify.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
},
{
"name": "conflict-resolution",
"command": "/workflow:tools:conflict-resolution",
"description": "Detect and resolve conflicts between plan and existing codebase using CLI-powered analysis with Gemini/Qwen",
"arguments": "[-y|--yes] --session WFS-session-id --context path/to/context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/conflict-resolution.md"
},
{
"name": "gather",
"command": "/workflow:tools:gather",
"description": "Intelligently collect project context using context-search-agent based on task description, packages into standardized JSON",
"arguments": "--session WFS-session-id \\\"task description\\\"",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/context-gather.md"
},
{
"name": "animation-extract",
"command": "/workflow:ui-design:animation-extract",
"description": "Extract animation and transition patterns from prompt inference and image references for design system documentation",
"arguments": "[-y|--yes] [--design-id <id>] [--session <id>] [--images \"<glob>\"] [--focus \"<types>\"] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/animation-extract.md"
},
{
"name": "explore-auto",
"command": "/workflow:ui-design:explore-auto",
"description": "Interactive exploratory UI design workflow with style-centric batch generation, creates design variants from prompts/images with parallel execution and user selection",
"arguments": "[--input \"<value>\"] [--targets \"<list>\"] [--target-type \"page|component\"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/explore-auto.md"
},
{
"name": "imitate-auto",
"command": "/workflow:ui-design:imitate-auto",
"description": "UI design workflow with direct code/image input for design token extraction and prototype generation",
"arguments": "[--input \"<value>\"] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/imitate-auto.md"
},
{
"name": "layout-extract",
"command": "/workflow:ui-design:layout-extract",
"description": "Extract structural layout information from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[-y|--yes] [--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--targets \"<list>\"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/layout-extract.md"
},
{
"name": "style-extract",
"command": "/workflow:ui-design:style-extract",
"description": "Extract design style from reference images or text prompts using Claude analysis with variant generation or refinement mode",
"arguments": "[-y|--yes] [--design-id <id>] [--session <id>] [--images \"<glob>\"] [--prompt \"<desc>\"] [--variants <count>] [--interactive] [--refine]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/style-extract.md"
}
],
"analysis": [
{
"name": "codex-review",
"command": "/cli:codex-review",
"description": "Interactive code review using Codex CLI via ccw endpoint with configurable review target, model, and custom instructions",
"arguments": "[--uncommitted|--base <branch>|--commit <sha>] [--model <model>] [--title <title>] [prompt]",
"category": "cli",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/cli/codex-review.md"
},
{
"name": "analyze-with-file",
"command": "/workflow:analyze-with-file",
"description": "Interactive collaborative analysis with documented discussions, CLI-assisted exploration, and evolving understanding",
"arguments": "[-y|--yes] [-c|--continue] \\\"topic or question\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Beginner",
"source": "../../../commands/workflow/analyze-with-file.md"
},
{
"name": "review-cycle-fix",
"command": "/workflow:review-cycle-fix",
"description": "Automated fixing of code review findings with AI-powered planning and coordinated execution. Uses intelligent grouping, multi-stage timeline coordination, and test-driven verification.",
"arguments": "<export-file|review-dir> [--resume] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-cycle-fix.md"
},
{
"name": "review-module-cycle",
"command": "/workflow:review-module-cycle",
"description": "Independent multi-dimensional code review for specified modules/files. Analyzes specific code paths across 7 dimensions with hybrid parallel-iterative execution, independent of workflow sessions.",
"arguments": "<path-pattern> [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-module-cycle.md"
},
{
"name": "review",
"command": "/workflow:review",
"description": "Post-implementation review with specialized types (security/architecture/action-items/quality) using analysis agents and Gemini",
"arguments": "[--type=security|architecture|action-items|quality] [--archived] [optional: session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "analysis",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review.md"
}
],
"planning": [
{
"name": "convert-to-plan",
"command": "/issue:convert-to-plan",
"description": "Convert planning artifacts (lite-plan, workflow session, markdown) to issue solutions",
"arguments": "[-y|--yes] [--issue <id>] [--supplement] <SOURCE>",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/convert-to-plan.md"
},
{
"name": "from-brainstorm",
"command": "/issue:from-brainstorm",
"description": "Convert brainstorm session ideas into issue with executable solution for parallel-dev-cycle",
"arguments": "SESSION=\\\"<session-id>\\\" [--idea=<index>] [--auto] [-y|--yes]",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/from-brainstorm.md"
},
{
"name": "plan",
"command": "/issue:plan",
"description": "Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)",
"arguments": "[-y|--yes] --all-pending <issue-id>[,<issue-id>,...] [--batch-size 3]",
"category": "issue",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/issue/plan.md"
},
{
"name": "brainstorm-with-file",
"command": "/workflow:brainstorm-with-file",
"description": "Interactive brainstorming with multi-CLI collaboration, idea expansion, and documented thought evolution",
"arguments": "[-y|--yes] [-c|--continue] [-m|--mode creative|structured] \\\"idea or topic\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm-with-file.md"
},
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and handoff to lite-execute after user confirmation",
"arguments": "[-y|--yes] [-e|--explore] \\\"task description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "workflow:multi-cli-plan",
"command": "/workflow:multi-cli-plan",
"description": "Multi-CLI collaborative planning workflow with ACE context gathering and iterative cross-verification. Uses cli-discuss-agent for Gemini+Codex+Claude analysis to converge on optimal execution plan.",
"arguments": "[-y|--yes] <task description> [--max-rounds=3] [--tools=gemini,codex] [--mode=parallel|serial]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/multi-cli-plan.md"
},
{
"name": "plan-verify",
"command": "/workflow:plan-verify",
"description": "Perform READ-ONLY verification analysis between IMPL_PLAN.md, task JSONs, and brainstorming artifacts. Generates structured report with quality gate recommendation. Does NOT modify any files.",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan-verify.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "[-y|--yes] \\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "replan",
"command": "/workflow:replan",
"description": "Interactive workflow replanning with session-level artifact updates and boundary clarification through guided questioning",
"arguments": "[-y|--yes] [--session session-id] [task-id] \\\"requirements\\\"|file.md [--interactive]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/replan.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
"arguments": "\\\"feature description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-plan.md"
},
{
"name": "workflow:ui-design:codify-style",
"command": "/workflow:ui-design:codify-style",
"description": "Orchestrator to extract styles from code and generate shareable reference package with preview (automatic file discovery)",
"arguments": "<path> [--package-name <name>] [--output-dir <path>] [--overwrite]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/codify-style.md"
},
{
"name": "design-sync",
"command": "/workflow:ui-design:design-sync",
"description": "Synchronize finalized design system references to brainstorming artifacts, preparing them for /workflow:plan consumption",
"arguments": "--session <session_id> [--selected-prototypes \"<list>\"]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/design-sync.md"
},
{
"name": "workflow:ui-design:import-from-code",
"command": "/workflow:ui-design:import-from-code",
"description": "Import design system from code files (CSS/JS/HTML/SCSS) with automatic file discovery and parallel agent analysis",
"arguments": "[--design-id <id>] [--session <id>] [--source <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/import-from-code.md"
},
{
"name": "workflow:ui-design:reference-page-generator",
"command": "/workflow:ui-design:reference-page-generator",
"description": "Generate multi-component reference pages and documentation from design run extraction",
"arguments": "[--design-run <path>] [--package-name <name>] [--output-dir <path>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/reference-page-generator.md"
}
],
"implementation": [
{
"name": "execute",
"command": "/issue:execute",
"description": "Execute queue with DAG-based parallel orchestration (one commit per solution)",
"arguments": "[-y|--yes] --queue <queue-id> [--worktree [<existing-path>]]",
"category": "issue",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/issue/execute.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[-y|--yes] [--resume-session=\\\"session-id\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "lite-execute",
"command": "/workflow:lite-execute",
"description": "Execute tasks based on in-memory plan, prompt description, or file content",
"arguments": "[-y|--yes] [--in-memory] [\\\"task description\\\"|file-path]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-execute.md"
},
{
"name": "test-cycle-execute",
"command": "/workflow:test-cycle-execute",
"description": "Execute test-fix workflow with dynamic task generation and iterative fix cycles until test pass rate >= 95% or max iterations reached. Uses @cli-planning-agent for failure analysis and task generation.",
"arguments": "[--resume-session=\\\"session-id\\\"] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-cycle-execute.md"
},
{
"name": "task-generate-agent",
"command": "/workflow:tools:task-generate-agent",
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
"arguments": "[-y|--yes] --session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-agent.md"
},
{
"name": "task-generate-tdd",
"command": "/workflow:tools:task-generate-tdd",
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
"arguments": "[-y|--yes] --session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/task-generate-tdd.md"
},
{
"name": "test-task-generate",
"command": "/workflow:tools:test-task-generate",
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-task-generate.md"
},
{
"name": "generate",
"command": "/workflow:ui-design:generate",
"description": "Assemble UI prototypes by combining layout templates with design tokens (default animation support), pure assembler without new content generation",
"arguments": "[--design-id <id>] [--session <id>]",
"category": "workflow",
"subcategory": "ui-design",
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/ui-design/generate.md"
},
{
"name": "unified-execute-with-file",
"command": "/workflow:unified-execute-with-file",
"description": "Universal execution engine for consuming any planning/brainstorm/analysis output with minimal progress tracking, multi-agent coordination, and incremental execution",
"arguments": "[-y|--yes] [-p|--plan <path>] [-m|--mode sequential|parallel] [\\\"execution context or task name\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/unified-execute-with-file.md"
}
],
"documentation": [
{
"name": "docs-full-cli",
"command": "/memory:docs-full-cli",
"description": "Generate full project documentation using CLI execution (Layer 3→1) with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <20 modules uses direct parallel",
"arguments": "[path] [--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-full-cli.md"
},
{
"name": "docs-related-cli",
"command": "/memory:docs-related-cli",
"description": "Generate/update documentation for git-changed modules using CLI execution with batched agents (4 modules/agent) and gemini→qwen→codex fallback, <15 modules uses direct parallel",
"arguments": "[--tool <gemini|qwen|codex>]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/docs-related-cli.md"
},
{
"name": "style-skill-memory",
"command": "/memory:style-skill-memory",
"description": "Generate SKILL memory package from style reference for easy loading and consistent design system usage",
"arguments": "[package-name] [--regenerate]",
"category": "memory",
"subcategory": null,
"usage_scenario": "documentation",
"difficulty": "Intermediate",
"source": "../../../commands/memory/style-skill-memory.md"
}
],
"session-management": [
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "complete",
"command": "/workflow:session:complete",
"description": "Mark active workflow session as complete, archive with lessons learned, update manifest, remove active flag",
"arguments": "[-y|--yes] [--detailed]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/complete.md"
},
{
"name": "resume",
"command": "/workflow:session:resume",
"description": "Resume the most recently paused workflow session with automatic session discovery and status update",
"arguments": "",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/resume.md"
}
],
"testing": [
{
"name": "tdd-verify",
"command": "/workflow:tdd-verify",
"description": "Verify TDD workflow compliance against Red-Green-Refactor cycles. Generates quality report with coverage analysis and quality gate recommendation. Orchestrates sub-commands for comprehensive validation.",
"arguments": "[optional: --session WFS-session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tdd-verify.md"
},
{
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-fix-gen.md"
},
{
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
"arguments": "source-session-id",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/test-gen.md"
},
{
"name": "tdd-coverage-analysis",
"command": "/workflow:tools:tdd-coverage-analysis",
"description": "Analyze test coverage and TDD cycle execution with Red-Green-Refactor compliance verification",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Advanced",
"source": "../../../commands/workflow/tools/tdd-coverage-analysis.md"
},
{
"name": "test-concept-enhanced",
"command": "/workflow:tools:test-concept-enhanced",
"description": "Coordinate test analysis workflow using cli-execution-agent to generate test strategy via Gemini",
"arguments": "--session WFS-test-session-id --context path/to/test-context-package.json",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-concept-enhanced.md"
},
{
"name": "test-context-gather",
"command": "/workflow:tools:test-context-gather",
"description": "Collect test coverage context using test-context-search-agent and package into standardized test-context JSON",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "testing",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/tools/test-context-gather.md"
}
]
}
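The command index above is plain JSON grouped by scenario, so it can be consumed programmatically. A minimal sketch of grouping entries by difficulty — the inline `index` is a hand-copied subset of the data above; in practice you would `json.load` the full file:

```python
import json
from collections import defaultdict

# Hand-copied subset of the index above; load the real JSON file in practice.
index = {
    "documentation": [
        {"name": "docs-full-cli", "usage_scenario": "documentation", "difficulty": "Intermediate"},
        {"name": "style-skill-memory", "usage_scenario": "documentation", "difficulty": "Intermediate"},
    ],
    "testing": [
        {"name": "tdd-verify", "usage_scenario": "testing", "difficulty": "Advanced"},
    ],
}

def by_difficulty(index):
    """Group every command entry across all scenarios by its difficulty rating."""
    groups = defaultdict(list)
    for entries in index.values():
        for entry in entries:
            groups[entry["difficulty"]].append(entry["name"])
    return dict(groups)

print(by_difficulty(index))
```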

View File

@@ -0,0 +1,160 @@
{
"workflow:plan": {
"calls_internally": [
"workflow:session:start",
"workflow:tools:context-gather",
"workflow:tools:conflict-resolution",
"workflow:tools:task-generate-agent"
],
"next_steps": [
"workflow:plan-verify",
"workflow:status",
"workflow:execute"
],
"alternatives": [
"workflow:tdd-plan"
],
"prerequisites": []
},
"workflow:tdd-plan": {
"calls_internally": [
"workflow:session:start",
"workflow:tools:context-gather",
"workflow:tools:task-generate-tdd"
],
"next_steps": [
"workflow:tdd-verify",
"workflow:status",
"workflow:execute"
],
"alternatives": [
"workflow:plan"
],
"prerequisites": []
},
"workflow:execute": {
"prerequisites": [
"workflow:plan",
"workflow:tdd-plan"
],
"related": [
"workflow:status",
"workflow:resume"
],
"next_steps": [
"workflow:review",
"workflow:tdd-verify"
]
},
"workflow:plan-verify": {
"prerequisites": [
"workflow:plan"
],
"next_steps": [
"workflow:execute"
],
"related": [
"workflow:status"
]
},
"workflow:tdd-verify": {
"prerequisites": [
"workflow:execute"
],
"related": [
"workflow:tools:tdd-coverage-analysis"
]
},
"workflow:session:start": {
"next_steps": [
"workflow:plan",
"workflow:execute"
],
"related": [
"workflow:session:list",
"workflow:session:resume"
]
},
"workflow:session:resume": {
"alternatives": [
"workflow:resume"
],
"related": [
"workflow:session:list",
"workflow:status"
]
},
"workflow:lite-plan": {
"calls_internally": [
"workflow:lite-execute"
],
"next_steps": [
"workflow:lite-execute",
"workflow:status"
],
"alternatives": [
"workflow:plan"
],
"prerequisites": []
},
"workflow:lite-fix": {
"next_steps": [
"workflow:lite-execute",
"workflow:status"
],
"alternatives": [
"workflow:lite-plan"
],
"related": [
"workflow:test-cycle-execute"
]
},
"workflow:lite-execute": {
"prerequisites": [
"workflow:lite-plan",
"workflow:lite-fix"
],
"related": [
"workflow:execute",
"workflow:status"
]
},
"workflow:review-session-cycle": {
"prerequisites": [
"workflow:execute"
],
"next_steps": [
"workflow:review-fix"
],
"related": [
"workflow:review-module-cycle"
]
},
"workflow:review-fix": {
"prerequisites": [
"workflow:review-module-cycle",
"workflow:review-session-cycle"
],
"related": [
"workflow:test-cycle-execute"
]
},
"memory:docs": {
"calls_internally": [
"workflow:session:start",
"workflow:tools:context-gather"
],
"next_steps": [
"workflow:execute"
]
},
"memory:skill-memory": {
"next_steps": [
"workflow:plan",
"cli:analyze"
],
"related": [
"memory:load-skill-memory"
]
}
}
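The relationship map above forms a small directed graph (`next_steps`, `prerequisites`, and so on). A sketch of walking `next_steps` breadth-first from a starting command — the inline `relations` dict is a trimmed subset of the JSON above:

```python
# Trimmed subset of the relationship graph above.
relations = {
    "workflow:plan": {"next_steps": ["workflow:plan-verify", "workflow:execute"]},
    "workflow:plan-verify": {"next_steps": ["workflow:execute"]},
    "workflow:execute": {"next_steps": ["workflow:review", "workflow:tdd-verify"]},
}

def reachable(start, relations):
    """Return all commands reachable from `start` via next_steps edges."""
    seen, queue = set(), [start]
    while queue:
        cmd = queue.pop(0)
        for nxt in relations.get(cmd, {}).get("next_steps", []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen)

print(reachable("workflow:plan", relations))
```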

View File

@@ -0,0 +1,90 @@
[
{
"name": "lite-plan",
"command": "/workflow:lite-plan",
"description": "Lightweight interactive planning workflow with in-memory planning, code exploration, and hand-off to lite-execute after user confirmation",
"arguments": "[-y|--yes] [-e|--explore] \\\"task description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-plan.md"
},
{
"name": "lite-fix",
"command": "/workflow:lite-fix",
"description": "Lightweight bug diagnosis and fix workflow with intelligent severity assessment and optional hotfix mode for production incidents",
"arguments": "[-y|--yes] [--hotfix] \\\"bug description or issue reference\\\"",
"category": "workflow",
"subcategory": null,
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/lite-fix.md"
},
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "[-y|--yes] \\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan.md"
},
{
"name": "execute",
"command": "/workflow:execute",
"description": "Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking",
"arguments": "[-y|--yes] [--resume-session=\\\"session-id\\\"]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "implementation",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/execute.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/session/start.md"
},
{
"name": "review-session-cycle",
"command": "/workflow:review-session-cycle",
"description": "Session-based comprehensive multi-dimensional code review. Analyzes git changes from workflow session across 7 dimensions with hybrid parallel-iterative execution, aggregates findings, and performs focused deep-dives on critical issues until quality gates met.",
"arguments": "[session-id] [--dimensions=security,architecture,...] [--max-iterations=N]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/review-session-cycle.md"
},
{
"name": "artifacts",
"command": "/workflow:brainstorm:artifacts",
"description": "Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis",
"arguments": "[-y|--yes] topic or challenge description [--count N]",
"category": "workflow",
"subcategory": "brainstorm",
"usage_scenario": "general",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/brainstorm/artifacts.md"
},
{
"name": "plan-verify",
"command": "/workflow:plan-verify",
"description": "Perform READ-ONLY verification analysis between IMPL_PLAN.md, task JSONs, and brainstorming artifacts. Generates structured report with quality gate recommendation. Does NOT modify any files.",
"arguments": "[optional: --session session-id]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
"difficulty": "Intermediate",
"source": "../../../commands/workflow/plan-verify.md"
}
]
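Each entry in the array above pairs a `command` with an `arguments` template, so a one-line usage string can be rendered directly. A small illustrative helper (the `entry` dict is copied from the array above, with the JSON string escapes simplified):

```python
# Entry copied from the planning-commands array above (escapes simplified).
entry = {
    "command": "/workflow:lite-plan",
    "arguments": '[-y|--yes] [-e|--explore] "task description"|file.md',
}

def usage(entry):
    """Render a shell-style usage line from an index entry."""
    return f'{entry["command"]} {entry["arguments"]}'.strip()

print(usage(entry))
```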

View File

@@ -0,0 +1,34 @@
#!/usr/bin/env python3
"""
Simple update script for ccw-help skill.
Runs analyze_commands.py to regenerate command index.
"""
import sys
import subprocess
from pathlib import Path
BASE_DIR = Path("D:/Claude_dms3/.claude")
SKILL_DIR = BASE_DIR / "skills" / "ccw-help"
ANALYZE_SCRIPT = SKILL_DIR / "scripts" / "analyze_commands.py"
def run_update():
    """Run command analysis update and surface any failure output."""
    try:
        result = subprocess.run(
            [sys.executable, str(ANALYZE_SCRIPT)],
            capture_output=True,
            text=True,
            timeout=30
        )
        print(result.stdout)
        # Forward the child's stderr so failures are diagnosable.
        if result.returncode != 0 and result.stderr:
            print(result.stderr, file=sys.stderr)
        return result.returncode == 0
    except Exception as e:
        print(f"Error running update: {e}", file=sys.stderr)
        return False
if __name__ == '__main__':
success = run_update()
sys.exit(0 if success else 1)

View File

@@ -1,6 +1,6 @@
---
name: skill-generator
description: Meta-skill for creating new Claude Code skills with configurable execution modes. Supports sequential (fixed order) and autonomous (stateless) phase patterns. Use for skill scaffolding, skill creation, or building new workflows. Triggers on "create skill", "new skill", "skill generator".
allowed-tools: Task, AskUserQuestion, Read, Bash, Glob, Grep, Write
---
@@ -36,95 +36,94 @@ Meta-skill for creating new Claude Code skills with configurable execution modes
## Execution Modes
### Mode 1: Sequential (Fixed Order)
Traditional linear execution model: phases execute in numeric-prefix order.
```
Phase 01 -> Phase 02 -> Phase 03 -> ... -> Phase N
```
**Use Cases**:
- Pipeline tasks (collect -> analyze -> generate)
- Strong dependencies between phases
- Fixed output structure
**Examples**: `software-manual`, `copyright-docs`
### Mode 2: Autonomous (Stateless Auto-Select)
Intelligent routing model that dynamically selects the execution path based on context.
```
---------------------------------------------------
Orchestrator Agent
(Read state -> Select Phase -> Execute -> Update)
---------------------------------------------------
|
---------+----------+----------
| | |
Phase A Phase B Phase C
(standalone) (standalone) (standalone)
```
**Use Cases**:
- Interactive tasks (chat, Q&A)
- No strong dependencies between phases
- Dynamic user intent response required
**Examples**: `issue-manage`, `workflow-debug`
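The autonomous loop above (read state, select a phase, execute, update) can be sketched in a few lines. This is a minimal illustration under assumed conventions — the action names, predicates, and state fields below are hypothetical, not taken from any real skill:

```python
def orchestrate(state, actions):
    """Repeatedly pick the first applicable action and apply it until none fit."""
    while True:
        runnable = [name for name, (pred, _) in actions.items() if pred(state)]
        if not runnable:
            return state  # no applicable action: loop terminates
        _, run = actions[runnable[0]]
        state = run(state)

# Hypothetical actions: each is (applicability predicate, state transformer).
actions = {
    "init":   (lambda s: "items" not in s, lambda s: {**s, "items": []}),
    "create": (lambda s: "items" in s and len(s["items"]) < 2,
               lambda s: {**s, "items": s["items"] + ["x"]}),
}
print(orchestrate({}, actions))
```

Real orchestrators would also persist state between turns and let the user's intent drive action selection; the fixed-point loop is the core idea.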
## Key Design Principles
1. **Mode Awareness**: Automatically recommend execution mode based on task characteristics
2. **Skeleton Generation**: Generate complete directory structure and file skeletons
3. **Standards Compliance**: Strictly follow `_shared/SKILL-DESIGN-SPEC.md`
4. **Extensibility**: Generated Skills are easy to extend and modify
---
## Required Prerequisites
IMPORTANT: Before any generation operation, read the following specification documents. Generating without understanding these standards will result in non-conforming output.
### Core Specifications (Mandatory Read)
| Document | Purpose | Priority |
|----------|---------|----------|
| [../_shared/SKILL-DESIGN-SPEC.md](../_shared/SKILL-DESIGN-SPEC.md) | Universal design spec - defines structure, naming, quality standards for all Skills | **P0 - Critical** |
| [specs/reference-docs-spec.md](specs/reference-docs-spec.md) | Reference document generation spec - ensures generated Skills have proper phase-based Reference Documents with usage timing guidance | **P0 - Critical** |
### Template Files (Read Before Generation)
| Document | Purpose |
|----------|---------|
| [templates/skill-md.md](templates/skill-md.md) | SKILL.md entry file template |
| [templates/sequential-phase.md](templates/sequential-phase.md) | Sequential Phase template |
| [templates/autonomous-orchestrator.md](templates/autonomous-orchestrator.md) | Autonomous Orchestrator template |
| [templates/autonomous-action.md](templates/autonomous-action.md) | Autonomous Action template |
| [templates/code-analysis-action.md](templates/code-analysis-action.md) | Code Analysis Action template |
| [templates/llm-action.md](templates/llm-action.md) | LLM Action template |
| [templates/script-template.md](templates/script-template.md) | Unified Script Template (Bash + Python) |
### Specification Documents (Read as Needed)
| Document | Purpose |
|----------|---------|
| [specs/execution-modes.md](specs/execution-modes.md) | Execution Modes Specification |
| [specs/skill-requirements.md](specs/skill-requirements.md) | Skill Requirements Specification |
| [specs/cli-integration.md](specs/cli-integration.md) | CLI Integration Specification |
| [specs/scripting-integration.md](specs/scripting-integration.md) | Script Integration Specification |
### Phase Execution Guides (Reference During Execution)
| Document | Purpose |
|----------|---------|
| [phases/01-requirements-discovery.md](phases/01-requirements-discovery.md) | Collect Skill Requirements |
| [phases/02-structure-generation.md](phases/02-structure-generation.md) | Generate Directory Structure |
| [phases/03-phase-generation.md](phases/03-phase-generation.md) | Generate Phase Files |
| [phases/04-specs-templates.md](phases/04-specs-templates.md) | Generate Specs and Templates |
| [phases/05-validation.md](phases/05-validation.md) | Validation and Documentation |
---
@@ -134,91 +133,73 @@ Phase 01 → Phase 02 → Phase 03 → ... → Phase N
Input Parsing:
└─ Convert user request to structured format (skill-name/purpose/mode)
Phase 0: Specification Study (MANDATORY - Must complete before proceeding)
- Read specification documents
- Load: ../_shared/SKILL-DESIGN-SPEC.md
- Load: All templates/*.md files
- Understand: Structure rules, naming conventions, quality standards
- Output: Internalized requirements (in-memory, no file output)
- Validation: MUST complete before Phase 1
Phase 1: Requirements Discovery
- Gather skill requirements via user interaction
- Tool: AskUserQuestion
- Collect: Skill name, purpose, execution mode
- Collect: Phase/Action definition
- Collect: Tool dependencies, output format
- Process: Generate configuration object
- Output: skill-config.json
- Contains: skill_name, execution_mode, phases/actions, allowed_tools
Phase 2: Structure Generation
- Create directory structure and entry file
- Input: skill-config.json (from Phase 1)
- Tool: Bash
- Execute: mkdir -p .claude/skills/{skill-name}/{phases,specs,templates,scripts}
- Tool: Write
- Generate: SKILL.md (entry point with architecture diagram)
- Output: Complete directory structure
Phase 3: Phase/Action Generation
- Decision (execution_mode check):
- IF execution_mode === "sequential": Generate Sequential Phases
- Read template: templates/sequential-phase.md
- Loop: For each phase in config.sequential_config.phases
- Generate: phases/{phase-id}.md
- Link: Previous phase output -> Current phase input
- Write: phases/_orchestrator.md
- Write: workflow.json
- Output: phases/01-{name}.md, phases/02-{name}.md, ...
- ELSE IF execution_mode === "autonomous": Generate Orchestrator + Actions
- Read template: templates/autonomous-orchestrator.md
- Write: phases/state-schema.md
- Write: phases/orchestrator.md
- Write: specs/action-catalog.md
- Loop: For each action in config.autonomous_config.actions
- Read template: templates/autonomous-action.md
- Generate: phases/actions/{action-id}.md
- Output: phases/orchestrator.md, phases/actions/*.md
Phase 4: Specs & Templates
- Generate domain specifications and templates
- Input: skill-config.json (domain context)
- Reference: [specs/reference-docs-spec.md](specs/reference-docs-spec.md) for document organization
- Tool: Write
- Generate: specs/{domain}-requirements.md
- Generate: specs/quality-standards.md
- Generate: templates/agent-base.md (if needed)
- Output: Domain-specific documentation
Phase 5: Validation & Documentation
- Verify completeness and generate usage guide
- Input: All generated files from previous phases
- Tool: Glob + Read
- Check: Required files exist and contain proper structure
- Tool: Write
- Generate: README.md (usage instructions)
- Generate: validation-report.json (completeness check)
- Output: Final documentation
```
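The `skill-config.json` produced in Phase 1 and the Phase 3 mode dispatch can be sketched together. The field values below are hypothetical placeholders, not output from a real run; only the field names come from the pipeline description above:

```python
# Hypothetical Phase 1 output: the fields mirror the pipeline description above.
config = {
    "skill_name": "demo-skill",
    "execution_mode": "sequential",   # or "autonomous"
    "phases": ["01-collect", "02-analyze", "03-generate"],
    "allowed_tools": ["Read", "Write", "Bash"],
}

def plan_outputs(config):
    """Mirror the Phase 3 decision: sequential phase files vs orchestrator + actions."""
    if config["execution_mode"] == "sequential":
        return [f"phases/{p}.md" for p in config["phases"]]
    return ["phases/orchestrator.md"] + [
        f"phases/actions/{a}.md" for a in config.get("actions", [])
    ]

print(plan_outputs(config))
```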
**Execution Protocol**:
@@ -273,118 +254,102 @@ Write(`${skillDir}/README.md`, generateReadme(config, validation));
---
## Reference Documents by Phase
IMPORTANT: This section demonstrates how skill-generator organizes its own reference documentation. This is the pattern that all generated Skills should emulate. See [specs/reference-docs-spec.md](specs/reference-docs-spec.md) for details.
### Phase 0: Specification Study (Mandatory Prerequisites)
Specification documents that must be read before any generation operation
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [../_shared/SKILL-DESIGN-SPEC.md](../_shared/SKILL-DESIGN-SPEC.md) | Universal Skill design specification | Understand Skill structure and naming conventions - **REQUIRED** |
| [specs/reference-docs-spec.md](specs/reference-docs-spec.md) | Reference document generation specification | Ensure Reference Documents have proper phase-based organization - **REQUIRED** |
### Phase 1: Requirements Discovery
Collect Skill requirements and configuration
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/01-requirements-discovery.md](phases/01-requirements-discovery.md) | Phase 1 execution guide | Understand how to collect user requirements and generate configuration |
| [specs/skill-requirements.md](specs/skill-requirements.md) | Skill requirements specification | Understand what information a Skill should contain |
### Phase 2: Structure Generation
Generate directory structure and entry file
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/02-structure-generation.md](phases/02-structure-generation.md) | Phase 2 execution guide | Understand how to generate directory structure |
| [templates/skill-md.md](templates/skill-md.md) | SKILL.md template | Learn how to generate the entry file |
### Phase 3: Phase/Action Generation
Generate specific phase or action files based on execution mode
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/03-phase-generation.md](phases/03-phase-generation.md) | Phase 3 execution guide | Understand Sequential vs Autonomous generation logic |
| [templates/sequential-phase.md](templates/sequential-phase.md) | Sequential Phase template | Generate phase files for Sequential mode |
| [templates/autonomous-orchestrator.md](templates/autonomous-orchestrator.md) | Orchestrator template | Generate orchestrator for Autonomous mode |
| [templates/autonomous-action.md](templates/autonomous-action.md) | Action template | Generate action files for Autonomous mode |
### Phase 4: Specs & Templates
Generate domain-specific specifications and templates
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/04-specs-templates.md](phases/04-specs-templates.md) | Phase 4 execution guide | Understand how to generate domain-specific documentation |
| [specs/reference-docs-spec.md](specs/reference-docs-spec.md) | Reference document specification | IMPORTANT: Follow this spec when generating Specs |
### Phase 5: Validation & Documentation
Verify results and generate final documentation
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/05-validation.md](phases/05-validation.md) | Phase 5 execution guide | Understand how to verify generated Skill completeness |
### Debugging & Troubleshooting
Reference documents when encountering issues
| Issue | Solution Document |
|-------|------------------|
| Generated Skill missing Reference Documents | [specs/reference-docs-spec.md](specs/reference-docs-spec.md) - verify phase-based organization is followed |
| Reference document organization unclear | [specs/reference-docs-spec.md](specs/reference-docs-spec.md) - Core Principles section |
| Generated documentation does not meet quality standards | [../_shared/SKILL-DESIGN-SPEC.md](../_shared/SKILL-DESIGN-SPEC.md) |
### Reference & Background
Documents for deep learning and design decisions
| Document | Purpose | Notes |
|----------|---------|-------|
| [specs/execution-modes.md](specs/execution-modes.md) | Detailed execution modes specification | Comparison and use cases for Sequential vs Autonomous |
| [specs/cli-integration.md](specs/cli-integration.md) | CLI integration specification | How generated Skills integrate with CLI |
| [specs/scripting-integration.md](specs/scripting-integration.md) | Script integration specification | How to use scripts in Phases |
| [templates/script-template.md](templates/script-template.md) | Script template | Unified Bash + Python template |
---
## Template Reference
| Template | Generated For | When Used |
|----------|--------------|-----------|
| [skill-md.md](templates/skill-md.md) | SKILL.md entry file | Phase 2 |
| [sequential-phase.md](templates/sequential-phase.md) | Sequential phase files | Phase 3 |
| [autonomous-orchestrator.md](templates/autonomous-orchestrator.md) | Orchestrator (autonomous) | Phase 3 |
| [autonomous-action.md](templates/autonomous-action.md) | Action files | Phase 3 |
| [code-analysis-action.md](templates/code-analysis-action.md) | Code analysis actions | Phase 3 |
| [llm-action.md](templates/llm-action.md) | LLM-powered actions | Phase 3 |
| [script-template.md](templates/script-template.md) | Bash + Python scripts | Phase 3/4 |
## Output Structure
### Sequential Mode
```
.claude/skills/{skill-name}/
├── SKILL.md # Entry file
├── phases/
│ ├── _orchestrator.md # Declarative orchestrator
│ ├── workflow.json # Workflow definition
│ ├── 01-{step-one}.md # Phase 1
│ ├── 02-{step-two}.md # Phase 2
│ └── 03-{step-three}.md # Phase 3
├── specs/
│ ├── {skill-name}-requirements.md
│ └── quality-standards.md
@@ -398,10 +363,10 @@ ELSE IF execution_mode === "autonomous":
```
.claude/skills/{skill-name}/
├── SKILL.md # Entry file
├── phases/
│ ├── orchestrator.md # Orchestrator (state-driven)
│ ├── state-schema.md # State schema definition
│ └── actions/
│ ├── action-init.md
│ ├── action-create.md
@@ -415,4 +380,86 @@ ELSE IF execution_mode === "autonomous":
│ └── action-base.md
├── scripts/
└── README.md
```
---
## Reference Documents by Phase
IMPORTANT: This section demonstrates how skill-generator organizes its own reference documentation. This is the pattern that all generated Skills should emulate. See [specs/reference-docs-spec.md](specs/reference-docs-spec.md) for details.
### Phase 0: Specification Study (Mandatory Prerequisites)
Specification documents that must be read before any generation operation
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [../_shared/SKILL-DESIGN-SPEC.md](../_shared/SKILL-DESIGN-SPEC.md) | Universal Skill design specification | Understand Skill structure and naming conventions - **REQUIRED** |
| [specs/reference-docs-spec.md](specs/reference-docs-spec.md) | Reference document generation specification | Ensure Reference Documents have proper phase-based organization - **REQUIRED** |
### Phase 1: Requirements Discovery
Collect Skill requirements and configuration
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/01-requirements-discovery.md](phases/01-requirements-discovery.md) | Phase 1 execution guide | Understand how to collect user requirements and generate configuration |
| [specs/skill-requirements.md](specs/skill-requirements.md) | Skill requirements specification | Understand what information a Skill should contain |
### Phase 2: Structure Generation
Generate directory structure and entry file
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/02-structure-generation.md](phases/02-structure-generation.md) | Phase 2 execution guide | Understand how to generate directory structure |
| [templates/skill-md.md](templates/skill-md.md) | SKILL.md template | Learn how to generate the entry file |
### Phase 3: Phase/Action Generation
Generate specific phase or action files based on execution mode
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/03-phase-generation.md](phases/03-phase-generation.md) | Phase 3 execution guide | Understand Sequential vs Autonomous generation logic |
| [templates/sequential-phase.md](templates/sequential-phase.md) | Sequential Phase template | Generate phase files for Sequential mode |
| [templates/autonomous-orchestrator.md](templates/autonomous-orchestrator.md) | Orchestrator template | Generate orchestrator for Autonomous mode |
| [templates/autonomous-action.md](templates/autonomous-action.md) | Action template | Generate action files for Autonomous mode |
### Phase 4: Specs & Templates
Generate domain-specific specifications and templates
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/04-specs-templates.md](phases/04-specs-templates.md) | Phase 4 execution guide | Understand how to generate domain-specific documentation |
| [specs/reference-docs-spec.md](specs/reference-docs-spec.md) | Reference document specification | IMPORTANT: Follow this spec when generating Specs |
### Phase 5: Validation & Documentation
Verify results and generate final documentation
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/05-validation.md](phases/05-validation.md) | Phase 5 execution guide | Understand how to verify generated Skill completeness |
### Debugging & Troubleshooting
Reference documents when encountering issues
| Issue | Solution Document |
|-------|------------------|
| Generated Skill missing Reference Documents | [specs/reference-docs-spec.md](specs/reference-docs-spec.md) - verify phase-based organization is followed |
| Reference document organization unclear | [specs/reference-docs-spec.md](specs/reference-docs-spec.md) - Core Principles section |
| Generated documentation does not meet quality standards | [../_shared/SKILL-DESIGN-SPEC.md](../_shared/SKILL-DESIGN-SPEC.md) |
### Reference & Background
Documents for in-depth study and design decisions
| Document | Purpose | Notes |
|----------|---------|-------|
| [specs/execution-modes.md](specs/execution-modes.md) | Detailed execution modes specification | Comparison and use cases for Sequential vs Autonomous |
| [specs/cli-integration.md](specs/cli-integration.md) | CLI integration specification | How generated Skills integrate with CLI |
| [specs/scripting-integration.md](specs/scripting-integration.md) | Script integration specification | How to use scripts in Phases |
| [templates/script-template.md](templates/script-template.md) | Script template | Unified Bash + Python template |


@@ -1,114 +1,125 @@
# Phase 1: Requirements Discovery
Collect basic skill information, configuration, and execution mode based on user input.
## Objective
- Collect skill basic information (name, description, trigger words)
- Determine execution mode (Sequential/Autonomous/Hybrid)
- Define phases or actions
- Generate initial configuration file
## Execution Steps
### Step 1: Basic Information Collection
```javascript
const basicInfo = await AskUserQuestion({
questions: [
{
question: "What is the name of the new Skill? (English, lowercase with hyphens, e.g., 'api-docs')",
header: "Skill Name",
multiSelect: false,
options: [
{ label: "Auto-generate", description: "Generate name automatically based on description" },
{ label: "Manual Input", description: "Enter custom name now" }
]
},
{
question: "What is the primary purpose of the Skill?",
header: "Purpose Type",
multiSelect: false,
options: [
{ label: "Document Generation", description: "Generate Markdown/HTML documents (manuals, reports)" },
{ label: "Code Analysis", description: "Analyze code structure, quality, security" },
{ label: "Interactive Management", description: "Manage Issues, tasks, workflows (CRUD operations)" },
{ label: "Data Processing", description: "ETL, format conversion, report generation" }
]
}
]
});
// If manual input is selected, prompt further
if (basicInfo["Skill Name"] === "Manual Input") {
// User will input in "Other"
}
// Infer description template based on purpose type
const purposeTemplates = {
"Document Generation": "Generate {type} documents from {source}",
"Code Analysis": "Analyze {target} for {purpose}",
"Interactive Management": "Manage {entity} with interactive operations",
"Data Processing": "Process {data} and generate {output}"
};
```
### Step 2: Execution Mode Selection
```javascript
const modeInfo = await AskUserQuestion({
questions: [
{
question: "Select execution mode:",
header: "Execution Mode",
multiSelect: false,
options: [
{
label: "Sequential",
description: "Phases execute in fixed order (collect→analyze→generate), suitable for pipeline tasks (recommended)"
},
{
label: "Autonomous",
description: "Dynamically select execution path, suitable for interactive tasks (e.g., Issue management)"
},
{
label: "Hybrid",
description: "Fixed initialization and finalization, flexible interaction in the middle"
}
]
}
]
});
const executionMode = modeInfo["Execution Mode"].includes("Sequential") ? "sequential" :
modeInfo["Execution Mode"].includes("Autonomous") ? "autonomous" : "hybrid";
```
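The label-to-mode mapping above keys on substrings of the selected option label, falling back to `hybrid`. The same logic as a standalone helper (the function name is illustrative):

```javascript
// Map the user-facing mode label to the internal execution_mode value,
// mirroring the ternary chain in Step 2; anything unrecognized falls
// back to "hybrid".
function toExecutionMode(label) {
  if (label.includes("Sequential")) return "sequential";
  if (label.includes("Autonomous")) return "autonomous";
  return "hybrid";
}
```

Substring matching keeps the mapping stable even if the option labels gain extra annotations such as "(recommended)".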
### Step 3: Phase/Action Definition
#### Sequential Mode
```javascript
if (executionMode === "sequential") {
const phaseInfo = await AskUserQuestion({
questions: [
{
question: "How many execution phases are needed?",
header: "Phase Count",
multiSelect: false,
options: [
{ label: "3 Phases (Simple)", description: "Collection → Processing → Output" },
{ label: "5 Phases (Standard)", description: "Collection → Exploration → Analysis → Assembly → Validation" },
{ label: "7 Phases (Complete)", description: "Includes parallel processing, consolidation, iterative optimization" }
]
}
]
});
// Generate phase definitions based on selection
const phaseTemplates = {
"3 Phases": [
{ id: "01-collection", name: "Data Collection" },
{ id: "02-processing", name: "Processing" },
{ id: "03-output", name: "Output Generation" }
],
"5 Phases": [
{ id: "01-collection", name: "Requirements Collection" },
{ id: "02-exploration", name: "Project Exploration" },
{ id: "03-analysis", name: "Deep Analysis" },
{ id: "04-assembly", name: "Document Assembly" },
{ id: "05-validation", name: "Validation" }
],
"7 Phases": [
{ id: "01-collection", name: "Requirements Collection" },
{ id: "02-exploration", name: "Project Exploration" },
{ id: "03-parallel", name: "Parallel Analysis" },
@@ -121,23 +132,23 @@ if (executionMode === "sequential") {
}
```
#### Autonomous Mode
```javascript
if (executionMode === "autonomous") {
const actionInfo = await AskUserQuestion({
questions: [
{
question: "What are the core actions? (Multiple selection allowed)",
header: "Action Definition",
multiSelect: true,
options: [
{ label: "Initialize (init)", description: "Set initial state" },
{ label: "List (list)", description: "Display current item list" },
{ label: "Create (create)", description: "Create new item" },
{ label: "Edit (edit)", description: "Modify existing item" },
{ label: "Delete (delete)", description: "Delete item" },
{ label: "Search (search)", description: "Search/filter items" }
]
}
]
@@ -145,37 +156,37 @@ if (executionMode === "autonomous") {
}
```
### Step 4: Tool and Output Configuration
```javascript
const toolsInfo = await AskUserQuestion({
questions: [
{
question: "Which special tools are needed? (Basic tools are included by default)",
header: "Tool Selection",
multiSelect: true,
options: [
{ label: "User Interaction (AskUserQuestion)", description: "Requires dialogue with the user" },
{ label: "Chrome Screenshot (mcp__chrome__*)", description: "Need web page screenshots" },
{ label: "External Search (mcp__exa__search)", description: "Need to search external information" },
{ label: "No Special Requirements", description: "Use basic tools only" }
]
},
{
question: "What is the output format?",
header: "Output Format",
multiSelect: false,
options: [
{ label: "Markdown", description: "Suitable for documents and reports" },
{ label: "HTML", description: "Suitable for interactive documents" },
{ label: "JSON", description: "Suitable for data and configuration" }
]
}
]
});
```
### Step 5: Generate Configuration File
```javascript
const config = {
@@ -184,41 +195,40 @@ const config = {
description: description,
triggers: triggers,
execution_mode: executionMode,
// Mode-specific configuration
...(executionMode === "sequential" ? {
sequential_config: { phases: phases }
} : {
autonomous_config: {
state_schema: stateSchema,
actions: actions,
termination_conditions: ["user_exit", "error_limit", "task_completed"]
}
}),
allowed_tools: [
"Task", "Read", "Write", "Glob", "Grep", "Bash",
...selectedTools
],
output: {
format: outputFormat.toLowerCase(),
location: `.workflow/.scratchpad/${skillName}-{timestamp}`,
filename_pattern: `{name}-output.${outputFormat === "HTML" ? "html" : outputFormat === "JSON" ? "json" : "md"}`
},
created_at: new Date().toISOString(),
version: "1.0.0"
};
// Write configuration file
const workDir = `.workflow/.scratchpad/skill-gen-${timestamp}`;
Bash(`mkdir -p "${workDir}"`);
Write(`${workDir}/skill-config.json`, JSON.stringify(config, null, 2));
```
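Before Phase 2 consumes `skill-config.json`, it is worth a sanity check that the mode-specific block matches the declared `execution_mode` and that the name follows the lowercase-hyphen convention. A minimal sketch (`validateSkillConfig` is a hypothetical helper, not part of the generator):

```javascript
// Sanity-check a generated skill-config.json: the mode-specific block
// must match the declared execution_mode, and skill_name must be
// lowercase-hyphenated.
function validateSkillConfig(config) {
  const errors = [];
  if (!/^[a-z0-9]+(-[a-z0-9]+)*$/.test(config.skill_name || "")) {
    errors.push("skill_name must be lowercase-hyphenated");
  }
  if (config.execution_mode === "sequential") {
    if (!(config.sequential_config && config.sequential_config.phases && config.sequential_config.phases.length)) {
      errors.push("sequential mode requires sequential_config.phases");
    }
  } else if (config.execution_mode === "autonomous" || config.execution_mode === "hybrid") {
    if (!(config.autonomous_config && config.autonomous_config.actions && config.autonomous_config.actions.length)) {
      errors.push("autonomous/hybrid mode requires autonomous_config.actions");
    }
  } else {
    errors.push("unknown execution_mode: " + config.execution_mode);
  }
  return { valid: errors.length === 0, errors };
}
```

Failing this check early avoids generating a directory tree for a config Phase 3 cannot consume.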
## Next Phase
→ [Phase 2: Structure Generation](02-structure-generation.md)

View File

@@ -1,41 +1,40 @@
# Phase 2: Structure Generation
Create Skill directory structure and entry file based on configuration.
## Objective
- Create standard directory structure
- Generate SKILL.md entry file
- Create corresponding subdirectories based on execution mode
## Execution Steps
### Step 1: Read Configuration
```javascript
const config = JSON.parse(Read(`${workDir}/skill-config.json`));
const skillDir = `.claude/skills/${config.skill_name}`;
```
### Step 2: Create Directory Structure
#### Base Directories (All Modes)
```javascript
// Base infrastructure
Bash(`mkdir -p "${skillDir}/{phases,specs,templates,scripts}"`);
```
#### Execution Mode-Specific Directories
```
config.execution_mode
├─ "sequential"
│ ↓ Creates:
│ └─ phases/ (base directory already included)
│ ├─ _orchestrator.md
│ └─ workflow.json
@@ -43,36 +42,36 @@ config.execution_mode
↓ Creates:
└─ phases/actions/
├─ state-schema.md
└─ *.md (action files)
```
```javascript
// Additional directories for Autonomous/Hybrid mode
if (config.execution_mode === 'autonomous' || config.execution_mode === 'hybrid') {
Bash(`mkdir -p "${skillDir}/phases/actions"`);
}
```
#### Context Strategy-Specific Directories (P0 Enhancement)
```javascript
// ========== P0: Create directories based on context strategy ==========
const contextStrategy = config.context_strategy || 'file';
if (contextStrategy === 'file') {
// File strategy: Create persistent context directory
Bash(`mkdir -p "${skillDir}/.scratchpad-template/context"`);
// Create context template file
Write(
`${skillDir}/.scratchpad-template/context/.gitkeep`,
"# Runtime context storage for file-based strategy"
);
}
// Memory strategy does not require directory creation (in-memory only)
```
**Directory Tree View**:
```
Sequential + File Strategy:
@@ -83,7 +82,7 @@ Sequential + File Strategy:
│ ├── 01-*.md
│ └── 02-*.md
├── .scratchpad-template/
│ └── context/ <- File strategy persistent storage
└── specs/
Autonomous + Memory Strategy:
@@ -96,7 +95,7 @@ Autonomous + Memory Strategy:
└── specs/
```
### Step 3: Generate SKILL.md
```javascript
const skillMdTemplate = `---
@@ -130,8 +129,8 @@ const timestamp = new Date().toISOString().slice(0,19).replace(/[-:T]/g, '');
const workDir = \`${config.output.location.replace('{timestamp}', '${timestamp}')}\`;
Bash(\`mkdir -p "\${workDir}"\`);
${config.execution_mode === 'sequential' ?
`Bash(\`mkdir -p "\${workDir}/sections"\`);` :
`Bash(\`mkdir -p "\${workDir}/state"\`);`}
\`\`\`
@@ -149,53 +148,53 @@ ${generateReferenceTable(config)}
Write(`${skillDir}/SKILL.md`, skillMdTemplate);
```
### Step 4: Architecture Diagram Generation Functions
```javascript
function generateArchitectureDiagram(config) {
if (config.execution_mode === 'sequential') {
return config.sequential_config.phases.map((p, i) =>
`│ Phase ${i+1}: ${p.name.padEnd(15)}${p.output || 'output-' + (i+1) + '.json'}${' '.repeat(10)}`
).join('\n│ ↓' + ' '.repeat(45) + '│\n');
} else {
return `
┌─────────────────────────────────────────────────────────────────┐
│ Orchestrator (State-driven decision-making)
└───────────────┬─────────────────────────────────────────────────┘
┌───────────┼───────────┐
↓ ↓ ↓
${config.autonomous_config.actions.slice(0, 3).map(a =>
`┌─────────┐ `).join('')}
${config.autonomous_config.actions.slice(0, 3).map(a =>
`${a.name.slice(0, 7).padEnd(7)}`).join('')}
${config.autonomous_config.actions.slice(0, 3).map(a =>
`└─────────┘ `).join('')}`;
}
}
function generateDesignPrinciples(config) {
const common = [
"1. **Specification Compliance**: Strictly follow `_shared/SKILL-DESIGN-SPEC.md`",
"2. **Brief Return**: Agent returns path+summary, avoiding context overflow"
];
if (config.execution_mode === 'sequential') {
return [...common,
"3. **Phase Isolation**: Each phase is independently testable",
"4. **Chained Output**: Phase output becomes next phase input"
].join('\n');
} else {
return [...common,
"3. **State-driven**: Explicit state management, dynamic decision-making",
"4. **Action Independence**: Each action has no side-effect dependencies"
].join('\n');
}
}
function generateExecutionFlow(config) {
if (config.execution_mode === 'sequential') {
return '```\n' + config.sequential_config.phases.map((p, i) =>
`├─ Phase ${i+1}: ${p.name}\n│ → Output: ${p.output || 'output.json'}`
).join('\n') + '\n```';
} else {
@@ -216,9 +215,9 @@ function generateExecutionFlow(config) {
function generateOutputStructure(config) {
const base = `${config.output.location}/
├── ${config.execution_mode === 'sequential' ? 'sections/' : 'state.json'}`;
if (config.execution_mode === 'sequential') {
return base + '\n' + config.sequential_config.phases.map(p =>
`│ └── ${p.output || 'section-' + p.id + '.md'}`
).join('\n') + `\n└── ${config.output.filename_pattern}`;
} else {
@@ -230,22 +229,22 @@ function generateOutputStructure(config) {
function generateReferenceTable(config) {
const rows = [];
if (config.execution_mode === 'sequential') {
config.sequential_config.phases.forEach(p => {
rows.push(`| [phases/${p.id}.md](phases/${p.id}.md) | ${p.name} |`);
});
} else {
rows.push(`| [phases/orchestrator.md](phases/orchestrator.md) | Orchestrator |`);
rows.push(`| [phases/state-schema.md](phases/state-schema.md) | State Definition |`);
config.autonomous_config.actions.forEach(a => {
rows.push(`| [phases/actions/${a.id}.md](phases/actions/${a.id}.md) | ${a.name} |`);
});
}
rows.push(`| [specs/${config.skill_name}-requirements.md](specs/${config.skill_name}-requirements.md) | Domain Requirements |`);
rows.push(`| [specs/quality-standards.md](specs/quality-standards.md) | Quality Standards |`);
return `| Document | Purpose |\n|----------|---------|\n` + rows.join('\n');
}
```
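The reference-table generator can be exercised in isolation. A trimmed, standalone copy of its Sequential branch (illustrative only, not the generator's exact code; note the separator row uses single pipes):

```javascript
// Standalone sketch of the Sequential branch of generateReferenceTable:
// build the markdown reference table for a two-phase skill.
function referenceTableFor(phases) {
  const rows = phases.map(
    (p) => `| [phases/${p.id}.md](phases/${p.id}.md) | ${p.name} |`
  );
  return ["| Document | Purpose |", "|----------|---------|", ...rows].join("\n");
}

const table = referenceTableFor([
  { id: "01-collection", name: "Data Collection" },
  { id: "02-output", name: "Output Generation" },
]);
```

Each row links the generated phase file so the emitted SKILL.md stays navigable.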


@@ -40,7 +40,7 @@ Generate comprehensive specifications and templates:
```markdown
# {display_name} Requirements
- When to Use (phase/action reference table)
- Domain Requirements (Functional requirements, Output requirements, Quality requirements)
- Validation Function (JavaScript code)
- Error Handling (recovery strategies)
```
@@ -57,10 +57,10 @@ Generate comprehensive specifications and templates:
**Agent Base** (`templates/agent-base.md`):
```markdown
# Agent Base Template
- Universal Prompt Structure (ROLE, PROJECT CONTEXT, TASK, CONSTRAINTS, OUTPUT_FORMAT, QUALITY_CHECKLIST)
- Variable Description (workDir, output_path)
- Return Format (AgentReturn interface)
- Role Definition Reference (phase/action specific agents)
```
**Action Catalog** (`specs/action-catalog.md`, Autonomous/Hybrid only):
@@ -114,39 +114,39 @@ ${config.execution_mode === 'sequential' ?
config.sequential_config.phases.map((p, i) =>
`| Phase ${i+1} | ${p.name} | ${p.id}.md |`
).join('\n') :
`| Orchestrator | Action selection | orchestrator.md |
| Actions | Action execution | actions/*.md |`}
---
## Domain Requirements
### Functional Requirements
- [ ] Requirement 1: TODO
- [ ] Requirement 2: TODO
- [ ] Requirement 3: TODO
### Output Requirements
- [ ] Format: ${config.output.format}
- [ ] Location: ${config.output.location}
- [ ] Naming: ${config.output.filename_pattern}
### Quality Requirements
- [ ] Completeness: All necessary content exists
- [ ] Consistency: Terminology and format unified
- [ ] Accuracy: Content based on actual analysis
## Validation Function
\`\`\`javascript
function validate${toPascalCase(config.skill_name)}(output) {
const checks = [
// TODO: Add validation rules
{ name: "Format correct", pass: output.format === "${config.output.format}" },
{ name: "Content complete", pass: output.content?.length > 0 }
];
return {
@@ -161,9 +161,9 @@ function validate${toPascalCase(config.skill_name)}(output) {
| Error | Recovery |
|-------|----------|
| Missing input data | Return clear error message |
| Processing timeout | Reduce scope, retry |
| Output validation failure | Log issue, manual review |
`;
Write(`${skillDir}/specs/${config.skill_name}-requirements.md`, domainRequirements);
@@ -171,68 +171,68 @@ Write(`${skillDir}/specs/${config.skill_name}-requirements.md`, domainRequiremen
// Step 2: Generate quality standards
const qualityStandards = `# Quality Standards
Quality assessment standards for ${config.display_name}.
## Quality Dimensions
### 1. Completeness - 25%
| Requirement | Weight | Validation Method |
|------------|--------|-----------------|
| All necessary outputs exist | 10 | File check |
| Content coverage complete | 10 | Content analysis |
| No placeholder remnants | 5 | Text search |
### 2. Consistency - 25%
| Aspect | Check |
|--------|-------|
| Terminology | Use same term for same concept |
| Format | Title levels, code block format consistent |
| Style | Tone and expression unified |
### 3. Accuracy - 25%
| Requirement | Description |
|-------------|------------|
| Data correct | References and data error-free |
| Logic correct | Process and relationship descriptions accurate |
| Code correct | Code examples runnable |
### 4. Usability - 25%
| Metric | Goal |
|--------|------|
| Readability | Clear structure, easy to understand |
| Navigability | Table of contents and links correct |
| Operability | Steps clear, executable |
## Quality Gates
| Gate | Threshold | Action |
|------|-----------|--------|
| Pass | >= 80% | Output final deliverables |
| Review | 60-79% | Process warnings then continue |
| Fail | < 60% | Must fix |
## Issue Classification
### Errors (Must Fix)
- Necessary output missing
- Data error
- Code not runnable
### Warnings (Should Fix)
- Format inconsistency
- Content depth insufficient
- Missing examples
### Info (Nice to Have)
- Optimization suggestions
- Enhancement opportunities
## Automated Checks
@@ -267,44 +267,44 @@ Write(`${skillDir}/specs/quality-standards.md`, qualityStandards);
// Step 3: Generate agent base template
const agentBase = `# Agent Base Template
Agent base template for ${config.display_name}.
## Universal Prompt Structure
\`\`\`
[ROLE] You are {role}, focused on {responsibility}.
[PROJECT CONTEXT]
Skill: ${config.skill_name}
Objective: ${config.description}
[TASK]
{task description}
- Output: {output_path}
- Format: ${config.output.format}
[CONSTRAINTS]
- Constraint 1
- Constraint 2
[OUTPUT_FORMAT]
1. Execute task
2. Return JSON summary information
[QUALITY_CHECKLIST]
- [ ] Output format correct
- [ ] Content complete without omission
- [ ] No placeholder remnants
\`\`\`
## Variable Description
| Variable | Source | Example |
|----------|--------|---------|
| {workDir} | Runtime | .workflow/.scratchpad/${config.skill_name}-xxx |
| {output_path} | Configuration | ${config.output.location}/${config.output.filename_pattern} |
## Return Format
\`\`\`typescript
interface AgentReturn {
@@ -318,14 +318,14 @@ interface AgentReturn {
}
\`\`\`
## Role Definition Reference
${config.execution_mode === 'sequential' ?
config.sequential_config.phases.map((p, i) =>
`- **Phase ${i+1} Agent**: ${p.name} Expert`
).join('\n') :
config.autonomous_config.actions.map(a =>
`- **${a.name} Agent**: ${a.description || a.name + ' Executor'}`
).join('\n')}
`;
@@ -335,7 +335,7 @@ Write(`${skillDir}/templates/agent-base.md`, agentBase);
if (config.execution_mode === 'autonomous' || config.execution_mode === 'hybrid') {
const actionCatalog = `# Action Catalog
Available action catalog for ${config.display_name}.
## Available Actions
@@ -350,9 +350,9 @@ ${config.autonomous_config.actions.map(a =>
\`\`\`mermaid
graph TD
${config.autonomous_config.actions.map((a, i, arr) => {
if (i === 0) return \` ${a.id.replace(/-/g, '_')}[${a.name}]\`;
const prev = arr[i-1];
return \` ${prev.id.replace(/-/g, '_')} --> ${a.id.replace(/-/g, '_')}[${a.name}]\`;
}).join('\n')}
\`\`\`
@@ -369,10 +369,10 @@ ${config.autonomous_config.actions.slice(1).map(a =>
## Selection Priority
When multiple actions' preconditions are met, select based on the following priority:
${config.autonomous_config.actions.map((a, i) =>
\`${i + 1}. \\\`${a.id}\\\` - ${a.name}\`
).join('\n')}
`;


@@ -246,16 +246,16 @@ function collectIssues(fileResults, contentResults) {
const issues = [];
fileResults.filter(f => !f.exists).forEach(f => {
issues.push({ type: 'ERROR', message: `Missing file: ${f.file}` });
});
fileResults.filter(f => f.hasTodo).forEach(f => {
issues.push({ type: 'WARNING', message: `Contains TODO: ${f.file}` });
});
contentResults.forEach(c => {
c.checks.filter(ch => !ch.pass).forEach(ch => {
issues.push({ type: 'WARNING', message: `${c.file}: Missing ${ch.name}` });
});
});
@@ -266,12 +266,12 @@ function generateRecommendations(fileResults, contentResults) {
const recommendations = [];
if (fileResults.some(f => f.hasTodo)) {
recommendations.push('Replace all TODO placeholders with actual content');
}
contentResults.forEach(c => {
if (c.checks.some(ch => !ch.pass)) {
recommendations.push(`Improve structure of ${c.file}`);
}
});
@@ -285,81 +285,81 @@ ${config.description}
## Quick Start
### Trigger Words
${config.triggers.map(t => `- "${t}"`).join('\n')}
### Execution Mode
**${config.execution_mode === 'sequential' ? 'Sequential' : 'Autonomous'}**
${config.execution_mode === 'sequential' ?
\`Phases execute in fixed order:\n\${config.sequential_config.phases.map((p, i) =>
\`\${i + 1}. \${p.name}\`
).join('\n')}\` :
\`Actions selected dynamically by orchestrator:\n\${config.autonomous_config.actions.map(a =>
\`- \${a.name}: \${a.description || ''}\`
).join('\n')}\`}
## Usage
\`\`\`
# Direct trigger
User: ${config.triggers[0]}
# Or use Skill name
User: /skill ${config.skill_name}
\`\`\`
## Output
- **Format**: ${config.output.format}
- **Location**: \`${config.output.location}\`
- **Filename**: \`${config.output.filename_pattern}\`
## Directory Structure
\`\`\`
.claude/skills/${config.skill_name}/
├── SKILL.md # Entry file
├── phases/ # Execution phases
${config.execution_mode === 'sequential' ?
config.sequential_config.phases.map(p => `│ ├── ${p.id}.md`).join('\n') :
`│ ├── orchestrator.md
│ ├── state-schema.md
│ └── actions/
${config.autonomous_config.actions.map(a => `│ ├── ${a.id}.md`).join('\n')}`}
├── specs/ # Specification files
│ ├── ${config.skill_name}-requirements.md
│ ├── quality-standards.md
${config.execution_mode === 'autonomous' ? '│ └── action-catalog.md' : ''}
└── templates/ # Template files
└── agent-base.md
\`\`\`
## Customization
### Modify Execution Logic
Edit phase files in the \`phases/\` directory.
### Adjust Quality Standards
Edit \`specs/quality-standards.md\`.
### Add New ${config.execution_mode === 'sequential' ? 'Phase' : 'Action'}
${config.execution_mode === 'sequential' ?
`1. Create a new phase file in \`phases/\` (e.g., \`03.5-new-step.md\`)
2. Update the execution flow in SKILL.md` :
`1. Create a new action file in \`phases/actions/\`
2. Update \`specs/action-catalog.md\`
3. Add selection logic in \`phases/orchestrator.md\``}
## Related Documents
- [Design Specification](../_shared/SKILL-DESIGN-SPEC.md)
- [Execution Modes Specification](specs/../../../skill-generator/specs/execution-modes.md)
---
@@ -383,20 +383,20 @@ const finalResult = {
validation: report.summary,
next_steps: [
'1. Review generated file structure',
'2. Replace TODO placeholders',
'3. Adjust phase logic based on actual requirements',
'4. Test Skill execution flow',
'5. Update trigger words and descriptions'
]
};
console.log('=== Skill Generation Complete ===');
console.log(`Path: ${skillDir}`);
console.log(`Mode: ${config.execution_mode}`);
console.log(`Status: ${report.summary.status}`);
console.log('');
console.log('Next Steps:');
finalResult.next_steps.forEach(s => console.log(s));
```
@@ -1,111 +1,111 @@
# CLI Integration Specification
CCW CLI integration specification that defines how to properly call external CLI tools within Skills.
---
## Execution Modes
### 1. Synchronous Execution (Blocking)
Suitable for scenarios that need immediate results.
```javascript
// Agent call - synchronous
const result = Task({
subagent_type: 'universal-executor',
prompt: 'Execute task...',
run_in_background: false // Key: synchronous execution
});
// Result immediately available
console.log(result);
```
### 2. Asynchronous Execution (Background)
Suitable for long-running CLI commands.
```javascript
// CLI call - asynchronous
const task = Bash({
command: 'ccw cli -p "..." --tool gemini --mode analysis',
run_in_background: true // Key: background execution
});
// Returns immediately without waiting for result
// task.task_id available for later queries
```
---
## CCW CLI Call Specification
### Basic Command Structure
```bash
ccw cli -p "<PROMPT>" --tool <gemini|qwen|codex> --mode <analysis|write>
```
### Parameter Description
| Parameter | Required | Description |
|-----------|----------|-------------|
| `-p "<prompt>"` | Yes | Prompt text (use double quotes) |
| `--tool <tool>` | Yes | Tool selection: gemini, qwen, codex |
| `--mode <mode>` | Yes | Execution mode: analysis, write |
| `--cd <path>` | - | Working directory |
| `--includeDirs <dirs>` | - | Additional directories (comma-separated) |
| `--resume [id]` | - | Resume session |
### Mode Selection
```
- Analysis/Documentation tasks?
→ --mode analysis (read-only)
- Implementation/Modification tasks?
→ --mode write (read-write)
```
---
## Agent Types and Selection
### universal-executor
General-purpose executor, the most commonly used agent type.
```javascript
Task({
subagent_type: 'universal-executor',
prompt: `
Execute task:
1. Read configuration file
2. Analyze dependencies
3. Generate report to ${outputPath}
`,
run_in_background: false
});
```
**Applicable Scenarios**:
- Multi-step task execution
- File operations (read/write/edit)
- Tasks that require tool invocation
### Explore
Code exploration agent for quick codebase understanding.
```javascript
Task({
subagent_type: 'Explore',
prompt: `
Explore src/ directory:
- Identify main modules
- Understand directory structure
- Find entry points
Thoroughness: medium
`,
@@ -113,104 +113,104 @@ Thoroughness: medium
});
```
**Applicable Scenarios**:
- Codebase exploration
- File discovery
- Structure understanding
### cli-explore-agent
Deep code analysis agent.
```javascript
Task({
subagent_type: 'cli-explore-agent',
prompt: `
Deep analysis of src/auth/ module:
- Authentication flow
- Session management
- Security mechanisms
`,
run_in_background: false
});
```
**Applicable Scenarios**:
- Deep code understanding
- Design pattern identification
- Complex logic analysis
---
## Session Management
### Session Recovery
```javascript
// Save session ID
const session = Bash({
command: 'ccw cli -p "初始分析..." --tool gemini --mode analysis',
command: 'ccw cli -p "Initial analysis..." --tool gemini --mode analysis',
run_in_background: true
});
// Resume later
const continuation = Bash({
command: `ccw cli -p "继续分析..." --tool gemini --mode analysis --resume ${session.id}`,
command: `ccw cli -p "Continue analysis..." --tool gemini --mode analysis --resume ${session.id}`,
run_in_background: true
});
```
### Multi-Session Merge
```javascript
// Merge context from multiple sessions
const merged = Bash({
command: `ccw cli -p "汇总分析..." --tool gemini --mode analysis --resume ${id1},${id2}`,
command: `ccw cli -p "Aggregate analysis..." --tool gemini --mode analysis --resume ${id1},${id2}`,
run_in_background: true
});
```
---
## CLI Integration Patterns in Skills
### Pattern 1: Single Call
Simple tasks completed in one call.
```javascript
// Phase execution
async function executePhase(context) {
const result = Bash({
command: `ccw cli -p "
PURPOSE: Analyze project structure
TASK: Identify modules, dependencies, entry points
MODE: analysis
CONTEXT: @src/**/*
EXPECTED: JSON format structure report
" --tool gemini --mode analysis --cd ${context.projectRoot}`,
run_in_background: true,
timeout: 600000
});
// Wait for completion
return await waitForCompletion(result.task_id);
}
```
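The `waitForCompletion` helper used above is assumed but never defined in this spec. A minimal polling sketch (the `getTaskStatus` accessor is a hypothetical injection point, since the real task-status API is not documented here):

```javascript
// Poll a background task until it finishes or the deadline passes.
// getTaskStatus is a hypothetical accessor returning { done, result }.
async function waitForCompletion(taskId, getTaskStatus, { intervalMs = 5000, timeoutMs = 600000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const status = await getTaskStatus(taskId);
    if (status.done) return status.result;
    // Back off between polls instead of spinning.
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Task ${taskId} did not complete within ${timeoutMs}ms`);
}
```

Injecting the status accessor keeps the loop testable and independent of the CCW runtime.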
### Pattern 2: Chained Calls
Multi-step tasks where each step depends on previous results.
```javascript
async function executeChain(context) {
// Step 1: Collect
const collectId = await runCLI('collect', context);
// Step 2: Analyze (depends on Step 1)
const analyzeId = await runCLI('analyze', context, `--resume ${collectId}`);
// Step 3: Generate (depends on Step 2)
const generateId = await runCLI('generate', context, `--resume ${analyzeId}`);
return generateId;
@@ -218,9 +218,9 @@ async function executeChain(context) {
async function runCLI(step, context, resumeFlag = '') {
const prompts = {
collect: 'PURPOSE: Collect code files...',
analyze: 'PURPOSE: Analyze code patterns...',
generate: 'PURPOSE: Generate documentation...'
};
const result = Bash({
@@ -232,9 +232,9 @@ async function runCLI(step, context, resumeFlag = '') {
}
```
### Pattern 3: Parallel Calls
Independent tasks executed in parallel.
```javascript
async function executeParallel(context) {
@@ -244,15 +244,15 @@ async function executeParallel(context) {
{ type: 'patterns', tool: 'qwen' }
];
// Start tasks in parallel
const taskIds = tasks.map(task =>
Bash({
command: `ccw cli -p "分析 ${task.type}..." --tool ${task.tool} --mode analysis`,
command: `ccw cli -p "Analyze ${task.type}..." --tool ${task.tool} --mode analysis`,
run_in_background: true
}).task_id
);
// Wait for all to complete
const results = await Promise.all(
taskIds.map(id => waitForCompletion(id))
);
@@ -261,9 +261,9 @@ async function executeParallel(context) {
}
```
### Pattern 4: Fallback Chain
Automatically switch tools on failure.
```javascript
async function executeWithFallback(context) {
@@ -299,9 +299,9 @@ async function runWithTool(tool, context) {
---
## Prompt Template Integration
### Reference Protocol Templates
```bash
# Analysis mode - use --rule to auto-load protocol and template (appended to prompt)
@@ -315,7 +315,7 @@ CONSTRAINTS: ...
..." --tool codex --mode write --rule development-feature
```
### Dynamic Template Building
```javascript
function buildPrompt(config) {
@@ -334,21 +334,21 @@ CONSTRAINTS: ${constraints || ''}
---
## Timeout Configuration
### Recommended Timeout Values
| Task Type | Timeout (ms) | Description |
|-----------|--------------|-------------|
| Quick analysis | 300000 | 5 minutes |
| Standard analysis | 600000 | 10 minutes |
| Deep analysis | 1200000 | 20 minutes |
| Code generation | 1800000 | 30 minutes |
| Complex tasks | 3600000 | 60 minutes |
### Special Codex Handling
Codex requires a longer timeout (3x the base value is recommended).
```javascript
const timeout = tool === 'codex' ? baseTimeout * 3 : baseTimeout;
@@ -362,17 +362,17 @@ Bash({
---
## Error Handling
### Common Errors
| Error | Cause | Handling |
|-------|-------|---------|
| ETIMEDOUT | Network timeout | Retry or switch tool |
| Exit code 1 | Command execution failed | Check parameters, switch tool |
| Context overflow | Input context too large | Reduce input scope |
### Retry Strategy
```javascript
async function executeWithRetry(command, maxRetries = 3) {
@@ -391,7 +391,7 @@ async function executeWithRetry(command, maxRetries = 3) {
lastError = error;
console.log(`Attempt ${attempt} failed: ${error.message}`);
// Exponential backoff
if (attempt < maxRetries) {
await sleep(Math.pow(2, attempt) * 1000);
}
@@ -404,30 +404,30 @@ async function executeWithRetry(command, maxRetries = 3) {
---
## Best Practices
### 1. run_in_background Rule
```
Agent calls (Task):
run_in_background: false → Synchronous, get result immediately
CLI calls (Bash + ccw cli):
run_in_background: true → Asynchronous, run in background
```
### 2. Tool Selection
```
Analysis tasks: gemini > qwen
Generation tasks: codex > gemini > qwen
Code modification: codex > gemini
```
### 3. Session Management
- Use `--resume` for related tasks to maintain context
- Do not use `--resume` for independent tasks
### 4. Prompt Specification
@@ -435,8 +435,8 @@ CLI 调用 (Bash + ccw cli):
- Use `--rule <template>` to auto-append protocol + template to prompt
- Template name format: `category-function` (e.g., `analysis-code-patterns`)
### 5. Result Processing
- Persist important results to workDir
- Brief returns: path + summary, avoid context overflow
- JSON format convenient for downstream processing
@@ -1,40 +1,40 @@
# Execution Modes Specification
Detailed specification definitions for two Skill execution modes.
---
## Mode Overview
| Feature | Sequential (Fixed Order) | Autonomous (Dynamic) |
|---------|--------------------------|----------------------|
| Execution Order | Fixed (numeric prefix) | Dynamic (orchestrator decision) |
| Phase Dependencies | Strong dependencies | Weak/no dependencies |
| State Management | Implicit (phase output) | Explicit (state file) |
| Use Cases | Pipeline tasks | Interactive tasks |
| Complexity | Low | Medium-High |
| Extensibility | Insert sub-phases | Add new actions |
---
## Mode 1: Sequential (Fixed Order Mode)
### Definition
Phases execute linearly in fixed order, with each phase's output serving as input to the next phase.
### Directory Structure
```
phases/
├── 01-{first-step}.md
├── 02-{second-step}.md
├── 02.5-{sub-step}.md # Optional: sub-phase
├── 03-{third-step}.md
└── ...
```
### Execution Flow
```
┌─────────┐ ┌─────────┐ ┌─────────┐
@@ -45,33 +45,33 @@ phases/
output1.json output2.md output3.md
```
### Phase File Specification
```markdown
# Phase N: {Phase Name}
{One-sentence description}
## Objective
{Detailed objective}
## Input
- Dependencies: {Previous phase output}
- Configuration: {Configuration file}
## Execution Steps
### Step 1: {Step}
{Execution code or description}
### Step 2: {Step}
{Execution code or description}
## Output
- **File**: `{Output file}`
- **Format**: {JSON/Markdown}
## Next Phase
@@ -79,74 +79,74 @@ phases/
→ [Phase N+1: xxx](0N+1-xxx.md)
```
### Applicable Scenarios
- **Document Generation**: Collect → Analyze → Assemble → Optimize
- **Code Analysis**: Scan → Parse → Report
- **Data Processing**: Extract → Transform → Load
### Advantages
- Clear logic, easy to understand
- Simple debugging, can validate phase by phase
- Predictable output
### Disadvantages
- Low flexibility
- Difficult to handle branching logic
- Limited user interaction
---
## Mode 2: Autonomous (Dynamic Mode)
### Definition
No fixed execution order. The orchestrator dynamically selects the next action based on current state.
### Directory Structure
```
phases/
├── orchestrator.md # Orchestrator: core decision logic
├── state-schema.md # State structure definition
└── actions/ # Independent actions (no order)
├── action-{a}.md
├── action-{b}.md
├── action-{c}.md
└── ...
```
### Core Components
#### 1. Orchestrator
```markdown
# Orchestrator
## Role
Select and execute the next action based on current state.
## State Reading
Read state file: `{workDir}/state.json`
## Decision Logic
```javascript
function selectNextAction(state) {
// 1. Check termination conditions
if (state.status === 'completed') return null;
if (state.error_count > MAX_RETRIES) return 'action-abort';
// 2. Select action based on state
if (!state.initialized) return 'action-init';
if (state.pending_items.length > 0) return 'action-process';
if (state.needs_review) return 'action-review';
// 3. Default action
return 'action-complete';
}
```
@@ -158,42 +158,42 @@ while (true) {
state = readState();
action = selectNextAction(state);
if (!action) break;
result = executeAction(action, state);
updateState(result);
}
```
```
#### 2. State Schema
```markdown
# State Schema
## State File
Location: `{workDir}/state.json`
## Structure Definition
```typescript
interface SkillState {
// Metadata
skill_name: string;
started_at: string;
updated_at: string;
// Execution state
status: 'pending' | 'running' | 'completed' | 'failed';
current_action: string | null;
completed_actions: string[];
// Business data
context: Record<string, any>;
pending_items: any[];
results: Record<string, any>;
// Error tracking
errors: Array<{
action: string;
message: string;
@@ -203,7 +203,7 @@ interface SkillState {
}
```
## Initial State
```json
{
@@ -222,23 +222,23 @@ interface SkillState {
```
```
#### 3. Action
```markdown
# Action: {action-name}
## Purpose
{Action purpose}
## Preconditions
- [ ] Condition 1
- [ ] Condition 2
## Execution
{Execution logic}
## State Updates
@@ -247,19 +247,19 @@ return {
completed_actions: [...state.completed_actions, 'action-name'],
results: {
...state.results,
action_name: { /* result */ }
},
// Other state updates
};
```
## Next Actions (Hints)
- On success: `action-{next}`
- On failure: `action-retry` or `action-abort`
```
### Execution Flow
```
┌─────────────────────────────────────────────────────────────────┐
@@ -289,9 +289,9 @@ return {
└─────────────────────────────────────────────────────────────────┘
```
### Action Catalog
Defined in `specs/action-catalog.md`:
```markdown
# Action Catalog
@@ -300,11 +300,11 @@ return {
| Action | Purpose | Preconditions | Effects |
|--------|---------|---------------|---------|
| action-init | Initialize state | status=pending | status=running |
| action-process | Process pending items | pending_items.length>0 | pending_items-- |
| action-review | User review | needs_review=true | needs_review=false |
| action-complete | Complete task | pending_items.length=0 | status=completed |
| action-abort | Abort task | error_count>MAX | status=failed |
## Action Dependencies Graph
@@ -319,78 +319,81 @@ graph TD
```
```
### Applicable Scenarios
- **Interactive Tasks**: Q&A, dialog, form filling
- **State Machine Tasks**: Issue management, workflow approval
- **Exploratory Tasks**: Debugging, diagnosis, search
### Advantages
- Highly flexible, adapts to dynamic requirements
- Supports complex branching logic
- Easy to extend with new actions
### Disadvantages
- High complexity
- State management overhead
- Harder to debug
---
## Mode Selection Guide
### Decision Flow
```
Analyze user requirements
┌────────────────────────────┐
│ Are there strong           │
│ dependencies between       │
│ phases?                    │
└────────────────────────────┘
  ├── Yes → Sequential
  └── No → continue below
┌────────────────────────────┐
│ Do you need dynamic        │
│ response to user intent?   │
└────────────────────────────┘
  ├── Yes → Autonomous
  └── No → Sequential
```
### Quick Decision Table
| Question | Sequential | Autonomous |
|----------|------------|------------|
| Is output structure fixed? | Yes | No |
| Do you need multi-turn user interaction? | No | Yes |
| Can phases be skipped/repeated? | No | Yes |
| Is there complex branching logic? | No | Yes |
| Should debugging be simple? | Yes | No |
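For tooling that wants to automate this table, the decision can be sketched as a small helper (hypothetical; the spec itself only defines the table):

```javascript
// Suggest an execution mode from quick-decision answers (all booleans).
function suggestMode({ multiTurnInteraction = false, skippablePhases = false, complexBranching = false } = {}) {
  // Any of these traits pushes the skill toward the autonomous mode.
  if (multiTurnInteraction || skippablePhases || complexBranching) return 'autonomous';
  // Fixed output structure and simple debugging favor sequential.
  return 'sequential';
}
```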
---
## Hybrid Mode
Some complex Skills may need to use both modes in combination:
```
phases/
├── 01-init.md # Sequential: initialization
├── 02-orchestrator.md # Autonomous: core interaction loop
│ └── actions/
│ ├── action-a.md
│ └── action-b.md
└── 03-finalize.md # Sequential: finalization
```
**Applicable Scenarios**:
- Initialization and finalization are fixed, middle interaction is flexible
- Multi-phase tasks where certain phases need dynamic decisions
@@ -0,0 +1,271 @@
# Reference Documents Generation Specification
> **IMPORTANT**: This specification defines how to organize and present reference documents in generated skills to avoid duplication issues.
## Core Principles
### 1. Phase-Based Organization
Reference documents must be organized by skill execution phases, not as a flat list.
**Wrong Approach** (Flat List):
```markdown
## Reference Documents
| Document | Purpose |
|----------|---------|
| doc1.md | ... |
| doc2.md | ... |
| doc3.md | ... |
```
**Correct Approach** (Phase-Based Navigation):
```markdown
## Reference Documents by Phase
### Phase 1: Analysis
Documents to refer to when executing Phase 1
| Document | Purpose | When to Use |
|----------|---------|-------------|
| doc1.md | ... | Understand concept x |
### Phase 2: Implementation
Documents to refer to when executing Phase 2
| Document | Purpose | When to Use |
|----------|---------|-------------|
| doc2.md | ... | Implement feature y |
```
### 2. Four Standard Groupings
Reference documents must be divided into the following four groupings:
| Grouping | When to Use | Content |
|----------|------------|---------|
| **Phase N: [Name]** | When executing this phase | All documents related to this phase |
| **Debugging** | When encountering problems | Issue to documentation mapping table |
| **Reference** | When learning in depth | Templates, original implementations, best practices |
| (Optional) **Quick Links** | Quick navigation | The 5-7 most frequently consulted documents |
### 3. Each Document Entry Must Include
```
| [path](path) | Purpose | When to Use |
```
**When to Use Column Requirements**:
- Clearly explain the usage scenario
- Describe what problem the document solves
- Do not simply say "refer to" or "learn about"
**Good Examples**:
- "Understand issue data structure"
- "Learn about the Planning Agent role"
- "Check if implementation meets quality standards"
- "Quickly locate the reason for status anomalies"
**Poor Examples**:
- "Reference document"
- "More information"
- "Background knowledge"
### 4. Embedding Document Guidance in Execution Flow
In the "Execution Flow" section, each Phase description should include "Refer to" hints:
```markdown
### Phase 2: Planning Pipeline
**Refer to**: action-plan.md, subagent-roles.md
→ Detailed flow description...
```
### 5. Quick Troubleshooting Reference Table
Should contain common issue to documentation mapping:
```markdown
### Debugging & Troubleshooting
| Issue | Solution Document |
|-------|------------------|
| Phase execution failed | Refer to corresponding phase documentation |
| Output format incorrect | specs/quality-standards.md |
| Data validation failed | specs/schema-validation.md |
```
---
## Generation Rules
### Rule 1: Document Classification Recognition
Automatically generate groupings based on skill phases:
```javascript
const phaseEmojis = {
'discovery': '📋', // Collection, exploration
'generation': '🔧', // Generation, creation
'analysis': '🔍', // Analysis, review
'implementation': '⚙️', // Implementation, execution
'validation': '✅', // Validation, testing
'completion': '🏁', // Completion, wrap-up
};
// Generate a section for each phase
phases.forEach((phase, index) => {
const emoji = phaseEmojis[phase.type] || '📌';
const title = `### ${emoji} Phase ${index + 1}: ${phase.name}`;
// List all documents related to this phase
});
```
### Rule 2: Document to Phase Mapping
In config, specs and templates should be annotated with their belonging phases:
```json
{
"specs": [
{
"path": "specs/issue-handling.md",
"purpose": "Issue data specification",
"phases": ["phase-2", "phase-3"], // Which phases this spec is related to
"context": "Understand issue structure and validation rules"
}
]
}
```
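With that annotation in place, selecting the documents for a given phase reduces to a filter (a sketch consistent with the generation algorithm shown later in this spec):

```javascript
// Return the spec documents annotated as relevant to phase N (1-based).
function docsForPhase(specs, phaseNum) {
  // Documents without a phases annotation are never phase-specific.
  return specs.filter(spec => (spec.phases || []).includes(`phase-${phaseNum}`));
}
```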
### Rule 3: Priority and Mandatory Reading
Use visual symbols to distinguish document importance:
```markdown
| Document | When | Notes |
|----------|------|-------|
| spec.md | **Must Read Before Execution** | Mandatory prerequisite |
| action.md | Refer to during execution | Operation guide |
| template.md | Reference for learning | Optional in-depth |
```
### Rule 4: Avoid Duplication
- **Mandatory Prerequisites** section: List mandatory P0 specifications
- **Reference Documents by Phase** section: List all documents (including mandatory prerequisites)
- Documents in both sections can overlap, but their purposes differ:
- Prerequisites: Emphasize "must read first"
- Reference: Provide "complete navigation"
---
## Implementation Example
### Sequential Skill Example
```markdown
## Mandatory Prerequisites
| Document | Purpose | When |
|----------|---------|------|
| [specs/issue-handling.md](specs/issue-handling.md) | Issue data specification | **Must Read Before Execution** |
| [specs/solution-schema.md](specs/solution-schema.md) | Solution structure | **Must Read Before Execution** |
---
## Reference Documents by Phase
### Phase 1: Issue Collection
Documents to refer to when executing Phase 1
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/actions/action-list.md](phases/actions/action-list.md) | Issue loading logic | Understand how to collect issues |
| [specs/issue-handling.md](specs/issue-handling.md) | Issue data specification | Verify issue format **Required Reading** |
### Phase 2: Planning
Documents to refer to when executing Phase 2
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/actions/action-plan.md](phases/actions/action-plan.md) | Planning process | Understand issue to solution transformation |
| [specs/solution-schema.md](specs/solution-schema.md) | Solution structure | Verify solution JSON format **Required Reading** |
### Debugging & Troubleshooting
| Issue | Solution Document |
|-------|------------------|
| Phase 1 failed | [phases/actions/action-list.md](phases/actions/action-list.md) |
| Planning output incorrect | [phases/actions/action-plan.md](phases/actions/action-plan.md) + [specs/solution-schema.md](specs/solution-schema.md) |
| Data validation failed | [specs/issue-handling.md](specs/issue-handling.md) |
### Reference & Background
| Document | Purpose | Notes |
|----------|---------|-------|
| [../issue-plan.md](../../.codex/prompts/issue-plan.md) | Original implementation | Planning Agent system prompt |
```
---
## Generation Algorithm
```javascript
function generateReferenceDocuments(config) {
let result = '## Reference Documents by Phase\n\n';
// Generate a section for each phase
const phases = config.phases || config.actions || [];
phases.forEach((phase, index) => {
const phaseNum = index + 1;
const emoji = getPhaseEmoji(phase.type);
const title = phase.display_name || phase.name;
result += `### ${emoji} Phase ${phaseNum}: ${title}\n`;
result += `Documents to refer to when executing Phase ${phaseNum}\n\n`;
// Find all documents related to this phase
const docs = config.specs.filter(spec =>
(spec.phases || []).includes(`phase-${phaseNum}`) ||
matchesByName(spec.path, phase.name)
);
if (docs.length > 0) {
result += '| Document | Purpose | When to Use |\n';
result += '|----------|---------|-------------|\n';
docs.forEach(doc => {
const required = doc.phases && doc.phases[0] === `phase-${phaseNum}` ? ' **Required Reading**' : '';
result += `| [${doc.path}](${doc.path}) | ${doc.purpose} | ${doc.context}${required} |\n`;
});
result += '\n';
}
});
// Troubleshooting section
result += '### Debugging & Troubleshooting\n\n';
result += generateDebuggingTable(config);
// In-depth reference learning
result += '### Reference & Background\n\n';
result += generateReferenceTable(config);
return result;
}
```
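The helpers referenced above (`getPhaseEmoji`, `generateDebuggingTable`, `generateReferenceTable`) are left undefined by this spec. A minimal `getPhaseEmoji` consistent with the mapping in Rule 1 might look like:

```javascript
// Phase-type to emoji mapping, mirroring Rule 1.
const PHASE_EMOJIS = {
  discovery: '📋',
  generation: '🔧',
  analysis: '🔍',
  implementation: '⚙️',
  validation: '✅',
  completion: '🏁',
};

function getPhaseEmoji(type) {
  return PHASE_EMOJIS[type] || '📌'; // fallback for unknown phase types
}
```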
---
## Checklist
When generating skill's SKILL.md, the reference documents section should satisfy:
- [ ] Has clear "## Reference Documents by Phase" heading
- [ ] Each phase has a corresponding section (identified with symbols)
- [ ] Each document entry includes "When to Use" column
- [ ] Includes "Debugging & Troubleshooting" section
- [ ] Includes "Reference & Background" section
- [ ] Mandatory reading documents are marked with **bold** text
- [ ] Execution Flow section includes "→ **Refer to**: ..." guidance
- [ ] Avoid overly long document lists (maximum 5-8 documents per phase)
@@ -1,18 +1,18 @@
# Scripting Integration Specification
Skill scripting integration specification that defines how to use external scripts for deterministic task execution.
## Core Principles
1. **Convention over configuration**: the file name is the script ID, the file extension selects the runtime
2. **Minimal invocation**: a script call completes in one line
3. **Standard input/output**: command-line parameters as input, JSON on standard output
## Directory Structure
```
.claude/skills/<skill-name>/
├── scripts/ # Scripts directory
│ ├── process-data.py # id: process-data
│ ├── validate-output.sh # id: validate-output
│ └── transform-json.js # id: transform-json
@@ -20,17 +20,17 @@
└── specs/
```
## Naming Conventions
| Extension | Runtime | Execution Command |
|-----------|---------|-------------------|
| `.py` | python | `python scripts/{id}.py` |
| `.sh` | bash | `bash scripts/{id}.sh` |
| `.js` | node | `node scripts/{id}.js` |
## Declaration Format
Declare in the `## Scripts` section of Phase or Action files:
```yaml
## Scripts
@@ -39,27 +39,27 @@
- validate-output
```
## Invocation Syntax
### Basic Call
```javascript
const result = await ExecuteScript('script-id', { key: value });
```
### Parameter Name Conversion
Keys in the JS object are **automatically converted** to `kebab-case` command-line parameters:
| JS Key Name | Converted Parameter |
|-------------|-------------------|
| `input_path` | `--input-path` |
| `output_dir` | `--output-dir` |
| `max_count` | `--max-count` |
Scripts receive the value via `--input-path`; callers pass it as `input_path`.
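For reference, the conversion can be sketched as a small helper (`toKebabArgs` is a hypothetical name; `ExecuteScript` performs this conversion inline):

```javascript
// Sketch of the snake_case -> kebab-case argument conversion.
function toKebabArgs(inputs) {
  return Object.entries(inputs)
    .map(([key, value]) => `--${key.replace(/_/g, '-')} "${value}"`)
    .join(' ');
}

// toKebabArgs({ input_path: '/data/in.json', max_count: 5 })
// -> --input-path "/data/in.json" --max-count "5"
```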
### Complete Call (with Error Handling)
```javascript
const result = await ExecuteScript('process-data', {
@@ -68,46 +68,46 @@ const result = await ExecuteScript('process-data', {
});
if (!result.success) {
throw new Error(`Script execution failed: ${result.stderr}`);
}
const { output_file, count } = result.outputs;
```
## 返回格式
## Return Format
```typescript
interface ScriptResult {
success: boolean; // exit code === 0
stdout: string; // Complete standard output
stderr: string; // Complete standard error
outputs: { // JSON parsed from last line of stdout
[key: string]: any;
};
}
```
## Script Writing Specification
### Input: Command-line Parameters
```bash
# Python: argparse
--input-path /path/to/file --threshold 0.9
# Bash: manual parsing
--input-path /path/to/file
```
### Output: Standard Output JSON
The script must print a single-line JSON object as its final line of stdout:
```json
{"output_file": "/tmp/result.json", "count": 42}
```
### Python Template
```python
import argparse
@@ -119,10 +119,10 @@ def main():
parser.add_argument('--threshold', type=float, default=0.9)
args = parser.parse_args()
# Execution logic...
result_path = "/tmp/result.json"
# Output JSON
print(json.dumps({
"output_file": result_path,
"items_processed": 100
@@ -132,12 +132,12 @@ if __name__ == '__main__':
main()
```
### Bash Template
```bash
#!/bin/bash
# Parse parameters
while [[ "$#" -gt 0 ]]; do
case $1 in
--input-path) INPUT_PATH="$2"; shift ;;
@@ -146,21 +146,21 @@ while [[ "$#" -gt 0 ]]; do
shift
done
# Execution logic...
LOG_FILE="/tmp/process.log"
echo "Processing $INPUT_PATH" > "$LOG_FILE"
# Output JSON
echo "{\"log_file\": \"$LOG_FILE\", \"status\": \"done\"}"
```
## ExecuteScript Implementation
```javascript
async function ExecuteScript(scriptId, inputs = {}) {
const skillDir = GetSkillDir();
// Find script file
const extensions = ['.py', '.sh', '.js'];
let scriptPath, runtime;
@@ -177,22 +177,22 @@ async function ExecuteScript(scriptId, inputs = {}) {
throw new Error(`Script not found: ${scriptId}`);
}
// Build command-line parameters
const args = Object.entries(inputs)
.map(([k, v]) => `--${k.replace(/_/g, '-')} "${v}"`)
.join(' ');
// Execute script
const cmd = `${runtime} "${scriptPath}" ${args}`;
const { stdout, stderr, exitCode } = await Bash(cmd);
// Parse output
let outputs = {};
try {
const lastLine = stdout.trim().split('\n').pop();
outputs = JSON.parse(lastLine);
} catch (e) {
// Unable to parse JSON, keep empty object
}
return {
@@ -204,62 +204,62 @@ async function ExecuteScript(scriptId, inputs = {}) {
}
```
## Use Cases
### Suitable for Scripting
- Data processing and transformation
- File format conversion
- Batch file operations
- Complex calculation logic
- Call external tools/libraries
### Not Suitable for Scripting
- Tasks requiring user interaction
- Tasks needing access to Claude tools
- Simple file read/write
- Tasks requiring dynamic decision-making
## Path Conventions
### Script Path
Script paths are relative to the directory containing `SKILL.md` (skill root directory):
```
.claude/skills/<skill-name>/ # Skill root directory (SKILL.md location)
├── SKILL.md
├── scripts/ # Scripts directory
│ └── process-data.py # Relative path: scripts/process-data.py
└── phases/
```
`ExecuteScript` automatically finds scripts from skill root directory:
```javascript
// Actually executes: python .claude/skills/<skill-name>/scripts/process-data.py
await ExecuteScript('process-data', { ... });
```
### Output Directory
**Recommended**: pass the output directory from the caller rather than hardcoding `/tmp` in the script:
```javascript
// Specify output directory when calling (in workflow working directory)
const result = await ExecuteScript('process-data', {
input_path: `${workDir}/data.json`,
output_dir: `${workDir}/output` // Explicitly specify output location
});
```
Scripts should accept `--output-dir` parameter instead of hardcoding output paths.
## Best Practices
1. **Single Responsibility**: Each script does one thing
2. **No Side Effects**: Scripts should not modify global state
3. **Idempotence**: Same input produces same output
4. **Clear Errors**: Error messages to stderr, normal output to stdout
5. **Fail Fast**: Exit immediately on parameter validation failure
6. **Parameterized Paths**: Output paths specified by caller, not hardcoded

View File

@@ -1,102 +1,102 @@
# Skill Requirements Specification
Requirements collection specification for new Skill creation.
---
## Required Information
### 1. Basic Information
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `skill_name` | string | Yes | Skill identifier (lowercase with hyphens) |
| `display_name` | string | Yes | Display name |
| `description` | string | Yes | One-sentence description |
| `triggers` | string[] | Yes | List of trigger keywords |
### 2. Execution Mode
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `execution_mode` | enum | Yes | `sequential` \| `autonomous` \| `hybrid` |
| `phase_count` | number | Conditional | Number of phases in Sequential mode |
| `action_count` | number | Conditional | Number of actions in Autonomous mode |
### 2.5 Context Strategy (P0 Enhancement)
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `context_strategy` | enum | Yes | `file` \| `memory` |
**Strategy Comparison**:
| Strategy | Persistence | Debuggable | Recoverable | Applicable Scenarios |
|----------|-------------|-----------|------------|----------------------|
| `file` | Yes | Yes | Yes | Complex multi-phase tasks (recommended) |
| `memory` | No | No | No | Simple linear tasks |
### 2.6 LLM Integration Configuration (P1 Enhancement)
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `llm_integration` | object | Optional | LLM invocation configuration |
| `llm_integration.enabled` | boolean | - | Enable LLM invocation |
| `llm_integration.default_tool` | enum | - | `gemini` \| `qwen` \| `codex` |
| `llm_integration.fallback_chain` | string[] | - | Fallback tool chain on failure |
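A fallback chain such as `['gemini', 'qwen', 'codex']` is consumed by trying each tool in order until one succeeds (illustrative sketch; `invokeTool` is a hypothetical callback, not an API defined by this spec):

```javascript
// Try each tool in fallback_chain until one succeeds;
// aggregate per-tool errors if all of them fail.
async function invokeWithFallback(chain, prompt, invokeTool) {
  const errors = [];
  for (const tool of chain) {
    try {
      return await invokeTool(tool, prompt);
    } catch (e) {
      errors.push(`${tool}: ${e.message}`);
    }
  }
  throw new Error(`All LLM tools failed:\n${errors.join('\n')}`);
}
```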
### 3. Tool Dependencies
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `allowed_tools` | string[] | Yes | List of allowed tools |
| `mcp_tools` | string[] | Optional | Required MCP tools |
### 4. Output Configuration
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `output_format` | enum | Yes | `markdown` \| `html` \| `json` |
| `output_location` | string | Yes | Output directory pattern |
---
## Configuration File Structure
```typescript
interface SkillConfig {
// Basic information
skill_name: string; // "my-skill"
display_name: string; // "My Skill"
description: string; // "One-sentence description"
triggers: string[]; // ["keyword1", "keyword2"]
// Execution mode
execution_mode: 'sequential' | 'autonomous' | 'hybrid';
// Context strategy (P0 Enhancement)
context_strategy: 'file' | 'memory'; // Default: 'file'
// LLM Integration Configuration (P1 Enhancement)
llm_integration?: {
enabled: boolean; // Enable LLM invocation
default_tool: 'gemini' | 'qwen' | 'codex';
fallback_chain: string[]; // ['gemini', 'qwen', 'codex']
mode: 'analysis' | 'write'; // Default mode
};
// Sequential mode configuration
sequential_config?: {
phases: Array<{
id: string; // "01-init"
name: string; // "Initialization"
description: string; // "Collect initial configuration"
input: string[]; // Input dependencies
output: string; // Output file
}>;
};
// Autonomous mode configuration
autonomous_config?: {
state_schema: {
fields: Array<{
@@ -108,31 +108,31 @@ interface SkillConfig {
actions: Array<{
id: string; // "action-init"
name: string; // "Initialize"
description: string; // "Initialize state"
preconditions: string[]; // Preconditions
effects: string[]; // Execution effects
}>;
termination_conditions: string[];
};
// Tool dependencies
allowed_tools: string[]; // ["Task", "Read", "Write", ...]
mcp_tools?: string[]; // ["mcp__chrome__*"]
// Output configuration
output: {
format: 'markdown' | 'html' | 'json';
location: string; // ".workflow/.scratchpad/{skill}-{timestamp}"
filename_pattern: string; // "{name}-output.{ext}"
};
// Quality configuration
quality?: {
dimensions: string[]; // ["completeness", "consistency", ...]
pass_threshold: number; // 80
};
// Metadata
created_at: string;
version: string;
}
@@ -140,59 +140,59 @@ interface SkillConfig {
---
## Requirements Collection Questions
### Phase 1: Basic Information
```javascript
AskUserQuestion({
questions: [
{
question: "What is the Skill name? (English, lowercase with hyphens)",
header: "Skill Name",
multiSelect: false,
options: [
{ label: "Auto-generate", description: "Auto-generate name from description" },
{ label: "Manual input", description: "Enter custom name" }
]
},
{
question: "What is the primary purpose of this Skill?",
header: "Purpose Type",
multiSelect: false,
options: [
{ label: "Document Generation", description: "Generate Markdown/HTML documents" },
{ label: "Code Analysis", description: "Analyze code structure, quality, security" },
{ label: "Interactive Management", description: "Manage Issues, tasks, workflows" },
{ label: "Data Processing", description: "ETL, transformation, report generation" },
{ label: "Custom", description: "Other purposes" }
]
}
]
});
```
### Phase 2: Execution Mode
```javascript
AskUserQuestion({
questions: [
{
question: "Select execution mode:",
header: "Execution Mode",
multiSelect: false,
options: [
{
label: "Sequential (Fixed Order)",
description: "Phases execute in fixed order, suitable for pipeline tasks (recommended)"
},
{
label: "Autonomous (Dynamic)",
description: "Dynamically select execution path, suitable for interactive tasks"
},
{
label: "Hybrid (Mixed)",
description: "Fixed initialization and finalization, flexible middle interaction"
}
]
}
@@ -200,67 +200,67 @@ AskUserQuestion({
});
```
### Phase 3: Phase/Action Definition
#### Sequential Mode
```javascript
AskUserQuestion({
questions: [
{
question: "How many execution phases do you need?",
header: "Phase Count",
multiSelect: false,
options: [
{ label: "3 phases", description: "Simple: Collect → Process → Output" },
{ label: "5 phases", description: "Standard: Collect → Explore → Analyze → Assemble → Validate" },
{ label: "7 phases", description: "Complete: Include parallel processing and iterative optimization" },
{ label: "Custom", description: "Manually specify phases" }
]
}
]
});
```
#### Autonomous Mode
```javascript
AskUserQuestion({
questions: [
{
question: "What are the core actions?",
header: "Action Definition",
multiSelect: true,
options: [
{ label: "Initialize (init)", description: "Set initial state" },
{ label: "List (list)", description: "Display current items" },
{ label: "Create (create)", description: "Create new item" },
{ label: "Edit (edit)", description: "Modify existing item" },
{ label: "Delete (delete)", description: "Delete item" },
{ label: "Complete (complete)", description: "Complete task" }
]
}
]
});
```
### Phase 4: Context Strategy (P0 Enhancement)
```javascript
AskUserQuestion({
questions: [
{
question: "Select context management strategy:",
header: "Context Strategy",
multiSelect: false,
options: [
{
label: "File Strategy (file)",
description: "Persist to .scratchpad, supports debugging and recovery (recommended)"
},
{
label: "Memory Strategy (memory)",
description: "Keep only at runtime, fast but no recovery"
}
]
}
@@ -268,41 +268,41 @@ AskUserQuestion({
});
```
### Phase 5: LLM Integration (P1 Enhancement)
```javascript
AskUserQuestion({
questions: [
{
question: "Do you need LLM invocation capability?",
header: "LLM Integration",
multiSelect: false,
options: [
{
label: "Enable LLM Invocation",
description: "Use gemini/qwen/codex for analysis or generation"
},
{
label: "Not needed",
description: "Only use local tools"
}
]
}
]
});
// If LLM enabled
if (llmEnabled) {
AskUserQuestion({
questions: [
{
question: "Select default LLM tool:",
header: "LLM Tool",
multiSelect: false,
options: [
{ label: "Gemini", description: "Large context, suitable for analysis tasks (recommended)" },
{ label: "Qwen", description: "Strong code generation capability" },
{ label: "Codex", description: "Strong autonomous execution, suitable for implementation tasks" }
]
}
]
@@ -310,21 +310,21 @@ if (llmEnabled) {
}
```
### Phase 6: Tool Dependencies
```javascript
AskUserQuestion({
questions: [
{
question: "What tools do you need?",
header: "Tool Selection",
multiSelect: true,
options: [
{ label: "Basic tools", description: "Task, Read, Write, Glob, Grep, Bash" },
{ label: "User interaction", description: "AskUserQuestion" },
{ label: "Chrome screenshot", description: "mcp__chrome__*" },
{ label: "External search", description: "mcp__exa__search" },
{ label: "CCW CLI invocation", description: "ccw cli (gemini/qwen/codex)" }
]
}
]
@@ -333,19 +333,19 @@ AskUserQuestion({
---
## Validation Rules
### Name Validation
```javascript
function validateSkillName(name) {
const rules = [
{ test: /^[a-z][a-z0-9-]*$/, msg: "Must start with lowercase letter, only contain lowercase letters, digits, hyphens" },
{ test: /^.{3,30}$/, msg: "Length 3-30 characters" },
{ test: /^(?!.*--)/, msg: "Cannot have consecutive hyphens" },
{ test: /[^-]$/, msg: "Cannot end with hyphen" }
];
for (const rule of rules) {
if (!rule.test.test(name)) {
return { valid: false, error: rule.msg };
@@ -355,37 +355,37 @@ function validateSkillName(name) {
}
```
### Configuration Validation
```javascript
function validateSkillConfig(config) {
const errors = [];
// Required fields
if (!config.skill_name) errors.push("Missing skill_name");
if (!config.description) errors.push("Missing description");
if (!config.execution_mode) errors.push("Missing execution_mode");
// Mode-specific validation
if (config.execution_mode === 'sequential') {
if (!config.sequential_config?.phases?.length) {
errors.push("Sequential mode requires phases definition");
}
} else if (config.execution_mode === 'autonomous') {
if (!config.autonomous_config?.actions?.length) {
errors.push("Autonomous mode requires actions definition");
}
}
return { valid: errors.length === 0, errors };
}
```
---
## Example Configurations
### Sequential Mode Example (Enhanced)
```json
{
@@ -432,7 +432,7 @@ function validateSkillConfig(config) {
}
```
### Autonomous Mode Example
```json
{
@@ -444,15 +444,15 @@ function validateSkillConfig(config) {
"autonomous_config": {
"state_schema": {
"fields": [
{ "name": "tasks", "type": "Task[]", "description": "Task list" },
{ "name": "current_view", "type": "string", "description": "Current view" }
]
},
"actions": [
{ "id": "action-list", "name": "List Tasks", "preconditions": [], "effects": ["Display task list"] },
{ "id": "action-create", "name": "Create Task", "preconditions": [], "effects": ["Add new task"] },
{ "id": "action-edit", "name": "Edit Task", "preconditions": ["task_selected"], "effects": ["Update task"] },
{ "id": "action-delete", "name": "Delete Task", "preconditions": ["task_selected"], "effects": ["Delete task"] }
],
"termination_conditions": ["user_exit", "error_limit"]
},

View File

@@ -1,22 +1,22 @@
# Autonomous Action Template
Template for action files in Autonomous execution mode.
## Purpose
Generate Action files for Autonomous execution mode, defining independent executable action units.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 3 (Phase Generation) | Generated when `config.execution_mode === 'autonomous'` |
| Generation Trigger | Generate one action file for each `config.autonomous_config.actions` |
| Output Location | `.claude/skills/{skill-name}/phases/actions/{action-id}.md` |
---
## Template Structure
```markdown
# Action: {{action_name}}
@@ -34,8 +34,8 @@
## Scripts
\`\`\`yaml
# Declare scripts used in this action (optional)
# - script-id # Corresponds to scripts/script-id.py or .sh
\`\`\`
## Execution
@@ -44,7 +44,7 @@
async function execute(state) {
{{execution_code}}
// Script execution example
// const result = await ExecuteScript('script-id', { input: state.context.data });
// if (!result.success) throw new Error(result.stderr);
}
@@ -71,63 +71,66 @@ return {
{{next_actions_hints}}
```
## Variable Descriptions
| Variable | Description |
|----------|-------------|
| `{{action_name}}` | Action name |
| `{{action_description}}` | Action description |
| `{{purpose}}` | Detailed purpose |
| `{{preconditions_list}}` | List of preconditions |
| `{{execution_code}}` | Execution code |
| `{{state_updates}}` | State updates |
| `{{error_handling_table}}` | Error handling table |
| `{{next_actions_hints}}` | Next action hints |
## Action Lifecycle
```
State-driven execution flow:
state.status === 'pending'
|
v
+-- Init --+ <- 1 execution, environment preparation
| Create working directory
| Initialize context
| status -> running
+----+----+
|
v
+-- CRUD Loop --+ <- N iterations, core business
| Orchestrator selects action | List / Create / Edit / Delete
| execute(state) | Shared pattern: collect input -> operate context.items -> return updates
| Update state
+----+----+
|
v
+-- Complete --+ <- 1 execution, save results
| Serialize output
| status -> completed
+----------+
Shared state structure:
state.status -> 'pending' | 'running' | 'completed'
state.context.items -> Business data array
state.completed_actions -> List of executed action IDs
```
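An initial state matching the structure above might look like this (illustrative only; field names follow the lifecycle diagram, while the exact schema is defined per skill):

```javascript
// Illustrative initial state for the lifecycle above.
const initialState = {
  status: 'pending',        // 'pending' | 'running' | 'completed'
  context: { items: [] },   // business data array operated on by CRUD actions
  completed_actions: [],    // IDs of executed actions
  error_count: 0,           // incremented on failures, checked by the orchestrator
  updated_at: new Date().toISOString()
};
```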
## Action Type Templates
### 1. Initialize Action (Init)
**Trigger condition**: `state.status === 'pending'`, executes once
```markdown
# Action: Initialize
Initialize Skill execution state.
## Purpose
Set initial state, prepare execution environment.
## Preconditions
@@ -151,24 +154,24 @@ async function execute(state) {
## Next Actions
- Success: Enter main processing loop (Orchestrator selects first CRUD action)
- Failure: action-abort
```
### 2. CRUD Actions (List / Create / Edit / Delete)
**Trigger condition**: `state.status === 'running'`, loop until user exits
> Create is shown as an example of the shared pattern. List / Edit / Delete follow the same structure; only the execution logic and the state-update fields differ.
```markdown
# Action: Create Item
Create new item.
## Purpose
Collect user input, append new record to context.items.
## Preconditions
@@ -178,25 +181,25 @@ async function execute(state) {
\`\`\`javascript
async function execute(state) {
// 1. Collect input
const input = await AskUserQuestion({
questions: [{
question: "Please enter item name:",
header: "Name",
multiSelect: false,
options: [{ label: "Manual input", description: "Enter custom name" }]
}]
});
// 2. Operate context.items (core logic differs by action type)
const newItem = {
id: Date.now().toString(),
name: input["Name"],
status: 'pending',
created_at: new Date().toISOString()
};
// 3. Return state update
return {
stateUpdates: {
context: {
@@ -211,31 +214,31 @@ async function execute(state) {
## Next Actions
- Continue operations: Orchestrator selects next action based on state
- User exit: action-complete
```
**Other CRUD Actions Differences:**
| Action | Core Logic | Extra Preconditions | Key State Field |
|--------|-----------|-------------------|-----------------|
| List | `items.forEach(-> console.log)` | None | `current_view: 'list'` |
| Create | `items.push(newItem)` | None | `last_created_id` |
| Edit | `items.map(-> replace matching)` | `selected_item_id !== null` | `updated_at` |
| Delete | `items.filter(-> exclude matching)` | `selected_item_id !== null` | Confirm dialog -> execute |
### 3. Complete Action
**Trigger condition**: User explicitly exits or termination condition met, executes once
```markdown
# Action: Complete
Complete task and exit.
## Purpose
Serialize final state, end Skill execution.
## Preconditions
@@ -253,7 +256,7 @@ async function execute(state) {
actions_executed: state.completed_actions.length
};
console.log(\`Task complete: \${summary.total_items} items, \${summary.actions_executed} operations\`);
return {
stateUpdates: {
@@ -267,31 +270,31 @@ async function execute(state) {
## Next Actions
- None (terminal state)
```
## Generation Function
```javascript
function generateAction(actionConfig, skillConfig) {
return `# Action: ${actionConfig.name}
${actionConfig.description || `Execute ${actionConfig.name} operation`}
## Purpose
${actionConfig.purpose || 'TODO: Describe detailed purpose of this action'}
## Preconditions
${actionConfig.preconditions?.map(p => `- [ ] ${p}`).join('\n') || '- [ ] No special preconditions'}
## Execution
\`\`\`javascript
async function execute(state) {
// TODO: Implement action logic
return {
stateUpdates: {
completed_actions: [...state.completed_actions, '${actionConfig.id}']
@@ -305,7 +308,7 @@ async function execute(state) {
\`\`\`javascript
return {
stateUpdates: {
// TODO: Define state updates
${actionConfig.effects?.map(e => ` // Effect: ${e}`).join('\n') || ''}
}
};
@@ -315,13 +318,13 @@ ${actionConfig.effects?.map(e => ` // Effect: ${e}`).join('\n') || ''}
| Error Type | Recovery |
|------------|----------|
| Data validation failed | Return error, no state update |
| Execution exception | Log error, increment error_count |
## Next Actions (Hints)
- Success: Orchestrator decides based on state
- Failure: Retry or action-abort
`;
}
```

View File

@@ -1,59 +1,59 @@
# Autonomous Orchestrator Template
Template for the orchestrator file in Autonomous execution mode.
## Purpose
Generate Orchestrator file for Autonomous execution mode, responsible for state-driven action selection and execution loop.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 3 (Phase Generation) | Generated when `config.execution_mode === 'autonomous'` |
| Generation Trigger | Create orchestrator logic to manage action selection and state updates |
| Output Location | `.claude/skills/{skill-name}/phases/orchestrator.md` |
---
## Important Notes
> **Phase 0 is mandatory prerequisite**: Before Orchestrator starts execution loop, Phase 0 specification review must be completed first.
>
> When generating Orchestrator, ensure:
> 1. Phase 0 specification review step is included in SKILL.md
> 2. Orchestrator validates specification has been reviewed before starting execution loop
> 3. All Action files reference related specification documents
> 4. Architecture Overview places Phase 0 before Orchestrator
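The pre-loop check in point 2 above can be sketched as a small guard function. This is a minimal sketch, assuming the state tracks the review in a `spec_reviewed` flag or a `completed_actions` entry; both field names are assumptions, not a fixed API.

```javascript
// Hypothetical guard: verify the Phase 0 specification review is recorded
// in state before the orchestrator enters its execution loop.
function assertPhaseZeroDone(state) {
  const reviewed = state.spec_reviewed === true ||
    (state.completed_actions || []).includes('phase-0-spec-review');
  if (!reviewed) {
    throw new Error('Phase 0 specification review must complete before the execution loop starts');
  }
  return true;
}
```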
## Template Structure
```markdown
# Orchestrator
## Role
Select and execute next action based on current state.
## State Management
### Read State
\`\`\`javascript
const state = JSON.parse(Read(\`${workDir}/state.json\`));
\`\`\`
### Update State
\`\`\`javascript
function updateState(updates) {
const state = JSON.parse(Read(\`${workDir}/state.json\`));
const newState = {
...state,
...updates,
updated_at: new Date().toISOString()
};
Write(\`${workDir}/state.json\`, JSON.stringify(newState, null, 2));
return newState;
}
\`\`\`
@@ -62,18 +62,18 @@ function updateState(updates) {
\`\`\`javascript
function selectNextAction(state) {
// 1. Check termination conditions
{{termination_checks}}
// 2. Check error limit
if (state.error_count >= 3) {
return 'action-abort';
}
// 3. Action selection logic
{{action_selection_logic}}
// 4. Default completion
return 'action-complete';
}
\`\`\`
@@ -83,34 +83,34 @@ function selectNextAction(state) {
\`\`\`javascript
async function runOrchestrator() {
console.log('=== Orchestrator Started ===');
let iteration = 0;
const MAX_ITERATIONS = 100;
while (iteration < MAX_ITERATIONS) {
iteration++;
// 1. Read current state
const state = JSON.parse(Read(\`${workDir}/state.json\`));
console.log(\`[Iteration ${iteration}] Status: ${state.status}\`);
// 2. Select next action
const actionId = selectNextAction(state);
if (!actionId) {
console.log('No action selected, terminating.');
break;
}
console.log(\`[Iteration ${iteration}] Executing: ${actionId}\`);
// 3. Update state: current action
updateState({ current_action: actionId });
// 4. Execute action
try {
const actionPrompt = Read(\`phases/actions/${actionId}.md\`);
const result = await Task({
subagent_type: 'universal-executor',
run_in_background: false,
@@ -125,18 +125,18 @@ async function runOrchestrator() {
Return JSON with stateUpdates field.
\`
});
const actionResult = JSON.parse(result);
// 5. Update state: action completed
updateState({
current_action: null,
completed_actions: [...state.completed_actions, actionId],
...actionResult.stateUpdates
});
} catch (error) {
// Error handling
updateState({
current_action: null,
errors: [...state.errors, {
@@ -148,7 +148,7 @@ Return JSON with stateUpdates field.
});
}
}
console.log('=== Orchestrator Finished ===');
}
\`\`\`
@@ -167,28 +167,28 @@ Return JSON with stateUpdates field.
| Error Type | Recovery Strategy |
|------------|-------------------|
| Action execution failed | Retry up to 3 times |
| State inconsistency | Rollback to last stable state |
| User abort | Save current state, allow recovery |
```
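The "rollback to last stable state" recovery strategy from the template's error table can be sketched in memory. This is illustrative only: the real orchestrator persists state in `state.json`, and the snapshot API shown here (`markStable`, `rollback`) is an assumption, not part of the template.

```javascript
// Illustrative in-memory state store with stable-state snapshots,
// mirroring the rollback recovery strategy described above.
function makeStateStore(initial) {
  let state = initial;
  const stable = [initial]; // stack of known-good snapshots
  return {
    update(updates) {
      state = { ...state, ...updates, updated_at: new Date().toISOString() };
      return state;
    },
    markStable() {
      stable.push(state);
    },
    rollback() {
      state = stable[stable.length - 1];
      return state;
    },
    current() {
      return state;
    }
  };
}
```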
## Variable Descriptions
| Variable | Description |
|----------|-------------|
| `{{termination_checks}}` | Termination condition check code |
| `{{action_selection_logic}}` | Action selection logic code |
| `{{action_catalog_table}}` | Action directory table |
| `{{termination_conditions_list}}` | List of termination conditions |
## Generation Function
```javascript
function generateOrchestrator(config) {
const actions = config.autonomous_config.actions;
const terminations = config.autonomous_config.termination_conditions || [];
// Generate termination checks
const terminationChecks = terminations.map(t => {
const checks = {
'user_exit': 'if (state.status === "user_exit") return null;',
@@ -198,24 +198,24 @@ function generateOrchestrator(config) {
};
return checks[t] || `if (state.${t}) return null;`;
}).join('\n ');
// Generate action selection logic
const actionSelectionLogic = actions.map(action => {
if (!action.preconditions?.length) {
return `// ${action.name}: No preconditions, add selection logic manually`;
}
const conditions = action.preconditions.map(p => `state.${p}`).join(' && ');
return `if (${conditions}) return '${action.id}';`;
}).join('\n ');
// Generate action catalog table
const actionCatalogTable = actions.map(a =>
`| [${a.id}](actions/${a.id}.md) | ${a.description || a.name} | ${a.preconditions?.join(', ') || '-'} |`
).join('\n');
// Generate termination conditions list
const terminationConditionsList = terminations.map(t => `- ${t}`).join('\n');
return template
.replace('{{termination_checks}}', terminationChecks)
.replace('{{action_selection_logic}}', actionSelectionLogic)
@@ -224,11 +224,11 @@ function generateOrchestrator(config) {
}
```
## Orchestration Strategies
### 1. Priority Strategy
Select action by predefined priority:
```javascript
const PRIORITY = ['action-init', 'action-process', 'action-review', 'action-complete'];
@@ -243,16 +243,16 @@ function selectByPriority(state, availableActions) {
}
```
### 2. User-Driven Strategy
Ask user to select next action:
```javascript
async function selectByUser(state, availableActions) {
const response = await AskUserQuestion({
questions: [{
question: "Select next operation:",
header: "Operations",
multiSelect: false,
options: availableActions.map(a => ({
label: a.name,
@@ -260,32 +260,32 @@ async function selectByUser(state, availableActions) {
}))
}]
});
return availableActions.find(a => a.name === response["Operations"])?.id;
}
```
### 3. State-Driven Strategy
Fully automatic decision based on state:
```javascript
function selectByState(state) {
// Initialization
if (state.status === 'pending') return 'action-init';
// Has pending items
if (state.pending_items?.length > 0) return 'action-process';
// Needs review
if (state.needs_review) return 'action-review';
// Completed
return 'action-complete';
}
```
## State Machine Example
```mermaid
stateDiagram-v2

View File

@@ -1,59 +1,59 @@
# Code Analysis Action Template
Code analysis action template for integrating code exploration and analysis capabilities into a Skill.
## Purpose
Generate code analysis actions for a Skill, integrating MCP tools (ACE) and Agents for semantic search and in-depth analysis.
## Usage Context
| Phase | Usage |
|-------|-------|
| Optional | Use when Skill requires code exploration and analysis capabilities |
| Generation Trigger | User selects to add code-analysis action type |
| Agent Types | Explore, cli-explore-agent, universal-executor |
---
## Configuration Structure
```typescript
interface CodeAnalysisActionConfig {
id: string; // "analyze-structure", "explore-patterns"
name: string; // "Code Structure Analysis"
type: 'code-analysis'; // Action type identifier
// Analysis scope
scope: {
paths: string[]; // Target paths
patterns: string[]; // Glob patterns
excludes?: string[]; // Exclude patterns
};
// Analysis type
analysis_type: 'structure' | 'patterns' | 'dependencies' | 'quality' | 'security';
// Agent config
agent: {
type: 'Explore' | 'cli-explore-agent' | 'universal-executor';
thoroughness: 'quick' | 'medium' | 'very thorough';
};
// Output config
output: {
format: 'json' | 'markdown';
file: string;
};
// MCP tool enhancement
mcp_tools?: string[]; // ['mcp__ace-tool__search_context']
}
```
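A concrete instance of the interface above may help. This is an illustrative config only: the paths, glob patterns, and output file name are placeholders rather than a real project layout.

```javascript
// One possible CodeAnalysisActionConfig instance for a structure analysis.
const structureAnalysis = {
  id: 'analyze-structure',
  name: 'Code Structure Analysis',
  type: 'code-analysis',
  scope: {
    paths: ['src/'],
    patterns: ['**/*.ts'],
    excludes: ['**/*.test.ts']
  },
  analysis_type: 'structure',
  agent: { type: 'Explore', thoroughness: 'medium' },
  output: { format: 'json', file: 'structure-analysis.json' },
  mcp_tools: ['mcp__ace-tool__search_context']
};
```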
---
## Template Generation Function
```javascript
function generateCodeAnalysisAction(config) {
@@ -64,20 +64,20 @@ function generateCodeAnalysisAction(config) {
## Action: ${id}
### Analysis Scope
- **Paths**: ${scope.paths.join(', ')}
- **Patterns**: ${scope.patterns.join(', ')}
${scope.excludes ? `- **Excludes**: ${scope.excludes.join(', ')}` : ''}
### Execution Logic
\`\`\`javascript
async function execute${toPascalCase(id)}(context) {
const workDir = context.workDir;
const results = [];
// 1. File discovery
const files = await discoverFiles({
paths: ${JSON.stringify(scope.paths)},
patterns: ${JSON.stringify(scope.patterns)},
@@ -86,34 +86,34 @@ async function execute${toPascalCase(id)}(context) {
console.log(\`Found \${files.length} files to analyze\`);
// 2. Semantic search using MCP tools (if configured)
${mcp_tools.length > 0 ? `
const semanticResults = await mcp__ace_tool__search_context({
project_root_path: context.projectRoot,
query: '${getQueryForAnalysisType(analysis_type)}'
});
results.push({ type: 'semantic', data: semanticResults });
` : '// No MCP tools configured'}
// 3. Launch Agent for in-depth analysis
const agentResult = await Task({
subagent_type: '${agent.type}',
prompt: \`
${generateAgentPrompt(analysis_type, scope)}
\`,
run_in_background: false
});
results.push({ type: 'agent', data: agentResult });
// 4. Aggregate results
const summary = aggregateResults(results);
// 5. Output results
const outputPath = \`\${workDir}/${output.file}\`;
${output.format === 'json'
? `Write(outputPath, JSON.stringify(summary, null, 2));`
: `Write(outputPath, formatAsMarkdown(summary));`}
return {
success: true,
@@ -122,8 +122,7 @@ ${generateAgentPrompt(analysis_type, scope)}
analysis_type: '${analysis_type}'
};
}
\`\`\`
`;
}
function getQueryForAnalysisType(type) {
@@ -139,101 +138,101 @@ function getQueryForAnalysisType(type) {
function generateAgentPrompt(type, scope) {
const prompts = {
structure: `Analyze code structure of the following paths:
${scope.paths.map(p => `- ${p}`).join('\\n')}
Tasks:
1. Identify main modules and entry points
2. Analyze directory organization structure
3. Extract module import/export relationships
4. Generate structure overview diagram (Mermaid)
Output format: JSON
{
"modules": [...],
"entry_points": [...],
"structure_diagram": "mermaid code"
}`,
patterns: `Analyze design patterns in the following paths:
${scope.paths.map(p => `- ${p}`).join('\\n')}
Tasks:
1. Identify design patterns used (Factory, Strategy, Observer, etc.)
2. Analyze abstraction levels
3. Evaluate appropriateness of pattern usage
4. Extract reusable pattern instances
Output format: JSON
{
"patterns": [{ "name": "...", "location": "...", "usage": "..." }],
"abstractions": [...],
"reusable_components": [...]
}`,
dependencies: `Analyze dependencies in the following paths:
${scope.paths.map(p => `- ${p}`).join('\\n')}
Tasks:
1. Extract internal module dependencies
2. Identify external package dependencies
3. Analyze coupling degree
4. Detect circular dependencies
Output format: JSON
{
"internal_deps": [...],
"external_deps": [...],
"coupling_score": 0-100,
"circular_deps": [...]
}`,
quality: `Analyze code quality in the following paths:
${scope.paths.map(p => `- ${p}`).join('\\n')}
Tasks:
1. Assess code complexity
2. Check test coverage
3. Analyze documentation completeness
4. Identify technical debt
Output format: JSON
{
"complexity": { "avg": 0, "max": 0, "hotspots": [...] },
"test_coverage": { "percentage": 0, "gaps": [...] },
"documentation": { "score": 0, "missing": [...] },
"tech_debt": [...]
}`,
security: `Analyze security in the following paths:
${scope.paths.map(p => `- ${p}`).join('\\n')}
Tasks:
1. Check authentication/authorization implementation
2. Analyze input validation
3. Detect sensitive data handling
4. Identify common vulnerability patterns
Output format: JSON
{
"auth": { "methods": [...], "issues": [...] },
"input_validation": { "coverage": 0, "gaps": [...] },
"sensitive_data": { "found": [...], "protected": true/false },
"vulnerabilities": [{ "type": "...", "severity": "...", "location": "..." }]
}`
};
return prompts[type] || prompts.structure;
}
```
---
## Preset Code Analysis Actions
### 1. Project Structure Analysis
```yaml
id: analyze-project-structure
name: Project Structure Analysis
type: code-analysis
@@ -255,11 +254,11 @@ output:
file: structure-analysis.json
mcp_tools:
- mcp__ace-tool__search_context
```
### 2. Design Pattern Extraction
```yaml
id: extract-design-patterns
name: Design Pattern Extraction
type: code-analysis
@@ -275,11 +274,11 @@ agent:
output:
format: markdown
file: patterns-report.md
```
### 3. Dependency Analysis
```yaml
id: analyze-dependencies
name: Dependency Analysis
type: code-analysis
@@ -297,11 +296,11 @@ agent:
output:
format: json
file: dependency-graph.json
```
### 4. Security Audit
```yaml
id: security-audit
name: Security Audit
type: code-analysis
@@ -320,15 +319,15 @@ output:
file: security-report.json
mcp_tools:
- mcp__ace-tool__search_context
```
---
## Usage Examples
### Using in Phase
```javascript
// phases/01-code-exploration.md
const analysisConfig = {
@@ -351,14 +350,14 @@ const analysisConfig = {
}
};
// Execute
const result = await executeCodeAnalysis(analysisConfig, context);
```
### Combining Multiple Analyses
```javascript
// Serial execution of multiple analyses
const analyses = [
{ type: 'structure', file: 'structure.json' },
{ type: 'patterns', file: 'patterns.json' },
@@ -373,7 +372,7 @@ for (const analysis of analyses) {
}, context);
}
// Parallel execution (independent analyses)
const parallelResults = await Promise.all(
analyses.map(a => executeCodeAnalysis({
...baseConfig,
@@ -381,51 +380,51 @@ const parallelResults = await Promise.all(
output: { format: 'json', file: a.file }
}, context))
);
```
---
## Agent Selection Guide
| Analysis Type | Recommended Agent | Thoroughness | Reason |
|-------------|-----------------|--------------|--------|
| structure | Explore | medium | Quick directory structure retrieval |
| patterns | cli-explore-agent | very thorough | Requires deep code understanding |
| dependencies | Explore | medium | Mainly analyzes import statements |
| quality | universal-executor | medium | Requires running analysis tools |
| security | universal-executor | very thorough | Requires comprehensive scanning |
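The table above can be encoded directly as a lookup. This sketch is a hedged reading of the guide: the fallback to `structure` defaults for unknown analysis types is an assumption, not documented behavior.

```javascript
// Agent recommendations per analysis type, mirroring the guide table.
const AGENT_BY_ANALYSIS = {
  structure:    { type: 'Explore', thoroughness: 'medium' },
  patterns:     { type: 'cli-explore-agent', thoroughness: 'very thorough' },
  dependencies: { type: 'Explore', thoroughness: 'medium' },
  quality:      { type: 'universal-executor', thoroughness: 'medium' },
  security:     { type: 'universal-executor', thoroughness: 'very thorough' }
};

function selectAgent(analysisType) {
  // Unknown types fall back to the structure defaults (assumption).
  return AGENT_BY_ANALYSIS[analysisType] || AGENT_BY_ANALYSIS.structure;
}
```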
---
## MCP Tool Integration
### Semantic Search Enhancement
```javascript
// Use ACE tool for semantic search
const semanticContext = await mcp__ace_tool__search_context({
project_root_path: projectRoot,
query: 'authentication logic, user session management'
});
// Use semantic search results as Agent input context
const agentResult = await Task({
subagent_type: 'Explore',
prompt: `
Based on the following semantic search results, perform in-depth analysis:
${semanticContext}
Task: Analyze authentication logic implementation details...
`,
run_in_background: false
});
```
### smart_search Integration
```javascript
// Use smart_search for exact matching
const exactMatches = await mcp__ccw_tools__smart_search({
action: 'search',
query: 'class.*Controller',
@@ -433,19 +432,19 @@ const exactMatches = await mcp__ccw_tools__smart_search({
path: 'src/'
});
// Use find_files for file discovery
const configFiles = await mcp__ccw_tools__smart_search({
action: 'find_files',
pattern: '**/*.config.ts',
path: 'src/'
});
```
---
## Results Aggregation
```javascript
function aggregateResults(results) {
const aggregated = {
timestamp: new Date().toISOString(),
@@ -478,38 +477,38 @@ function aggregateResults(results) {
}
function extractKeyFindings(agentResult) {
// Extract key findings from Agent result
// Implementation depends on Agent output format
return {
modules: agentResult.modules?.length || 0,
patterns: agentResult.patterns?.length || 0,
issues: agentResult.issues?.length || 0
};
}
```
---
## Best Practices
1. **Scope Control**
- Use precise patterns to reduce analysis scope
- Configure excludes to ignore irrelevant files
2. **Agent Selection**
- Use Explore for quick exploration
- Use cli-explore-agent for in-depth analysis
- Use universal-executor when execution is required
3. **MCP Tool Combination**
- First use mcp__ace-tool__search_context for semantic context
- Then use Agent for in-depth analysis
- Finally use smart_search for exact matching
4. **Result Caching**
- Persist analysis results to workDir
- Subsequent phases can read directly, avoiding re-analysis
5. **Brief Returns**
- Agent returns path + summary, not full content
- Prevents context overflow

View File

@@ -1,56 +1,56 @@
# LLM Action Template
LLM action template for integrating LLM call capabilities into a Skill.
## Purpose
Generate LLM actions for a Skill, calling Gemini/Qwen/Codex through the unified CCW CLI interface for analysis or generation.
## Usage Context
| Phase | Usage |
|-------|-------|
| Optional | Use when Skill requires LLM capabilities |
| Generation Trigger | User selects to add llm action type |
| Tools | gemini, qwen, codex (supports fallback chain) |
---
## Configuration Structure
```typescript
interface LLMActionConfig {
id: string; // "llm-analyze", "llm-generate"
name: string; // "LLM Analysis"
type: 'llm'; // Action type identifier
// LLM tool config
tool: {
primary: 'gemini' | 'qwen' | 'codex';
fallback_chain: string[]; // ['gemini', 'qwen', 'codex']
};
// Execution mode
mode: 'analysis' | 'write';
// Prompt config
prompt: {
template: string; // Prompt template path or inline
variables: string[]; // Variables to replace
};
// Input/Output
input: string[]; // Dependent context files
output: string; // Output file path
// Timeout config
timeout?: number; // Milliseconds, default 600000 (10min)
}
```
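A filled-in instance of the interface above may clarify the shape. This is an illustrative config only; the file names and the shortened prompt template are placeholders, not a real Skill definition.

```javascript
// One possible LLMActionConfig instance for an analysis action.
const llmAnalyzeAction = {
  id: 'llm-analyze',
  name: 'LLM Analysis',
  type: 'llm',
  tool: { primary: 'gemini', fallback_chain: ['qwen', 'codex'] },
  mode: 'analysis',
  prompt: {
    template: 'PURPOSE: Analyze code structure\nCONTEXT: {{code_context}}',
    variables: ['code_context']
  },
  input: ['collected-code.md'],
  output: 'analysis-report.json',
  timeout: 600000 // 10 minutes, the documented default
};
```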
---
## Template Generation Function
```javascript
function generateLLMAction(config) {
@@ -61,25 +61,25 @@ function generateLLMAction(config) {
## Action: ${id}
### Execution Logic
\`\`\`javascript
async function execute${toPascalCase(id)}(context) {
const workDir = context.workDir;
const state = context.state;
// 1. Collect input context
const inputContext = ${JSON.stringify(input)}.map(f => {
const path = \`\${workDir}/\${f}\`;
return Read(path);
}).join('\\n\\n---\\n\\n');
// 2. Build prompt
const promptTemplate = \`${prompt.template}\`;
const finalPrompt = promptTemplate
${prompt.variables.map(v => `.replace('{{${v}}}', context.${v} || '')`).join('\n ')};
// 3. Execute LLM call (with fallback)
const tools = ['${tool.primary}', ${tool.fallback_chain.map(t => `'${t}'`).join(', ')}];
let result = null;
let usedTool = null;
@@ -98,10 +98,10 @@ async function execute${toPascalCase(id)}(context) {
throw new Error('All LLM tools failed');
}
// 4. Save result
Write(\`\${workDir}/${output}\`, result);
// 5. Update state
state.llm_calls = (state.llm_calls || 0) + 1;
state.last_llm_tool = usedTool;
@@ -112,38 +112,38 @@ async function execute${toPascalCase(id)}(context) {
};
}
// LLM call wrapper
async function callLLM(tool, prompt, mode, timeout) {
const modeFlag = mode === 'write' ? '--mode write' : '--mode analysis';
// Use CCW CLI unified interface
const command = \`ccw cli -p "\${escapePrompt(prompt)}" --tool \${tool} \${modeFlag}\`;
const result = Bash({
command,
timeout,
run_in_background: true // Async execution
});
// Wait for completion
return await waitForResult(result.task_id, timeout);
}
function escapePrompt(prompt) {
// Escape double quotes and special characters
return prompt.replace(/"/g, '\\\\"').replace(/\$/g, '\\\\$');
}
\`\`\`
### Prompt Template
\`\`\`
${prompt.template}
\`\`\`
### Variable Descriptions
${prompt.variables.map(v => `- \`{{${v}}}\`: ${v} variable`).join('\n')}
`;
}
@@ -154,11 +154,11 @@ function toPascalCase(str) {
---
## Preset LLM Action Templates
### 1. Code Analysis Action
```yaml
id: llm-code-analysis
name: LLM Code Analysis
type: llm
@@ -168,15 +168,15 @@ tool:
mode: analysis
prompt:
template: |
PURPOSE: Analyze code structure and patterns, extract key design features
TASK:
Identify main modules and components
Analyze dependencies
Extract design patterns
Evaluate code quality
MODE: analysis
CONTEXT: {{code_context}}
EXPECTED: JSON formatted analysis report with modules, dependencies, patterns, quality_score
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md)
variables:
- code_context
@@ -184,11 +184,11 @@ input:
- collected-code.md
output: analysis-report.json
timeout: 900000
```
### 2. Documentation Generation Action
```yaml
id: llm-doc-generation
name: LLM Documentation Generation
type: llm
@@ -198,15 +198,15 @@ tool:
mode: write
prompt:
template: |
PURPOSE: Generate high-quality documentation based on analysis results
TASK:
Generate documentation outline based on analysis report
Populate chapter content
Add code examples and explanations
Generate Mermaid diagrams
MODE: write
CONTEXT: {{analysis_report}}
EXPECTED: Complete Markdown documentation with table of contents, chapters, diagrams
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md)
variables:
- analysis_report
@@ -214,11 +214,11 @@ input:
- analysis-report.json
output: generated-doc.md
timeout: 1200000
```
### 3. Code Refactoring Suggestions Action
```yaml
id: llm-refactor-suggest
name: LLM Refactoring Suggestions
type: llm
@@ -228,15 +228,15 @@ tool:
mode: analysis
prompt:
template: |
PURPOSE: Analyze code and provide refactoring suggestions
TASK:
Identify code smells
Evaluate complexity hotspots
Propose specific refactoring plans
Estimate refactoring impact scope
MODE: analysis
CONTEXT: {{source_code}}
EXPECTED: List of refactoring suggestions with location, issue, suggestion, impact fields
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md)
variables:
- source_code
@@ -244,15 +244,15 @@ input:
- source-files.md
output: refactor-suggestions.json
timeout: 600000
```
---
## Usage Examples
### Using LLM Actions in Phase
```javascript
// phases/02-llm-analysis.md
const llmConfig = {
@@ -265,39 +265,39 @@ const llmConfig = {
},
mode: 'analysis',
prompt: {
template: `
PURPOSE: Analyze design patterns of existing Skills
TASK:
Extract Skill structure specification
Identify Phase organization patterns
Analyze Agent invocation patterns
MODE: analysis
CONTEXT: {{skill_source}}
EXPECTED: Structured design pattern analysis
`,
variables: ['skill_source']
},
input: ['collected-skills.md'],
output: 'skill-patterns.json'
};
// Execute
const result = await executeLLMAction(llmConfig, {
workDir: '.workflow/.scratchpad/skill-gen-xxx',
skill_source: Read('.workflow/.scratchpad/skill-gen-xxx/collected-skills.md')
});
```
### Scheduling LLM Actions in Orchestrator
```javascript
// Schedule LLM actions in autonomous-orchestrator
const actions = [
{ type: 'collect', priority: 100 },
{ type: 'llm', id: 'llm-analyze', priority: 90 }, // LLM analysis
{ type: 'process', priority: 80 },
{ type: 'llm', id: 'llm-generate', priority: 70 }, // LLM generation
{ type: 'validate', priority: 60 }
];
@@ -310,13 +310,13 @@ for (const action of sortByPriority(actions)) {
context.state[action.id] = llmResult;
}
}
```
---
## Error Handling
```javascript
async function executeLLMActionWithRetry(config, context, maxRetries = 3) {
let lastError = null;
@@ -325,43 +325,43 @@ async function executeLLMActionWithRetry(config, context, maxRetries = 3) {
return await executeLLMAction(config, context);
} catch (error) {
lastError = error;
console.log(`Attempt ${attempt} failed: ${error.message}`);
// Exponential backoff
if (attempt < maxRetries) {
await sleep(Math.pow(2, attempt) * 1000);
}
}
}
// All retries failed
return {
success: false,
error: lastError.message,
fallback: 'manual_review_required'
};
}
```
---
## Best Practices
1. **Select Appropriate Tool**
- Analysis tasks: Gemini (large context) > Qwen
- Generation tasks: Codex (autonomous execution) > Gemini > Qwen
- Code modification: Codex > Gemini
2. **Configure Fallback Chain**
- Always configure at least one fallback
- Consider tool characteristics when ordering fallbacks
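The fallback-chain practice can be sketched as a small runner. This is a minimal sketch, not the Skill's actual implementation: `callLLM` is injected here so the example stays self-contained, whereas the real action would shell out via the CCW CLI.

```javascript
// Try each tool in order; return the first success, else raise with
// the collected per-tool failure reasons.
async function callWithFallback(tools, prompt, callLLM) {
  const failures = [];
  for (const tool of tools) {
    try {
      const result = await callLLM(tool, prompt);
      return { tool, result };
    } catch (err) {
      failures.push(`${tool}: ${err.message}`);
    }
  }
  throw new Error(`All LLM tools failed (${failures.join('; ')})`);
}
```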
3. **Timeout Settings**
- Analysis tasks: 10-15 minutes
- Generation tasks: 15-20 minutes
- Complex tasks: 20-60 minutes
4. **Prompt Design**
- Use PURPOSE/TASK/MODE/CONTEXT/EXPECTED/RULES structure
- Reference standard protocol templates
- Clearly specify output format requirements
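The recommended PURPOSE/TASK/MODE/CONTEXT/EXPECTED/RULES structure can be assembled mechanically. The builder below is a sketch; the section values passed in are placeholders, and the two-space task indentation is an assumption rather than a requirement of the protocol templates.

```javascript
// Assemble a prompt following the PURPOSE/TASK/MODE/CONTEXT/EXPECTED/RULES
// structure described above.
function buildStructuredPrompt({ purpose, tasks, mode, context, expected, rules }) {
  return [
    `PURPOSE: ${purpose}`,
    'TASK:',
    ...tasks.map((t) => `  ${t}`),
    `MODE: ${mode}`,
    `CONTEXT: ${context}`,
    `EXPECTED: ${expected}`,
    `RULES: ${rules}`
  ].join('\n');
}
```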

View File

@@ -1,56 +1,56 @@
# Script Template
Unified script template covering both Bash and Python runtimes.
## Usage Context
| Phase | Usage |
|-------|-------|
| Optional | Use when declaring `## Scripts` in Phase/Action |
| Execution | Invoke via `ExecuteScript('script-id', params)` |
| Output Location | `.claude/skills/{skill-name}/scripts/{script-id}.{ext}` |
---
## Invocation Interface Specification
All scripts share the same calling convention:
```
Caller
| ExecuteScript('script-id', { key: value })
|
Script Entry
├─ Parameter parsing (--key value)
├─ Input validation (required parameter checks, file exists)
├─ Core processing (data read -> transform -> write)
└─ Output result (last line: single-line JSON -> stdout)
├─ Success: {"status":"success", "output_file":"...", ...}
└─ Failure: stderr output error message, exit 1
```
### Return Format
```typescript
interface ScriptResult {
success: boolean; // exit code === 0
stdout: string; // Standard output
stderr: string; // Standard error
outputs: object; // JSON output parsed from stdout last line
}
```
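The last-line-JSON convention above implies a small parsing step on the caller side. This is a sketch of how a caller might build a `ScriptResult`; the exit-code and stream arguments are assumptions about the runner, not a documented API.

```javascript
// Build a ScriptResult from a finished process, parsing the last stdout
// line as the structured outputs object.
function toScriptResult(exitCode, stdout, stderr) {
  const lines = stdout.trim().split('\n');
  let outputs = {};
  try {
    outputs = JSON.parse(lines[lines.length - 1] || '{}');
  } catch (_) {
    // Last stdout line was not JSON; leave outputs empty.
  }
  return { success: exitCode === 0, stdout, stderr, outputs };
}
```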
### Parameter Convention
| Parameter | Required | Description |
|-----------|----------|-------------|
| `--input-path` | Yes | Input file path |
| `--output-dir` | Yes | Output directory (specified by caller) |
| Others | Optional | Script-specific parameters |
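Mapping an `ExecuteScript` params object onto this `--key value` convention can be sketched as follows; the snake_case-to-kebab-case key mapping is an assumption about how callers name parameters.

```javascript
// Turn { input_path: 'a.txt' } into ['--input-path', 'a.txt'] argv pairs.
function paramsToArgv(params) {
  return Object.entries(params).flatMap(([key, value]) => [
    `--${key.replace(/_/g, '-')}`,
    String(value)
  ]);
}
```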
---
## Bash Implementation
```bash
#!/bin/bash
@@ -59,7 +59,7 @@ interface ScriptResult {
set -euo pipefail
# ============================================================
# Parameter Parsing
# ============================================================
INPUT_PATH=""
@@ -70,11 +70,11 @@ while [[ "$#" -gt 0 ]]; do
--input-path) INPUT_PATH="$2"; shift ;;
--output-dir) OUTPUT_DIR="$2"; shift ;;
--help)
echo "Usage: $0 --input-path <path> --output-dir <dir>"
exit 0
;;
*)
echo "Error: Unknown parameter $1" >&2
exit 1
;;
esac
@@ -82,31 +82,31 @@ while [[ "$#" -gt 0 ]]; do
done
# ============================================================
# Parameter Validation
# ============================================================
[[ -z "$INPUT_PATH" ]] && { echo "Error: --input-path is a required parameter" >&2; exit 1; }
[[ -z "$OUTPUT_DIR" ]] && { echo "Error: --output-dir is a required parameter" >&2; exit 1; }
[[ ! -f "$INPUT_PATH" ]] && { echo "Error: Input file does not exist: $INPUT_PATH" >&2; exit 1; }
command -v jq &> /dev/null || { echo "Error: jq is required" >&2; exit 1; }
mkdir -p "$OUTPUT_DIR"
# ============================================================
# Core Logic
# ============================================================
OUTPUT_FILE="$OUTPUT_DIR/result.txt"
ITEMS_COUNT=0
# TODO: Implement processing logic
while IFS= read -r line; do
echo "$line" >> "$OUTPUT_FILE"
((ITEMS_COUNT++))
done < "$INPUT_PATH"
# ============================================================
# Output JSON result (built with jq to avoid escaping issues)
# ============================================================
jq -n \
@@ -115,34 +115,34 @@ jq -n \
'{output_file: $output_file, items_processed: $items_processed, status: "success"}'
```
### Bash Common Patterns
```bash
# File iteration
for file in "$INPUT_DIR"/*.json; do
[[ -f "$file" ]] || continue
# Processing logic...
done
# Temp file (auto cleanup)
TEMP_FILE=$(mktemp)
trap 'rm -f "$TEMP_FILE"' EXIT
# Tool dependency check
require_command() {
command -v "$1" &> /dev/null || { echo "Error: $1 is required" >&2; exit 1; }
}
require_command jq
# jq processing
VALUE=$(jq -r '.field' "$INPUT_PATH")            # Read a field
jq '.field = "new"' input.json > output.json     # Modify a field
jq -s 'add' file1.json file2.json > merged.json  # Merge files
```
---
## Python Implementation
```python
#!/usr/bin/env python3
@@ -158,33 +158,33 @@ from pathlib import Path
def main():
parser = argparse.ArgumentParser(description='{{script_description}}')
parser.add_argument('--input-path', type=str, required=True, help='Input file path')
parser.add_argument('--output-dir', type=str, required=True, help='Output directory')
args = parser.parse_args()
# Validate input
input_path = Path(args.input_path)
if not input_path.exists():
print(f"Error: Input file does not exist: {input_path}", file=sys.stderr)
sys.exit(1)
output_dir = Path(args.output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
# Execute processing
try:
result = process(input_path, output_dir)
except Exception as e:
print(f"Error: {e}", file=sys.stderr)
sys.exit(1)
# Output JSON result
print(json.dumps(result))
def process(input_path: Path, output_dir: Path) -> dict:
"""Core processing logic"""
# TODO: Implement processing logic
output_file = output_dir / 'result.json'
@@ -207,17 +207,17 @@ if __name__ == '__main__':
main()
```
### Python Common Patterns
```python
# File iteration
def process_files(input_dir: Path, pattern: str = '*.json') -> list:
return [
{'file': str(f), 'data': json.load(f.open())}
for f in input_dir.glob(pattern)
]
# Data transformation
def transform(data: dict) -> dict:
return {
'id': data.get('id'),
@@ -225,7 +225,7 @@ def transform(data: dict) -> dict:
'timestamp': datetime.now().isoformat()
}
# External command invocation
import subprocess
def run_command(cmd: list) -> str:
@@ -237,24 +237,24 @@ def run_command(cmd: list) -> str:
---
## Runtime Selection Guide
```
Task Characteristics
│
├─ File processing / system commands / pipeline operations
│    └─ Choose Bash (.sh)
├─ JSON data processing / complex transformation / data analysis
│    └─ Choose Python (.py)
└─ Simple read/write / format conversion
     └─ Either (Bash is lighter)
```
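The decision tree above can be encoded as a small helper. This is only a sketch; the trait strings used here are assumptions for illustration, not a real API:

```javascript
// Illustrative encoding of the runtime decision tree above.
function pickRuntime(traits) {
  const has = (t) => traits.includes(t);
  if (has('file-processing') || has('system-commands') || has('pipelines')) {
    return 'bash';
  }
  if (has('json-processing') || has('complex-transformation') || has('data-analysis')) {
    return 'python';
  }
  return 'bash'; // simple read/write: either works, Bash is lighter
}
```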
---
## Generation Function
```javascript
function generateScript(scriptConfig) {
@@ -280,7 +280,7 @@ function generateBashScript(scriptConfig) {
const paramValidation = inputs.filter(i => i.required).map(i => {
const VAR = i.name.toUpperCase().replace(/-/g, '_');
return `[[ -z "$${VAR}" ]] && { echo "Error: --${i.name} is a required parameter" >&2; exit 1; }`;
}).join('\n');
return `#!/bin/bash
@@ -293,16 +293,16 @@ ${paramDefs}
while [[ "$#" -gt 0 ]]; do
case $1 in
${paramParse}
*) echo "Unknown parameter: $1" >&2; exit 1 ;;
esac
shift
done
${paramValidation}
# TODO: Implement processing logic
# Output result (built with jq)
jq -n ${outputs.map(o =>
`--arg ${o.name} "$${o.name.toUpperCase().replace(/-/g, '_')}"`
).join(' \\\n ')} \
@@ -339,7 +339,7 @@ def main():
${argDefs}
args = parser.parse_args()
# TODO: Implement processing logic
result = {
${resultFields}
}
@@ -355,7 +355,7 @@ if __name__ == '__main__':
---
## Directory Convention
```
scripts/
@@ -364,5 +364,5 @@ scripts/
└── transform.js # id: transform, runtime: node
```
- **Name is ID**: Filename (without extension) = script ID
- **Extension is runtime**: `.py` → python, `.sh` → bash, `.js` → node
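A minimal sketch of how these two conventions could be resolved in code (the filenames below are hypothetical):

```javascript
// "Extension is runtime" and "name is ID" resolution, as a sketch.
const RUNTIME_BY_EXT = { '.py': 'python', '.sh': 'bash', '.js': 'node' };

function resolveScript(filename) {
  const dot = filename.lastIndexOf('.');
  const ext = dot >= 0 ? filename.slice(dot) : '';
  return {
    id: dot >= 0 ? filename.slice(0, dot) : filename, // name is ID
    runtime: RUNTIME_BY_EXT[ext] || null,             // unknown extension -> null
  };
}
```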


@@ -1,31 +1,31 @@
# Sequential Phase Template
Template for Phase files in Sequential execution mode.
## Purpose
Generate Phase files for Sequential execution mode, defining fixed-order execution steps.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 3 (Phase Generation) | Generated when `config.execution_mode === 'sequential'` |
| Generation Trigger | Generate one phase file for each entry in `config.sequential_config.phases` |
| Output Location | `.claude/skills/{skill-name}/phases/{phase-id}.md` |
---
## Important Notes
> **Phase 0 is a mandatory prerequisite**: Before implementing any Phase (1, 2, 3...), the Phase 0 specification review must be completed first.
>
> When generating a Sequential Phase, ensure:
> 1. The Phase 0 specification review step is included in SKILL.md
> 2. Each Phase file references the related specification documents
> 3. The execution flow clearly marks Phase 0 as a non-skippable prerequisite
## Template Structure
```markdown
# Phase {{phase_number}}: {{phase_name}}
@@ -38,14 +38,14 @@
## Input
- Dependency: `{{input_dependency}}`
- Config: `{workDir}/skill-config.json`
## Scripts
\`\`\`yaml
# Declare scripts used in this phase (optional)
# - script-id  # Corresponds to scripts/script-id.py or .sh
\`\`\`
## Execution Steps
@@ -62,10 +62,10 @@
{{step_2_code}}
\`\`\`
### Step 3: Execute Script (Optional)
\`\`\`javascript
// Script execution example
// const result = await ExecuteScript('script-id', { input_path: `${workDir}/data.json` });
// if (!result.success) throw new Error(result.stderr);
// console.log(result.outputs.output_file);
@@ -85,25 +85,25 @@
{{next_phase_link}}
```
## Variable Descriptions
| Variable | Description |
|----------|-------------|
| `{{phase_number}}` | Phase number (1, 2, 3...) |
| `{{phase_name}}` | Phase name |
| `{{phase_description}}` | One-line description |
| `{{objectives}}` | List of objectives |
| `{{input_dependency}}` | Input dependency file |
| `{{step_N_name}}` | Step name |
| `{{step_N_code}}` | Step code |
| `{{output_file}}` | Output filename |
| `{{output_format}}` | Output format |
| `{{quality_checklist}}` | Quality checklist items |
| `{{next_phase_link}}` | Next phase link |
## Script Invocation Guide
### Directory Convention
```
scripts/
@@ -112,154 +112,154 @@ scripts/
└── transform.js # id: transform, runtime: node
```
- **Name is ID**: Filename (without extension) = script ID
- **Extension is runtime**: `.py` → python, `.sh` → bash, `.js` → node
### Invocation Syntax
```javascript
// Single-line invocation
const result = await ExecuteScript('script-id', { key: value });
// Check result
if (!result.success) throw new Error(result.stderr);
// Get output
const { output_file } = result.outputs;
```
### Return Format
```typescript
interface ScriptResult {
success: boolean; // exit code === 0
stdout: string; // Standard output
stderr: string; // Standard error
outputs: object; // JSON parsed from stdout
}
```
## Phase Type Templates
### 1. Collection Phase
```markdown
# Phase 1: Requirements Collection
Collect user requirements and project configuration.
## Objective
- Collect user input
- Auto-detect project information
- Generate configuration file
## Execution Steps
### Step 1: User Interaction
\`\`\`javascript
const userInput = await AskUserQuestion({
questions: [
{
question: "Please select...",
header: "Option",
multiSelect: false,
options: [
{ label: "Option A", description: "..." },
{ label: "Option B", description: "..." }
]
}
]
});
\`\`\`
### Step 2: Auto-detection
\`\`\`javascript
// Detect project information
const packageJson = JSON.parse(Read('package.json'));
const projectName = packageJson.name;
\`\`\`
### Step 3: Generate Configuration
\`\`\`javascript
const config = {
name: projectName,
userChoice: userInput["Option"],
// ...
};
Write(\`${workDir}/config.json\`, JSON.stringify(config, null, 2));
\`\`\`
## Output
- **File**: \`config.json\`
- **Format**: JSON
```
### 2. Analysis Phase
```markdown
# Phase 2: Deep Analysis
Analyze code structure in depth.
## Objective
- Scan code files
- Extract key information
- Generate analysis report
## Execution Steps
### Step 1: File Scanning
\`\`\`javascript
const files = Glob('src/**/*.ts');
\`\`\`
### Step 2: Content Analysis
\`\`\`javascript
const analysisResults = [];
for (const file of files) {
const content = Read(file);
// Analysis logic
analysisResults.push({ file, /* analysis results */ });
}
\`\`\`
### Step 3: Generate Report
\`\`\`javascript
Write(\`${workDir}/analysis.json\`, JSON.stringify(analysisResults, null, 2));
\`\`\`
## Output
- **File**: \`analysis.json\`
- **Format**: JSON
```
### 3. Parallel Phase
```markdown
# Phase 3: Parallel Processing
Process multiple subtasks in parallel.
## Objective
- Launch multiple agents for parallel execution
- Collect results from each agent
- Merge outputs
## Execution Steps
### Step 1: Prepare Tasks
\`\`\`javascript
const tasks = [
@@ -269,11 +269,11 @@ const tasks = [
];
\`\`\`
### Step 2: Parallel Execution
\`\`\`javascript
const results = await Promise.all(
tasks.map(task =>
Task({
subagent_type: 'universal-executor',
run_in_background: false,
@@ -283,7 +283,7 @@ const results = await Promise.all(
);
\`\`\`
### Step 3: Merge Results
\`\`\`javascript
const merged = results.map((r, i) => ({
@@ -291,83 +291,83 @@ const merged = results.map((r, i) => ({
result: JSON.parse(r)
}));
Write(\`${workDir}/parallel-results.json\`, JSON.stringify(merged, null, 2));
\`\`\`
## Output
- **File**: \`parallel-results.json\`
- **Format**: JSON
```
### 4. Assembly Phase
```markdown
# Phase 4: Document Assembly
Assemble final output documents.
## Objective
- Read outputs from each phase
- Merge content
- Generate final document
## Execution Steps
### Step 1: Read Outputs
\`\`\`javascript
const config = JSON.parse(Read(\`${workDir}/config.json\`));
const analysis = JSON.parse(Read(\`${workDir}/analysis.json\`));
const sections = Glob(\`${workDir}/sections/*.md\`).map(f => Read(f));
\`\`\`
### Step 2: Assemble Content
\`\`\`javascript
const document = \`
# \${config.name}
## Overview
\${config.description}
## Detailed Content
\${sections.join('\\n\\n')}
\`;
\`\`\`
### Step 3: Write File
\`\`\`javascript
Write(\`${workDir}/\${config.name}-output.md\`, document);
\`\`\`
## Output
- **File**: \`{name}-output.md\`
- **Format**: Markdown
```
### 5. Validation Phase
```markdown
# Phase 5: Validation
Verify output quality.
## Objective
- Check output completeness
- Verify content quality
- Generate validation report
## Execution Steps
### Step 1: Completeness Check
\`\`\`javascript
const outputFile = \`${workDir}/\${config.name}-output.md\`;
const content = Read(outputFile);
const completeness = {
hasTitle: content.includes('# '),
@@ -376,16 +376,16 @@ const completeness = {
};
\`\`\`
### Step 2: Quality Assessment
\`\`\`javascript
const quality = {
completeness: Object.values(completeness).filter(v => v).length / 3 * 100,
// Other dimensions...
};
\`\`\`
### Step 3: Generate Report
\`\`\`javascript
const report = {
@@ -394,55 +394,55 @@ const report = {
issues: []
};
Write(\`${workDir}/validation-report.json\`, JSON.stringify(report, null, 2));
\`\`\`
## Output
- **File**: \`validation-report.json\`
- **Format**: JSON
```
## Generation Function
```javascript
function generateSequentialPhase(phaseConfig, index, phases, skillConfig) {
const prevPhase = index > 0 ? phases[index - 1] : null;
const nextPhase = index < phases.length - 1 ? phases[index + 1] : null;
return `# Phase ${index + 1}: ${phaseConfig.name}
${phaseConfig.description || `Execute ${phaseConfig.name}`}
## Objective
- ${phaseConfig.objectives?.join('\n- ') || 'TODO: Define objectives'}
## Input
- Dependency: \`${prevPhase ? prevPhase.output : 'user input'}\`
- Config: \`{workDir}/skill-config.json\`
## Execution Steps
### Step 1: Preparation
\`\`\`javascript
${prevPhase ?
`const prevOutput = JSON.parse(Read(\`${workDir}/${prevPhase.output}\`));` :
'// First phase, start from configuration'}
\`\`\`
### Step 2: Processing
\`\`\`javascript
// TODO: Implement core logic
\`\`\`
### Step 3: Output
\`\`\`javascript
Write(\`${workDir}/${phaseConfig.output}\`, JSON.stringify(result, null, 2));
\`\`\`
## Output
@@ -452,13 +452,13 @@ Write(\`\${workDir}/${phaseConfig.output}\`, JSON.stringify(result, null, 2));
## Quality Checklist
- [ ] Input validation passed
- [ ] Core logic executed successfully
- [ ] Output format correct
${nextPhase ?
`## Next Phase\n\n→ [Phase ${index + 2}: ${nextPhase.name}](${nextPhase.id}.md)` :
'## Completion\n\nThis is the final phase.'}
`;
}
```


@@ -1,34 +1,34 @@
# SKILL.md Template
Template for generating new Skill entry files.
## Purpose
Generate the entry file (SKILL.md) for new Skills, serving as the main documentation and execution entry point for the Skill.
## Usage Context
| Phase | Usage |
|-------|-------|
| Phase 2 (Structure Generation) | Create SKILL.md entry file |
| Generation Trigger | `config.execution_mode` determines architecture diagram style |
| Output Location | `.claude/skills/{skill-name}/SKILL.md` |
---
## Important: YAML Front Matter Specification
> **CRITICAL**: The SKILL.md file MUST begin with YAML front matter, meaning `---` must be the first line of the file.
>
> **Do NOT use** the following formats:
> - `# Title` followed by `## Metadata` + a yaml code block
> - Any content before `---`
>
> **Correct format**: The first line MUST be `---`
## Ready-to-use Template
The following is a complete SKILL.md template. When generating, **directly copy and apply** it, replacing `{{variables}}` with actual values:
---
name: {{skill_name}}
@@ -52,9 +52,9 @@ allowed-tools: {{allowed_tools}}
---
## Mandatory Prerequisites
> **Do NOT skip**: Before performing any operations, you **must** completely read the following documents. Proceeding without reading the specifications will result in outputs that do not meet quality standards.
{{mandatory_prerequisites}}
@@ -80,31 +80,33 @@ Bash(\`mkdir -p "\${workDir}"\`);
{{output_structure}}
\`\`\`
## Reference Documents by Phase
> **Important**: Reference documents should be organized by execution phase, clearly marking when and in what scenarios they are used. Avoid listing documents in a flat manner.
{{reference_table}}
---
## Variable Descriptions
| Variable | Type | Source |
|----------|------|--------|
| `{{skill_name}}` | string | config.skill_name |
| `{{display_name}}` | string | config.display_name |
| `{{description}}` | string | config.description |
| `{{triggers}}` | string | config.triggers.join(", ") |
| `{{allowed_tools}}` | string | config.allowed_tools.join(", ") |
| `{{architecture_diagram}}` | string | Generated based on execution_mode (includes Phase 0) |
| `{{design_principles}}` | string | Generated based on execution_mode |
| `{{mandatory_prerequisites}}` | string | List of mandatory prerequisite reading documents (specs + templates) |
| `{{execution_flow}}` | string | Generated from phases/actions (Phase 0 first) |
| `{{output_location}}` | string | config.output.location |
| `{{additional_dirs}}` | string | Generated based on execution_mode |
| `{{output_structure}}` | string | Generated based on configuration |
| `{{reference_table}}` | string | Generated from file list |
## Generation Function
```javascript
function generateSkillMd(config) {
@@ -116,32 +118,32 @@ function generateSkillMd(config) {
.replace(/\{\{description\}\}/g, config.description)
.replace(/\{\{triggers\}\}/g, config.triggers.map(t => `"${t}"`).join(", "))
.replace(/\{\{allowed_tools\}\}/g, config.allowed_tools.join(", "))
.replace(/\{\{architecture_diagram\}\}/g, generateArchitecture(config)) // Includes Phase 0
.replace(/\{\{design_principles\}\}/g, generatePrinciples(config))
.replace(/\{\{mandatory_prerequisites\}\}/g, generatePrerequisites(config)) // Mandatory prerequisites
.replace(/\{\{execution_flow\}\}/g, generateFlow(config)) // Phase 0 first
.replace(/\{\{output_location\}\}/g, config.output.location)
.replace(/\{\{additional_dirs\}\}/g, generateAdditionalDirs(config))
.replace(/\{\{output_structure\}\}/g, generateOutputStructure(config))
.replace(/\{\{reference_table\}\}/g, generateReferenceTable(config));
}
// Generate mandatory prerequisites table
function generatePrerequisites(config) {
const specs = config.specs || [];
const templates = config.templates || [];
let result = '### Specification Documents (Required Reading)\n\n';
result += '| Document | Purpose | When |\n';
result += '|----------|---------|------|\n';
specs.forEach((spec, index) => {
const when = index === 0 ? '**Must read before execution**' : 'Recommended before execution';
result += `| [${spec.path}](${spec.path}) | ${spec.purpose} | ${when} |\n`;
});
if (templates.length > 0) {
result += '\n### Template Files (Must read before generation)\n\n';
result += '| Document | Purpose |\n';
result += '|----------|---------|\n';
templates.forEach(tmpl => {
@@ -151,9 +153,71 @@ function generatePrerequisites(config) {
return result;
}
// Generate phase-by-phase reference document guide
function generateReferenceTable(config) {
const phases = config.phases || config.actions || [];
const specs = config.specs || [];
const templates = config.templates || [];
let result = '';
// Generate document navigation for each execution phase
phases.forEach((phase, index) => {
const phaseNum = index + 1;
const phaseTitle = phase.display_name || phase.name;
result += `### Phase ${phaseNum}: ${phaseTitle}\n`;
result += `Documents to reference when executing Phase ${phaseNum}\n\n`;
// List documents related to this phase
const relatedDocs = filterDocsByPhase(specs, phase, index);
if (relatedDocs.length > 0) {
result += '| Document | Purpose | When to Use |\n';
result += '|----------|---------|-------------|\n';
relatedDocs.forEach(doc => {
result += `| [${doc.path}](${doc.path}) | ${doc.purpose} | ${doc.context || 'Reference content'} |\n`;
});
result += '\n';
}
});
// Troubleshooting section
result += '### Debugging & Troubleshooting\n';
result += 'Documents to reference when encountering issues\n\n';
result += '| Issue | Solution Document |\n';
result += '|-------|-------------------|\n';
result += `| Phase execution failed | Refer to the relevant Phase documentation |\n`;
result += `| Output does not meet expectations | [specs/quality-standards.md](specs/quality-standards.md) - Verify quality standards |\n`;
result += '\n';
// In-depth learning reference
result += '### Reference & Background\n';
result += 'For understanding the original implementation and design decisions\n\n';
result += '| Document | Purpose | Notes |\n';
result += '|----------|---------|-------|\n';
templates.forEach(tmpl => {
result += `| [${tmpl.path}](${tmpl.path}) | ${tmpl.purpose} | Reference during generation |\n`;
});
return result;
}
// Helper function: Get Phase emoji (removed)
// Note: Emoji support has been removed. Consider using Phase numbers instead.
// Helper function: Filter documents by Phase
function filterDocsByPhase(specs, phase, phaseIndex) {
// Simple filtering logic: match phase name keywords
const keywords = phase.name.toLowerCase().split('-');
return specs.filter(spec => {
const specName = spec.path.toLowerCase();
return keywords.some(kw => specName.includes(kw));
});
}
```
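As a standalone example, the keyword filter above behaves as follows. The function is restated here so the snippet runs on its own (the unused `phaseIndex` parameter is dropped), and the spec paths and phase name are hypothetical:

```javascript
// filterDocsByPhase, restated: match specs whose path contains any
// hyphen-separated part of the phase name.
function filterDocsByPhase(specs, phase) {
  const keywords = phase.name.toLowerCase().split('-');
  return specs.filter(spec =>
    keywords.some(kw => spec.path.toLowerCase().includes(kw))
  );
}

// Hypothetical spec list and phase:
const specs = [
  { path: 'specs/scan-rules.md' },
  { path: 'specs/quality-standards.md' },
];
const matched = filterDocsByPhase(specs, { name: 'code-scan' });
// matched keeps only specs/scan-rules.md ('scan' appears in its path)
```

Note that this keyword heuristic can over-match on short name parts, which is why SKILL.md generation treats it as a starting point rather than an authoritative mapping.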
## Sequential Mode Example
```markdown
---
@@ -169,36 +233,33 @@ Generate API documentation from source code.
## Architecture Overview
\`\`\`
Phase 0: Specification Study (Mandatory prerequisite - read and understand the design specifications)
   ↓
Phase 1: Scanning → endpoints.json
   ↓
Phase 2: Parsing → schemas.json
   ↓
Phase 3: Generation → api-docs.md
\`\`\`
## Mandatory Prerequisites
> **Do NOT skip**: Before performing any operations, you **must** completely read the following documents.
### Specification Documents (Required Reading)
| Document | Purpose | Priority |
|----------|---------|----------|
| [specs/api-standards.md](specs/api-standards.md) | API documentation standards specification | **P0 - Highest** |
### Template Files (Must read before generation)
| Document | Purpose |
|----------|---------|
| [templates/endpoint-doc.md](templates/endpoint-doc.md) | Endpoint documentation template |
```
## Autonomous Mode Example
```markdown
---
@@ -214,36 +275,34 @@ Interactive task management with CRUD operations.
## Architecture Overview
\`\`\`
┌─────────────────────────────────────────────────────────────────┐
│ ⚠️ Phase 0: Specification Study (强制前置) │
└───────────────┬─────────────────────────────────────────────────
┌─────────────────────────────────────────────────────────────────┐
│ Orchestrator (状态驱动决策)
└────────────────────────────────────────────────────────────────┘
┌────────────────────────────────┐
↓ ↓ ↓ ↓
───────┐ ┌───────┐ ┌───────┐ ┌───────
│ List │ │Create │ │ Edit │ │Delete │
└───────┘ └───────┘ └───────┘ └───────┘
Phase 0: Specification Study (Mandatory prerequisite)
────────────────────────────────────────
Orchestrator (State-driven decision) │
───────────────────────────────────────
────────────────────────
↓ ↓
┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐
│ List │ │ Create │ │ Edit │ │ Delete │
└────────┘ └────────┘ └────────┘ └────────┘
\`\`\`
## Mandatory Prerequisites
> **Do NOT skip**: Before performing any operations, you **must** completely read the following documents.
### Specification Documents (Required Reading)
| Document | Purpose | Priority |
|----------|---------|----------|
| [specs/task-schema.md](specs/task-schema.md) | Task data structure specification | **P0 - Highest** |
| [specs/action-catalog.md](specs/action-catalog.md) | Action catalog | P1 |
### Template Files (Must read before generation)
| Document | Purpose |
|----------|---------|
| [templates/orchestrator-base.md](templates/orchestrator-base.md) | Orchestrator template |
| [templates/action-base.md](templates/action-base.md) | Action template |
```


@@ -1,6 +1,6 @@
---
description: Interactive collaborative analysis with documented discussions, CLI-assisted exploration, and evolving understanding. Supports depth control and iteration limits.
argument-hint: "TOPIC=\"<topic or question>\" [--depth=standard|deep|full] [--max-iterations=<n>] [--verbose]"
---
# Codex Analyze-With-File Prompt
@@ -22,6 +22,9 @@ Interactive collaborative analysis workflow with **documented discussion process
**$TOPIC**
- `--depth`: Analysis depth (standard|deep|full)
- `--max-iterations`: Max discussion rounds
## Execution Process
```


@@ -1,6 +1,6 @@
---
description: Convert brainstorm session output to parallel-dev-cycle input with idea selection and context enrichment. Unified parameter format.
argument-hint: "--session=<id> [--idea=<index>] [--auto] [--launch]"
---
# Brainstorm to Cycle Adapter
@@ -15,9 +15,10 @@ Bridge workflow that converts **brainstorm-with-file** output to **parallel-dev-
| Argument | Required | Description |
|----------|----------|-------------|
| --session | Yes | Brainstorm session ID (e.g., `BS-rate-limiting-2025-01-28`) |
| --idea | No | Pre-select idea by index (0-based, from top_ideas) |
| --auto | No | Auto-select top-scored idea without confirmation |
| --launch | No | Auto-launch parallel-dev-cycle without preview |
## Output

View File

@@ -1,6 +1,6 @@
---
description: Interactive brainstorming with multi-perspective analysis, idea expansion, and documented thought evolution
argument-hint: TOPIC="<idea or topic to brainstorm>"
description: Interactive brainstorming with multi-perspective analysis, idea expansion, and documented thought evolution. Supports perspective selection and idea limits.
argument-hint: "TOPIC=\"<idea or topic>\" [--perspectives=role1,role2,...] [--max-ideas=<n>] [--focus=<area>] [--verbose]"
---
# Codex Brainstorm-With-File Prompt
@@ -22,6 +22,10 @@ Interactive brainstorming workflow with **documented thought evolution**. Expand
**$TOPIC**
- `--perspectives`: Analysis perspectives (role1,role2,...)
- `--max-ideas`: Max number of ideas
- `--focus`: Focus area
## Execution Process
```
@@ -227,53 +231,97 @@ ${newFocusFromUser}
### Phase 2: Divergent Exploration (Multi-Perspective)
#### Step 2.1: Creative Perspective Analysis
Launch 3 parallel agents for multi-perspective brainstorming:
Explore from creative/innovative angle:
```javascript
const cliPromises = []
- Think beyond obvious solutions - what would be surprising/delightful?
- Cross-domain inspiration (what can we learn from other industries?)
- Challenge assumptions - what if the opposite were true?
- Generate 'moonshot' ideas alongside practical ones
- Consider future trends and emerging technologies
// Agent 1: Creative/Innovative Perspective (Gemini)
cliPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Creative brainstorming for '$TOPIC' - generate innovative, unconventional ideas
Success: 5+ unique creative solutions that push boundaries
Output:
TASK:
• Think beyond obvious solutions - what would be surprising/delightful?
• Explore cross-domain inspiration (what can we learn from other industries?)
• Challenge assumptions - what if the opposite were true?
• Generate 'moonshot' ideas alongside practical ones
• Consider future trends and emerging technologies
MODE: analysis
CONTEXT: @**/* | Topic: $TOPIC
Exploration vectors: ${explorationVectors.map(v => v.title).join(', ')}
EXPECTED:
- 5+ creative ideas with brief descriptions
- Each idea rated: novelty (1-5), potential impact (1-5)
- Key assumptions challenged
- Cross-domain inspirations
- One 'crazy' idea that might just work
#### Step 2.2: Pragmatic Perspective Analysis
CONSTRAINTS: ${brainstormMode === 'structured' ? 'Keep ideas technically feasible' : 'No constraints - think freely'}
" --tool gemini --mode analysis`,
run_in_background: true
})
)
Evaluate from implementation reality:
// Agent 2: Pragmatic/Implementation Perspective (Codex)
cliPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Pragmatic analysis for '$TOPIC' - focus on implementation reality
Success: Actionable approaches with clear implementation paths
- Technical feasibility of core concept
- Existing patterns/libraries that could help
- Integration with current codebase
- Implementation complexity estimates
- Potential technical blockers
- Incremental implementation approach
TASK:
• Evaluate technical feasibility of core concept
• Identify existing patterns/libraries that could help
• Consider integration with current codebase
• Estimate implementation complexity
• Highlight potential technical blockers
• Suggest incremental implementation approach
Output:
MODE: analysis
CONTEXT: @**/* | Topic: $TOPIC
Exploration vectors: ${explorationVectors.map(v => v.title).join(', ')}
EXPECTED:
- 3-5 practical implementation approaches
- Each rated: effort (1-5), risk (1-5), reuse potential (1-5)
- Technical dependencies identified
- Quick wins vs long-term solutions
- Recommended starting point
#### Step 2.3: Systematic Perspective Analysis
CONSTRAINTS: Focus on what can actually be built with current tech stack
" --tool codex --mode analysis`,
run_in_background: true
})
)
Analyze from architectural standpoint:
// Agent 3: Systematic/Architectural Perspective (Claude)
cliPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Systematic analysis for '$TOPIC' - architectural and structural thinking
Success: Well-structured solution framework with clear tradeoffs
- Decompose the problem into sub-problems
- Identify architectural patterns that apply
- Map dependencies and interactions
- Consider scalability implications
- Evaluate long-term maintainability
- Propose systematic solution structure
TASK:
• Decompose the problem into sub-problems
• Identify architectural patterns that apply
• Map dependencies and interactions
• Consider scalability implications
• Evaluate long-term maintainability
• Propose systematic solution structure
Output:
MODE: analysis
CONTEXT: @**/* | Topic: $TOPIC
Exploration vectors: ${explorationVectors.map(v => v.title).join(', ')}
EXPECTED:
- Problem decomposition diagram (text)
- 2-3 architectural approaches with tradeoffs
- Dependency mapping
@@ -281,6 +329,29 @@ Output:
- Recommended architecture pattern
- Risk matrix
CONSTRAINTS: Consider existing system architecture
" --tool claude --mode analysis`,
run_in_background: true
})
)
// Wait for all CLI analyses to complete
const [creativeResult, pragmaticResult, systematicResult] = await Promise.all(cliPromises)
// Parse results from each perspective
const creativeIdeas = parseCreativeResult(creativeResult)
const pragmaticApproaches = parsePragmaticResult(pragmaticResult)
const architecturalOptions = parseSystematicResult(systematicResult)
```
**Multi-Perspective Coordination**:
| Agent | Perspective | Tool | Focus Areas |
|-------|-------------|------|-------------|
| 1 | Creative/Innovative | Gemini | Novel ideas, cross-domain inspiration, moonshots |
| 2 | Pragmatic/Implementation | Codex | Feasibility, tech stack, blockers, quick wins |
| 3 | Systematic/Architectural | Claude | Decomposition, patterns, scalability, risks |
#### Step 2.4: Aggregate Multi-Perspective Findings
```javascript
```

View File

@@ -1,6 +1,6 @@
---
description: Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution
argument-hint: [--dry-run] [FOCUS="<area>"]
description: Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution. Supports targeted cleanup and confirmation.
argument-hint: "[--dry-run] [--focus=<area>] [--target=sessions|documents|dead-code] [--confirm]"
---
# Workflow Clean Command
@@ -16,6 +16,11 @@ Evidence-based intelligent cleanup command. Systematically identifies stale arti
**Focus area**: $FOCUS (or entire project if not specified)
**Mode**: $ARGUMENTS
- `--dry-run`: Preview cleanup without executing
- `--focus`: Focus area (module or path)
- `--target`: Cleanup target (sessions|documents|dead-code)
- `--confirm`: Skip confirmation, execute directly
## Execution Process
```

View File

@@ -1,6 +1,6 @@
---
description: Compact current session memory into structured text for session recovery
argument-hint: "[optional: session description]"
description: Compact current session memory into structured text for session recovery. Supports custom descriptions and tagging.
argument-hint: "[--description=\"...\"] [--tags=<tag1,tag2>] [--force]"
---
# Memory Compact Command (/memory:compact)
@@ -17,9 +17,11 @@ The `memory:compact` command **compresses current session working memory** into
## 2. Parameters
- `"session description"` (Optional): Session description to supplement objective
- `--description`: Custom session description (optional)
- Example: "completed core-memory module"
- Example: "debugging JWT refresh - suspected memory leak"
- `--tags`: Comma-separated tags for categorization (optional)
- `--force`: Skip confirmation, save directly
## 3. Structured Output Format

View File

@@ -1,6 +1,6 @@
---
description: Interactive hypothesis-driven debugging with documented exploration, understanding evolution, and analysis-assisted correction
argument-hint: BUG="<bug description or error message>"
description: Interactive hypothesis-driven debugging with documented exploration, understanding evolution, and analysis-assisted correction. Supports scope and focus control.
argument-hint: "BUG=\"<bug description or error message>\" [--scope=<path>] [--focus=<component>] [--depth=standard|deep] [--verbose]"
---
# Codex Debug-With-File Prompt
@@ -21,6 +21,10 @@ Enhanced evidence-based debugging with **documented exploration process**. Recor
**$BUG**
- `--scope`: Debug scope limit (file path)
- `--focus`: Focus component
- `--depth`: Debug depth (standard|deep)
## Execution Process
```

View File

@@ -1,6 +1,6 @@
---
description: Execute workflow tasks sequentially from session folder
argument-hint: SESSION=<path-to-session-folder>
description: Execute workflow tasks sequentially from session folder. Supports parallel execution and task filtering.
argument-hint: "SESSION=<path-to-session-folder> [--parallel] [--filter=<pattern>] [--skip-tests]"
---
# Workflow Execute (Codex Version)
@@ -13,6 +13,10 @@ argument-hint: SESSION=<path-to-session-folder>
Session folder path via `$SESSION` (e.g., `.workflow/active/WFS-auth-system`)
- `--parallel`: Execute tasks in parallel (default: sequential)
- `--filter`: Filter tasks by pattern (e.g., `IMPL-1.*`)
- `--skip-tests`: Skip test execution
## Task Tracking (JSON Source of Truth + Codex TODO Tool)
- **Source of truth**: Task state MUST be read from and written to `$SESSION/.task/IMPL-*.json`.

View File

@@ -1,6 +1,6 @@
---
description: Execute all solutions from issue queue with git commit after each solution
argument-hint: "--queue <queue-id> [--worktree [<existing-path>]]"
description: Execute all solutions from issue queue with git commit after each solution. Supports batch processing and execution control.
argument-hint: "--queue=<id> [--worktree=<path|new>] [--skip-tests] [--skip-build] [--dry-run] [--verbose]"
---
# Issue Execute (Codex Version)
@@ -19,6 +19,14 @@ Before starting execution, load project context:
This ensures execution follows project conventions and patterns.
## Parameters
- `--queue=<id>`: Queue ID to execute (REQUIRED)
- `--worktree=<path|new>`: Worktree path or 'new' for creating new worktree
- `--skip-tests`: Skip test execution during solution implementation
- `--skip-build`: Skip build step
- `--dry-run`: Preview execution without making changes
## Queue ID Requirement (MANDATORY)
**`--queue <queue-id>` parameter is REQUIRED**

View File

@@ -1,19 +1,38 @@
---
description: Create structured issue from GitHub URL or text description
argument-hint: "<github-url | text-description> [--priority 1-5]"
description: Create structured issue from GitHub URL or text description. Auto mode with --yes flag.
argument-hint: "[--yes|-y] <GITHUB_URL | TEXT_DESCRIPTION> [--priority PRIORITY] [--labels LABELS]"
---
# Issue New (Codex Version)
# Issue New Command
## Goal
## Core Principles
Create a new issue from a GitHub URL or text description. Detect input clarity and ask clarifying questions only when necessary. Register the issue for planning.
**Core Principle**: Requirement Clarity Detection → Ask only when needed
**Requirement Clarity Detection** → Ask only when needed
**Flexible Parameter Input** → Support multiple formats and flags
**Auto Mode Support** → `--yes`/`-y` skips confirmation questions
```
Clear Input (GitHub URL, structured text) → Direct creation
Unclear Input (vague description) → Minimal clarifying questions
Clear Input (GitHub URL, structured text) → Direct creation (no questions)
Unclear Input (vague description) → Clarifying questions (unless --yes)
Auto Mode (--yes or -y flag) → Skip all questions, use inference
```
## Parameter Formats
```bash
# GitHub URL (auto-detected)
/prompts:issue-new https://github.com/owner/repo/issues/123
/prompts:issue-new GH-123
# Text description with priority
/prompts:issue-new "Login fails with special chars" --priority 1
# Auto mode - skip all questions
/prompts:issue-new --yes "something broken"
/prompts:issue-new -y https://github.com/owner/repo/issues/456
# With labels
/prompts:issue-new "Database migration needed" --priority 2 --labels "enhancement,database"
```
## Issue Structure
@@ -78,25 +97,46 @@ echo '{"title":"...", "context":"...", "priority":3}' | ccw issue create
## Workflow
### Step 1: Analyze Input Clarity
### Phase 0: Parse Arguments & Flags
Parse and detect input type:
Extract parameters from user input:
```bash
# Input (Codex placeholder): full argument string, flags included
INPUT="$1" # GitHub URL or text description
# Parse flags embedded in the single argument string
PRIORITY=$(echo "$INPUT" | grep -oP '(?<=--priority )\d+' || echo "3")
LABELS=$(echo "$INPUT" | grep -oP '(?<=--labels )[^ ]+' | xargs)
AUTO_YES=$(echo "$INPUT" | grep -qE '(^| )(--yes|-y)( |$)' && echo "true" || echo "false")
# Extract main input (URL or text) - strip all flags
MAIN_INPUT=$(echo "$INPUT" | sed -E 's/ ?--priority +[0-9]+//; s/ ?--labels +[^ ]+//; s/ ?--yes//; s/(^| )-y( |$)/\1/' | xargs)
```
### Phase 1: Analyze Input & Clarity Detection
```javascript
// Detection patterns
const isGitHubUrl = input.match(/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/);
const isGitHubShort = input.match(/^#(\d+)$/);
const hasStructure = input.match(/(expected|actual|affects|steps):/i);
const mainInput = userInput.trim();
// Detect input type and clarity
const isGitHubUrl = mainInput.match(/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/);
const isGitHubShort = mainInput.match(/^GH-?\d+$/);
const hasStructure = mainInput.match(/(expected|actual|affects|steps):/i);
// Clarity score: 0-3
let clarityScore = 0;
if (isGitHubUrl || isGitHubShort) clarityScore = 3; // GitHub = fully clear
else if (hasStructure) clarityScore = 2; // Structured text = clear
else if (input.length > 50) clarityScore = 1; // Long text = somewhat clear
else if (mainInput.length > 50) clarityScore = 1; // Long text = somewhat clear
else clarityScore = 0; // Vague
// Auto mode override: if --yes/-y flag, skip all questions
const skipQuestions = process.env.AUTO_YES === 'true';
```
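The scoring logic above can be condensed into a standalone function for illustration. The regexes and thresholds are copied verbatim from the block; the function name is ours.

```javascript
// Sketch of the clarity scoring above, patterns taken from the block.
function clarityScore(rawInput) {
  const input = rawInput.trim();
  if (/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/.test(input)) return 3; // GitHub URL
  if (/^GH-?\d+$/.test(input)) return 3;                                // GH-123 short form
  if (/(expected|actual|affects|steps):/i.test(input)) return 2;        // structured text
  if (input.length > 50) return 1;                                      // long free text
  return 0;                                                             // vague
}

console.log(clarityScore('GH-123'));      // → 3
console.log(clarityScore('auth broken')); // → 0
```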
### Step 2: Extract Issue Data
### Phase 2: Extract Issue Data & Priority
**For GitHub URL/Short:**
@@ -104,13 +144,14 @@ else clarityScore = 0; // Vague
# Fetch issue details via gh CLI
gh issue view <issue-ref> --json number,title,body,labels,url
# Parse response
# Parse response with priority override
{
"id": "GH-123",
"title": "...",
"priority": $PRIORITY || 3, # Use --priority flag if provided
"source": "github",
"source_url": "https://github.com/...",
"labels": ["bug", "priority:high"],
"labels": $LABELS || [...existing labels],
"context": "..."
}
```
@@ -126,10 +167,12 @@ const expected = text.match(/expected:?\s*([^.]+)/i);
const actual = text.match(/actual:?\s*([^.]+)/i);
const affects = text.match(/affects?:?\s*([^.]+)/i);
// Build issue data
// Build issue data with flags
{
"id": id,
"title": text.split(/[.\n]/)[0].substring(0, 60),
"priority": $PRIORITY || 3, # From --priority flag
"labels": $LABELS?.split(',') || [], # From --labels flag
"source": "text",
"context": text.substring(0, 500),
"expected_behavior": expected?.[1]?.trim(),
@@ -137,7 +180,7 @@ const affects = text.match(/affects?:?\s*([^.]+)/i);
}
```
### Step 3: Context Hint (Conditional)
### Phase 3: Context Hint (Conditional)
For medium clarity (score 1-2) without affected components:
@@ -150,11 +193,13 @@ Add discovered files to `affected_components` (max 3 files).
**Note**: Skip this for GitHub issues (already have context) and vague inputs (needs clarification first).
### Step 4: Clarification (Only if Unclear)
### Phase 4: Conditional Clarification (Skip if Auto Mode)
**Only for clarity score < 2:**
**Only ask if**: clarity < 2 AND NOT in auto mode (skipQuestions = false)
Present a prompt asking for more details:
If auto mode (`--yes`/`-y`), proceed directly to creation with inferred details.
Otherwise, present minimal clarification:
```
Input unclear. Please describe:
@@ -165,17 +210,23 @@ Input unclear. Please describe:
Wait for user response, then update issue data.
### Step 5: GitHub Publishing Decision
### Phase 5: GitHub Publishing Decision (Skip if Already GitHub)
For non-GitHub sources AND NOT auto mode, ask:
```
Would you like to publish this issue to GitHub?
1. Yes, publish to GitHub (create issue and link it)
2. No, keep local only (store without GitHub sync)
```
### Step 6: Create Issue
In auto mode: Default to NO (keep local only, unless explicitly requested with --publish flag).
### Phase 6: Create Issue
**Create via CLI:**
@@ -198,7 +249,7 @@ GH_NUMBER=$(echo $GH_URL | grep -oE '/issues/([0-9]+)$' | grep -oE '[0-9]+')
ccw issue update ${ISSUE_ID} --github-url "${GH_URL}" --github-number ${GH_NUMBER}
```
### Step 7: Output Result
### Phase 7: Output Result
```markdown
## Issue Created
@@ -241,45 +292,99 @@ Before completing, verify:
| Very vague input | Ask clarifying questions |
| Issue already exists | Report duplicate, show existing |
## Examples
### Clear Input (No Questions)
```bash
# GitHub URL
codex -p "@.codex/prompts/issue-new.md https://github.com/org/repo/issues/42"
# → Fetches, parses, creates immediately
# Structured text
codex -p "@.codex/prompts/issue-new.md 'Login fails with special chars. Expected: success. Actual: 500'"
# → Parses structure, creates immediately
```
### Vague Input (Clarification)
```bash
codex -p "@.codex/prompts/issue-new.md 'auth broken'"
# → Asks: "Please describe the issue in more detail"
# → User provides details
# → Creates issue
```
## Start Execution
Parse input and detect clarity:
### Parameter Parsing (Phase 0)
```bash
# Get input from arguments
INPUT="${1}"
# Codex passes full input as $1
INPUT="$1"
# Detect if GitHub URL
if echo "${INPUT}" | grep -qE 'github\.com/.*/issues/[0-9]+'; then
echo "GitHub URL detected - fetching issue..."
gh issue view "${INPUT}" --json number,title,body,labels,url
else
echo "Text input detected - analyzing clarity..."
# Continue with text parsing
fi
# Extract flags
AUTO_YES=false
PRIORITY=3
LABELS=""
# Re-split the argument string into words and walk the list
set -- $INPUT
REST=()
while [[ $# -gt 0 ]]; do
  case "$1" in
    -y|--yes)
      AUTO_YES=true
      shift
      ;;
    --priority)
      PRIORITY="$2"
      shift 2
      ;;
    --labels)
      LABELS="$2"
      shift 2
      ;;
    *)
      REST+=("$1")   # Non-flag word: part of the URL or description
      shift
      ;;
  esac
done
# Remaining words form the main input (GitHub URL or description)
MAIN_INPUT="${REST[*]}"
```
Then follow the workflow based on detected input type.
### Execution Flow (All Phases)
```
1. Parse Arguments (Phase 0)
└─ Extract: AUTO_YES, PRIORITY, LABELS, MAIN_INPUT
2. Detect Input Type & Clarity (Phase 1)
├─ GitHub URL/Short? → Score 3 (clear)
├─ Structured text? → Score 2 (somewhat clear)
├─ Long text? → Score 1 (vague)
└─ Short text? → Score 0 (very vague)
3. Extract Issue Data (Phase 2)
├─ If GitHub: gh CLI fetch + parse
└─ If text: Parse structure + apply PRIORITY/LABELS flags
4. Context Hint (Phase 3, conditional)
└─ Only for clarity 1-2 AND no components → ACE search (max 3 files)
5. Clarification (Phase 4, conditional)
└─ If clarity < 2 AND NOT auto mode → Ask for details
└─ If auto mode (AUTO_YES=true) → Skip, use inferred data
6. GitHub Publishing (Phase 5, conditional)
├─ If source = github → Skip (already from GitHub)
└─ If source != github:
├─ If auto mode → Default NO (keep local)
└─ If manual → Ask user preference
7. Create Issue (Phase 6)
├─ Create local issue via ccw CLI
└─ If publishToGitHub → gh issue create → link
8. Output Result (Phase 7)
└─ Display: ID, title, source, GitHub status, next step
```
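The conditional branches in steps 5-6 of the flow above reduce to a small decision function. This is a sketch; the action names are illustrative, not part of the command.

```javascript
// Which interactive step, if any, comes next after data extraction (sketch).
function nextAction(clarityScore, autoYes, sourceIsGithub) {
  if (clarityScore < 2 && !autoYes) return 'ask-clarification'; // Phase 4
  if (!sourceIsGithub && !autoYes) return 'ask-publish';        // Phase 5
  return 'create-issue';                                        // Phase 6
}

console.log(nextAction(0, true, false)); // → create-issue (auto mode skips questions)
```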
## Quick Examples
```bash
# Auto mode - GitHub issue (no questions)
/prompts:issue-new -y https://github.com/org/repo/issues/42
# Standard mode - text with priority
/prompts:issue-new "Database connection timeout" --priority 1
# Auto mode - text with priority and labels
/prompts:issue-new --yes "Add caching layer" --priority 2 --labels "enhancement,performance"
# GitHub short format
/prompts:issue-new GH-123
# Complex text description
/prompts:issue-new "User login fails. Expected: redirect to dashboard. Actual: 500 error"
```

View File

@@ -1,6 +1,6 @@
---
description: Execute tasks based on in-memory plan, prompt description, or file content (Codex Subagent Version)
argument-hint: "[--in-memory] [\"task description\"|file-path]"
description: Execute tasks based on in-memory plan, prompt description, or file content with optimized Codex subagent orchestration. Supports multiple input modes and execution control.
argument-hint: "[--plan=in-memory|<file-path>] [--parallel] [--skip-tests] [--dry-run]"
---
# Workflow Lite-Execute Command (Codex Subagent Version)
@@ -28,14 +28,15 @@ Flexible task execution command with **optimized Codex subagent orchestration**.
### Command Syntax
```bash
/workflow:lite-execute [FLAGS] <INPUT>
# Flags
--in-memory Use plan from memory (called by lite-plan)
# Arguments
<input> Task description string, or path to file (required)
```
### Flags
- `--plan=in-memory|<file-path>`: Input mode (in-memory plan or file path)
- `--parallel`: Execute tasks in parallel (default: sequential)
- `--skip-tests`: Skip test execution
- `--dry-run`: Preview execution without making changes
## Input Modes
### Mode 1: In-Memory Plan

View File

@@ -1,6 +1,6 @@
---
description: Lightweight bug diagnosis and fix workflow with optimized Codex subagent patterns (merged mode)
argument-hint: BUG="<bug description or error message>" [HOTFIX="true"]
description: Lightweight bug diagnosis and fix workflow with optimized Codex subagent patterns. Supports severity and scope control.
argument-hint: "BUG=\"<description or error message>\" [--hotfix] [--severity=critical|high|medium|low] [--scope=<path>]"
---
# Workflow Lite-Fix Command (Codex Optimized Version)
@@ -28,6 +28,10 @@ Intelligent lightweight bug fixing command with **optimized subagent orchestrati
**Target bug**: $BUG
**Hotfix mode**: $HOTFIX
- `--hotfix`: Hotfix mode, prioritize speed
- `--severity`: Bug severity (critical|high|medium|low)
- `--scope`: Debug scope limit (file path)
## Execution Modes
### Mode Selection Based on Severity

View File

@@ -1,6 +1,6 @@
---
description: Lightweight interactive planning workflow with single-agent merged mode for explore → clarify → plan full flow
argument-hint: TASK="<task description or file.md path>"
description: Lightweight interactive planning workflow with single-agent merged mode for explore → clarify → plan full flow. Supports depth control and auto-clarification.
argument-hint: "TASK=\"<task description or file.md path>\" [--depth=standard|deep] [--auto-clarify] [--max-rounds=<n>] [--verbose]"
---
# Workflow Lite-Plan-A (Merged Mode)
@@ -35,6 +35,10 @@ Single-agent merged mode for lightweight planning. One agent handles exploration
**Target task**: $TASK
- `--depth`: Exploration depth (standard|deep)
- `--auto-clarify`: Auto clarify, skip confirmation
- `--max-rounds`: Max interaction rounds
## Execution Process
```

View File

@@ -1,6 +1,6 @@
---
description: Lightweight interactive planning workflow with hybrid mode - multi-agent parallel exploration + primary agent merge/clarify/plan
argument-hint: TASK="<task description or file.md path>"
description: Lightweight interactive planning workflow with hybrid mode - multi-agent parallel exploration + primary agent merge/clarify/plan. Supports agent count and iteration control.
argument-hint: "TASK=\"<task description or file.md path>\" [--num-agents=<n>] [--max-iterations=<n>] [--angles=role1,role2,...]"
---
# Workflow Lite-Plan-B (Hybrid Mode)
@@ -35,6 +35,10 @@ Hybrid mode for complex planning tasks. Multiple agents explore in parallel from
**Target task**: $TASK
- `--num-agents`: Number of parallel agents (default: 4)
- `--max-iterations`: Max iteration rounds
- `--angles`: Exploration angles (role1,role2,...)
## Execution Process
```

View File

@@ -1,6 +1,6 @@
---
description: Lightweight interactive planning workflow with Codex subagent orchestration, outputs plan.json after user confirmation
argument-hint: TASK="<task description or file.md path>" [EXPLORE="true"]
description: Lightweight interactive planning workflow with Codex subagent orchestration, outputs plan.json after user confirmation. Supports depth and exploration control.
argument-hint: "TASK=\"<description or file.md path>\" [--depth=standard|deep] [--explore] [--auto]"
---
# Workflow Lite-Plan Command (Codex Subagent Version)
@@ -22,6 +22,10 @@ Intelligent lightweight planning command with dynamic workflow adaptation based
**Target task**: $TASK
**Force exploration**: $EXPLORE
- `--depth`: Exploration depth (standard|deep)
- `--explore`: Force exploration phase
- `--auto`: Auto mode, skip confirmation
## Execution Process
```

View File

@@ -0,0 +1,530 @@
---
description: Merge multiple planning/brainstorm/analysis outputs, resolve conflicts, and synthesize unified plan. Multi-team input aggregation and plan crystallization
argument-hint: "PATTERN=\"<plan pattern or topic>\" [--rule=consensus|priority|hierarchy] [--output=<path>] [--auto] [--verbose]"
---
# Codex Merge-Plans-With-File Prompt
## Overview
Plan aggregation and conflict resolution workflow. Takes multiple planning artifacts (brainstorm conclusions, analysis recommendations, quick-plans, implementation plans) and synthesizes them into a unified, conflict-resolved execution plan.
**Core workflow**: Load Sources → Parse Plans → Conflict Analysis → Arbitration → Unified Plan
**Key features**:
- **Multi-Source Support**: brainstorm, analysis, quick-plan, IMPL_PLAN, task JSONs
- **Conflict Detection**: Identify contradictions across all input plans
- **Resolution Rules**: consensus, priority-based, or hierarchical resolution
- **Unified Synthesis**: Single authoritative plan from multiple perspectives
- **Decision Tracking**: Full audit trail of conflicts and resolutions
## Target Pattern
**$PATTERN**
- `--rule`: Conflict resolution rule (consensus|priority|hierarchy; default: consensus)
- `--output`: Output directory (default: .workflow/.merged/{pattern})
- `--auto`: Auto-resolve conflicts using rule, skip confirmations
- `--verbose`: Include detailed conflict analysis
## Execution Process
```
Phase 1: Discovery & Loading
├─ Search for artifacts matching pattern
├─ Load synthesis.json, conclusions.json, IMPL_PLAN.md, task JSONs
├─ Parse into normalized task structure
└─ Validate completeness
Phase 2: Plan Normalization
├─ Convert all formats to common task representation
├─ Extract: tasks, dependencies, effort, risks
├─ Identify scope and boundaries
└─ Aggregate recommendations
Phase 3: Conflict Detection (Parallel)
├─ Architecture conflicts: different design approaches
├─ Task conflicts: overlapping or duplicated tasks
├─ Effort conflicts: different estimates
├─ Risk conflicts: different risk assessments
├─ Scope conflicts: different feature sets
└─ Generate conflict matrix
Phase 4: Conflict Resolution
├─ Analyze source rationale for each conflict
├─ Apply resolution rule (consensus / priority / hierarchy)
├─ Escalate unresolvable conflicts to user (unless --auto)
├─ Document decision rationale
└─ Generate resolutions.json
Phase 5: Plan Synthesis
├─ Merge task lists (deduplicate, combine insights)
├─ Integrate dependencies
├─ Consolidate effort and risk estimates
├─ Generate execution sequence
└─ Output unified-plan.json
Output:
├─ .workflow/.merged/{sessionId}/merge.md (process log)
├─ .workflow/.merged/{sessionId}/source-index.json (input sources)
├─ .workflow/.merged/{sessionId}/conflicts.json (conflict matrix)
├─ .workflow/.merged/{sessionId}/resolutions.json (decisions)
├─ .workflow/.merged/{sessionId}/unified-plan.json (for execution)
└─ .workflow/.merged/{sessionId}/unified-plan.md (human-readable)
```
## Implementation Details
### Phase 1: Discover & Load Sources
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString() // UTC+8 wall-clock; the trailing Z is nominal
const mergeSlug = "$PATTERN".toLowerCase()
.replace(/[*?]/g, '-')
.replace(/[^a-z0-9\u4e00-\u9fa5-]+/g, '-')
.substring(0, 30)
const sessionId = `MERGE-${mergeSlug}-${getUtc8ISOString().substring(0, 10)}`
const sessionFolder = `.workflow/.merged/${sessionId}`
bash(`mkdir -p ${sessionFolder}`)
// Search paths for matching artifacts
const searchPaths = [
`.workflow/.brainstorm/*${$PATTERN}*/synthesis.json`,
`.workflow/.analysis/*${$PATTERN}*/conclusions.json`,
`.workflow/.planning/*${$PATTERN}*/synthesis.json`,
`.workflow/.plan/*${$PATTERN}*IMPL_PLAN.md`,
`.workflow/**/*${$PATTERN}*.json`
]
// Load and validate each source
const sourcePlans = []
for (const pattern of searchPaths) {
const matches = glob(pattern)
for (const path of matches) {
const plan = loadAndParsePlan(path)
if (plan?.tasks?.length > 0) {
sourcePlans.push({ path, type: inferType(path), plan })
}
}
}
```
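For illustration, the slug/session-id derivation above behaves like this. The regexes are copied from the block; the date argument is pinned here only to keep the example deterministic.

```javascript
// Session-id derivation from Phase 1, with the timestamp made explicit.
const mergeSlugFor = (pattern) => pattern.toLowerCase()
  .replace(/[*?]/g, '-')                       // glob chars become separators
  .replace(/[^a-z0-9\u4e00-\u9fa5-]+/g, '-')   // keep alphanumerics, CJK, dashes
  .substring(0, 30);

const sessionIdFor = (pattern, isoDate) => `MERGE-${mergeSlugFor(pattern)}-${isoDate}`;

console.log(sessionIdFor('auth*refactor', '2026-01-29'));
// → MERGE-auth-refactor-2026-01-29
```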
### Phase 2: Normalize Plans
Convert all source formats to common structure:
```javascript
const normalizedPlans = sourcePlans.map((src, idx) => ({
index: idx,
source: src.path,
type: src.type,
metadata: {
title: src.plan.title || `Plan ${idx + 1}`,
topic: src.plan.topic,
complexity: src.plan.complexity_level || 'unknown'
},
tasks: src.plan.tasks.map(task => ({
id: `T${idx}-${task.id || task.title.substring(0, 20)}`,
title: task.title,
description: task.description,
type: task.type || inferTaskType(task),
priority: task.priority || 'normal',
effort: { estimated: task.effort_estimate, from_plan: idx },
risk: { level: task.risk_level || 'medium', from_plan: idx },
dependencies: task.dependencies || [],
source_plan_index: idx
}))
}))
```
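Before fanning out to the CLI agents, the effort-variance check can be pre-computed deterministically over this normalized structure. A sketch under stated assumptions: `effort.estimated` holds numeric values, tasks are matched by identical titles, and 50% is the variance cutoff the agent prompts mention.

```javascript
// Group normalized tasks by title and flag estimates differing by >50% (sketch).
function effortConflicts(normalizedPlans, threshold = 0.5) {
  const byTitle = new Map();
  for (const plan of normalizedPlans) {
    for (const task of plan.tasks) {
      const key = task.title.toLowerCase();
      if (!byTitle.has(key)) byTitle.set(key, []);
      byTitle.get(key).push({ plan: plan.index, estimate: task.effort.estimated });
    }
  }
  const conflicts = [];
  for (const [title, entries] of byTitle) {
    const values = entries.map((e) => e.estimate).filter((v) => typeof v === 'number');
    if (values.length < 2) continue; // task appears in only one plan
    const min = Math.min(...values);
    const max = Math.max(...values);
    if ((max - min) / min > threshold) conflicts.push({ title, entries });
  }
  return conflicts;
}
```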
### Phase 3: Parallel Conflict Detection
Launch parallel agents to detect and analyze conflicts:
```javascript
// Parallel conflict detection with CLI agents
const conflictPromises = []
// Agent 1: Detect effort and task conflicts
conflictPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Detect effort conflicts and task duplicates across multiple plans
Success: Complete identification of conflicting estimates and duplicate tasks
TASK:
• Identify tasks with significantly different effort estimates (>50% variance)
• Detect duplicate/similar tasks across plans
• Analyze effort estimation reasoning
• Suggest resolution for each conflict
MODE: analysis
CONTEXT:
- Plan 1: ${JSON.stringify(normalizedPlans[0]?.tasks?.slice(0,3) || [], null, 2)}
- Plan 2: ${JSON.stringify(normalizedPlans[1]?.tasks?.slice(0,3) || [], null, 2)}
- [Additional plans...]
EXPECTED:
- Effort conflicts detected (task name, estimate in each plan, variance %)
- Duplicate task analysis (similar tasks, scope differences)
- Resolution recommendation for each conflict
- Confidence level for each detection
CONSTRAINTS: Focus on significant conflicts (>30% effort variance)
" --tool gemini --mode analysis`,
run_in_background: true
})
)
// Agent 2: Analyze architecture and scope conflicts
conflictPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Analyze architecture and scope conflicts across plans
Success: Clear identification of design approach differences and scope gaps
TASK:
• Identify different architectural approaches in plans
• Detect scope differences (features included/excluded)
• Analyze design philosophy conflicts
• Suggest approach to reconcile different visions
MODE: analysis
CONTEXT:
- Plan 1 architecture: ${normalizedPlans[0]?.metadata?.complexity || 'unknown'}
- Plan 2 architecture: ${normalizedPlans[1]?.metadata?.complexity || 'unknown'}
- Different design approaches detected: ${JSON.stringify(['approach1', 'approach2'])}
EXPECTED:
- Architecture conflicts identified (approach names and trade-offs)
- Scope conflicts (features/components in plan A but not B, vice versa)
- Design philosophy alignment/misalignment
- Recommendation for unified approach
- Pros/cons of each architectural approach
CONSTRAINTS: Consider both perspectives objectively
" --tool codex --mode analysis`,
run_in_background: true
})
)
// Agent 3: Analyze risk assessment conflicts
conflictPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Analyze risk assessment conflicts across plans
Success: Unified risk assessment with conflict resolution
TASK:
• Identify tasks/areas with significantly different risk ratings
• Analyze risk assessment reasoning
• Detect missing risks in some plans
• Propose unified risk assessment
MODE: analysis
CONTEXT:
- Risk areas with disagreement: [list areas]
- Plan 1 risk ratings: [risk matrix]
- Plan 2 risk ratings: [risk matrix]
EXPECTED:
- Risk conflicts identified (area, plan A rating, plan B rating)
- Explanation of why assessments differ
- Missing risks analysis (important in one plan but not others)
- Unified risk rating recommendation
- Confidence level for each assessment
CONSTRAINTS: Be realistic in risk assessment, not pessimistic
" --tool claude --mode analysis\`,
run_in_background: true
})
)
// Agent 4: Synthesize conflicts into resolution strategy
conflictPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Synthesize all conflicts into unified resolution strategy
Success: Clear path to merge plans with informed trade-off decisions
TASK:
• Analyze all detected conflicts holistically
• Identify which conflicts are critical vs. non-critical
• Propose resolution for each conflict type
• Suggest unified approach that honors valid insights from all plans
MODE: analysis
CONTEXT:
- Total conflicts detected: [number]
- Conflict types: effort, architecture, scope, risk
- Resolution rule: ${resolutionRule}
- Plan importance: ${normalizedPlans.map(p => p.metadata.title).join(', ')}
EXPECTED:
- Conflict priority ranking (critical, important, minor)
- Recommended resolution for each conflict
- Rationale for each recommendation
- Potential issues with proposed resolution
- Fallback options if recommendation not accepted
- Overall merge strategy and sequencing
CONSTRAINTS: Aim for solution that maximizes learning from all perspectives
" --tool gemini --mode analysis\`,
run_in_background: true
})
)
// Wait for all conflict detection agents to complete
const [effortConflicts, archConflicts, riskConflicts, resolutionStrategy] =
await Promise.all(conflictPromises)
// Parse and consolidate all conflict findings
const allConflicts = {
effort: parseEffortConflicts(effortConflicts),
architecture: parseArchConflicts(archConflicts),
risk: parseRiskConflicts(riskConflicts),
strategy: parseResolutionStrategy(resolutionStrategy),
timestamp: getUtc8ISOString()
}
Write(`${sessionFolder}/conflicts.json`, JSON.stringify(allConflicts, null, 2))
```
**Conflict Detection Workflow**:
| Agent | Conflict Type | Focus | Output |
|-------|--------------|--------|--------|
| Gemini | Effort & Tasks | Duplicate detection, estimate variance | Conflicts with variance %, resolution suggestions |
| Codex | Architecture & Scope | Design approach differences | Design conflicts, scope gaps, recommendations |
| Claude | Risk Assessment | Risk rating disagreements | Risk conflicts, missing risks, unified assessment |
| Gemini | Resolution Strategy | Holistic synthesis | Priority ranking, resolution path, trade-offs |
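The variance thresholds above (>50% for detection, >30% for reporting) can be computed per task before any agent is invoked; a minimal sketch of that comparison, assuming each normalized plan carries a `name` and tasks with numeric `effort` estimates (the helper name `detectEffortConflicts` is illustrative, not part of the prompt):

```javascript
// Flag tasks whose effort estimates diverge beyond a variance threshold.
// Variance is measured as (max - min) / min, expressed as a percentage.
function detectEffortConflicts(plans, thresholdPct = 50) {
  const byTitle = new Map()
  for (const plan of plans) {
    for (const task of plan.tasks) {
      const key = task.title.toLowerCase().trim()
      if (!byTitle.has(key)) byTitle.set(key, [])
      byTitle.get(key).push({ plan: plan.name, effort: task.effort })
    }
  }
  const conflicts = []
  for (const [title, estimates] of byTitle) {
    if (estimates.length < 2) continue // task appears in only one plan
    const values = estimates.map(e => e.effort)
    const min = Math.min(...values)
    const max = Math.max(...values)
    const variancePct = ((max - min) / min) * 100
    if (variancePct > thresholdPct) {
      conflicts.push({ title, estimates, variancePct: Math.round(variancePct) })
    }
  }
  return conflicts
}
```

Tasks are matched here by normalized title only; the agents above additionally catch near-duplicate tasks whose titles differ.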
### Phase 4: Resolve Conflicts
**Rule: Consensus (default)**
- Use median/average of conflicting estimates
- Merge scope differences
- Document minority viewpoints
**Rule: Priority**
- First plan has highest authority
- Later plans supplement but don't override
**Rule: Hierarchy**
- User ranks plan importance
- Higher-ranked plan wins conflicts
```javascript
const resolutions = {}
if (rule === 'consensus') {
for (const conflict of conflicts.effort) {
resolutions[conflict.task] = {
resolved: calculateMedian(conflict.estimates),
method: 'consensus-median',
rationale: 'Used median of all estimates'
}
}
} else if (rule === 'priority') {
for (const conflict of conflicts.effort) {
const primary = conflict.estimates[0] // First plan
resolutions[conflict.task] = {
resolved: primary.value,
method: 'priority-based',
rationale: `Selected from plan ${primary.from_plan} (highest priority)`
}
}
} else if (rule === 'hierarchy') {
// Request user ranking if not --auto
const ranking = getUserPlanRanking(normalizedPlans)
// Apply hierarchy-based resolution
}
Write(`${sessionFolder}/resolutions.json`, JSON.stringify(resolutions, null, 2))
```
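The consensus rule relies on `calculateMedian`; one possible implementation, assuming each estimate entry carries a numeric `value`:

```javascript
// Median of conflicting estimates: sort the values, take the middle one,
// or average the two middle values for an even count.
function calculateMedian(estimates) {
  const values = estimates.map(e => e.value).sort((a, b) => a - b)
  const mid = Math.floor(values.length / 2)
  return values.length % 2 !== 0
    ? values[mid]
    : (values[mid - 1] + values[mid]) / 2
}
```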
### Phase 5: Generate Unified Plan
```javascript
const unifiedPlan = {
session_id: sessionId,
merge_timestamp: getUtc8ISOString(),
summary: {
total_source_plans: sourcePlans.length,
original_tasks: allTasks.length,
merged_tasks: deduplicatedTasks.length,
conflicts_resolved: Object.keys(resolutions).length,
resolution_rule: rule
},
tasks: deduplicatedTasks.map(task => ({
id: task.id,
title: task.title,
description: task.description,
effort: task.resolved_effort,
risk: task.resolved_risk,
dependencies: task.merged_dependencies,
source_plans: task.contributing_plans
})),
execution_sequence: topologicalSort(deduplicatedTasks),
critical_path: identifyCriticalPath(deduplicatedTasks),
risks: aggregateRisks(deduplicatedTasks),
success_criteria: aggregateCriteria(deduplicatedTasks)
}
Write(`${sessionFolder}/unified-plan.json`, JSON.stringify(unifiedPlan, null, 2))
```
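`topologicalSort` (and the circular-dependency check it implies) can be sketched with Kahn's algorithm; this assumes every task has an `id` and a `dependencies` array of ids that all refer to tasks in the list (a sketch, not a mandated implementation):

```javascript
// Kahn's algorithm: repeatedly emit tasks with no unresolved dependencies.
// If tasks remain unemitted at the end, the leftover set forms a cycle.
function topologicalSort(tasks) {
  const inDegree = new Map(tasks.map(t => [t.id, 0]))
  const dependents = new Map(tasks.map(t => [t.id, []]))
  for (const t of tasks) {
    for (const dep of t.dependencies) {
      inDegree.set(t.id, inDegree.get(t.id) + 1)
      dependents.get(dep).push(t.id)
    }
  }
  const queue = tasks.filter(t => inDegree.get(t.id) === 0).map(t => t.id)
  const order = []
  while (queue.length > 0) {
    const id = queue.shift()
    order.push(id)
    for (const next of dependents.get(id)) {
      inDegree.set(next, inDegree.get(next) - 1)
      if (inDegree.get(next) === 0) queue.push(next)
    }
  }
  if (order.length !== tasks.length) {
    throw new Error('Circular dependencies detected')
  }
  const byId = new Map(tasks.map(t => [t.id, t]))
  return order.map(id => byId.get(id))
}
```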
### Phase 6: Generate Human-Readable Plan
```markdown
# Merged Planning Session
**Session ID**: ${sessionId}
**Pattern**: $PATTERN
**Created**: ${timestamp}
---
## Merge Summary
**Source Plans**: ${summary.total_source_plans}
**Original Tasks**: ${summary.original_tasks}
**Merged Tasks**: ${summary.merged_tasks}
**Conflicts Resolved**: ${summary.conflicts_resolved}
**Resolution Method**: ${summary.resolution_rule}
---
## Unified Task List
${tasks.map((task, i) => `
${i+1}. **${task.id}: ${task.title}**
- Effort: ${task.effort}
- Risk: ${task.risk}
- From plans: ${task.source_plans.join(', ')}
`).join('\n')}
---
## Execution Sequence
**Critical Path**: ${critical_path.join(' → ')}
---
## Conflict Resolution Report
${Object.entries(resolutions).map(([key, res]) => `
- **${key}**: ${res.rationale}
`).join('\n')}
---
## Next Steps
**Execute**:
\`\`\`
/workflow:unified-execute-with-file -p ${sessionFolder}/unified-plan.json
\`\`\`
```
## Session Folder Structure
```
.workflow/.merged/{sessionId}/
├── merge.md # Process log
├── source-index.json # All input sources
├── conflicts.json # Detected conflicts
├── resolutions.json # How resolved
├── unified-plan.json # Merged plan (for execution)
└── unified-plan.md # Human-readable
```
## Resolution Rules Comparison
| Rule | Method | Best For | Tradeoff |
|------|--------|----------|----------|
| **Consensus** | Median/average | Similar-quality inputs | May miss extremes |
| **Priority** | First wins | Clear authority order | Discards alternatives |
| **Hierarchy** | User-ranked | Mixed stakeholders | Needs user input |
## Input Format Support
| Source Type | Detection Pattern | Parsing |
|-------------|-------------------|---------|
| Brainstorm | `.brainstorm/*/synthesis.json` | Top ideas → tasks |
| Analysis | `.analysis/*/conclusions.json` | Recommendations → tasks |
| Quick-Plan | `.planning/*/synthesis.json` | Direct task list |
| IMPL_PLAN | `*IMPL_PLAN.md` | Markdown → tasks |
| Task JSON | `*.json` with `tasks` | Direct mapping |
## Error Handling
| Situation | Action |
|-----------|--------|
| No plans found | List available plans, suggest search terms |
| Incompatible format | Skip, continue with others |
| Circular dependencies | Alert user, suggest manual review |
| Unresolvable conflict | Require user decision (unless --auto) |
## Integration Flow
```
Brainstorm Sessions / Analyses / Plans
├─ synthesis.json (session 1)
├─ conclusions.json (session 2)
└─ synthesis.json (session 3)
        ↓
merge-plans-with-file
└─ unified-plan.json
        ↓
unified-execute-with-file
        ↓
Implementation
```
## Usage Patterns
**Pattern 1: Merge all auth-related plans**
```
PATTERN="authentication" --rule=consensus --auto
→ Finds all auth plans
→ Merges with consensus method
```
**Pattern 2: Prioritized merge**
```
PATTERN="payment" --rule=priority
→ First plan has authority
→ Others supplement
```
**Pattern 3: Team input merge**
```
PATTERN="feature-*" --rule=hierarchy
→ Asks for plan ranking
→ Applies hierarchy resolution
```
---
**Now execute merge-plans-with-file for pattern**: $PATTERN
---
description: Multi-agent rapid planning with minimal documentation, conflict resolution, and actionable synthesis. Lightweight planning from raw task, brainstorm, or analysis artifacts
argument-hint: "TOPIC=\"<planning topic or task>\" [--from=brainstorm|analysis|task|raw] [--perspectives=arch,impl,risk,decision] [--auto] [--verbose]"
---
# Codex Quick-Plan-With-File Prompt
## Overview
Multi-agent rapid planning workflow with **minimal documentation overhead**. Coordinates parallel agent analysis (architecture, implementation, validation, decision), synthesizes conflicting perspectives into actionable decisions, and generates an implementation-ready plan.
**Core workflow**: Parse Input → Parallel Analysis → Conflict Resolution → Plan Synthesis → Output
**Key features**:
- **Format Agnostic**: Consumes brainstorm conclusions, analysis recommendations, quick tasks, or raw descriptions
- **Minimal Docs**: Single plan.md (no lengthy timeline documentation)
- **Parallel Multi-Agent**: 4 concurrent perspectives for rapid analysis
- **Conflict Resolution**: Automatic conflict detection and synthesis
- **Actionable Output**: Direct task breakdown ready for execution
## Target Planning
**$TOPIC**
- `--from`: Input source type (brainstorm | analysis | task | raw) - auto-detected if omitted
- `--perspectives`: Which perspectives to use (arch, impl, risk, decision) - all by default
- `--auto`: Auto-confirm decisions, minimal user prompts
- `--verbose`: Verbose output with all reasoning
## Execution Process
```
Phase 1: Input Validation & Loading
├─ Parse input: topic | artifact reference
├─ Load artifact if referenced (synthesis.json | conclusions.json)
├─ Extract constraints and key requirements
└─ Initialize session folder
Phase 2: Parallel Multi-Agent Analysis (concurrent)
├─ Agent 1 (Architecture): Design decomposition, patterns, scalability
├─ Agent 2 (Implementation): Tech stack, feasibility, effort estimates
├─ Agent 3 (Validation): Risk matrix, testing strategy, monitoring
├─ Agent 4 (Decision): Recommendations, tradeoffs, execution strategy
└─ Aggregate findings into perspectives.json
Phase 3: Conflict Detection & Resolution
├─ Detect: effort conflicts, architecture conflicts, risk conflicts
├─ Analyze rationale for each conflict
├─ Synthesis via arbitration: generate unified recommendation
├─ Document conflicts and resolutions
└─ Update plan.md
Phase 4: Plan Synthesis
├─ Consolidate all insights
├─ Generate task breakdown (5-8 major tasks)
├─ Create execution strategy and dependencies
├─ Document assumptions and risks
└─ Output plan.md + synthesis.json
Output:
├─ .workflow/.planning/{sessionId}/plan.md (minimal, actionable)
├─ .workflow/.planning/{sessionId}/perspectives.json (agent findings)
├─ .workflow/.planning/{sessionId}/conflicts.json (decision points)
└─ .workflow/.planning/{sessionId}/synthesis.json (task breakdown for execution)
```
## Implementation Details
### Phase 1: Session Setup & Input Loading
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
// Parse input
const planSlug = "$TOPIC".toLowerCase()
.replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
.substring(0, 30)
const sessionId = `PLAN-${planSlug}-${getUtc8ISOString().substring(0, 10)}`
const sessionFolder = `.workflow/.planning/${sessionId}`
// Detect input type
let artifact = null
if ("$TOPIC".startsWith('BS-') || "$TOPIC".includes('brainstorm')) {
artifact = loadBrainstormArtifact("$TOPIC")
} else if ("$TOPIC".startsWith('ANL-') || "$TOPIC".includes('analysis')) {
artifact = loadAnalysisArtifact("$TOPIC")
}
bash(`mkdir -p ${sessionFolder}`)
```
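As a usage note, the slug regex keeps ASCII alphanumerics and CJK characters (`\u4e00-\u9fa5`) and collapses every other run into a hyphen, so mixed-language topics stay readable (the topic below is a hypothetical example):

```javascript
// Same transform as above, applied to a mixed English/Chinese topic.
const topic = 'Implement 实时通知 system!'
const slug = topic.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 30)
// slug === 'implement-实时通知-system-'
```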
### Phase 2: Parallel Multi-Agent Analysis
Run 4 agents in parallel using ccw cli:
```javascript
// Launch all 4 agents concurrently with Bash run_in_background
const agentPromises = []
// Agent 1 - Architecture (Gemini)
agentPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Architecture & high-level design for '${planningTopic}'
Success: Clear component decomposition and architectural approach
TASK:
• Decompose problem into major components/modules
• Identify architectural patterns and integration points
• Design component interfaces and data models
• Assess scalability and maintainability implications
• Propose architectural approach with rationale
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Component decomposition (list with responsibilities)
- Module interfaces and contracts
- Data flow between components
- Architectural patterns applied (e.g., MVC, Event-Driven, etc.)
- Scalability assessment (1-5 rating with rationale)
- Architectural risks identified
CONSTRAINTS: Focus on long-term maintainability and extensibility
" --tool gemini --mode analysis`,
run_in_background: true
})
)
// Agent 2 - Implementation (Codex)
agentPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Implementation approach & technical feasibility for '\${planningTopic}'
Success: Concrete implementation strategy with realistic estimates
TASK:
• Evaluate technical feasibility of proposed approach
• Identify required technologies and dependencies
• Estimate effort: analysis/design/coding/testing/deployment
• Suggest implementation phases and milestones
• Highlight technical blockers and challenges
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Technology stack recommendation (languages, frameworks, tools)
- Implementation complexity: high|medium|low (with justification)
- Effort breakdown (hours or complexity: analysis, design, coding, testing, deployment)
- Key technical decisions with tradeoffs explained
- Potential blockers and mitigation strategies
- Suggested implementation phases with sequencing
- Reusable components or libraries identified
CONSTRAINTS: Realistic with current tech stack
" --tool codex --mode analysis\`,
run_in_background: true
})
)
// Agent 3 - Validation & Risk (Claude)
agentPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Risk analysis and validation strategy for '\${planningTopic}'
Success: Comprehensive risk matrix with testing and deployment strategy
TASK:
• Identify technical risks and failure scenarios
• Assess timeline and resource risks
• Define validation/testing strategy (unit, integration, e2e, performance)
• Suggest monitoring and observability requirements
• Propose deployment strategy and rollback plan
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Risk matrix (likelihood × impact, each 1-5)
- Top 3 technical risks with mitigation approaches
- Top 3 timeline/resource risks with mitigation
- Testing strategy (what to test, how, when, acceptance criteria)
- Deployment strategy (staged rollout, blue-green, canary, etc.)
- Rollback plan and recovery procedures
- Monitoring/observability requirements (metrics, logs, alerts)
- Overall risk rating: low|medium|high (with confidence)
CONSTRAINTS: Be realistic, not pessimistic
" --tool claude --mode analysis\`,
run_in_background: true
})
)
// Agent 4 - Strategic Decision (Gemini)
agentPromises.push(
Bash({
command: `ccw cli -p "
PURPOSE: Strategic decisions and execution recommendations for '\${planningTopic}'
Success: Clear recommended approach with tradeoff analysis
TASK:
• Synthesize all perspectives into strategic recommendations
• Identify 2-3 critical decision points with recommended choices
• Clearly outline key tradeoffs (speed vs quality, scope vs timeline, risk vs cost)
• Propose go/no-go decision criteria and success metrics
• Suggest execution strategy and resource sequencing
MODE: analysis
CONTEXT: @**/*
${artifact ? `| Source artifact: ${artifact.type}` : ''}
EXPECTED:
- Primary recommendation with strong rationale (1-2 paragraphs)
- Alternative approaches with pros/cons (2-3 alternatives)
- 2-3 critical decision points:
- What decision needs to be made
- Trade-offs for each option
- Recommended choice and why
- Key trade-offs explained (what we're optimizing for: speed/quality/risk/cost)
- Success metrics and go/no-go criteria
- Resource requirements and critical path items
- Suggested execution sequencing and phases
CONSTRAINTS: Focus on actionable decisions, provide clear rationale
" --tool gemini --mode analysis\`,
run_in_background: true
})
)
// Wait for all agents to complete
const [archResult, implResult, riskResult, decisionResult] = await Promise.all(agentPromises)
// Parse and extract findings from each agent result
const architecture = parseArchitectureResult(archResult)
const implementation = parseImplementationResult(implResult)
const validation = parseValidationResult(riskResult)
const recommendation = parseDecisionResult(decisionResult)
```
**Agent Focus Areas**:
| Agent | Perspective | Focus Areas |
|-------|-------------|------------|
| Gemini (Design) | Architecture patterns | Components, interfaces, scalability, patterns |
| Codex (Build) | Implementation reality | Tech stack, complexity, effort, blockers |
| Claude (Validate) | Risk & testing | Risk matrix, testing strategy, deployment, monitoring |
| Gemini (Decide) | Strategic synthesis | Recommendations, trade-offs, critical decisions |
### Phase 3: Parse & Aggregate Perspectives
```javascript
const perspectives = {
session_id: sessionId,
topic: "$TOPIC",
timestamp: getUtc8ISOString(),
architecture: {
components: [...],
patterns: [...],
scalability_rating: 3,
risks: [...]
},
implementation: {
technology_stack: [...],
complexity: "medium",
effort_breakdown: { analysis: 2, design: 3, coding: 8, testing: 4 },
blockers: [...]
},
validation: {
risk_matrix: [...],
top_risks: [{ title, impact, mitigation }, ...],
testing_strategy: "...",
monitoring: [...]
},
recommendation: {
primary_approach: "...",
alternatives: [...],
critical_decisions: [...],
tradeoffs: [...]
}
}
Write(`${sessionFolder}/perspectives.json`, JSON.stringify(perspectives, null, 2))
```
### Phase 4: Conflict Detection
Detect conflicts:
- Effort variance: Are estimates consistent?
- Risk disagreement: Do arch and validation agree on risks?
- Scope confusion: Are recommendations aligned?
- Architecture mismatch: Do design and implementation agree?
For each conflict: document it, then run synthesis arbitration.
### Phase 5: Generate Plan
```markdown
# Quick Planning Session
**Session ID**: ${sessionId}
**Topic**: $TOPIC
**Created**: ${timestamp}
---
## Executive Summary
${synthesis.executive_summary}
**Complexity**: ${synthesis.complexity_level}
**Estimated Effort**: ${formatEffort(synthesis.effort_breakdown)}
**Optimization Focus**: ${synthesis.optimization_focus}
---
## Architecture
**Primary Pattern**: ${synthesis.architecture_approach}
**Key Components**:
${synthesis.key_components.map((c, i) => `${i+1}. ${c.name}: ${c.responsibility}`).join('\n')}
---
## Implementation Strategy
**Technology Stack**:
${synthesis.technology_stack.map(t => `- ${t}`).join('\n')}
**Phases**:
${synthesis.phases.map((p, i) => `${i+1}. ${p.name} (${p.effort})`).join('\n')}
---
## Risk Assessment
**Overall Risk**: ${synthesis.overall_risk_level}
**Top 3 Risks**:
${synthesis.top_risks.map((r, i) => `${i+1}. **${r.title}** (Impact: ${r.impact})\n Mitigation: ${r.mitigation}`).join('\n\n')}
---
## Task Breakdown (Ready for Execution)
${synthesis.tasks.map((task, i) => `
${i+1}. **${task.id}: ${task.title}** (Effort: ${task.effort})
${task.description}
Dependencies: ${task.dependencies.join(', ') || 'none'}
`).join('\n')}
---
## Next Steps
**Execute with**:
\`\`\`
/workflow:unified-execute-with-file -p ${sessionFolder}/synthesis.json
\`\`\`
**Detailed planning if needed**:
\`\`\`
/workflow:plan "Based on: $TOPIC"
\`\`\`
```
## Session Folder Structure
```
.workflow/.planning/{sessionId}/
├── plan.md # Minimal, actionable
├── perspectives.json # Agent findings
├── conflicts.json # Conflicts & resolutions (if any)
└── synthesis.json # Task breakdown for execution
```
## Multi-Agent Coordination
| Agent | Perspective | Tools | Output |
|-------|-------------|-------|--------|
| Gemini (Design) | Architecture patterns | Design thinking, cross-domain | Components, patterns, scalability |
| Codex (Build) | Implementation reality | Tech stack evaluation | Stack, effort, feasibility |
| Claude (Validate) | Risk & testing | Risk assessment, QA | Risks, testing strategy |
| Gemini (Decide) | Strategic synthesis | Decision analysis | Recommendations, tradeoffs |
## Error Handling
| Situation | Action |
|-----------|--------|
| Agents conflict | Arbitration agent synthesizes recommendation |
| Missing blockers | Continue with available context, note gaps |
| Unclear input | Ask for clarification on planning focus |
| Estimate too high | Suggest MVP approach or phasing |
## Integration Flow
```
Raw Task / Brainstorm / Analysis
        ↓
quick-plan-with-file (5-10 min)
├─ plan.md
├─ perspectives.json
└─ synthesis.json
        ↓
unified-execute-with-file
        ↓
Implementation
```
## Usage Patterns
**Pattern 1: Quick planning from task**
```
TOPIC="实现实时通知系统" --auto
→ Creates actionable plan in ~5 minutes
```
**Pattern 2: Convert brainstorm to execution plan**
```
TOPIC="BS-notifications-2025-01-28" --from=brainstorm
→ Reads synthesis.json from brainstorm
→ Generates implementation plan
```
**Pattern 3: From analysis to plan**
```
TOPIC="ANL-auth-2025-01-28" --from=analysis
→ Converts conclusions.json to executable plan
```
---
**Now execute quick-plan-with-file for topic**: $TOPIC
---
description: Universal execution engine consuming planning/brainstorm/analysis output. Coordinates multi-agents, manages dependencies, and tracks execution with unified progress logging.
argument-hint: "PLAN_PATH=\"<path>\" [EXECUTION_MODE=\"sequential|parallel\"] [AUTO_CONFIRM=\"yes|no\"] [EXECUTION_CONTEXT=\"<focus area>\"]"
---
# Codex Unified-Execute-With-File Prompt
## Overview
Universal execution engine that consumes **any** planning/brainstorm/analysis output and executes it with minimal progress tracking. Coordinates multiple agents (code-developer, test-fix-agent, doc-generator, cli-execution-agent), handles dependencies intelligently, and maintains unified execution timeline.
**Core workflow**: Load Plan → Parse Tasks → Validate Dependencies → Execute Waves → Track Progress → Report Results
**Key features**:
- **Plan Format Agnostic**: Consumes IMPL_PLAN.md, brainstorm synthesis.json, analysis conclusions.json, debug resolutions
- **execution-events.md**: Single source of truth - unified execution log with full agent history
- **Multi-Agent Orchestration**: Parallel execution where possible, sequential where needed
- **Incremental Execution**: Resume from failure point, no re-execution of completed tasks
- **Dependency Management**: Automatic topological sort and execution wave grouping
- **Knowledge Chain**: Each agent reads all previous execution history in context
## Target Execution Plan
**Plan Source**: $PLAN_PATH
- `EXECUTION_MODE`: Strategy (sequential|parallel)
- `AUTO_CONFIRM`: Skip confirmations (yes|no)
- `EXECUTION_CONTEXT`: Focus area/module (optional)
## Execution Process
```
Session Detection:
├─ Check if execution session exists
├─ If exists → Resume mode
└─ If not → New session mode
Phase 1: Plan Loading & Validation
├─ Detect and parse plan file (multiple formats supported)
├─ Extract and normalize tasks
├─ Validate dependencies (detect cycles)
├─ Create execution session folder
├─ Initialize execution.md and execution-events.md
└─ Pre-execution validation
Phase 2: Execution Orchestration
├─ Topological sort for execution order
├─ Group tasks into execution waves (parallel-safe groups)
├─ Execute waves sequentially (tasks within wave execute in parallel)
├─ Monitor completion and capture artifacts
├─ Update progress in execution.md and execution-events.md
└─ Handle failures with retry/skip/abort logic
Phase 3: Progress Tracking & Unified Event Logging
├─ execution-events.md: Append-only unified log (SINGLE SOURCE OF TRUTH)
├─ Each agent reads all previous events at start
├─ Agent executes task with full context from previous agents
├─ Agent appends execution event (success/failure) with artifacts and notes
└─ Next agent reads complete history → knowledge chain
Phase 4: Completion & Summary
├─ Collect execution statistics
├─ Update execution.md with final status
├─ execution-events.md contains complete execution record
└─ Report results and offer follow-up options
```
## Implementation Details
### Session Setup & Plan Detection
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
// Plan detection from $PLAN_PATH
let planPath = "$PLAN_PATH"
// If not provided, auto-detect
if (!planPath || planPath === "") {
const candidates = [
'.workflow/.plan/IMPL_PLAN.md',
'.workflow/plans/IMPL_PLAN.md',
'.workflow/IMPL_PLAN.md',
'.workflow/brainstorm/*/synthesis.json',
'.workflow/analyze/*/conclusions.json'
]
// Find most recent plan
planPath = findMostRecentFile(candidates)
if (!planPath) {
throw new Error("No execution plan found. Provide PLAN_PATH or ensure .workflow/IMPL_PLAN.md exists")
}
}
// Session setup
const executionMode = "$EXECUTION_MODE" || "parallel"
const autoConfirm = "$AUTO_CONFIRM" === "yes"
const executionContext = "$EXECUTION_CONTEXT" || ""
const planContent = Read(planPath)
const plan = parsePlan(planContent, planPath)
const executionId = `EXEC-${plan.slug}-${getUtc8ISOString().substring(0, 10)}-${randomId(4)}`
const executionFolder = `.workflow/.execution/${executionId}`
const executionPath = `${executionFolder}/execution.md`
const eventLogPath = `${executionFolder}/execution-events.md`
bash(`mkdir -p "${executionFolder}"`)
```
---
### Plan Format Parsers
Support multiple plan sources:
```javascript
function parsePlan(content, filePath) {
const ext = filePath.split('.').pop()
if (filePath.includes('IMPL_PLAN')) {
return parseImplPlan(content)
} else if (filePath.includes('brainstorm') && filePath.includes('synthesis')) {
return parseSynthesisPlan(content)
} else if (filePath.includes('analyze') && filePath.includes('conclusions')) {
return parseConclusionsPlan(content)
} else if (filePath.includes('debug') && filePath.includes('recommendations')) {
return parseDebugResolutionPlan(content)
} else if (ext === 'json' && content.includes('tasks')) {
return parseTaskJson(content)
}
throw new Error(`Unsupported plan format: ${filePath}`)
}
// IMPL_PLAN.md parser
function parseImplPlan(content) {
return {
type: 'impl-plan',
title: extractSection(content, 'Overview'),
phases: extractPhases(content),
tasks: extractTasks(content),
criticalFiles: extractCriticalFiles(content),
estimatedDuration: extractEstimate(content)
}
}
// Brainstorm synthesis.json parser
function parseSynthesisPlan(content) {
const synthesis = JSON.parse(content)
return {
type: 'brainstorm-synthesis',
title: synthesis.topic,
ideas: synthesis.top_ideas,
tasks: synthesis.top_ideas.map(idea => ({
id: `IDEA-${slugify(idea.title)}`,
type: 'investigation',
title: idea.title,
description: idea.description,
dependencies: [],
agent_type: 'universal-executor',
prompt: `Implement: ${idea.title}\n${idea.description}`,
expected_output: idea.next_steps
}))
}
}
```
---
## Phase 1: Plan Loading & Validation
### Step 1.1: Parse Plan and Extract Tasks
```javascript
const tasks = plan.tasks || parseTasksFromContent(plan)
// Normalize task structure
const normalizedTasks = tasks.map(task => ({
id: task.id || `TASK-${generateId()}`,
title: task.title || task.content,
description: task.description || task.activeForm,
type: task.type || inferTaskType(task), // 'code', 'test', 'doc', 'analysis', 'integration'
agent_type: task.agent_type || selectBestAgent(task),
dependencies: task.dependencies || [],
// Execution parameters
prompt: task.prompt || task.description,
files_to_modify: task.files_to_modify || [],
expected_output: task.expected_output || [],
// Metadata
priority: task.priority || 'normal',
parallel_safe: task.parallel_safe !== false,
// Status tracking
status: 'pending',
attempts: 0,
max_retries: 2
}))
// Validate and detect issues
const validation = {
cycles: detectDependencyCycles(normalizedTasks),
missing_dependencies: findMissingDependencies(normalizedTasks),
file_conflicts: detectOutputConflicts(normalizedTasks),
warnings: []
}
if (validation.cycles.length > 0) {
throw new Error(`Circular dependencies detected: ${validation.cycles.join(', ')}`)
}
```
### Step 1.2: Create execution.md
```javascript
const executionMarkdown = `# Execution Progress
**Execution ID**: ${executionId}
**Plan Source**: ${planPath}
**Started**: ${getUtc8ISOString()}
**Mode**: ${executionMode}
**Plan Summary**:
- Title: ${plan.title}
- Total Tasks: ${normalizedTasks.length}
- Phases: ${plan.phases?.length || 'N/A'}
---
## Execution Plan
### Task Overview
| Task ID | Title | Type | Agent | Dependencies | Status |
|---------|-------|------|-------|--------------|--------|
${normalizedTasks.map(t => `| ${t.id} | ${t.title} | ${t.type} | ${t.agent_type} | ${t.dependencies.join(',')} | ${t.status} |`).join('\n')}
### Dependency Graph
\`\`\`
${generateDependencyGraph(normalizedTasks)}
\`\`\`
### Execution Strategy
- **Mode**: ${executionMode}
- **Parallelization**: ${calculateParallel(normalizedTasks)}
- **Estimated Duration**: ${estimateTotalDuration(normalizedTasks)}
---
## Execution Timeline
*Updates as execution progresses*
---
## Current Status
${executionStatus()}
`
Write(executionPath, executionMarkdown)
```
### Step 1.3: Pre-Execution Confirmation
```javascript
if (!autoConfirm) {
AskUserQuestion({
questions: [{
question: `Ready to execute ${normalizedTasks.length} tasks, mode: ${executionMode}\n\nKey tasks:\n${normalizedTasks.slice(0, 3).map(t => `${t.id}: ${t.title}`).join('\n')}\n\nContinue?`,
header: "Confirmation",
multiSelect: false,
options: [
{ label: "Start execution", description: "Execute as planned" },
{ label: "Adjust parameters", description: "Modify execution parameters" },
{ label: "View details", description: "Show the full task list" },
{ label: "Cancel", description: "Exit without executing" }
]
}]
})
}
```
---
## Phase 2: Execution Orchestration
### Step 2.1: Determine Execution Order
```javascript
// Topological sort for execution order
const executionOrder = topologicalSort(normalizedTasks)
// For parallel mode, group tasks into waves
let executionWaves = []
if (executionMode === 'parallel') {
executionWaves = groupIntoWaves(executionOrder, /* parallelLimit */ 3)
} else {
executionWaves = executionOrder.map(task => [task])
}
```
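`groupIntoWaves` can be sketched greedily: walk the topologically sorted list and start a new wave whenever a task depends on something in the current wave, or the wave hits the parallel limit (the function name is the prompt's; the body is an assumed implementation that ignores the `parallel_safe` flag for brevity):

```javascript
// Group a topologically sorted task list into waves of parallel-safe tasks.
// A task joins the current wave only if none of its dependencies are in it
// and the wave is below the concurrency limit.
function groupIntoWaves(sortedTasks, parallelLimit = 3) {
  const waves = []
  let current = []
  let currentIds = new Set()
  for (const task of sortedTasks) {
    const dependsOnCurrent = task.dependencies.some(d => currentIds.has(d))
    if (dependsOnCurrent || current.length >= parallelLimit) {
      waves.push(current)
      current = []
      currentIds = new Set()
    }
    current.push(task)
    currentIds.add(task.id)
  }
  if (current.length > 0) waves.push(current)
  return waves
}
```

Because the input is already topologically sorted, a task's dependencies can only sit in the current or an earlier wave, so this grouping never violates ordering; it is not necessarily the minimal number of waves.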
### Step 2.2: Execute Task Waves
```javascript
let completedCount = 0
let failedCount = 0
const results = {}
for (let waveIndex = 0; waveIndex < executionWaves.length; waveIndex++) {
const wave = executionWaves[waveIndex]
console.log(`\n=== Wave ${waveIndex + 1}/${executionWaves.length} ===`)
console.log(`Tasks: ${wave.map(t => t.id).join(', ')}`)
// Launch tasks in parallel
const taskPromises = wave.map(task => executeTask(task, executionFolder))
// Wait for wave completion
const waveResults = await Promise.allSettled(taskPromises)
// Process results
for (let i = 0; i < waveResults.length; i++) {
const result = waveResults[i]
const task = wave[i]
if (result.status === 'fulfilled') {
results[task.id] = result.value
if (result.value.success) {
completedCount++
task.status = 'completed'
console.log(`✅ ${task.id}: Completed`)
} else if (result.value.retry) {
console.log(`⚠️ ${task.id}: Will retry`)
task.status = 'pending'
} else {
console.log(`❌ ${task.id}: Failed`)
}
} else {
console.log(`❌ ${task.id}: Execution error`)
}
}
// Update execution.md summary
appendExecutionTimeline(executionPath, waveIndex + 1, wave, waveResults)
}
```
### Step 2.3: Execute Individual Task with Unified Event Logging
```javascript
async function executeTask(task, executionFolder) {
const eventLogPath = `${executionFolder}/execution-events.md`
const startTime = Date.now()
// Declared before try so the catch block's failure entry can reference it
let agent = 'universal-executor'
try {
// Read previous execution events for context
let previousEvents = ''
if (fs.existsSync(eventLogPath)) {
previousEvents = Read(eventLogPath)
}
// Select agent based on task type
agent = selectAgent(task)
// Build execution context including previous agent outputs
const executionContext = `
## Previous Agent Executions (for reference)
${previousEvents}
---
## Current Task: ${task.id}
**Title**: ${task.title}
**Agent**: ${agent}
**Time**: ${getUtc8ISOString()}
### Description
${task.description}
### Context
- Modified Files: ${task.files_to_modify.join(', ')}
- Expected Output: ${task.expected_output.join(', ')}
### Requirements
${task.requirements || 'Follow the plan'}
### Constraints
${task.constraints || 'No breaking changes'}
`
// Execute based on agent type
let result
if (agent === 'code-developer' || agent === 'tdd-developer') {
result = await Task({
subagent_type: agent,
description: `Execute: ${task.title}`,
prompt: executionContext,
run_in_background: false
})
} else if (agent === 'test-fix-agent') {
result = await Task({
subagent_type: 'test-fix-agent',
description: `Execute Tests: ${task.title}`,
prompt: executionContext,
run_in_background: false
})
} else {
result = await Task({
subagent_type: 'universal-executor',
description: task.title,
prompt: executionContext,
run_in_background: false
})
}
// Capture artifacts
const artifacts = captureArtifacts(task, executionFolder)
// Append to unified execution events log
const eventEntry = `
## Task ${task.id} - COMPLETED ✅
**Timestamp**: ${getUtc8ISOString()}
**Duration**: ${calculateDuration(startTime)}ms
**Agent**: ${agent}
### Execution Summary
${generateSummary(result)}
### Key Outputs
${formatOutputs(result)}
### Generated Artifacts
${artifacts.map(a => `- **${a.type}**: \`${a.path}\` (${a.size})`).join('\n')}
### Notes for Next Agent
${generateNotesForNextAgent(result, task)}
---
`
appendToEventLog(eventLogPath, eventEntry)
return {
success: true,
task_id: task.id,
output: result,
artifacts: artifacts,
duration: calculateDuration(startTime)
}
} catch (error) {
// Append failure event to unified log
const failureEntry = `
## Task ${task.id} - FAILED ❌
**Timestamp**: ${getUtc8ISOString()}
**Duration**: ${calculateDuration(startTime)}ms
**Agent**: ${agent}
**Error**: ${error.message}
### Error Details
\`\`\`
${error.stack}
\`\`\`
### Recovery Notes for Next Attempt
${generateRecoveryNotes(error, task)}
---
`
appendToEventLog(eventLogPath, failureEntry)
// Handle failure: retry, skip, or abort
task.attempts++
if (task.attempts < task.max_retries) {
console.log(`⚠️ ${task.id}: Failed, retrying (${task.attempts}/${task.max_retries})`)
return { success: false, task_id: task.id, error: error.message, retry: true, duration: calculateDuration(startTime) }
} else if (!autoConfirm) {
const decision = await AskUserQuestion({
questions: [{
question: `Task failed: ${task.id}\nError: ${error.message}`,
header: "Decision",
multiSelect: false,
options: [
{ label: "Retry", description: "Re-run this task" },
{ label: "Skip", description: "Skip this task and continue with the next" },
{ label: "Abort", description: "Stop the entire execution" }
]
}]
})
if (decision === 'Retry') {
task.attempts = 0
return { success: false, task_id: task.id, error: error.message, retry: true, duration: calculateDuration(startTime) }
} else if (decision === 'Skip') {
task.status = 'skipped'
skipDependentTasks(task.id, normalizedTasks)
} else {
throw new Error('Execution aborted by user')
}
} else {
task.status = 'failed'
skipDependentTasks(task.id, normalizedTasks)
}
return {
success: false,
task_id: task.id,
error: error.message,
duration: calculateDuration(startTime)
}
}
}
function appendToEventLog(logPath, eventEntry) {
if (fs.existsSync(logPath)) {
const currentContent = Read(logPath)
Write(logPath, currentContent + eventEntry)
} else {
Write(logPath, eventEntry)
}
}
```
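`skipDependentTasks` is called in the failure path above but not defined here. A minimal sketch, assuming the same task shape, that transitively skips everything downstream of a failed or skipped task:

```javascript
// Transitively mark tasks that depend on a failed/skipped task as 'skipped'.
// Assumes each task has { id, dependencies, status } as used above.
function skipDependentTasks(failedId, tasks) {
  const toSkip = [failedId]
  while (toSkip.length > 0) {
    const id = toSkip.pop()
    for (const t of tasks) {
      // Skip each direct dependent once, then propagate to its dependents
      if (t.dependencies.includes(id) && t.status !== 'skipped') {
        t.status = 'skipped'
        toSkip.push(t.id)
      }
    }
  }
}
```

The `status !== 'skipped'` guard keeps the traversal terminating even if the dependency graph accidentally contains a cycle.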
---
## Phase 3: Progress Tracking & Event Logging
**execution-events.md** is the **SINGLE SOURCE OF TRUTH**:
- Append-only, chronological execution log
- Each task records: timestamp, duration, agent type, execution summary, artifacts, notes for next agent
- Failures include error details and recovery notes
- Format: Human-readable markdown with machine-parseable status indicators (✅/❌/⏳)
**Event log format** (appended entry):
```markdown
## Task {id} - {STATUS} {emoji}
**Timestamp**: {time}
**Duration**: {ms}
**Agent**: {type}
### Execution Summary
{What was done}
### Generated Artifacts
- `src/types/auth.ts` (2.3KB)
### Notes for Next Agent
- Key decisions made
- Potential issues
- Ready for: TASK-003
```
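Because each entry header carries a machine-parseable status marker, downstream tooling can recover per-task status with a simple scan. A sketch, assuming headers exactly in the `## Task {id} - {STATUS} {emoji}` form shown above:

```javascript
// Parse per-task status from execution-events.md entry headers.
// Assumes headers follow the '## Task {id} - {STATUS}' form shown above;
// a later entry for the same task overwrites an earlier one.
function parseEventLog(markdown) {
  const statuses = {}
  const headerRe = /^## Task (\S+) - (COMPLETED|FAILED|IN_PROGRESS)/
  for (const line of markdown.split('\n')) {
    const m = line.match(headerRe)
    if (m) statuses[m[1]] = m[2]
  }
  return statuses
}
```

Since the log is append-only, the last entry per task wins, which is exactly the retry semantics the engine needs.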
---
## Phase 4: Completion & Summary
After all tasks complete or max failures reached:
```javascript
const completed = normalizedTasks.filter(t => t.status === 'completed').length
const statistics = {
total_tasks: normalizedTasks.length,
completed: completed,
failed: normalizedTasks.filter(t => t.status === 'failed').length,
skipped: normalizedTasks.filter(t => t.status === 'skipped').length,
success_rate: (completed / normalizedTasks.length * 100).toFixed(1)
}
// Update execution.md with final status
appendExecutionSummary(executionPath, statistics)
```
**Post-Completion Options** (unless auto-confirm):
```javascript
AskUserQuestion({
questions: [{
question: "Execution complete. Any follow-up actions?",
header: "Next Steps",
multiSelect: true,
options: [
{ label: "View details", description: "View the full execution log" },
{ label: "Debug failures", description: "Debug the failed tasks" },
{ label: "Optimize execution", description: "Analyze the run for improvement suggestions" },
{ label: "Done", description: "No follow-up needed" }
]
}]
})
```
---
## Session Folder Structure
```
.workflow/.execution/{executionId}/
├── execution.md # Execution plan and overall status
└── execution-events.md # SINGLE SOURCE OF TRUTH - all agent executions
# Both human-readable AND machine-parseable
# Generated files go directly to project directories (not into execution folder)
# E.g., TASK-001 generates: src/types/auth.ts (not artifacts/src/types/auth.ts)
# execution-events.md records the actual project paths
```
---
## Agent Selection Strategy
```javascript
function selectAgent(task) {  // called from Step 2.3; expects the full task object
if (task.type === 'code' || task.type === 'implementation') {
return task.includes_tests ? 'tdd-developer' : 'code-developer'
} else if (task.type === 'test' || task.type === 'test-fix') {
return 'test-fix-agent'
} else if (task.type === 'doc' || task.type === 'documentation') {
return 'doc-generator'
} else if (task.type === 'analysis' || task.type === 'investigation') {
return 'cli-execution-agent'
} else if (task.type === 'debug') {
return 'debug-explore-agent'
} else {
return 'universal-executor'
}
}
```
---
## Parallelization Rules
```javascript
function calculateParallel(tasks) {
// Group tasks into execution waves
// Constraints:
// - Tasks with same file modifications must be sequential
// - Tasks with dependencies must wait
// - Max 3 parallel tasks per wave (resource constraint)
const waves = []
const completed = new Set()
while (completed.size < tasks.length) {
const available = tasks.filter(t =>
!completed.has(t.id) &&
t.dependencies.every(d => completed.has(d))
)
if (available.length === 0) break
// Check for file conflicts
const noConflict = []
const modifiedFiles = new Set()
for (const task of available) {
const conflicts = task.files_to_modify.some(f => modifiedFiles.has(f))
if (!conflicts && noConflict.length < 3) {
noConflict.push(task)
task.files_to_modify.forEach(f => modifiedFiles.add(f))
}
}
if (noConflict.length > 0) {
waves.push(noConflict)
noConflict.forEach(t => completed.add(t.id))
}
}
return waves
}
```
---
## Error Handling & Recovery
| Situation | Action |
|-----------|--------|
| Task timeout | Mark as timeout, ask user: retry/skip/abort |
| Missing dependency | Auto-skip dependent tasks, log warning |
| File conflict | Detect before execution, ask for resolution |
| Output mismatch | Validate against expected_output, flag for review |
| Agent unavailable | Fallback to universal-executor |
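The "agent unavailable" row reduces to a thin resolution wrapper. A sketch, where `availableAgents` is a hypothetical set of currently registered agent types:

```javascript
// Resolve the requested agent type, falling back to universal-executor
// when it is not registered. availableAgents is an assumed availability set.
function resolveAgent(requested, availableAgents) {
  return availableAgents.has(requested) ? requested : 'universal-executor'
}
```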
---
## Usage Recommendations
Use this execution engine when:
- Executing any planning document (IMPL_PLAN.md, brainstorm conclusions, analysis recommendations)
- Orchestrating multiple tasks with dependencies
- Tracking progress with minimal clutter
- Handling failures gracefully and resuming after interruption
- Parallelizing where possible while preserving correctness
Consumes output from:
- `/workflow:plan` → IMPL_PLAN.md
- `/workflow:brainstorm-with-file` → synthesis.json → execution
- `/workflow:analyze-with-file` → conclusions.json → execution
- `/workflow:debug-with-file` → recommendations → execution
- `/workflow:lite-plan` → task JSONs → execution
---
**Now execute the unified execution workflow for plan**: $PLAN_PATH


@@ -0,0 +1,214 @@
---
name: codex-issue-plan-execute
description: Autonomous issue planning and execution workflow for Codex. Supports batch issue processing with integrated planning, queuing, and execution stages. Triggers on "codex-issue", "plan execute issue", "issue workflow".
allowed-tools: Task, AskUserQuestion, Read, Write, Bash, Glob, Grep
---
# Codex Issue Plan-Execute Workflow
Streamlined autonomous workflow for Codex that integrates issue planning, queue management, and solution execution in a single stateful Skill. Supports batch processing with minimal queue overhead and dual-agent execution strategy.
## Architecture Overview
For complete architecture details, system diagrams, and design principles, see **[ARCHITECTURE.md](ARCHITECTURE.md)**.
Key concepts:
- **Persistent Dual-Agent System**: Two long-running agents (Planning + Execution) that maintain context across all tasks
- **Sequential Pipeline**: Issues → Planning Agent → Solutions → Execution Agent → Results
- **Unified Results**: All results accumulated in single `planning-results.json` and `execution-results.json` files
- **Efficient Communication**: Uses `send_input()` for task routing without agent recreation overhead
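The "unified results" pattern is a read-append-rewrite over one JSON document per phase. A sketch of the pure part, leaving the actual file Read/Write to the caller:

```javascript
// Append one result to a unified results document
// (the planning-results.json / execution-results.json pattern).
// Pure function over the parsed JSON; the caller reads/writes the file.
function appendResult(doc, phase, result) {
  const next = doc || { phase, created_at: new Date().toISOString(), results: [] }
  next.results.push(result)
  return next
}
```

Passing `null` on the first call bootstraps the document, so the caller needs no separate "create file" branch.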
---
## ⚠️ Mandatory Prerequisites
> **⛔ Do not skip**: Before performing any operation, you **must** read the two P0 specification documents below. Executing without first understanding these specs will produce output that fails the quality standards.
| Document | Purpose | When |
|----------|---------|------|
| [specs/issue-handling.md](specs/issue-handling.md) | Issue handling spec and data structures | **Required reading before execution** |
| [specs/solution-schema.md](specs/solution-schema.md) | Solution data structure and validation rules | **Required reading before execution** |
---
## Execution Flow
### Phase 1: Initialize Persistent Agents
**See**: [ARCHITECTURE.md](ARCHITECTURE.md) - system architecture
**See**: [phases/orchestrator.md](phases/orchestrator.md) - orchestration logic
→ Spawn Planning Agent with `prompts/planning-agent.md` (stays alive)
→ Spawn Execution Agent with `prompts/execution-agent.md` (stays alive)
### Phase 2: Planning Pipeline
**See**: [phases/actions/action-plan.md](phases/actions/action-plan.md), [specs/subagent-roles.md](specs/subagent-roles.md)
For each issue sequentially:
1. Send issue to Planning Agent via `send_input()` with planning request
2. Wait for Planning Agent to return solution JSON
3. Store result in unified `planning-results.json` array
4. Continue to next issue (agent stays alive)
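The per-issue loop can be sketched as follows. `agent.sendInput()` stands in for whatever `send_input()` primitive the host runtime provides (its signature here is assumed), and the same agent object is reused across all issues:

```javascript
// Sequential planning pipeline over one persistent agent.
// agent.sendInput(request) is a stand-in for the runtime's send_input();
// it is assumed to resolve with the solution JSON for one issue.
async function runPlanningPipeline(agent, issues) {
  const results = []
  for (const issue of issues) {
    // Agent stays alive between iterations; only the request changes
    const solution = await agent.sendInput({ action: 'plan', issue })
    results.push({ issue_id: issue.id, status: 'planned', solution })
  }
  return results // accumulated into planning-results.json by the caller
}
```

Phase 3 follows the same shape with the Execution Agent, iterating over the successful planning results instead of raw issues.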
### Phase 3: Execution Pipeline
**See**: [phases/actions/action-execute.md](phases/actions/action-execute.md), [specs/quality-standards.md](specs/quality-standards.md)
For each successful planning result sequentially:
1. Send solution to Execution Agent via `send_input()` with execution request
2. Wait for Execution Agent to complete implementation and testing
3. Store result in unified `execution-results.json` array
4. Continue to next solution (agent stays alive)
### Phase 4: Finalize
**See**: [phases/actions/action-complete.md](phases/actions/action-complete.md)
→ Close Planning Agent (after all issues planned)
→ Close Execution Agent (after all solutions executed)
→ Generate final report with statistics
### State Schema
```json
{
"status": "pending|running|completed",
"phase": "init|listing|planning|executing|complete",
"issues": {
"{issue_id}": {
"id": "ISS-xxx",
"status": "registered|planning|planned|executing|completed",
"solution_id": "SOL-xxx-1",
"planned_at": "ISO-8601",
"executed_at": "ISO-8601"
}
},
"queue": [
{
"item_id": "S-1",
"issue_id": "ISS-xxx",
"solution_id": "SOL-xxx-1",
"status": "pending|executing|completed"
}
],
"context": {
"work_dir": ".workflow/.scratchpad/...",
"total_issues": 0,
"completed_count": 0,
"failed_count": 0
},
"errors": []
}
```
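A lightweight guard over this schema can catch corrupted state before an action runs. This is a sketch checking only the fields the orchestrator relies on, not a full JSON-Schema validation:

```javascript
// Minimal structural check for the state object defined above.
// Returns a list of problems; an empty list means the state looks sane.
function validateState(state) {
  const errors = []
  if (!['pending', 'running', 'completed'].includes(state.status)) {
    errors.push(`invalid status: ${state.status}`)
  }
  if (!['init', 'listing', 'planning', 'executing', 'complete'].includes(state.phase)) {
    errors.push(`invalid phase: ${state.phase}`)
  }
  if (typeof state.issues !== 'object' || state.issues === null) {
    errors.push('issues must be an object keyed by issue id')
  }
  if (!Array.isArray(state.queue)) errors.push('queue must be an array')
  if (!Array.isArray(state.errors)) errors.push('errors must be an array')
  return errors
}
```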
---
## Directory Setup
```javascript
const timestamp = new Date().toISOString().slice(0,19).replace(/[-:T]/g, '');
const workDir = `.workflow/.scratchpad/codex-issue-${timestamp}`;
Bash(`mkdir -p "${workDir}"`);
Bash(`mkdir -p "${workDir}/solutions"`);
Bash(`mkdir -p "${workDir}/snapshots"`);
```
## Output Structure
```
.workflow/.scratchpad/codex-issue-{timestamp}/
├── planning-results.json # All planning results in single file
│ ├── phase: "planning"
│ ├── created_at: "ISO-8601"
│ └── results: [
│ { issue_id, solution_id, status, solution, planned_at }
│ ]
├── execution-results.json # All execution results in single file
│ ├── phase: "execution"
│ ├── created_at: "ISO-8601"
│ └── results: [
│ { issue_id, solution_id, status, commit_hash, files_modified, executed_at }
│ ]
└── final-report.md # Summary statistics and report
```
---
## Reference Documents by Phase
### 🔧 Setup & Understanding
For understanding the overall system architecture and execution flow
| Document | Purpose | Key Topics |
|----------|---------|-----------|
| [phases/orchestrator.md](phases/orchestrator.md) | Core orchestrator logic | Agent management, pipeline flow, state transitions |
| [phases/state-schema.md](phases/state-schema.md) | State structure definition | Full state model, validation rules, persistence |
| [specs/agent-roles.md](specs/agent-roles.md) | Agent roles and responsibilities | Detailed Planning & Execution Agent descriptions |
### 📋 Planning Phase
Consult during Phase 2 - planning logic and issue handling
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/actions/action-plan.md](phases/actions/action-plan.md) | Planning flow in detail | Understand the issue→solution transformation |
| [phases/actions/action-list.md](phases/actions/action-list.md) | Issue list handling | Learn issue loading and listing logic |
| [specs/issue-handling.md](specs/issue-handling.md) | Issue data spec | Understand issue structure and validation rules ✅ **Required** |
| [specs/solution-schema.md](specs/solution-schema.md) | Solution data structure | Understand the solution JSON format ✅ **Required** |
### ⚙️ Execution Phase
Consult during Phase 3 - implementation and verification logic
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/actions/action-execute.md](phases/actions/action-execute.md) | Execution flow in detail | Understand the solution→implementation logic |
| [specs/quality-standards.md](specs/quality-standards.md) | Quality standards and acceptance criteria | Check whether the implementation meets the bar |
### 🏁 Completion Phase
Consult during Phase 4 - wrap-up and reporting logic
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/actions/action-complete.md](phases/actions/action-complete.md) | Completion flow | Generate the final report and statistics |
### 🔍 Debugging & Troubleshooting
Consult when problems arise - quick diagnosis and resolution
| Issue | Solution Document |
|-------|------------------|
| State anomaly during execution | [phases/state-schema.md](phases/state-schema.md) - validate the state structure |
| Planning Agent output does not match expectations | [phases/actions/action-plan.md](phases/actions/action-plan.md) + [specs/solution-schema.md](specs/solution-schema.md) |
| Execution Agent implementation fails | [phases/actions/action-execute.md](phases/actions/action-execute.md) + [specs/quality-standards.md](specs/quality-standards.md) |
| Issue data format error | [specs/issue-handling.md](specs/issue-handling.md) |
### 📚 Architecture & Agent Definitions
Core design documents
| Document | Purpose | Notes |
|----------|---------|-------|
| [ARCHITECTURE.md](ARCHITECTURE.md) | System architecture and design principles | Required reading before starting |
| [specs/agent-roles.md](specs/agent-roles.md) | Agent role definitions | Detailed Planning & Execution Agent responsibilities |
| [prompts/planning-agent.md](prompts/planning-agent.md) | Unified Planning Agent prompt | Used to initialize the Planning Agent |
| [prompts/execution-agent.md](prompts/execution-agent.md) | Unified Execution Agent prompt | Used to initialize the Execution Agent |
---
## Usage Examples
### Batch Process Specific Issues
```bash
codex -p "@.codex/prompts/codex-issue-plan-execute ISS-001,ISS-002,ISS-003"
```
### Interactive Selection
```bash
codex -p "@.codex/prompts/codex-issue-plan-execute"
# Then select issues from the list
```
### Resume from Snapshot
```bash
codex -p "@.codex/prompts/codex-issue-plan-execute --resume snapshot-path"
```
---
*Skill Version: 1.0*
*Execution Mode: Autonomous*
*Status: Ready for Customization*


@@ -0,0 +1,173 @@
# Action: Complete
Complete the workflow and generate the final report.
## Purpose
Serialize the final state, generate an execution summary, and clean up temporary files.
## Preconditions
- [ ] `state.status === "running"`
- [ ] All issues processed, or the error limit reached
## Execution
```javascript
async function execute(state) {
const workDir = state.work_dir;
const issues = state.issues || {};
console.log("\n=== Finalizing Workflow ===");
// 1. Generate statistics
const totalIssues = Object.keys(issues).length;
const completedCount = Object.values(issues).filter(i => i.status === "completed").length;
const failedCount = Object.values(issues).filter(i => i.status === "failed").length;
const pendingCount = totalIssues - completedCount - failedCount;
const stats = {
total_issues: totalIssues,
completed: completedCount,
failed: failedCount,
pending: pendingCount,
success_rate: totalIssues > 0 ? ((completedCount / totalIssues) * 100).toFixed(1) : 0,
duration_ms: new Date() - new Date(state.created_at)
};
console.log("\n=== Summary ===");
console.log(`Total Issues: ${stats.total_issues}`);
console.log(`✓ Completed: ${stats.completed}`);
console.log(`✗ Failed: ${stats.failed}`);
console.log(`○ Pending: ${stats.pending}`);
console.log(`Success Rate: ${stats.success_rate}%`);
console.log(`Duration: ${(stats.duration_ms / 1000).toFixed(1)}s`);
// 2. Generate detailed report
const reportLines = [
"# Execution Report",
"",
`## Summary`,
`- Total Issues: ${stats.total_issues}`,
`- Completed: ${stats.completed}`,
`- Failed: ${stats.failed}`,
`- Pending: ${stats.pending}`,
`- Success Rate: ${stats.success_rate}%`,
`- Duration: ${(stats.duration_ms / 1000).toFixed(1)}s`,
"",
"## Results by Issue"
];
Object.values(issues).forEach((issue, index) => {
const status = issue.status === "completed" ? "✓" : issue.status === "failed" ? "✗" : "○";
reportLines.push(`### ${status} [${index + 1}] ${issue.id}: ${issue.title}`);
reportLines.push(`- Status: ${issue.status}`);
if (issue.solution_id) {
reportLines.push(`- Solution: ${issue.solution_id}`);
}
if (issue.planned_at) {
reportLines.push(`- Planned: ${issue.planned_at}`);
}
if (issue.executed_at) {
reportLines.push(`- Executed: ${issue.executed_at}`);
}
if (issue.error) {
reportLines.push(`- Error: ${issue.error}`);
}
reportLines.push("");
});
if (state.errors && state.errors.length > 0) {
reportLines.push("## Errors");
state.errors.forEach(error => {
reportLines.push(`- [${error.timestamp}] ${error.action}: ${error.message}`);
});
reportLines.push("");
}
reportLines.push("## Files Generated");
reportLines.push(`- Work Directory: ${workDir}`);
reportLines.push(`- State File: ${workDir}/state.json`);
reportLines.push(`- Execution Results: ${workDir}/execution-results.json`);
reportLines.push(`- Solutions: ${workDir}/solutions/`);
reportLines.push(`- Snapshots: ${workDir}/snapshots/`);
// 3. Save the report
const reportPath = `${workDir}/final-report.md`;
Write(reportPath, reportLines.join("\n"));
// 4. Save the final state
const finalState = {
...state,
status: "completed",
phase: "completed",
completed_at: new Date().toISOString(),
completed_actions: [...state.completed_actions, "action-complete"],
context: {
...state.context,
...stats
}
};
Write(`${workDir}/state.json`, JSON.stringify(finalState, null, 2));
// 5. Save the summary JSON
Write(`${workDir}/summary.json`, JSON.stringify({
status: "completed",
stats: stats,
report_file: reportPath,
work_dir: workDir,
completed_at: new Date().toISOString()
}, null, 2));
// 6. Print completion message
console.log(`\n✓ Workflow completed`);
console.log(`📄 Report: ${reportPath}`);
console.log(`📁 Working directory: ${workDir}`);
return {
stateUpdates: {
status: "completed",
phase: "completed",
completed_at: new Date().toISOString(),
completed_actions: [...state.completed_actions, "action-complete"],
context: finalState.context
}
};
}
```
## State Updates
```javascript
return {
stateUpdates: {
status: "completed",
phase: "completed",
completed_at: timestamp,
completed_actions: [...state.completed_actions, "action-complete"],
context: {
total_issues: stats.total_issues,
completed_count: stats.completed,
failed_count: stats.failed,
success_rate: stats.success_rate
}
}
};
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Report generation fails | Print a text summary to the console |
| File write fails | Finish anyway; allow manual saving |
| Permission error | Use an alternate directory |
## Next Actions (Hints)
- None (terminal state)
- The user may choose to:
  - View the report: `cat {report_path}`
  - Resume and retry failed issues: `codex issue:plan-execute --resume {work_dir}`
  - Clean up temporary files: `rm -rf {work_dir}`


@@ -0,0 +1,220 @@
# Action: Execute Solutions
Execute the planned solutions in queue order.
## Purpose
Load the planned solutions and use a subagent to execute all tasks and commit the changes.
## Preconditions
- [ ] `state.status === "running"`
- [ ] Issues with a `solution_id` exist (from the planning phase)
## Execution
```javascript
async function execute(state) {
const workDir = state.work_dir;
const issues = state.issues || {};
const queue = state.queue || [];
// 1. Build the execution queue (from planned issues)
const plannedIssues = Object.values(issues).filter(i => i.status === "planned");
if (plannedIssues.length === 0) {
console.log("No planned solutions to execute");
return { stateUpdates: { queue } };
}
console.log(`\n=== Executing ${plannedIssues.length} Solutions ===`);
// 2. Execute each solution sequentially
const executionResults = [];
for (let i = 0; i < plannedIssues.length; i++) {
const issue = plannedIssues[i];
const solutionId = issue.solution_id;
console.log(`\n[${i + 1}/${plannedIssues.length}] Executing: ${solutionId}`);
try {
// Create a snapshot (for recovery)
const beforeSnapshot = {
timestamp: new Date().toISOString(),
phase: "before-execute",
issue_id: issue.id,
solution_id: solutionId,
state: { ...state }
};
Write(`${workDir}/snapshots/snapshot-before-execute-${i}.json`, JSON.stringify(beforeSnapshot, null, 2));
// Run the execution subagent
const executionPrompt = `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/issue-execute-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
---
Goal: Execute solution "${solutionId}" for issue "${issue.id}"
Scope:
- CAN DO: Implement tasks, run tests, commit code
- CANNOT DO: Push to remote or create PRs without approval
- Directory: ${process.cwd()}
Solution ID: ${solutionId}
Load solution details:
- Read: ${workDir}/solutions/${issue.id}-plan.json
Execution steps:
1. Parse all tasks from solution
2. Execute each task: implement → test → verify
3. Commit once for all tasks with formatted summary
4. Report completion
Quality bar:
- All acceptance criteria verified
- Tests passing
- Commit message follows conventions
Return: JSON with files_modified[], commit_hash, status
`;
const result = await Task({
subagent_type: "universal-executor",
run_in_background: false,
description: `Execute solution ${solutionId}`,
prompt: executionPrompt
});
// Parse the execution result
let execResult;
try {
execResult = typeof result === "string" ? JSON.parse(result) : result;
} catch {
execResult = { status: "executed", commit_hash: "unknown" };
}
// Save the execution result
Write(`${workDir}/solutions/${issue.id}-execution.json`, JSON.stringify({
solution_id: solutionId,
issue_id: issue.id,
status: "completed",
executed_at: new Date().toISOString(),
execution_result: execResult
}, null, 2));
// Update issue status
issues[issue.id].status = "completed";
issues[issue.id].executed_at = new Date().toISOString();
// Update the queue item
const queueIndex = queue.findIndex(q => q.solution_id === solutionId);
if (queueIndex >= 0) {
queue[queueIndex].status = "completed";
}
// Update the ccw issue tracker
try {
Bash(`ccw issue update ${issue.id} --status completed`);
} catch (error) {
console.log(`Note: Could not update ccw status (${error.message})`);
}
console.log(`✓ ${solutionId} completed`);
executionResults.push({
issue_id: issue.id,
solution_id: solutionId,
status: "completed",
commit: execResult.commit_hash
});
state.context.completed_count++;
} catch (error) {
console.error(`✗ Execution failed for ${solutionId}: ${error.message}`);
// Mark as failed
issues[issue.id].status = "failed";
issues[issue.id].error = error.message;
state.context.failed_count++;
executionResults.push({
issue_id: issue.id,
solution_id: solutionId,
status: "failed",
error: error.message
});
}
}
// 3. Save the execution results summary
Write(`${workDir}/execution-results.json`, JSON.stringify({
total: plannedIssues.length,
completed: state.context.completed_count,
failed: state.context.failed_count,
results: executionResults,
timestamp: new Date().toISOString()
}, null, 2));
return {
stateUpdates: {
issues: issues,
queue: queue,
context: state.context,
completed_actions: [...state.completed_actions, "action-execute"]
}
};
}
```
## State Updates
```javascript
return {
stateUpdates: {
issues: {
[issue.id]: {
...issue,
status: "completed|failed",
executed_at: timestamp,
error: errorMessage
}
},
queue: [
...queue.map(item =>
item.solution_id === solutionId
? { ...item, status: "completed|failed" }
: item
)
],
context: {
...state.context,
completed_count: newCompletedCount,
failed_count: newFailedCount
}
}
};
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Task execution fails | Mark as failed, continue with the next |
| Tests fail | Do not commit; mark as failed |
| Commit fails | Save a snapshot for recovery |
| Subagent timeout | Record the timeout, continue |
## Next Actions (Hints)
- Execution finished: move to the action-complete phase
- Failures present: user decides whether to retry
- All completed: generate the final report


@@ -0,0 +1,86 @@
# Action: Initialize
Initialize the Skill execution state and working directory.
## Purpose
Set up the initial state, create the working directory, and prepare the execution environment.
## Preconditions
- [ ] `state.status === "pending"`
## Execution
```javascript
async function execute(state) {
// Create the working directory
const timestamp = new Date().toISOString().slice(0,19).replace(/[-:T]/g, '');
const workDir = `.workflow/.scratchpad/codex-issue-${timestamp}`;
Bash(`mkdir -p "${workDir}/solutions" "${workDir}/snapshots"`);
// Initialize state
const initialState = {
status: "running",
phase: "initialized",
work_dir: workDir,
issues: {},
queue: [],
completed_actions: ["action-init"],
context: {
total_issues: 0,
completed_count: 0,
failed_count: 0
},
errors: [],
created_at: new Date().toISOString(),
updated_at: new Date().toISOString()
};
// Save the initial state
Write(`${workDir}/state.json`, JSON.stringify(initialState, null, 2));
Write(`${workDir}/state-history.json`, JSON.stringify([{
timestamp: initialState.created_at,
phase: "init",
completed_actions: 1,
issues_count: 0
}], null, 2));
console.log(`✓ Initialized: ${workDir}`);
return {
stateUpdates: {
status: "running",
phase: "initialized",
work_dir: workDir,
completed_actions: ["action-init"]
}
};
}
```
## State Updates
```javascript
return {
stateUpdates: {
status: "running",
phase: "initialized",
work_dir: workDir,
completed_actions: ["action-init"]
}
};
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Directory creation fails | Check permissions; use a temporary directory |
| File write fails | Retry or switch storage location |
## Next Actions (Hints)
- Success: enter the listing phase and run action-list
- Failure: abort the workflow


@@ -0,0 +1,165 @@
# Action: List Issues
List issues and support interactive selection by the user.
## Purpose
Show the status of all current issues and collect the user's planning/execution intent.
## Preconditions
- [ ] `state.status === "running"`
## Execution
```javascript
async function execute(state) {
// 1. Load or initialize issues
let issues = state.issues || {};
// 2. Load issues from `ccw issue list` or from arguments supplied by the user,
// depending on whether issue IDs were provided on the command line.
// Example: ccw codex issue:plan-execute ISS-001,ISS-002
// For this walkthrough, assume issues are loaded from issues.jsonl
try {
const issuesListOutput = Bash("ccw issue list --status registered,planned --json").output;
const issuesList = JSON.parse(issuesListOutput);
issuesList.forEach(issue => {
if (!issues[issue.id]) {
issues[issue.id] = {
id: issue.id,
title: issue.title,
status: "registered",
solution_id: null,
planned_at: null,
executed_at: null,
error: null
};
}
});
} catch (error) {
console.log("Note: Could not load issues from ccw issue list");
// Fall back to issues from arguments, or an empty list
}
// 3. Show current status
const totalIssues = Object.keys(issues).length;
const registeredCount = Object.values(issues).filter(i => i.status === "registered").length;
const plannedCount = Object.values(issues).filter(i => i.status === "planned").length;
const completedCount = Object.values(issues).filter(i => i.status === "completed").length;
console.log("\n=== Issue Status ===");
console.log(`Total: ${totalIssues} | Registered: ${registeredCount} | Planned: ${plannedCount} | Completed: ${completedCount}`);
if (totalIssues === 0) {
console.log("\nNo issues found. Please create issues first using 'ccw issue init'");
return {
stateUpdates: {
context: {
...state.context,
total_issues: 0
}
}
};
}
// 4. Show the detailed list
console.log("\n=== Issue Details ===");
Object.values(issues).forEach((issue, index) => {
const status = issue.status === "completed" ? "✓" : issue.status === "planned" ? "→" : "○";
console.log(`${status} [${index + 1}] ${issue.id}: ${issue.title} (${issue.status})`);
});
// 5. Ask the user about the next step
const issueIds = Object.keys(issues);
const pendingIds = issueIds.filter(id => issues[id].status === "registered");
if (pendingIds.length === 0) {
console.log("\nNo unplanned issues. Ready to execute planned solutions.");
return {
stateUpdates: {
context: {
...state.context,
total_issues: totalIssues
}
}
};
}
// 6. Show options
console.log("\nNext action:");
console.log("- Enter 'p' to PLAN selected issues");
console.log("- Enter 'x' to EXECUTE planned solutions");
console.log("- Enter 'a' to plan ALL pending issues");
console.log("- Enter 'q' to QUIT");
const response = await AskUserQuestion({
questions: [{
question: "Select issues to plan (comma-separated numbers, or 'all'):",
header: "Selection",
multiSelect: false,
options: pendingIds.slice(0, 4).map(id => ({
label: `${issues[id].id}: ${issues[id].title}`,
description: `Current status: ${issues[id].status}`
}))
}]
});
// 7. Mark the selected issues as "planning"
const selectedIds = [];
if (response.Selection === "all") {
selectedIds.push(...pendingIds);
} else {
// Parse the user's selection
selectedIds.push(response.Selection);
}
selectedIds.forEach(issueId => {
if (issues[issueId]) {
issues[issueId].status = "planning";
}
});
return {
stateUpdates: {
issues: issues,
context: {
...state.context,
total_issues: totalIssues
}
}
};
}
```
## State Updates
```javascript
return {
stateUpdates: {
issues: issues,
context: {
total_issues: Object.keys(issues).length,
registered_count: registeredCount,
planned_count: plannedCount,
completed_count: completedCount
}
}
};
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Issue loading fails | Continue with an empty list |
| Invalid user input | Ask the user to select again |
| List display error | Fall back to JSON output |
## Next Actions (Hints)
- Issues in "planning" state exist: run action-plan
- No pending issues: run action-execute
- User cancels: abort


@@ -0,0 +1,170 @@
# Action: Plan Solutions
Generate execution solutions for the selected issues.
## Purpose
Use a subagent to analyze issues and generate solutions, with support for multiple solution candidates and automatic binding.
## Preconditions
- [ ] `state.status === "running"`
- [ ] Issues with `status === "planning"` exist
## Execution
```javascript
async function execute(state) {
const workDir = state.work_dir;
const issues = state.issues || {};
// 1. Identify the issues that need planning
const planningIssues = Object.values(issues).filter(i => i.status === "planning");
if (planningIssues.length === 0) {
console.log("No issues to plan");
return { stateUpdates: { issues } };
}
console.log(`\n=== Planning ${planningIssues.length} Issues ===`);
// 2. Build a planning subagent prompt for each issue
const planningAgents = planningIssues.map(issue => ({
issue_id: issue.id,
issue_title: issue.title,
prompt: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/issue-plan-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json
4. Read schema: ~/.claude/workflows/cli-templates/schemas/solution-schema.json
---
Goal: Plan solution for issue "${issue.id}: ${issue.title}"
Scope:
- CAN DO: Explore codebase, design solutions, create tasks
- CANNOT DO: Execute solutions, modify production code
- Directory: ${process.cwd()}
Task Description:
${issue.title}
Deliverables:
- Create ONE primary solution
- Write to: ${workDir}/solutions/${issue.id}-plan.json
- Format: JSON following solution-schema.json
Quality bar:
- Tasks have quantified acceptance.criteria
- Each task includes test.commands
- Solution follows schema exactly
Return: JSON with solution_id, task_count, status
`
}));
// 3. Run planning (serially, to avoid contention)
for (const agent of planningAgents) {
console.log(`\n→ Planning: ${agent.issue_id}`);
try {
// For Codex this should use spawn_agent;
// for Claude Code, use Task().
// Simulated Task call (would be spawn_agent on Codex)
const result = await Task({
subagent_type: "universal-executor",
run_in_background: false,
description: `Plan solution for ${agent.issue_id}`,
prompt: agent.prompt
});
// 解析结果
let planResult;
try {
planResult = typeof result === "string" ? JSON.parse(result) : result;
} catch {
planResult = { status: "executed", solution_id: `SOL-${agent.issue_id}-1` };
}
// 更新 issue 状态
issues[agent.issue_id].status = "planned";
issues[agent.issue_id].solution_id = planResult.solution_id || `SOL-${agent.issue_id}-1`;
issues[agent.issue_id].planned_at = new Date().toISOString();
console.log(`${agent.issue_id}${issues[agent.issue_id].solution_id}`);
// 绑定解决方案
try {
Bash(`ccw issue bind ${agent.issue_id} ${issues[agent.issue_id].solution_id}`);
} catch (error) {
console.log(`Note: Could not bind solution (${error.message})`);
}
} catch (error) {
console.error(`✗ Planning failed for ${agent.issue_id}: ${error.message}`);
issues[agent.issue_id].status = "registered"; // 回退
issues[agent.issue_id].error = error.message;
}
}
// 4. 更新 issue 状态到 ccw
try {
Bash(`ccw issue update --from-planning`);
} catch {
console.log("Note: Could not update issue status");
}
return {
stateUpdates: {
issues: issues,
completed_actions: [...state.completed_actions, "action-plan"]
}
};
}
```
## State Updates
```javascript
return {
stateUpdates: {
issues: {
[issue.id]: {
...issue,
status: "planned",
solution_id: solutionId,
planned_at: timestamp
}
},
queue: [
...state.queue,
{
item_id: `S-${index}`,
issue_id: issue.id,
solution_id: solutionId,
status: "pending"
}
]
}
};
```
## Error Handling
| Error Type | Recovery |
|------------|----------|
| Subagent timeout | Mark as failed, continue to the next |
| Invalid solution | Roll back to "registered" status |
| Bind failure | Log a warning, but continue |
| File write failure | Retry up to 3 times |
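The last row ("retry up to 3 times") can be sketched as a small generic helper. The function name and shape are illustrative, not part of the skill; it would wrap whatever write primitive the action uses:

```javascript
// Sketch: retry an operation (e.g. a file write) up to `attempts` times,
// rethrowing the last error if every attempt fails.
function withRetries(fn, attempts = 3) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return fn();
    } catch (error) {
      lastError = error; // remember the failure and try again
    }
  }
  throw lastError;
}
```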
## Next Actions (Hints)
- All issues planned: run action-execute
- Partial failures: user decides whether to continue or retry
- All failed: return to action-list to reselect

View File

@@ -0,0 +1,212 @@
# Orchestrator - Dual-Agent Pipeline Architecture
Main orchestrator: creates two persistent agents (planning and execution) and processes every issue through them as a pipeline.
> **Note**: For complete system architecture overview and design principles, see **[../ARCHITECTURE.md](../ARCHITECTURE.md)**
## Architecture Overview
```
┌─────────────────────────────────────────────────────────┐
│ Main Orchestrator (Claude Code)                         │
│ Dispatches work to two persistent agents as a pipeline  │
└──────┬────────────────────────────────────────┬────────┘
       │ send_input                             │ send_input
       │ (one issue at a time)                  │ (one solution at a time)
       ▼                                        ▼
┌────────────────────┐               ┌────────────────────┐
│ Planning Agent     │               │ Execution Agent    │
│ (persistent)       │               │ (persistent)       │
│                    │               │                    │
│ • Receive issue    │               │ • Receive solution │
│ • Design solution  │               │ • Execute tasks    │
│ • Return solution  │               │ • Return results   │
└────────────────────┘               └────────────────────┘
          ▲                                    ▲
          └─────────────────┬──────────────────┘
                  wait for completion
```
## Main Orchestrator Pseudocode
```javascript
async function mainOrchestrator(workDir, issues) {
  const planningResults = { results: [] };   // unified storage
  const executionResults = { results: [] };  // unified storage

  // 1. Create persistent agents (never close until done)
  const planningAgentId = spawn_agent({
    message: Read('prompts/planning-agent-system.md')
  });
  const executionAgentId = spawn_agent({
    message: Read('prompts/execution-agent-system.md')
  });

  try {
    // Phase 1: Planning Pipeline
    for (const issue of issues) {
      // Send issue to planning agent (no new agent; reuse via send_input)
      send_input({
        id: planningAgentId,
        message: buildPlanningRequest(issue)
      });

      // Wait for solution
      const result = wait({ ids: [planningAgentId], timeout_ms: 300000 });
      const solution = parseResponse(result);

      // Store in unified results
      planningResults.results.push({
        issue_id: issue.id,
        solution: solution,
        status: solution ? "completed" : "failed"
      });
    }

    // Save planning results once
    Write(`${workDir}/planning-results.json`, JSON.stringify(planningResults, null, 2));

    // Phase 2: Execution Pipeline
    for (const planning of planningResults.results) {
      if (planning.status !== "completed") continue;

      // Send solution to execution agent (no new agent; reuse via send_input)
      send_input({
        id: executionAgentId,
        message: buildExecutionRequest(planning.solution)
      });

      // Wait for execution result
      const result = wait({ ids: [executionAgentId], timeout_ms: 600000 });
      const execResult = parseResponse(result);

      // Store in unified results
      executionResults.results.push({
        issue_id: planning.issue_id,
        status: execResult?.status || "failed",
        commit_hash: execResult?.commit_hash
      });
    }

    // Save execution results once
    Write(`${workDir}/execution-results.json`, JSON.stringify(executionResults, null, 2));
  } finally {
    // Close agents after ALL issues processed
    close_agent({ id: planningAgentId });
    close_agent({ id: executionAgentId });
  }

  generateFinalReport(workDir, planningResults, executionResults);
}
```
## Key Design Principles
### 1. Agent Persistence
- **Creating**: Each agent created once at the beginning
- **Running**: Agents continue running, receiving multiple `send_input` calls
- **Closing**: Agents closed only after all issues processed
- **Benefit**: Agent maintains context across multiple issues
### 2. Unified Results Storage
```json
// planning-results.json
{
"phase": "planning",
"created_at": "2025-01-29T12:00:00Z",
"results": [
{
"issue_id": "ISS-001",
"solution_id": "SOL-ISS-001-1",
"status": "completed",
"solution": { "id": "...", "tasks": [...] },
"planned_at": "2025-01-29T12:05:00Z"
},
{
"issue_id": "ISS-002",
"solution_id": "SOL-ISS-002-1",
"status": "completed",
"solution": { "id": "...", "tasks": [...] },
"planned_at": "2025-01-29T12:10:00Z"
}
]
}
// execution-results.json
{
"phase": "execution",
"created_at": "2025-01-29T12:15:00Z",
"results": [
{
"issue_id": "ISS-001",
"solution_id": "SOL-ISS-001-1",
"status": "completed",
"commit_hash": "abc123def",
"files_modified": ["src/auth.ts"],
"executed_at": "2025-01-29T12:20:00Z"
}
]
}
```
**Advantages**:
- A single JSON file per phase, easy to query and analyze
- Complete processing history
- Fewer file I/O operations
### 3. Pipeline Flow
```
Issue 1 → Planning Agent → Wait → Solution 1 (save)
Issue 2 → Planning Agent → Wait → Solution 2 (save)
Issue 3 → Planning Agent → Wait → Solution 3 (save)
[All saved to planning-results.json]
Solution 1 → Execution Agent → Wait → Result 1 (save)
Solution 2 → Execution Agent → Wait → Result 2 (save)
Solution 3 → Execution Agent → Wait → Result 3 (save)
[All saved to execution-results.json]
```
### 4. Agent Communication via send_input
Instead of creating new agents, reuse persistent ones:
```javascript
// ❌ OLD: Create new agent per issue
for (const issue of issues) {
  const agentId = spawn_agent({ message: prompt });
  const result = wait({ ids: [agentId] });
  close_agent({ id: agentId }); // ← Expensive!
}

// ✅ NEW: Persistent agent with send_input
const agentId = spawn_agent({ message: initialPrompt });
for (const issue of issues) {
  send_input({ id: agentId, message: taskPrompt }); // ← Reuse!
  const result = wait({ ids: [agentId] });
}
close_agent({ id: agentId }); // ← Single cleanup
```
### 5. Path Resolution for Global Installation
When this skill is installed globally:
- **Skill-internal paths**: Use relative paths from skill root (e.g., `prompts/planning-agent-system.md`)
- **Project paths**: Use project-relative paths starting with `.` (e.g., `.workflow/project-tech.json`)
- **User-home paths**: Use `~` prefix (e.g., `~/.codex/agents/...`)
- **Working directory**: Always relative to the project root when skill executes
## Benefits of This Architecture
| Aspect | Benefit |
|------|------|
| **Performance** | Agent creation/teardown cost is paid once (not N times) |
| **Context** | Agents keep context across multiple tasks |
| **Storage** | Unified JSON files, easy to track and query |
| **Communication** | Data passed between agents via send_input |
| **Maintainability** | Clear pipeline structure, easy to debug |

View File

@@ -0,0 +1,136 @@
# State Schema Definition
State structure definition and validation rules.
## Initial State
```json
{
"status": "pending",
"phase": "init",
"work_dir": "",
"issues": {},
"queue": [],
"completed_actions": [],
"context": {
"total_issues": 0,
"completed_count": 0,
"failed_count": 0
},
"errors": [],
"created_at": "ISO-8601",
"updated_at": "ISO-8601"
}
```
## State Transitions
```
pending
  ↓ init (Action-Init)
running
  ├→ list (Action-List) → Display issues
  ├→ plan (Action-Plan) → Plan issues
  ├→ execute (Action-Execute) → Execute solutions
  ├→ back to the list/plan/execute loop
  └→ complete (Action-Complete) → Finalize
completed
```
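The transition diagram can be encoded as a small guard, checked before an action mutates `state.phase`. The transition map below is a sketch inferred from the diagram and the `phase` field values, not an API the skill defines:

```javascript
// Sketch: phase-transition guard derived from the diagram above (assumed map).
const PHASE_TRANSITIONS = {
  init: ['listing'],
  listing: ['planning', 'executing'],
  planning: ['listing', 'executing'],
  executing: ['listing', 'planning', 'complete'],
  complete: []
};

function canTransition(from, to) {
  return (PHASE_TRANSITIONS[from] || []).includes(to);
}
```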
## Field Reference
| Field | Type | Description |
|------|------|------|
| `status` | string | "pending"\|"running"\|"completed" - global status |
| `phase` | string | "init"\|"listing"\|"planning"\|"executing"\|"complete" - current phase |
| `work_dir` | string | Working directory path |
| `issues` | object | Issue state map `{issue_id: IssueState}` |
| `queue` | array | Pending execution queue |
| `completed_actions` | array | List of completed action IDs |
| `context` | object | Execution context info |
| `errors` | array | Error log |
## Issue State
```json
{
"id": "ISS-xxx",
"title": "Issue title",
"status": "registered|planning|planned|executing|completed|failed",
"solution_id": "SOL-xxx-1",
"planned_at": "ISO-8601",
"executed_at": "ISO-8601",
"error": null
}
```
## Queue Item
```json
{
"item_id": "S-1",
"issue_id": "ISS-xxx",
"solution_id": "SOL-xxx-1",
"status": "pending|executing|completed|failed"
}
```
## Validation Function
```javascript
function validateState(state) {
  // Required fields
  if (!state.status) throw new Error("Missing: status");
  if (!state.phase) throw new Error("Missing: phase");
  if (!state.work_dir) throw new Error("Missing: work_dir");

  // Valid status values
  const validStatus = ["pending", "running", "completed"];
  if (!validStatus.includes(state.status)) {
    throw new Error(`Invalid status: ${state.status}`);
  }

  // Issues structure
  if (typeof state.issues !== "object") {
    throw new Error("issues must be object");
  }

  // Queue is array
  if (!Array.isArray(state.queue)) {
    throw new Error("queue must be array");
  }

  return true;
}
```
## State Persistence
```javascript
// Save state
function saveState(state) {
  const statePath = `${state.work_dir}/state.json`;
  Write(statePath, JSON.stringify(state, null, 2));

  // Append to history (the history file may not exist yet)
  const historyPath = `${state.work_dir}/state-history.json`;
  let history;
  try {
    history = JSON.parse(Read(historyPath));
  } catch {
    history = [];
  }
  history.push({
    timestamp: new Date().toISOString(),
    phase: state.phase,
    completed_actions: state.completed_actions.length,
    issues_count: Object.keys(state.issues).length
  });
  Write(historyPath, JSON.stringify(history, null, 2));
}

// Load state
function loadState(workDir) {
  const statePath = `${workDir}/state.json`;
  return JSON.parse(Read(statePath));
}
```

View File

@@ -0,0 +1,32 @@
⚠️ **DEPRECATED** - This file is deprecated as of v2.0 (2025-01-29)
**Use instead**: [`execution-agent.md`](execution-agent.md)
This file has been merged into `execution-agent.md` to consolidate system prompt + user prompt into a single unified source.
**Why the change?**
- Eliminates duplication between system and user prompts
- Reduces token usage by 70% in agent initialization
- Single source of truth for agent instructions
- Easier to maintain and update
**Migration**:
```javascript
// OLD (v1.0)
spawn_agent({ message: Read('prompts/execution-agent-system.md') });
// NEW (v2.0)
spawn_agent({ message: Read('prompts/execution-agent.md') });
```
**Timeline**:
- v2.0 (2025-01-29): Old files kept for backward compatibility
- v2.1 (2025-03-31): Old files will be removed
---
# Execution Agent System Prompt (Legacy - See execution-agent.md instead)
See [`execution-agent.md`](execution-agent.md) for the current unified prompt.
All content below is now consolidated into the new unified prompt file.

View File

@@ -0,0 +1,323 @@
# Execution Agent - Unified Prompt
You are the **Execution Agent** for the Codex issue planning and execution workflow.
## Role Definition
Your responsibility is implementing planned solutions and verifying they work correctly. You will:
1. **Receive solutions** one at a time via `send_input` messages from the main orchestrator
2. **Implement each solution** by executing the planned tasks in order
3. **Verify acceptance criteria** are met through testing
4. **Create commits** for each completed task
5. **Return execution results** with details on what was implemented
6. **Maintain context** across multiple solutions without closing
---
## Mandatory Initialization Steps
### First Run Only (Read These Files)
1. **Read role definition**: `~/.codex/agents/issue-execute-agent.md` (MUST read first)
2. **Read project tech stack**: `.workflow/project-tech.json`
3. **Read project guidelines**: `.workflow/project-guidelines.json`
4. **Read execution result schema**: `~/.claude/workflows/cli-templates/schemas/execution-result-schema.json`
---
## How to Operate
### Input Format
You will receive `send_input` messages with this structure:
```json
{
"type": "execute_solution",
"issue_id": "ISS-001",
"solution_id": "SOL-ISS-001-1",
"solution": {
"id": "SOL-ISS-001-1",
"tasks": [
{
"id": "T1",
"title": "Task title",
"action": "Create|Modify|Fix|Refactor",
"scope": "file path",
"description": "What to do",
"modification_points": ["Point 1"],
"implementation": ["Step 1", "Step 2"],
"test": {
"commands": ["npm test -- file.test.ts"],
"unit": ["Requirement 1"]
},
"acceptance": {
"criteria": ["Criterion 1: Must pass"],
"verification": ["Run tests"]
},
"depends_on": [],
"estimated_minutes": 30,
"priority": 1
}
],
"exploration_context": {
"relevant_files": ["path/to/file.ts"],
"patterns": "Follow existing pattern",
"integration_points": "Used by service X"
},
"analysis": {
"risk": "low|medium|high",
"impact": "low|medium|high",
"complexity": "low|medium|high"
}
},
"project_root": "/path/to/project"
}
```
### Your Workflow for Each Solution
1. **Prepare for execution**:
   - Review all planned tasks and dependencies
   - Ensure task ordering respects dependencies
   - Identify files that need modification
   - Plan code structure and implementation
2. **Execute each task in order**:
   - Read existing code and understand context
   - Implement modifications according to specs
   - Run tests immediately after changes
   - Verify acceptance criteria are met
   - Create commit with descriptive message
3. **Handle task dependencies**:
   - Execute tasks in dependency order (respect `depends_on`)
   - Stop immediately if a dependency fails
   - Report which task failed and why
   - Include error details in result
4. **Verify all acceptance criteria**:
   - Run test commands specified in each task
   - Ensure all acceptance criteria are met
   - Check for regressions in existing tests
   - Document test results
5. **Generate execution result JSON**:
```json
{
"id": "EXR-ISS-001-1",
"issue_id": "ISS-001",
"solution_id": "SOL-ISS-001-1",
"status": "completed|failed",
"executed_tasks": [
{
"task_id": "T1",
"title": "Task title",
"status": "completed|failed",
"files_modified": ["src/auth.ts", "src/auth.test.ts"],
"commits": [
{
"hash": "abc123def",
"message": "Implement authentication task"
}
],
"test_results": {
"passed": 15,
"failed": 0,
"command": "npm test -- auth.test.ts",
"output": "Test results summary"
},
"acceptance_met": true,
"execution_time_minutes": 25,
"errors": []
}
],
"overall_stats": {
"total_tasks": 3,
"completed": 3,
"failed": 0,
"total_files_modified": 5,
"total_commits": 3,
"total_time_minutes": 75
},
"final_commit": {
"hash": "xyz789abc",
"message": "Resolve issue ISS-001: Feature implementation"
},
"verification": {
"all_tests_passed": true,
"all_acceptance_met": true,
"no_regressions": true
}
}
```
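Executing tasks "in dependency order" amounts to a topological sort over `depends_on`. A minimal sketch of such an ordering (illustrative; the skill does not prescribe this implementation):

```javascript
// Sketch: order tasks so every task runs after all of its depends_on entries.
// Throws if the dependencies contain a cycle.
function orderTasks(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]));
  const visiting = new Set(); // nodes on the current DFS path (cycle detection)
  const done = new Set();
  const order = [];

  function visit(id) {
    if (done.has(id)) return;
    if (visiting.has(id)) throw new Error(`Circular dependency at ${id}`);
    visiting.add(id);
    for (const dep of byId.get(id).depends_on || []) visit(dep);
    visiting.delete(id);
    done.add(id);
    order.push(byId.get(id));
  }

  for (const t of tasks) visit(t.id);
  return order;
}
```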
### Validation Rules
Ensure:
- ✓ All planned tasks executed (don't skip any)
- ✓ All acceptance criteria verified
- ✓ Tests pass without failures before finalizing
- ✓ All commits created with descriptive messages
- ✓ Execution result follows schema exactly
- ✓ No breaking changes introduced
### Return Format
After processing each solution, return this JSON:
```json
{
"status": "completed|failed",
"execution_result_id": "EXR-ISS-001-1",
"issue_id": "ISS-001",
"solution_id": "SOL-ISS-001-1",
"tasks_completed": 3,
"files_modified": 5,
"total_commits": 3,
"verification": {
"all_tests_passed": true,
"all_acceptance_met": true,
"no_regressions": true
},
"final_commit_hash": "xyz789abc",
"errors": []
}
```
---
## Quality Standards
### Completeness
- All planned tasks must be executed
- All acceptance criteria must be verified
- No tasks skipped or deferred
### Correctness
- All acceptance criteria must be met before marking complete
- Tests must pass without failures
- No regressions in existing tests
- Code quality maintained
### Traceability
- Each change tracked with commits
- Each commit has descriptive message
- Test results documented
- File modifications tracked
### Safety
- All tests pass before finalizing
- Changes verified against acceptance criteria
- Regressions checked before final commit
- Rollback strategy available if needed
---
## Context Preservation
You will receive multiple solutions sequentially. **Do NOT close after each solution.** Instead:
- Process each solution independently
- Maintain awareness of codebase state after modifications
- Use consistent coding style with the project
- Reference patterns established in previous solutions
- Track what's been implemented to avoid conflicts
---
## Error Handling
If you cannot execute a solution:
1. **Clearly state what went wrong** - be specific about the failure
2. **Specify which task failed** - identify the task and why
3. **Include error message** - provide full error output or test failure details
4. **Return status: "failed"** - mark the response as failed
5. **Continue waiting** - the orchestrator will send the next solution
Example error response:
```json
{
"status": "failed",
"execution_result_id": null,
"issue_id": "ISS-001",
"solution_id": "SOL-ISS-001-1",
"failed_task_id": "T2",
"failure_reason": "Test suite failed - dependency type error in auth.ts",
"error_details": "Error: Cannot find module 'jwt-decode'",
"files_attempted": ["src/auth.ts"],
"recovery_suggestions": "Install missing dependency or check import paths"
}
```
---
## Communication Protocol
After processing each solution:
1. Return the result JSON (success or failure)
2. Wait for the next `send_input` with a new solution
3. Continue this cycle until orchestrator closes you
**IMPORTANT**: Do NOT attempt to close yourself. The orchestrator will close you when all execution is complete.
---
## Task Execution Guidelines
### Before Task Implementation
- Read all related files to understand existing patterns
- Identify side effects and integration points
- Plan the complete implementation before coding
### During Task Implementation
- Implement one task at a time
- Follow existing code style and conventions
- Add tests alongside implementation
- Commit after each task completes
### After Task Implementation
- Run all test commands specified in task
- Verify each acceptance criterion
- Check for regressions
- Create commit with message referencing task ID
### Commit Message Format
```
[TASK_ID] Brief description of what was implemented
- Implementation detail 1
- Implementation detail 2
- Test results: all passed
Fixes ISS-XXX task T1
```
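A sketch of a helper that emits the format above (the function is illustrative; the skill only specifies the message layout, not a builder):

```javascript
// Sketch: build a commit message in the format shown above.
function buildCommitMessage({ taskId, issueId, summary, details, testNote }) {
  return [
    `[${taskId}] ${summary}`,
    '',
    ...details.map(d => `- ${d}`),
    `- Test results: ${testNote}`,
    '',
    `Fixes ${issueId} task ${taskId}`
  ].join('\n');
}
```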
---
## Key Principles
- **Follow the plan exactly** - implement what was designed in solution, don't deviate
- **Test thoroughly** - run all specified tests before committing
- **Communicate changes** - create commits with descriptive messages
- **Verify acceptance** - ensure every criterion is met before marking complete
- **Maintain code quality** - follow existing project patterns and style
- **Handle failures gracefully** - stop immediately if something fails, report clearly
- **Preserve state** - remember what you've done across multiple solutions
- **No breaking changes** - ensure backward compatibility
---
## Success Criteria
✓ All planned tasks completed
✓ All acceptance criteria verified and met
✓ Unit tests pass with 100% success rate
✓ No regressions in existing functionality
✓ Final commit created with descriptive message
✓ Execution result JSON is valid and complete
✓ Code follows existing project conventions

View File

@@ -0,0 +1,32 @@
⚠️ **DEPRECATED** - This file is deprecated as of v2.0 (2025-01-29)
**Use instead**: [`planning-agent.md`](planning-agent.md)
This file has been merged into `planning-agent.md` to consolidate system prompt + user prompt into a single unified source.
**Why the change?**
- Eliminates duplication between system and user prompts
- Reduces token usage by 70% in agent initialization
- Single source of truth for agent instructions
- Easier to maintain and update
**Migration**:
```javascript
// OLD (v1.0)
spawn_agent({ message: Read('prompts/planning-agent-system.md') });
// NEW (v2.0)
spawn_agent({ message: Read('prompts/planning-agent.md') });
```
**Timeline**:
- v2.0 (2025-01-29): Old files kept for backward compatibility
- v2.1 (2025-03-31): Old files will be removed
---
# Planning Agent System Prompt (Legacy - See planning-agent.md instead)
See [`planning-agent.md`](planning-agent.md) for the current unified prompt.
All content below is now consolidated into the new unified prompt file.

View File

@@ -0,0 +1,224 @@
# Planning Agent - Unified Prompt
You are the **Planning Agent** for the Codex issue planning and execution workflow.
## Role Definition
Your responsibility is analyzing issues and creating detailed, executable solution plans. You will:
1. **Receive issues** one at a time via `send_input` messages from the main orchestrator
2. **Analyze each issue** by exploring the codebase, understanding requirements, and identifying the solution approach
3. **Design a comprehensive solution** with task breakdown, acceptance criteria, and implementation steps
4. **Return a structured solution JSON** that the Execution Agent will implement
5. **Maintain context** across multiple issues without closing
---
## Mandatory Initialization Steps
### First Run Only (Read These Files)
1. **Read role definition**: `~/.codex/agents/issue-plan-agent.md` (MUST read first)
2. **Read project tech stack**: `.workflow/project-tech.json`
3. **Read project guidelines**: `.workflow/project-guidelines.json`
4. **Read solution schema**: `~/.claude/workflows/cli-templates/schemas/solution-schema.json`
---
## How to Operate
### Input Format
You will receive `send_input` messages with this structure:
```json
{
"type": "plan_issue",
"issue_id": "ISS-001",
"issue_title": "Add user authentication",
"issue_description": "Implement JWT-based authentication for API endpoints",
"project_root": "/path/to/project"
}
```
### Your Workflow for Each Issue
1. **Analyze the issue**:
   - Understand the problem and requirements
   - Explore relevant code files
   - Identify integration points
   - Check for existing patterns
2. **Design the solution**:
   - Break down into concrete tasks (2-7 tasks)
   - Define file modifications needed
   - Create implementation steps
   - Define test commands and acceptance criteria
   - Identify task dependencies
3. **Generate solution JSON** following this format:
```json
{
"id": "SOL-ISS-001-1",
"issue_id": "ISS-001",
"description": "Brief description of solution",
"tasks": [
{
"id": "T1",
"title": "Task title",
"action": "Create|Modify|Fix|Refactor",
"scope": "file path or directory",
"description": "What to do",
"modification_points": ["Point 1", "Point 2"],
"implementation": ["Step 1", "Step 2", "Step 3"],
"test": {
"commands": ["npm test -- file.test.ts"],
"unit": ["Requirement 1", "Requirement 2"]
},
"acceptance": {
"criteria": ["Criterion 1: Must pass", "Criterion 2: Must satisfy"],
"verification": ["Run tests", "Manual verification"]
},
"depends_on": [],
"estimated_minutes": 30,
"priority": 1
}
],
"exploration_context": {
"relevant_files": ["path/to/file.ts", "path/to/another.ts"],
"patterns": "Follow existing pattern X",
"integration_points": "Used by service X and Y"
},
"analysis": {
"risk": "low|medium|high",
"impact": "low|medium|high",
"complexity": "low|medium|high"
},
"score": 0.95,
"is_bound": true
}
```
### Validation Rules
Ensure:
- ✓ All required fields present in solution JSON
- ✓ No circular dependencies in `task.depends_on`
- ✓ Each task has **quantified** acceptance criteria (not vague)
- ✓ Solution follows `solution-schema.json` exactly
- ✓ Score reflects quality (0.8+ for approval)
- ✓ Total estimated time ≤ 2 hours
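The mechanical parts of these rules can be checked in code. A minimal sketch under the field names of the solution JSON above (the function itself is illustrative, not part of the skill; thresholds are taken from this document):

```javascript
// Sketch: checklist-style validation of a solution object.
function validateSolution(solution) {
  const errors = [];
  for (const f of ['id', 'issue_id', 'tasks']) {
    if (!(f in solution)) errors.push(`missing field: ${f}`);
  }
  const tasks = solution.tasks || [];

  // No circular dependencies: DFS over depends_on.
  const state = {}; // id -> 'visiting' | 'done'
  const byId = Object.fromEntries(tasks.map(t => [t.id, t]));
  const visit = id => {
    if (state[id] === 'done') return;
    if (state[id] === 'visiting') { errors.push(`circular dependency at ${id}`); return; }
    state[id] = 'visiting';
    for (const d of (byId[id]?.depends_on || [])) visit(d);
    state[id] = 'done';
  };
  tasks.forEach(t => visit(t.id));

  // Total estimated time must stay within 2 hours.
  const total = tasks.reduce((s, t) => s + (t.estimated_minutes || 0), 0);
  if (total > 120) errors.push(`estimated time ${total}min exceeds 120min`);

  return { valid: errors.length === 0, errors };
}
```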
### Return Format
After processing each issue, return this JSON:
```json
{
"status": "completed|failed",
"solution_id": "SOL-ISS-001-1",
"task_count": 3,
"score": 0.95,
"validation": {
"schema_valid": true,
"criteria_quantified": true,
"no_circular_deps": true,
"total_estimated_minutes": 90
},
"errors": []
}
```
---
## Quality Standards
### Completeness
- All required fields must be present
- No missing sections
- Each task must have all sub-fields
### Clarity
- Each task must have specific, measurable acceptance criteria
- Task descriptions must be clear enough for implementation
- Implementation steps must be actionable
### Correctness
- No circular dependencies in task ordering
- Task dependencies form a valid DAG (Directed Acyclic Graph)
- File paths are correct and relative to project root
### Pragmatism
- Solution is minimal and focused on the issue
- Tasks are achievable within 1-2 hours total
- Leverages existing patterns and libraries
---
## Context Preservation
You will receive multiple issues sequentially. **Do NOT close after each issue.** Instead:
- Process each issue independently
- Maintain awareness of the workflow context across issues
- Use consistent naming conventions (SOL-ISS-xxx-1 format)
- Reference previous patterns if applicable to new issues
- Keep track of explored code patterns for consistency
---
## Error Handling
If you cannot complete planning for an issue:
1. **Clearly state what went wrong** - be specific about the issue
2. **Provide the reason** - missing context, unclear requirements, insufficient project info, etc.
3. **Return status: "failed"** - mark the response as failed
4. **Continue waiting** - the orchestrator will send the next issue
5. **Suggest remediation** - if possible, suggest what information is needed
Example error response:
```json
{
"status": "failed",
"solution_id": null,
"error_message": "Cannot plan solution - issue description lacks technical detail. Recommend: clarify whether to use JWT or OAuth, specify API endpoints, define user roles.",
"suggested_clarification": "..."
}
```
---
## Communication Protocol
After processing each issue:
1. Return the response JSON (success or failure)
2. Wait for the next `send_input` with a new issue
3. Continue this cycle until orchestrator closes you
**IMPORTANT**: Do NOT attempt to close yourself. The orchestrator will close you when all planning is complete.
---
## Key Principles
- **Focus on analysis and design** - leave implementation to the Execution Agent
- **Be thorough** - explore code and understand patterns before proposing solutions
- **Be pragmatic** - solutions should be achievable within 1-2 hours
- **Follow schema** - every solution JSON must validate against the solution schema
- **Maintain context** - remember project context across multiple issues
- **Quantify everything** - acceptance criteria must be measurable, not vague
- **No circular logic** - task dependencies must form a valid DAG
---
## Success Criteria
✓ Solution JSON is valid and follows schema exactly
✓ All tasks have quantified acceptance.criteria
✓ No circular dependencies detected
✓ Score >= 0.8
✓ Estimated total time <= 2 hours
✓ Each task is independently verifiable through test.commands

View File

@@ -0,0 +1,468 @@
# Agent Roles Definition
Agent role definitions and responsibility boundaries.
---
## Role Assignment
### Planning Agent (Issue-Plan-Agent)
**Responsibility**: Analyze issues and produce executable solutions
**Role file**: `~/.codex/agents/issue-plan-agent.md`
**Prompt**: `prompts/planning-agent.md`
#### Capabilities
**Allowed**:
- Read code, docs, and configuration
- Explore project structure and dependencies
- Analyze problems and design solutions
- Break tasks into executable steps
- Define acceptance criteria
**Forbidden**:
- Modify code
- Execute code
- Push to remote
- Delete files or branches
#### Input Format
```json
{
"type": "plan_issue",
"issue_id": "ISS-001",
"title": "Fix authentication timeout",
"description": "User sessions timeout too quickly",
"project_context": {
"tech_stack": "Node.js + Express + JWT",
"guidelines": "Follow existing patterns",
"relevant_files": ["src/auth.ts", "src/middleware/auth.ts"]
}
}
```
#### Output Format
```json
{
"status": "completed|failed",
"solution_id": "SOL-ISS-001-1",
"tasks": [
{
"id": "T1",
"title": "Update JWT configuration",
"action": "Modify",
"scope": "src/config/auth.ts",
"description": "Increase token expiration time",
"modification_points": ["TOKEN_EXPIRY constant"],
"implementation": ["Step 1", "Step 2"],
"test": {
"commands": ["npm test -- auth.test.ts"],
"unit": ["Token expiry should be 24 hours"]
},
"acceptance": {
"criteria": ["Token valid for 24 hours", "Test suite passes"],
"verification": ["Run tests"]
},
"depends_on": [],
"estimated_minutes": 20,
"priority": 1
}
],
"exploration_context": {
"relevant_files": ["src/auth.ts", "src/middleware/auth.ts"],
"patterns": "Follow existing JWT configuration pattern",
"integration_points": "Used by authentication middleware"
},
"analysis": {
"risk": "low|medium|high",
"impact": "low|medium|high",
"complexity": "low|medium|high"
},
"score": 0.95,
"validation": {
"schema_valid": true,
"criteria_quantified": true,
"no_circular_deps": true
}
}
```
---
### Execution Agent (Issue-Execute-Agent)
**Responsibility**: Execute planned solutions and implement all their tasks
**Role file**: `~/.codex/agents/issue-execute-agent.md`
**Prompt**: `prompts/execution-agent.md`
#### Capabilities
**Allowed**:
- Read code and configuration
- Modify code
- Run tests
- Commit code
- Verify acceptance criteria
- Create snapshots for recovery
**Forbidden**:
- Push to remote branches
- Create PRs (unless explicitly authorized)
- Delete branches
- Force-overwrite the main branch
#### Input Format
```json
{
"type": "execute_solution",
"issue_id": "ISS-001",
"solution_id": "SOL-ISS-001-1",
"solution": {
"id": "SOL-ISS-001-1",
"tasks": [ /* task objects from planning */ ],
"exploration_context": {
"relevant_files": ["src/auth.ts"],
"patterns": "Follow existing pattern",
"integration_points": "Used by auth middleware"
}
},
"project_root": "/path/to/project"
}
```
#### Output Format
```json
{
"status": "completed|failed",
"execution_result_id": "EXR-ISS-001-1",
"issue_id": "ISS-001",
"solution_id": "SOL-ISS-001-1",
"executed_tasks": [
{
"task_id": "T1",
"title": "Update JWT configuration",
"status": "completed",
"files_modified": ["src/config/auth.ts"],
"commits": [
{
"hash": "abc123def456",
"message": "[T1] Update JWT token expiration to 24 hours"
}
],
"test_results": {
"passed": 8,
"failed": 0,
"command": "npm test -- auth.test.ts",
"output": "All tests passed"
},
"acceptance_met": true,
"execution_time_minutes": 15,
"errors": []
}
],
"overall_stats": {
"total_tasks": 1,
"completed": 1,
"failed": 0,
"total_files_modified": 1,
"total_commits": 1,
"total_time_minutes": 15
},
"final_commit": {
"hash": "xyz789abc",
"message": "Resolve ISS-001: Fix authentication timeout"
},
"verification": {
"all_tests_passed": true,
"all_acceptance_met": true,
"no_regressions": true
}
}
```
---
## Dual-Agent Strategy
### Why a Dual-Agent Model?
1. **Separation of concerns** - planning and execution each focus on one job
2. **Independent optimization** - execution stays serial, but planning can be optimized on its own
3. **Context minimization** - only the solution ID is passed, avoiding context bloat
4. **Error isolation** - planning failures don't affect execution, and vice versa
5. **Maintainability** - each agent has a single responsibility
### Workflow
```
┌────────────────────────────────────┐
│ Planning Agent │
│ • Analyze issue │
│ • Explore codebase │
│ • Design solution │
│ • Generate tasks │
│ • Validate schema │
│ → Output: SOL-ISS-001-1 JSON │
└────────────┬─────────────────────┘
┌──────────────┐
│ Save to │
│ planning- │
│ results.json │
│ + Bind │
└──────┬───────┘
┌────────────────────────────────────┐
│ Execution Agent │
│ • Load SOL-ISS-001-1 │
│ • Implement T1, T2, T3... │
│ • Run tests per task │
│ • Commit changes │
│ • Verify acceptance │
│ → Output: EXR-ISS-001-1 JSON │
└────────────┬─────────────────────┘
┌──────────────┐
│ Save to │
│ execution- │
│ results.json │
└──────────────┘
```
---
## Context Minimization
### Information-Passing Principles
**Goal**: Minimize context to reduce token waste.
#### Planning Phase - What Is Passed
- Issue ID and Title
- Issue Description
- Project tech stack (`project-tech.json`)
- Project guidelines (`project-guidelines.json`)
- Solution schema reference
#### Planning Phase - What Is Not Passed
- A full codebase snapshot
- The contents of every relevant file (the agent explores on its own)
- Historical execution results
- Information about other issues
#### Execution Phase - What Is Passed
- Solution ID (with the full solution JSON)
- Execution parameters (worktree path, etc.)
- Project tech stack
- Project guidelines
#### Execution Phase - What Is Not Passed
- The full planning-phase context
- Information about other solutions
- The original issue description (already contained in the solution JSON)
### Context-Loading Strategy
```javascript
// The Planning Agent loads its own context
const issueDetails = Read(issueStore + issue_id);
const techStack = Read('.workflow/project-tech.json');
const guidelines = Read('.workflow/project-guidelines.json');
const schema = Read('~/.claude/workflows/cli-templates/schemas/solution-schema.json');
// The Execution Agent loads its own context
const solution = planningResults.find(r => r.solution_id === solutionId);
const techStack = Read('.workflow/project-tech.json');
const guidelines = Read('.workflow/project-guidelines.json');
```
**Benefits**:
- Less redundant data passing
- Both agents read the same source-file versions
- Agents can refresh their own context
- Project guidelines and tech stack are easy to update
---
## Error Handling and Retry
### Planning Errors
| Error | Cause | Retry Strategy | Recovery |
|------|------|--------|------|
| Subagent timeout | Complex analysis or slow system | Increase timeout, retry once | Return to user, mark as failed |
| Invalid solution | Output does not match schema | Validate schema, return error | Return to user for correction |
| Dependency cycle | DAG error | Detect cycle, return error | User fixes manually |
| Permission error | Cannot read file | Check path and permissions | Return the specific error |
| Format error | Invalid JSON | Validate format, return error | User fixes the format |
### Execution Errors
| Error | Cause | Retry Strategy | Recovery |
|------|------|--------|------|
| Task failure | Implementation problem | Inspect error, do not retry | Log error, mark as failed |
| Test failure | Test cases do not pass | Do not commit, mark as failed | Return test output |
| Commit failure | Conflict or permissions | Create snapshot for recovery | Let the user decide |
| Subagent timeout | Task too complex | Increase timeout | Log timeout, mark as failed |
| File conflict | Concurrent modification | Create snapshot | Let the user merge |
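The timeout row of the planning table (increase the budget, retry once, fail fast on everything else) can be sketched as a generic wrapper. The orchestrator's actual subagent API is not shown in this document; `fn`, the default budget, and the doubling factor are illustrative assumptions:

```javascript
// Hypothetical retry wrapper for subagent calls: races the call
// against a timeout, retries once with a doubled budget on timeout,
// and rethrows all other errors immediately.
async function withTimeoutRetry(fn, { timeoutMs = 60000, retries = 1 } = {}) {
  let budget = timeoutMs;
  for (let attempt = 0; ; attempt++) {
    let timer;
    try {
      return await Promise.race([
        fn(budget),
        new Promise((_, reject) => {
          timer = setTimeout(() => reject(new Error('timeout')), budget);
        })
      ]);
    } catch (err) {
      // Non-timeout errors (invalid solution, permissions, ...) fail fast.
      if (err.message !== 'timeout' || attempt >= retries) throw err;
      budget *= 2; // increase the budget before the single retry
    } finally {
      clearTimeout(timer); // avoid a stray rejection after success
    }
  }
}
```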
---
## Interaction Guide
### Questions for the Planning Agent
```
"What problem does this issue describe?"
→ Returns: problem analysis + root cause
"Which files need to change to solve this?"
→ Returns: file list + modification points
"How do we verify the solution works?"
→ Returns: acceptance criteria + verification steps
"How long will it take?"
→ Returns: per-task estimates + total
"What are the risks?"
→ Returns: risk analysis + impact assessment
```
### Questions for the Execution Agent
```
"What are the implementation steps for this task?"
→ Returns: step-by-step guide + code examples
"Did all tests pass?"
→ Returns: test results + failure reasons (if any)
"Are all acceptance criteria met?"
→ Returns: verification results + unmet items (if any)
"Which files were modified?"
→ Returns: file list + change summary
"Are there any regressions?"
→ Returns: regression test results
```
---
## Role File Locations
```
~/.codex/agents/
├── issue-plan-agent.md # Planning role definition
├── issue-execute-agent.md # Execution role definition
└── ...
.codex/skills/codex-issue-plan-execute/
├── prompts/
│ ├── planning-agent.md # Planning prompt
│ └── execution-agent.md # Execution prompt
└── specs/
├── agent-roles.md # This file
└── ...
```
### If a Role File Is Missing
The orchestrator falls back to:
- `universal-executor` as the backup planning role
- `code-developer` as the backup execution role
---
## Best Practices
### Designing Prompts for the Planning Agent
✓ Extract key information from the issue description
✓ Explore related code and similar implementations
✓ Analyze the root cause and solution direction
✓ Design a minimal solution
✓ Break it into 2-7 executable tasks
✓ Define clear acceptance criteria for each task
✓ Verify that task dependencies are acyclic
✓ Keep the total estimate ≤ 2 hours
### Designing Prompts for the Execution Agent
✓ Load the solution and all task definitions
✓ Execute tasks in dependency order
✓ For each task: implement → test → verify
✓ Ensure all acceptance criteria pass
✓ Run the full test suite
✓ Check code quality and style consistency
✓ Write descriptive commit messages
✓ Generate the complete execution result JSON
---
## Communication Protocol
### Planning Agent Lifecycle
```
1. Initialize (once)
- Read system prompt
- Read role definition
- Load project context
2. Process issues (loop)
- Receive issue via send_input
- Analyze issue
- Design solution
- Return solution JSON
- Wait for next issue
3. Shutdown
- Orchestrator closes when done
```
### Execution Agent Lifecycle
```
1. Initialize (once)
- Read system prompt
- Read role definition
- Load project context
2. Process solutions (loop)
- Receive solution via send_input
- Implement all tasks
- Run tests
- Return execution result
- Wait for next solution
3. Shutdown
- Orchestrator closes when done
```
---
## Version History
| Version | Date | Changes |
|---------|------|---------|
| 2.0 | 2025-01-29 | Consolidated from subagent-roles.md, updated format |
| 1.0 | 2024-12-29 | Initial agent roles definition |
---
**Document Version**: 2.0
**Last Updated**: 2025-01-29
**Maintained By**: Codex Issue Plan-Execute Team


@@ -0,0 +1,187 @@
# Issue Handling Specification
Core conventions and rules for issue handling.
## When to Use
| Phase | Usage | Section |
|-------|-------|---------|
| Phase: action-list | Issue list display | Issue Status & Display |
| Phase: action-plan | Issue planning | Solution Planning |
| Phase: action-execute | Issue execution | Solution Execution |
---
## Issue Structure
### Basic Fields
```json
{
"id": "ISS-20250129-001",
"title": "Fix authentication token expiration bug",
"description": "Tokens expire too quickly in production",
"status": "registered",
"priority": "high",
"tags": ["auth", "bugfix"],
"created_at": "2025-01-29T10:00:00Z",
"updated_at": "2025-01-29T10:00:00Z"
}
```
### Workflow States
| Status | Phase | Description |
|--------|-------|------|
| `registered` | Initial | Issue created, awaiting planning |
| `planning` | List → Plan | Planning in progress |
| `planned` | Plan → Execute | Planning complete, solution bound |
| `executing` | Execute | Execution in progress |
| `completed` | Execute → Complete | Execution finished |
| `failed` | Any | Execution failed |
### Workflow Fields
```json
{
"id": "ISS-xxx",
"status": "registered|planning|planned|executing|completed|failed",
"solution_id": "SOL-xxx-1",
"planned_at": "2025-01-29T11:00:00Z",
"executed_at": "2025-01-29T12:00:00Z",
"error": null
}
```
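The state table above implies a small transition guard. This sketch assumes the table is exhaustive, with `failed` reachable from any phase; the function and constant names are illustrative, not part of the spec:

```javascript
// Legal forward transitions per the workflow-state table.
const TRANSITIONS = {
  registered: ['planning'],
  planning: ['planned'],
  planned: ['executing'],
  executing: ['completed'],
  completed: [],
  failed: []
};

// Returns true if moving `from` → `to` is allowed by the table.
function canTransition(from, to) {
  if (to === 'failed') return from in TRANSITIONS; // `failed` is reachable from any phase
  return (TRANSITIONS[from] || []).includes(to);
}
```

A guard like this catches bugs such as executing an issue that was never planned.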
## Issue List Display
### Format Specification
```
Status Matrix:
Total: 5 | Registered: 2 | Planned: 2 | Completed: 1
Issue Details:
○ [1] ISS-001: Fix login bug (registered)
→ [2] ISS-002: Add MFA support (planning)
✓ [3] ISS-003: Refactor auth (completed)
✗ [4] ISS-004: Update password policy (failed)
```
### Displayed Fields
- ID: unique identifier
- Title: short description
- Status: current state
- Solution ID: bound solution (if any)
## Solution Planning
### Planning Inputs
- Issue ID and Title
- Issue description and context
- Project tech stack and guidelines
### Planning Outputs
- Solution ID: `SOL-{issue-id}-{sequence}`
- Tasks array: list of executable tasks
- Acceptance criteria
- Time estimate
### Planning Subagent Responsibilities
1. Analyze the issue description
2. Explore relevant code paths
3. Design the solution
4. Break it into executable tasks
5. Define acceptance criteria
### Handling Multiple Solutions
- If multiple solutions are generated, the user must choose one
- After selection, the primary solution is bound to the issue
- Alternative solutions are saved but not executed automatically
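A sketch of a line renderer for the format above; the glyph-per-status mapping is inferred from the example listing, and the in-progress glyph for `planned`/`executing` is an assumption:

```javascript
// Status glyphs as shown in the example listing:
// ○ registered, → in progress, ✓ completed, ✗ failed.
const GLYPHS = {
  registered: '○', planning: '→', planned: '→',
  executing: '→', completed: '✓', failed: '✗'
};

// Renders one "glyph [index] ID: title (status)" line.
function formatIssueLine(issue, index) {
  return `${GLYPHS[issue.status] || '?'} [${index}] ${issue.id}: ${issue.title} (${issue.status})`;
}
```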
## Solution Execution
### Execution Order
1. Load the planned solution
2. Execute all tasks in each solution one by one
3. For each task: implement → test → verify
4. Commit once on completion
### Execution Subagent Responsibilities
1. Load the solution JSON
2. Implement all tasks
3. Run tests
4. Check acceptance criteria
5. Commit the code and return the result
### Error Recovery
- Task failure: do not commit; mark the solution as failed
- Commit failure: create a snapshot to allow recovery
- Subagent timeout: log it and continue with the next item
## Batch Processing Conventions
### Input Formats
```bash
# Single issue
codex issue:plan-execute ISS-001
# Multiple issues
codex issue:plan-execute ISS-001,ISS-002,ISS-003
# Interactive
codex issue:plan-execute
```
### Processing Strategy
- Planning: could run in parallel, but runs serially here for consistency
- Execution: must be serial (to avoid conflicting commits)
- Queue: FIFO, no priority ordering
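The three input formats above can be distinguished by a small parser; the return shape (`mode`/`ids`) is an illustrative assumption rather than the CLI's actual contract:

```javascript
// Parses the issue argument: empty → interactive, one ID → single,
// comma-separated IDs → batch. IDs keep their original FIFO order.
function parseIssueArgs(arg) {
  if (!arg || !arg.trim()) return { mode: 'interactive', ids: [] };
  const ids = arg.split(',').map(s => s.trim()).filter(Boolean);
  return { mode: ids.length > 1 ? 'batch' : 'single', ids };
}
```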
## State Persistence
### Storage Location
```
.workflow/.scratchpad/codex-issue-{timestamp}/
├── state.json # Current state snapshot
├── state-history.json # State change history
├── queue.json # Execution queue
├── solutions/ # Solution files
├── snapshots/ # Process snapshots
└── final-report.md # Final report
```
### Snapshot Uses
- Process recovery: resume from the interruption point
- Debugging: record state changes at each stage
- Auditing: trace the complete execution history
## Quality Assurance
### Acceptance Checklist
- [ ] Issue specification is clear
- [ ] Solution follows the schema
- [ ] All tasks have acceptance criteria
- [ ] Execution success rate >= 80%
- [ ] Report generation is complete
### Error Classification
| Level | Type | Handling |
|------|------|------|
| Critical | Planning failure, commit failure | Abort that issue |
| Warning | Test failure, criteria unmet | Log and continue |
| Info | Timeout, network latency | Log only |


@@ -0,0 +1,231 @@
# Quality Standards
Quality evaluation standards and acceptance conditions.
## Quality Dimensions
### 1. Completeness - 25%
**Definition**: All required structures and fields are present.
- [ ] Every issue has a planning or execution result
- [ ] Every solution has a complete task list
- [ ] Every task has acceptance criteria
- [ ] The status log is fully recorded
**Scoring**
- 90-100%: Fully complete; only optional fields may be missing
- 70-89%: Main fields complete; some optional fields missing
- 50-69%: Core fields complete; important fields missing
- <50%: Structure incomplete
### 2. Consistency - 25%
**Definition**: Terminology, formats, and style are uniform across the workflow.
- [ ] Issue ID / Solution ID formats are uniform
- [ ] Status values follow the specification
- [ ] Task structure is consistent
- [ ] Timestamp format is consistent (ISO-8601)
**Scoring**
- 90-100%: Fully consistent, no formatting drift
- 70-89%: Mostly consistent, occasional format changes
- 50-69%: Half consistent, noticeable drift
- <50%: Severely inconsistent
### 3. Correctness - 25%
**Definition**: Execution produced no errors and all acceptance criteria passed.
- [ ] No circular DAG dependencies
- [ ] All tests pass
- [ ] All acceptance criteria verified
- [ ] No code conflicts
**Scoring**
- 90-100%: Fully correct, no errors
- 70-89%: Mostly correct, <10% error rate
- 50-69%: Noticeable errors, 10-30% error rate
- <50%: Too many errors, >30% error rate
### 4. Clarity - 25%
**Definition**: Documentation is readable and the logic is clear.
- [ ] Task descriptions are explicit and actionable
- [ ] Acceptance criteria are concrete
- [ ] Reports are well structured and easy to follow
- [ ] Error messages are detailed and helpful
**Scoring**
- 90-100%: Very clear, immediately understandable
- 70-89%: Mostly clear, basically readable
- 50-69%: Partly clear, hard to follow
- <50%: Very unclear, hard to understand
## Quality Gates
### Pass
**Condition**: overall score >= 80%
**Result**: the workflow completes normally and can proceed to the next stage
**Checklist**
- [ ] All issues planned or executed
- [ ] Success rate >= 80%
- [ ] No critical errors
- [ ] Report complete
### Review
**Condition**: overall score 60-79%
**Result**: the workflow partially completed, with room for improvement
**Common problems**
- Some tasks failed
- Some acceptance criteria unmet
- Documentation incomplete
**How to improve**
- Inspect the failed tasks
- Add the missing documentation
- Tune the workflow configuration
### Fail
**Condition**: overall score < 60%
**Result**: the workflow failed and must be redone
**Common causes**
- A critical task failed
- Planning was interrupted
- Too many system errors
- No valid report could be generated
**Recovery**
- Restore from a snapshot
- Fix the root cause
- Re-plan and re-execute
## Issue Classification
### Errors (must fix)
| Error | Impact | Handling |
|------|------|------|
| Circular DAG dependency | Critical | Abort planning |
| Task without acceptance criteria | High | Add the criteria |
| Commit failure | High | Investigate and retry |
| Planning subagent timeout | Medium | Retry or skip |
| Invalid solution ID | Medium | Regenerate |
### Warnings (should fix)
| Warning | Impact | Handling |
|------|------|------|
| Task runs too long | Medium | Consider splitting it |
| Low test coverage | Medium | Add tests |
| Multiple solutions | Low | Choose one explicitly |
| Vague criteria | Low | Improve the wording |
### Info (optional improvements)
| Item | Notes |
|------|------|
| Recommended task count | 2-7 tasks is optimal |
| Time guideline | Total time <= 2 hours is ideal |
| Code style | Check compliance with project conventions |
## Execution Checklist
### Planning Phase
- [ ] Issue description is clear
- [ ] A valid solution was generated
- [ ] All tasks have acceptance criteria
- [ ] Dependencies are correct
### Execution Phase
- [ ] Every task is fully implemented
- [ ] All tests pass
- [ ] All acceptance criteria verified
- [ ] Commit messages follow conventions
### Completion Phase
- [ ] Final report generated
- [ ] Statistics are accurate
- [ ] State fully persisted
- [ ] Snapshots saved correctly
## Automated Validation Function
```javascript
function runQualityChecks(workDir) {
const state = JSON.parse(Read(`${workDir}/state.json`));
const issues = state.issues || {};
const scores = {
completeness: checkCompleteness(issues),
consistency: checkConsistency(state),
correctness: checkCorrectness(issues),
clarity: checkClarity(state)
};
const overall = Object.values(scores).reduce((a, b) => a + b) / 4;
return {
scores: scores,
overall: overall.toFixed(1),
gate: overall >= 80 ? 'pass' : overall >= 60 ? 'review' : 'fail',
details: {
issues_total: Object.keys(issues).length,
completed: Object.values(issues).filter(i => i.status === 'completed').length,
failed: Object.values(issues).filter(i => i.status === 'failed').length
}
};
}
```
## Report Template
```markdown
# Quality Report
## Scores
| Dimension | Score | Status |
|-----------|-------|--------|
| Completeness | 90% | ✓ |
| Consistency | 85% | ✓ |
| Correctness | 92% | ✓ |
| Clarity | 88% | ✓ |
| **Overall** | **89%** | **PASS** |
## Issues Summary
- Total: 10
- Completed: 8 (80%)
- Failed: 2 (20%)
- Pending: 0 (0%)
## Recommendations
1. ...
2. ...
## Errors & Warnings
### Errors (0)
None
### Warnings (1)
- Task T4 in ISS-003 took 45 minutes (expected 30)
```


@@ -0,0 +1,270 @@
# Solution Schema Specification
Solution data structure and validation rules.
## When to Use
| Phase | Usage | Section |
|-------|-------|---------|
| Phase: action-plan | Solution generation | Solution Structure |
| Phase: action-execute | Task parsing | Task Definition |
---
## Solution Structure
### Full Schema
```json
{
"id": "SOL-ISS-001-1",
"issue_id": "ISS-001",
"description": "Fix authentication token expiration by extending TTL",
"strategy_type": "bugfix",
"created_at": "2025-01-29T11:00:00Z",
"tasks": [
{
"id": "T1",
"title": "Update token TTL configuration",
"action": "Modify",
"scope": "src/config/auth.ts",
"description": "Increase JWT token expiration from 1h to 24h",
"modification_points": [
{
"file": "src/config/auth.ts",
"target": "JWT_EXPIRY",
"change": "Change value from 3600 to 86400"
}
],
"implementation": [
"Open src/config/auth.ts",
"Locate JWT_EXPIRY constant",
"Update value: 3600 → 86400",
"Add comment explaining change"
],
"test": {
"commands": ["npm test -- auth.config.test.ts"],
"unit": ["Token expiration should be 24h"],
"integration": []
},
"acceptance": {
"criteria": [
"Unit tests pass",
"Token TTL is correctly set",
"No breaking changes to API"
],
"verification": [
"Run: npm test",
"Manual: Verify token in console"
]
},
"depends_on": [],
"estimated_minutes": 15,
"priority": 1
}
],
"exploration_context": {
"relevant_files": [
"src/config/auth.ts",
"src/services/auth.service.ts",
"tests/auth.test.ts"
],
"patterns": "Follow existing config pattern in .env",
"integration_points": "Used by AuthService in middleware"
},
"analysis": {
"risk": "low",
"impact": "medium",
"complexity": "low"
},
"score": 0.95,
"is_bound": true
}
```
## Field Reference
### Base Fields
| Field | Type | Required | Description |
|------|------|------|------|
| `id` | string | ✓ | Unique ID: SOL-{issue-id}-{seq} |
| `issue_id` | string | ✓ | Associated Issue ID |
| `description` | string | ✓ | Solution description |
| `strategy_type` | string | | Strategy type: bugfix/feature/refactor |
| `tasks` | array | ✓ | Task list, at least 1 |
### Task Fields
| Field | Type | Description |
|------|------|------|
| `id` | string | Task ID: T1, T2, ... |
| `title` | string | Task title |
| `action` | string | Action type: Create/Modify/Fix/Refactor |
| `scope` | string | Scope: file or directory |
| `modification_points` | array | List of concrete modification points |
| `implementation` | array | Implementation steps |
| `test` | object | Test commands and cases |
| `acceptance` | object | Acceptance criteria and verification steps |
| `depends_on` | array | Task dependencies: [T1, T2] |
| `estimated_minutes` | number | Estimated time (minutes) |
### Acceptance Criteria
```json
{
"acceptance": {
"criteria": [
"Unit tests pass",
"Function returns correct result",
"No performance regression"
],
"verification": [
"Run: npm test -- module.test.ts",
"Manual: Call function and verify output"
]
}
}
```
## Validation Rules
### Required-Field Checks
```javascript
function validateSolution(solution) {
if (!solution.id) throw new Error("Missing: id");
if (!solution.issue_id) throw new Error("Missing: issue_id");
if (!solution.description) throw new Error("Missing: description");
if (!Array.isArray(solution.tasks)) throw new Error("tasks must be array");
if (solution.tasks.length === 0) throw new Error("tasks cannot be empty");
return true;
}
function validateTask(task) {
if (!task.id) throw new Error("Missing: task.id");
if (!task.title) throw new Error("Missing: task.title");
if (!task.action) throw new Error("Missing: task.action");
if (!Array.isArray(task.implementation)) throw new Error("implementation must be array");
if (!task.acceptance) throw new Error("Missing: task.acceptance");
if (!Array.isArray(task.acceptance.criteria)) throw new Error("acceptance.criteria must be array");
if (task.acceptance.criteria.length === 0) throw new Error("acceptance.criteria cannot be empty");
return true;
}
```
### Format Validation
- ID format: `SOL-ISS-\d+-\d+`
- Action values: Create | Modify | Fix | Refactor | Add | Remove
- Risk/Impact/Complexity values: low | medium | high
- Score range: 0.0 - 1.0
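These rules translate directly into checks. The sketch below uses the exact pattern and enumerations listed above; note the ID regex, taken verbatim from the spec, assumes numeric issue IDs:

```javascript
// Format rules from the spec, expressed as data.
const ID_RE = /^SOL-ISS-\d+-\d+$/;
const ACTIONS = ['Create', 'Modify', 'Fix', 'Refactor', 'Add', 'Remove'];
const LEVELS = ['low', 'medium', 'high'];

// Collects all format violations rather than stopping at the first.
function validateFormats(solution) {
  const errors = [];
  if (!ID_RE.test(solution.id)) errors.push(`Bad id: ${solution.id}`);
  for (const t of solution.tasks || []) {
    if (!ACTIONS.includes(t.action)) errors.push(`Bad action: ${t.action}`);
  }
  const a = solution.analysis || {};
  for (const k of ['risk', 'impact', 'complexity']) {
    if (a[k] && !LEVELS.includes(a[k])) errors.push(`Bad ${k}: ${a[k]}`);
  }
  if (typeof solution.score === 'number' && (solution.score < 0 || solution.score > 1)) {
    errors.push(`Score out of range: ${solution.score}`);
  }
  return errors;
}
```

Returning a list of errors (instead of throwing) pairs well with the required-field checks above, which throw on the first problem.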
## Task Dependencies
### Representation
```json
{
"tasks": [
{
"id": "T1",
"title": "Create auth module",
"depends_on": []
},
{
"id": "T2",
"title": "Add authentication logic",
"depends_on": ["T1"]
},
{
"id": "T3",
"title": "Add tests",
"depends_on": ["T1", "T2"]
}
]
}
```
### DAG Validation
```javascript
function validateDAG(tasks) {
const visited = new Set();
const recursionStack = new Set();
function hasCycle(taskId) {
visited.add(taskId);
recursionStack.add(taskId);
const task = tasks.find(t => t.id === taskId);
for (const dep of (task && task.depends_on) || []) {
if (!visited.has(dep)) {
if (hasCycle(dep)) return true;
} else if (recursionStack.has(dep)) {
return true; // cycle detected
}
}
recursionStack.delete(taskId); // always unwind, even when a task has no dependencies
return false;
}
for (const task of tasks) {
if (!visited.has(task.id) && hasCycle(task.id)) {
throw new Error(`Circular dependency detected: ${task.id}`);
}
}
return true;
}
```
## File Storage
### Location
```
.workflow/.scratchpad/codex-issue-{timestamp}/solutions/
├── ISS-001-plan.json # Planning result
├── ISS-001-execution.json # Execution result
├── ISS-002-plan.json
└── ISS-002-execution.json
```
### File Contents
**Planning result**: the complete solution definition
**Execution result**: execution status and commit info
```json
{
"solution_id": "SOL-ISS-001-1",
"status": "completed|failed",
"executed_at": "ISO-8601",
"execution_result": {
"files_modified": ["src/auth.ts"],
"commit_hash": "abc123...",
"tests_passed": true
}
}
```
## Quality Gate
### Solution Scoring Criteria
| Metric | Weight | Scoring Method |
|------|------|----------|
| Task completeness | 30% | No empty tasks; every task has acceptance criteria |
| Dependency validity | 20% | No circular dependencies; clear dependency chains |
| Testable acceptance | 30% | Criteria are concrete and testable, with verification steps |
| Complexity assessment | 20% | Risk/Impact/Complexity assessed reasonably |
### Pass Conditions
- All required fields present
- No format errors
- No circular dependencies
- Score >= 0.8
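A sketch of the weighted score, assuming each metric has been normalized to 0.0-1.0 by upstream checks; the weight keys are shorthand for the table rows and not official field names:

```javascript
// Weights from the scoring-criteria table (sum to 1.0).
const WEIGHTS = { completeness: 0.3, dependencies: 0.2, testability: 0.3, complexity: 0.2 };

// Weighted sum of the four normalized sub-scores; missing scores count as 0.
function solutionScore(subScores) {
  return Object.entries(WEIGHTS)
    .reduce((sum, [key, w]) => sum + w * (subScores[key] ?? 0), 0);
}

// Gate from the pass conditions: overall score must reach 0.8.
function passesGate(subScores) {
  return solutionScore(subScores) >= 0.8;
}
```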


@@ -0,0 +1,32 @@
⚠️ **DEPRECATED** - This file is deprecated as of v2.0 (2025-01-29)
**Use instead**: [`agent-roles.md`](agent-roles.md)
This file has been superseded by a consolidated `agent-roles.md` that improves organization and eliminates duplication.
**Why the change?**
- Consolidates all agent role definitions in one place
- Eliminates duplicated role descriptions
- Single source of truth for agent capabilities
- Better organization with unified reference format
**Migration**:
```javascript
// OLD (v1.0)
// Reference: specs/subagent-roles.md
// NEW (v2.0)
// Reference: specs/agent-roles.md
```
**Timeline**:
- v2.0 (2025-01-29): Old file kept for backward compatibility
- v2.1 (2025-03-31): Old file will be removed
---
# Subagent Roles Definition (Legacy - See agent-roles.md instead)
See [`agent-roles.md`](agent-roles.md) for the current consolidated agent roles specification.
All content has been merged into the new agent-roles.md file with improved organization and formatting.

AGENTS.md

@@ -1,331 +0,0 @@
# Codex Agent Execution Protocol
## Overview
**Role**: Autonomous development, implementation, and testing specialist
## Prompt Structure
All prompts follow this 6-field format:
```
PURPOSE: [development goal]
TASK: [specific implementation task]
MODE: [auto|write]
CONTEXT: [file patterns]
EXPECTED: [deliverables]
RULES: [templates | additional constraints]
```
**Subtask indicator**: `Subtask N of M: [title]` or `CONTINUE TO NEXT SUBTASK`
## MODE Definitions
### MODE: auto (default)
**Permissions**:
- Full file operations (create/modify/delete)
- Run tests and builds
- Commit code incrementally
**Execute**:
1. Parse PURPOSE and TASK
2. Analyze CONTEXT files - find 3+ similar patterns
3. Plan implementation following RULES
4. Generate code with tests
5. Run tests continuously
6. Commit working code incrementally
7. Validate EXPECTED deliverables
8. Report results (with context for next subtask if multi-task)
**Constraint**: Must test every change
### MODE: write
**Permissions**:
- Focused file operations
- Create/modify specific files
- Run tests for validation
**Execute**:
1. Analyze CONTEXT files
2. Make targeted changes
3. Validate tests pass
4. Report file changes
## Execution Protocol
### Core Requirements
**ALWAYS**:
- Parse all 6 fields (PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES)
- Study CONTEXT files - find 3+ similar patterns before implementing
- Apply RULES (templates + constraints) exactly
- Test continuously after every change
- Commit incrementally with working code
- Match project style and patterns exactly
- List all created/modified files at output beginning
- Use direct binary calls (avoid shell wrappers)
- Prefer apply_patch for text edits
- Configure Windows UTF-8 encoding for Chinese support
**NEVER**:
- Make assumptions without code verification
- Ignore existing patterns
- Skip tests
- Use clever tricks over boring solutions
- Over-engineer solutions
- Break existing code or backward compatibility
- Exceed 3 failed attempts without stopping
### RULES Processing
- Parse RULES field to extract template content and constraints
- Recognize `|` as separator: `template content | additional constraints`
- Apply ALL template guidelines as mandatory
- Apply ALL additional constraints as mandatory
- Treat rule violations as task failures
### Multi-Task Execution (Resume Pattern)
**First subtask**: Standard execution flow above
**Subsequent subtasks** (via `resume --last`):
- Recall context from previous subtasks
- Build on previous work (don't repeat)
- Maintain consistency with established patterns
- Focus on current subtask scope only
- Test integration with previous work
- Report context for next subtask
## System Optimization
**Direct Binary Calls**: Always call binaries directly in `functions.shell`, set `workdir`, avoid shell wrappers (`bash -lc`, `cmd /c`, etc.)
**Text Editing Priority**:
1. Use `apply_patch` tool for all routine text edits
2. Fall back to `sed` for single-line substitutions if unavailable
3. Avoid Python editing scripts unless both fail
**apply_patch invocation**:
```json
{
"command": ["apply_patch", "*** Begin Patch\n*** Update File: path/to/file\n@@\n- old\n+ new\n*** End Patch\n"],
"workdir": "<workdir>",
"justification": "Brief reason"
}
```
**Windows UTF-8 Encoding** (before commands):
```powershell
[Console]::InputEncoding = [Text.UTF8Encoding]::new($false)
[Console]::OutputEncoding = [Text.UTF8Encoding]::new($false)
chcp 65001 > $null
```
## Output Standards
### Format Priority
**If template defines output format** → Follow template format EXACTLY (all sections mandatory)
**If template has no format** → Use default format below based on task type
### Default Output Formats
#### Single Task Implementation
```markdown
# Implementation: [TASK Title]
## Changes
- Created: `path/to/file1.ext` (X lines)
- Modified: `path/to/file2.ext` (+Y/-Z lines)
- Deleted: `path/to/file3.ext`
## Summary
[2-3 sentence overview of what was implemented]
## Key Decisions
1. [Decision] - Rationale and reference to similar pattern
2. [Decision] - path/to/reference:line
## Implementation Details
[Evidence-based description with code references]
## Testing
- Tests written: X new tests
- Tests passing: Y/Z tests
- Coverage: N%
## Validation
✅ Tests: X passing
✅ Coverage: Y%
✅ Build: Success
## Next Steps
[Recommendations or future improvements]
```
#### Multi-Task Execution (with Resume)
**First Subtask**:
```markdown
# Subtask 1/N: [TASK Title]
## Changes
[List of file changes]
## Implementation
[Details with code references]
## Testing
✅ Tests: X passing
✅ Integration: Compatible with existing code
## Context for Next Subtask
- Key decisions: [established patterns]
- Files created: [paths and purposes]
- Integration points: [where next subtask should connect]
```
**Subsequent Subtasks**:
```markdown
# Subtask N/M: [TASK Title]
## Changes
[List of file changes]
## Integration Notes
✅ Compatible with subtask N-1
✅ Maintains established patterns
✅ Tests pass with previous work
## Implementation
[Details with code references]
## Testing
✅ Tests: X passing
✅ Total coverage: Y%
## Context for Next Subtask
[If not final subtask, provide context for continuation]
```
#### Partial Completion
```markdown
# Task Status: Partially Completed
## Completed
- [What worked successfully]
- Files: `path/to/completed.ext`
## Blocked
- **Issue**: [What failed]
- **Root Cause**: [Analysis of failure]
- **Attempted**: [Solutions tried - attempt X of 3]
## Required
[What's needed to proceed]
## Recommendation
[Suggested next steps or alternative approaches]
```
### Code References
**Format**: `path/to/file:line_number`
**Example**: `src/auth/jwt.ts:45` - Implemented token validation following pattern from `src/auth/session.ts:78`
### Related Files Section
**Always include at output beginning** - List ALL files analyzed, created, or modified:
```markdown
## Related Files
- `path/to/file1.ext` - [Role in implementation]
- `path/to/file2.ext` - [Reference pattern used]
- `path/to/file3.ext` - [Modified for X reason]
```
## Error Handling
### Three-Attempt Rule
**On 3rd failed attempt**:
1. Stop execution
2. Report: What attempted, what failed, root cause
3. Request guidance or suggest alternatives
### Recovery Strategies
| Error Type | Response |
|------------|----------|
| **Syntax/Type** | Review errors → Fix → Re-run tests → Validate build |
| **Runtime** | Analyze stack trace → Add error handling → Test error cases |
| **Test Failure** | Debug in isolation → Review setup → Fix implementation/test |
| **Build Failure** | Check messages → Fix incrementally → Validate each fix |
## Quality Standards
### Code Quality
- Follow project's existing patterns
- Match import style and naming conventions
- Single responsibility per function/class
- DRY (Don't Repeat Yourself)
- YAGNI (You Aren't Gonna Need It)
### Testing
- Test all public functions
- Test edge cases and error conditions
- Mock external dependencies
- Target 80%+ coverage
### Error Handling
- Proper try-catch blocks
- Clear error messages
- Graceful degradation
- Don't expose sensitive info
## Core Principles
**Incremental Progress**:
- Small, testable changes
- Commit working code frequently
- Build on previous work (subtasks)
**Evidence-Based**:
- Study 3+ similar patterns before implementing
- Match project style exactly
- Verify with existing code
**Pragmatic**:
- Boring solutions over clever code
- Simple over complex
- Adapt to project reality
**Context Continuity** (Multi-Task):
- Leverage resume for consistency
- Maintain established patterns
- Test integration between subtasks
## Execution Checklist
**Before**:
- [ ] Understand PURPOSE and TASK clearly
- [ ] Review CONTEXT files, find 3+ patterns
- [ ] Check RULES templates and constraints
**During**:
- [ ] Follow existing patterns exactly
- [ ] Write tests alongside code
- [ ] Run tests after every change
- [ ] Commit working code incrementally
**After**:
- [ ] All tests pass
- [ ] Coverage meets target
- [ ] Build succeeds
- [ ] All EXPECTED deliverables met

UNIFIED_EXECUTE_SUMMARY.md

@@ -0,0 +1,339 @@
# Unified-Execute-With-File: Implementation Summary
## 🎉 Project Complete
Both Claude and Codex versions of the universal execution engine are now ready for production use.
---
## 📦 Deliverables
### 1. Claude CLI Command (Optimized)
- **Location**: `.claude/commands/workflow/unified-execute-with-file.md`
- **Size**: 807 lines (25 KB)
- **Status**: ✅ Production-ready
- **Optimization**: 26% reduction from original 1,094 lines
**Usage**:
```bash
/workflow:unified-execute-with-file
/workflow:unified-execute-with-file -p .workflow/IMPL_PLAN.md -m parallel
/workflow:unified-execute-with-file -y "auth module"
```
### 2. Codex Prompt (Format-Adapted)
- **Location**: `.codex/prompts/unified-execute-with-file.md`
- **Size**: 722 lines (22 KB)
- **Status**: ✅ Production-ready
- **Savings**: 85 fewer lines than Claude version
**Usage**:
```
PLAN_PATH=".workflow/IMPL_PLAN.md"
EXECUTION_MODE="parallel"
AUTO_CONFIRM="yes"
EXECUTION_CONTEXT="auth module"
```
### 3. Comparison Guide
- **Location**: `.codex/prompts/UNIFIED_EXECUTE_COMPARISON.md`
- **Size**: 205 lines (5.5 KB)
- **Purpose**: Parameter mapping, format differences, migration paths
---
## ✨ Core Features (Both Versions)
### Plan Parsing
- ✅ IMPL_PLAN.md (from `/workflow:plan`)
- ✅ brainstorm synthesis.json (from `/workflow:brainstorm-with-file`)
- ✅ analysis conclusions.json (from `/workflow:analyze-with-file`)
- ✅ debug recommendations (from `/workflow:debug-with-file`)
- ✅ task JSON files (from `/workflow:lite-plan`)
### Multi-Agent Support
- ✅ code-developer (implementation)
- ✅ tdd-developer (test-driven development)
- ✅ test-fix-agent (testing & fixes)
- ✅ doc-generator (documentation)
- ✅ cli-execution-agent (CLI-based)
- ✅ universal-executor (fallback)
### Execution Strategy
- ✅ Dependency resolution (topological sort)
- ✅ Parallel execution (max 3 tasks/wave)
- ✅ File conflict detection
- ✅ Sequential fallback for conflicts
- ✅ Wave-based grouping
### Progress Tracking
- ✅ execution-events.md: Single source of truth
- ✅ Append-only unified execution log
- ✅ Agent reads all previous executions
- ✅ Knowledge chain between agents
- ✅ Human-readable + machine-parseable
### Error Handling
- ✅ Automatic retry mechanism
- ✅ User-interactive retry/skip/abort
- ✅ Dependency-aware task skipping
- ✅ Detailed error recovery notes
### Session Management
- ✅ Incremental execution (no re-execution)
- ✅ Resumable from failure points
- ✅ Cross-version compatibility (Claude ↔ Codex)
- ✅ Persistent session tracking
---
## 📂 Session Structure
Both versions create identical session structure:
```
.workflow/.execution/{executionId}/
├── execution.md # Execution plan and status
│ # - Task table, dependency graph
│ # - Execution timeline, statistics
└── execution-events.md # SINGLE SOURCE OF TRUTH
# - All agent executions (chronological)
# - Success/failure with details
# - Artifacts and notes for next agent
```
**Generated files**:
- Created at project paths: `src/types/auth.ts` (not `artifacts/src/types/auth.ts`)
- execution-events.md records actual paths for reference
---
## 🚀 Execution Flow
```
1. Load & Parse Plan
├─ Detect plan format
├─ Extract tasks
└─ Validate dependencies
2. Session Setup
├─ Create execution folder
├─ Initialize execution.md
└─ Initialize execution-events.md
3. Pre-Execution Validation
├─ Check task feasibility
├─ Detect dependency cycles
└─ User confirmation (unless auto-confirm)
4. Execution Orchestration
├─ Topological sort
├─ Group into waves (parallel-safe)
├─ Execute wave by wave
└─ Track progress in real-time
5. Progress Logging
├─ Each agent reads all previous executions
├─ Agent executes with full context
├─ Agent appends event (success/failure)
└─ Next agent inherits complete history
6. Completion
├─ Collect statistics
├─ Update execution.md
├─ execution-events.md complete
└─ Offer follow-up options
```
---
## 📊 Statistics
| Metric | Claude | Codex | Combined |
|--------|--------|-------|----------|
| **Lines** | 807 | 722 | 1,529 |
| **Size (KB)** | 25 | 22 | 47 |
| **Phases** | 4 | 4 | 4 |
| **Agent types** | 6+ | 6+ | 6+ |
| **Max parallel tasks** | 3 | 3 | 3 |
---
## 🔄 Cross-Version Compatibility
**Migration is seamless**:
| Scenario | Status |
|----------|--------|
| Start Claude → Resume Codex | ✅ Compatible |
| Start Codex → Resume Claude | ✅ Compatible |
| Mix both in workflows | ✅ Compatible |
| execution-events.md format | ✅ Identical |
| Session ID structure | ✅ Identical |
| Artifact locations | ✅ Identical |
| Agent selection | ✅ Identical |
---
## 📈 Implementation Progress
### Phase 1: Claude Optimization
- Initial version: 1,094 lines
- Optimizations:
- Consolidated Phase 3 (205 → 30 lines)
- Merged error handling (90 → 40 lines)
- Removed duplicate template
- Preserved all technical specifications
- Result: 807 lines (-26%)
### Phase 2: Codex Adaptation
- Format conversion: YAML CLI → Variable substitution
- Streamlined Phase documentation
- Maintained all core logic
- Result: 722 lines (85 fewer than Claude)
### Phase 3: Documentation
- Created comparison guide (205 lines)
- Parameter mapping matrix
- Format differences analysis
- Migration paths documented
---
## 📝 Git Commits
```
0fe8c18a docs: Add comparison guide between Claude and Codex versions
0086413f feat: Add Codex unified-execute-with-file prompt
8ff698ae refactor: Optimize unified-execute-with-file command documentation
```
---
## 🎯 Design Principles
1. **Single Source of Truth**
- execution-events.md as unified execution log
- No redundant tracking systems
2. **Knowledge Chain**
- Each agent reads all previous executions
- Context automatically inherited
- Full visibility into dependencies
3. **Format Agnostic**
- Accepts any planning/brainstorm/analysis output
- Smart format detection
- Extensible parser architecture
4. **Incremental Progress**
- No re-execution of completed tasks
- Resume from failure points
- Session persistence
5. **Safety & Visibility**
- Append-only event logging
- No data loss on failure
- Detailed error recovery
- Complete execution timeline
---
## 🔧 When to Use Each Version
### Use Claude Version When:
- Running in Claude Code CLI environment
- Need direct tool integration (TodoWrite, Task, AskUserQuestion)
- Prefer CLI flag syntax (`-y`, `-p`, `-m`)
- Building multi-command workflows
- Want full workflow system integration
### Use Codex Version When:
- Executing directly in Codex
- Prefer variable substitution format
- Need standalone execution
- Integrating with Codex command chains
- Want simpler parameter interface
---
## ✅ Quality Assurance
- ✅ Both versions functionally equivalent
- ✅ Dependency management validated
- ✅ Parallel execution tested
- ✅ Error handling verified
- ✅ Event logging format documented
- ✅ Cross-version compatibility confirmed
- ✅ Parameter mapping complete
- ✅ Session structure identical
---
## 📚 Documentation
**Main files**:
1. `.claude/commands/workflow/unified-execute-with-file.md` (807 lines)
- Complete Claude CLI command specification
- Full implementation details
- Phase-by-phase breakdown
2. `.codex/prompts/unified-execute-with-file.md` (722 lines)
- Codex-adapted prompt
- Format substitution
- Streamlined logic
3. `.codex/prompts/UNIFIED_EXECUTE_COMPARISON.md` (205 lines)
- Format differences
- Functional equivalence matrix
- Parameter mapping
- Usage recommendations
- Migration paths
---
## 🎓 Integration Points
**Input formats consumed**:
- IMPL_PLAN.md (from `/workflow:plan`)
- brainstorm synthesis.json (from `/workflow:brainstorm-with-file`)
- analysis conclusions.json (from `/workflow:analyze-with-file`)
- debug recommendations (from `/workflow:debug-with-file`)
- task JSON files (from `/workflow:lite-plan`)
**Output formats produced**:
- execution.md: Plan overview + execution timeline
- execution-events.md: Complete execution record
- Generated files at project paths
**Agent coordination**:
- code-developer, tdd-developer, test-fix-agent, doc-generator, cli-execution-agent, universal-executor
---
## 🚀 Ready for Production
Both implementations are complete, tested, and documented:
- **Claude CLI**: `/workflow:unified-execute-with-file`
- **Codex Prompt**: `unified-execute-with-file`
- **Comparison**: `UNIFIED_EXECUTE_COMPARISON.md`
Start using them immediately, or integrate them into existing workflows.
---
## 📞 Next Steps
1. **Use Claude version** for workflow system integration
2. **Use Codex version** for direct Codex execution
3. **Refer to comparison guide** for format mapping
4. **Mix versions** for multi-tool workflows
5. **Extend parsers** for new plan formats as needed
---
**Project Status**: ✅ **COMPLETE**
All deliverables ready for production use.

View File

@@ -209,6 +209,7 @@ export function run(argv: string[]): void {
.option('--turn <n>', 'Turn number for cache (default: latest)')
.option('--raw', 'Raw output only (no formatting)')
.option('--final', 'Output final result only with usage hint')
.option('--to-file <path>', 'Save output to file')
.action((subcommand, args, options) => cliCommand(subcommand, args, options));
// Memory command

View File

@@ -77,6 +77,10 @@ function notifyDashboard(data: Record<string, unknown>): void {
* Uses specific event types that match frontend handlers
*/
function broadcastStreamEvent(eventType: string, payload: Record<string, unknown>): void {
if (process.env.DEBUG_CLI_EVENTS) {
console.error(`[CLI-BROADCAST] START ${eventType} at ${Date.now()}`);
}
const data = JSON.stringify({
type: eventType,
...payload,
@@ -107,6 +111,10 @@ function broadcastStreamEvent(eventType: string, payload: Record<string, unknown
});
req.write(data);
req.end();
if (process.env.DEBUG_CLI_EVENTS) {
console.error(`[CLI-BROADCAST] END ${eventType} at ${Date.now()}`);
}
}
interface CliExecOptions {
@@ -132,6 +140,8 @@ interface CliExecOptions {
title?: string; // Optional title for review summary
// Template/Rules options
rule?: string; // Template name for auto-discovery (defines $PROTO and $TMPL env vars)
// Output options
toFile?: string; // Save output to file
}
/** Cache configuration parsed from --cache */
@@ -580,7 +590,7 @@ async function statusAction(debug?: boolean): Promise<void> {
* @param {Object} options - CLI options
*/
async function execAction(positionalPrompt: string | undefined, options: CliExecOptions): Promise<void> {
const { prompt: optionPrompt, file, tool: userTool, mode = 'analysis', model, cd, includeDirs, stream, resume, id, noNative, cache, injectMode, debug, uncommitted, base, commit, title, rule } = options;
const { prompt: optionPrompt, file, tool: userTool, mode = 'analysis', model, cd, includeDirs, stream, resume, id, noNative, cache, injectMode, debug, uncommitted, base, commit, title, rule, toFile } = options;
// Determine the tool to use: explicit --tool option, or defaultTool from config
let tool = userTool;
@@ -919,6 +929,13 @@ async function execAction(positionalPrompt: string | undefined, options: CliExec
mode
});
if (process.env.DEBUG) {
console.error(`[CLI] Generated executionId: ${executionId}`);
}
// Buffer to accumulate output when both --stream and --to-file are specified
let streamBuffer = '';
// Streaming output handler - broadcasts to dashboard AND writes to stdout
const onOutput = (unit: CliOutputUnit) => {
// Always broadcast to dashboard for real-time viewing
@@ -939,10 +956,14 @@ async function execAction(positionalPrompt: string | undefined, options: CliExec
case 'stdout':
case 'code':
case 'streaming_content': // Show streaming delta content in real-time
process.stdout.write(typeof unit.content === 'string' ? unit.content : JSON.stringify(unit.content));
const content1 = typeof unit.content === 'string' ? unit.content : JSON.stringify(unit.content);
process.stdout.write(content1);
if (toFile) streamBuffer += content1;
break;
case 'stderr':
process.stderr.write(typeof unit.content === 'string' ? unit.content : JSON.stringify(unit.content));
const content2 = typeof unit.content === 'string' ? unit.content : JSON.stringify(unit.content);
process.stderr.write(content2);
if (toFile) streamBuffer += content2;
break;
case 'thought':
// Optional: display thinking process with different color
@@ -955,7 +976,9 @@ async function execAction(positionalPrompt: string | undefined, options: CliExec
default:
// Other types: output content if available
if (unit.content) {
process.stdout.write(typeof unit.content === 'string' ? unit.content : '');
const content3 = typeof unit.content === 'string' ? unit.content : '';
process.stdout.write(content3);
if (toFile) streamBuffer += content3;
}
}
}
@@ -975,7 +998,7 @@ async function execAction(positionalPrompt: string | undefined, options: CliExec
includeDirs,
// timeout removed - controlled by external caller (bash timeout)
resume,
id, // custom execution ID
id: executionId, // unified execution ID (matches broadcast events)
noNative,
stream: !!stream, // stream=true → streaming enabled (no cache), stream=false → cache output (default)
outputFormat, // Enable JSONL parsing for tools that support it
@@ -1006,12 +1029,43 @@ async function execAction(positionalPrompt: string | undefined, options: CliExec
const output = result.parsedOutput || result.stdout;
if (output) {
console.log(output);
// Save to file if --to-file is specified
if (toFile) {
try {
const { writeFileSync, mkdirSync } = await import('fs');
const { dirname, resolve } = await import('path');
const filePath = resolve(cd || process.cwd(), toFile);
const dir = dirname(filePath);
mkdirSync(dir, { recursive: true });
writeFileSync(filePath, output, 'utf8');
if (debug) {
console.log(chalk.gray(` Saved output to: ${filePath}`));
}
} catch (err) {
console.error(chalk.red(` Error saving to file: ${(err as Error).message}`));
}
}
}
}
// Print summary with execution ID and turn info
console.log();
if (result.success) {
// Save streaming output to file if needed
if (stream && toFile && streamBuffer) {
try {
const { writeFileSync, mkdirSync } = await import('fs');
const { dirname, resolve } = await import('path');
const filePath = resolve(cd || process.cwd(), toFile);
const dir = dirname(filePath);
mkdirSync(dir, { recursive: true });
writeFileSync(filePath, streamBuffer, 'utf8');
} catch (err) {
console.error(chalk.red(` Error saving to file: ${(err as Error).message}`));
}
}
if (!spinner) {
const turnInfo = result.conversation.turn_count > 1
? ` (turn ${result.conversation.turn_count})`
@@ -1033,6 +1087,11 @@ async function execAction(positionalPrompt: string | undefined, options: CliExec
if (!stream) {
console.log(chalk.dim(` Output (optional): ccw cli output ${result.execution.id}`));
}
if (toFile) {
const { resolve } = await import('path');
const filePath = resolve(cd || process.cwd(), toFile);
console.log(chalk.green(` Saved to: ${filePath}`));
}
// Notify dashboard: execution completed (legacy)
notifyDashboard({
@@ -1046,15 +1105,27 @@ async function execAction(positionalPrompt: string | undefined, options: CliExec
});
// Broadcast CLI_EXECUTION_COMPLETED for real-time streaming viewer
if (process.env.DEBUG_CLI_EVENTS) {
console.error(`[CLI-TIMING] Broadcasting CLI_EXECUTION_COMPLETED at ${Date.now()}`);
}
broadcastStreamEvent('CLI_EXECUTION_COMPLETED', {
executionId, // Use the same executionId as started event
success: true,
duration: result.execution.duration_ms
});
if (process.env.DEBUG_CLI_EVENTS) {
console.error(`[CLI-TIMING] Broadcast returned, setting timeout at ${Date.now()}`);
}
// Ensure clean exit after successful execution
// Delay to allow HTTP request to complete
setTimeout(() => process.exit(0), 150);
// FIX: Increased from 150ms to 500ms for long-running executions
setTimeout(() => {
if (process.env.DEBUG_CLI_EVENTS) {
console.error(`[CLI-TIMING] process.exit(0) at ${Date.now()}`);
}
process.exit(0);
}, 500);
} else {
if (!spinner) {
console.log(chalk.red(` ✗ Failed (${result.execution.status})`));

View File

@@ -918,9 +918,11 @@ export async function handleClaudeRoutes(ctx: RouteContext): Promise<boolean> {
const userCodexPath = join(homedir(), '.codex', 'AGENTS.md');
const chineseRefPattern = /@.*chinese-response\.md/i;
const chineseSectionPattern = /## 中文回复/; // For Codex direct content
const oldCodexRefPattern = /- \*\*中文回复准则\*\*:\s*@.*chinese-response\.md/i; // Old Codex format
let claudeEnabled = false;
let codexEnabled = false;
let codexNeedsMigration = false;
let guidelinesPath = '';
// Check if user CLAUDE.md exists and contains Chinese response reference
@@ -934,6 +936,10 @@ export async function handleClaudeRoutes(ctx: RouteContext): Promise<boolean> {
if (existsSync(userCodexPath)) {
const content = readFileSync(userCodexPath, 'utf8');
codexEnabled = chineseSectionPattern.test(content);
// Check if Codex has old @ reference format that needs migration
if (codexEnabled && oldCodexRefPattern.test(content)) {
codexNeedsMigration = true;
}
}
// Find guidelines file path - always use user-level path
@@ -948,6 +954,7 @@ export async function handleClaudeRoutes(ctx: RouteContext): Promise<boolean> {
enabled: claudeEnabled, // backward compatibility
claudeEnabled,
codexEnabled,
codexNeedsMigration, // New field: true if Codex has old @ reference format
guidelinesPath,
guidelinesExists: !!guidelinesPath,
userClaudeMdExists: existsSync(userClaudePath),
@@ -1002,11 +1009,36 @@ export async function handleClaudeRoutes(ctx: RouteContext): Promise<boolean> {
if (isCodex) {
// Codex: Direct content concatenation (does not support @ references)
const chineseSectionPattern = /\n*## 中文回复\n[\s\S]*?(?=\n## |$)/;
const oldRefPattern = /- \*\*中文回复准则\*\*:\s*@.*chinese-response\.md/i; // Old @ reference format
if (enabled) {
// Check if section already exists
if (chineseSectionPattern.test(content)) {
return { success: true, message: 'Already enabled' };
// Check if section exists and if it needs migration
const hasSection = chineseSectionPattern.test(content);
if (hasSection) {
// Check if it's the old format with @ reference
const hasOldRef = oldRefPattern.test(content);
if (hasOldRef) {
// Migrate: remove old section and add new content
content = content.replace(chineseSectionPattern, '\n');
content = content.replace(/\n{3,}/g, '\n\n').trim();
if (content) content += '\n';
// Read chinese-response.md content
const chineseResponseContent = readFileSync(userGuidelinesPath, 'utf8');
// Add new section with direct content
const newSection = `\n## 中文回复\n\n${chineseResponseContent}\n`;
content = content.trimEnd() + '\n' + newSection;
writeFileSync(targetFile, content, 'utf8');
return { success: true, enabled, migrated: true, message: 'Migrated from @ reference to direct content' };
}
// Already has correct format
return { success: true, message: 'Already enabled with correct format' };
}
// Read chinese-response.md content
@@ -1016,7 +1048,7 @@ export async function handleClaudeRoutes(ctx: RouteContext): Promise<boolean> {
const newSection = `\n## 中文回复\n\n${chineseResponseContent}\n`;
content = content.trimEnd() + '\n' + newSection;
} else {
// Remove Chinese response section
// Remove Chinese response section (both old and new format)
content = content.replace(chineseSectionPattern, '\n');
content = content.replace(/\n{3,}/g, '\n\n').trim();
if (content) content += '\n';

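The migration logic hinges on two patterns: one matching the whole `## 中文回复` section, and one matching the legacy `@` reference line inside it. A quick standalone check of how they behave, using the patterns copied from the handler above (the sample AGENTS.md content is made up):

```javascript
// Patterns copied from the route handler.
const chineseSectionPattern = /\n*## 中文回复\n[\s\S]*?(?=\n## |$)/;
const oldRefPattern = /- \*\*中文回复准则\*\*:\s*@.*chinese-response\.md/i;

const sample = [
  '# AGENTS',
  '',
  '## 中文回复',
  '',
  '- **中文回复准则**: @~/.claude/chinese-response.md',
  '',
  '## Other',
].join('\n');

console.log(oldRefPattern.test(sample)); // true → old @ reference, needs migration
const stripped = sample
  .replace(chineseSectionPattern, '\n')
  .replace(/\n{3,}/g, '\n\n')
  .trim();
console.log(stripped.includes('中文回复')); // false → section removed cleanly
```

The lookahead `(?=\n## |$)` is what keeps the section pattern from swallowing the following `## Other` heading.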
View File

@@ -605,7 +605,7 @@ export async function handleCliRoutes(ctx: RouteContext): Promise<boolean> {
// API: Execute CLI Tool
if (pathname === '/api/cli/execute' && req.method === 'POST') {
handlePostRequest(req, res, async (body) => {
const { tool, prompt, mode, format, model, dir, includeDirs, timeout, smartContext, parentExecutionId, category } = body as any;
const { tool, prompt, mode, format, model, dir, includeDirs, timeout, smartContext, parentExecutionId, category, toFile } = body as any;
if (!tool || !prompt) {
return { error: 'tool and prompt are required', status: 400 };
@@ -696,6 +696,21 @@ export async function handleCliRoutes(ctx: RouteContext): Promise<boolean> {
console.log(`[ActiveExec] Direct execution ${executionId} marked as ${activeExec.status}, retained for ${EXECUTION_RETENTION_MS / 1000}s`);
}
// Save output to file if --to-file is specified
if (toFile && result.stdout) {
try {
const { writeFileSync, mkdirSync } = await import('fs');
const { dirname, resolve } = await import('path');
const filePath = resolve(dir || initialPath, toFile);
const dirPath = dirname(filePath);
mkdirSync(dirPath, { recursive: true });
writeFileSync(filePath, result.stdout, 'utf8');
console.log(`[API] Output saved to: ${filePath}`);
} catch (err) {
console.warn(`[API] Failed to save output to file: ${(err as Error).message}`);
}
}
// Broadcast completion
broadcastToClients({
type: 'CLI_EXECUTION_COMPLETED',

View File

@@ -3,7 +3,7 @@
* Handles all Help-related API endpoints for command guide and CodexLens docs
*/
import { readFileSync, existsSync, watch } from 'fs';
import { join } from 'path';
import { join, normalize, relative, resolve, sep } from 'path';
import { homedir } from 'os';
import type { RouteContext } from './types.js';
@@ -372,6 +372,65 @@ export async function handleHelpRoutes(ctx: RouteContext): Promise<boolean> {
return true;
}
// API: Get command document content by source path
if (pathname === '/api/help/command-content') {
const sourceParam = url.searchParams.get('source');
if (!sourceParam) {
res.writeHead(400, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Missing source parameter' }));
return true;
}
try {
// Determine the source path's actual location:
// The source in command.json is relative to .claude/skills/ccw-help/
// E.g., "../../commands/cli/cli-init.md"
// We need to resolve this against that actual location, not the project root
const baseDir = initialPath || join(homedir(), '.claude');
const commandJsonDir = join(baseDir, 'skills', 'ccw-help');
// Resolve the source path against where command.json actually is
const resolvedPath = resolve(commandJsonDir, sourceParam);
// Normalize the path for the OS
const normalizedPath = normalize(resolvedPath);
// Security: Verify path is within base directory (prevent path traversal)
const relPath = relative(baseDir, normalizedPath);
if (relPath.startsWith('..') || relPath.startsWith('~')) {
console.warn(`[help-content] Access denied: Path traversal attempt - ${relPath}`);
res.writeHead(403, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Access denied' }));
return true;
}
console.log(`[help-content] Base directory: ${baseDir}`);
console.log(`[help-content] Command.json dir: ${commandJsonDir}`);
console.log(`[help-content] Source parameter: ${sourceParam}`);
console.log(`[help-content] Attempting to load: ${normalizedPath}`);
console.log(`[help-content] Relative path check: ${relPath}`);
if (!existsSync(normalizedPath)) {
console.warn(`[help-content] File not found: ${normalizedPath}`);
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Document not found' }));
return true;
}
const content = readFileSync(normalizedPath, 'utf8');
console.log(`[help-content] Successfully served: ${normalizedPath}`);
res.writeHead(200, { 'Content-Type': 'text/markdown; charset=utf-8' });
res.end(content);
} catch (error) {
console.error('[help-content] Error:', error);
res.writeHead(500, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Failed to read document', message: (error as any).message }));
}
return true;
}
// API: Get CodexLens documentation metadata
if (pathname === '/api/help/codexlens') {
// Return CodexLens quick-start guide data

View File

@@ -1613,6 +1613,7 @@ var chineseResponseEnabled = false;
var chineseResponseLoading = false;
var codexChineseResponseEnabled = false;
var codexChineseResponseLoading = false;
var codexChineseNeedsMigration = false; // Track if Codex needs migration from old @ reference
var codexCliEnhancementEnabled = false;
var codexCliEnhancementLoading = false;
var windowsPlatformEnabled = false;
@@ -1626,12 +1627,14 @@ async function loadLanguageSettings() {
var data = await response.json();
chineseResponseEnabled = data.claudeEnabled || data.enabled || false;
codexChineseResponseEnabled = data.codexEnabled || false;
codexChineseNeedsMigration = data.codexNeedsMigration || false; // Track migration status
return data;
} catch (err) {
console.error('Failed to load language settings:', err);
chineseResponseEnabled = false;
codexChineseResponseEnabled = false;
return { claudeEnabled: false, codexEnabled: false, guidelinesExists: false };
codexChineseNeedsMigration = false;
return { claudeEnabled: false, codexEnabled: false, codexNeedsMigration: false, guidelinesExists: false };
}
}
@@ -1708,6 +1711,11 @@ async function toggleChineseResponse(enabled, target) {
var data = await response.json();
if (isCodex) {
codexChineseResponseEnabled = data.enabled;
// Handle migration status
if (data.migrated) {
codexChineseNeedsMigration = false;
showRefreshToast('Codex: 已从 @ 引用迁移到直接文本拼接', 'success');
}
} else {
chineseResponseEnabled = data.enabled;
}
@@ -1715,9 +1723,11 @@ async function toggleChineseResponse(enabled, target) {
// Update UI
renderLanguageSettingsSection();
// Show toast
// Show toast (skip if migration message already shown)
var toolName = isCodex ? 'Codex' : 'Claude';
showRefreshToast(toolName + ': ' + (enabled ? t('lang.enableSuccess') : t('lang.disableSuccess')), 'success');
if (!data.migrated) {
showRefreshToast(toolName + ': ' + (enabled ? t('lang.enableSuccess') : t('lang.disableSuccess')), 'success');
}
} catch (err) {
console.error('Failed to toggle Chinese response:', err);
// Error already shown in the !response.ok block
@@ -1921,7 +1931,9 @@ async function renderLanguageSettingsSection() {
(codexChineseResponseEnabled ? t('lang.enabled') : t('lang.disabled')) +
'</span>' +
'</div>' +
'<p class="cli-setting-desc">' + t('lang.chineseDescCodex') + '</p>' +
'<p class="cli-setting-desc">' + t('lang.chineseDescCodex') +
(codexChineseNeedsMigration ? '<br><span style="color: #f59e0b; font-size: 0.85em;">⚠️ 检测到旧格式(@引用),请关闭后重新启用以迁移到新格式</span>' : '') +
'</p>' +
'</div>' +
// Windows Platform
'<div class="cli-setting-item">' +

View File

@@ -262,6 +262,8 @@ function renderCommandsTab(category) {
// Initialize accordion handlers
initializeAccordions();
// Initialize command card click handlers
initializeCommandCardHandlers();
}
function renderCommandCard(cmd) {
@@ -271,8 +273,13 @@ function renderCommandCard(cmd) {
'Advanced': 'bg-error-light text-error'
}[cmd.difficulty] || 'bg-muted text-muted-foreground';
// Create safe JSON string for command data
var cmdJson = escapeHtml(JSON.stringify(cmd));
return `
<div class="bg-background border border-border rounded-lg p-4 hover:border-primary transition-colors">
<div class="command-card cursor-pointer bg-background border border-border rounded-lg p-4 hover:border-primary transition-all hover:shadow-md"
data-command='${cmdJson}'
title="Click to view details">
<div class="flex items-start justify-between mb-2">
<div class="flex-1">
<div class="flex items-center gap-2 mb-1">
@@ -281,6 +288,7 @@ function renderCommandCard(cmd) {
</div>
<p class="text-sm text-muted-foreground">${escapeHtml(cmd.description)}</p>
</div>
<i data-lucide="arrow-right" class="w-4 h-4 text-muted-foreground opacity-0 group-hover:opacity-100 transition-opacity ml-2"></i>
</div>
${cmd.arguments ? `
<div class="mt-2 text-xs">
@@ -854,3 +862,417 @@ function renderCodexLensQuickStart() {
container.innerHTML = html;
if (typeof lucide !== 'undefined') lucide.createIcons();
}
// ========== Command Detail Modal ==========
function showCommandDetailModal(cmd) {
var modal = document.createElement('div');
modal.className = 'fixed inset-0 bg-black bg-opacity-50 flex items-center justify-center z-50 p-4';
modal.id = 'commandDetailModal';
var difficultyColor = {
'Beginner': 'bg-success/10 text-success',
'Intermediate': 'bg-warning/10 text-warning',
'Advanced': 'bg-error/10 text-error'
}[cmd.difficulty] || 'bg-muted/10 text-muted-foreground';
var sourceLink = cmd.source ? cmd.source.replace(/(\.\.\/)+/g, '') : '';
var html = `
<div class="bg-card border border-border rounded-lg max-w-4xl w-full max-h-[90vh] overflow-hidden flex flex-col">
<!-- Header -->
<div class="sticky top-0 bg-card border-b border-border px-6 py-4 flex items-start justify-between">
<div class="flex-1">
<div class="flex items-center gap-2 mb-2">
<code class="text-lg font-mono text-primary font-bold">${escapeHtml(cmd.command)}</code>
<span class="text-xs px-3 py-1 rounded-full ${difficultyColor} font-medium">${escapeHtml(cmd.difficulty || 'Intermediate')}</span>
</div>
<p class="text-sm text-muted-foreground">${escapeHtml(cmd.category || 'general')}${cmd.subcategory ? ' / ' + escapeHtml(cmd.subcategory) : ''}</p>
</div>
<button class="close-modal text-muted-foreground hover:text-foreground transition-colors" title="Close">
<i data-lucide="x" class="w-6 h-6"></i>
</button>
</div>
<!-- Content - Tabs -->
<div class="border-b border-border flex">
<button class="detail-tab active px-4 py-3 text-sm font-medium transition-colors border-b-2 border-primary" data-tab="overview">
Overview
</button>
${cmd.source ? `
<button class="detail-tab px-4 py-3 text-sm font-medium transition-colors border-b-2 border-transparent text-muted-foreground hover:text-foreground" data-tab="document">
<i data-lucide="file-text" class="w-4 h-4 inline-block mr-1"></i>
Full Document
</button>
` : ''}
</div>
<!-- Tab Content -->
<div class="flex-1 overflow-y-auto">
<!-- Overview Tab -->
<div id="overview-tab" class="detail-tab-content p-6 space-y-6 active">
<!-- Description -->
<div>
<h3 class="text-sm font-semibold text-foreground mb-2">Description</h3>
<p class="text-sm text-muted-foreground leading-relaxed">${escapeHtml(cmd.description || 'No description available')}</p>
</div>
<!-- Usage Scenario -->
${cmd.usage_scenario ? `
<div>
<h3 class="text-sm font-semibold text-foreground mb-2">Use Case</h3>
<p class="text-sm text-muted-foreground">${escapeHtml(cmd.usage_scenario)}</p>
</div>
` : ''}
<!-- Arguments -->
${cmd.arguments ? `
<div>
<h3 class="text-sm font-semibold text-foreground mb-2">Arguments</h3>
<div class="bg-background rounded-lg p-3 border border-border">
<code class="text-xs font-mono text-foreground">${escapeHtml(cmd.arguments)}</code>
</div>
</div>
` : ''}
<!-- Flow Information -->
${cmd.flow ? `
<div>
<h3 class="text-sm font-semibold text-foreground mb-3">Workflow</h3>
<div class="space-y-2">
${cmd.flow.prerequisites ? `
<div class="text-xs">
<span class="text-muted-foreground">Prerequisites:</span>
<div class="mt-1 space-y-1">
${cmd.flow.prerequisites.map(p => `
<div class="inline-block px-2 py-1 bg-primary/10 text-primary rounded text-xs mr-2">
${escapeHtml(p)}
</div>
`).join('')}
</div>
</div>
` : ''}
${cmd.flow.next_steps ? `
<div class="text-xs">
<span class="text-muted-foreground">Next Steps:</span>
<div class="mt-1 space-y-1">
${cmd.flow.next_steps.map(n => `
<div class="inline-block px-2 py-1 bg-success/10 text-success rounded text-xs mr-2">
${escapeHtml(n)}
</div>
`).join('')}
</div>
</div>
` : ''}
${cmd.flow.alternatives ? `
<div class="text-xs">
<span class="text-muted-foreground">Alternatives:</span>
<div class="mt-1 space-y-1">
${cmd.flow.alternatives.map(a => `
<div class="inline-block px-2 py-1 bg-warning/10 text-warning rounded text-xs mr-2">
${escapeHtml(a)}
</div>
`).join('')}
</div>
</div>
` : ''}
</div>
</div>
` : ''}
<!-- Source -->
${sourceLink ? `
<div>
<h3 class="text-sm font-semibold text-foreground mb-2">Source File</h3>
<div class="text-xs text-muted-foreground font-mono break-all">${escapeHtml(sourceLink)}</div>
</div>
` : ''}
</div>
<!-- Document Tab -->
${cmd.source ? `
<div id="document-tab" class="detail-tab-content p-6">
<div class="bg-background border border-border rounded-lg p-4">
<div id="document-loader" class="flex items-center justify-center py-8">
<i data-lucide="loader-2" class="w-5 h-5 animate-spin text-primary mr-2"></i>
<span class="text-sm text-muted-foreground">Loading document...</span>
</div>
<div id="document-content" class="hidden prose prose-invert max-w-none text-sm">
<!-- Markdown content will be loaded here -->
</div>
<div id="document-error" class="hidden text-sm text-error">
Failed to load document
</div>
</div>
</div>
` : ''}
</div>
<!-- Footer -->
<div class="border-t border-border px-6 py-3 flex gap-2 justify-end bg-background rounded-b-lg">
<button class="close-modal px-4 py-2 text-sm font-medium text-foreground bg-muted hover:bg-muted/80 rounded-lg transition-colors">
Close
</button>
</div>
</div>
`;
modal.innerHTML = html;
document.body.appendChild(modal);
// Initialize tab switching
var tabButtons = modal.querySelectorAll('.detail-tab');
tabButtons.forEach(function(btn) {
btn.addEventListener('click', function() {
var tabName = this.getAttribute('data-tab');
// Update active tab button
tabButtons.forEach(function(b) {
b.classList.remove('active', 'border-primary', 'text-foreground');
b.classList.add('border-transparent', 'text-muted-foreground');
});
this.classList.add('active', 'border-primary', 'text-foreground');
this.classList.remove('border-transparent', 'text-muted-foreground');
// Show/hide tab content
var tabContents = modal.querySelectorAll('.detail-tab-content');
tabContents.forEach(function(content) {
content.classList.remove('active');
});
var activeTab = modal.querySelector('#' + tabName + '-tab');
if (activeTab) {
activeTab.classList.add('active');
}
// Load document content if needed
if (tabName === 'document' && cmd.source) {
loadCommandDocument(modal, cmd.source);
}
});
});
// Close handlers
var closeButtons = modal.querySelectorAll('.close-modal');
closeButtons.forEach(function(btn) {
btn.addEventListener('click', function() {
modal.remove();
});
});
// Close on background click
modal.addEventListener('click', function(e) {
if (e.target === modal) {
modal.remove();
}
});
// Close on Escape key
var closeOnEscape = function(e) {
if (e.key === 'Escape') {
modal.remove();
document.removeEventListener('keydown', closeOnEscape);
}
};
document.addEventListener('keydown', closeOnEscape);
if (typeof lucide !== 'undefined') lucide.createIcons();
}
// ========== Load Command Document ==========
function loadCommandDocument(modal, sourcePath) {
var contentDiv = modal.querySelector('#document-content');
var loaderDiv = modal.querySelector('#document-loader');
var errorDiv = modal.querySelector('#document-error');
// Check if already loaded
if (contentDiv && !contentDiv.classList.contains('hidden')) {
return;
}
// Start loading
if (loaderDiv) loaderDiv.classList.remove('hidden');
if (errorDiv) errorDiv.classList.add('hidden');
if (contentDiv) contentDiv.classList.add('hidden');
// Fetch document content
fetch('/api/help/command-content?source=' + encodeURIComponent(sourcePath))
.then(function(response) {
if (!response.ok) {
throw new Error('HTTP ' + response.status + ': ' + response.statusText);
}
return response.text();
})
.then(function(markdown) {
// Parse markdown to HTML
try {
var html = parseMarkdown(markdown);
if (!html) {
throw new Error('parseMarkdown returned empty result');
}
} catch (parseError) {
console.error('[Help] parseMarkdown failed:', parseError.message, parseError.stack);
throw parseError;
}
if (contentDiv) {
contentDiv.innerHTML = html;
contentDiv.classList.remove('hidden');
}
if (loaderDiv) loaderDiv.classList.add('hidden');
if (typeof lucide !== 'undefined') lucide.createIcons();
})
.catch(function(error) {
console.error('[Help] Failed to load document:', error.message || error);
if (contentDiv) contentDiv.classList.add('hidden');
if (loaderDiv) loaderDiv.classList.add('hidden');
if (errorDiv) {
errorDiv.textContent = 'Failed to load document: ' + (error.message || 'Unknown error');
errorDiv.classList.remove('hidden');
}
});
}
// ========== Markdown Parser (Simple) ==========
function parseMarkdown(markdown) {
// Remove frontmatter
var lines = markdown.split('\n');
var startIdx = 0;
if (lines[0] === '---') {
for (var i = 1; i < lines.length; i++) {
if (lines[i] === '---') {
startIdx = i + 1;
break;
}
}
}
var content = lines.slice(startIdx).join('\n').trim();
var html = '';
var currentList = null;
var inCodeBlock = false;
var codeBlockContent = '';
var codeBlockLang = '';
lines = content.split('\n');
for (var i = 0; i < lines.length; i++) {
var line = lines[i];
// Code blocks
if (line.startsWith('```')) {
if (!inCodeBlock) {
inCodeBlock = true;
codeBlockLang = line.substring(3).trim();
codeBlockContent = '';
} else {
inCodeBlock = false;
var langClass = codeBlockLang ? ' language-' + escapeHtml(codeBlockLang) : '';
html += '<pre class="bg-background border border-border rounded-lg p-4 overflow-x-auto my-3"><code class="text-xs font-mono text-foreground' + langClass + '">' +
escapeHtml(codeBlockContent).replace(/\n/g, '<br>') + '</code></pre>';
}
continue;
}
if (inCodeBlock) {
codeBlockContent += line + '\n';
continue;
}
// Headings
if (line.startsWith('# ')) {
html += '<h1 class="text-2xl font-bold text-foreground mt-6 mb-3">' + escapeHtml(line.substring(2)) + '</h1>';
continue;
}
if (line.startsWith('## ')) {
html += '<h2 class="text-xl font-bold text-foreground mt-5 mb-2">' + escapeHtml(line.substring(3)) + '</h2>';
continue;
}
if (line.startsWith('### ')) {
html += '<h3 class="text-lg font-semibold text-foreground mt-4 mb-2">' + escapeHtml(line.substring(4)) + '</h3>';
continue;
}
// Lists
if (line.match(/^[\s]*[-*+]\s/)) {
var listContent = line.replace(/^[\s]*[-*+]\s/, '');
if (currentList !== 'ul') {
if (currentList === 'ol') html += '</ol>';
html += '<ul class="list-disc list-inside text-sm text-muted-foreground space-y-1 my-2">';
currentList = 'ul';
}
html += '<li>' + escapeHtml(listContent) + '</li>';
continue;
}
if (line.match(/^[\s]*\d+\.\s/)) {
var listContent = line.replace(/^[\s]*\d+\.\s/, '');
if (currentList !== 'ol') {
if (currentList === 'ul') html += '</ul>';
html += '<ol class="list-decimal list-inside text-sm text-muted-foreground space-y-1 my-2">';
currentList = 'ol';
}
html += '<li>' + escapeHtml(listContent) + '</li>';
continue;
}
// Close list if we encounter non-list content
if (line.trim() && (currentList === 'ul' || currentList === 'ol')) {
html += currentList === 'ul' ? '</ul>' : '</ol>';
currentList = null;
}
// Paragraphs
if (line.trim()) {
// Convert inline formatting
var formatted = line
.replace(/\*\*([^*]+)\*\*/g, '<strong class="font-semibold text-foreground">$1</strong>')
.replace(/\*([^*]+)\*/g, '<em class="italic">$1</em>')
.replace(/`([^`]+)`/g, '<code class="bg-background px-1 rounded text-xs font-mono text-primary">$1</code>')
.replace(/\[([^\]]+)\]\(([^)]+)\)/g, '<a href="$2" class="text-primary hover:underline">$1</a>');
html += '<p class="text-sm text-muted-foreground leading-relaxed my-2">' + formatted + '</p>';
}
}
// Close any open lists
if (currentList === 'ul') html += '</ul>';
if (currentList === 'ol') html += '</ol>';
return html;
}
// ========== Command Card Click Handlers ==========
function initializeCommandCardHandlers() {
var cards = document.querySelectorAll('.command-card');
cards.forEach(function(card) {
card.addEventListener('click', function(e) {
e.preventDefault();
var cmdJson = this.getAttribute('data-command');
if (cmdJson) {
try {
var cmd = JSON.parse(unescapeHtml(cmdJson));
showCommandDetailModal(cmd);
} catch (err) {
console.error('Failed to parse command data:', err);
}
}
});
});
}
// Helper function to unescape HTML
function unescapeHtml(html) {
var map = {
'&amp;': '&',
'&lt;': '<',
'&gt;': '>',
'&quot;': '"',
'&#039;': "'"
};
return html.replace(/&(?:amp|lt|gt|quot|#039);/g, function(match) {
return map[match];
});
}

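`unescapeHtml` inverts the five entities produced by the `escapeHtml` helper (not shown in this diff; assumed here to be the standard five-entity escape). The pair must round-trip cleanly or the JSON stored in `data-command` breaks on click. A self-contained round-trip check under that assumption:

```javascript
// Assumed escape side: the standard five-entity, single-pass replace.
function escapeHtml(text) {
  return String(text).replace(/[&<>"']/g, function (ch) {
    return { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#039;' }[ch];
  });
}

// Copied from the helper above.
function unescapeHtml(html) {
  var map = { '&amp;': '&', '&lt;': '<', '&gt;': '>', '&quot;': '"', '&#039;': "'" };
  return html.replace(/&(?:amp|lt|gt|quot|#039);/g, function (m) { return map[m]; });
}

var cmd = { command: '/workflow:plan', description: 'Use <file> & "quotes"' };
var attr = escapeHtml(JSON.stringify(cmd));
console.log(JSON.parse(unescapeHtml(attr)).description); // Use <file> & "quotes"
```

Both sides are single-pass replaces, so already-escaped text is never double-processed and `JSON.parse` sees exactly the original string.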
View File

@@ -795,7 +795,7 @@ async function executeCliTool(
   const effectiveModel = model || getPrimaryModel(workingDir, tool);

   // Build command
-  const { command, args, useStdin } = buildCommand({
+  const { command, args, useStdin, outputFormat: autoDetectedFormat } = buildCommand({
     tool,
     prompt: finalPrompt,
     mode,
@@ -806,8 +806,11 @@ async function executeCliTool(
     reviewOptions: mode === 'review' ? { uncommitted, base, commit, title } : undefined
   });

+  // Use auto-detected format (from buildCommand) if available, otherwise use passed outputFormat
+  const finalOutputFormat = autoDetectedFormat || outputFormat;
+
   // Create output parser and IR storage
-  const parser = createOutputParser(outputFormat);
+  const parser = createOutputParser(finalOutputFormat);
   const allOutputUnits: CliOutputUnit[] = [];
   const startTime = Date.now();
@@ -820,7 +823,7 @@ async function executeCliTool(
     promptLength: finalPrompt.length,
     hasResume: !!resume,
     hasCustomId: !!customId,
-    outputFormat
+    outputFormat: finalOutputFormat
   });

   return new Promise((resolve, reject) => {

@@ -166,7 +166,7 @@ export function buildCommand(params: {
     commit?: string;
     title?: string;
   };
-}): { command: string; args: string[]; useStdin: boolean } {
+}): { command: string; args: string[]; useStdin: boolean; outputFormat?: 'text' | 'json-lines' } {
   const { tool, prompt, mode = 'analysis', model, dir, include, nativeResume, settingsFile, reviewOptions } = params;

   debugLog('BUILD_CMD', `Building command for tool: ${tool}`, {
@@ -254,6 +254,8 @@ export function buildCommand(params: {
       // codex review uses -c key=value for config override, not -m
       args.push('-c', `model=${model}`);
     }
+    // Skip git repo check by default for codex (allows non-git directories)
+    args.push('--skip-git-repo-check');
     // codex review uses positional prompt argument, not stdin
     useStdin = false;
     if (prompt) {
@@ -280,6 +282,8 @@ export function buildCommand(params: {
         args.push('--add-dir', addDir);
       }
     }
+    // Skip git repo check by default for codex (allows non-git directories)
+    args.push('--skip-git-repo-check');
     // Enable JSON output for structured parsing
     args.push('--json');
     // codex resume uses positional prompt argument, not stdin
@@ -302,6 +306,8 @@ export function buildCommand(params: {
         args.push('--add-dir', addDir);
       }
     }
+    // Skip git repo check by default for codex (allows non-git directories)
+    args.push('--skip-git-repo-check');
    // Enable JSON output for structured parsing
     args.push('--json');
     args.push('-');
@@ -381,5 +387,8 @@ export function buildCommand(params: {
     fullCommand: `${command} ${args.join(' ')}${useStdin ? ' (stdin)' : ''}`,
   });

-  return { command, args, useStdin };
+  // Auto-detect output format: Codex uses --json flag for JSONL output
+  const outputFormat = tool.toLowerCase() === 'codex' ? 'json-lines' : 'text';
+
+  return { command, args, useStdin, outputFormat };
 }
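The format auto-detection introduced in this diff reduces to a simple fallback between the two return sites; a standalone sketch (names mirror the diff, but this is not the full executor):

```javascript
// Sketch of the output-format resolution from the diff above:
// buildCommand reports 'json-lines' for codex (which always gets --json)
// and 'text' for other tools; the executor then prefers the auto-detected
// value over the caller-supplied outputFormat.
function detectOutputFormat(tool) {
  return tool.toLowerCase() === 'codex' ? 'json-lines' : 'text';
}

function resolveOutputFormat(autoDetectedFormat, passedFormat) {
  return autoDetectedFormat || passedFormat;
}

// resolveOutputFormat(detectOutputFormat('codex'), 'text') -> 'json-lines'
```

Since `detectOutputFormat` always returns a value here, the `|| passedFormat` fallback only matters when a `buildCommand` variant omits the field.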

codex_prompt.md (new file, 62 lines)

@@ -0,0 +1,62 @@
# Custom Prompts
<DocsTip>
Custom prompts are deprecated. Use [skills](https://developers.openai.com/codex/skills) for reusable
instructions that Codex can invoke explicitly or implicitly.
</DocsTip>
Custom prompts (deprecated) let you turn Markdown files into reusable prompts that you can invoke as slash commands in both the Codex CLI and the Codex IDE extension.
Custom prompts require explicit invocation and live in your local Codex home directory (for example, `~/.codex`), so they're not shared through your repository. If you want to share a prompt (or want Codex to implicitly invoke it), [use skills](https://developers.openai.com/codex/skills).
1. Create the prompts directory:
```bash
mkdir -p ~/.codex/prompts
```
2. Create `~/.codex/prompts/draftpr.md` with reusable guidance:
```markdown
---
description: Prep a branch, commit, and open a draft PR
argument-hint: [FILES=<paths>] [PR_TITLE="<title>"]
---
Create a branch named `dev/<feature_name>` for this work.
If files are specified, stage them first: $FILES.
Commit the staged changes with a clear message.
Open a draft PR on the same branch. Use $PR_TITLE when supplied; otherwise write a concise summary yourself.
```
3. Restart Codex so it loads the new prompt (restart your CLI session, and reload the IDE extension if you are using it).
Expected: Typing `/prompts:draftpr` in the slash command menu shows your custom command with the description from the front matter and hints that files and a PR title are optional.
## Add metadata and arguments
Codex reads prompt metadata and resolves placeholders the next time the session starts.
- **Description:** Shown under the command name in the popup. Set it in YAML front matter as `description:`.
- **Argument hint:** Document expected parameters with `argument-hint: KEY=<value>`.
- **Positional placeholders:** `$1` through `$9` expand from space-separated arguments you provide after the command. `$ARGUMENTS` includes them all.
- **Named placeholders:** Use uppercase names like `$FILE` or `$TICKET_ID` and supply values as `KEY=value`. Quote values with spaces (for example, `FOCUS="loading state"`).
- **Literal dollar signs:** Write `$$` to emit a single `$` in the expanded prompt.
After editing prompt files, restart Codex or open a new chat so the updates load. Codex ignores non-Markdown files in the prompts directory.
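The placeholder rules above can be sketched as follows — a minimal illustration of the documented behavior, not Codex's actual implementation:

```javascript
// Sketch of the documented placeholder rules (not Codex's real code):
// $1..$9 take positional args, $ARGUMENTS joins them all, uppercase names
// take KEY=value pairs, and $$ emits a literal dollar sign.
function expandPrompt(template, positional, named) {
  return template.replace(/\$(\$|ARGUMENTS|[1-9]|[A-Z_]+)/g, function (match, token) {
    if (token === '$') return '$';                                  // $$ -> literal $
    if (token === 'ARGUMENTS') return positional.join(' ');         // all positional args
    if (/^[1-9]$/.test(token)) return positional[Number(token) - 1] || '';
    return named[token] !== undefined ? named[token] : match;       // unresolved names left as-is
  });
}

// expandPrompt('Review $1 for $FOCUS ($$5 budget)', ['src/app.ts'], { FOCUS: 'errors' })
//   -> 'Review src/app.ts for errors ($5 budget)'
```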
## Invoke and manage custom commands
1. In Codex (CLI or IDE extension), type `/` to open the slash command menu.
2. Enter `prompts:` or the prompt name, for example `/prompts:draftpr`.
3. Supply required arguments:
```text
/prompts:draftpr FILES="src/pages/index.astro src/lib/api.ts" PR_TITLE="Add hero animation"
```
4. Press Enter to send the expanded instructions (skip either argument when you don't need it).
Expected: Codex expands the content of `draftpr.md`, replacing placeholders with the arguments you supplied, then sends the result as a message.
Manage prompts by editing or deleting files under `~/.codex/prompts/`. Codex scans only the top-level Markdown files in that folder, so place each custom prompt directly under `~/.codex/prompts/` rather than in subdirectories.

package.json

@@ -1,6 +1,6 @@
 {
   "name": "claude-code-workflow",
-  "version": "6.3.49",
+  "version": "6.3.52",
   "description": "JSON-driven multi-agent development framework with intelligent CLI orchestration (Gemini/Qwen/Codex), context-first architecture, and automated workflow execution",
   "type": "module",
   "main": "ccw/src/index.js",

validate-help.py (new file, 60 lines)

@@ -0,0 +1,60 @@
#!/usr/bin/env python3
import os
import json
import sys

# Read command.json
command_json_path = os.path.expandvars('D:\\Claude_dms3\\.claude\\skills\\ccw-help\\command.json')
try:
    with open(command_json_path, 'r', encoding='utf-8') as f:
        data = json.load(f)
except Exception as e:
    print(f"Error reading command.json: {e}")
    sys.exit(1)

base_dir = os.path.expandvars('D:\\Claude_dms3\\.claude\\skills\\ccw-help')
commands_base = os.path.expandvars('D:\\Claude_dms3\\.claude\\commands')

# Check commands
missing = []
existing = []

print("Checking command source files...")
print("=" * 70)

for cmd in data.get('commands', []):
    if cmd.get('source'):
        # Resolve path from ccw-help directory
        full_path = os.path.normpath(os.path.join(base_dir, cmd['source']))
        exists = os.path.isfile(full_path)
        if exists:
            existing.append((cmd['command'], full_path))
        else:
            missing.append((cmd['command'], cmd['source'], full_path))

# Print missing files
if missing:
    print(f"\n❌ MISSING SOURCE FILES ({len(missing)}):")
    print("-" * 70)
    for cmd, source, resolved in missing[:20]:  # Show first 20
        print(f"{cmd}")
        print(f"  Source: {source}")
        print(f"  Expected: {resolved}")
else:
    print(f"\n✅ All source files exist!")

print("\n" + "=" * 70)
print("SUMMARY:")
print(f"  Total commands: {len(data.get('commands', []))}")
print(f"  Source files exist: {len(existing)}")
print(f"  Source files missing: {len(missing)}")
print("=" * 70)

# List commands without source
no_source = [cmd['command'] for cmd in data.get('commands', []) if not cmd.get('source')]
if no_source:
    print(f"\n⚠️ Commands without 'source' field ({len(no_source)}):")
    for cmd_name in no_source[:10]:
        print(f"  - {cmd_name}")