Compare commits

...

273 Commits

Author SHA1 Message Date
catlog22
f64f619713 chore: bump version to 6.3.1
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 20:19:30 +08:00
catlog22
a742fa0f8a feat: Change the Code Index MCP provider button to a dropdown select, improving the UI 2025-12-25 20:15:15 +08:00
catlog22
6894c7e80b feat: Update Code Index MCP provider support; revise CLAUDE.md and related styles 2025-12-25 20:12:45 +08:00
catlog22
203100431b feat: Add Code Index MCP provider support; update related APIs and configuration 2025-12-25 19:58:42 +08:00
catlog22
e8b9bcae92 feat: Add ccw-litellm uninstall button and fix npm install path resolution
- Fix ccw-litellm path lookup for npm distribution by adding PACKAGE_ROOT
- Add uninstall button to API Settings page for ccw-litellm
- Add /api/litellm-api/ccw-litellm/uninstall endpoint

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 18:29:57 +08:00
catlog22
052351ab5b fix: Remove the no-longer-needed prompts.zip file 2025-12-25 18:17:49 +08:00
catlog22
9dd84e3416 fix: Update agent execution instructions and improve rendering layout in API settings 2025-12-25 18:13:28 +08:00
catlog22
211c25d969 fix: Use pip show for more reliable ccw-litellm detection
- Primary method: pip show ccw-litellm (most reliable)
- Fallback: Python import with simpler syntax
- Increased timeout for pip show to 10s

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 17:57:34 +08:00
catlog22
275684d319 fix: Load embedding pool config before rendering sidebar
Ensures the sidebar summary displays correctly on page load

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 17:55:05 +08:00
catlog22
0f8a47e8f6 fix: Add shell:true for Windows PATH resolution in ccw-litellm status check
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 17:50:00 +08:00
catlog22
303c840464 fix: Remove invalid i18n key from Embedding Pool UI
Removed 'apiSettings.configuration' heading that was showing raw key

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 17:43:29 +08:00
catlog22
b15008fbce feat: Enhance Embedding Pool UI with sidebar summary
- Add renderEmbeddingPoolSidebar() for config summary display
- Show status, target model, strategy, and provider stats
- Improve visual hierarchy with icon indicators
- Update sidebar rendering for embedding-pool tab

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 17:39:15 +08:00
catlog22
a8cf3e1ad6 feat: Refactor embedding pool sidebar rendering and always load semantic dependencies status 2025-12-25 17:36:11 +08:00
catlog22
0515ef6e8b refactor: Simplify CodexLens rotation UI, link to API Settings
- Simplify rotation section to show status only
- Add link to navigate to API Settings Embedding Pool
- Update loadRotationStatus to read from embedding-pool API
- Remove detailed modal in favor of API Settings config
- Add i18n translations for 'Configure in API Settings'

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 17:29:44 +08:00
catlog22
777d5df573 feat: Add aggregated endpoint for CodexLens dashboard initialization and improve loading performance 2025-12-25 17:22:42 +08:00
catlog22
c5f379ba01 fix: Force refresh ccw-litellm status on first page load
- Add isFirstApiSettingsRender flag to track first load
- Force refresh ccw-litellm status on first page load
- Add ?refresh=true query param support to backend API
- Frontend passes refresh param to bypass backend cache
- Subsequent tab switches still use cache for performance

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 17:18:23 +08:00
catlog22
145d38c9bd feat: Implement venv status caching with TTL and clear cache on install/uninstall 2025-12-25 17:13:07 +08:00
catlog22
eab957ce00 perf: Optimize API Settings page loading performance
- Add forceRefresh parameter to loadApiSettings() with caching
- Implement 60s TTL cache for ccwLitellmStatus check
- Tab switching now uses cached data (0 network requests)
- Clear cache after save/delete operations for data consistency
- Response time reduced from 100-500ms to <10ms on tab switch

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 17:11:58 +08:00
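A minimal Python sketch of the TTL-cache pattern this commit describes (the actual implementation is TypeScript in the dashboard; the names here are illustrative):

```python
import time

class TTLCache:
    """Cache a single fetched value for ttl seconds."""

    def __init__(self, ttl: float = 60.0):
        self.ttl = ttl
        self._value = None
        self._stamp = 0.0

    def get(self, fetch, force_refresh: bool = False):
        """Return the cached value, refetching when stale or forced."""
        now = time.time()
        if force_refresh or self._value is None or now - self._stamp > self.ttl:
            self._value = fetch()  # e.g. the ccw-litellm status check
            self._stamp = now
        return self._value

    def clear(self):
        """Invalidate after save/delete so the next read refetches."""
        self._value = None
```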
catlog22
b5fb077ad6 fix: Improve Embedding Pool UI styling
- Add dedicated CSS for embedding-pool-main-panel
- Style discovered providers list with proper cards
- Fix toggle switch visibility
- Add info-message component styling
- Update renderDiscoveredProviders() to use CSS classes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 16:57:33 +08:00
catlog22
ebcbb11cb2 feat: Enhance CodexLens search functionality with new parameters and result handling
- Added search limit, content length, and extra files input fields in the CodexLens manager UI.
- Updated API request parameters to include new fields: max_content_length and extra_files_count.
- Refactored smart-search.ts to support new parameters with default values.
- Implemented result splitting logic to return both full content and additional file paths.
- Updated CLI commands to remove worker limits and allow dynamic scaling based on endpoint count.
- Introduced EmbeddingPoolConfig for improved embedding management and auto-discovery of providers.
- Enhanced search engines to utilize new parameters for fuzzy and exact searches.
- Added support for embedding single texts in the LiteLLM embedder.
2025-12-25 16:16:44 +08:00
catlog22
a1413dd1b3 feat: Unified Embedding Pool with auto-discovery
Architecture refactoring for multi-provider rotation:

Backend:
- Add EmbeddingPoolConfig type with autoDiscover support
- Implement discoverProvidersForModel() for auto-aggregation
- Add GET/PUT /api/litellm-api/embedding-pool endpoints
- Add GET /api/litellm-api/embedding-pool/discover/:model preview
- Convert ccw-litellm status check to async with 5-min cache
- Maintain backward compatibility with legacy rotation config

Frontend:
- Add "Embedding Pool" tab in API Settings
- Auto-discover providers when target model selected
- Show provider/key count with include/exclude controls
- Increase sidebar width (280px → 320px)
- Add sync result feedback on save

Other:
- Remove worker count limits (was max=32)
- Add i18n translations (EN/CN)
- Update .gitignore for .mcp.json

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 16:06:49 +08:00
catlog22
4e6ee2db25 chore: stop tracking .mcp.json (already in .gitignore) 2025-12-25 16:05:18 +08:00
catlog22
8e744597d1 feat: Implement CodexLens multi-provider embedding rotation management
- Added functions to get and update CodexLens embedding rotation configuration.
- Introduced functionality to retrieve enabled embedding providers for rotation.
- Created endpoints for managing rotation configuration via API.
- Enhanced dashboard UI to support multi-provider rotation configuration.
- Updated internationalization strings for new rotation features.
- Adjusted CLI commands and embedding manager to support increased concurrency limits.
- Modified hybrid search weights for improved ranking behavior.
2025-12-25 14:13:27 +08:00
catlog22
dfa8b541b4 fix: Fix a Python indentation error in semantic dependency detection
A previous commit, 3c3ce55, mistakenly prepended a space to every line of the
checkSemanticStatus Python snippet, causing a Python IndentationError and making
the frontend always report "semantic search dependencies not installed".

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 13:21:51 +08:00
catlog22
1dc55f8811 feat(ui): Support custom API concurrency (1-32 workers)
- Add codexlens.concurrency and concurrencyHint translations (EN/ZH)
- Replace the worker dropdown with a numeric input accepting the 1-32 range
- Add a validateConcurrencyInput input-validation function
- Default to 4 workers and show a recommendation hint

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 13:11:51 +08:00
catlog22
501d9a05d4 fix: Fix Ollama connection errors caused by a ModelScope API routing bug
- Add a _sanitize_text() method to handle text starting with 'import'
- The ModelScope backend mistakenly routed such text to a local Ollama endpoint
- Prepending a space to the text bypasses the routing check without affecting embedding quality
- Strengthen retry logic and error handling in embedding_manager.py
- Call global model locking in commands.py after successful generation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-25 12:52:43 +08:00
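From the description, the workaround reduces to a guard like this (a sketch; the actual method lives in the ccw-litellm Python package):

```python
def _sanitize_text(text: str) -> str:
    """Work around a ModelScope routing bug: text beginning with
    'import' was misrouted to a local Ollama endpoint. A leading
    space defeats the routing check without affecting embedding
    quality."""
    if text.startswith("import"):
        return " " + text
    return text
```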
catlog22
229d51cd18 feat: Add global model locking to prevent mixing different models and improve embedding generation stability 2025-12-25 11:20:05 +08:00
catlog22
40e61b30d6 feat: Add multi-endpoint support and load balancing to enhance LiteLLM embedding management 2025-12-25 11:01:08 +08:00
catlog22
3c3ce55842 feat: Add support for the LiteLLM embedding backend with concurrent API calls 2025-12-24 22:20:13 +08:00
catlog22
e3e61bcae9 feat: Enhance LiteLLM integration and CLI management
- Added token estimation and batching functionality in LiteLLMEmbedder to handle large text inputs efficiently.
- Updated embed method to support max_tokens_per_batch parameter for better API call management.
- Introduced new API routes for managing custom CLI endpoints, including GET, POST, PUT, and DELETE methods.
- Enhanced CLI history component to support source directory context for native session content.
- Improved error handling and logging in various components for better debugging and user feedback.
- Added internationalization support for new API endpoint features in the i18n module.
- Updated CodexLens CLI commands to allow for concurrent API calls with a max_workers option.
- Enhanced embedding manager to track model information and handle embeddings generation more robustly.
- Added entry points for CLI commands in the package configuration.
2025-12-24 18:01:26 +08:00
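The token-estimation batching described in the first two bullets can be sketched as follows (the char/4 heuristic appears elsewhere in this history; the default budget is an assumption):

```python
def estimate_tokens(text: str) -> int:
    """Cheap token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def batch_by_tokens(texts, max_tokens_per_batch=8000):
    """Group texts so each batch stays under the API token budget."""
    batch, used = [], 0
    for text in texts:
        cost = estimate_tokens(text)
        if batch and used + cost > max_tokens_per_batch:
            yield batch
            batch, used = [], 0
        batch.append(text)
        used += cost
    if batch:
        yield batch
```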
catlog22
dfca4d60ee feat: Add LiteLLM embedding backend support and related configuration options 2025-12-24 16:41:04 +08:00
catlog22
e671b45948 feat: Enhance configuration management and embedding capabilities
- Added JSON-based settings management in Config class for embedding and LLM configurations.
- Introduced methods to save and load settings from a JSON file.
- Updated BaseEmbedder and its subclasses to include max_tokens property for better token management.
- Enhanced chunking strategy to support recursive splitting of large symbols with improved overlap handling.
- Implemented comprehensive tests for recursive splitting and chunking behavior.
- Added CLI tools configuration management for better integration with external tools.
- Introduced a new command for compacting session memory into structured text for recovery.
2025-12-24 16:32:27 +08:00
catlog22
b00113d212 feat: Enhance embedding management and model configuration
- Updated embedding_manager.py to include backend parameter in model configuration.
- Modified model_manager.py to utilize cache_name for ONNX models.
- Refactored hybrid_search.py to improve embedder initialization based on backend type.
- Added backend column to vector_store.py for better model configuration management.
- Implemented migration for existing database to include backend information.
- Enhanced API settings implementation with comprehensive provider and endpoint management.
- Introduced LiteLLM integration guide detailing configuration and usage.
- Added examples for LiteLLM usage in TypeScript.
2025-12-24 14:03:59 +08:00
catlog22
9b926d1a1e docs: Sync README_CN.md with English version
- Add Smithery badge
- Update project description to match English (JSON-driven multi-agent framework)
- Change installation from script to npm install (recommended)
- Minor text adjustments for consistency

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-24 14:03:45 +08:00
catlog22
98c9f1a830 fix: Add api-settings to server.ts MODULE_FILES array
server.ts has a duplicate MODULE_FILES/MODULE_CSS_FILES array separate
from dashboard-generator.ts. The api-settings files were missing from
this duplicate list, causing renderApiSettings to be undefined.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-23 21:14:27 +08:00
catlog22
46ac591fe8 Merge branch 'main' of https://github.com/catlog22/Claude-Code-Workflow 2025-12-23 20:46:01 +08:00
catlog22
bf66b095c7 feat: Add unified LiteLLM API management with dashboard UI and CLI integration
- Create ccw-litellm Python package with AbstractEmbedder and AbstractLLMClient interfaces
- Add BaseEmbedder abstraction and factory pattern to codex-lens for pluggable backends
- Implement API Settings dashboard page for provider credentials and custom endpoints
- Add REST API routes for CRUD operations on providers and endpoints
- Extend CLI with --model parameter for custom endpoint routing
- Integrate existing context-cache for @pattern file resolution
- Add provider model registry with predefined models per provider type
- Include i18n translations (en/zh) for all new UI elements

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-23 20:36:32 +08:00
catlog22
5228581324 feat: Add context_cache MCP tool with simplified CLI options
Add context_cache MCP tool for caching files by @patterns:
- pattern-parser.ts: Parse @expressions using glob
- context-cache-store.ts: In-memory cache with TTL/LRU
- context-cache.ts: MCP tool with pack/read/status/release/cleanup

Simplify CLI cache options:
- --cache now uses comma-separated format instead of JSON
- Items starting with @ are patterns, others are text content
- Add --inject-mode option (none/full/progressive)
- Default: codex=full, gemini/qwen=none

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-23 19:54:05 +08:00
catlog22
c9c704e671 Merge pull request #42 from rhyme227/fix/cross-platform-path-handling
fix(hooks): correct cross-platform path handling in getProjectSettingsPath
2025-12-23 18:49:35 +08:00
catlog22
16d4c7c646 feat: Add prompt structure for the write-mode protocol 2025-12-23 18:49:04 +08:00
catlog22
39056292b7 feat: Add CodexLens Manager to dashboard and enhance GPU management
- Introduced a new CodexLens Manager item in the dashboard for easier access.
- Implemented GPU management commands in the CLI, including listing available GPUs, selecting a specific GPU, and resetting to automatic detection.
- Enhanced the embedding generation process to utilize GPU resources more effectively, including batch size optimization for better performance.
- Updated the embedder to support device ID options for GPU selection, ensuring compatibility with DirectML and CUDA.
- Added detailed logging and error handling for GPU detection and selection processes.
- Updated package version to 6.2.9 and added comprehensive documentation for Codex Agent Execution Protocol.
2025-12-23 18:35:30 +08:00
rhyme
87ffd283ce fix(hooks): correct cross-platform path handling in getProjectSettingsPath
Remove incorrect path separator conversion that caused directory creation
issues on Linux/WSL platforms. The function was converting forward slashes
to backslashes, which are treated as literal filename characters on Unix
systems rather than path separators.

Changes:
- Remove manual path normalization in getProjectSettingsPath()
- Rely on Node.js path.join() for cross-platform compatibility
- Fix affects both hooks-routes.ts and mcp-routes.ts

Impact:
- Linux/WSL: Fixes incorrect directory creation
- Windows: No behavior change, maintains correct functionality

Fixes project-level hook settings being saved to wrong location when
using Dashboard frontend on Linux/WSL systems.
2025-12-23 17:58:33 +08:00
catlog22
8eb42816f1 Merge pull request #40 from rhyme227/fix/cache-manager-esm-compatibility
fix(core): replace require() with ESM imports in cache-manager
2025-12-23 16:36:31 +08:00
rhyme
ebdf64c0b9 fix(core): replace require() with ESM imports in cache-manager
Remove CommonJS require() calls that caused "require is not defined"
errors when scanning .workflow directories in ESM context.

Changes:
- Add unlinkSync and readdirSync to fs import statement
- Replace require('fs').unlinkSync() with direct unlinkSync() call
- Replace require('fs').readdirSync() with direct readdirSync() call

Fixes: Cannot scan directory .workflow/active: require is not defined

File: ccw/src/core/cache-manager.ts
2025-12-23 15:33:29 +08:00
catlog22
caab5f476e Merge pull request #39 from rhyme227/fix/codexlens-model-cache-detection
fix(codexlens): correct fastembed 0.7.4 cache path and download trigger
2025-12-23 15:23:13 +08:00
rhyme
1998f3ae8a fix(codexlens): correct fastembed 0.7.4 cache path and download trigger
- Update cache path to ~/.cache/huggingface (HuggingFace Hub default)
- Fix model path format: models--{org}--{model}
- Add .embed() call to trigger actual download in download_model()
- Ensure cross-platform compatibility (Linux/Windows)
2025-12-23 14:51:08 +08:00
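The corrected layout follows the HuggingFace Hub convention; roughly (exact nesting, such as a hub/ subdirectory, varies by huggingface_hub version):

```python
from pathlib import Path

def model_cache_path(model_name: str) -> Path:
    """Per this commit: HuggingFace Hub caches models under
    ~/.cache/huggingface in directories named models--{org}--{model},
    e.g. BAAI/bge-small-en-v1.5 -> models--BAAI--bge-small-en-v1.5."""
    org, _, model = model_name.partition("/")
    return Path.home() / ".cache" / "huggingface" / f"models--{org}--{model}"
```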
catlog22
5ff2a43b70 bump version to 6.2.9 2025-12-23 10:28:48 +08:00
catlog22
3cd842ca1a fix: ccw package.json removal - add root build script and fix cli.ts path resolution
- Fix cli.ts loadPackageInfo() to try root package.json first (../../package.json)
- Add build script and devDependencies to root package.json
- Remove ccw/package.json and ccw/package-lock.json (no longer needed)
- CodexLens: add config.json support for index_dir configuration

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-23 10:25:15 +08:00
catlog22
86cefa7bda bump version to 6.2.8 in package.json and package-lock.json 2025-12-23 09:49:55 +08:00
catlog22
fdac697f6e refactor: Remove the ccw/package.json file and update path references 2025-12-23 09:47:07 +08:00
catlog22
8203d690cb fix: CodexLens model detection, hybrid search stability, and JSON logging
- Fix model installation detection using fastembed ONNX cache names
- Add embeddings_config table for model metadata tracking
- Fix hybrid search segfault by using single-threaded GPU mode
- Suppress INFO logs in JSON mode to prevent error display
- Add model dropdown filtering to show only installed models

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-22 21:49:10 +08:00
catlog22
cf58dc0dd3 bump version to 6.2.6 in package.json 2025-12-22 20:17:38 +08:00
catlog22
6a69af3bf1 feat: Increase the embedding batch size to 256 to optimize performance and improve GPU utilization 2025-12-22 17:55:05 +08:00
catlog22
acdfbb4644 feat: Enhance CodexLens with GPU support and semantic status improvements
- Added accelerator and providers fields to SemanticStatus interface.
- Updated checkSemanticStatus function to retrieve ONNX providers and accelerator type.
- Introduced detectGpuSupport function to identify available GPU modes (CUDA, DirectML).
- Modified installSemantic function to support GPU acceleration modes and clean up ONNX Runtime installations.
- Updated package requirements in PKG-INFO for semantic-gpu and semantic-directml extras.
- Added new source files for GPU support and enrichment functionalities.
- Updated tests to cover new features and ensure comprehensive testing.
2025-12-22 17:42:26 +08:00
catlog22
72f24bf535 feat: Bump version to 6.2.4; add GPU acceleration support and related dependencies 2025-12-22 14:15:36 +08:00
catlog22
ba23244876 feat: Bump version to 6.2.2 and add the dist directory to the files list 2025-12-22 12:06:59 +08:00
catlog22
624f9f18b4 feat: Update the project name and version number for clearer version management 2025-12-22 10:29:32 +08:00
catlog22
17002345c9 feat: Update hook template checks to match commands by unique patterns; add a fallback pattern field to search metadata 2025-12-22 10:25:53 +08:00
catlog22
f3f2051c45 feat: Improve retrieval of project and global configuration; add Codex configuration support 2025-12-22 10:16:58 +08:00
catlog22
e60d793c8c fix: Fix SmartSearch ripgrep limit and FTS tokenizer issues
- Ripgrep mode: add a total result-count limit to avoid returning more than 2MB of data
  - --max-count only limits matches per file; the limit is now applied while collecting results
  - when the limit is reached, a warning is added to the metadata

- FTS tokenizer: add the dot (.) to tokenchars, fixing exact search for dotted identifiers such as PortRole.FLOW
  - update the tokenize configuration in dir_index.py and migration_004_dual_fts.py
  - requires a reindex to take effect

- Exact mode: add a fuzzy fallback that automatically retries fuzzy search when exact search returns no results
  - fallback runs are tagged fallback: 'fuzzy' in the metadata

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-22 09:50:29 +08:00
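The tokenizer half of this fix is worth seeing concretely: FTS5's unicode61 tokenizer treats '.' as a separator by default, so PortRole.FLOW indexes as two tokens. A self-contained sketch of the adjusted configuration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Add '.' to tokenchars so dotted identifiers index as single tokens.
conn.execute("""
    CREATE VIRTUAL TABLE code_fts
    USING fts5(content, tokenize = "unicode61 tokenchars '.'")
""")
conn.execute("INSERT INTO code_fts VALUES ('routes via PortRole.FLOW here')")
# The quoted phrase now matches the dotted identifier exactly.
rows = conn.execute(
    "SELECT content FROM code_fts WHERE code_fts MATCH ?",
    ('"PortRole.FLOW"',),
).fetchall()
print(rows)
```

As the commit notes, changing tokenchars alters how content is indexed, so existing indexes must be rebuilt before the fix takes effect.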
catlog22
7ecc64614a feat: include codex-lens Python package in npm distribution
- Add codex-lens/src/codexlens/ to package.json files
- Add codex-lens/pyproject.toml for pip install
- Update .npmignore to exclude Python cache and dev files
- Enables CodexLens installation via ccw view from the npm package

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-22 08:41:55 +08:00
catlog22
0311237db2 chore: release v6.2.0 with 122 commits
- Updated CHANGELOG.md with comprehensive 6.2.0 release notes
- Bump version to 6.2.0 in package.json
- 62 new features, 17 bug fixes, 11 refactors, 6 performance optimizations

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-21 23:59:29 +08:00
catlog22
11d8187258 feat: Add APIs to cancel indexing and check index status, improving the CodexLens user experience 2025-12-21 23:52:46 +08:00
catlog22
fc4a9af0cb feat: Introduce streaming generators to reduce memory usage and improve embedding generation 2025-12-21 23:47:29 +08:00
catlog22
fa64e11a77 refactor: Optimize embedding generation; tune batch sizes and memory management strategy 2025-12-21 23:37:34 +08:00
catlog22
210f0f1012 feat: Add hook commands to simplify the Claude Code hook interface, with session context loading and notification support 2025-12-21 23:28:19 +08:00
catlog22
6d3f10d1d7 feat: Add line-based pagination to file reading; improve multi-word query matching in smart search 2025-12-21 21:45:04 +08:00
catlog22
09483c9f07 feat: Add a smart code cleanup command with mainline detection and stale artifact discovery 2025-12-21 21:10:55 +08:00
catlog22
2871950ab8 fix: Fix vector index progress reporting completing too early
Problem: progress jumped to 100% once the FTS index finished, while embedding generation was still running in the background

Fix:
- codex-lens.ts: change the "Indexed X files" stage from complete to fts_complete (60%)
- codex-lens.ts: parse the embedding-batch and "Finalizing index" stages
- embedding_manager.py: use bulk_insert() mode to defer ANN index construction
- embedding_manager.py: add a "Finalizing index" progress callback

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-21 20:55:45 +08:00
catlog22
5849f751bc fix: Fix embedding generation memory leak and optimize performance
- HNSW index: reduce preallocation from 1M to 50K vectors; add dynamic resizing and controlled saves
- Embedder: add embed_to_numpy() to avoid .tolist() conversion; strengthen cache cleanup
- embedding_manager: rebuild the embedder instance every 10 batches and call gc.collect() explicitly
- VectorStore: add a bulk_insert() context manager supporting numpy batch writes
- Chunker: add a skip_token_count lightweight mode using a char/4 estimate (~9x speedup)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-21 19:15:47 +08:00
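The bulk_insert() context manager introduced here also underpins the progress fix above; a rough Python sketch of its shape (illustrative, not the actual VectorStore API):

```python
from contextlib import contextmanager

class VectorStore:
    def __init__(self):
        self._pending = []      # numpy batches waiting to be written
        self._deferred = False

    @contextmanager
    def bulk_insert(self):
        """Buffer batch writes and build the ANN index once on exit,
        instead of rebuilding it after every insert."""
        self._deferred = True
        try:
            yield self
        finally:
            self._deferred = False
            self._rebuild_ann()  # the "Finalizing index" step

    def add_batch(self, vectors):
        self._pending.append(vectors)
        if not self._deferred:
            self._rebuild_ann()

    def _rebuild_ann(self):
        pass  # flush self._pending to storage and (re)build the index
```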
catlog22
45f92fe066 feat: Implement centralized path validation for MCP tools, improving security and configurability
- New path-validator.ts: a centralized path validator modeled on the MCP filesystem server
  - Supports CCW_PROJECT_ROOT and CCW_ALLOWED_DIRS environment variables
  - Multi-layer validation: absolute path resolution → sandbox check → symlink verification
  - Backward compatible: falls back to process.cwd() when the environment variables are unset

- Update all MCP tools to use centralized path validation:
  - write-file.ts: use validatePath()
  - edit-file.ts: use validatePath({ mustExist: true })
  - read-file.ts: use validatePath() + getProjectRoot()
  - smart-search.ts: use getProjectRoot()
  - core-memory.ts: use getProjectRoot()

- MCP server logs the project root and allowed directories on startup

- MCP manager UI enhancements:
  - Add path settings UI to the CCW Tools install card
  - Support CCW_PROJECT_ROOT and CCW_ALLOWED_DIRS configuration
  - Add a "Use current project" shortcut button
  - Support both Claude and Codex modes
  - Add English/Chinese i18n translations

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-21 18:14:06 +08:00
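A condensed Python sketch of the layered validation described above (the real validator is path-validator.ts; the environment variable names follow the commit, everything else is illustrative):

```python
import os
from pathlib import Path

def allowed_roots():
    roots = [os.environ.get("CCW_PROJECT_ROOT") or os.getcwd()]
    extra = os.environ.get("CCW_ALLOWED_DIRS", "")
    roots += [d for d in extra.split(os.pathsep) if d]  # delimiter assumed
    return [Path(r).resolve() for r in roots]

def validate_path(candidate: str, must_exist: bool = False) -> Path:
    """Absolute-path resolution -> sandbox check; resolve() also
    follows symlinks, so a link escaping the sandbox is rejected."""
    resolved = Path(candidate).resolve()
    if must_exist and not resolved.exists():
        raise FileNotFoundError(candidate)
    for root in allowed_roots():
        if resolved == root or root in resolved.parents:
            return resolved
    raise PermissionError(f"{candidate} escapes the allowed directories")
```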
catlog22
f492f4839a refactor: Remove overly broad exception handling from the CLI
- Remove all 16 except Exception blocks
- Catch only specific exceptions (StorageError, ConfigError, SearchError, etc.)
- Let unknown exceptions propagate naturally to ease debugging
- Keep optional exception handling for the embedding feature

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-21 17:19:54 +08:00
catlog22
fa81793bea refactor: Improve exception handling, replace property with cached_property for readability, and add a RelationshipType enum to normalize relationship types 2025-12-21 17:01:49 +08:00
catlog22
c12ef3e772 feat: Optimize the core_memory MCP tool with cross-project export and compact list output
- Add findMemoryAcrossProjects to search memories across all projects by CMEM-xxx ID
- export: automatically search all project databases when the memory is not found in the current project
- list: return a compact format with previews truncated to 100 characters, reducing output size

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-21 16:39:45 +08:00
catlog22
6eebdb8898 fix: Fix additional memory leaks
1. hybrid_search.py: fix a SQLite connection leak in the _search_vector method
   - wrap the database connection in a with statement
   - add exception handling to ensure the connection is closed

2. symbol_extractor.py: add context manager support
   - implement __enter__ and __exit__
   - support automatic resource management via with statements

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-21 16:39:38 +08:00
catlog22
3e9a309079 refactor: Remove graph indexing, fix memory leaks, optimize embedding generation
Main changes:

1. Remove graph indexing
   - delete graph_analyzer.py and related migration files
   - remove the CLI graph command and --enrich flag
   - clean up graph query methods in chain_search.py (370 lines)
   - delete related test files

2. Fix embedding generation memory issues
   - refactor generate_embeddings.py to use streaming batches
   - switch to embedding_manager's memory-safe implementation
   - shrink the file from 548 to 259 lines (52.7% reduction)

3. Fix memory leaks
   - chain_search.py: quick_search manages ChainSearchEngine via a with statement
   - embedding_manager.py: manage VectorStore via a with statement
   - vector_store.py: add a brute-force search memory warning

4. Code cleanup
   - remove the token_count and symbol_type fields from the Symbol model
   - clean up related test cases

Tests: 760 passed, 7 skipped

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-21 16:22:03 +08:00
catlog22
15d5890861 feat: Update stopCommand to ensure the process exits cleanly; serialize nested objects in MCP routes; add a CSS class for clamping text lines 2025-12-21 12:47:25 +08:00
catlog22
89b3475508 feat: Update CodexLens route handling to keep total sizes consistent and report the full index directory size 2025-12-21 11:02:07 +08:00
catlog22
6e301538ed feat: Adjust navigation item font sizes for readability and visual consistency 2025-12-21 10:53:25 +08:00
catlog22
c3a31f2c5d feat: Improve CLI notification logic so the process exits cleanly after notifying; enhance sidebar navigation hierarchy visuals 2025-12-21 10:37:29 +08:00
catlog22
559b1e02a7 feat: Add a timeout to CLI notifications; improve session tracking in the execution tool 2025-12-21 10:02:36 +08:00
catlog22
9e4412c7a8 Add workflow execution prompts for task management
- Introduced `execute.md` for sequential task execution from session folder with detailed execution rules, task tracking, and error handling.
- Added `lite-execute.md` for lightweight task execution from a JSON plan file, emphasizing serial execution and dependency management.
2025-12-21 09:52:30 +08:00
catlog22
6dab38172f feat: Enhance CSS layout for component flexibility and responsive design; update the debug and lite-plan workflow docs 2025-12-21 09:36:56 +08:00
catlog22
f1ee46e1ac feat: Improve the file manager loading experience; load freshness data asynchronously with a loading indicator 2025-12-20 23:46:11 +08:00
catlog22
775928456d feat: Enhance session management with session state tracking and process disclosure; polish the hooks management UI 2025-12-20 23:14:07 +08:00
catlog22
fd4a15c84e fix: improve chunking logic in Chunker class and enhance smart search tool with comprehensive features
- Updated the Chunker class to adjust the window movement logic, ensuring proper handling of overlap lines.
- Introduced a new smart search tool with features including intent classification, CodexLens integration, multi-backend search routing, and index status checking.
- Implemented various search modes (auto, hybrid, exact, ripgrep, priority) with detailed metadata and error handling.
- Added support for progress tracking during index initialization and enhanced output transformation based on user-defined modes.
- Included comprehensive documentation for usage and parameters in the smart search tool.
2025-12-20 21:44:15 +08:00
catlog22
be725ce21f chore: Remove unused reindex and test scripts 2025-12-20 17:34:26 +08:00
catlog22
fa31552cc1 fix: Use manifest-based cleanup for clean install
- Clean only files recorded in installation manifest
- Backup only manifest-recorded files instead of entire directories
- Skip cleanup when no manifest exists (first install or manual)
- Preserve user settings (settings.json, settings.local.json)
- Remove unused directory-based cleanup functions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-20 17:12:53 +08:00
catlog22
a3ccf5baed fix: Refactor installation process for improved cleanup and backup handling 2025-12-20 16:52:15 +08:00
catlog22
8c6225b749 docs: Add dashboard operation guides
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-20 16:16:14 +08:00
catlog22
89e77c0089 docs: Update documentation and CLI command improvements
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-20 16:15:35 +08:00
catlog22
b27d8a9570 feat: Add CLAUDE.md freshness tracking and update reminders
- Add SQLite table and CRUD methods for tracking update history
- Create freshness calculation service based on git file changes
- Add API endpoints for freshness data, marking updates, and history
- Display freshness badges in file tree (green/yellow/red indicators)
- Show freshness gauge and details in metadata panel
- Auto-mark files as updated after CLI sync
- Add English and Chinese i18n translations

Freshness algorithm: 100 - min((changedFilesCount / 20) * 100, 100)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-20 16:14:46 +08:00
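The freshness formula above is simple enough to state directly; in Python:

```python
def freshness_score(changed_files_count: int) -> float:
    """100 = fresh, 0 = stale; fully stale at >= 20 changed files."""
    return 100 - min((changed_files_count / 20) * 100, 100)

assert freshness_score(0) == 100   # no changes since last update
assert freshness_score(10) == 50   # halfway to stale
assert freshness_score(25) == 0    # clamped at fully stale
```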
catlog22
4a3ff82200 fix: Align semantic status check with CodexLens checkSemanticStatus
- Use checkSemanticStatus() from codex-lens.ts instead of isEmbedderAvailable()
- Returns accurate detection of fastembed installation status
- Fix install guide link to navigate to CLI Manager and open CodexLens config
- Update i18n: "Install Guide" -> "Go to Settings" (EN/ZH)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-20 13:51:50 +08:00
catlog22
bfbab44756 docs: Improve MCP tool descriptions for clarity and completeness
- smart_search: Condense verbose description from 40 to 12 lines while preserving key usage examples
- write_file: Add missing createDirectories parameter documentation
- edit_file: Clarify update/line mode naming and add batch edit examples

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-20 13:46:24 +08:00
catlog22
4458af83d8 feat: Upgrade to version 6.2.0 with major enhancements
- Updated COMMAND_SPEC.md to reflect new version and features including native CodexLens and CLI refactor.
- Revised GETTING_STARTED.md and GETTING_STARTED_CN.md for improved onboarding experience with new features.
- Enhanced INSTALL_CN.md to highlight the new CodexLens and Dashboard capabilities.
- Updated README.md and README_CN.md to showcase version 6.2.0 features and breaking changes.
- Introduced memory embedder scripts with comprehensive documentation and quick reference.
- Added test suite for memory embedder functionality to ensure reliability and correctness.
- Implemented TypeScript integration examples for memory embedder usage.
2025-12-20 13:16:09 +08:00
catlog22
6b62b5b5a9 feat: Add embedding status hint in clustering UI
- Add embedding status check API endpoints (embed-status, embed)
- Display embedding hint in cluster list view:
  - Warning when vector model not installed
  - Progress indicator when embeddings pending
  - Generate button for quick embedding
- Add i18n translations (EN/ZH)
- Add CSS styles for embedding hint components

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-20 13:15:25 +08:00
catlog22
31cc060837 feat: Add vector embeddings for core memory semantic search
- Add memory_chunks table for storing chunked content with embeddings
- Create Python embedder script (memory_embedder.py) using CodexLens fastembed
- Add TypeScript bridge (memory-embedder-bridge.ts) for Python interop
- Implement content chunking with paragraph/sentence-aware splitting
- Add vectorSimilarity dimension to clustering (weight 0.3)
- New CLI commands: ccw memory embed, search, embed-status
- Extend core-memory MCP tool with embed/search/embed_status operations

Clustering improvement: max relevance 0.388 → 0.809 (+109%)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-20 13:09:43 +08:00
catlog22
ea284d739a feat: Add cluster management commands for deletion, merging, and deduplication 2025-12-20 12:23:08 +08:00
catlog22
ab06ed0083 feat: Enhance session clustering with new CLI options and adjust clustering threshold 2025-12-20 11:41:28 +08:00
catlog22
4de4db3c69 feat: Implement executor assignment and clustering optimizations for session management 2025-12-20 11:29:16 +08:00
catlog22
e1cac5dd50 Refactor search modes and optimize embedding generation
- Updated the dashboard template to hide the Code Graph Explorer feature.
- Enhanced the `executeCodexLens` function to use `exec` for better cross-platform compatibility and improved command execution.
- Changed the default `maxResults` and `limit` parameters in the smart search tool to 10 for better performance.
- Introduced a new `priority` search mode in the smart search tool, replacing the previous `parallel` mode, which now follows a fallback strategy: hybrid -> exact -> ripgrep.
- Optimized the embedding generation process in the embedding manager by batching operations and using a cached embedder instance to reduce model loading overhead.
- Implemented a thread-safe singleton pattern for the embedder to improve performance across multiple searches.
2025-12-20 11:08:34 +08:00
catlog22
7adde91e9f feat: Add search result grouping by similarity score
Add functionality to group search results with similar content and scores
into a single representative result with additional locations.

Changes:
- Add AdditionalLocation entity model for storing grouped result locations
- Add additional_locations field to SearchResult for backward compatibility
- Implement group_similar_results() function in ranking.py with:
  - Content-based grouping (by excerpt or content field)
  - Score-based sub-grouping with configurable threshold
  - Metadata preservation with grouped_count tracking
- Add group_results and grouping_threshold options to SearchOptions
- Integrate grouping into ChainSearchEngine.search() after RRF fusion

Test coverage:
- 36 multi-level tests covering unit, boundary, integration, and performance
- Real-world scenario tests for RRF scores and duplicate code detection

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-19 16:33:44 +08:00
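A condensed sketch of the grouping pass (field names like additional_locations and grouped_count follow the commit; the data shapes and default threshold are assumptions):

```python
from collections import defaultdict

def group_similar_results(results, grouping_threshold=0.05):
    """Group results that share content, then sub-group by score
    proximity; the top hit becomes the representative and the rest
    are folded into its additional_locations."""
    by_content = defaultdict(list)
    for r in results:
        by_content[r.get("excerpt") or r.get("content")].append(r)

    grouped = []
    for bucket in by_content.values():
        bucket.sort(key=lambda r: r["score"], reverse=True)
        rep = None
        for r in bucket:
            if rep is not None and rep["score"] - r["score"] <= grouping_threshold:
                rep.setdefault("additional_locations", []).append(
                    {"path": r["path"], "line": r.get("line")})
                rep["grouped_count"] = rep.get("grouped_count", 1) + 1
            else:
                rep = r
                grouped.append(rep)
    return grouped
```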
catlog22
3428642d04 feat: Optimize FTS search with batch symbol fetching and improved excerpt generation 2025-12-19 15:35:31 +08:00
catlog22
2f0cce0089 feat: Enhance CodexLens indexing and search capabilities with new CLI options and improved error handling 2025-12-19 15:10:37 +08:00
catlog22
c7ced2bfbb feat(ccw): Add count-based memory update strategy and fix MCP card click
- Add count-based update strategy for memory hooks (threshold-based triggering)
- Add threshold config field (default: 10, range: 3-50)
- Add i18n translations for count-based strategy (EN/ZH)
- Fix MCP card click not responding due to HTML entity encoding in JSON
- Add unescapeHtml() utility function for decoding HTML entities
- Move stopPropagation from container div to individual buttons
- Change CCW MCP install to use cmd /c npx -y format for Windows

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-19 11:41:21 +08:00
catlog22
69049e3f45 feat: Return complete method blocks in FTS search results
- Add helper methods to locate match lines and find containing symbols
- Modify search_fts, search_fts_exact, search_fts_fuzzy to return complete
  code blocks (functions/methods/classes) instead of short snippets
- Join with files table to get full content and file_id
- Query symbols table to find the smallest symbol containing the match
- Fall back to context lines when no symbol contains the match
- Add return_full_content and context_lines parameters for flexibility
- Include start_line, end_line, symbol_name, symbol_kind in SearchResult

This improves search result quality by returning semantically meaningful
code blocks rather than arbitrary 20-byte snippets.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-19 11:38:39 +08:00
catlog22
e17e9a6473 feat: Enhance memory compact output format for better session recovery
- Add Session ID and Project Root fields for workflow tracking
- Split Active Files into Working Files (modified) and Reference Files (read-only)
- Use absolute paths for all file references
- Support multiple plan sources: workflow/todo/user-stated/inferred
- Preserve complete execution plan verbatim without summarization
- Add path resolution rules and reference file categories
- Update quality checklist with new validation criteria

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-19 11:00:07 +08:00
catlog22
5e91ba6c60 Implement ANN index using HNSW algorithm and update related tests
- Added ANNIndex class for approximate nearest neighbor search using HNSW.
- Integrated ANN index with VectorStore for enhanced search capabilities.
- Updated test suite for ANN index, including tests for adding, searching, saving, and loading vectors.
- Modified existing tests to accommodate changes in search performance expectations.
- Improved error handling for file operations in tests to ensure compatibility with Windows file locks.
- Adjusted hybrid search performance assertions for increased stability in CI environments.
2025-12-19 10:35:29 +08:00
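For reference, a minimal end-to-end HNSW example using the hnswlib package (the commit does not name the underlying library, so the package choice is an assumption):

```python
import hnswlib
import numpy as np

dim, n = 384, 1000
vectors = np.random.rand(n, dim).astype(np.float32)

# Build the approximate nearest neighbor index.
index = hnswlib.Index(space="cosine", dim=dim)
index.init_index(max_elements=n, ef_construction=200, M=16)
index.add_items(vectors, np.arange(n))

index.set_ef(50)  # higher ef = better recall, slower queries
labels, distances = index.knn_query(vectors[:1], k=5)
print(labels, distances)
```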
catlog22
9f6e6852da feat: Add core memory clustering visualization and hooks configuration
- Implemented core memory clustering visualization in core-memory-clusters.js
- Added functions for loading, rendering, and managing clusters and their members
- Created example hooks configuration in hooks-config-example.json for session management
- Developed test script for hooks integration in test-hooks.js
- Included error handling and notifications for user interactions
2025-12-18 23:06:58 +08:00
catlog22
68f9de0c69 refactor: Replace knowledge graph with session clustering system
Remove legacy knowledge graph and evolution tracking features,
introduce new session clustering model for Core Memory.

Changes:
- Remove knowledge_graph, knowledge_graph_edges, evolution_history tables
- Add session_clusters, cluster_members, cluster_relations tables
- Add session_metadata_cache for metadata caching
- Add new interfaces: SessionCluster, ClusterMember, ClusterRelation, SessionMetadataCache
- Add CRUD methods for cluster management
- Add session metadata upsert and search methods
- Remove extractKnowledgeGraph, getKnowledgeGraph, trackEvolution methods
- Remove API routes for /knowledge-graph, /evolution, /graph
- Add database migration to clean up old tables
- Update generateClusterId() for CLST-* ID format

Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-18 19:21:52 +08:00
catlog22
17af615fe2 Add help view and core memory styles
- Introduced styles for the help view including tab transitions, accordion animations, search highlighting, and responsive design.
- Implemented core memory styles with modal base styles, memory card designs, and knowledge graph visualization.
- Enhanced dark mode support across various components.
- Added loading states and empty state designs for better user experience.
2025-12-18 18:29:45 +08:00
catlog22
4577be71ce feat: Add notification system for core memory view with animations 2025-12-18 15:28:14 +08:00
catlog22
0311d63b7d feat: Enhance graph exploration with file and module filtering options 2025-12-18 14:58:20 +08:00
catlog22
440314c16d feat: Enhance CLI resume functionality with usage guidance and multi-session merge
- Add resume usage timing guidance (multi-round planning, multi-model collaboration)
- Document multi-session merge capability with --resume <id1>,<id2> syntax
- Improve CLI prompt validation to allow resume without explicit prompt
- Streamline documentation by removing redundant examples and model details

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-18 14:35:01 +08:00
catlog22
8dd4a513c8 Refactor CLI command usage from ccw cli exec to ccw cli -p for improved prompt handling
- Updated command patterns across documentation and templates to reflect the new CLI syntax.
- Enhanced CLI tool implementation to support reading prompts from files and multi-line inputs.
- Modified core components and views to ensure compatibility with the new command structure.
- Adjusted help messages and internationalization strings to align with the updated command format.
- Improved error handling and user notifications in the CLI execution flow.
2025-12-18 14:12:45 +08:00
catlog22
e096fc98e2 feat: Implement core memory management with knowledge graph and evolution tracking
- Added core-memory.js and core-memory-graph.js for managing core memory views and visualizations.
- Introduced functions for viewing knowledge graphs and evolution history of memories.
- Implemented modal dialogs for creating, editing, and viewing memory details.
- Developed core-memory.ts for backend operations including list, import, export, and summary generation.
- Integrated Zod for parameter validation in core memory operations.
- Enhanced UI with dynamic rendering of memory cards and detailed views.
2025-12-18 10:07:29 +08:00
catlog22
4329bd8e80 fix: Add run_in_background=false to all agent Task invocations
Ensure all agent executions wait for results before proceeding.
Modified 20 workflow command files with 32 Task call updates.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-18 10:00:22 +08:00
catlog22
ae07df612d feat: Increase indexing timeout to 30 minutes for large codebases
- /api/codexlens/init: 5min → 30min
- smart-search init action: 5min → 30min
- executeInitWithProgress: 5min → 30min
- executeCodexLens default: 1min → 5min

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-18 09:48:55 +08:00
catlog22
d5d6f1fbbe feat: Replace modal with bottom floating progress bar for indexing
- Fixed bottom progress bar appears during CodexLens index init
- Shows spinner, status text, percentage, and progress bar
- Green success state with checkmark on completion
- Red error state with alert icon on failure
- Auto-dismisses after 3 seconds on success
- Added console logging for debugging WebSocket events
- Slide-down animation when closing

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-18 09:33:19 +08:00
catlog22
b9d068d6d4 feat: Add real-time progress output for MCP init action
- MCP server outputs progress to stderr during smart_search init
- Progress format: [Progress] {percent}% - {message}
- Does not interfere with JSON-RPC protocol (stdout)
- Added executeInitWithProgress for external progress callback
- Added executeToolWithProgress to tools/index.ts

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-18 00:05:32 +08:00
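The separation that makes this safe is simply which stream each kind of output uses; in Python terms:

```python
import json
import sys

def report_progress(percent: int, message: str) -> None:
    """Progress goes to stderr; stdout stays reserved for JSON-RPC."""
    print(f"[Progress] {percent}% - {message}", file=sys.stderr)

report_progress(60, "FTS index complete")
# The protocol stream on stdout is untouched by the progress lines.
sys.stdout.write(json.dumps({"jsonrpc": "2.0", "id": 1, "result": "ok"}) + "\n")
```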
catlog22
48ac43d628 fix: Add cleanup of obsolete files during ccw install reinstallation
- Add getFileReferenceCounts() to track file references across manifests
- Add cleanupOldFiles() to remove files not in new installation
- Protect shared files referenced by other installations (Global/Path)
- Display cleanup statistics in installation summary

Previously, reinstalling would only overwrite existing files but leave
obsolete files that were removed from source. Now properly cleans up
while protecting files shared between Global and Path installations.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-17 23:47:49 +08:00
catlog22
79da2c8c17 refactor: Remove redundant schema details, let planning agent read schema independently
- Remove detailed plan.json structure from prompt (agent reads schema file)
- Clarify execution steps: agent reads schema first, then explores and generates
- Remove ambiguous "Execute CLI planning using Gemini" step

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-17 23:36:41 +08:00
catlog22
6aac7bb8e3 refactor: Replace exact deduplication with intelligent intent-based merging in lite-plan
Simplify clarification question deduplication to use Claude's native intelligence
for identifying similar intents and merging questions across exploration angles.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-17 23:17:27 +08:00
catlog22
51a61bef31 Add parallel search mode and index progress bar
Features:
- CCW smart_search: Add 'parallel' mode that runs hybrid + exact + ripgrep
  simultaneously with RRF (Reciprocal Rank Fusion) for result merging
- Dashboard: Add real-time progress bar for CodexLens index initialization
- MCP: Return progress metadata in init action response
- Codex-lens: Auto-detect optimal worker count for parallel indexing

Changes:
- smart-search.ts: Add parallel mode, RRF fusion, progress tracking
- codex-lens.ts: Add onProgress callback support, progress parsing
- codexlens-routes.ts: Broadcast index progress via WebSocket
- codexlens-manager.js: New index progress modal with real-time updates
- notifications.js: Add WebSocket event handler registration system
- i18n.js: Add English/Chinese translations for progress UI
- index_tree.py: Workers parameter now auto-detects CPU count (max 16)
- commands.py: CLI --workers parameter supports auto-detection

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-17 23:17:15 +08:00
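Reciprocal Rank Fusion merges the three ranked result lists without needing comparable scores; a minimal sketch (k=60 is the conventional RRF constant, assumed here):

```python
from collections import defaultdict

def rrf_fuse(ranked_lists, weights=None, k=60):
    """score(d) = sum over sources of weight / (k + rank(d))."""
    weights = weights or [1.0] * len(ranked_lists)
    scores = defaultdict(float)
    for ranking, w in zip(ranked_lists, weights):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += w / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

hybrid  = ["a.py:10", "b.py:3", "c.py:7"]
exact   = ["b.py:3", "d.py:1"]
ripgrep = ["c.py:7", "b.py:3"]
print(rrf_fuse([hybrid, exact, ripgrep]))  # b.py:3 ranks first
```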
catlog22
44d84116c3 Revert: Remove ccw session management while keeping ccw cli exec
Selectively revert ccw session management commands back to commit 5114a94,
while preserving ccw cli exec improvements.

Changes:
- Session management commands (start, list, resume, complete): Full revert to bash commands
- execute.md: Full revert (only had ccw session changes)
- review.md: Reverted ccw session read, kept ccw cli exec
- docs.md: Reverted ccw session read/write, kept ccw cli exec
- lite-fix.md: Reverted ccw session init/read, kept other changes
- lite-plan.md: Reverted ccw session init/read, kept other changes
- lite-execute.md: No changes (kept ccw cli exec intact)
- code-developer.md: No changes (kept ccw cli exec intact)

All ccw session management operations replaced with bash commands.
All ccw cli exec commands preserved for unified CLI execution.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-17 22:52:12 +08:00
catlog22
474a1ce027 fix: Correct Exa MCP tool call syntax in context-requirements
- Change exa() to mcp__exa__search() for correct MCP invocation
- Add complete parameter documentation (query, numResults, livecrawl)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-17 22:25:50 +08:00
catlog22
b22839c99f fix: Resolve MCP installation issues and enhance path resolution
- Fixed API endpoint mismatches in mcp-manager.js to ensure global install/update buttons function correctly.
- Corrected undefined function references in mcp-manager.js for project installation.
- Refactored event handling to eliminate global scope pollution in mcp-manager.js.
- Added comprehensive debugging guide for MCP installation issues.
- Implemented a session path resolver to infer content types from filenames and paths, improving usability.
- Introduced tests for embeddings improvements in init and status commands to verify functionality.
2025-12-17 22:05:16 +08:00
catlog22
8b927f302c Fix MCP Manager panel - 13 critical issues resolved
Issues fixed:
1. API endpoint mismatch (/api/mcp-add-global-server)
2. Undefined function reference (installMcpToProject → copyMcpServerToProject)
3. Inline onclick handler scope issues (converted to data-action)
4. querySelector only finding first element (use querySelectorAll)
5. Path normalization causing wrong MCP badge count
6. Lucide icons destroying event listeners (reordered execution)
7. Codex button JSON serialization syntax error
8. Codex API routes 404 (add /api/codex-mcp route matching)
9. False warning suppression for conditional buttons
10. HTML syntax error from JSON in onclick attributes
11. CSS z-index issue - ::before pseudo-element blocking clicks
12. Navigation badge path matching (try both slash formats)
13. Remove deprecated codex_lens tool (merged into smart_search)

Key changes:
- server.ts: Add /api/codex-mcp route matching
- 15-mcp-manager.css: Add pointer-events:none to ::before, z-index for buttons
- components/mcp-manager.js: Fix updateMcpBadge path matching, global exports
- views/mcp-manager.js: Convert onclick to data-action, add event listeners,
  update CCW_MCP_TOOLS list, fix JSON escaping in HTML attributes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-17 19:23:39 +08:00
catlog22
c16da759b2 Fix session management location inference and ccw command usage
This commit addresses multiple issues in session management and command documentation:

Session Management Fixes:
- Add auto-inference of location from type parameter in session.ts
- When --type lite-plan/lite-fix is specified, automatically set location accordingly
- Preserve explicit --location parameter when provided
- Update session-manager.ts to support type-based location inference
- Fix metadata filename selection (session-metadata.json vs workflow-session.json)

Command Documentation Fixes:
- Add missing --mode analysis parameter (3 locations):
  * commands/memory/docs.md
  * commands/workflow/lite-execute.md (2 instances)
- Add missing --mode write parameter (4 locations):
  * commands/workflow/tools/task-generate-agent.md
- Remove non-existent subcommands (3 locations):
  * commands/workflow/session/complete.md (manifest, project)
- Update session command syntax to use simplified format:
  * Changed from 'ccw session manifest read' to 'test -f' checks
  * Changed from 'ccw session project read' to 'test -f' checks

Documentation Updates:
- Update lite-plan.md and lite-fix.md to use --type parameter
- Update session/start.md to document lite-plan and lite-fix types
- Sync all fixes to skills/command-guide/reference directory (84 files)

All ccw command usage across the codebase is now consistent and correct.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-17 18:09:23 +08:00
catlog22
74a830694c Fix CodexLens embeddings generation to achieve 100% coverage
Previously, embeddings were only generated for root directory files (1.6% coverage, 5/303 files).
This fix implements recursive processing across all subdirectory indexes, achieving 100% coverage
with 2,042 semantic chunks across all 303 files in 26 index databases.

Key improvements:

1. **Recursive embeddings generation** (embedding_manager.py):
   - Add generate_embeddings_recursive() to process all _index.db files in directory tree
   - Add get_embeddings_status() for comprehensive coverage statistics
   - Add discover_all_index_dbs() helper for recursive file discovery

2. **Enhanced CLI commands** (commands.py):
   - embeddings-generate: Add --recursive flag for full project coverage
   - init: Use recursive generation by default for complete indexing
   - status: Display embeddings coverage statistics with 50% threshold

3. **Smart search routing improvements** (smart-search.ts):
   - Add 50% embeddings coverage threshold for hybrid mode routing
   - Auto-fallback to exact mode when coverage insufficient
   - Strip ANSI color codes from JSON output for correct parsing
   - Add embeddings_coverage_percent to IndexStatus and SearchMetadata
   - Provide clear warnings with actionable suggestions

4. **Documentation and analysis**:
   - Add SMART_SEARCH_ANALYSIS.md with initial investigation
   - Add SMART_SEARCH_CORRECTED_ANALYSIS.md revealing true extent of issue
   - Add EMBEDDINGS_FIX_SUMMARY.md with complete fix summary
   - Add check_embeddings.py script for coverage verification

Results:
- Coverage improved from 1.6% (5/303 files) to 100% (303/303 files) - 62.5x increase
- Semantic chunks increased from 10 to 2,042 - 204x increase
- All 26 subdirectory indexes now have embeddings vs just 1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-17 17:54:33 +08:00
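The discovery step at the heart of this fix reduces to a recursive filesystem walk; a sketch of discover_all_index_dbs (the function name comes from the commit, the body is assumed):

```python
from pathlib import Path

def discover_all_index_dbs(index_root: str) -> list[Path]:
    """Find every _index.db in the directory tree, not just the root;
    this is what lifts coverage from root-only files to all files."""
    return sorted(Path(index_root).rglob("_index.db"))

def coverage_percent(files_with_embeddings: int, total_files: int) -> float:
    return 0.0 if total_files == 0 else 100.0 * files_with_embeddings / total_files
```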
catlog22
d06a3ca12e Add comprehensive workflows for CLI tools usage, coding philosophy, context requirements, file modification, and CodexLens auto hybrid mode
- Introduced a detailed guide for intelligent tools selection strategy, including quick reference, tool specifications, prompt templates, and best practices for CLI execution.
- Established a coding philosophy document outlining core beliefs, simplicity principles, and guidelines for effective coding practices.
- Created context requirements documentation emphasizing the importance of understanding existing patterns and dependencies before implementation.
- Developed a file modification workflow detailing the use of edit_file and write_file MCP tools, along with priority logic for file reading and editing.
- Implemented CodexLens auto hybrid mode, enhancing the CLI with automatic vector embedding generation and default hybrid search mode based on embedding availability.
2025-12-17 09:53:30 +08:00
catlog22
154a9283b5 Add internationalization support for help view and implement help rendering logic
- Introduced `help-i18n.js` for managing translations in Chinese and English for the help view.
- Created `help.js` to render the help view, including command categories, workflow diagrams, and CodexLens quick-start.
- Implemented search functionality with debounce for command filtering.
- Added workflow diagram rendering with Cytoscape.js integration.
- Developed tests for write-file verification, ensuring proper handling of small and large JSON files.
2025-12-16 22:24:29 +08:00
catlog22
b702791c2c Remove LLM enhancement features and related components as per user request. This includes deleting the associated source files, CLI commands, front-end components, tests, scripts, and documentation. Dependencies were simplified and complexity reduced while retaining core vector search capabilities; validation confirmed the removal was complete and remaining functionality intact. 2025-12-16 21:38:27 +08:00
catlog22
d21066c282 Add scripts for inspecting LLM summaries and testing misleading comments
- Implement `inspect_llm_summaries.py` to display LLM-generated summaries from the semantic_chunks table in the database.
- Create `show_llm_analysis.py` to demonstrate LLM analysis of misleading code examples, highlighting discrepancies between comments and actual functionality.
- Develop `test_misleading_comments.py` to compare pure vector search with LLM-enhanced search, focusing on the impact of misleading or missing comments on search results.
- Introduce `test_llm_enhanced_search.py` to provide a test suite for evaluating the effectiveness of LLM-enhanced vector search against pure vector search.
- Ensure all new scripts are integrated with the existing codebase and follow the established coding standards.
2025-12-16 20:29:28 +08:00
catlog22
df23975a0b Add comprehensive tests for schema cleanup migration and search comparison
- Implement tests for migration 005 to verify removal of deprecated fields in the database schema.
- Ensure that new databases are created with a clean schema.
- Validate that keywords are correctly extracted from the normalized file_keywords table.
- Test symbol insertion without deprecated fields and subdir operations without direct_files.
- Create a detailed search comparison test to evaluate vector search vs hybrid search performance.
- Add a script for reindexing projects to extract code relationships and verify GraphAnalyzer functionality.
- Include a test script to check TreeSitter parser availability and relationship extraction from sample files.
2025-12-16 19:27:05 +08:00
catlog22
3da0ef2adb Add comprehensive tests for query parsing and Reciprocal Rank Fusion
- Implemented tests for the QueryParser class, covering various identifier splitting methods (CamelCase, snake_case, kebab-case), OR expansion, and FTS5 operator preservation.
- Added parameterized tests to validate expected token outputs for different query formats.
- Created edge case tests to ensure robustness against unusual input scenarios.
- Developed tests for the Reciprocal Rank Fusion (RRF) algorithm, including score computation, weight handling, and result ranking across multiple sources (see the sketch below).
- Included tests for normalization of BM25 scores and tagging search results with source metadata.
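For reference, Reciprocal Rank Fusion scores each document as a weighted sum of 1/(k + rank) across sources; k = 60 is the constant from the original RRF paper, and the repo's exact constant and weights are assumptions here. A TypeScript sketch:

    type RankedResults = { id: string }[];

    // Fuse per-source rankings: score(d) = sum over sources of weight / (k + rank).
    function rrfFuse(sources: { results: RankedResults; weight: number }[], k = 60): Map<string, number> {
      const scores = new Map<string, number>();
      for (const { results, weight } of sources) {
        results.forEach((doc, i) => {
          const rank = i + 1; // ranks are 1-based within each source
          scores.set(doc.id, (scores.get(doc.id) ?? 0) + weight / (k + rank));
        });
      }
      return scores; // sort entries by descending score to get the fused ranking
    }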
2025-12-16 10:20:19 +08:00
catlog22
35485bbbb1 feat: Enhance navigation and cleanup for graph explorer view
- Added a cleanup function to reset the state when navigating away from the graph explorer.
- Updated navigation logic to call the cleanup function before switching views.
- Improved internationalization by adding new translations for graph-related terms.
- Adjusted icon sizes for better UI consistency in the graph explorer.
- Implemented impact analysis button functionality in the graph explorer.
- Refactored CLI tool configuration to use updated model names.
- Enhanced CLI executor to handle prompts correctly for codex commands.
- Introduced code relationship storage for better visualization in the index tree.
- Added support for parsing Markdown and plain text files in the symbol parser.
- Updated tests to reflect changes in language detection logic.
2025-12-15 23:11:01 +08:00
catlog22
894b93e08d feat(graph-explorer): implement interactive code relationship visualization with Cytoscape.js
- Added main render function to initialize the graph explorer view.
- Implemented data loading functions for graph nodes, edges, and search process data.
- Created UI layout with tabs for graph view and search process view.
- Developed filtering options for nodes and edges with corresponding UI elements.
- Integrated Cytoscape.js for graph visualization, including node and edge styling (see the sketch below).
- Added functionality for node selection and details display.
- Implemented impact analysis feature with modal display for results.
- Included utility functions for refreshing data and managing UI states.
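Cytoscape.js initialization in roughly the shape a graph explorer needs: elements, a selector-based stylesheet, and a layout. The element data below is illustrative, not the repo's actual graph schema:

    import cytoscape from 'cytoscape';

    const cy = cytoscape({
      container: document.getElementById('graph')!,
      elements: [
        { data: { id: 'parseFile' } },
        { data: { id: 'extractSymbols' } },
        { data: { id: 'e1', source: 'parseFile', target: 'extractSymbols' } },
      ],
      style: [
        { selector: 'node', style: { label: 'data(id)' } },
        { selector: 'edge', style: { 'curve-style': 'bezier', 'target-arrow-shape': 'triangle' } },
      ],
      layout: { name: 'breadthfirst' },
    });

    // Node selection hook, e.g. to populate a details panel.
    cy.on('tap', 'node', (evt) => console.log('selected node:', evt.target.id()));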
2025-12-15 19:35:18 +08:00
catlog22
97640a517a feat(storage): implement storage manager for centralized management and cleanup
- Added a new Storage Manager component to handle storage statistics, project cleanup, and configuration for CCW centralized storage.
- Introduced functions to calculate directory sizes, get project storage stats, and clean specific or all storage.
- Enhanced SQLiteStore with a public API for executing queries securely.
- Updated tests to utilize the new execute_query method and validate storage management functionalities.
- Improved performance by implementing connection pooling with idle timeout management in SQLiteStore.
- Added new fields (token_count, symbol_type) to the symbols table and adjusted related insertions.
- Enhanced error handling and logging for storage operations.
2025-12-15 17:39:38 +08:00
catlog22
ee0886fc48 feat: update workflow execution docs with stronger task tracking and status-mapping rules 2025-12-15 15:16:16 +08:00
catlog22
0fe16963cd Add comprehensive tests for tokenizer, performance benchmarks, and TreeSitter parser functionality
- Implemented unit tests for the Tokenizer class, covering various text inputs, edge cases, and fallback mechanisms.
- Created performance benchmarks comparing tiktoken and pure Python implementations for token counting.
- Developed extensive tests for TreeSitterSymbolParser across Python, JavaScript, and TypeScript, ensuring accurate symbol extraction and parsing.
- Added configuration documentation for MCP integration and custom prompts, enhancing usability and flexibility.
- Introduced a refactor script for GraphAnalyzer to streamline future improvements.
2025-12-15 14:36:09 +08:00
catlog22
82dcafff00 feat: add workflow execution doc supporting sequential task execution from a session folder 2025-12-15 14:32:08 +08:00
catlog22
3ffb907a6f feat: add semantic graph design for static code analysis
- Introduced a comprehensive design document for a Code Semantic Graph aimed at enhancing static analysis capabilities.
- Defined the architecture, core components, and implementation steps for analyzing function calls, data flow, and dependencies.
- Included detailed specifications for nodes and edges in the graph, along with database schema for storage.
- Outlined phases for implementation, technical challenges, success metrics, and application scenarios.
2025-12-15 09:47:18 +08:00
catlog22
d91477ad80 feat: Implement CLAUDE.md Manager View with file tree, viewer, and metadata actions
- Added main JavaScript functionality for CLAUDE.md management including file loading, rendering, and editing capabilities.
- Created a test HTML file to validate the functionality of the CLAUDE.md manager.
- Introduced CLI generation examples and documentation for rules creation via CLI.
- Enhanced error handling and notifications for file operations.
2025-12-14 23:08:36 +08:00
catlog22
0529b57694 Implement database migration framework and performance optimizations
- Added active memory configuration for manual interval and Gemini tool.
- Created file modification rules for handling edits and writes.
- Implemented migration manager for managing database schema migrations.
- Added migration 001 to normalize keywords into separate tables.
- Developed tests for validating performance optimizations including keyword normalization, path lookup, and symbol search.
- Created validation script to manually verify optimization implementations.
2025-12-14 18:08:32 +08:00
catlog22
79a2953862 Add comprehensive tests for vector/semantic search functionality
- Implement full coverage tests for Embedder model loading and embedding generation
- Add CRUD operations and caching tests for VectorStore
- Include cosine similarity computation tests
- Validate semantic search accuracy and relevance through various queries
- Establish performance benchmarks for embedding and search operations
- Ensure edge cases and error handling are covered
- Test thread safety and concurrent access scenarios
- Verify availability of semantic search dependencies
2025-12-14 17:17:09 +08:00
catlog22
8d542b8e45 fix(ccw): prevent settings.json fields from being overwritten by hook operations
readSettingsFile() in hooks-routes and mcp-routes returned { hooks: {} } as the
default value when the file was missing or failed to parse, causing writeFileSync()
to overwrite the other fields (mcpServers, projects, statusLine) in settings.json.

Changed default return to {} to preserve all existing fields when updating hooks.
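A minimal sketch of the fix, assuming the read/merge/write flow described above:

    import { readFileSync } from 'node:fs';

    // Return {} (not { hooks: {} }) when settings.json is missing or unparseable,
    // so the caller merges hook updates into the full object instead of
    // clobbering sibling fields like mcpServers, projects, and statusLine.
    function readSettingsFile(path: string): Record<string, unknown> {
      try {
        return JSON.parse(readFileSync(path, 'utf8'));
      } catch {
        return {}; // previously { hooks: {} }, which dropped every other field on save
      }
    }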

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-14 16:17:40 +08:00
catlog22
ac9060ab3a fix(codex-lens): refine CLI exception handling with specific error types
Replace overly broad `except Exception` blocks with specific exception
handlers (StorageError, ConfigError, ParseError, SearchError, PermissionError)
across all CLI commands. This provides more precise error messages and
improves debugging experience for end users.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-14 16:10:59 +08:00
catlog22
1c9716e460 fix(ccw): only cache positive tool availability results
Fixes CLI tool detection failure after server restart.

The previous caching implementation cached negative results (errors,
timeouts) which caused persistent failures if the first check failed
for any transient reason (environment, timing, etc.).

Changes:
- Only cache positive tool availability results
- Don't cache errors or timeouts - they may be transient
- Subsequent checks will retry if a tool wasn't found

This ensures that once a tool is found, lookups are fast, but
transient failures don't persist in the cache.
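A sketch of the positive-only policy, with an illustrative cache shape and probe function:

    const toolCache = new Map<string, boolean>();

    async function isToolAvailable(tool: string, probe: (t: string) => Promise<boolean>): Promise<boolean> {
      if (toolCache.get(tool)) return true; // only positive results short-circuit
      let found = false;
      try {
        found = await probe(tool); // e.g. spawn a `where`/`which` check
      } catch {
        // transient error: deliberately not cached, the next call retries
      }
      if (found) toolCache.set(tool, true); // never cache negatives
      return found;
    }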

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-14 12:16:48 +08:00
catlog22
7e70e4c299 perf(ccw): optimize I/O operations and add caching layer
Performance Optimizations:

1. Async I/O Operations (data-aggregator.ts, session-scanner.ts):
   - Replace sync fs operations with fs/promises
   - Parallelize file reads with Promise.all()
   - Add concurrency limiting to prevent overwhelming system
   - Non-blocking event loop during aggregation

2. Data Caching Layer (cache-manager.ts):
   - New CacheManager<T> class for dashboard data caching (see the sketch below)
   - File timestamp tracking for change detection
   - TTL-based expiration (5 minutes default)
   - Automatic invalidation when files change
   - Cache location: .workflow/.ccw-cache/

3. CLI Executor Optimization (cli-executor.ts):
   - Tool availability caching with 5-minute TTL
   - Avoid repeated process spawning for where/which checks
   - Memory cache for frequently checked tools

Expected Performance Improvements:
- Data aggregation: 10x-50x faster with async I/O
- Cache hits: <5ms vs 200-500ms (40-100x improvement)
- CLI tool checks: <1ms cached vs 200-500ms
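A sketch of a TTL-plus-timestamp cache in the spirit of CacheManager<T>; the 5-minute default and file-change invalidation come from the notes above, the rest is assumed:

    import { statSync } from 'node:fs';

    class TtlCache<T> {
      private entries = new Map<string, { value: T; expires: number; mtimeMs: number }>();
      constructor(private ttlMs = 5 * 60 * 1000) {}

      get(key: string, sourceFile: string): T | undefined {
        const e = this.entries.get(key);
        if (!e) return undefined;
        // Fresh only if the TTL has not elapsed and the source file is unchanged.
        const fresh = Date.now() < e.expires && statSync(sourceFile).mtimeMs === e.mtimeMs;
        if (!fresh) this.entries.delete(key); // expired or file changed: invalidate
        return fresh ? e.value : undefined;
      }

      set(key: string, sourceFile: string, value: T): void {
        this.entries.set(key, {
          value,
          expires: Date.now() + this.ttlMs,
          mtimeMs: statSync(sourceFile).mtimeMs,
        });
      }
    }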

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-14 12:11:29 +08:00
catlog22
ac43cf85ec feat: Implement Skills Manager View and Notifier Module
- Added `skills-manager.js` for managing Claude Code skills with functionalities for loading, displaying, and editing skills.
- Introduced a Notifier module in `notifier.ts` for CLI to server communication, enabling notifications for UI updates on data changes.
- Created comprehensive documentation for the Chain Search implementation, including usage examples and performance tips.
- Developed a test suite for the Chain Search engine, covering basic search, quick search, symbol search, and files-only search functionalities.
2025-12-14 11:12:48 +08:00
catlog22
08dc0a0348 perf(codex-lens): optimize search performance with vectorized operations
Performance Optimizations:
- VectorStore: NumPy vectorized cosine similarity (100x+ faster; see the sketch below)
  - Cached embedding matrix with pre-computed norms
  - Lazy content loading for top-k results only
  - Thread-safe cache invalidation
- SQLite: Added PRAGMA mmap_size=30GB for memory-mapped I/O
- FTS5: unicode61 tokenizer with tokenchars='_' for code identifiers
- ChainSearch: files_only fast path skipping snippet generation
- ThreadPoolExecutor: shared pool across searches

New Components:
- DirIndexStore: single-directory index with FTS5 and symbols
- RegistryStore: global project registry with path mappings
- PathMapper: source-to-index path conversion utility
- IndexTreeBuilder: hierarchical index tree construction
- ChainSearchEngine: parallel recursive directory search

Test Coverage:
- 36 comprehensive search functionality tests
- 14 performance benchmark tests
- 296 total tests passing (100% pass rate)

Benchmark Results:
- FTS5 search: 0.23-0.26ms avg (3900-4300 ops/sec)
- Vector search: 1.05-1.54ms avg (650-955 ops/sec)
- Full semantic: 4.56-6.38ms avg per query
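The actual VectorStore vectorizes this with NumPy in Python; a TypeScript sketch of the same idea, caching one matrix of embeddings with precomputed norms so each query costs one dot product per row:

    function cosineScores(matrix: Float32Array[], norms: Float32Array, query: Float32Array): Float32Array {
      let qNorm = 0;
      for (let i = 0; i < query.length; i++) qNorm += query[i] * query[i];
      qNorm = Math.sqrt(qNorm) || 1;

      const scores = new Float32Array(matrix.length);
      for (let r = 0; r < matrix.length; r++) {
        const row = matrix[r];
        let dot = 0;
        for (let i = 0; i < row.length; i++) dot += row[i] * query[i];
        scores[r] = dot / (norms[r] * qNorm); // norms[r] precomputed once at cache build
      }
      return scores; // take top-k, then lazily load content for those rows only
    }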

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-14 11:06:24 +08:00
catlog22
90adef6cfb docs: clarify CodexLens action parameter requirements
Add clearer documentation for each action:
- search/search_files: requires query parameter
- symbol: no query, extracts all symbols from file

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-14 10:31:22 +08:00
catlog22
d4499cc6d7 refactor: replace Code Index MCP with native CCW CodexLens
Migrate all Code Index MCP references to CCW's built-in CodexLens tool:
- Update context-search-agent with codex_lens API calls
- Update test-context-search-agent with new search patterns
- Update memory/load command file discovery reference
- Update context-gather and task-generate-tdd MCP capabilities
- Update INSTALL docs to reflect CodexLens as built-in feature
- Sync all mirrored files in skills/command-guide/reference/

Old API (mcp__code-index__*) → New API (mcp__ccw-tools__codex_lens)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-14 10:26:12 +08:00
catlog22
958cf290e2 feat: add insights history feature with UI and backend support
- Implemented insights history section in the dashboard with cards displaying analysis results.
- Added CSS styles for insights cards, detail panel, and empty state.
- Enhanced i18n support for insights-related strings in English and Chinese.
- Developed backend methods for saving, retrieving, and deleting insights in the CLI history store.
- Integrated insights loading and rendering logic in the memory view.
- Added functionality to refresh insights and handle detail view interactions.
2025-12-14 09:48:02 +08:00
catlog22
d3a522f3e8 feat: add support for Claude CLI tool and enhance memory features
- Added new CLI tool "Claude" with command handling in cli-executor.ts.
- Implemented session discovery for Claude in native-session-discovery.ts.
- Enhanced memory view with active memory controls, including sync functionality and configuration options.
- Introduced zoom and fit view controls for memory graph visualization.
- Updated i18n.js for new memory-related translations.
- Improved error handling and migration for CLI history store.
2025-12-13 22:44:42 +08:00
catlog22
52935d4b8e feat: Implement resume strategy engine and session content parser
- Added `resume-strategy.ts` to determine optimal resume approaches including native, prompt concatenation, and hybrid modes.
- Introduced `determineResumeStrategy` function to evaluate various resume scenarios.
- Created utility functions for building context prefixes and formatting outputs in plain, YAML, and JSON formats.
- Added `session-content-parser.ts` to parse native CLI tool session files supporting Gemini/Qwen JSON and Codex JSONL formats.
- Implemented parsing logic for different session formats, including error handling for invalid lines (see the sketch below).
- Provided functions to format conversations and extract user-assistant pairs from parsed sessions.
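Tolerant JSONL parsing of the kind described above reduces to parsing line by line and skipping bad lines; a minimal sketch:

    // One JSON object per line; invalid lines are skipped rather than aborting the parse.
    function parseJsonl(text: string): unknown[] {
      const records: unknown[] = [];
      for (const line of text.split('\n')) {
        const trimmed = line.trim();
        if (!trimmed) continue;
        try {
          records.push(JSON.parse(trimmed));
        } catch {
          // invalid line: skip it, keep parsing the rest of the file
        }
      }
      return records;
    }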
2025-12-13 20:29:19 +08:00
catlog22
32217f87fd feat(dashboard): CLI settings UI and Smart Context support
- Add CLI Settings section with Prompt Format selector
- Add Smart Context toggle and Max Files dropdown
- Update smart-search with output_mode (full/files_only/count)
- Add smart-context module for keyword extraction
- Improve Status page styling (remove card wrappers)
- Add i18n translations for new settings

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-13 17:28:39 +08:00
catlog22
675aff26ff feat(mcp): add read_file tool and simplify edit/write returns
- edit_file: truncate diff to 15 lines, compact result format
- write_file: return only path/bytes/message
- read_file: new tool with multi-file, directory, regex support
  - paths: single file, array, or directory
  - pattern: glob filter (*.ts)
  - contentPattern: regex content search
  - maxDepth, maxFiles, includeContent options
- Update tool-strategy.md documentation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-13 17:28:03 +08:00
catlog22
029384c427 feat: Implement SQLite storage for CLI execution history
- Introduced a new SQLite-based storage backend for managing CLI execution history.
- Added `CliHistoryStore` class to handle conversation records and turns with efficient queries.
- Migrated existing JSON history files to the new SQLite format.
- Updated CLI executor to use asynchronous and synchronous methods for saving and loading conversations.
- Enhanced execution history retrieval with support for filtering by tool, status, and search terms.
- Added prompt concatenation utilities to build multi-turn prompts in various formats (plain, YAML, JSON).
- Implemented batch deletion of conversations and improved error handling for database operations.
2025-12-13 14:53:53 +08:00
catlog22
37417caca2 feat(docs): clarify template requirements and fallback options in CLI execution guidelines 2025-12-13 14:29:23 +08:00
catlog22
8f58e4e48a feat(cli): update command patterns to use CCW CLI for analysis tasks 2025-12-13 14:24:37 +08:00
catlog22
68c872ad36 refactor(agents): make cli-lite-planning-agent generic for both workflows
- Update cli-lite-planning-agent to serve both lite-plan and lite-fix
- Add schema-driven output (plan-json-schema or fix-plan-json-schema)
- Support both explorations (lite-plan) and diagnoses (lite-fix) context
- Remove CLI execution ID planning from lite-plan.md and lite-fix.md
- Delegate CLI execution ID assignment to agent when user selects CLI execution

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-13 14:15:47 +08:00
catlog22
c780544792 feat(cli): add support for custom execution IDs and multi-turn conversations
- Introduced `--id <id>` option in CLI for custom execution IDs.
- Enhanced CLI command handling to support multi-turn conversations.
- Updated execution and conversation detail retrieval to accommodate new structure.
- Implemented merging of multiple conversations with tracking of source IDs.
- Improved history management to save and load conversation records.
- Added styles for displaying multi-turn conversation details in the dashboard.
- Refactored existing execution detail functions for backward compatibility.
2025-12-13 14:03:24 +08:00
catlog22
23e15e479e fix(cli-executor): update stdin prompt variable for CLI tool execution 2025-12-13 12:15:07 +08:00
catlog22
684618e72b feat: integrate session resume functionality into CLI commands and documentation 2025-12-13 12:09:32 +08:00
catlog22
93d3df1e08 feat: add task queue sidebar and resume functionality for CLI sessions
- Implemented task queue sidebar for managing active tasks with filtering options.
- Added functionality to close notification sidebar when opening task queue.
- Enhanced CLI history view to support resuming previous sessions with optional prompts.
- Updated CLI executor to handle resuming sessions for Codex, Gemini, and Qwen tools.
- Introduced utility functions for finding CLI history directories recursively.
- Improved task queue data management and rendering logic.
2025-12-13 11:51:53 +08:00
catlog22
335f5e9ec6 fix(ccw): correct template paths for TypeScript build
Fix paths from dist/core/ to src/templates/, since templates are not
compiled and remain in the src directory.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-13 10:45:59 +08:00
catlog22
30e9ae0153 Refactor workflow session commands to utilize ccw CLI for session management
- Updated session completion process to use `ccw session` commands for listing, reading, and updating session statuses.
- Enhanced session listing command to provide metadata and statistics using `ccw session list` and `ccw session stats`.
- Streamlined session resume functionality with `ccw session` commands for checking status and updating session state.
- Improved session creation process by replacing manual directory and metadata handling with `ccw session init`.
- Introduced `session_manager` tool for simplified session operations, including listing, updating, and archiving sessions.
- Updated conflict resolution tools to generate structured output in `conflict-resolution.json` instead of markdown files.
- Enhanced TDD and UI design tools to utilize `ccw cli exec` for executing analysis commands, improving integration with the workflow.
2025-12-13 10:45:03 +08:00
catlog22
25ac862f46 feat(ccw): migrate backend to TypeScript
- Convert 40 JS files to TypeScript (CLI, tools, core, MCP server)
- Add Zod for runtime parameter validation (see the sketch below)
- Add type definitions in src/types/
- Keep src/templates/ as JavaScript (dashboard frontend)
- Update bin entries to use dist/
- Add tsconfig.json with strict mode
- Add backward-compatible exports for tests
- All 39 tests passing

Breaking changes: None (backward compatible)
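The Zod pattern added here validates tool parameters at runtime and derives the static type from the same schema; the fields below are hypothetical:

    import { z } from 'zod';

    const EditFileParams = z.object({
      path: z.string().min(1),
      old: z.string(),
      new: z.string(),
    });

    type EditFileParams = z.infer<typeof EditFileParams>; // static type derived from the schema

    function runEditFile(raw: unknown): EditFileParams {
      return EditFileParams.parse(raw); // throws a ZodError with field-level details on bad input
    }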

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-13 10:43:15 +08:00
catlog22
d4e59770d0 feat: Add CCW MCP server and tools integration
- Introduced `ccw-mcp` command for running CCW tools as an MCP server.
- Updated `package.json` to include new MCP dependencies and scripts.
- Enhanced CLI with new options for `codex_lens` tool.
- Implemented MCP server logic to expose CCW tools via Model Context Protocol.
- Added new tools and updated existing ones for better functionality and documentation.
- Created quick start and full documentation for MCP server usage.
- Added tests for MCP server functionality to ensure reliability.
2025-12-13 09:14:57 +08:00
catlog22
15122b9ebb fix(mcp): support same-name MCP servers with different configs
- Add getMcpConfigHash() to generate unique config fingerprints (see the sketch below)
- Modify getAllAvailableMcpServers() to differentiate same-name servers
  with different configurations using name@project-folder suffix
- Update renderAvailableServerCard() to show source project info and
  use originalName for correct installation
- Fix event handlers to use btn.dataset for event bubbling safety
- Add CCW Tools MCP installation with npx for cross-platform compatibility
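One plausible shape for getMcpConfigHash(): canonicalize the config (sort keys recursively) and hash it, so two same-name servers with different configs get different fingerprints. The repo's exact implementation may differ:

    import { createHash } from 'node:crypto';

    // Sort object keys at every depth so the hash ignores property order.
    function canonicalize(value: unknown): unknown {
      if (Array.isArray(value)) return value.map(canonicalize);
      if (value && typeof value === 'object') {
        return Object.fromEntries(
          Object.entries(value as Record<string, unknown>)
            .sort(([a], [b]) => a.localeCompare(b))
            .map(([k, v]) => [k, canonicalize(v)]),
        );
      }
      return value;
    }

    function getMcpConfigHash(config: Record<string, unknown>): string {
      return createHash('sha256').update(JSON.stringify(canonicalize(config))).digest('hex').slice(0, 12);
    }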

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-12 23:20:34 +08:00
catlog22
a41e6d19fd Refactor code structure for improved readability and maintainability 2025-12-12 22:02:23 +08:00
catlog22
e879ec7189 feat(dashboard): optimize CCW Endpoint Tools display with card grid and detail modal 2025-12-12 20:01:55 +08:00
catlog22
4faa5f1c95 Add comprehensive tests for semantic chunking and search functionality
- Implemented tests for the ChunkConfig and Chunker classes, covering default and custom configurations.
- Added tests for symbol-based chunking, including single and multiple symbols, handling of empty symbols, and preservation of line numbers.
- Developed tests for sliding window chunking, ensuring correct chunking behavior with various content sizes and configurations (see the sketch below).
- Created integration tests for semantic search, validating embedding generation, vector storage, and search accuracy across a complex codebase.
- Included performance tests for embedding generation and search operations.
- Established tests for chunking strategies, comparing symbol-based and sliding window approaches.
- Enhanced test coverage for edge cases, including handling of unicode characters and out-of-bounds symbol ranges.
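A generic sliding-window chunker of the kind these tests exercise; window and overlap sizes are illustrative:

    // Fixed-size windows over the source lines, overlapping so context is not
    // cut off at chunk boundaries.
    function slidingWindowChunks(lines: string[], windowSize = 50, overlap = 10): { start: number; text: string }[] {
      const chunks: { start: number; text: string }[] = [];
      const step = Math.max(1, windowSize - overlap);
      for (let start = 0; start < lines.length; start += step) {
        chunks.push({ start: start + 1, text: lines.slice(start, start + windowSize).join('\n') });
        if (start + windowSize >= lines.length) break; // last window reached the end
      }
      return chunks;
    }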
2025-12-12 19:55:35 +08:00
catlog22
c42f91a7fe feat: Add support for Tree-Sitter parsing and enhance SQLite storage performance 2025-12-12 18:40:24 +08:00
catlog22
92d2085b64 Optimize SQLite FTS storage and pooling 2025-12-12 17:40:03 +08:00
catlog22
6a39f7e69d Refactor code structure for improved readability and maintainability 2025-12-12 16:27:37 +08:00
catlog22
dfa8dbc52a feat: Enhance CLI tools and history management
- Added CLI Manager and CLI History views to the navigation.
- Implemented rendering for CLI tools with detailed status and actions.
- Introduced a new CLI History view to display execution history with search and filter capabilities.
- Added hooks for managing and displaying available SKILLs in the Hook Manager.
- Created modals for Hook Wizards and Template View for better user interaction.
- Implemented semantic search dependency checks and installation functions in CodexLens.
- Updated dashboard layout to accommodate new features and improve user experience.
2025-12-12 16:26:49 +08:00
catlog22
a393601ec5 feat(codexlens): add CodexLens code indexing platform with incremental updates
- Add CodexLens Python package with SQLite FTS5 search and tree-sitter parsing
- Implement workspace-local index storage (.codexlens/ directory)
- Add incremental update CLI command for efficient file-level index refresh
- Integrate CodexLens with CCW tools (codex_lens action: update)
- Add CodexLens Auto-Sync hook template for automatic index updates on file changes
- Add CodexLens status card in CCW Dashboard CLI Manager with install/init buttons
- Add server APIs: /api/codexlens/status, /api/codexlens/bootstrap, /api/codexlens/init

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-12 15:02:32 +08:00
catlog22
b74a90b416 Refactor code structure for improved readability and maintainability 2025-12-12 11:19:58 +08:00
catlog22
77de8d857b feat: add a JSON schema for conflict-resolution mode, supporting conflict detection and resolution 2025-12-12 09:21:05 +08:00
catlog22
76c1745269 refactor(task-generate-agent): optimize N+1 parallel planning prompts
- Phase 2B: Add structured agent prompt with TASK OBJECTIVE, MODULE SCOPE,
  SESSION PATHS, CONTEXT METADATA, CROSS-MODULE DEPENDENCIES sections
- Phase 3: Convert from function calls to agent invocation with complete
  prompt structure for IMPL_PLAN.md and TODO_LIST.md generation
- Align prompt structure with Phase 2A (Single Agent) for consistency
- Reference agent specification for template details instead of inline docs
- Add dependency resolution algorithm for CROSS:: placeholder resolution

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-11 23:49:40 +08:00
catlog22
811382775d feat: Implement fuzzy search functionality in smart-search.js
- Added buildFuzzyRegex function for approximate matching (see the sketch below).
- Enhanced buildRipgrepCommand to support fuzzy parameter.
- Updated executeAutoMode to handle fuzzy search case.
- Implemented executeFuzzyMode for executing fuzzy search using ripgrep.
- Refactored import and export parsing functions for better modularity.
- Improved dependency graph building and circular dependency detection.
- Added caching mechanism for dependency graph to optimize performance.
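One plausible buildFuzzyRegex: escape the query characters, then allow a few arbitrary characters between each one so near-matches still hit. The actual smart-search.js heuristics are not shown in the commit, so treat this as an assumption:

    function buildFuzzyRegex(query: string): RegExp {
      // Escape regex metacharacters in each query character.
      const escaped = [...query].map((ch) => ch.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'));
      return new RegExp(escaped.join('.{0,3}'), 'i'); // up to 3 stray chars between letters
    }

    // buildFuzzyRegex('getUser').test('get_user') === true (case-insensitive, one stray char)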
2025-12-11 23:28:35 +08:00
catlog22
e8f1caa219 feat: Enhance CLI components with icons and improve file editing capabilities
- Added icons to the CLI History and CLI Tools headers for better UI representation.
- Updated CLI Status component to include tool-specific classes for styling.
- Refactored CCW Install Panel to improve layout and functionality, including upgrade and uninstall buttons.
- Enhanced the edit-file tool with new features:
  - Support for creating parent directories when writing files.
  - Added dryRun mode for previewing changes without modifying files.
  - Implemented a unified diff output for changes made.
  - Enabled multi-edit support in update mode.
- Introduced a new Smart Search Tool with multiple search modes (auto, exact, fuzzy, semantic, graph) and intent classification.
- Created a Write File Tool to handle file creation and overwriting with backup options.
2025-12-11 23:06:47 +08:00
catlog22
15c5cd5f6e refactor(notifications): improve the JSON formatting helpers for readability and error handling 2025-12-11 21:33:37 +08:00
catlog22
766a8d2145 feat: Enhance global notifications with localStorage persistence and clear functionality
feat: Implement generic modal functions for better UI consistency

feat: Update navigation titles for CLI Tools view

feat: Add JSON formatting for notification details in CLI execution

feat: Introduce localStorage handling for global notification queue

feat: Expand CLI Manager view to include CCW installations with carousel

feat: Add CCW installation management with modal for user interaction

fix: Improve event delegation for explorer tree item interactions

refactor: Clean up CLI Tools section in dashboard HTML

feat: Add functionality to delete CLI execution history by ID
2025-12-11 21:18:28 +08:00
catlog22
e350e0c7bb Enhance logging and error handling in dashboard scripts
- Added detailed console logs in api.js to trace the execution flow during data loading and UI updates.
- Improved logging in main.js to monitor the initialization process and server mode data loading.
- Updated explorer.js to clarify the definition of defaultCliTool, indicating its location in another module.
2025-12-11 19:21:28 +08:00
catlog22
db4ab85d3e Refactor code structure for improved readability and maintainability 2025-12-11 19:02:07 +08:00
catlog22
cfcd277a58 refactor(session): remove outdated operation references and simplify the docs 2025-12-11 16:07:57 +08:00
catlog22
c256fd9379 refactor(session): migrate remaining bash commands to ccw session
Replace legacy bash operations in session commands with ccw session
commands for consistency and better maintainability.

Changes:
- session/list.md: Replace ls/wc with ccw session list
- session/complete.md: Replace bash marker/manifest ops with ccw session
- skills/command-guide reference docs: Mirror all changes

Commands replaced:
- `ls .workflow/active/WFS-*` → `ccw session list --location active`
- `test -f .archiving` → `ccw session read --type process --filename .archiving`
- `touch .archiving` → `ccw session write --type process --filename .archiving`
- `cat manifest.json` → `ccw session read manifest --type manifest`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-11 15:08:55 +08:00
catlog22
0a4c205105 refactor(commands/agents): replace bash/jq with ccw session commands
Replace legacy bash/jq operations with ccw session commands for better
consistency and maintainability across workflow commands and agents.

Changes:
- commands/memory/docs.md: Use ccw session update/read for session ops
- commands/workflow/review.md: Replace cat/jq with ccw session read
- commands/workflow/tdd-verify.md: Replace find/jq with ccw session read
- agents/conceptual-planning-agent.md: Use ccw session read for metadata
- agents/test-fix-agent.md: Use ccw session read for context package
- skills/command-guide/reference/*: Mirror changes to skill docs
- ccw/src/commands/session.js: Fix EPIPE error when piping to jq/head

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-11 14:48:02 +08:00
catlog22
e815c3c10e fix(session-manager): support archived parameter in getSessionBase
The archive operation failed to move sessions from the active to the
archives directory because getSessionBase() ignored its second parameter.
It now correctly returns the archive path when archived=true.
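A sketch of the corrected helper, with directory names taken from the surrounding session commands (.workflow/active, archives):

    import { join } from 'node:path';

    // Honor the second parameter instead of ignoring it.
    function getSessionBase(workflowRoot: string, archived = false): string {
      return join(workflowRoot, archived ? 'archives' : 'active');
    }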

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-11 14:03:14 +08:00
catlog22
8eb1a4e52e feat(uninstall): enhance the uninstall command with global-install protection and file-skip logic 2025-12-11 11:16:18 +08:00
catlog22
19648721fc feat: refactor the install and upgrade commands to support installing and excluding global subdirectories 2025-12-11 11:06:15 +08:00
catlog22
b81d1039c5 feat(cli): Add CLI Manager with status and history components
- Implemented CLI History Component to display execution history with filtering and search capabilities.
- Created CLI Status Component to show availability of CLI tools and allow setting a default tool.
- Enhanced notifications to handle CLI execution events.
- Integrated CLI Manager view to combine status and history panels for better user experience.
- Developed CLI Executor Tool for unified execution of external CLI tools (Gemini, Qwen, Codex) with streaming output.
- Added functionality to save and retrieve CLI execution history.
- Updated dashboard HTML to include navigation for CLI tools management.
2025-12-11 11:05:57 +08:00
catlog22
a667b7548c refactor(commands): replace bash/jq operations with ccw session commands
- session/start.md: Replace find/mkdir/echo with ccw session list/init
- session/list.md: Replace find/jq with ccw session list/stats/read
- session/resume.md: Replace jq status updates with ccw session status
- session/complete.md: Replace jq/mv with ccw session archive
- execute.md: Replace session discovery with ccw session list/status
- code-developer.md: Replace jq context-package read with ccw session read

Benefits:
- Atomic operations (no temp files)
- Auto workspace detection (works from subdirectories)
- Auto session location detection (active/archived/lite)
- Status history auto-tracking for tasks
- Consistent error handling

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-10 23:31:51 +08:00
catlog22
598bea9b21 feat(ccw): add session manager tool with auto workspace detection
- Add session_manager tool for workflow session lifecycle management
- Add ccw session CLI command with subcommands:
  - list, init, status, task, stats, delete, read, write, update, archive, mkdir
- Implement auto workspace detection (traverse up to find .workflow; see the sketch below)
- Implement auto session location detection (active, archived, lite-plan, lite-fix)
- Add dashboard notifications for tool executions via WebSocket
- Add granular event types (SESSION_CREATED, TASK_UPDATED, etc.)
- Add status_history auto-tracking for task status changes
- Update workflow session commands to document ccw session usage
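Traverse-up workspace detection reduces to walking parent directories until .workflow is found; a minimal sketch:

    import { existsSync } from 'node:fs';
    import { dirname, join } from 'node:path';

    function findWorkspace(startDir: string): string | undefined {
      let dir = startDir;
      while (true) {
        if (existsSync(join(dir, '.workflow'))) return dir;
        const parent = dirname(dir);
        if (parent === dir) return undefined; // reached the filesystem root
        dir = parent;
      }
    }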

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-10 19:26:53 +08:00
catlog22
df104d6e9b feat: enforce synchronous exploration execution and update planning steps 2025-12-10 12:03:12 +08:00
catlog22
417f3c0f8c feat: enhance workflow session management with status updates and CSS styling 2025-12-10 10:01:07 +08:00
catlog22
5114a942dc chore: bump version to 6.1.4
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-09 22:32:58 +08:00
catlog22
edef937822 feat(v6.1.4): dashboard lite-fix enhancements and workflow docs update
- fix(dashboard): enhance lite-fix session parsing and plan rendering
- docs(workflow): add task status update command example for ccw dashboard

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-09 22:32:39 +08:00
catlog22
faa86eded0 docs: add changelog entries for 6.1.2 and 6.1.3
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-09 15:53:15 +08:00
catlog22
44fa6e0a42 chore: bump version to 6.1.3
- Update README version badge and release notes
- Published to npm with simplified edit_file CLI

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-09 15:52:43 +08:00
catlog22
be9a1c76d4 refactor(tool): simplify edit_file to parameter-based input only
- Remove JSON input support, use CLI parameters only
- Remove line mode, keep text replacement only
- Add sed as line operation alternative in tool-strategy.md
- Update documentation to English

Usage: ccw tool exec edit_file --path file.txt --old "old" --new "new"

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-09 15:50:35 +08:00
catlog22
fcc811d6a1 docs(readme): update to v6.1.2 and simplify installation
- Update version badge and release notes to v6.1.2
- Remove one-click script install section (npm only)
- Update changelog highlights for latest release

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-09 15:41:39 +08:00
catlog22
906404f075 feat(server): add API endpoint for version check 2025-12-09 14:39:07 +08:00
catlog22
1267c8d0f4 feat(dashboard): add npm version update notification
- Add /api/version-check endpoint to check npm registry for updates (see the sketch below)
- Create version-check.js component with update banner UI
- Add CSS styles for version update banner
- Fix hook manager button event handling (use e.currentTarget)
- Bump version to 6.1.2
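A sketch of the registry check, assuming the public npmjs endpoint and a simple latest-version lookup:

    async function fetchLatestVersion(pkg: string): Promise<string> {
      const res = await fetch(`https://registry.npmjs.org/${pkg}/latest`);
      if (!res.ok) throw new Error(`registry returned ${res.status}`);
      const data = (await res.json()) as { version: string };
      return data.version; // compare against the installed version to decide on the banner
    }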

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-09 14:38:55 +08:00
catlog22
eb1093128e docs(skill): fix project name from "Claude DMS3" to "Claude Code Workflow"
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-09 14:38:55 +08:00
catlog22
4ddeb6551e Merge pull request #36 from gurdasnijor/smithery/add-badge
Add "Run in Smithery" badge
2025-12-09 10:30:05 +08:00
Gurdas Nijor
7252c2ff3d Add Smithery badge 2025-12-08 16:10:45 -08:00
catlog22
8dee45c0a3 chore: bump version to 6.1.1
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-09 00:04:17 +08:00
catlog22
99ead8b165 fix(tool): add smart JSON parser with Windows path handling
- Auto-fix Windows backslash paths in JSON input (see the sketch below)
- Provide helpful hints when path escaping errors occur
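One plausible auto-fix: parse strictly first, and on failure retry with invalid lone backslashes doubled (C:\Users\... becomes C:\\Users\\...). A heuristic sketch, not the repo's exact rules:

    function smartJsonParse(input: string): unknown {
      try {
        return JSON.parse(input);
      } catch {
        // Double any backslash not starting a valid JSON escape sequence.
        const fixed = input.replace(/\\(?!["\\/bfnrtu])/g, '\\\\');
        return JSON.parse(fixed); // still throws (with the original hint) if that was not the problem
      }
    }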

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-09 00:03:23 +08:00
catlog22
0c7f13d9a4 docs: update README to v6.1.0
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-09 00:01:13 +08:00
catlog22
ed32b95de1 chore: bump version to 6.1.0
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-08 23:52:14 +08:00
catlog22
beacc2e26b Fix icon duplications and add navigation color scheme
- Add color variables for navigation items (info, indigo, orange)
- Apply distinct colors to navigation items:
  * Active: green
  * Archived: blue
  * Lite Plan: indigo
  * Lite Fix: orange
- Fix icon conflicts in review session:
  * Change Dimensions icon from 📋 to 🎯
  * Change Location icon from 📄 to 📍
  * Change Root Cause icon from 🔍 to 🎯
- Add active state styles for each navigation item type
- Support both light and dark themes
2025-12-08 23:50:50 +08:00
catlog22
389621c954 fix(dashboard): replace emoji icons in tabs-context and tabs-other components
- tabs-context.js: package, clipboard-list, building-2, code-2, file-code, link, flask-conical, eye icons
- tabs-other.js: file-text, ruler, eye, blocks, package, git-branch, plug icons for exploration sections
- Update all empty state icons to use Lucide
- Update asset category icons (documentation, source code, tests)
- Update exploration titles with Lucide icons
2025-12-08 23:34:14 +08:00
catlog22
2ba7756d13 fix(dashboard): replace task status emoji icons with Lucide icons in session-detail
- Replace ✓/○/⟳ with check-circle/circle/loader-2 icons
- Update formatStatusLabel() to return HTML with Lucide icons
- Update task stats bar and bulk action buttons
- Simplify toast messages to avoid HTML in text content
- Add lucide.createIcons() call in updateTaskStatsBar()
2025-12-08 23:31:01 +08:00
catlog22
02f77c0a51 fix(dashboard): add missing Lucide icon initialization in project-overview and session-detail views 2025-12-08 23:24:17 +08:00
catlog22
5aa8d37cd0 fix(dashboard): ensure Lucide icons are initialized after all dynamic content renders
- Add lucide.createIcons() calls in renderSessions, renderProjectOverview, renderMcpManager, renderHookManager, renderLiteTasks, showSessionDetailPage
- Fixes issue where icons would not appear after page render
2025-12-08 23:23:40 +08:00
catlog22
a7b8ffc716 feat(explorer): async task execution, CLI selector fix, WebSocket frame handling
- Task queue now executes tasks in parallel using Promise.all
- addFolderToQueue uses selected CLI tool instead of hardcoded 'gemini'
- Added CSS styles for queue-cli-selector toolbar
- Force refresh notification list after all tasks complete
- Fixed WebSocket frame parsing to handle Ping/Pong control frames
- Respond to Ping with Pong, ignore other control frames (see the sketch below)
- Eliminates garbled characters in WebSocket logs
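At the frame level (RFC 6455), Ping is opcode 0x9 and Pong is 0xA; answering Ping with a payload-echoing Pong and dropping other control frames keeps control bytes out of the text-message path. A sketch, with an illustrative send callback:

    function handleFrame(opcode: number, payload: Buffer, sendPong: (p: Buffer) => void): Buffer | null {
      switch (opcode) {
        case 0x9: // Ping: reply with a Pong carrying the same payload
          sendPong(payload);
          return null;
        case 0xa: // Pong: acknowledge silently
        case 0x8: // Close: handled elsewhere in the real server
          return null;
        default:
          return payload; // data frame: pass through to the message handler
      }
    }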
2025-12-08 23:21:47 +08:00
catlog22
b0bc53646e feat(dashboard): complete icon unification across all views
- Update home.js: inbox, calendar, list-checks icons
- Update project-overview.js: code-2, blocks, component, git-branch, sparkles, bug, wrench, book-open icons
- Update session-detail.js: list-checks, package, file-text, ruler, scale, search icons for tabs
- Update lite-tasks.js: zap, file-edit, wrench, calendar, list-checks, ruler, package, file-text icons
- Update mcp-manager.js: plug, building-2, user, map-pin, check-circle, x-circle, circle-dashed, lock icons
- Update hook-manager.js: webhook, pencil, trash-2, clock, check-circle, bell, octagon-x icons
- Add getHookEventIconLucide() helper function
- Initialize Lucide icons after dynamic content rendering

All emoji icons replaced with consistent Lucide SVG icons
2025-12-08 23:14:48 +08:00
catlog22
5f31c9ad7e feat(dashboard): unify icons with Lucide Icons library
- Introduce Lucide Icons via CDN for consistent SVG icons
- Replace emoji icons with Lucide SVG icons in sidebar navigation
- Fix Sessions/Explorer icon confusion (📁/📂 → history/folder-tree)
- Update top bar icons (logo, theme toggle, search, refresh)
- Update stats section icons with colored Lucide icons
- Add icon animations support (animate-spin for loading states)
- Update Explorer view with Lucide folder/file icons
- Support dark/light theme icon adaptation

Icon mapping:
- Explorer: folder-tree (was 📂)
- Sessions: history (was 📁)
- Overview: bar-chart-3
- Active: play-circle
- Archived: archive
- Lite Plan: file-edit
- Lite Fix: wrench
- MCP Servers: plug
- Hooks: webhook
2025-12-08 22:58:42 +08:00
catlog22
818d9f3f5d Add enhanced styles for the review tab, including layout, buttons, and responsive design 2025-12-08 22:11:14 +08:00
catlog22
1c3c070db4 fix(tools): fix shell-escaping issues with multi-line prompts for CLI tools
Files fixed:
- update-module-claude.js
- generate-module-docs.js

Key fixes:
- Pass prompts via a temp file + stdin pipe to avoid shell-escaping issues
- Add Windows PowerShell compatibility
- Add execution logging for easier debugging
- Add temp-file cleanup logic

Technical details:
- gemini/qwen: use `cat file | tool`
- codex: use command substitution
- Windows: use `Get-Content -Raw | tool`
2025-12-08 21:56:41 +08:00
catlog22
91e4792aa9 feat(ccw): add the `ccw tool exec` tool system
New tools:
- edit_file: AI-assisted file editing (update/line modes)
- get_modules_by_depth: project structure analysis
- update_module_claude: CLAUDE.md documentation generation
- generate_module_docs: module documentation generation
- detect_changed_modules: Git change detection
- classify_folders: folder classification
- discover_design_files: design file discovery
- convert_tokens_to_css: design tokens to CSS conversion
- ui_generate_preview: UI preview generation
- ui_instantiate_prototypes: prototype instantiation

Usage:
  ccw tool list              # list all tools
  ccw tool exec <name> '{}'  # execute a tool

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-08 21:10:31 +08:00
catlog22
813bfa8f97 fix(claude): fix `ccw tool exec` command format - switch positional arguments to JSON
Fixes:
- Change the positional-argument format to JSON: ccw tool exec tool '{"param":"value"}'
- Fix JSON quote escaping inside double-quoted strings
- Update usage examples in the deprecated scripts

Affected files:
- commands/memory/update-full.md, docs-full-cli.md, docs-related-cli.md, update-related.md
- commands/workflow/ui-design/generate.md, import-from-code.md
- scripts/*.sh (9 deprecated scripts)
- skills/command-guide/reference/* (auto-synced via analyze_commands.py)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-08 21:09:21 +08:00
catlog22
8b29f6bb7c chore: bump version to 6.0.5 2025-12-08 10:29:10 +08:00
catlog22
27273405f7 feat: CCW Dashboard enhancements - stop command, browser fix, and multi-source MCP config
- Add a `ccw stop` command with graceful shutdown and forced termination (--force)
- Fix the browser failing to open when `ccw view` detects a running server
- MCP config is now read from multiple sources:
  - ~/.claude.json (project level)
  - ~/.claude/settings.json and settings.local.json (global)
  - each workspace's .claude/settings.json (workspace level)
- Add a global MCP servers display area
- Fix path-selection modal styling

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-08 10:28:07 +08:00
catlog22
f4299457fb fix(package): reorder dependencies for readability 2025-12-08 10:00:42 +08:00
catlog22
06983a35ad fix(dashboard): add path-selection modal styles
- Fix missing path-modal styles
- Add complete modal style definitions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-08 09:59:15 +08:00
catlog22
a80953527b feat(ccw): detect running servers and support workspace switching
- `ccw view` auto-detects whether the server is already running
- If running, switch the workspace and open the browser without restarting the server
- Add /api/switch-path and /api/health endpoints
- Dashboard supports loading a specific workspace via a URL path parameter

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-08 09:56:35 +08:00
catlog22
0f469e225b fix: add ccw/package.json to the published file list 2025-12-07 22:59:43 +08:00
catlog22
5dca69fbec chore: bump version to 6.0.1 2025-12-07 21:42:50 +08:00
catlog22
ac626e5895 feat: add a Fix progress tracking card to Review Session, remove standalone dashboard templates
- Add a Fix Progress tracking card (carousel style) showing fix progress
- Add a /api/file endpoint to read fix-plan.json
- Remove standalone dashboard generation from review-fix/module-cycle/session-cycle
- Delete the deprecated workflow-dashboard.html and review-cycle-dashboard.html templates
- Standardize on the `ccw view` command for viewing progress

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-07 21:41:43 +08:00
catlog22
cb78758839 docs: update README with npm installation and ccw CLI notes
- Bump version to v6.0.0
- Add npm badge and installation instructions
- Add a CCW CLI Tool section describing all commands
- Update the description to "JSON-driven multi-agent framework"
- Fix package.json circular dependency

Install: npm install -g claude-code-workflow

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-07 20:56:58 +08:00
catlog22
844a2412b2 chore: configure npm publishing for @dyw1234/claude-code-workflow@6.0.0
- Add package.json to the project root
- Configure the scoped package @dyw1234/claude-code-workflow
- Add .npmignore to exclude unnecessary files
- Publish v6.0.0 to the npm registry

Install: npm install -g @dyw1234/claude-code-workflow

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-07 20:42:23 +08:00
catlog22
650d877430 feat: Dashboard enhancements - MCP Manager, Review Session, and UI improvements
- Add MCP Manager component with server status management
- Enhance the Review Session view with conflict/review tabs
- Add _conflict_tab.js and _review_tab.js components
- Improve carousel, tabs-other, and related components
- Extensive CSS style updates and refinements
- Add new feature support to home.js

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-07 20:07:29 +08:00
catlog22
f459061ad5 refactor: simplify the ccw install flow, remove remote download support
- Delete version-fetcher.js and remove the GitHub API dependency
- install.js: remove remote version selection, keep local install only
- upgrade.js: rewrite as a local upgrade that compares the package version with the installed version
- cli.js: remove -v/-t/-b and other version-related options
- Add logic to copy CLAUDE.md into the .claude directory

Version management is unified under npm: npm install -g ccw@<version>

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-07 20:03:10 +08:00
catlog22
a6f9701679 Add exploration field rendering helpers for dynamic content display
- Implemented `renderExpField` to handle various data types for exploration fields.
- Created `renderExpArray` to format arrays, including support for objects with specific properties.
- Developed `renderExpObject` for recursive rendering of object values, filtering out private keys.
- Introduced HTML escaping for safe rendering of user-generated content.
2025-12-07 18:07:28 +08:00
catlog22
26a325efff feat: add recent-path management, including a path-deletion API and frontend interactions 2025-12-07 17:35:10 +08:00
catlog22
0a96ee16a8 Add tool strategy documentation with triggering mechanisms and text processing references
- Introduced auto and manual triggering mechanisms for Exa.
- Added quick reference guides for sed and awk text processing.
- Established a fallback strategy for handling edit failures.
2025-12-07 17:09:07 +08:00
catlog22
43c962b48b feat: Add Notifications Component with WebSocket and Auto Refresh
- Implemented a Notifications component for real-time updates using WebSocket.
- Added silent refresh functionality to update data without notification bubbles.
- Introduced auto-refresh mechanism to periodically check for changes in workflow data.
- Enhanced data handling with session and task updates, ensuring UI reflects the latest state.

feat: Create Hook Manager View for Managing Hooks

- Developed a Hook Manager view to manage project and global hooks.
- Added functionality to create, edit, and delete hooks with a user-friendly interface.
- Implemented quick install templates for common hooks to streamline user experience.
- Included environment variables reference for hooks to assist users in configuration.

feat: Implement MCP Manager View for Server Management

- Created an MCP Manager view for managing MCP servers within projects.
- Enabled adding and removing servers from projects with a clear UI.
- Displayed available servers from other projects for easy access and management.
- Provided an overview of all projects and their associated MCP servers.

feat: Add Version Fetcher Utility for GitHub Releases

- Implemented a version fetcher utility to retrieve release information from GitHub.
- Added functions to fetch the latest release, recent releases, and latest commit details.
- Included functionality to download and extract repository zip files.
- Ensured cleanup of temporary directories after downloads to maintain system hygiene.
2025-12-07 15:48:39 +08:00
catlog22
724545ebd6 Remove backup HTML template for workflow dashboard 2025-12-07 12:59:59 +08:00
catlog22
a9a2004d4a feat(dashboard): add context sections for assets, dependencies, test context, and conflict detection 2025-12-06 21:04:50 +08:00
catlog22
5b14c8a832 feat: add project overview section and enhance task item styles
- Introduced a new project overview section in the dashboard, displaying project details, technology stack, architecture, key components, and development history.
- Updated the server logic to include project overview data.
- Enhanced task item styles with status-based background colors for better visual distinction.
- Improved markdown modal functionality for viewing context and implementation plan with normalized line endings.
- Refactored task rendering logic to simplify task item display and improve performance.
2025-12-06 20:50:23 +08:00
catlog22
e2c5a514cb fix(data-aggregator): remove extra closing brace causing syntax error
- Fixed loadDimensionData function that had duplicate closing brace
- This was preventing review session dimension data from being parsed

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-05 16:59:45 +08:00
catlog22
296761a34e style(dashboard): enhance step numbers and mod-point card styling
- Change step numbers from circles to rounded rectangles
- Add shadow to step number badges for depth
- Enhance mod-point-card with full border and stronger left accent
- Add hover effect with elevated shadow on mod-point-card

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-05 16:52:36 +08:00
catlog22
1d3436d51b feat(dashboard): simplify lite task list and add dedicated drawer
- Remove META/CONTEXT/FLOW_CONTROL collapsible sections from task list
- Add compact task item with action/scope/mods/steps badges
- Create dedicated renderLiteTaskDrawerContent for plan.json parsing
- Add Overview tab with description, scope, acceptance, dependencies
- Add Implementation tab with steps and modification points
- Add proper file list extraction from modification_points
- Add CSS styles for lite task badges and drawer components

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-05 15:51:37 +08:00
catlog22
60bb11c315 feat(dashboard): simplify lite task UI and add exploration context support
- Remove status indicators from lite task cards (progress bars, percentages)
- Remove status icons and badges from task detail items
- Remove stats bar showing completed/in-progress/pending counts
- Add Plan tab in drawer for plan.json data display
- Add exploration-*.json parsing for context tab
- Add collapsible sections for architecture, dependencies, patterns
- Fix currentPath selector bug causing TypeError

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-05 15:47:49 +08:00
catlog22
72fe6195af fix: update Codex multiplier to reflect new time allocation guidelines 2025-12-05 11:14:42 +08:00
catlog22
04fb3b7ee3 feat(dashboard): add plan data loading and lite task detail styling
- Add plan.json data loading to getSessionDetailData function for lite tasks
- Implement plan tab content styling with summary and approach sections
- Add plan metadata grid layout for displaying task planning information
- Create lite task detail page styles with task stats bar and status indicators
- Add context tab content styling with improved section hierarchy
- Implement path tags and JSON content display styles for plan details
- Add collapsible sections for organizing plan information
- Create comprehensive styling for context fields, modification points, and implementation steps
- Add array and nested object rendering styles for JSON data visualization
- Implement button styles for JSON view toggle functionality
2025-12-04 22:55:56 +08:00
catlog22
942fca7ad8 refactor(dashboard): optimize template structure and enhance data aggregation
- Reorder CSS and JS file loading in dashboard-generator.js for consistency
- Simplify dashboard.css by removing redundant styles and consolidating to Tailwind-based approach
- Add backup files for dashboard.html, dashboard.css, and review-cycle-dashboard.html
- Create new Tailwind-based dashboard template (dashboard_tailwind.html) and test variant
- Add tailwind.config.js for Tailwind CSS configuration
- Enhance data-aggregator.js to load full task data for archived sessions (previously only counted)
- Add meta, context, and flow_control fields to task objects for richer data representation
- Implement review data loading for archived sessions to match active session behavior
- Improve task sorting consistency across active and archived sessions
- Reduce CSS file size by ~70% through Tailwind utility consolidation while maintaining visual parity
2025-12-04 21:41:30 +08:00
catlog22
39df995e37 Refactor code structure for improved readability and maintainability 2025-12-04 17:22:25 +08:00
catlog22
efaa8b6620 fix: Refine action-planning-agent documentation for clarity and structure 2025-12-04 09:47:53 +08:00
catlog22
35bd0aa8f6 feat: Add workflow dashboard template and utility functions
- Implemented a new HTML template for the workflow dashboard, featuring a responsive design with dark/light theme support, session statistics, and task management UI.
- Created a browser launcher utility to open HTML files in the default browser across platforms.
- Developed file utility functions for safe reading and writing of JSON and text files.
- Added path resolver utilities to validate and resolve file paths, ensuring security against path traversal attacks.
- Introduced UI utilities for displaying styled messages and banners in the console.
2025-12-04 09:40:12 +08:00
catlog22
0f9adc59f9 docs: Remove deprecated CLI commands, clarify semantic invocation
Addresses issue #33 - /cli:mode:bug-diagnosis command not found

Changes:
- Remove deprecated /cli:* commands (/cli:analyze, /cli:chat, /cli:execute,
  /cli:codex-execute, /cli:discuss-plan, /cli:mode:*) from documentation
- Only /cli:cli-init remains as the sole CLI command
- Update all references to use /workflow:lite-plan, /workflow:lite-fix
- Clarify that CLI tools are now invoked through semantic invocation
  (natural language) - Claude auto-selects Gemini/Qwen/Codex with templates
- Update COMMAND_SPEC.md, COMMAND_REFERENCE.md, GETTING_STARTED*.md,
  FAQ.md, WORKFLOW_DECISION_GUIDE*.md, workflow-architecture.md

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-03 23:05:40 +08:00
catlog22
c43a72ef46 release: v5.9.8 - Brainstorm workflow improvements and documentation updates
## Changes
- Simplify analysis output strategy to optional modular structure
- Update synthesis/artifacts documentation to use AskUserQuestion tool
- Add modular output strategy for brainstorm analysis
- Simplify clarification deduplication in lite-plan
- Add "Fix, Don't Hide" section to CLAUDE.md guidelines
- Simplify project.json schema by removing unused fields
- Update session ID format in lite-fix/lite-plan workflows
- Add development index to project JSON schema

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-03 22:45:04 +08:00
catlog22
7a61119c55 fix: Enhance session folder creation with success/failure feedback in lite-fix and lite-plan workflows 2025-12-02 14:47:08 +08:00
catlog22
d620eac621 fix: Remove file path reference from AGENTS.md 2025-12-01 22:18:44 +08:00
catlog22
1dbffbee2d fix: Enforce multi-round clarification in lite-plan and lite-fix
Add explicit instructions to execute ALL clarification rounds when more than 4
questions exist. The AskUserQuestion tool allows at most 4 questions per call,
so multi-round execution is mandatory to exhaust all clarification needs.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 23:44:00 +08:00
catlog22
c67817f46e fix: Clarify multi-round clarification and enforce session timestamp format
- Phase 2 Clarification: max 4 questions per round, multiple rounds allowed
- Session Setup: MANDATORY timestamp in sessionId format (slug-YYYY-MM-DD-HH-mm-ss)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 20:25:48 +08:00
catlog22
d654419423 feat: Add recommended selection to clarification questions
- Add recommended field to explore-json-schema.json clarification_needs
- Update lite-plan/lite-fix/context-gather agent prompts
- Display ★ marker and (Recommended) label in AskUserQuestion options

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 18:25:32 +08:00
catlog22
1e2240dbe9 docs: Add minimize changes principle to CLAUDE.md
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 18:21:24 +08:00
catlog22
b3778ef48c fix: Use UTC+8 timezone for lite-fix session timestamps
Add getUtc8ISOString() helper function to generate China Standard Time
timestamps instead of UTC. Applied to:
- Session ID generation (shortTimestamp)
- Diagnosis manifest timestamp
- Direct planning metadata timestamp

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 17:12:27 +08:00
catlog22
a16cf5c8d3 fix: Use UTC+8 timezone for lite-plan session timestamps
Add getUtc8ISOString() helper function to generate China Standard Time
timestamps instead of UTC. Applied to:
- Session ID generation (shortTimestamp)
- Exploration manifest timestamp
- Direct planning metadata timestamp

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 17:10:58 +08:00
catlog22
d82bf5a823 fix: Preserve user intent in test workflow session creation
- Add Step 1.0 to test-gen.md to load source session task_description
- Add Step 1.0 to test-fix-gen.md (Session Mode) to load original user intent
- Include originalTaskDescription in session creation to enable semantic CLI selection
- Fixes data flow gap identified by Gemini review: user's CLI tool preferences
  (e.g., "use Codex for fixes") now propagate through test workflow sessions

This ensures semantic CLI tool selection works correctly in derivative test sessions
by preserving the original user's task description from the source session.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 17:06:13 +08:00
catlog22
132eec900c refactor: Replace CLI execution flags with semantic-driven tool selection
- Remove --cli-execute flag from plan.md, tdd-plan.md, task-generate-agent.md, task-generate-tdd.md
- Remove --use-codex flag from test-gen.md, test-fix-gen.md, test-task-generate.md
- Remove meta.use_codex from task JSON schema in action-planning-agent.md and cli-planning-agent.md
- Add "Semantic CLI Tool Selection" section to action-planning-agent.md
- Document explicit source: metadata.task_description from context-package.json
- Update test-fix-agent.md execution mode documentation
- Update action-plan-verify.md to remove use_codex validation
- Sync SKILL reference copies via analyze_commands.py

CLI tool usage now determined semantically from user's task description
(e.g., "use Codex for implementation") instead of explicit flags.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 15:59:01 +08:00
catlog22
09114f59c8 fix: Complete lite-fix command with diagnosis workflow and schemas
- Add lite-fix.md command based on lite-plan.md structure
- Create diagnosis-json-schema.json for bug diagnosis results
- Create fix-plan-json-schema.json for fix planning
- Update intelligent-tools-strategy.md: simplify model selection, add 5min minimum timeout
- Fix missing _metadata.diagnosis_index in agent prompt
- Clarify --hotfix mode skip behavior

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 11:18:48 +08:00
catlog22
72099ae951 fix: Update agent calls in documentation to reflect changes from gemini-wrapper to gemini 2025-11-29 10:40:36 +08:00
catlog22
d66064024c refactor: Optimize context-gather workflow with Synthesis role clarification
- Rename Track 0 from 'Exploration Aggregation' to 'Exploration Synthesis'
- Add explicit synthesis logic for critical_files prioritization and deduplication
- Enhance explore-json-schema to support relevance scores in relevant_files
- Update explore agent prompts across context-gather and lite-plan commands
- Add synthesizeCriticalFiles and synthesizeConflictIndicators helper functions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 10:38:04 +08:00
catlog22
8c93848303 refactor: Enhance cli-explore-agent prompt in context-gather
Align with lite-plan.md prompt structure:
- Add detailed Task Objective with angle-specific analysis
- Add MANDATORY FIRST STEPS with explicit ordered commands
- Add 3-step Exploration Strategy (Structural Scan, Semantic Analysis, Write Output)
- Add detailed Expected Output with Required Fields
- Add Success Criteria checklist for validation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-28 23:10:06 +08:00
catlog22
57a86ab36f feat: Add parallel explore agents to context-gather workflow
- context-gather.md: Add Step 2 with complexity assessment (Low/Medium/High)
  and parallel cli-explore-agent execution (1-4 agents based on complexity)
- context-search-agent.md: Add Track 0 for exploration aggregation with
  exploration_results schema including all_patterns and all_integration_points
- conflict-resolution.md: Consume exploration_results for enhanced conflict
  detection with pre-identified conflict indicators
- task-generate-agent.md: Use exploration critical_files for focus_paths
  and all_patterns/all_integration_points for implementation guidance

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-28 23:08:20 +08:00
582 changed files with 195282 additions and 16955 deletions

.claude/CLAUDE.md Normal file
View File

@@ -0,0 +1,25 @@
# Claude Instructions
- **CLI Tools Usage**: @~/.claude/workflows/cli-tools-usage.md
- **Coding Philosophy**: @~/.claude/workflows/coding-philosophy.md
- **Context Requirements**: @~/.claude/workflows/context-tools-ace.md
- **File Modification**: @~/.claude/workflows/file-modification.md
- **CLI Endpoints Config**: @.claude/cli-tools.json
## CLI Endpoints
**Strictly follow the @.claude/cli-tools.json configuration**
Available CLI endpoints are dynamically defined by the config file:
- Built-in tools and their enable/disable status
- Custom API endpoints registered via the Dashboard
- Managed through the CCW Dashboard Status page
## Agent Execution
- **Always use `run_in_background: false`** for Task tool agent calls: `Task({ subagent_type: "xxx", prompt: "...", run_in_background: false })` to ensure synchronous execution and immediate result visibility
- **TaskOutput usage**: Only use `TaskOutput({ task_id: "xxx", block: false })` + sleep loop to poll completion status. NEVER read intermediate output during agent/CLI execution - wait for final result only
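For illustration, a compliant polling loop might look like the sketch below; the `status` field, the 5-second interval, and the string literals are assumptions, and `TaskOutput`/`Bash` are the tool-call notation used throughout this document.
```javascript
// Sketch: poll a background task for completion without reading intermediate output
// (field names and interval are assumptions; TaskOutput/Bash are tool-call notation)
let result = TaskOutput({ task_id: "xxx", block: false })
while (result.status !== "completed") {
  Bash("sleep 5")                                    // wait between polls
  result = TaskOutput({ task_id: "xxx", block: false })
}
// Consume only the final result here, after completion
```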
## Code Diagnostics
- **Prefer `mcp__ide__getDiagnostics`** for code error checking over shell-based TypeScript compilation

View File

@@ -0,0 +1,4 @@
{
"interval": "manual",
"tool": "gemini"
}

View File

@@ -16,11 +16,9 @@ description: |
color: yellow
---
You are a pure execution agent specialized in creating actionable implementation plans. You receive requirements and control flags from the command layer and execute planning tasks without complex decision-making logic.
## Overview
**Agent Role**: Transform user requirements and brainstorming artifacts into structured, executable implementation plans with quantified deliverables and measurable acceptance criteria.
**Agent Role**: Pure execution agent that transforms user requirements and brainstorming artifacts into structured, executable implementation plans with quantified deliverables and measurable acceptance criteria. Receives requirements and control flags from the command layer and executes planning tasks without complex decision-making logic.
**Core Capabilities**:
- Load and synthesize context from multiple sources (session metadata, context packages, brainstorming artifacts)
@@ -33,7 +31,7 @@ You are a pure execution agent specialized in creating actionable implementation
---
## 1. Execution Process
## 1. Input & Execution
### 1.1 Input Processing
@@ -43,7 +41,6 @@ You are a pure execution agent specialized in creating actionable implementation
- `context_package_path`: Context package with brainstorming artifacts catalog
- **Metadata**: Simple values
- `session_id`: Workflow session identifier (WFS-[topic])
- `execution_mode`: agent-mode | cli-execute-mode
- `mcp_capabilities`: Available MCP tools (exa_code, exa_web, code_index)
**Legacy Support** (backward compatibility):
@@ -51,7 +48,7 @@ You are a pure execution agent specialized in creating actionable implementation
- **Control flags**: DEEP_ANALYSIS_REQUIRED, etc.
- **Task requirements**: Direct task description
### 1.2 Two-Phase Execution Flow
### 1.2 Execution Flow
#### Phase 1: Context Loading & Assembly
@@ -89,6 +86,27 @@ You are a pure execution agent specialized in creating actionable implementation
6. Assess task complexity (simple/medium/complex)
```
**MCP Integration** (when `mcp_capabilities` available):
```javascript
// Exa Code Context (mcp_capabilities.exa_code = true)
mcp__exa__get_code_context_exa(
query="TypeScript OAuth2 JWT authentication patterns",
tokensNum="dynamic"
)
// Integration in flow_control.pre_analysis
{
"step": "local_codebase_exploration",
"action": "Explore codebase structure",
"commands": [
"bash(rg '^(function|class|interface).*[task_keyword]' --type ts -n --max-count 15)",
"bash(find . -name '*[task_keyword]*' -type f | grep -v node_modules | head -10)"
],
"output_to": "codebase_structure"
}
```
**Context Package Structure** (fields defined by context-search-agent):
**Always Present**:
@@ -170,30 +188,6 @@ if (contextPackage.brainstorm_artifacts?.role_analyses?.length > 0) {
5. Update session state for execution readiness
```
### 1.3 MCP Integration Guidelines
**Exa Code Context** (`mcp_capabilities.exa_code = true`):
```javascript
// Get best practices and examples
mcp__exa__get_code_context_exa(
query="TypeScript OAuth2 JWT authentication patterns",
tokensNum="dynamic"
)
```
**Integration in flow_control.pre_analysis**:
```json
{
"step": "local_codebase_exploration",
"action": "Explore codebase structure",
"commands": [
"bash(rg '^(function|class|interface).*[task_keyword]' --type ts -n --max-count 15)",
"bash(find . -name '*[task_keyword]*' -type f | grep -v node_modules | head -10)"
],
"output_to": "codebase_structure"
}
```
---
## 2. Output Specifications
@@ -209,15 +203,69 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
"id": "IMPL-N",
"title": "Descriptive task name",
"status": "pending|active|completed|blocked",
"context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json"
"context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json",
"cli_execution_id": "WFS-{session}-IMPL-N",
"cli_execution": {
"strategy": "new|resume|fork|merge_fork",
"resume_from": "parent-cli-id",
"merge_from": ["id1", "id2"]
}
}
```
**Field Descriptions**:
- `id`: Task identifier (format: `IMPL-N`)
- `id`: Task identifier
- Single module format: `IMPL-N` (e.g., IMPL-001, IMPL-002)
- Multi-module format: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1, IMPL-C1)
- Prefix: A, B, C... (assigned by module detection order)
- Sequence: 1, 2, 3... (per-module increment)
- `title`: Descriptive task name summarizing the work
- `status`: Task state - `pending` (not started), `active` (in progress), `completed` (done), `blocked` (waiting on dependencies)
- `context_package_path`: Path to smart context package containing project structure, dependencies, and brainstorming artifacts catalog
- `cli_execution_id`: Unique CLI conversation ID (format: `{session_id}-{task_id}`)
- `cli_execution`: CLI execution strategy based on task dependencies
- `strategy`: Execution pattern (`new`, `resume`, `fork`, `merge_fork`)
- `resume_from`: Parent task's cli_execution_id (for resume/fork)
- `merge_from`: Array of parent cli_execution_ids (for merge_fork)
**CLI Execution Strategy Rules** (MANDATORY - apply to all tasks):
| Dependency Pattern | Strategy | CLI Command Pattern |
|--------------------|----------|---------------------|
| No `depends_on` | `new` | `--id {cli_execution_id}` |
| 1 parent, parent has 1 child | `resume` | `--resume {resume_from}` |
| 1 parent, parent has N children | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
| N parents | `merge_fork` | `--resume {merge_from.join(',')} --id {cli_execution_id}` |
**Strategy Selection Algorithm**:
```javascript
function computeCliStrategy(task, allTasks) {
const deps = task.context?.depends_on || []
const childCount = allTasks.filter(t =>
t.context?.depends_on?.includes(task.id)
).length
if (deps.length === 0) {
return { strategy: "new" }
} else if (deps.length === 1) {
const parentTask = allTasks.find(t => t.id === deps[0])
const parentChildCount = allTasks.filter(t =>
t.context?.depends_on?.includes(deps[0])
).length
if (parentChildCount === 1) {
return { strategy: "resume", resume_from: parentTask.cli_execution_id }
} else {
return { strategy: "fork", resume_from: parentTask.cli_execution_id }
}
} else {
const mergeFrom = deps.map(depId =>
allTasks.find(t => t.id === depId).cli_execution_id
)
return { strategy: "merge_fork", merge_from: mergeFrom }
}
}
```
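As a worked example, a diamond-shaped dependency graph yields one of each fan-out strategy under this algorithm (the session ID `WFS-auth` is hypothetical):
```javascript
// Hypothetical diamond dependency: IMPL-2/IMPL-3 fan out from IMPL-1, IMPL-4 merges both
const tasks = [
  { id: "IMPL-1", cli_execution_id: "WFS-auth-IMPL-1", context: { depends_on: [] } },
  { id: "IMPL-2", cli_execution_id: "WFS-auth-IMPL-2", context: { depends_on: ["IMPL-1"] } },
  { id: "IMPL-3", cli_execution_id: "WFS-auth-IMPL-3", context: { depends_on: ["IMPL-1"] } },
  { id: "IMPL-4", cli_execution_id: "WFS-auth-IMPL-4", context: { depends_on: ["IMPL-2", "IMPL-3"] } }
]
// computeCliStrategy(tasks[0], tasks) → { strategy: "new" }
// computeCliStrategy(tasks[1], tasks) → { strategy: "fork", resume_from: "WFS-auth-IMPL-1" }   (parent has 2 children)
// computeCliStrategy(tasks[3], tasks) → { strategy: "merge_fork", merge_from: ["WFS-auth-IMPL-2", "WFS-auth-IMPL-3"] }
```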
#### Meta Object
@@ -226,7 +274,14 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
"meta": {
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
"agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor",
"execution_group": "parallel-abc123|null"
"execution_group": "parallel-abc123|null",
"module": "frontend|backend|shared|null",
"execution_config": {
"method": "agent|hybrid|cli",
"cli_tool": "codex|gemini|qwen|auto",
"enable_resume": true,
"previous_cli_id": "string|null"
}
}
}
```
@@ -235,6 +290,12 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
- `type`: Task category - `feature` (new functionality), `bugfix` (fix defects), `refactor` (restructure code), `test-gen` (generate tests), `test-fix` (fix failing tests), `docs` (documentation)
- `agent`: Assigned agent for execution
- `execution_group`: Parallelization group ID (tasks with same ID can run concurrently) or `null` for sequential tasks
- `module`: Module identifier for multi-module projects (e.g., `frontend`, `backend`, `shared`) or `null` for single-module
- `execution_config`: CLI execution settings (from userConfig in task-generate-agent)
- `method`: Execution method - `agent` (direct), `hybrid` (agent + CLI), `cli` (CLI only)
- `cli_tool`: Preferred CLI tool - `codex`, `gemini`, `qwen`, or `auto`
- `enable_resume`: Whether to use `--resume` for CLI continuity (default: true)
- `previous_cli_id`: Previous task's CLI execution ID for resume (populated at runtime)
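Putting these fields together, a populated meta object for a hybrid-execution backend task might look like the following sketch (all values are illustrative):
```javascript
// Illustrative meta object for a multi-module backend task (all values hypothetical)
const meta = {
  type: "feature",
  agent: "@code-developer",
  execution_group: null,              // sequential task
  module: "backend",
  execution_config: {
    method: "hybrid",                 // agent orchestrates, CLI executes heavy steps
    cli_tool: "codex",
    enable_resume: true,
    previous_cli_id: null             // populated at runtime from the parent task
  }
}
```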
**Test Task Extensions** (for type="test-gen" or type="test-fix"):
@@ -244,8 +305,7 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
"type": "test-gen|test-fix",
"agent": "@code-developer|@test-fix-agent",
"test_framework": "jest|vitest|pytest|junit|mocha",
"coverage_target": "80%",
"use_codex": true|false
"coverage_target": "80%"
}
}
```
@@ -253,7 +313,8 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
**Test-Specific Fields**:
- `test_framework`: Existing test framework from project (required for test tasks)
- `coverage_target`: Target code coverage percentage (optional)
- `use_codex`: Whether to use Codex for automated fixes in test-fix tasks (optional, default: false)
**Note**: CLI tool usage for test-fix tasks is now controlled via `flow_control.implementation_approach` steps with `command` fields, not via `meta.use_codex`.
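For example, a test-fix step that delegates the fix to Codex would carry the command inline, along the lines of this sketch (prompt and paths are placeholders):
```javascript
// Sketch: implementation_approach step delegating a test fix to Codex (placeholders throughout)
const fixStep = {
  step: 1,
  title: "Fix failing unit tests",
  description: "Apply fixes identified in the iteration analysis",
  command: "ccw cli -p '[fix prompt]' --tool codex --mode write --cd [project_root]",
  depends_on: [],
  output: "fix_result"
}
```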
#### Context Object
@@ -392,7 +453,7 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
// Pattern: Project structure analysis
{
"step": "analyze_project_architecture",
"commands": ["bash(~/.claude/scripts/get_modules_by_depth.sh)"],
"commands": ["bash(ccw tool exec get_modules_by_depth '{}')"],
"output_to": "project_architecture"
},
@@ -409,14 +470,14 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
// Pattern: Gemini CLI deep analysis
{
"step": "gemini_analyze_[aspect]",
"command": "bash(cd [path] && gemini -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY')",
"command": "ccw cli -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY' --tool gemini --mode analysis --cd [path]",
"output_to": "analysis_result"
},
// Pattern: Qwen CLI analysis (fallback/alternative)
{
"step": "qwen_analyze_[aspect]",
"command": "bash(cd [path] && qwen -p '[similar to gemini pattern]')",
"command": "ccw cli -p '[similar to gemini pattern]' --tool qwen --mode analysis --cd [path]",
"output_to": "analysis_result"
},
@@ -457,7 +518,7 @@ The examples above demonstrate **patterns**, not fixed requirements. Agent MUST:
4. **Command Composition Patterns**:
- **Single command**: `bash([simple_search])`
- **Multiple commands**: `["bash([cmd1])", "bash([cmd2])"]`
- **CLI analysis**: `bash(cd [path] && gemini -p '[prompt]')`
- **CLI analysis**: `ccw cli -p '[prompt]' --tool gemini --mode analysis --cd [path]`
- **MCP integration**: `mcp__[tool]__[function]([params])`
**Key Principle**: Examples show **structure patterns**, not specific implementations. Agent must create task-appropriate steps dynamically.
@@ -479,21 +540,38 @@ The `implementation_approach` supports **two execution modes** based on the pres
- Specified command executes the step directly
- Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
- **Use for**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
- **Required fields**: Same as default mode **PLUS** `command`
- **Command patterns**:
- `bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)`
- `bash(codex --full-auto exec '[task]' resume --last --skip-git-repo-check -s danger-full-access)` (multi-step)
- `bash(cd [path] && gemini -p '[prompt]' --approval-mode yolo)` (write mode)
- **Required fields**: Same as default mode **PLUS** `command`, `resume_from` (optional)
- **Command patterns** (with resume support):
- `ccw cli -p '[prompt]' --tool codex --mode write --cd [path]`
- `ccw cli -p '[prompt]' --resume ${previousCliId} --tool codex --mode write` (resume from previous)
- `ccw cli -p '[prompt]' --tool gemini --mode write --cd [path]` (write mode)
- **Resume mechanism**: When step depends on previous CLI execution, include `--resume` with previous execution ID
**Mode Selection Strategy**:
- **Default to agent execution** for most tasks
- **Use CLI mode** when:
- User explicitly requests CLI tool (codex/gemini/qwen)
- Task requires multi-step autonomous reasoning beyond agent capability
- Complex refactoring needs specialized tool analysis
- Building on previous CLI execution context (use `resume --last`)
**Semantic CLI Tool Selection**:
**Key Principle**: The `command` field is **optional**. Agent must decide based on task complexity and user preference.
Agent determines CLI tool usage per-step based on user semantics and task nature.
**Source**: Scan `metadata.task_description` from context-package.json for CLI tool preferences.
**User Semantic Triggers** (patterns to detect in task_description):
- "use Codex/codex" → Add `command` field with Codex CLI
- "use Gemini/gemini" → Add `command` field with Gemini CLI
- "use Qwen/qwen" → Add `command` field with Qwen CLI
- "CLI execution" / "automated" → Infer appropriate CLI tool
**Task-Based Selection** (when no explicit user preference):
- **Implementation/coding**: Codex preferred for autonomous development
- **Analysis/exploration**: Gemini preferred for large context analysis
- **Documentation**: Gemini/Qwen with write mode (`--mode write`)
- **Testing**: Depends on complexity - simple=agent, complex=Codex
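A minimal detection routine following these triggers could look like the sketch below; the regexes and the task-type heuristic are illustrative rather than a fixed specification:
```javascript
// Sketch: choose a CLI tool from user semantics, falling back to task-based defaults
function selectCliTool(taskDescription, taskType) {
  const text = taskDescription.toLowerCase()
  if (/\buse codex\b/.test(text)) return "codex"
  if (/\buse gemini\b/.test(text)) return "gemini"
  if (/\buse qwen\b/.test(text)) return "qwen"
  // No explicit preference: fall back to task-based selection
  switch (taskType) {
    case "implementation": return "codex"   // autonomous development
    case "analysis":       return "gemini"  // large-context analysis
    case "documentation":  return "gemini"  // write mode
    default:               return null      // agent executes directly, no CLI
  }
}
```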
**Default Behavior**: Agent always executes the workflow. CLI commands are embedded in `implementation_approach` steps:
- Agent orchestrates task execution
- When step has `command` field, agent executes it via CCW CLI
- When step has no `command` field, agent implements directly
- This maintains agent control while leveraging CLI tool power
**Key Principle**: The `command` field is **optional**. Agent decides based on user semantics and task complexity.
**Examples**:
@@ -543,11 +621,26 @@ The `implementation_approach` supports **two execution modes** based on the pres
"step": 3,
"title": "Execute implementation using CLI tool",
"description": "Use Codex/Gemini for complex autonomous execution",
"command": "bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)",
"command": "ccw cli -p '[prompt]' --tool codex --mode write --cd [path]",
"modification_points": ["[Same as default mode]"],
"logic_flow": ["[Same as default mode]"],
"depends_on": [1, 2],
"output": "cli_implementation"
"output": "cli_implementation",
"cli_output_id": "step3_cli_id" // Store execution ID for resume
},
// === CLI MODE with Resume: Continue from previous CLI execution ===
{
"step": 4,
"title": "Continue implementation with context",
"description": "Resume from previous step with accumulated context",
"command": "ccw cli -p '[continuation prompt]' --resume ${step3_cli_id} --tool codex --mode write",
"resume_from": "step3_cli_id", // Reference previous step's CLI ID
"modification_points": ["[Continue from step 3]"],
"logic_flow": ["[Build on previous output]"],
"depends_on": [3],
"output": "continued_implementation",
"cli_output_id": "step4_cli_id"
}
]
```
@@ -589,10 +682,42 @@ The `implementation_approach` supports **two execution modes** based on the pres
- Analysis results (technical approach, architecture decisions)
- Brainstorming artifacts (role analyses, guidance specifications)
**Multi-Module Format** (when modules detected):
When multiple modules are detected (frontend/backend, etc.), organize IMPL_PLAN.md by module:
```markdown
# Implementation Plan
## Module A: Frontend (N tasks)
### IMPL-A1: [Task Title]
[Task details...]
### IMPL-A2: [Task Title]
[Task details...]
## Module B: Backend (N tasks)
### IMPL-B1: [Task Title]
[Task details...]
### IMPL-B2: [Task Title]
[Task details...]
## Cross-Module Dependencies
- IMPL-A1 → IMPL-B1 (Frontend depends on Backend API)
- IMPL-A2 → IMPL-B2 (UI state depends on Backend service)
```
**Cross-Module Dependency Notation**:
- During parallel planning, use `CROSS::{module}::{pattern}` format
- Example: `depends_on: ["CROSS::B::api-endpoint"]`
- Integration phase resolves to actual task IDs: `CROSS::B::api → IMPL-B1`
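The integration-phase resolution could be sketched as follows, assuming each placeholder's pattern can be matched against task titles in the target module (the matching heuristic is an assumption):
```javascript
// Sketch: resolve CROSS::{module}::{pattern} placeholders to concrete task IDs
function resolveCrossDeps(tasks) {
  for (const task of tasks) {
    task.depends_on = (task.depends_on || []).map(dep => {
      const m = /^CROSS::([A-Z])::(.+)$/.exec(dep)
      if (!m) return dep                              // already a concrete ID
      const [, modulePrefix, pattern] = m
      // Heuristic: find a task in the target module whose title mentions the pattern
      const target = tasks.find(t =>
        t.id.startsWith(`IMPL-${modulePrefix}`) &&
        t.title.toLowerCase().includes(pattern.replace(/-/g, " "))
      )
      return target ? target.id : dep                 // leave unresolved if no match
    })
  }
  return tasks
}
```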
### 2.3 TODO_LIST.md Structure
Generate at `.workflow/active/{session_id}/TODO_LIST.md`:
**Single Module Format**:
```markdown
# Tasks: {Session Topic}
@@ -606,30 +731,54 @@ Generate at `.workflow/active/{session_id}/TODO_LIST.md`:
- `- [x]` = Completed task
```
**Multi-Module Format** (hierarchical by module):
```markdown
# Tasks: {Session Topic}
## Module A (Frontend)
- [ ] **IMPL-A1**: [Task Title] → [📋](./.task/IMPL-A1.json)
- [ ] **IMPL-A2**: [Task Title] → [📋](./.task/IMPL-A2.json)
## Module B (Backend)
- [ ] **IMPL-B1**: [Task Title] → [📋](./.task/IMPL-B1.json)
- [ ] **IMPL-B2**: [Task Title] → [📋](./.task/IMPL-B2.json)
## Cross-Module Dependencies
- IMPL-A1 → IMPL-B1 (Frontend depends on Backend API)
## Status Legend
- `- [ ]` = Pending task
- `- [x]` = Completed task
```
**Linking Rules**:
- Todo items → task JSON: `[📋](./.task/IMPL-XXX.json)`
- Completed tasks → summaries: `[✅](./.summaries/IMPL-XXX-summary.md)`
- Consistent ID schemes: IMPL-XXX
- Consistent ID schemes: `IMPL-N` (single) or `IMPL-{prefix}{seq}` (multi-module)
### 2.4 Complexity-Based Structure Selection
### 2.4 Complexity & Structure Selection
Use `analysis_results.complexity` or task count to determine structure:
**Simple Tasks** (≤5 tasks):
- Flat structure: IMPL_PLAN.md + TODO_LIST.md + task JSONs
- All tasks at same level
**Single Module Mode**:
- **Simple Tasks** (≤5 tasks): Flat structure
- **Medium Tasks** (6-12 tasks): Flat structure
- **Complex Tasks** (>12 tasks): Re-scope required (maximum 12 tasks hard limit)
**Medium Tasks** (6-12 tasks):
- Flat structure: IMPL_PLAN.md + TODO_LIST.md + task JSONs
- All tasks at same level
**Multi-Module Mode** (N+1 parallel planning):
- **Per-module limit**: ≤9 tasks per module
- **Total limit**: Sum of all module tasks ≤27 (3 modules × 9 tasks)
- **Task ID format**: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
- **Structure**: Hierarchical by module in IMPL_PLAN.md and TODO_LIST.md
**Complex Tasks** (>12 tasks):
- **Re-scope required**: Maximum 12 tasks hard limit
- If analysis_results contains >12 tasks, consolidate or request re-scoping
**Multi-Module Detection Triggers**:
- Explicit frontend/backend separation (`src/frontend`, `src/backend`)
- Monorepo structure (`packages/*`, `apps/*`)
- Context-package dependency clustering (2+ distinct module groups)
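These triggers could be probed mechanically along the lines of this sketch; the directory checks are assumptions, and real detection would also consult context-package dependency clusters:
```javascript
// Sketch: detect multi-module layout from common directory conventions (assumptions)
const fs = require("fs")
function detectMultiModule(root = ".") {
  const exists = p => fs.existsSync(`${root}/${p}`)
  if (exists("src/frontend") && exists("src/backend")) return ["frontend", "backend"]
  if (exists("packages") || exists("apps")) {
    // Monorepo: each top-level package is a candidate module
    const dir = exists("packages") ? "packages" : "apps"
    return fs.readdirSync(`${root}/${dir}`)
  }
  return null // single-module mode
}
```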
---
## 3. Quality & Standards
## 3. Quality Standards
### 3.1 Quantification Requirements (MANDATORY)
@@ -655,47 +804,48 @@ Use `analysis_results.complexity` or task count to determine structure:
- [ ] Each implementation step has its own acceptance criteria
**Examples**:
- GOOD: `"Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]"`
- BAD: `"Implement new commands"`
- GOOD: `"5 files created: verify by ls .claude/commands/*.md | wc -l = 5"`
- BAD: `"All commands implemented successfully"`
- GOOD: `"Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]"`
- BAD: `"Implement new commands"`
- GOOD: `"5 files created: verify by ls .claude/commands/*.md | wc -l = 5"`
- BAD: `"All commands implemented successfully"`
### 3.2 Planning Principles
### 3.2 Planning & Organization Standards
**Planning Principles**:
- Each stage produces working, testable code
- Clear success criteria for each deliverable
- Dependencies clearly identified between stages
- Incremental progress over big bangs
### 3.3 File Organization
**File Organization**:
- Session naming: `WFS-[topic-slug]`
- Task IDs: IMPL-XXX (flat structure only)
- Directory structure: flat task organization
### 3.4 Document Standards
- Task IDs:
- Single module: `IMPL-N` (e.g., IMPL-001, IMPL-002)
- Multi-module: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
- Directory structure: flat task organization (all tasks in `.task/`)
**Document Standards**:
- Proper linking between documents
- Consistent navigation and references
---
## 4. Key Reminders
### 3.3 Guidelines Checklist
**ALWAYS:**
- **Apply Quantification Requirements**: All requirements, acceptance criteria, and modification points MUST include explicit counts and enumerations
- **Load IMPL_PLAN template**: Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt) before generating IMPL_PLAN.md
- **Use provided context package**: Extract all information from structured context
- **Respect memory-first rule**: Use provided content (already loaded from memory/file)
- **Follow 6-field schema**: All task JSONs must have id, title, status, context_package_path, meta, context, flow_control
- **Map artifacts**: Use artifacts_inventory to populate task.context.artifacts array
- **Add MCP integration**: Include MCP tool steps in flow_control.pre_analysis when capabilities available
- **Validate task count**: Maximum 12 tasks hard limit, request re-scope if exceeded
- **Use session paths**: Construct all paths using provided session_id
- **Link documents properly**: Use correct linking format (📋 for JSON, ✅ for summaries)
- **Run validation checklist**: Verify all quantification requirements before finalizing task JSONs
- **Apply the adapt-by-analogy (举一反三) principle**: Adapt pre-analysis patterns to task-specific needs dynamically
- **Follow template validation**: Complete IMPL_PLAN.md template validation checklist before finalization
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md
- Use provided context package: Extract all information from structured context
- Respect memory-first rule: Use provided content (already loaded from memory/file)
- Follow 6-field schema: All task JSONs must have id, title, status, context_package_path, meta, context, flow_control
- **Assign CLI execution IDs**: Every task MUST have `cli_execution_id` (format: `{session_id}-{task_id}`)
- **Compute CLI execution strategy**: Based on `depends_on`, set `cli_execution.strategy` (new/resume/fork/merge_fork)
- Map artifacts: Use artifacts_inventory to populate task.context.artifacts array
- Add MCP integration: Include MCP tool steps in flow_control.pre_analysis when capabilities available
- Validate task count: Maximum 12 tasks hard limit, request re-scope if exceeded
- Use session paths: Construct all paths using provided session_id
- Link documents properly: Use correct linking format (📋 for JSON, ✅ for summaries)
- Run validation checklist: Verify all quantification requirements before finalizing task JSONs
- Apply the adapt-by-analogy (举一反三) principle: Adapt pre-analysis patterns to task-specific needs dynamically
- Follow template validation: Complete IMPL_PLAN.md template validation checklist before finalization
**NEVER:**
- Load files directly (use provided context package instead)

View File

@@ -67,7 +67,7 @@ Score = 0
**1. Project Structure**:
```bash
~/.claude/scripts/get_modules_by_depth.sh
ccw tool exec get_modules_by_depth '{}'
```
**2. Content Search**:
@@ -100,7 +100,7 @@ CONTEXT: @**/*
# Specific patterns
CONTEXT: @CLAUDE.md @src/**/* @*.ts
# Cross-directory (requires --include-directories)
# Cross-directory (requires --includeDirs)
CONTEXT: @**/* @../shared/**/* @../types/**/*
```
@@ -134,7 +134,7 @@ RULES: $(cat {selected_template}) | {constraints}
```
analyze|plan → gemini (qwen fallback) + mode=analysis
execute (simple|medium) → gemini (qwen fallback) + mode=write
execute (complex) → codex + mode=auto
execute (complex) → codex + mode=write
discuss → multi (gemini + codex parallel)
```
@@ -144,43 +144,40 @@ discuss → multi (gemini + codex parallel)
- Codex: `gpt-5` (default), `gpt5-codex` (large context)
- **Position**: `-m` after prompt, before flags
### Command Templates
### Command Templates (CCW Unified CLI)
**Gemini/Qwen (Analysis)**:
```bash
cd {dir} && gemini -p "
ccw cli -p "
PURPOSE: {goal}
TASK: {task}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {output}
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
" -m gemini-2.5-pro
" --tool gemini --mode analysis --cd {dir}
# Qwen fallback: Replace 'gemini' with 'qwen'
# Qwen fallback: Replace '--tool gemini' with '--tool qwen'
```
**Gemini/Qwen (Write)**:
```bash
cd {dir} && gemini -p "..." --approval-mode yolo
ccw cli -p "..." --tool gemini --mode write --cd {dir}
```
**Codex (Auto)**:
**Codex (Write)**:
```bash
codex -C {dir} --full-auto exec "..." --skip-git-repo-check -s danger-full-access
# Resume: Add 'resume --last' after prompt
codex --full-auto exec "..." resume --last --skip-git-repo-check -s danger-full-access
ccw cli -p "..." --tool codex --mode write --cd {dir}
```
**Cross-Directory** (Gemini/Qwen):
```bash
cd src/auth && gemini -p "CONTEXT: @**/* @../shared/**/*" --include-directories ../shared
ccw cli -p "CONTEXT: @**/* @../shared/**/*" --tool gemini --mode analysis --cd src/auth --includeDirs ../shared
```
**Directory Scope**:
- `@` only references current directory + subdirectories
- External dirs: MUST use `--include-directories` + explicit CONTEXT reference
- External dirs: MUST use `--includeDirs` + explicit CONTEXT reference
**Timeout**: Simple 20min | Medium 40min | Complex 60min (Codex ×1.5)

View File

@@ -67,7 +67,7 @@ Phase 4: Output Generation
```bash
# Project structure
~/.claude/scripts/get_modules_by_depth.sh
ccw tool exec get_modules_by_depth '{}'
# Pattern discovery (adapt based on language)
rg "^export (class|interface|function) " --type ts -n
@@ -78,14 +78,14 @@ rg "^import .* from " -n | head -30
### Gemini Semantic Analysis (deep-scan, dependency-map)
```bash
cd {dir} && gemini -p "
ccw cli -p "
PURPOSE: {from prompt}
TASK: {from prompt}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {from prompt}
RULES: {from prompt, if template specified} | analysis=READ-ONLY
"
" --tool gemini --mode analysis --cd {dir}
```
**Fallback Chain**: Gemini → Qwen → Codex → Bash-only
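The chain could be driven by a loop such as the sketch below, assuming `ccw cli` exits non-zero on failure (the terminal bash-only degradation is elided):
```javascript
// Sketch: walk the tool fallback chain until one succeeds (assumes non-zero exit on failure)
const { execSync } = require("child_process")
function runWithFallback(prompt, dir) {
  for (const tool of ["gemini", "qwen", "codex"]) {
    try {
      return execSync(
        `ccw cli -p ${JSON.stringify(prompt)} --tool ${tool} --mode analysis --cd ${dir}`,
        { encoding: "utf8" }
      )
    } catch (e) { /* tool unavailable or failed, try the next one */ }
  }
  return null // degrade to bash-only exploration
}
```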

View File

@@ -1,140 +1,117 @@
---
name: cli-lite-planning-agent
description: |
Specialized agent for executing CLI planning tools (Gemini/Qwen) to generate detailed implementation plans. Used by lite-plan workflow for Medium/High complexity tasks.
Generic planning agent for lite-plan and lite-fix workflows. Generates structured plan JSON based on provided schema reference.
Core capabilities:
- Task decomposition (1-10 tasks with IDs: T1, T2...)
- Dependency analysis (depends_on references)
- Flow control (parallel/sequential phases)
- Multi-angle exploration context integration
- Schema-driven output (plan-json-schema or fix-plan-json-schema)
- Task decomposition with dependency analysis
- CLI execution ID assignment for fork/merge strategies
- Multi-angle context integration (explorations or diagnoses)
color: cyan
---
You are a specialized execution agent that bridges CLI planning tools (Gemini/Qwen) with lite-plan workflow. You execute CLI commands for task breakdown, parse structured results, and generate planObject for downstream execution.
You are a generic planning agent that generates structured plan JSON for lite workflows. Output format is determined by the schema reference provided in the prompt. You execute CLI planning tools (Gemini/Qwen), parse results, and generate planObject conforming to the specified schema.
## Output Schema
**Reference**: `~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`
**planObject Structure**:
```javascript
{
summary: string, // 2-3 sentence overview
approach: string, // High-level strategy
tasks: [TaskObject], // 1-10 structured tasks
flow_control: { // Execution phases
execution_order: [{ phase, tasks, type }],
exit_conditions: { success, failure }
},
focus_paths: string[], // Affected files (aggregated)
estimated_time: string,
recommended_execution: "Agent" | "Codex",
complexity: "Low" | "Medium" | "High",
_metadata: { timestamp, source, planning_mode, exploration_angles, duration_seconds }
}
```
**TaskObject Structure**:
```javascript
{
id: string, // T1, T2, T3...
title: string, // Action verb + target
file: string, // Target file path
action: string, // Create|Update|Implement|Refactor|Add|Delete|Configure|Test|Fix
description: string, // What to implement (1-2 sentences)
modification_points: [{ // Precise changes (optional)
file: string,
target: string, // function:lineRange
change: string
}],
implementation: string[], // 2-7 actionable steps
reference: { // Pattern guidance (optional)
pattern: string,
files: string[],
examples: string
},
acceptance: string[], // 1-4 quantified criteria
depends_on: string[] // Task IDs: ["T1", "T2"]
}
```
## Input Context
```javascript
{
task_description: string,
explorationsContext: { [angle]: ExplorationResult } | null,
explorationAngles: string[],
// Required
task_description: string, // Task or bug description
schema_path: string, // Schema reference path (plan-json-schema or fix-plan-json-schema)
session: { id, folder, artifacts },
// Context (one of these based on workflow)
explorationsContext: { [angle]: ExplorationResult } | null, // From lite-plan
diagnosesContext: { [angle]: DiagnosisResult } | null, // From lite-fix
contextAngles: string[], // Exploration or diagnosis angles
// Optional
clarificationContext: { [question]: answer } | null,
complexity: "Low" | "Medium" | "High",
cli_config: { tool, template, timeout, fallback },
session: { id, folder, artifacts }
complexity: "Low" | "Medium" | "High", // For lite-plan
severity: "Low" | "Medium" | "High" | "Critical", // For lite-fix
cli_config: { tool, template, timeout, fallback }
}
```
## Schema-Driven Output
**CRITICAL**: Read the schema reference first to determine output structure:
- `plan-json-schema.json` → Implementation plan with `approach`, `complexity`
- `fix-plan-json-schema.json` → Fix plan with `root_cause`, `severity`, `risk_level`
```javascript
// Step 1: Always read schema first
const schema = Bash(`cat ${schema_path}`)
// Step 2: Generate plan conforming to schema
const planObject = generatePlanFromSchema(schema, context)
```
## Execution Flow
```
Phase 1: CLI Execution
├─ Aggregate multi-angle exploration findings
Phase 1: Schema & Context Loading
├─ Read schema reference (plan-json-schema or fix-plan-json-schema)
├─ Aggregate multi-angle context (explorations or diagnoses)
└─ Determine output structure from schema
Phase 2: CLI Execution
├─ Construct CLI command with planning template
├─ Execute Gemini (fallback: Qwen → degraded mode)
└─ Timeout: 60 minutes
Phase 2: Parsing & Enhancement
├─ Parse CLI output sections (Summary, Approach, Tasks, Flow Control)
Phase 3: Parsing & Enhancement
├─ Parse CLI output sections
├─ Validate and enhance task objects
└─ Infer missing fields from exploration context
└─ Infer missing fields from context
Phase 3: planObject Generation
├─ Build planObject from parsed results
├─ Generate flow_control from depends_on if not provided
├─ Aggregate focus_paths from all tasks
└─ Return to orchestrator (lite-plan)
Phase 4: planObject Generation
├─ Build planObject conforming to schema
├─ Assign CLI execution IDs and strategies
├─ Generate flow_control from depends_on
└─ Return to orchestrator
```
## CLI Command Template
```bash
cd {project_root} && {cli_tool} -p "
PURPOSE: Generate implementation plan for {complexity} task
ccw cli -p "
PURPOSE: Generate plan for {task_description}
TASK:
• Analyze: {task_description}
• Break down into 1-10 tasks with: id, title, file, action, description, modification_points, implementation, reference, acceptance, depends_on
• Identify parallel vs sequential execution phases
• Analyze task/bug description and context
• Break down into tasks following schema structure
• Identify dependencies and execution phases
MODE: analysis
CONTEXT: @**/* | Memory: {exploration_summary}
CONTEXT: @**/* | Memory: {context_summary}
EXPECTED:
## Implementation Summary
## Summary
[overview]
## High-Level Approach
[strategy]
## Task Breakdown
### T1: [Title]
**File**: [path]
### T1: [Title] (or FIX1 for fix-plan)
**Scope**: [module/feature path]
**Action**: [type]
**Description**: [what]
**Modification Points**: - [file]: [target] - [change]
**Implementation**: 1. [step]
**Reference**: - Pattern: [name] - Files: [paths] - Examples: [guidance]
**Acceptance**: - [quantified criterion]
**Acceptance/Verification**: - [quantified criterion]
**Depends On**: []
## Flow Control
**Execution Order**: - Phase parallel-1: [T1, T2] (independent)
**Exit Conditions**: - Success: [condition] - Failure: [condition]
## Time Estimate
**Total**: [time]
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt) |
- Acceptance must be quantified (counts, method names, metrics)
- Dependencies use task IDs (T1, T2)
- Follow schema structure from {schema_path}
- Acceptance/verification must be quantified
- Dependencies use task IDs
- analysis=READ-ONLY
"
" --tool {cli_tool} --mode analysis --cd {project_root}
```
## Core Functions
@@ -279,6 +256,51 @@ function inferFile(task, ctx) {
}
```
### CLI Execution ID Assignment (MANDATORY)
```javascript
function assignCliExecutionIds(tasks, sessionId) {
const taskMap = new Map(tasks.map(t => [t.id, t]))
const childCount = new Map()
// Count children for each task
tasks.forEach(task => {
(task.depends_on || []).forEach(depId => {
childCount.set(depId, (childCount.get(depId) || 0) + 1)
})
})
tasks.forEach(task => {
task.cli_execution_id = `${sessionId}-${task.id}`
const deps = task.depends_on || []
if (deps.length === 0) {
task.cli_execution = { strategy: "new" }
} else if (deps.length === 1) {
const parent = taskMap.get(deps[0])
const parentChildCount = childCount.get(deps[0]) || 0
task.cli_execution = parentChildCount === 1
? { strategy: "resume", resume_from: parent.cli_execution_id }
: { strategy: "fork", resume_from: parent.cli_execution_id }
} else {
task.cli_execution = {
strategy: "merge_fork",
merge_from: deps.map(depId => taskMap.get(depId).cli_execution_id)
}
}
})
return tasks
}
```
**Strategy Rules**:
| depends_on | Parent Children | Strategy | CLI Command |
|------------|-----------------|----------|-------------|
| [] | - | `new` | `--id {cli_execution_id}` |
| [T1] | 1 | `resume` | `--resume {resume_from}` |
| [T1] | >1 | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
| [T1,T2] | - | `merge_fork` | `--resume {ids.join(',')} --id {cli_execution_id}` |
### Flow Control Inference
```javascript
@@ -303,21 +325,44 @@ function inferFlowControl(tasks) {
### planObject Generation
```javascript
function generatePlanObject(parsed, enrichedContext, input) {
function generatePlanObject(parsed, enrichedContext, input, schemaType) {
const tasks = validateAndEnhanceTasks(parsed.raw_tasks, enrichedContext)
assignCliExecutionIds(tasks, input.session.id) // MANDATORY: Assign CLI execution IDs
const flow_control = parsed.flow_control?.execution_order?.length > 0 ? parsed.flow_control : inferFlowControl(tasks)
const focus_paths = [...new Set(tasks.flatMap(t => [t.file, ...t.modification_points.map(m => m.file)]).filter(Boolean))]
const focus_paths = [...new Set(tasks.flatMap(t => [t.file || t.scope, ...t.modification_points.map(m => m.file)]).filter(Boolean))]
return {
summary: parsed.summary || `Implementation plan for: ${input.task_description.slice(0, 100)}`,
approach: parsed.approach || "Step-by-step implementation",
// Base fields (common to both schemas)
const base = {
summary: parsed.summary || `Plan for: ${input.task_description.slice(0, 100)}`,
tasks,
flow_control,
focus_paths,
estimated_time: parsed.time_estimate || `${tasks.length * 30} minutes`,
recommended_execution: input.complexity === "Low" ? "Agent" : "Codex",
complexity: input.complexity,
_metadata: { timestamp: new Date().toISOString(), source: "cli-lite-planning-agent", planning_mode: "agent-based", exploration_angles: input.explorationAngles || [], duration_seconds: Math.round((Date.now() - startTime) / 1000) }
recommended_execution: (input.complexity === "Low" || input.severity === "Low") ? "Agent" : "Codex",
_metadata: {
timestamp: new Date().toISOString(),
source: "cli-lite-planning-agent",
planning_mode: "agent-based",
context_angles: input.contextAngles || [],
duration_seconds: Math.round((Date.now() - startTime) / 1000)
}
}
// Schema-specific fields
if (schemaType === 'fix-plan') {
return {
...base,
root_cause: parsed.root_cause || "Root cause from diagnosis",
strategy: parsed.strategy || "comprehensive_fix",
severity: input.severity || "Medium",
risk_level: parsed.risk_level || "medium"
}
} else {
return {
...base,
approach: parsed.approach || "Step-by-step implementation",
complexity: input.complexity || "Medium"
}
}
}
```
@@ -383,9 +428,12 @@ function validateTask(task) {
## Key Reminders
**ALWAYS**:
- Generate task IDs (T1, T2, T3...)
- **Read schema first** to determine output structure
- Generate task IDs (T1/T2 for plan, FIX1/FIX2 for fix-plan)
- Include depends_on (even if empty [])
- Quantify acceptance criteria
- **Assign cli_execution_id** (`{sessionId}-{taskId}`)
- **Compute cli_execution strategy** based on depends_on
- Quantify acceptance/verification criteria
- Generate flow_control from dependencies
- Handle CLI errors with fallback chain
@@ -394,3 +442,5 @@ function validateTask(task) {
- Use vague acceptance criteria
- Create circular dependencies
- Skip task validation
- **Skip CLI execution ID assignment**
- **Ignore schema structure**

View File

@@ -66,8 +66,7 @@ You are a specialized execution agent that bridges CLI analysis tools with task
"task_config": {
"agent": "@test-fix-agent",
"type": "test-fix-iteration",
"max_iterations": 5,
"use_codex": false
"max_iterations": 5
}
}
```
@@ -108,7 +107,7 @@ Phase 3: Task JSON Generation
**Template-Based Command Construction with Test Layer Awareness**:
```bash
cd {project_root} && {cli_tool} -p "
ccw cli -p "
PURPOSE: Analyze {test_type} test failures and generate fix strategy for iteration {iteration}
TASK:
• Review {failed_tests.length} {test_type} test failures: [{test_names}]
@@ -135,7 +134,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |
- Consider previous iteration failures
- Validate fix doesn't introduce new vulnerabilities
- analysis=READ-ONLY
" {timeout_flag}
" --tool {cli_tool} --mode analysis --cd {project_root} --timeout {timeout_value}
```
**Layer-Specific Guidance Injection**:
@@ -263,7 +262,6 @@ function extractModificationPoints() {
"analysis_report": ".process/iteration-{iteration}-analysis.md",
"cli_output": ".process/iteration-{iteration}-cli-output.txt",
"max_iterations": "{task_config.max_iterations}",
"use_codex": "{task_config.use_codex}",
"parent_task": "{parent_task_id}",
"created_by": "@cli-planning-agent",
"created_at": "{timestamp}"
@@ -529,9 +527,9 @@ See: `.process/iteration-{iteration}-cli-output.txt`
1. **Detect test_type**: "integration" → Apply integration-specific diagnosis
2. **Execute CLI**:
```bash
gemini -p "PURPOSE: Analyze integration test failure...
ccw cli -p "PURPOSE: Analyze integration test failure...
TASK: Examine component interactions, data flow, interface contracts...
RULES: Analyze full call stack and data flow across components"
RULES: Analyze full call stack and data flow across components" --tool gemini --mode analysis
```
3. **Parse Output**: Extract RCA, fix recommendation (修复建议), and verification recommendation (验证建议) sections
4. **Generate Task JSON** (IMPL-fix-1.json):

View File

@@ -24,8 +24,6 @@ You are a code execution specialist focused on implementing high-quality, produc
- **Context-driven** - Use provided context and existing code patterns
- **Quality over speed** - Write boring, reliable code that works
## Execution Process
### 1. Context Assessment
@@ -36,10 +34,11 @@ You are a code execution specialist focused on implementing high-quality, produc
- **context-package.json** (when available in workflow tasks)
**Context Package** :
`context-package.json` provides artifact paths - extract dynamically using `jq`:
`context-package.json` provides artifact paths - read using Read tool or ccw session:
```bash
# Get role analysis paths from context package
jq -r '.brainstorm_artifacts.role_analyses[].files[].path' context-package.json
# Get context package content from session using Read tool
Read(.workflow/active/${SESSION_ID}/.process/context-package.json)
# Returns parsed JSON with brainstorm_artifacts, focus_paths, etc.
```
**Pre-Analysis: Smart Tech Stack Loading**:
@@ -123,9 +122,9 @@ When task JSON contains `flow_control.implementation_approach` array:
- If `command` field present, execute it; otherwise use agent capabilities
**CLI Command Execution (CLI Execute Mode)**:
When step contains `command` field with Codex CLI, execute via Bash tool. For Codex resume:
- First task (`depends_on: []`): `codex -C [path] --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
- Subsequent tasks (has `depends_on`): Add `resume --last` flag to maintain session context
When step contains `command` field with Codex CLI, execute via CCW CLI. For Codex resume:
- First task (`depends_on: []`): `ccw cli -p "..." --tool codex --mode write --cd [path]`
- Subsequent tasks (has `depends_on`): Use CCW CLI with resume context to maintain session
**Test-Driven Development**:
- Write tests first (red → green → refactor)

View File

@@ -109,7 +109,7 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
3. **load_session_metadata**
- Action: Load session metadata
- Command: bash(cat .workflow/WFS-{session}/workflow-session.json)
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_metadata
```
@@ -119,17 +119,6 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
- No dependency management
- Used for temporary context preparation
### NOT Handled by This Agent
**JSON format** (used by code-developer, test-fix-agent):
```json
"flow_control": {
"pre_analysis": [...],
"implementation_approach": [...]
}
```
This complete JSON format is stored in `.task/IMPL-*.json` files and handled by implementation agents, not conceptual-planning-agent.
### Role-Specific Analysis Dimensions
@@ -146,14 +135,14 @@ This complete JSON format is stored in `.task/IMPL-*.json` files and handled by
### Output Integration
**Gemini Analysis Integration**: Pattern-based analysis results are integrated into the single role's output:
- Enhanced `analysis.md` with codebase insights and architectural patterns
**Gemini Analysis Integration**: Pattern-based analysis results are integrated into role output documents:
- Enhanced analysis documents with codebase insights and architectural patterns
- Role-specific technical recommendations based on existing conventions
- Pattern-based best practices from actual code examination
- Realistic feasibility assessments based on current implementation
**Codex Analysis Integration**: Autonomous analysis results provide comprehensive insights:
- Enhanced `analysis.md` with autonomous development recommendations
- Enhanced analysis documents with autonomous development recommendations
- Role-specific strategy based on intelligent system understanding
- Autonomous development approaches and implementation guidance
- Self-guided optimization and integration recommendations
@@ -166,7 +155,7 @@ When called, you receive:
- **User Context**: Specific requirements, constraints, and expectations from user discussion
- **Output Location**: Directory path for generated analysis files
- **Role Hint** (optional): Suggested role or role selection guidance
- **context-package.json** (CCW Workflow): Artifact paths catalog - extract using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
- **context-package.json** (CCW Workflow): Artifact paths catalog - use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- **ASSIGNED_ROLE** (optional): Specific role assignment
- **ANALYSIS_DIMENSIONS** (optional): Role-specific analysis dimensions
@@ -229,26 +218,23 @@ Generate documents according to loaded role template specifications:
**Output Location**: `.workflow/WFS-[session]/.brainstorming/[assigned-role]/`
**Required Files**:
- **analysis.md**: Main role perspective analysis incorporating user context and role template
- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
**Output Files**:
- **analysis.md**: Index document with overview (optionally with `@` references to sub-documents)
- **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
- **Auto-split if large**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files: analysis.md, analysis-1.md, analysis-2.md)
- **Content**: Includes both analysis AND recommendations sections within analysis files
- **[role-deliverables]/**: Directory for specialized role outputs as defined in planning role template (optional)
- **analysis-{slug}.md**: Section content documents (slug from section heading: lowercase, hyphens)
- Maximum 5 sub-documents (merge related sections if needed)
- **Content**: Analysis AND recommendations sections
**File Structure Example**:
```
.workflow/WFS-[session]/.brainstorming/system-architect/
├── analysis.md # Main system architecture analysis with recommendations
├── analysis-1.md # (Optional) Continuation if content >800 lines
└── deliverables/ # (Optional) Additional role-specific outputs
    ├── technical-architecture.md # System design specifications
    ├── technology-stack.md # Technology selection rationale
    └── scalability-plan.md # Scaling strategy
├── analysis.md # Index with overview + @references
├── analysis-architecture-assessment.md # Section content
├── analysis-technology-evaluation.md # Section content
├── analysis-integration-strategy.md # Section content
└── analysis-recommendations.md # Section content (max 5 sub-docs total)
NOTE: ALL brainstorming output files MUST start with 'analysis' prefix
FORBIDDEN: recommendations.md, recommendations-*.md, or any non-'analysis' prefixed files
NOTE: ALL files MUST start with 'analysis' prefix. Max 5 sub-documents.
```
## Role-Specific Planning Process
@@ -268,14 +254,10 @@ FORBIDDEN: recommendations.md, recommendations-*.md, or any non-'analysis' prefi
- **Validate Against Template**: Ensure analysis meets role template requirements and standards
### 3. Brainstorming Documentation Phase
- **Create analysis.md**: Generate comprehensive role perspective analysis in designated output directory
- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
- **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
- **Content**: Include both analysis AND recommendations sections within analysis files
- **Auto-split**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files total)
- **Generate Role Deliverables**: Create specialized outputs as defined in planning role template (optional)
- **Create analysis.md**: Main document with overview (optionally with `@` references)
- **Create sub-documents**: `analysis-{slug}.md` for major sections (max 5)
- **Validate Output Structure**: Ensure all files saved to correct `.brainstorming/[role]/` directory
- **Naming Validation**: Verify NO files with `recommendations` prefix exist
- **Naming Validation**: Verify ALL files start with `analysis` prefix
- **Quality Review**: Ensure outputs meet role template standards and user requirements
## Role-Specific Analysis Framework
@@ -324,5 +306,3 @@ When analysis is complete, ensure:
- **Relevance**: Directly addresses user's specified requirements
- **Actionability**: Provides concrete next steps and recommendations
### Windows Path Format Guidelines
- **Quick Ref**: `C:\Users` → MCP: `C:\\Users` | Bash: `/c/Users` or `C:/Users`

View File

@@ -31,7 +31,7 @@ You are a context discovery specialist focused on gathering relevant project inf
### 1. Reference Documentation (Project Standards)
**Tools**:
- `Read()` - Load CLAUDE.md, README.md, architecture docs
- `Bash(~/.claude/scripts/get_modules_by_depth.sh)` - Project structure
- `Bash(ccw tool exec get_modules_by_depth '{}')` - Project structure
- `Glob()` - Find documentation files
**Use**: Phase 0 foundation setup
@@ -44,19 +44,19 @@ You are a context discovery specialist focused on gathering relevant project inf
**Use**: Unfamiliar APIs/libraries/patterns
### 3. Existing Code Discovery
**Primary (Code-Index MCP)**:
- `mcp__code-index__set_project_path()` - Initialize index
- `mcp__code-index__find_files(pattern)` - File pattern matching
- `mcp__code-index__search_code_advanced()` - Content search
- `mcp__code-index__get_file_summary()` - File structure analysis
- `mcp__code-index__refresh_index()` - Update index
**Primary (CCW CodexLens MCP)**:
- `mcp__ccw-tools__codex_lens(action="init", path=".")` - Initialize index for directory
- `mcp__ccw-tools__codex_lens(action="search", query="pattern", path=".")` - Content search (requires query)
- `mcp__ccw-tools__codex_lens(action="search_files", query="pattern")` - File name search, returns paths only (requires query)
- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Extract all symbols from file (no query, returns functions/classes/variables)
- `mcp__ccw-tools__codex_lens(action="update", files=[...])` - Update index for specific files
**Fallback (CLI)**:
- `rg` (ripgrep) - Fast content search
- `find` - File discovery
- `Grep` - Pattern matching
**Priority**: Code-Index MCP > ripgrep > find > grep
**Priority**: CodexLens MCP > ripgrep > find > grep
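Encoded as a cascade in the document's pseudo-call notation (not literal JavaScript), the priority order might look like this sketch:
```javascript
// Sketch in the document's pseudo-call notation: honor the search-tool priority order
function searchCode(query, path = ".") {
  try {
    return mcp__ccw-tools__codex_lens({ action: "search", query, path })  // primary
  } catch {
    try { return Bash(`rg -n "${query}" ${path}`) }        // fast CLI fallback
    catch { return Bash(`grep -rn "${query}" ${path}`) }   // last resort
  }
}
```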
## Simplified Execution Process (3 Phases)
@@ -77,12 +77,11 @@ if (file_exists(contextPackagePath)) {
**1.2 Foundation Setup**:
```javascript
// 1. Initialize Code Index (if available)
mcp__code-index__set_project_path(process.cwd())
mcp__code-index__refresh_index()
// 1. Initialize CodexLens (if available)
mcp__ccw-tools__codex_lens({ action: "init", path: "." })
// 2. Project Structure
bash(~/.claude/scripts/get_modules_by_depth.sh)
bash(ccw tool exec get_modules_by_depth '{}')
// 3. Load Documentation (if not in memory)
if (!memory.has("CLAUDE.md")) Read(CLAUDE.md)
@@ -100,10 +99,88 @@ if (!memory.has("README.md")) Read(README.md)
### Phase 2: Multi-Source Context Discovery
Execute all 3 tracks in parallel for comprehensive coverage.
Execute all tracks in parallel for comprehensive coverage.
**Note**: Historical archive analysis (querying `.workflow/archives/manifest.json`) is optional: perform it only when the manifest exists, and inject any findings into `conflict_detection.historical_conflicts[]`.
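A sketch of that optional check (assumes the `file_exists`/`Read` helpers used elsewhere in this document; `task_keywords`, the `conflict_detection` object, and the manifest's `topic`/`resolution` fields are assumptions):
```javascript
// Optional: mine archived sessions for past conflicts; skip silently if absent.
const archiveManifestPath = '.workflow/archives/manifest.json';
if (file_exists(archiveManifestPath)) {
  const archives = JSON.parse(Read(archiveManifestPath));
  // Keep only archives whose topic overlaps the current task's keywords.
  const related = archives.filter(a =>
    task_keywords.some(k => (a.topic || '').includes(k)));
  conflict_detection.historical_conflicts = related.map(a => ({
    session_id: a.session_id,
    topic: a.topic,
    resolution: a.resolution || 'unknown'
  }));
}
```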
#### Track 0: Exploration Synthesis (Optional)
**Trigger**: When `explorations-manifest.json` exists in session `.process/` folder
**Purpose**: Transform raw exploration data into prioritized, deduplicated insights. This is NOT simple aggregation - it synthesizes `critical_files` (priority-ranked), deduplicates patterns/integration_points, and generates `conflict_indicators`.
```javascript
// Check for exploration results from context-gather parallel explore phase
const manifestPath = `.workflow/active/${session_id}/.process/explorations-manifest.json`;
if (file_exists(manifestPath)) {
const manifest = JSON.parse(Read(manifestPath));
// Load full exploration data from each file
const explorationData = manifest.explorations.map(exp => ({
...exp,
data: JSON.parse(Read(exp.path))
}));
// Build explorations array with summaries
const explorations = explorationData.map(exp => ({
angle: exp.angle,
file: exp.file,
path: exp.path,
index: exp.data._metadata?.exploration_index || exp.index,
summary: {
relevant_files_count: exp.data.relevant_files?.length || 0,
key_patterns: exp.data.patterns,
integration_points: exp.data.integration_points
}
}));
// SYNTHESIS (not aggregation): Transform raw data into prioritized insights
const aggregated_insights = {
// CRITICAL: Synthesize priority-ranked critical_files from multiple relevant_files lists
// - Deduplicate by path
// - Rank by: mention count across angles + individual relevance scores
// - Top 10-15 files only (focused, actionable)
critical_files: synthesizeCriticalFiles(explorationData.flatMap(e => e.data.relevant_files || [])),
// SYNTHESIS: Generate conflict indicators from pattern mismatches, constraint violations
conflict_indicators: synthesizeConflictIndicators(explorationData),
// Deduplicate clarification questions (merge similar questions)
clarification_needs: deduplicateQuestions(explorationData.flatMap(e => e.data.clarification_needs || [])),
// Preserve source attribution for traceability
constraints: explorationData.map(e => ({ constraint: e.data.constraints, source_angle: e.angle })).filter(c => c.constraint),
// Deduplicate patterns across angles (merge identical patterns)
all_patterns: deduplicatePatterns(explorationData.map(e => ({ patterns: e.data.patterns, source_angle: e.angle }))),
// Deduplicate integration points (merge by file:line location)
all_integration_points: deduplicateIntegrationPoints(explorationData.map(e => ({ points: e.data.integration_points, source_angle: e.angle })))
};
// Store for Phase 3 packaging
exploration_results = {
  manifest_path: manifestPath,
  exploration_count: manifest.exploration_count,
  complexity: manifest.complexity,
  angles: manifest.angles_explored,
  explorations,
  aggregated_insights
};
}
// Synthesis helper functions (conceptual)
function synthesizeCriticalFiles(allRelevantFiles) {
// 1. Group by path
// 2. Count mentions across angles
// 3. Average relevance scores
// 4. Rank by: (mention_count * 0.6) + (avg_relevance * 0.4)
// 5. Return top 10-15 with mentioned_by_angles attribution
}
function synthesizeConflictIndicators(explorationData) {
// 1. Detect pattern mismatches across angles
// 2. Identify constraint violations
// 3. Flag files mentioned with conflicting integration approaches
// 4. Assign severity: critical/high/medium/low
}
```
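One concrete reading of the conceptual `synthesizeCriticalFiles` above (a sketch: it assumes each relevant-file entry carries `path`, a 0-1 `relevance`, and optionally a `source_angle`, and applies the stated 0.6/0.4 weighting verbatim, without normalizing mention counts):
```javascript
function synthesizeCriticalFiles(allRelevantFiles) {
  // 1-3. Group by path, count mentions, average relevance scores
  const byPath = new Map();
  for (const f of allRelevantFiles) {
    const e = byPath.get(f.path) ||
      { path: f.path, mentions: 0, scoreSum: 0, angles: new Set() };
    e.mentions += 1;
    e.scoreSum += f.relevance || 0;
    if (f.source_angle) e.angles.add(f.source_angle);
    byPath.set(f.path, e);
  }
  // 4-5. Rank by (mention_count * 0.6) + (avg_relevance * 0.4), keep top 15
  return [...byPath.values()]
    .map(e => ({
      path: e.path,
      relevance: e.scoreSum / e.mentions,
      mentioned_by_angles: [...e.angles],
      rank: e.mentions * 0.6 + (e.scoreSum / e.mentions) * 0.4
    }))
    .sort((a, b) => b.rank - a.rank)
    .slice(0, 15);
}
```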
#### Track 1: Reference Documentation
Extract from Phase 0 loaded docs:
@@ -134,18 +211,18 @@ mcp__exa__web_search_exa({
**Layer 1: File Pattern Discovery**
```javascript
// Primary: Code-Index MCP
const files = mcp__code-index__find_files("*{keyword}*")
// Primary: CodexLens MCP
const files = mcp__ccw-tools__codex_lens({ action: "search_files", query: "*{keyword}*" })
// Fallback: find . -iname "*{keyword}*" -type f
```
**Layer 2: Content Search**
```javascript
// Primary: Code-Index MCP
mcp__code-index__search_code_advanced({
pattern: "{keyword}",
file_pattern: "*.ts",
output_mode: "files_with_matches"
// Primary: CodexLens MCP
mcp__ccw-tools__codex_lens({
action: "search",
query: "{keyword}",
path: "."
})
// Fallback: rg "{keyword}" -t ts --files-with-matches
```
@@ -153,11 +230,10 @@ mcp__code-index__search_code_advanced({
**Layer 3: Semantic Patterns**
```javascript
// Find definitions (class, interface, function)
mcp__code-index__search_code_advanced({
pattern: "^(export )?(class|interface|type|function) .*{keyword}",
regex: true,
output_mode: "content",
context_lines: 2
mcp__ccw-tools__codex_lens({
action: "search",
query: "^(export )?(class|interface|type|function) .*{keyword}",
path: "."
})
```
@@ -165,21 +241,22 @@ mcp__code-index__search_code_advanced({
```javascript
// Get file summaries for imports/exports
for (const file of discovered_files) {
const summary = mcp__code-index__get_file_summary(file)
// summary: {imports, functions, classes, line_count}
const summary = mcp__ccw-tools__codex_lens({ action: "symbol", file: file })
// summary: {symbols: [{name, type, line}]}
}
```
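If a downstream consumer still expects the old `{imports, functions, classes, line_count}` shape, a thin adapter over the `symbol` result can bridge the two (hypothetical; the exact `type` strings returned by CodexLens are an assumption based on the tool list above):
```javascript
// Hypothetical adapter: regroup CodexLens symbols into the legacy summary shape.
function toLegacySummary(symbolResult) {
  const symbols = symbolResult.symbols || [];
  const pick = t => symbols.filter(s => s.type === t).map(s => s.name);
  return {
    functions: pick('function'),
    classes: pick('class'),
    variables: pick('variable'),
    // Approximation: line of the last symbol, not the true file length.
    line_count: Math.max(0, ...symbols.map(s => s.line || 0))
  };
}
```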
**Layer 5: Config & Tests**
```javascript
// Config files
mcp__code-index__find_files("*.config.*")
mcp__code-index__find_files("package.json")
mcp__ccw-tools__codex_lens({ action: "search_files", query: "*.config.*" })
mcp__ccw-tools__codex_lens({ action: "search_files", query: "package.json" })
// Tests
mcp__code-index__search_code_advanced({
pattern: "(describe|it|test).*{keyword}",
file_pattern: "*.{test,spec}.*"
mcp__ccw-tools__codex_lens({
action: "search",
query: "(describe|it|test).*{keyword}",
path: "."
})
```
@@ -371,7 +448,12 @@ Calculate risk level based on:
{
"path": "system-architect/analysis.md",
"type": "primary",
"content": "# System Architecture Analysis\n\n## Overview\n..."
"content": "# System Architecture Analysis\n\n## Overview\n@analysis-architecture.md\n@analysis-recommendations.md"
},
{
"path": "system-architect/analysis-architecture.md",
"type": "supplementary",
"content": "# Architecture Assessment\n\n..."
}
]
}
@@ -393,10 +475,39 @@ Calculate risk level based on:
},
"affected_modules": ["auth", "user-model", "middleware"],
"mitigation_strategy": "Incremental refactoring with backward compatibility"
},
"exploration_results": {
"manifest_path": ".workflow/active/{session}/.process/explorations-manifest.json",
"exploration_count": 3,
"complexity": "Medium",
"angles": ["architecture", "dependencies", "testing"],
"explorations": [
{
"angle": "architecture",
"file": "exploration-architecture.json",
"path": ".workflow/active/{session}/.process/exploration-architecture.json",
"index": 1,
"summary": {
"relevant_files_count": 5,
"key_patterns": "Service layer with DI",
"integration_points": "Container.registerService:45-60"
}
}
],
"aggregated_insights": {
"critical_files": [{"path": "src/auth/AuthService.ts", "relevance": 0.95, "mentioned_by_angles": ["architecture"]}],
"conflict_indicators": [{"type": "pattern_mismatch", "description": "...", "source_angle": "architecture", "severity": "medium"}],
"clarification_needs": [{"question": "...", "context": "...", "options": [], "source_angle": "architecture"}],
"constraints": [{"constraint": "Must follow existing DI pattern", "source_angle": "architecture"}],
"all_patterns": [{"patterns": "Service layer with DI", "source_angle": "architecture"}],
"all_integration_points": [{"points": "Container.registerService:45-60", "source_angle": "architecture"}]
}
}
}
```
**Note**: `exploration_results` is populated when exploration files exist (from context-gather parallel explore phase). If no explorations, this field is omitted or empty.
## Quality Validation
@@ -448,14 +559,14 @@ Output: .workflow/session/{session}/.process/context-package.json
- Expose sensitive data (credentials, keys)
- Exceed file limits (50 total)
- Include binaries/generated files
- Use ripgrep if code-index available
- Use ripgrep if CodexLens available
**ALWAYS**:
- Initialize code-index in Phase 0
- Initialize CodexLens in Phase 0
- Execute get_modules_by_depth.sh
- Load CLAUDE.md/README.md (unless in memory)
- Execute all 3 discovery tracks
- Use code-index MCP as primary
- Use CodexLens MCP as primary
- Fallback to ripgrep only when needed
- Use Exa for unfamiliar APIs
- Apply multi-factor scoring

View File

@@ -61,9 +61,9 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
**Step 2** (CLI execution):
- Agent substitutes [target_folders] into command
- Agent executes CLI command via Bash tool:
- Agent executes CLI command via CCW:
```bash
bash(cd src/modules && gemini --approval-mode yolo -p "
ccw cli -p "
PURPOSE: Generate module documentation
TASK: Create API.md and README.md for each module
MODE: write
@@ -71,7 +71,7 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
./src/modules/api|code|code:3|dirs:0
EXPECTED: Documentation files in .workflow/docs/my_project/src/modules/
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt) | Mirror source structure
")
" --tool gemini --mode write --cd src/modules
```
4. **CLI Execution** (Gemini CLI):
@@ -216,7 +216,7 @@ Before completion, verify:
{
"step": "analyze_module_structure",
"action": "Deep analysis of module structure and API",
"command": "bash(cd src/auth && gemini \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\")",
"command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\" --tool gemini --mode analysis --cd src/auth",
"output_to": "module_analysis",
"on_error": "fail"
}

View File

@@ -8,7 +8,7 @@ You are a documentation update coordinator for complex projects. Orchestrate par
## Core Mission
Execute depth-parallel updates for all modules using `~/.claude/scripts/update_module_claude.sh`. **Every module path must be processed**.
Execute depth-parallel updates for all modules using `ccw tool exec update_module_claude`. **Every module path must be processed**.
## Input Context
@@ -42,12 +42,12 @@ TodoWrite([
# 3. Launch parallel jobs (max 4)
# Depth 5 example (Layer 3 - use multi-layer):
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/analysis" "gemini" &
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/development" "gemini" &
ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/analysis","tool":"gemini"}' &
ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/development","tool":"gemini"}' &
# Depth 1 example (Layer 2 - use single-layer):
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/auth" "gemini" &
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/api" "gemini" &
ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/auth","tool":"gemini"}' &
ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/api","tool":"gemini"}' &
# ... up to 4 concurrent jobs
# 4. Wait for all depth jobs to complete
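wait  # block until every background update job has finished before moving to the next depth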

View File

@@ -36,10 +36,10 @@ You are a test context discovery specialist focused on gathering test coverage i
**Use**: Phase 1 source context loading
### 2. Test Coverage Discovery
**Primary (Code-Index MCP)**:
- `mcp__code-index__find_files(pattern)` - Find test files (*.test.*, *.spec.*)
- `mcp__code-index__search_code_advanced()` - Search test patterns
- `mcp__code-index__get_file_summary()` - Analyze test structure
**Primary (CCW CodexLens MCP)**:
- `mcp__ccw-tools__codex_lens(action="search_files", query="*.test.*")` - Find test files
- `mcp__ccw-tools__codex_lens(action="search", query="pattern")` - Search test patterns
- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Analyze test structure
**Fallback (CLI)**:
- `rg` (ripgrep) - Fast test pattern search
@@ -120,9 +120,10 @@ for (const summary_path of summaries) {
**2.1 Existing Test Discovery**:
```javascript
// Method 1: Code-Index MCP (preferred)
const test_files = mcp__code-index__find_files({
patterns: ["*.test.*", "*.spec.*", "*test_*.py", "*_test.go"]
// Method 1: CodexLens MCP (preferred)
const test_files = mcp__ccw-tools__codex_lens({
action: "search_files",
query: "*.test.* OR *.spec.* OR test_*.py OR *_test.go"
});
// Method 2: Fallback CLI

View File

@@ -83,17 +83,18 @@ When task JSON contains implementation_approach array:
- L1 (Unit): `*.test.*`, `*.spec.*` in `__tests__/`, `tests/unit/`
- L2 (Integration): `tests/integration/`, `*.integration.test.*`
- L3 (E2E): `tests/e2e/`, `*.e2e.test.*`, `cypress/`, `playwright/`
- **context-package.json** (CCW Workflow): Extract artifact paths using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
- **context-package.json** (CCW Workflow): Use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- Identify test commands from project configuration
```bash
# Detect test framework and multi-layered commands
if [ -f "package.json" ]; then
# Extract layer-specific test commands
LINT_CMD=$(cat package.json | jq -r '.scripts.lint // "eslint ."')
UNIT_CMD=$(cat package.json | jq -r '.scripts["test:unit"] // .scripts.test')
INTEGRATION_CMD=$(cat package.json | jq -r '.scripts["test:integration"] // ""')
E2E_CMD=$(cat package.json | jq -r '.scripts["test:e2e"] // ""')
# Extract layer-specific test commands using Read tool or jq
PKG_JSON=$(cat package.json)
LINT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts.lint // "eslint ."')
UNIT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:unit"] // .scripts.test')
INTEGRATION_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:integration"] // ""')
E2E_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:e2e"] // ""')
elif [ -f "pytest.ini" ] || [ -f "setup.py" ]; then
LINT_CMD="ruff check . || flake8 ."
UNIT_CMD="pytest tests/unit/"
@@ -142,9 +143,9 @@ run_test_layer "L1-unit" "$UNIT_CMD"
### 3. Failure Diagnosis & Fixing Loop
**Execution Modes**:
**Execution Modes** (determined by `flow_control.implementation_approach`):
**A. Manual Mode (Default, meta.use_codex=false)**:
**A. Agent Mode (Default, no `command` field in steps)**:
```
WHILE tests are failing AND iterations < max_iterations:
1. Use Gemini to diagnose failure (bug-fix template)
@@ -155,17 +156,17 @@ WHILE tests are failing AND iterations < max_iterations:
END WHILE
```
**B. Codex Mode (meta.use_codex=true)**:
**B. CLI Mode (`command` field present in implementation_approach steps)**:
```
WHILE tests are failing AND iterations < max_iterations:
1. Use Gemini to diagnose failure (bug-fix template)
2. Use Codex to apply fixes automatically with resume mechanism
2. Execute `command` field (e.g., Codex) to apply fixes automatically
3. Re-run test suite
4. Verify fix doesn't break other tests
END WHILE
```
**Codex Resume in Test-Fix Cycle** (when `meta.use_codex=true`):
**Codex Resume in Test-Fix Cycle** (when step has `command` with Codex):
- First iteration: Start new Codex session with full context
- Subsequent iterations: Use `resume --last` to maintain fix history and apply consistent strategies

47
.claude/cli-tools.json Normal file
View File

@@ -0,0 +1,47 @@
{
"version": "1.0.0",
"tools": {
"gemini": {
"enabled": true,
"isBuiltin": true,
"command": "gemini",
"description": "Google AI for code analysis"
},
"qwen": {
"enabled": true,
"isBuiltin": true,
"command": "qwen",
"description": "Alibaba AI assistant"
},
"codex": {
"enabled": true,
"isBuiltin": true,
"command": "codex",
"description": "OpenAI code generation"
},
"claude": {
"enabled": true,
"isBuiltin": true,
"command": "claude",
"description": "Anthropic AI assistant"
}
},
"customEndpoints": [],
"defaultTool": "gemini",
"settings": {
"promptFormat": "plain",
"smartContext": {
"enabled": false,
"maxFiles": 10
},
"nativeResume": true,
"recursiveQuery": true,
"cache": {
"injectionMode": "auto",
"defaultPrefix": "",
"defaultSuffix": ""
},
"codeIndexMcp": "ace"
},
"$schema": "./cli-tools.schema.json"
}

516
.claude/commands/clean.md Normal file
View File

@@ -0,0 +1,516 @@
---
name: clean
description: Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution
argument-hint: "[--dry-run] [\"focus area\"]"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Glob(*), Bash(*), Write(*)
---
# Clean Command (/clean)
## Overview
Intelligent cleanup command that explores the codebase to identify the development mainline, discovers artifacts that have drifted from it, and safely removes stale sessions, abandoned documents, and dead code.
**Core capabilities:**
- Mainline detection: Identify active development branches and core modules
- Drift analysis: Find sessions, documents, and code that deviate from mainline
- Intelligent discovery: cli-explore-agent-based artifact scanning
- Safe execution: Confirmation-based cleanup with dry-run preview
## Usage
```bash
/clean # Full intelligent cleanup (explore → analyze → confirm → execute)
/clean --dry-run # Explore and analyze only, no execution
/clean "auth module" # Focus cleanup on specific area
```
## Execution Process
```
Phase 1: Mainline Detection
├─ Analyze git history for development trends
├─ Identify core modules (high commit frequency)
├─ Map active vs stale branches
└─ Build mainline profile
Phase 2: Drift Discovery (cli-explore-agent)
├─ Scan workflow sessions for orphaned artifacts
├─ Identify documents drifted from mainline
├─ Detect dead code and unused exports
└─ Generate cleanup manifest
Phase 3: Confirmation
├─ Display cleanup summary by category
├─ Show impact analysis (files, size, risk)
└─ AskUserQuestion: Select categories to clean
Phase 4: Execution (unless --dry-run)
├─ Execute cleanup by category
├─ Update manifests and indexes
└─ Report results
```
## Implementation
### Phase 1: Mainline Detection
**Session Setup**:
```javascript
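// Note: shifts the epoch by 8h so the ISO string reads as UTC+8 wall-clock time (the trailing "Z" is nominal).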
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `clean-${dateStr}`
const sessionFolder = `.workflow/.clean/${sessionId}`
Bash(`mkdir -p ${sessionFolder}`)
```
**Step 1.1: Git History Analysis**
```bash
# Get commit frequency by directory (last 30 days)
bash(git log --since="30 days ago" --name-only --pretty=format: | grep -v "^$" | cut -d/ -f1-2 | sort | uniq -c | sort -rn | head -20)
# Get recent active branches
bash(git for-each-ref --sort=-committerdate refs/heads/ --format='%(refname:short) %(committerdate:relative)' | head -10)
# Get files with most recent changes
bash(git log --since="7 days ago" --name-only --pretty=format: | grep -v "^$" | sort | uniq -c | sort -rn | head -30)
```
**Step 1.2: Build Mainline Profile**
```javascript
const mainlineProfile = {
coreModules: [], // High-frequency directories
activeFiles: [], // Recently modified files
activeBranches: [], // Branches with recent commits
staleThreshold: {
sessions: 7, // Days
branches: 30,
documents: 14
},
timestamp: getUtc8ISOString()
}
// Parse git log output to identify core modules
// Modules with >5 commits in last 30 days = core
// Modules with 0 commits in last 30 days = potentially stale
Write(`${sessionFolder}/mainline-profile.json`, JSON.stringify(mainlineProfile, null, 2))
```
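A sketch of the classification described in those comments (assumes the Step 1.1 output is captured as `gitLogOutput`, with lines like `  12 src/core`, and a full module list `allModuleDirs` from `ccw tool exec get_modules_by_depth`; both names are illustrative):
```javascript
// Parse "count dir" lines produced by `git log ... | uniq -c | sort -rn`.
const commitCounts = gitLogOutput.trim().split('\n').map(line => {
  const m = line.trim().match(/^(\d+)\s+(.+)$/);
  return m ? { dir: m[2], commits: Number(m[1]) } : null;
}).filter(Boolean);

// >5 commits in the last 30 days = core module.
mainlineProfile.coreModules = commitCounts
  .filter(m => m.commits > 5)
  .map(m => m.dir);

// Directories absent from the git output had zero commits in the window.
const active = new Set(commitCounts.map(m => m.dir));
const staleCandidates = allModuleDirs.filter(d => !active.has(d));
```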
---
### Phase 2: Drift Discovery
**Launch cli-explore-agent for intelligent artifact scanning**:
```javascript
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description="Discover stale artifacts",
prompt=`
## Task Objective
Discover artifacts that have drifted from the development mainline. Identify stale sessions, abandoned documents, and dead code for cleanup.
## Context
- **Session Folder**: ${sessionFolder}
- **Mainline Profile**: ${sessionFolder}/mainline-profile.json
- **Focus Area**: ${focusArea || "entire project"}
## Discovery Categories
### Category 1: Stale Workflow Sessions
Scan and analyze workflow session directories:
**Locations to scan**:
- .workflow/active/WFS-* (active sessions)
- .workflow/archives/WFS-* (archived sessions)
- .workflow/.lite-plan/* (lite-plan sessions)
- .workflow/.debug/DBG-* (debug sessions)
**Staleness criteria**:
- Active sessions: No modification >7 days + no related git commits
- Archives: >30 days old + no feature references in project.json
- Lite-plan: >7 days old + plan.json not executed
- Debug: >3 days old + issue not in recent commits
**Analysis steps**:
1. List all session directories with modification times
2. Cross-reference with git log (are session topics in recent commits?)
3. Check manifest.json for orphan entries
4. Identify sessions with .archiving marker (interrupted)
### Category 2: Drifted Documents
Scan documentation that no longer aligns with code:
**Locations to scan**:
- .claude/rules/tech/* (generated tech rules)
- .workflow/.scratchpad/* (temporary notes)
- **/CLAUDE.md (module documentation)
- **/README.md (outdated descriptions)
**Drift criteria**:
- Tech rules: Referenced files no longer exist
- Scratchpad: Any file (always temporary)
- Module docs: Describe functions/classes that were removed
- READMEs: Reference deleted directories
**Analysis steps**:
1. Parse document content for file/function references
2. Verify referenced entities still exist in codebase
3. Flag documents with >30% broken references
### Category 3: Dead Code
Identify code that is no longer used:
**Scan patterns**:
- Unused exports (exported but never imported)
- Orphan files (not imported anywhere)
- Commented-out code blocks (>10 lines)
- TODO/FIXME comments >90 days old
**Analysis steps**:
1. Build import graph using rg/grep
2. Identify exports with no importers
3. Find files not in import graph
4. Scan for large comment blocks
## Output Format
Write to: ${sessionFolder}/cleanup-manifest.json
\`\`\`json
{
"generated_at": "ISO timestamp",
"mainline_summary": {
"core_modules": ["src/core", "src/api"],
"active_branches": ["main", "feature/auth"],
"health_score": 0.85
},
"discoveries": {
"stale_sessions": [
{
"path": ".workflow/active/WFS-old-feature",
"type": "active",
"age_days": 15,
"reason": "No related commits in 15 days",
"size_kb": 1024,
"risk": "low"
}
],
"drifted_documents": [
{
"path": ".claude/rules/tech/deprecated-lib",
"type": "tech_rules",
"broken_references": 5,
"total_references": 6,
"drift_percentage": 83,
"reason": "Referenced library removed",
"risk": "low"
}
],
"dead_code": [
{
"path": "src/utils/legacy.ts",
"type": "orphan_file",
"reason": "Not imported by any file",
"last_modified": "2025-10-01",
"risk": "medium"
}
]
},
"summary": {
"total_items": 12,
"total_size_mb": 45.2,
"by_category": {
"stale_sessions": 5,
"drifted_documents": 4,
"dead_code": 3
},
"by_risk": {
"low": 8,
"medium": 3,
"high": 1
}
}
}
\`\`\`
## Execution Commands
\`\`\`bash
# Session directories
find .workflow -type d -name "WFS-*" -o -name "DBG-*" 2>/dev/null
# Check modification times (Linux; macOS/BSD stat uses -f instead: stat -f "%m %N")
stat -c "%Y %n" .workflow/active/WFS-* 2>/dev/null
# Check modification times (Windows PowerShell via bash)
powershell -Command "Get-ChildItem '.workflow/active/WFS-*' | ForEach-Object { Write-Output \"$($_.LastWriteTime) $($_.FullName)\" }"
# Find orphan exports (TypeScript)
rg "export (const|function|class|interface|type)" --type ts -l
# Find imports
rg "import.*from" --type ts
# Find large comment blocks
rg "^\\s*/\\*" -A 10 --type ts
# Find old TODOs
rg "TODO|FIXME" --type ts -n
\`\`\`
## Success Criteria
- [ ] All session directories scanned with age calculation
- [ ] Documents cross-referenced with existing code
- [ ] Dead code detection via import graph analysis
- [ ] cleanup-manifest.json written with complete data
- [ ] Each item has risk level and cleanup reason
`
)
```
---
### Phase 3: Confirmation
**Step 3.1: Display Summary**
```javascript
const manifest = JSON.parse(Read(`${sessionFolder}/cleanup-manifest.json`))
console.log(`
## Cleanup Discovery Report
**Mainline Health**: ${Math.round(manifest.mainline_summary.health_score * 100)}%
**Core Modules**: ${manifest.mainline_summary.core_modules.join(', ')}
### Summary
| Category | Count | Size | Risk |
|----------|-------|------|------|
| Stale Sessions | ${manifest.summary.by_category.stale_sessions} | - | ${getRiskSummary('sessions')} |
| Drifted Documents | ${manifest.summary.by_category.drifted_documents} | - | ${getRiskSummary('documents')} |
| Dead Code | ${manifest.summary.by_category.dead_code} | - | ${getRiskSummary('code')} |
**Total**: ${manifest.summary.total_items} items, ~${manifest.summary.total_size_mb} MB
### Stale Sessions
${manifest.discoveries.stale_sessions.map(s =>
`- ${s.path} (${s.age_days}d, ${s.risk}): ${s.reason}`
).join('\n')}
### Drifted Documents
${manifest.discoveries.drifted_documents.map(d =>
`- ${d.path} (${d.drift_percentage}% broken, ${d.risk}): ${d.reason}`
).join('\n')}
### Dead Code
${manifest.discoveries.dead_code.map(c =>
`- ${c.path} (${c.type}, ${c.risk}): ${c.reason}`
).join('\n')}
`)
```
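`getRiskSummary` is used above but not defined; a plausible stand-in (hypothetical), counting risk levels per category from the manifest:
```javascript
// Hypothetical helper: returns e.g. "3 low, 1 medium" for a category.
function getRiskSummary(category) {
  const items = {
    sessions: manifest.discoveries.stale_sessions,
    documents: manifest.discoveries.drifted_documents,
    code: manifest.discoveries.dead_code
  }[category] || [];
  const counts = items.reduce((acc, i) => {
    acc[i.risk] = (acc[i.risk] || 0) + 1;
    return acc;
  }, {});
  return Object.entries(counts).map(([r, n]) => `${n} ${r}`).join(', ') || '-';
}
```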
**Step 3.2: Dry-Run Exit**
```javascript
if (flags.includes('--dry-run')) {
console.log(`
---
**Dry-run mode**: No changes made.
Manifest saved to: ${sessionFolder}/cleanup-manifest.json
To execute cleanup: /clean
`)
return
}
```
**Step 3.3: User Confirmation**
```javascript
AskUserQuestion({
questions: [
{
question: "Which categories to clean?",
header: "Categories",
multiSelect: true,
options: [
{
label: "Sessions",
description: `${manifest.summary.by_category.stale_sessions} stale workflow sessions`
},
{
label: "Documents",
description: `${manifest.summary.by_category.drifted_documents} drifted documents`
},
{
label: "Dead Code",
description: `${manifest.summary.by_category.dead_code} unused code files`
}
]
},
{
question: "Risk level to include?",
header: "Risk",
multiSelect: false,
options: [
{ label: "Low only", description: "Safest - only obviously stale items" },
{ label: "Low + Medium", description: "Recommended - includes likely unused items" },
{ label: "All", description: "Aggressive - includes high-risk items" }
]
}
]
})
```
---
### Phase 4: Execution
**Step 4.1: Filter Items by Selection**
```javascript
const selectedCategories = userSelection.categories // ['Sessions', 'Documents', ...]
const riskLevel = userSelection.risk // 'Low only', 'Low + Medium', 'All'
const riskFilter = {
'Low only': ['low'],
'Low + Medium': ['low', 'medium'],
'All': ['low', 'medium', 'high']
}[riskLevel]
const itemsToClean = []
if (selectedCategories.includes('Sessions')) {
itemsToClean.push(...manifest.discoveries.stale_sessions.filter(s => riskFilter.includes(s.risk)))
}
if (selectedCategories.includes('Documents')) {
itemsToClean.push(...manifest.discoveries.drifted_documents.filter(d => riskFilter.includes(d.risk)))
}
if (selectedCategories.includes('Dead Code')) {
itemsToClean.push(...manifest.discoveries.dead_code.filter(c => riskFilter.includes(c.risk)))
}
TodoWrite({
todos: itemsToClean.map(item => ({
content: `Clean: ${item.path}`,
status: "pending",
activeForm: `Cleaning ${item.path}`
}))
})
```
**Step 4.2: Execute Cleanup**
```javascript
const results = { deleted: [], failed: [], skipped: [] }
for (const item of itemsToClean) {
TodoWrite({ todos: [...] }) // Mark current as in_progress
try {
if (item.type === 'orphan_file' || item.type === 'dead_export') {
// Dead code: Delete file or remove export
Bash({ command: `rm -rf "${item.path}"` })
} else {
// Sessions and documents: Delete directory/file
Bash({ command: `rm -rf "${item.path}"` })
}
results.deleted.push(item.path)
TodoWrite({ todos: [...] }) // Mark as completed
} catch (error) {
results.failed.push({ path: item.path, error: error.message })
}
}
```
**Step 4.3: Update Manifests**
```javascript
// Update archives manifest if sessions were deleted
if (selectedCategories.includes('Sessions')) {
const archiveManifestPath = '.workflow/archives/manifest.json'
if (fileExists(archiveManifestPath)) {
const archiveManifest = JSON.parse(Read(archiveManifestPath))
const deletedSessionIds = results.deleted
.filter(p => p.includes('WFS-'))
.map(p => p.split('/').pop())
const updatedManifest = archiveManifest.filter(entry =>
!deletedSessionIds.includes(entry.session_id)
)
Write(archiveManifestPath, JSON.stringify(updatedManifest, null, 2))
}
}
// Update project.json if features referenced deleted sessions
const projectPath = '.workflow/project.json'
if (fileExists(projectPath)) {
const project = JSON.parse(Read(projectPath))
const deletedPaths = new Set(results.deleted)
project.features = project.features.filter(f =>
!deletedPaths.has(f.traceability?.archive_path)
)
project.statistics.total_features = project.features.length
project.statistics.last_updated = getUtc8ISOString()
Write(projectPath, JSON.stringify(project, null, 2))
}
```
**Step 4.4: Report Results**
```javascript
console.log(`
## Cleanup Complete
**Deleted**: ${results.deleted.length} items
**Failed**: ${results.failed.length} items
**Skipped**: ${results.skipped.length} items
### Deleted Items
${results.deleted.map(p => `- ${p}`).join('\n')}
${results.failed.length > 0 ? `
### Failed Items
${results.failed.map(f => `- ${f.path}: ${f.error}`).join('\n')}
` : ''}
Cleanup manifest archived to: ${sessionFolder}/cleanup-manifest.json
`)
```
---
## Session Folder Structure
```
.workflow/.clean/{YYYY-MM-DD}/
├── mainline-profile.json # Git history analysis
└── cleanup-manifest.json # Discovery results
```
## Risk Level Definitions
| Risk | Description | Examples |
|------|-------------|----------|
| **Low** | Safe to delete, no dependencies | Empty sessions, scratchpad files, 100% broken docs |
| **Medium** | Likely unused, verify before delete | Orphan files, old archives, partially broken docs |
| **High** | May have hidden dependencies | Files with some imports, recent modifications |
## Error Handling
| Situation | Action |
|-----------|--------|
| No git repository | Skip mainline detection, use file timestamps only |
| Session in use (.archiving) | Skip with warning |
| Permission denied | Report error, continue with others |
| Manifest parse error | Regenerate from filesystem scan |
| Empty discovery | Report "codebase is clean" |
## Related Commands
- `/workflow:session:complete` - Properly archive active sessions
- `/memory:compact` - Save session memory before cleanup
- `/workflow:status` - View current workflow state

View File

@@ -191,7 +191,7 @@ target/
### Step 2: Workspace Analysis (MANDATORY FIRST)
```bash
# Analyze workspace structure
bash(~/.claude/scripts/get_modules_by_depth.sh json)
bash(ccw tool exec get_modules_by_depth '{"format":"json"}')
```
### Step 3: Technology Detection

View File

@@ -0,0 +1,383 @@
---
name: compact
description: Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state and saving it via the MCP core_memory tool
argument-hint: "[optional: session description]"
allowed-tools: mcp__ccw-tools__core_memory(*), Read(*)
examples:
- /memory:compact
- /memory:compact "completed core-memory module"
---
# Memory Compact Command (/memory:compact)
## 1. Overview
The `memory:compact` command **compresses current session working memory** into structured text optimized for **session recovery**, extracts critical information, and saves it to persistent storage via MCP `core_memory` tool.
**Core Philosophy**:
- **Session Recovery First**: Capture everything needed to resume work seamlessly
- **Minimize Re-exploration**: Include file paths, decisions, and state to avoid redundant analysis
- **Preserve Train of Thought**: Keep notes and hypotheses for complex debugging
- **Actionable State**: Record last action result and known issues
## 2. Parameters
- `"session description"` (Optional): Session description to supplement objective
- Example: "completed core-memory module"
- Example: "debugging JWT refresh - suspected memory leak"
## 3. Structured Output Format
```markdown
## Session ID
[WFS-ID if workflow session active, otherwise (none)]
## Project Root
[Absolute path to project root, e.g., D:\Claude_dms3]
## Objective
[High-level goal - the "North Star" of this session]
## Execution Plan
[CRITICAL: Embed the LATEST plan in its COMPLETE and DETAILED form]
### Source: [workflow | todo | user-stated | inferred]
<details>
<summary>Full Execution Plan (Click to expand)</summary>
[PRESERVE COMPLETE PLAN VERBATIM - DO NOT SUMMARIZE]
- ALL phases, tasks, subtasks
- ALL file paths (absolute)
- ALL dependencies and prerequisites
- ALL acceptance criteria
- ALL status markers ([x] done, [ ] pending)
- ALL notes and context
Example:
## Phase 1: Setup
- [x] Initialize project structure
- Created D:\Claude_dms3\src\core\index.ts
- Added dependencies: lodash, zod
- [ ] Configure TypeScript
- Update tsconfig.json for strict mode
## Phase 2: Implementation
- [ ] Implement core API
- Target: D:\Claude_dms3\src\api\handler.ts
- Dependencies: Phase 1 complete
- Acceptance: All tests pass
</details>
## Working Files (Modified)
[Absolute paths to actively modified files]
- D:\Claude_dms3\src\file1.ts (role: main implementation)
- D:\Claude_dms3\tests\file1.test.ts (role: unit tests)
## Reference Files (Read-Only)
[Absolute paths to context files - NOT modified but essential for understanding]
- D:\Claude_dms3\.claude\CLAUDE.md (role: project instructions)
- D:\Claude_dms3\src\types\index.ts (role: type definitions)
- D:\Claude_dms3\package.json (role: dependencies)
## Last Action
[Last significant action and its result/status]
## Decisions
- [Decision]: [Reasoning]
- [Decision]: [Reasoning]
## Constraints
- [User-specified limitation or preference]
## Dependencies
- [Added/changed packages or environment requirements]
## Known Issues
- [Deferred bug or edge case]
## Changes Made
- [Completed modification]
## Pending
- [Next step] or (none)
## Notes
[Unstructured thoughts, hypotheses, debugging trails]
```
## 4. Field Definitions
| Field | Purpose | Recovery Value |
|-------|---------|----------------|
| **Session ID** | Workflow session identifier (WFS-*) | Links memory to specific stateful task execution |
| **Project Root** | Absolute path to project directory | Enables correct path resolution in new sessions |
| **Objective** | Ultimate goal of the session | Prevents losing track of broader feature |
| **Execution Plan** | Complete plan from any source (verbatim) | Preserves full planning context, avoids re-planning |
| **Working Files** | Actively modified files (absolute paths) | Immediately identifies where work was happening |
| **Reference Files** | Read-only context files (absolute paths) | Eliminates re-exploration for critical context |
| **Last Action** | Final tool output/status | Immediate state awareness (success/failure) |
| **Decisions** | Architectural choices + reasoning | Prevents re-litigating settled decisions |
| **Constraints** | User-imposed limitations | Maintains personalized coding style |
| **Dependencies** | Package/environment changes | Prevents missing dependency errors |
| **Known Issues** | Deferred bugs/edge cases | Ensures issues aren't forgotten |
| **Changes Made** | Completed modifications | Clear record of what was done |
| **Pending** | Next steps | Immediate action items |
| **Notes** | Hypotheses, debugging trails | Preserves "train of thought" |
## 5. Execution Flow
### Step 1: Analyze Current Session
Extract the following from conversation history:
```javascript
const sessionAnalysis = {
sessionId: "", // WFS-* if workflow session active, null otherwise
projectRoot: "", // Absolute path: D:\Claude_dms3
objective: "", // High-level goal (1-2 sentences)
executionPlan: {
source: "workflow" | "todo" | "user-stated" | "inferred",
content: "" // Full plan content - ALWAYS preserve COMPLETE and DETAILED form
},
workingFiles: [], // {absolutePath, role} - modified files
referenceFiles: [], // {absolutePath, role} - read-only context files
lastAction: "", // Last significant action + result
decisions: [], // {decision, reasoning}
constraints: [], // User-specified limitations
dependencies: [], // Added/changed packages
knownIssues: [], // Deferred bugs
changesMade: [], // Completed modifications
pending: [], // Next steps
notes: "" // Unstructured thoughts
}
```
### Step 2: Generate Structured Text
```javascript
// Helper: Generate execution plan section
const generateExecutionPlan = (plan) => {
const sourceLabels = {
'workflow': 'workflow (IMPL_PLAN.md)',
'todo': 'todo (TodoWrite)',
'user-stated': 'user-stated',
'inferred': 'inferred'
};
// CRITICAL: Preserve complete plan content verbatim - DO NOT summarize
return `### Source: ${sourceLabels[plan.source] || plan.source}
<details>
<summary>Full Execution Plan (Click to expand)</summary>
${plan.content}
</details>`;
};
const structuredText = `## Session ID
${sessionAnalysis.sessionId || '(none)'}
## Project Root
${sessionAnalysis.projectRoot}
## Objective
${sessionAnalysis.objective}
## Execution Plan
${generateExecutionPlan(sessionAnalysis.executionPlan)}
## Working Files (Modified)
${sessionAnalysis.workingFiles.map(f => `- ${f.absolutePath} (role: ${f.role})`).join('\n') || '(none)'}
## Reference Files (Read-Only)
${sessionAnalysis.referenceFiles.map(f => `- ${f.absolutePath} (role: ${f.role})`).join('\n') || '(none)'}
## Last Action
${sessionAnalysis.lastAction}
## Decisions
${sessionAnalysis.decisions.map(d => `- ${d.decision}: ${d.reasoning}`).join('\n') || '(none)'}
## Constraints
${sessionAnalysis.constraints.map(c => `- ${c}`).join('\n') || '(none)'}
## Dependencies
${sessionAnalysis.dependencies.map(d => `- ${d}`).join('\n') || '(none)'}
## Known Issues
${sessionAnalysis.knownIssues.map(i => `- ${i}`).join('\n') || '(none)'}
## Changes Made
${sessionAnalysis.changesMade.map(c => `- ${c}`).join('\n') || '(none)'}
## Pending
${sessionAnalysis.pending.length > 0
? sessionAnalysis.pending.map(p => `- ${p}`).join('\n')
: '(none)'}
## Notes
${sessionAnalysis.notes || '(none)'}`
```
### Step 3: Import to Core Memory via MCP
Use the MCP `core_memory` tool to save the structured text:
```javascript
mcp__ccw-tools__core_memory({
operation: "import",
text: structuredText
})
```
Or via CLI (pipe structured text to import):
```bash
# Write structured text to temp file, then import
echo "$structuredText" | ccw core-memory import
# Or from a file
ccw core-memory import --file /path/to/session-memory.md
```
**Response Format**:
```json
{
"operation": "import",
"id": "CMEM-YYYYMMDD-HHMMSS",
"message": "Created memory: CMEM-YYYYMMDD-HHMMSS"
}
```
### Step 4: Report Recovery ID
After successful import, **clearly display the Recovery ID** to the user:
```
╔════════════════════════════════════════════════════════════════════════════╗
║ ✓ Session Memory Saved ║
║ ║
║ Recovery ID: CMEM-YYYYMMDD-HHMMSS ║
║ ║
║ To restore: "Please import memory <ID>" ║
║ (MCP: core_memory export | CLI: ccw core-memory export --id <ID>) ║
╚════════════════════════════════════════════════════════════════════════════╝
```
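Restoring in a later session might then look like this (the `export` operation and the CLI `--id` flag are named in the box above; the MCP parameter name `id` is an assumption):
```javascript
mcp__ccw-tools__core_memory({
  operation: "export",
  id: "CMEM-YYYYMMDD-HHMMSS"
})
```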
## 6. Quality Checklist
Before generating:
- [ ] Session ID captured if workflow session active (WFS-*)
- [ ] Project Root is absolute path (e.g., D:\Claude_dms3)
- [ ] Objective clearly states the "North Star" goal
- [ ] Execution Plan: COMPLETE plan preserved VERBATIM (no summarization)
- [ ] Plan Source: Clearly identified (workflow | todo | user-stated | inferred)
- [ ] Plan Details: ALL phases, tasks, file paths, dependencies, status markers included
- [ ] All file paths are ABSOLUTE (not relative)
- [ ] Working Files: 3-8 modified files with roles
- [ ] Reference Files: Key context files (CLAUDE.md, types, configs)
- [ ] Last Action captures final state (success/failure)
- [ ] Decisions include reasoning, not just choices
- [ ] Known Issues records deferred bugs so they aren't silently forgotten
- [ ] Notes preserve debugging hypotheses if any
## 7. Path Resolution Rules
### Project Root Detection
1. Check current working directory from environment
2. Look for project markers: `.git/`, `package.json`, `.claude/`
3. Use the topmost directory containing these markers
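A sketch of that upward walk (Node.js; `startDir` would be the session's working directory):
```javascript
const fs = require('fs');
const path = require('path');

// Return the TOPMOST directory containing a project marker, walking up from startDir.
function detectProjectRoot(startDir) {
  const markers = ['.git', 'package.json', '.claude'];
  let topmost = null;
  let dir = path.resolve(startDir);
  while (true) {
    if (markers.some(m => fs.existsSync(path.join(dir, m)))) topmost = dir;
    const parent = path.dirname(dir);
    if (parent === dir) break; // reached the filesystem root
    dir = parent;
  }
  return topmost || path.resolve(startDir);
}
```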
### Absolute Path Conversion
```javascript
// Convert relative to absolute (Node.js path module)
const path = require('path');
const toAbsolutePath = (relativePath, projectRoot) => {
if (path.isAbsolute(relativePath)) return relativePath;
return path.join(projectRoot, relativePath);
};
// Example: "src/api/auth.ts" → "D:\Claude_dms3\src\api\auth.ts"
```
### Reference File Categories
| Category | Examples | Priority |
|----------|----------|----------|
| Project Config | `.claude/CLAUDE.md`, `package.json`, `tsconfig.json` | High |
| Type Definitions | `src/types/*.ts`, `*.d.ts` | High |
| Related Modules | Parent/sibling modules with shared interfaces | Medium |
| Test Files | Corresponding test files for modified code | Medium |
| Documentation | `README.md`, `ARCHITECTURE.md` | Low |
## 8. Plan Detection (Priority Order)
### Priority 1: Workflow Session (IMPL_PLAN.md)
```javascript
// Check for active workflow session
const manifest = await mcp__ccw-tools__session_manager({
operation: "list",
location: "active"
});
if (manifest.sessions?.length > 0) {
const session = manifest.sessions[0];
const plan = await mcp__ccw-tools__session_manager({
operation: "read",
session_id: session.id,
content_type: "plan"
});
sessionAnalysis.sessionId = session.id;
sessionAnalysis.executionPlan.source = "workflow";
sessionAnalysis.executionPlan.content = plan.content;
}
```
### Priority 2: TodoWrite (Current Session Todos)
```javascript
// Extract from conversation - look for TodoWrite tool calls
// Preserve COMPLETE todo list with all details
const todos = extractTodosFromConversation();
if (todos.length > 0) {
sessionAnalysis.executionPlan.source = "todo";
// Format todos with full context - preserve status markers
sessionAnalysis.executionPlan.content = todos.map(t =>
`- [${t.status === 'completed' ? 'x' : t.status === 'in_progress' ? '>' : ' '}] ${t.content}`
).join('\n');
}
```
### Priority 3: User-Stated Plan
```javascript
// Look for explicit plan statements in user messages:
// - "Here's my plan: 1. ... 2. ... 3. ..."
// - "I want to: first..., then..., finally..."
// - Numbered or bulleted lists describing steps
const userPlan = extractUserStatedPlan();
if (userPlan) {
sessionAnalysis.executionPlan.source = "user-stated";
sessionAnalysis.executionPlan.content = userPlan;
}
```
### Priority 4: Inferred Plan
```javascript
// If no explicit plan, infer from:
// - Task description and breakdown discussion
// - Sequence of actions taken
// - Outstanding work mentioned
const inferredPlan = inferPlanFromDiscussion();
if (inferredPlan) {
sessionAnalysis.executionPlan.source = "inferred";
sessionAnalysis.executionPlan.content = inferredPlan;
}
```
## 9. Notes
- **Timing**: Execute at task completion or before context switch
- **Frequency**: Once per independent task or milestone
- **Recovery**: New session can immediately continue with full context
- **Knowledge Graph**: Entity relationships auto-extracted for visualization
- **Absolute Paths**: Critical for cross-session recovery on different machines

View File

@@ -101,10 +101,10 @@ src/ (depth 1) → SINGLE STRATEGY
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});
// Get module structure with classification
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});
// OR with path parameter
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
Bash({command: "cd <target-path> && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|type:<code|navigation>|...` to extract module paths, types, and count.
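A parsing sketch for that record format (assumes one pipe-delimited record per line, exactly as shown; `stdout` is the captured command output):
```javascript
// Parse lines like "depth:2|path:./src/auth|type:code|..." into objects.
const modules = stdout.trim().split('\n').filter(Boolean).map(line => {
  const rec = Object.fromEntries(line.split('|').map(kv => {
    const i = kv.indexOf(':');
    return [kv.slice(0, i), kv.slice(i + 1)];
  }));
  return { depth: Number(rec.depth), path: rec.path, type: rec.type };
});
const moduleCount = modules.length;
```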
@@ -200,7 +200,7 @@ for (let layer of [3, 2, 1]) {
let strategy = module.depth >= 3 ? "full" : "single";
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "${strategy}" "." "${project_name}" "${tool}"`,
command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"${strategy}","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -263,7 +263,7 @@ MODULES:
TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}
EXECUTION SCRIPT: ~/.claude/scripts/generate_module_docs.sh
EXECUTION SCRIPT: ccw tool exec generate_module_docs
- Accepts strategy parameter: full | single
- Accepts folder type detection: code | navigation
- Tool execution via direct CLI commands (gemini/qwen/codex)
@@ -273,7 +273,7 @@ EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "{{strategy}}" "." "{{project_name}}" "${tool}"`,
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"{{strategy}}","sourcePath":".","projectName":"{{project_name}}","tool":"${tool}"}'`,
run_in_background: false
})
exit_code=$?
@@ -322,7 +322,7 @@ let project_root = get_project_root();
report("Generating project README.md...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-readme" "." "${project_name}" "${tool}"`,
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"project-readme","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -335,7 +335,7 @@ for (let tool of tool_order) {
report("Generating ARCHITECTURE.md and EXAMPLES.md...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-architecture" "." "${project_name}" "${tool}"`,
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"project-architecture","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -350,7 +350,7 @@ if (bash_result.stdout.includes("API_FOUND")) {
report("Generating HTTP API documentation...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "http-api" "." "${project_name}" "${tool}"`,
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"http-api","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {

View File

@@ -51,7 +51,7 @@ Orchestrates context-aware documentation generation/update for changed modules u
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});
// Detect changed modules
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
@@ -123,7 +123,7 @@ for (let depth of sorted_depths.reverse()) { // N → 0
return async () => {
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "single" "." "${project_name}" "${tool}"`,
command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -207,21 +207,21 @@ EXECUTION:
For each module above:
1. Try tool 1:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_1}}"`,
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_1}}"}'`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_1}}", proceed to next module
→ Failure: Try tool 2
2. Try tool 2:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_2}}"`,
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_2}}"}'`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_2}}", proceed to next module
→ Failure: Try tool 3
3. Try tool 3:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_3}}"`,
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_3}}"}'`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_3}}", proceed to next module

View File

@@ -64,12 +64,17 @@ Lightweight planner that analyzes project structure, decomposes documentation wo
```bash
# Get target path, project name, and root
bash(pwd && basename "$(pwd)" && git rev-parse --show-toplevel 2>/dev/null || pwd && date +%Y%m%d-%H%M%S)
```
# Create session directories (replace timestamp)
bash(mkdir -p .workflow/active/WFS-docs-{timestamp}/.{task,process,summaries})
```javascript
// Create docs session (type: docs)
SlashCommand(command="/workflow:session:start --type docs --new \"{project_name}-docs-{timestamp}\"")
// Parse output to get sessionId
```
# Create workflow-session.json (replace values)
bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentation","status":"planning","timestamp":"2024-01-20T14:30:22+08:00","path":".","target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' | jq '.' > .workflow/active/WFS-docs-{timestamp}/workflow-session.json)
```bash
# Update workflow-session.json with docs-specific fields
bash(jq '. + {"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' .workflow/active/{sessionId}/workflow-session.json > tmp.json && mv tmp.json .workflow/active/{sessionId}/workflow-session.json)
```
### Phase 2: Analyze Structure
@@ -80,10 +85,10 @@ bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentati
```bash
# 1. Run folder analysis
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh)
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}')
# 2. Get top-level directories (first 2 path levels)
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}' | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)
# 3. Find existing docs (if directory exists)
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null; fi)
@@ -230,12 +235,12 @@ api_id=$((group_count + 3))
| Mode | cli_execute | Placement | CLI MODE | Approval Flag | Agent Role |
|------|-------------|-----------|----------|---------------|------------|
| **Agent** | false | pre_analysis | analysis | (none) | Generate docs in implementation_approach |
| **CLI** | true | implementation_approach | write | --approval-mode yolo | Execute CLI commands, validate output |
| **CLI** | true | implementation_approach | write | --mode write | Execute CLI commands, validate output |
**Command Patterns**:
- Gemini/Qwen: `cd dir && gemini -p "..."`
- CLI Mode: `cd dir && gemini --approval-mode yolo -p "..."`
- Codex: `codex -C dir --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
- Gemini/Qwen: `ccw cli -p "..." --tool gemini --mode analysis --cd dir`
- CLI Mode: `ccw cli -p "..." --tool gemini --mode write --cd dir`
- Codex: `ccw cli -p "..." --tool codex --mode write --cd dir`
**Generation Process**:
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
@@ -326,7 +331,7 @@ api_id=$((group_count + 3))
{
"step": 2,
"title": "Batch generate documentation via CLI",
"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
"command": "ccw cli -p 'PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure' --tool gemini --mode write --cd ${dirs_from_group}",
"depends_on": [1],
"output": "generated_docs"
}
@@ -358,7 +363,7 @@ api_id=$((group_count + 3))
},
{
"step": "analyze_project",
"command": "bash(gemini \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\")",
"command": "bash(ccw cli -p \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\" --tool gemini --mode analysis)",
"output_to": "project_outline"
}
],
@@ -398,7 +403,7 @@ api_id=$((group_count + 3))
"pre_analysis": [
{"step": "load_existing_docs", "command": "bash(cat .workflow/docs/${project_name}/{ARCHITECTURE,EXAMPLES}.md 2>/dev/null || echo 'No existing docs')", "output_to": "existing_arch_examples"},
{"step": "load_all_docs", "command": "bash(cat .workflow/docs/${project_name}/README.md && find .workflow/docs/${project_name} -type f -name '*.md' ! -path '*/README.md' ! -path '*/ARCHITECTURE.md' ! -path '*/EXAMPLES.md' ! -path '*/api/*' | xargs cat)", "output_to": "all_docs"},
{"step": "analyze_architecture", "command": "bash(gemini \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\")", "output_to": "arch_examples_outline"}
{"step": "analyze_architecture", "command": "bash(ccw cli -p \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\" --tool gemini --mode analysis)", "output_to": "arch_examples_outline"}
],
"implementation_approach": [
{
@@ -435,7 +440,7 @@ api_id=$((group_count + 3))
"pre_analysis": [
{"step": "discover_api", "command": "bash(rg 'router\\.| @(Get|Post)' -g '*.{ts,js}')", "output_to": "endpoint_discovery"},
{"step": "load_existing_api", "command": "bash(cat .workflow/docs/${project_name}/api/README.md 2>/dev/null || echo 'No existing API docs')", "output_to": "existing_api_docs"},
{"step": "analyze_api", "command": "bash(gemini \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\")", "output_to": "api_outline"}
{"step": "analyze_api", "command": "bash(ccw cli -p \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\" --tool gemini --mode analysis)", "output_to": "api_outline"}
],
"implementation_approach": [
{
@@ -596,7 +601,7 @@ api_id=$((group_count + 3))
| Mode | CLI Placement | CLI MODE | Approval Flag | Agent Role |
|------|---------------|----------|---------------|------------|
| **Agent (default)** | pre_analysis | analysis | (none) | Generates documentation content |
| **CLI (--cli-execute)** | implementation_approach | write | --approval-mode yolo | Executes CLI commands, validates output |
| **CLI (--cli-execute)** | implementation_approach | write | --mode write | Executes CLI commands, validates output |
**Execution Flow**:
- **Phase 2**: Unified analysis once, results in `.process/`

View File

@@ -5,7 +5,7 @@ argument-hint: "[--tool gemini|qwen] \"task context description\""
allowed-tools: Task(*), Bash(*)
examples:
- /memory:load "在当前前端基础上开发用户认证功能"
- /memory:load --tool qwen -p "重构支付模块API"
- /memory:load --tool qwen "重构支付模块API"
---
# Memory Load Command (/memory:load)
@@ -39,7 +39,7 @@ The command fully delegates to **universal-executor agent**, which autonomously:
1. **Analyzes Project Structure**: Executes `get_modules_by_depth.sh` to understand architecture
2. **Loads Documentation**: Reads CLAUDE.md, README.md and other key docs
3. **Extracts Keywords**: Derives core keywords from task description
4. **Discovers Files**: Uses MCP code-index or rg/find to locate relevant files
4. **Discovers Files**: Uses CodexLens MCP or rg/find to locate relevant files
5. **CLI Deep Analysis**: Executes Gemini/Qwen CLI for deep context analysis
6. **Generates Content Package**: Returns structured JSON core content package
@@ -109,7 +109,7 @@ Task(
1. **Project Structure**
\`\`\`bash
bash(~/.claude/scripts/get_modules_by_depth.sh)
bash(ccw tool exec get_modules_by_depth '{}')
\`\`\`
2. **Core Documentation**
@@ -136,7 +136,7 @@ Task(
Execute Gemini/Qwen CLI for deep analysis (saves main thread tokens):
\`\`\`bash
cd . && ${tool} -p "
ccw cli -p "
PURPOSE: Extract project core context for task: ${task_description}
TASK: Analyze project architecture, tech stack, key patterns, relevant files
MODE: analysis
@@ -147,7 +147,7 @@ RULES:
- Identify key architecture patterns and technical constraints
- Extract integration points and development standards
- Output concise, structured format
"
" --tool ${tool} --mode analysis
\`\`\`
### Step 4: Generate Core Content Package
@@ -212,7 +212,7 @@ Before returning:
### Example 2: Using Qwen Tool
```bash
/memory:load --tool qwen -p "重构支付模块API"
/memory:load --tool qwen "重构支付模块API"
```
Agent uses Qwen CLI for analysis, returns same structured package.

View File

@@ -0,0 +1,310 @@
---
name: tech-research-rules
description: "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)"
argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---
# Tech Stack Rules Generator
## Overview
**Purpose**: Generate multi-layered, path-conditional rules that Claude Code automatically loads based on file context.
**Output Structure**:
```
.claude/rules/tech/{tech-stack}/
├── core.md # paths: **/*.{ext} - Core principles
├── patterns.md # paths: src/**/*.{ext} - Implementation patterns
├── testing.md # paths: **/*.{test,spec}.{ext} - Testing rules
├── config.md # paths: *.config.* - Configuration rules
├── api.md # paths: **/api/**/* - API rules (backend only)
├── components.md # paths: **/components/**/* - Component rules (frontend only)
└── metadata.json # Generation metadata
```
**Templates Location**: `~/.claude/workflows/cli-templates/prompts/rules/`
---
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization
2. **Path-Conditional Output**: Every rule file includes `paths` frontmatter
3. **Template-Driven**: Agent reads templates before generating content
4. **Agent Produces Files**: Agent writes all rule files directly
5. **No Manual Loading**: Rules auto-activate when Claude works with matching files
---
## 3-Phase Execution
### Phase 1: Prepare Context & Detect Tech Stack
**Goal**: Detect input mode, extract tech stack info, determine file extensions
**Input Mode Detection**:
```bash
input="$1"
if [[ "$input" == WFS-* ]]; then
MODE="session"
SESSION_ID="$input"
# Read workflow-session.json to extract tech stack
else
MODE="direct"
TECH_STACK_NAME="$input"
fi
```
**Tech Stack Analysis**:
```javascript
// Decompose composite tech stacks
// "typescript-react-nextjs" → ["typescript", "react", "nextjs"]
const TECH_EXTENSIONS = {
"typescript": "{ts,tsx}",
"javascript": "{js,jsx}",
"python": "py",
"rust": "rs",
"go": "go",
"java": "java",
"csharp": "cs",
"ruby": "rb",
"php": "php"
};
const FRAMEWORK_TYPE = {
"react": "frontend",
"vue": "frontend",
"angular": "frontend",
"nextjs": "fullstack",
"nuxt": "fullstack",
"fastapi": "backend",
"express": "backend",
"django": "backend",
"rails": "backend"
};
```
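The same decomposition, as a minimal bash sketch (illustrative only; it mirrors the lookup tables above rather than the command's actual implementation):
```bash
# Sketch: decompose a composite tech stack and resolve its extension pattern
input="typescript-react-nextjs"
IFS='-' read -ra COMPONENTS <<< "$input"   # → ["typescript", "react", "nextjs"]
PRIMARY_LANG="${COMPONENTS[0]}"
case "$PRIMARY_LANG" in
  typescript) FILE_EXT="{ts,tsx}" ;;
  javascript) FILE_EXT="{js,jsx}" ;;
  python)     FILE_EXT="py" ;;
  *)          FILE_EXT="*" ;;               # fallback for unmapped languages
esac
echo "components=${COMPONENTS[*]} primary=$PRIMARY_LANG ext=$FILE_EXT"
```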
**Check Existing Rules**:
```bash
normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
rules_dir=".claude/rules/tech/${normalized_name}"
existing_count=$(find "${rules_dir}" -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Skip Decision**:
- If `existing_count > 0` AND no `--regenerate` → `SKIP_GENERATION = true`
- If `--regenerate` → Delete existing and regenerate
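Expressed as a bash sketch continuing from the check above (assumes `--regenerate` was parsed into `$regenerate`):
```bash
# Sketch: skip/regenerate decision
if [[ "$existing_count" -gt 0 && "$regenerate" != "true" ]]; then
  SKIP_GENERATION=true
elif [[ "$regenerate" == "true" ]]; then
  rm -rf "${rules_dir}"      # delete existing rules, then regenerate
  SKIP_GENERATION=false
else
  SKIP_GENERATION=false
fi
```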
**Output Variables**:
- `TECH_STACK_NAME`: Normalized name
- `PRIMARY_LANG`: Primary language
- `FILE_EXT`: File extension pattern
- `FRAMEWORK_TYPE`: frontend | backend | fullstack | library
- `COMPONENTS`: Array of tech components
- `SKIP_GENERATION`: Boolean
**TodoWrite**: Mark phase 1 completed
---
### Phase 2: Agent Produces Path-Conditional Rules
**Skip Condition**: Skipped if `SKIP_GENERATION = true`
**Goal**: Delegate to agent for Exa research and rule file generation
**Template Files**:
```
~/.claude/workflows/cli-templates/prompts/rules/
├── tech-rules-agent-prompt.txt # Agent instructions
├── rule-core.txt # Core principles template
├── rule-patterns.txt # Implementation patterns template
├── rule-testing.txt # Testing rules template
├── rule-config.txt # Configuration rules template
├── rule-api.txt # API rules template (backend)
└── rule-components.txt # Component rules template (frontend)
```
**Agent Task**:
```javascript
Task({
subagent_type: "general-purpose",
description: `Generate tech stack rules: ${TECH_STACK_NAME}`,
prompt: `
You are generating path-conditional rules for Claude Code.
## Context
- Tech Stack: ${TECH_STACK_NAME}
- Primary Language: ${PRIMARY_LANG}
- File Extensions: ${FILE_EXT}
- Framework Type: ${FRAMEWORK_TYPE}
- Components: ${JSON.stringify(COMPONENTS)}
- Output Directory: .claude/rules/tech/${TECH_STACK_NAME}/
## Instructions
Read the agent prompt template for detailed instructions:
$(cat ~/.claude/workflows/cli-templates/prompts/rules/tech-rules-agent-prompt.txt)
## Execution Steps
1. Execute Exa research queries (see agent prompt)
2. Read each rule template
3. Generate rule files following template structure
4. Write files to output directory
5. Write metadata.json
6. Report completion
## Variable Substitutions
Replace in templates:
- {TECH_STACK_NAME} → ${TECH_STACK_NAME}
- {PRIMARY_LANG} → ${PRIMARY_LANG}
- {FILE_EXT} → ${FILE_EXT}
- {FRAMEWORK_TYPE} → ${FRAMEWORK_TYPE}
`
})
```
**Completion Criteria**:
- 4-6 rule files written with proper `paths` frontmatter
- metadata.json written
- Agent reports files created
**TodoWrite**: Mark phase 2 completed
---
### Phase 3: Verify & Report
**Goal**: Verify generated files and provide usage summary
**Steps**:
1. **Verify Files**:
```bash
find ".claude/rules/tech/${TECH_STACK_NAME}" -name "*.md" -type f
```
2. **Validate Frontmatter**:
```bash
head -5 ".claude/rules/tech/${TECH_STACK_NAME}/core.md"
```
3. **Read Metadata**:
```javascript
Read(`.claude/rules/tech/${TECH_STACK_NAME}/metadata.json`)
```
4. **Generate Summary Report**:
```
Tech Stack Rules Generated
Tech Stack: {TECH_STACK_NAME}
Location: .claude/rules/tech/{TECH_STACK_NAME}/
Files Created:
├── core.md → paths: **/*.{ext}
├── patterns.md → paths: src/**/*.{ext}
├── testing.md → paths: **/*.{test,spec}.{ext}
├── config.md → paths: *.config.*
├── api.md → paths: **/api/**/* (if backend)
└── components.md → paths: **/components/**/* (if frontend)
Auto-Loading:
- Rules apply automatically when editing matching files
- No manual loading required
Example Activation:
- Edit src/components/Button.tsx → core.md + patterns.md + components.md
- Edit tests/api.test.ts → core.md + testing.md
- Edit package.json → config.md
```
**TodoWrite**: Mark phase 3 completed
---
## Path Pattern Reference
| Pattern | Matches |
|---------|---------|
| `**/*.ts` | All .ts files |
| `src/**/*` | All files under src/ |
| `*.config.*` | Config files in root |
| `**/*.{ts,tsx}` | .ts and .tsx files |
| Tech Stack | Core Pattern | Test Pattern |
|------------|--------------|--------------|
| TypeScript | `**/*.{ts,tsx}` | `**/*.{test,spec}.{ts,tsx}` |
| Python | `**/*.py` | `**/test_*.py, **/*_test.py` |
| Rust | `**/*.rs` | `**/tests/**/*.rs` |
| Go | `**/*.go` | `**/*_test.go` |
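For a concrete picture, the header of a generated rule file might look like the following (a hypothetical `core.md` for TypeScript; everything beyond the `paths` key is illustrative):
```bash
# Sketch: what Phase 2 could write as core.md (content illustrative)
mkdir -p .claude/rules/tech/typescript
cat > .claude/rules/tech/typescript/core.md <<'EOF'
---
paths: **/*.{ts,tsx}
---
# TypeScript Core Principles
- Prefer strict compiler options and explicit types at module boundaries.
EOF
```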
---
## Parameters
```bash
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate]
```
**Arguments**:
- **session-id**: `WFS-*` format - Extract from workflow session
- **tech-stack-name**: Direct input - `"typescript"`, `"typescript-react"`
- **--regenerate**: Force regenerate existing rules
---
## Examples
### Single Language
```bash
/memory:tech-research "typescript"
```
**Output**: `.claude/rules/tech/typescript/` with 4 rule files
### Frontend Stack
```bash
/memory:tech-research "typescript-react"
```
**Output**: `.claude/rules/tech/typescript-react/` with 5 rule files (includes components.md)
### Backend Stack
```bash
/memory:tech-research "python-fastapi"
```
**Output**: `.claude/rules/tech/python-fastapi/` with 5 rule files (includes api.md)
### From Session
```bash
/memory:tech-research WFS-user-auth-20251104
```
**Workflow**: Extract tech stack from session → Generate rules
---
## Comparison: Rules vs SKILL
| Aspect | SKILL Memory | Rules |
|--------|--------------|-------|
| Loading | Manual: `Skill("tech")` | Automatic by path |
| Scope | All files when loaded | Only matching files |
| Granularity | Monolithic packages | Per-file-type |
| Context | Full package | Only relevant rules |
**When to Use**:
- **Rules**: Tech stack conventions per file type
- **SKILL**: Reference docs, APIs, examples for manual lookup

View File

@@ -1,477 +0,0 @@
---
name: tech-research
description: 3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)
argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---
# Tech Stack Research SKILL Generator
## Overview
**Pure Orchestrator with Agent Delegation**: Prepares context paths and delegates ALL work to agent. Agent produces files directly.
**Auto-Continue Workflow**: Runs fully autonomously once triggered. Each phase completes and automatically triggers the next phase.
**Execution Paths**:
- **Full Path**: All 3 phases (no existing SKILL OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 3 (existing SKILL found AND no `--regenerate` flag)
- **Phase 3 Always Executes**: SKILL index is always generated or updated
**Agent Responsibility**:
- Agent does ALL the work: context reading, Exa research, content synthesis, file writing
- Orchestrator only provides context paths and waits for completion
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **Context Path Delegation**: Pass session directory or tech stack name to agent, let agent do discovery
3. **Agent Produces Files**: Agent directly writes all module files, orchestrator does NOT parse agent output
4. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
5. **No User Prompts**: Never ask user questions or wait for input between phases
6. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
7. **Lightweight Index**: Phase 3 only generates SKILL.md index by reading existing files
---
## 3-Phase Execution
### Phase 1: Prepare Context Paths
**Goal**: Detect input mode, prepare context paths for agent, check existing SKILL
**Input Mode Detection**:
```bash
# Get input parameter
input="$1"
# Detect mode
if [[ "$input" == WFS-* ]]; then
MODE="session"
SESSION_ID="$input"
CONTEXT_PATH=".workflow/${SESSION_ID}"
else
MODE="direct"
TECH_STACK_NAME="$input"
CONTEXT_PATH="$input" # Pass tech stack name as context
fi
```
**Check Existing SKILL**:
```bash
# For session mode, peek at session to get tech stack name
if [[ "$MODE" == "session" ]]; then
bash(test -f ".workflow/${SESSION_ID}/workflow-session.json")
Read(.workflow/${SESSION_ID}/workflow-session.json)
# Extract tech_stack_name (minimal extraction)
fi
# Normalize and check
normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
bash(test -d ".claude/skills/${normalized_name}" && echo "exists" || echo "not_exists")
bash(find ".claude/skills/${normalized_name}" -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Skip Decision**:
```javascript
if (existing_files > 0 && !regenerate_flag) {
SKIP_GENERATION = true
message = "Tech stack SKILL already exists, skipping Phase 2. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
bash(rm -rf ".claude/skills/${normalized_name}")
SKIP_GENERATION = false
message = "Regenerating tech stack SKILL from scratch."
} else {
SKIP_GENERATION = false
message = "No existing SKILL found, generating new tech stack documentation."
}
```
**Output Variables**:
- `MODE`: `session` or `direct`
- `SESSION_ID`: Session ID (if session mode)
- `CONTEXT_PATH`: Path to session directory OR tech stack name
- `TECH_STACK_NAME`: Extracted or provided tech stack name
- `SKIP_GENERATION`: Boolean - whether to skip Phase 2
**TodoWrite**:
- If skipping: Mark phase 1 completed, phase 2 completed, phase 3 in_progress
- If not skipping: Mark phase 1 completed, phase 2 in_progress
---
### Phase 2: Agent Produces All Files
**Skip Condition**: Skipped if `SKIP_GENERATION = true`
**Goal**: Delegate EVERYTHING to agent - context reading, Exa research, content synthesis, and file writing
**Agent Task Specification**:
```
Task(
subagent_type: "general-purpose",
description: "Generate tech stack SKILL: {CONTEXT_PATH}",
prompt: "
Generate a complete tech stack SKILL package with Exa research.
**Context Provided**:
- Mode: {MODE}
- Context Path: {CONTEXT_PATH}
**Templates Available**:
- Module Format: ~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt
- SKILL Index: ~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt
**Your Responsibilities**:
1. **Extract Tech Stack Information**:
IF MODE == 'session':
- Read `.workflow/active/{session_id}/workflow-session.json`
- Read `.workflow/active/{session_id}/.process/context-package.json`
- Extract tech_stack: {language, frameworks, libraries}
- Build tech stack name: \"{language}-{framework1}-{framework2}\"
- Example: \"typescript-react-nextjs\"
IF MODE == 'direct':
- Tech stack name = CONTEXT_PATH
- Parse composite: split by '-' delimiter
- Example: \"typescript-react-nextjs\" → [\"typescript\", \"react\", \"nextjs\"]
2. **Execute Exa Research** (4-6 parallel queries):
Base Queries (always execute):
- mcp__exa__get_code_context_exa(query: \"{tech} core principles best practices 2025\", tokensNum: 8000)
- mcp__exa__get_code_context_exa(query: \"{tech} common patterns architecture examples\", tokensNum: 7000)
- mcp__exa__web_search_exa(query: \"{tech} configuration setup tooling 2025\", numResults: 5)
- mcp__exa__get_code_context_exa(query: \"{tech} testing strategies\", tokensNum: 5000)
Component Queries (if composite):
- For each additional component:
mcp__exa__get_code_context_exa(query: \"{main_tech} {component} integration\", tokensNum: 5000)
3. **Read Module Format Template**:
Read template for structure guidance:
```bash
Read(~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt)
```
4. **Synthesize Content into 6 Modules**:
Follow template structure from tech-module-format.txt:
- **principles.md** - Core concepts, philosophies (~3K tokens)
- **patterns.md** - Implementation patterns with code examples (~5K tokens)
- **practices.md** - Best practices, anti-patterns, pitfalls (~4K tokens)
- **testing.md** - Testing strategies, frameworks (~3K tokens)
- **config.md** - Setup, configuration, tooling (~3K tokens)
- **frameworks.md** - Framework integration (only if composite, ~4K tokens)
Each module follows template format:
- Frontmatter (YAML)
- Main sections with clear headings
- Code examples from Exa research
- Best practices sections
- References to Exa sources
5. **Write Files Directly**:
```javascript
// Create directory
bash(mkdir -p \".claude/skills/{tech_stack_name}\")
// Write each module file using Write tool
Write({ file_path: \".claude/skills/{tech_stack_name}/principles.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/patterns.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/practices.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/testing.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/config.md\", content: ... })
// Write frameworks.md only if composite
// Write metadata.json
Write({
file_path: \".claude/skills/{tech_stack_name}/metadata.json\",
content: JSON.stringify({
tech_stack_name,
components,
is_composite,
generated_at: timestamp,
source: \"exa-research\",
research_summary: { total_queries, total_sources }
})
})
```
6. **Report Completion**:
Provide summary:
- Tech stack name
- Files created (count)
- Exa queries executed
- Sources consulted
**CRITICAL**:
- MUST read external template files before generating content (step 3 for modules, step 4 for index)
- You have FULL autonomy - read files, execute Exa, synthesize content, write files
- Do NOT return JSON or structured data - produce actual .md files
- Handle errors gracefully (Exa failures, missing files, template read failures)
- If tech stack cannot be determined, ask orchestrator to clarify
"
)
```
**Completion Criteria**:
- Agent task executed successfully
- 5-6 modular files written to `.claude/skills/{tech_stack_name}/`
- metadata.json written
- Agent reports completion
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
---
### Phase 3: Generate SKILL.md Index
**Note**: This phase **ALWAYS executes** - generates or updates the SKILL index.
**Goal**: Read generated module files and create SKILL.md index with loading recommendations
**Steps**:
1. **Verify Generated Files**:
```bash
bash(find ".claude/skills/${TECH_STACK_NAME}" -name "*.md" -type f | sort)
```
2. **Read metadata.json**:
```javascript
Read(.claude/skills/${TECH_STACK_NAME}/metadata.json)
// Extract: tech_stack_name, components, is_composite, research_summary
```
3. **Read Module Headers** (optional, first 20 lines):
```javascript
Read(.claude/skills/${TECH_STACK_NAME}/principles.md, limit: 20)
// Repeat for other modules
```
4. **Read SKILL Index Template**:
```javascript
Read(~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt)
```
5. **Generate SKILL.md Index**:
Follow template from tech-skill-index.txt with variable substitutions:
- `{TECH_STACK_NAME}`: From metadata.json
- `{MAIN_TECH}`: Primary technology
- `{ISO_TIMESTAMP}`: Current timestamp
- `{QUERY_COUNT}`: From research_summary
- `{SOURCE_COUNT}`: From research_summary
- Conditional sections for composite tech stacks
Template provides structure for:
- Frontmatter with metadata
- Overview and tech stack description
- Module organization (Core/Practical/Config sections)
- Loading recommendations (Quick/Implementation/Complete)
- Usage guidelines and auto-trigger keywords
- Research metadata and version history
6. **Write SKILL.md**:
```javascript
Write({
file_path: `.claude/skills/${TECH_STACK_NAME}/SKILL.md`,
content: generatedIndexMarkdown
})
```
**Completion Criteria**:
- SKILL.md index written
- All module files verified
- Loading recommendations included
**TodoWrite**: Mark phase 3 completed
**Final Report**:
```
Tech Stack SKILL Package Complete
Tech Stack: {TECH_STACK_NAME}
Location: .claude/skills/{TECH_STACK_NAME}/
Files: SKILL.md + 5-6 modules + metadata.json
Exa Research: {queries} queries, {sources} sources
Usage: Skill(command: "{TECH_STACK_NAME}")
```
---
## Implementation Details
### TodoWrite Patterns
**Initialization** (Before Phase 1):
```javascript
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "in_progress", "activeForm": "Preparing context paths"},
{"content": "Agent produces all module files", "status": "pending", "activeForm": "Agent producing files"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL index"}
]})
```
**Full Path** (SKIP_GENERATION = false):
```javascript
// After Phase 1
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "in_progress", ...},
{"content": "Generate SKILL.md index", "status": "pending", ...}
]})
// After Phase 2
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
// After Phase 3
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "completed", ...}
]})
```
**Skip Path** (SKIP_GENERATION = true):
```javascript
// After Phase 1 (skip Phase 2)
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "completed", ...}, // Skipped
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
```
### Execution Flow
**Full Path**:
```
User → TodoWrite Init → Phase 1 (prepare) → Phase 2 (agent writes files) → Phase 3 (write index) → Report
```
**Skip Path**:
```
User → TodoWrite Init → Phase 1 (detect existing) → Phase 3 (update index) → Report
```
### Error Handling
**Phase 1 Errors**:
- Invalid session ID: Report error, verify session exists
- Missing context-package: Warn, fall back to direct mode
- No tech stack detected: Ask user to specify tech stack name
**Phase 2 Errors (Agent)**:
- Agent task fails: Retry once, report if fails again
- Exa API failures: Agent handles internally with retries
- Incomplete results: Warn user, proceed with partial data if minimum sections available
**Phase 3 Errors**:
- Write failures: Report which files failed
- Missing files: Note in SKILL.md, suggest regeneration
---
## Parameters
```bash
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate] [--tool <gemini|qwen>]
```
**Arguments**:
- **session-id | tech-stack-name**: Input source (auto-detected by WFS- prefix)
- Session mode: `WFS-user-auth-v2` - Extract tech stack from workflow
- Direct mode: `"typescript"`, `"typescript-react-nextjs"` - User specifies
- **--regenerate**: Force regenerate existing SKILL (deletes and recreates)
- **--tool**: Reserved for future CLI integration (default: gemini)
---
## Examples
**Generated File Structure** (for all examples):
```
.claude/skills/{tech-stack}/
├── SKILL.md # Index (Phase 3)
├── principles.md # Agent (Phase 2)
├── patterns.md # Agent
├── practices.md # Agent
├── testing.md # Agent
├── config.md # Agent
├── frameworks.md # Agent (if composite)
└── metadata.json # Agent
```
### Direct Mode - Single Stack
```bash
/memory:tech-research "typescript"
```
**Workflow**:
1. Phase 1: Detects direct mode, checks existing SKILL
2. Phase 2: Agent executes 4 Exa queries, writes 5 modules
3. Phase 3: Generates SKILL.md index
### Direct Mode - Composite Stack
```bash
/memory:tech-research "typescript-react-nextjs"
```
**Workflow**:
1. Phase 1: Decomposes into ["typescript", "react", "nextjs"]
2. Phase 2: Agent executes 6 Exa queries (4 base + 2 components), writes 6 modules (adds frameworks.md)
3. Phase 3: Generates SKILL.md index with framework integration
### Session Mode - Extract from Workflow
```bash
/memory:tech-research WFS-user-auth-20251104
```
**Workflow**:
1. Phase 1: Reads session, extracts tech stack: `python-fastapi-sqlalchemy`
2. Phase 2: Agent researches Python + FastAPI + SQLAlchemy, writes 6 modules
3. Phase 3: Generates SKILL.md index
### Regenerate Existing
```bash
/memory:tech-research "react" --regenerate
```
**Workflow**:
1. Phase 1: Deletes existing SKILL due to --regenerate
2. Phase 2: Agent executes fresh Exa research (latest 2025 practices)
3. Phase 3: Generates updated SKILL.md
### Skip Path - Fast Update
```bash
/memory:tech-research "python"
```
**Scenario**: SKILL already exists with 7 files
**Workflow**:
1. Phase 1: Detects existing SKILL, sets SKIP_GENERATION = true
2. Phase 2: **SKIPPED**
3. Phase 3: Updates SKILL.md index only (5-10x faster)

View File

@@ -99,10 +99,10 @@ src/ (depth 1) → SINGLE-LAYER STRATEGY
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
// Get module structure
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});
// OR with --path
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
Bash({command: "cd <target-path> && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|...` to extract module paths and count.
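A sketch of that parse, assuming one `key:value` pair per `|`-separated field:
```bash
# Sketch: extract module paths and a count from "depth:N|path:<PATH>|..." lines
module_count=0
while IFS='|' read -ra fields; do
  for field in "${fields[@]}"; do
    case "$field" in
      depth:*) depth="${field#depth:}" ;;
      path:*)  path="${field#path:}"; module_count=$((module_count + 1)) ;;
    esac
  done
  echo "module[$module_count] depth=$depth path=$path"
done < <(ccw tool exec get_modules_by_depth '{"format":"list"}')
```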
@@ -185,7 +185,7 @@ for (let layer of [3, 2, 1]) {
let strategy = module.depth >= 3 ? "multi-layer" : "single-layer";
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "${strategy}" "." "${tool}"`,
command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"${strategy}","path":".","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -244,7 +244,7 @@ MODULES:
TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}
EXECUTION SCRIPT: ~/.claude/scripts/update_module_claude.sh
EXECUTION SCRIPT: ccw tool exec update_module_claude
- Accepts strategy parameter: multi-layer | single-layer
- Tool execution via direct CLI commands (gemini/qwen/codex)
@@ -252,7 +252,7 @@ EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "{{strategy}}" "." "${tool}"`,
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"{{strategy}}","path":".","tool":"${tool}"}'`,
run_in_background: false
})
exit_code=$?

View File

@@ -41,7 +41,7 @@ Orchestrates context-aware CLAUDE.md updates for changed modules using batched a
```javascript
// Detect changed modules
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
@@ -102,7 +102,7 @@ for (let depth of sorted_depths.reverse()) { // N → 0
return async () => {
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "${tool}"`,
command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -184,21 +184,21 @@ EXECUTION:
For each module above:
1. Try tool 1:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_1}}"`,
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_1}}"}'`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_1}}", proceed to next module
→ Failure: Try tool 2
2. Try tool 2:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_2}}"`,
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_2}}"}'`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_2}}", proceed to next module
→ Failure: Try tool 3
3. Try tool 3:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_3}}"`,
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_3}}"}'`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_3}}", proceed to next module

View File

@@ -187,7 +187,7 @@ Objectives:
3. Use Gemini for aggregation (optional):
Command pattern:
cd .workflow/.archives/{session_id} && gemini -p "
ccw cli -p "
PURPOSE: Extract lessons and conflicts from workflow session
TASK:
• Analyze IMPL_PLAN and lessons from manifest
@@ -198,7 +198,7 @@ Objectives:
CONTEXT: @IMPL_PLAN.md @workflow-session.json
EXPECTED: Structured lessons and conflicts in JSON format
RULES: Template reference from skill-aggregation.txt
"
" --tool gemini --mode analysis --cd .workflow/.archives/{session_id}
3.5. **Generate SKILL.md Description** (CRITICAL for auto-loading):
@@ -334,7 +334,7 @@ Objectives:
- Sort sessions by date
2. Use Gemini for final aggregation:
gemini -p "
ccw cli -p "
PURPOSE: Aggregate lessons and conflicts from all workflow sessions
TASK:
• Group successes by functional domain
@@ -345,7 +345,7 @@ Objectives:
CONTEXT: [Provide aggregated JSON data]
EXPECTED: Final aggregated structure for SKILL documents
RULES: Template reference from skill-aggregation.txt
"
" --tool gemini --mode analysis
3. Read templates for formatting (same 4 templates as single mode)

View File

@@ -101,7 +101,7 @@ Load only minimal necessary context from each artifact:
- Dependencies (depends_on, blocks)
- Context (requirements, focus_paths, acceptance, artifacts)
- Flow control (pre_analysis, implementation_approach)
- Meta (complexity, priority, use_codex)
- Meta (complexity, priority)
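A minimal sketch of one task entry restricted to those fields (field names as listed above; the `id` and enclosing structure are assumptions):
```json
{
  "id": "T-001",
  "depends_on": ["T-000"],
  "blocks": [],
  "context": {
    "requirements": "…",
    "focus_paths": ["src/api/"],
    "acceptance": "…",
    "artifacts": []
  },
  "pre_analysis": [],
  "implementation_approach": [],
  "complexity": "medium",
  "priority": 1
}
```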
### 3. Build Semantic Models

View File

@@ -81,6 +81,7 @@ ELSE:
**Framework-Based Analysis** (when guidance-specification.md exists):
```bash
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Generate API designer analysis addressing topic framework
## Framework Integration Required
@@ -136,6 +137,7 @@ Task(subagent_type="conceptual-planning-agent",
# For existing analysis updates
IF update_mode = "incremental":
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Update existing API designer analysis
## Current Analysis Context

View File

@@ -2,452 +2,360 @@
name: artifacts
description: Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis
argument-hint: "topic or challenge description [--count N]"
allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*)
allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*), AskUserQuestion(*)
---
## Overview
Six-phase workflow: **Automatic project context collection** → Extract topic challenges → Select roles → Generate task-specific questions → Detect conflicts → Generate confirmed guidance (declarative statements only).
Seven-phase workflow: **Context collection** → **Topic analysis** → **Role selection** → **Role questions** → **Conflict resolution** → **Final check** → **Generate specification**
All user interactions use AskUserQuestion tool (max 4 questions per call, multi-round).
**Input**: `"GOAL: [objective] SCOPE: [boundaries] CONTEXT: [background]" [--count N]`
**Output**: `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md` (CONFIRMED/SELECTED format)
**Core Principle**: Questions dynamically generated from project context + topic keywords/challenges, NOT from generic templates
**Output**: `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md`
**Core Principle**: Questions dynamically generated from project context + topic keywords, NOT generic templates
**Parameters**:
- `topic` (required): Topic or challenge description (structured format recommended)
- `--count N` (optional): Number of roles user WANTS to select (system will recommend N+2 options for user to choose from, default: 3)
- `--count N` (optional): Number of roles to select (system recommends N+2 options, default: 3)
---
## Quick Reference
### Phase Summary
| Phase | Goal | AskUserQuestion | Storage |
|-------|------|-----------------|---------|
| 0 | Context collection | - | context-package.json |
| 1 | Topic analysis | 2-4 questions | intent_context |
| 2 | Role selection | 1 multi-select | selected_roles |
| 3 | Role questions | 3-4 per role | role_decisions[role] |
| 4 | Conflict resolution | max 4 per round | cross_role_decisions |
| 4.5 | Final check | progressive rounds | additional_decisions |
| 5 | Generate spec | - | guidance-specification.md |
### AskUserQuestion Pattern
```javascript
// Single-select (Phase 1, 3, 4)
AskUserQuestion({
questions: [
{
question: "{问题文本}",
header: "{短标签}", // max 12 chars
multiSelect: false,
options: [
{ label: "{选项}", description: "{说明和影响}" },
{ label: "{选项}", description: "{说明和影响}" },
{ label: "{选项}", description: "{说明和影响}" }
]
}
// ... max 4 questions per call
]
})
// Multi-select (Phase 2)
AskUserQuestion({
questions: [{
question: "请选择 {count} 个角色",
header: "角色选择",
multiSelect: true,
options: [/* max 4 options per call */]
}]
})
```
### Multi-Round Execution
```javascript
const BATCH_SIZE = 4;
for (let i = 0; i < allQuestions.length; i += BATCH_SIZE) {
const batch = allQuestions.slice(i, i + BATCH_SIZE);
AskUserQuestion({ questions: batch });
// Store responses before next round
}
```
---
## Task Tracking
**⚠️ TodoWrite Rule**: EXTEND auto-parallel's task list (NOT replace/overwrite)
**TodoWrite Rule**: EXTEND auto-parallel's task list (NOT replace/overwrite)
**When called from auto-parallel**:
- Find the artifacts parent task: "Execute artifacts command for interactive framework generation"
- Mark parent task as "in_progress"
- APPEND artifacts sub-tasks AFTER the parent task (Phase 0-5)
- Mark each sub-task as it completes
- When Phase 5 completes, mark parent task as "completed"
- **PRESERVE all other auto-parallel tasks** (role agents, synthesis)
- Find artifacts parent task → Mark "in_progress"
- APPEND sub-tasks (Phase 0-5) → Mark each as completes
- When Phase 5 completes → Mark parent "completed"
- PRESERVE all other auto-parallel tasks
**Standalone Mode**:
```json
[
{"content": "Initialize session (.workflow/active/ session check, parse --count parameter)", "status": "pending", "activeForm": "Initializing"},
{"content": "Phase 0: Automatic project context collection (call context-gather)", "status": "pending", "activeForm": "Phase 0 context collection"},
{"content": "Phase 1: Extract challenges, output 2-4 task-specific questions, wait for user input", "status": "pending", "activeForm": "Phase 1 topic analysis"},
{"content": "Phase 2: Recommend count+2 roles, output role selection, wait for user input", "status": "pending", "activeForm": "Phase 2 role selection"},
{"content": "Phase 3: Generate 3-4 questions per role, output and wait for answers (max 10 per round)", "status": "pending", "activeForm": "Phase 3 role questions"},
{"content": "Phase 4: Detect conflicts, output clarifications, wait for answers (max 10 per round)", "status": "pending", "activeForm": "Phase 4 conflict resolution"},
{"content": "Phase 5: Transform Q&A to declarative statements, write guidance-specification.md", "status": "pending", "activeForm": "Phase 5 document generation"}
{"content": "Initialize session", "status": "pending", "activeForm": "Initializing"},
{"content": "Phase 0: Context collection", "status": "pending", "activeForm": "Phase 0"},
{"content": "Phase 1: Topic analysis (2-4 questions)", "status": "pending", "activeForm": "Phase 1"},
{"content": "Phase 2: Role selection", "status": "pending", "activeForm": "Phase 2"},
{"content": "Phase 3: Role questions (per role)", "status": "pending", "activeForm": "Phase 3"},
{"content": "Phase 4: Conflict resolution", "status": "pending", "activeForm": "Phase 4"},
{"content": "Phase 4.5: Final clarification", "status": "pending", "activeForm": "Phase 4.5"},
{"content": "Phase 5: Generate specification", "status": "pending", "activeForm": "Phase 5"}
]
```
## User Interaction Protocol
### Question Output Format
All questions output as structured text (detailed format with descriptions):
```markdown
【问题{N} - {短标签}】{问题文本}
a) {选项标签}
说明:{选项说明和影响}
b) {选项标签}
说明:{选项说明和影响}
c) {选项标签}
说明:{选项说明和影响}
请回答:{N}a 或 {N}b 或 {N}c
```
**Multi-select format** (Phase 2 role selection):
```markdown
【角色选择】请选择 {count} 个角色参与头脑风暴分析
a) {role-name} ({中文名})
推荐理由:{基于topic的相关性说明}
b) {role-name} ({中文名})
推荐理由:{基于topic的相关性说明}
...
支持格式:
- 分别选择:2a 2c 2d (选择第2题的a、c、d选项)
- 合并语法:2acd (选择a、c、d)
- 逗号分隔:2a,c,d
请输入选择:
```
### Input Parsing Rules
**Supported formats** (intelligent parsing):
1. **Space-separated**: `1a 2b 3c` → Q1:a, Q2:b, Q3:c
2. **Comma-separated**: `1a,2b,3c` → Q1:a, Q2:b, Q3:c
3. **Multi-select combined**: `2abc` → Q2: options a,b,c
4. **Multi-select spaces**: `2 a b c` → Q2: options a,b,c
5. **Multi-select comma**: `2a,b,c` → Q2: options a,b,c
6. **Natural language**: `问题1选a` → 1a (fallback parsing)
**Parsing algorithm**:
- Extract question numbers and option letters
- Validate question numbers match output
- Validate option letters exist for each question
- If ambiguous/invalid, output example format and request re-input
**Error handling** (lenient):
- Recognize common variations automatically
- If parsing fails, show example and wait for clarification
- Support re-input without penalty
### Batching Strategy
**Batch limits**:
- **Default**: Maximum 10 questions per round
- **Phase 2 (role selection)**: Display all recommended roles at once (count+2 roles)
- **Auto-split**: If questions > 10, split into multiple rounds with clear round indicators
**Round indicators**:
```markdown
===== 第 1 轮问题 (共2轮) =====
【问题1 - ...】...
【问题2 - ...】...
...
【问题10 - ...】...
请回答 (格式: 1a 2b ... 10c)
```
### Interaction Flow
**Standard flow**:
1. Output questions in formatted text
2. Output expected input format example
3. Wait for user input
4. Parse input with intelligent matching
5. If parsing succeeds → Store answers and continue
6. If parsing fails → Show error, example, and wait for re-input
**No question/option limits**: Text-based interaction removes previous 4-question and 4-option restrictions
---
## Execution Phases
### Session Management
- Check `.workflow/active/` for existing sessions
- Multiple sessions → Prompt selection | Single → Use it | None → Create `WFS-[topic-slug]`
- Parse `--count N` parameter from user input (default: 3 if not specified)
- Store decisions in `workflow-session.json` including count parameter
- Multiple → Prompt selection | Single → Use it | None → Create `WFS-[topic-slug]`
- Parse `--count N` parameter (default: 3)
- Store decisions in `workflow-session.json`
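A bash sketch of that session check (directory layout as above; the selection prompt and `topic_slug` derivation are assumptions):
```bash
# Sketch: resolve the active workflow session
shopt -s nullglob
sessions=(.workflow/active/WFS-*/)
if [[ ${#sessions[@]} -eq 0 ]]; then
  mkdir -p ".workflow/active/WFS-${topic_slug}"   # none → create
elif [[ ${#sessions[@]} -eq 1 ]]; then
  session_dir="${sessions[0]}"                    # single → use it
else
  echo "Multiple sessions found; prompt user to select one"
fi
```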
### Phase 0: Automatic Project Context Collection
### Phase 0: Context Collection
**Goal**: Gather project architecture, documentation, and relevant code context BEFORE user interaction
**Goal**: Gather project context BEFORE user interaction
**Detection Mechanism** (execute first):
```javascript
// Check if context-package already exists
const contextPackagePath = `.workflow/active/WFS-{session-id}/.process/context-package.json`;
**Steps**:
1. Check if `context-package.json` exists → Skip if valid
2. Invoke `context-search-agent` (BRAINSTORM MODE - lightweight)
3. Output: `.workflow/active/WFS-{session-id}/.process/context-package.json`
if (file_exists(contextPackagePath)) {
// Validate package
const package = Read(contextPackagePath);
if (package.metadata.session_id === session_id) {
console.log("✅ Valid context-package found, skipping Phase 0");
return; // Skip to Phase 1
}
}
```
**Implementation**: Invoke `context-search-agent` only if package doesn't exist
**Graceful Degradation**: If agent fails, continue to Phase 1 without context
```javascript
Task(
subagent_type="context-search-agent",
run_in_background=false,
description="Gather project context for brainstorm",
prompt=`
You are executing as context-search-agent (.claude/agents/context-search-agent.md).
Execute context-search-agent in BRAINSTORM MODE (Phase 1-2 only).
## Execution Mode
**BRAINSTORM MODE** (Lightweight) - Phase 1-2 only (skip deep analysis)
Session: ${session_id}
Task: ${task_description}
Output: .workflow/${session_id}/.process/context-package.json
## Session Information
- **Session ID**: ${session_id}
- **Task Description**: ${task_description}
- **Output Path**: .workflow/${session_id}/.process/context-package.json
## Mission
Execute complete context-search-agent workflow for implementation planning:
### Phase 1: Initialization & Pre-Analysis
1. **Detection**: Check for existing context-package (early exit if valid)
2. **Foundation**: Initialize code-index, get project structure, load docs
3. **Analysis**: Extract keywords, determine scope, classify complexity
### Phase 2: Multi-Source Context Discovery
Execute all 3 discovery tracks:
- **Track 1**: Reference documentation (CLAUDE.md, architecture docs)
- **Track 2**: Web examples (use Exa MCP for unfamiliar tech/APIs)
- **Track 3**: Codebase analysis (5-layer discovery: files, content, patterns, deps, config/tests)
### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build dependency graph
2. Synthesize 3-source data (docs > code > web)
3. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
4. Perform conflict detection with risk assessment
5. Generate and validate context-package.json
## Output Requirements
Complete context-package.json with:
- **metadata**: task_description, keywords, complexity, tech_stack, session_id
- **project_context**: architecture_patterns, coding_conventions, tech_stack
- **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
- **dependencies**: {internal[], external[]} with dependency graph
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
- **conflict_detection**: {risk_level, risk_factors, affected_modules[], mitigation_strategy}
## Quality Validation
Before completion verify:
- [ ] Valid JSON format with all required fields
- [ ] File relevance accuracy >80%
- [ ] Dependency graph complete (max 2 transitive levels)
- [ ] Conflict risk level calculated correctly
- [ ] No sensitive data exposed
- [ ] Total files ≤50 (prioritize high-relevance)
Execute autonomously following agent documentation.
Report completion with statistics.
Required fields: metadata, project_context, assets, dependencies, conflict_detection
`
)
```
**Graceful Degradation**:
- If agent fails: Log warning, continue to Phase 1 without project context
- If package invalid: Re-run context-search-agent
### Phase 1: Topic Analysis
### Phase 1: Topic Analysis & Intent Classification
**Goal**: Extract keywords/challenges to drive all subsequent question generation, **enriched by Phase 0 project context**
**Goal**: Extract keywords/challenges enriched by Phase 0 context
**Steps**:
1. **Load Phase 0 context** (if available):
- Read `.workflow/active/WFS-{session-id}/.process/context-package.json`
- Extract: tech_stack, existing modules, conflict_risk, relevant files
1. Load Phase 0 context (tech_stack, modules, conflict_risk)
2. Deep topic analysis (entities, challenges, constraints, metrics)
3. Generate 2-4 context-aware probing questions
4. AskUserQuestion → Store to `session.intent_context`
2. **Deep topic analysis** (context-aware):
- Extract technical entities from topic + existing codebase
- Identify core challenges considering existing architecture
- Consider constraints (timeline/budget/compliance)
- Define success metrics based on current project state
3. **Generate 2-4 context-aware probing questions**:
- Reference existing tech stack in questions
- Consider integration with existing modules
- Address identified conflict risks from Phase 0
- Target root challenges and trade-off priorities
4. **User interaction**: Output questions using text format (see User Interaction Protocol), wait for user input
5. **Parse user answers**: Use intelligent parsing to extract answers from user input (support multiple formats)
6. **Storage**: Store answers to `session.intent_context` with `{extracted_keywords, identified_challenges, user_answers, project_context_used}`
**Example Output**:
```markdown
===== Phase 1: 项目意图分析 =====
【问题1 - 核心挑战】实时协作平台的主要技术挑战?
a) 实时数据同步
说明:100+用户同时在线,状态同步复杂度高
b) 可扩展性架构
说明:用户规模增长时的系统扩展能力
c) 冲突解决机制
说明:多用户同时编辑的冲突处理策略
【问题2 - 优先级】MVP阶段最关注的指标
a) 功能完整性
说明:实现所有核心功能
b) 用户体验
说明:流畅的交互体验和响应速度
c) 系统稳定性
说明:高可用性和数据一致性
请回答 (格式: 1a 2b)
**Example**:
```javascript
AskUserQuestion({
questions: [
{
question: "实时协作平台的主要技术挑战?",
header: "核心挑战",
multiSelect: false,
options: [
{ label: "实时数据同步", description: "100+用户同时在线,状态同步复杂度高" },
{ label: "可扩展性架构", description: "用户规模增长时的系统扩展能力" },
{ label: "冲突解决机制", description: "多用户同时编辑的冲突处理策略" }
]
},
{
question: "MVP阶段最关注的指标",
header: "优先级",
multiSelect: false,
options: [
{ label: "功能完整性", description: "实现所有核心功能" },
{ label: "用户体验", description: "流畅的交互体验和响应速度" },
{ label: "系统稳定性", description: "高可用性和数据一致性" }
]
}
]
})
```
**User input examples**:
- `1a 2c` → Q1:a, Q2:c
- `1a,2c` → Q1:a, Q2:c
**⚠️ CRITICAL**: Questions MUST reference topic keywords. Generic "Project type?" violates dynamic generation.
### Phase 2: Role Selection
**⚠️ CRITICAL**: User MUST interact to select roles. NEVER auto-select without user confirmation.
**Goal**: User selects roles from intelligent recommendations
**Available Roles**:
- data-architect (数据架构师)
- product-manager (产品经理)
- product-owner (产品负责人)
- scrum-master (敏捷教练)
- subject-matter-expert (领域专家)
- system-architect (系统架构师)
- test-strategist (测试策略师)
- ui-designer (UI 设计师)
- ux-expert (UX 专家)
**Available Roles**: data-architect, product-manager, product-owner, scrum-master, subject-matter-expert, system-architect, test-strategist, ui-designer, ux-expert
**Steps**:
1. **Intelligent role recommendation** (AI analysis):
- Analyze Phase 1 extracted keywords and challenges
- Use AI reasoning to determine most relevant roles for the specific topic
- Recommend count+2 roles (e.g., if user wants 3 roles, recommend 5 options)
- Provide clear rationale for each recommended role based on topic context
1. Analyze Phase 1 keywords → Recommend count+2 roles with rationale
2. AskUserQuestion (multiSelect=true) → Store to `session.selected_roles`
3. If count+2 > 4, split into multiple rounds
2. **User selection** (text interaction):
- Output all recommended roles at once (no batching needed for count+2 roles)
- Display roles with labels and relevance rationale
- Wait for user input in multi-select format
- Parse user input (support multiple formats)
- **Storage**: Store selections to `session.selected_roles`
**Example Output**:
```markdown
===== Phase 2: 角色选择 =====
【角色选择】请选择 3 个角色参与头脑风暴分析
a) system-architect (系统架构师)
推荐理由:实时同步架构设计和技术选型的核心角色
b) ui-designer (UI设计师)
推荐理由:协作界面用户体验和实时状态展示
c) product-manager (产品经理)
推荐理由:功能优先级和MVP范围决策
d) data-architect (数据架构师)
推荐理由:数据同步模型和存储方案设计
e) ux-expert (UX专家)
推荐理由:多用户协作交互流程优化
支持格式:
- 分别选择:2a 2c 2d (选择a、c、d)
- 合并语法:2acd (选择a、c、d)
- 逗号分隔:2a,c,d (选择a、c、d)
请输入选择:
**Example**:
```javascript
AskUserQuestion({
questions: [{
question: "请选择 3 个角色参与头脑风暴分析",
header: "角色选择",
multiSelect: true,
options: [
{ label: "system-architect", description: "实时同步架构设计和技术选型" },
{ label: "ui-designer", description: "协作界面用户体验和状态展示" },
{ label: "product-manager", description: "功能优先级和MVP范围决策" },
{ label: "data-architect", description: "数据同步模型和存储方案设计" }
]
}]
})
```
**User input examples**:
- `2acd` → Roles: a, c, d (system-architect, product-manager, data-architect)
- `2a 2c 2d` → Same result
- `2a,c,d` → Same result
**⚠️ CRITICAL**: User MUST interact. NEVER auto-select without confirmation.
**Role Recommendation Rules**:
- NO hardcoded keyword-to-role mappings
- Use intelligent analysis of topic, challenges, and requirements
- Consider role synergies and coverage gaps
- Explain WHY each role is relevant to THIS specific topic
- Default recommendation: count+2 roles for user to choose from
### Phase 3: Role-Specific Questions (Dynamic Generation)
### Phase 3: Role-Specific Questions
**Goal**: Generate deep questions mapping role expertise to Phase 1 challenges
**Algorithm**:
```
FOR each selected role:
1. Map Phase 1 challenges to role domain:
- "real-time sync" + system-architect → State management pattern
- "100 users" + system-architect → Communication protocol
- "low latency" + system-architect → Conflict resolution
1. FOR each selected role:
- Map Phase 1 challenges to role domain
- Generate 3-4 questions (implementation depth, trade-offs, edge cases)
- AskUserQuestion per role → Store to `session.role_decisions[role]`
2. Process roles sequentially (one at a time for clarity)
3. If role needs > 4 questions, split into multiple rounds
2. Generate 3-4 questions per role probing implementation depth, trade-offs, edge cases:
Q: "How handle real-time state sync for 100+ users?" (explores approach)
Q: "How resolve conflicts when 2 users edit simultaneously?" (explores edge case)
Options: [Event Sourcing/Centralized/CRDT] (concrete, explain trade-offs for THIS use case)
3. Output questions in text format per role:
- Display all questions for current role (3-4 questions, no 10-question limit)
- Questions in Chinese (用中文提问)
- Wait for user input
- Parse answers using intelligent parsing
- Store answers to session.role_decisions[role]
**Example** (system-architect):
```javascript
AskUserQuestion({
questions: [
{
question: "100+ 用户实时状态同步方案?",
header: "状态同步",
multiSelect: false,
options: [
{ label: "Event Sourcing", description: "完整事件历史,支持回溯,存储成本高" },
{ label: "集中式状态管理", description: "实现简单,单点瓶颈风险" },
{ label: "CRDT", description: "去中心化,自动合并,学习曲线陡" }
]
},
{
question: "两个用户同时编辑冲突如何解决?",
header: "冲突解决",
multiSelect: false,
options: [
{ label: "自动合并", description: "用户无感知,可能产生意外结果" },
{ label: "手动解决", description: "用户控制,增加交互复杂度" },
{ label: "版本控制", description: "保留历史,需要分支管理" }
]
}
]
})
```
**Batching Strategy**:
- Each role outputs all its questions at once (typically 3-4 questions)
- No need to split per role (within 10-question batch limit)
- Multiple roles processed sequentially (one role at a time for clarity)
### Phase 4: Conflict Resolution
**Output Format**: Follow standard format from "User Interaction Protocol" section (single-choice question format)
**Example Topic-Specific Questions** (system-architect role for "real-time collaboration platform"):
- "100+ 用户实时状态同步方案?" → Options: Event Sourcing / 集中式状态管理 / CRDT
- "两个用户同时编辑冲突如何解决?" → Options: 自动合并 / 手动解决 / 版本控制
- "低延迟通信协议选择?" → Options: WebSocket / SSE / 轮询
- "系统扩展性架构方案?" → Options: 微服务 / 单体+缓存 / Serverless
**Quality Requirements**: See "Question Generation Guidelines" section for detailed rules
### Phase 4: Cross-Role Clarification (Conflict Detection)
**Goal**: Resolve ACTUAL conflicts from Phase 3 answers, not pre-defined relationships
**Goal**: Resolve ACTUAL conflicts from Phase 3 answers
**Algorithm**:
```
1. Analyze Phase 3 answers for conflicts:
- Contradictory choices: product-manager "fast iteration" vs system-architect "complex Event Sourcing"
- Missing integration: ui-designer "Optimistic updates" but system-architect didn't address conflict handling
- Implicit dependencies: ui-designer "Live cursors" but no auth approach defined
2. FOR each detected conflict:
Generate clarification questions referencing SPECIFIC Phase 3 choices
3. Output clarification questions in text format:
- Batch conflicts into rounds (max 10 questions per round)
- Display questions with context from Phase 3 answers
- Questions in Chinese (用中文提问)
- Wait for user input
- Parse answers using intelligent parsing
- Store answers to session.cross_role_decisions
- Contradictory choices (e.g., "fast iteration" vs "complex Event Sourcing")
- Missing integration (e.g., "Optimistic updates" but no conflict handling)
- Implicit dependencies (e.g., "Live cursors" but no auth defined)
2. Generate clarification questions referencing SPECIFIC Phase 3 choices
3. AskUserQuestion (max 4 per call, multi-round) → Store to `session.cross_role_decisions`
4. If NO conflicts: Skip Phase 4 (inform user: "未检测到跨角色冲突,跳过Phase 4")
**Example**:
```javascript
AskUserQuestion({
questions: [{
question: "CRDT 与 UI 回滚期望冲突,如何解决?\n背景system-architect选择CRDTui-designer期望回滚UI",
header: "架构冲突",
multiSelect: false,
options: [
{ label: "采用 CRDT", description: "保持去中心化调整UI期望" },
{ label: "显示合并界面", description: "增加用户交互,展示冲突详情" },
{ label: "切换到 OT", description: "支持回滚,增加服务器复杂度" }
]
}]
})
```
**Batching Strategy**:
- Maximum 10 clarification questions per round
- If conflicts > 10, split into multiple rounds
- Prioritize most critical conflicts first
### Phase 4.5: Final Clarification
**Output Format**: Follow standard format from "User Interaction Protocol" section (single-choice question format with background context)
**Example Conflict Detection** (from Phase 3 answers):
- **Architecture Conflict**: "CRDT 与 UI 回滚期望冲突,如何解决?"
- Background: system-architect chose CRDT, ui-designer expects rollback UI
- Options: 采用 CRDT / 显示合并界面 / 切换到 OT
- **Integration Gap**: "实时光标功能缺少身份认证方案"
- Background: ui-designer chose live cursors, no auth defined
- Options: OAuth 2.0 / JWT Token / Session-based
**Quality Requirements**: See "Question Generation Guidelines" section for conflict-specific rules
### Phase 5: Generate Guidance Specification
**Purpose**: Ensure no important points missed before generating specification
**Steps**:
1. Load all decisions: `intent_context` + `selected_roles` + `role_decisions` + `cross_role_decisions`
2. Transform Q&A pairs to declarative: Questions → Headers, Answers → CONFIRMED/SELECTED statements
3. Generate guidance-specification.md (template below) - **PRIMARY OUTPUT FILE**
4. Update workflow-session.json with **METADATA ONLY**:
- session_id (e.g., "WFS-topic-slug")
- selected_roles[] (array of role names, e.g., ["system-architect", "ui-designer", "product-manager"])
- topic (original user input string)
- timestamp (ISO-8601 format)
- phase_completed: "artifacts"
- count_parameter (number from --count flag)
5. Validate: No interrogative sentences in .md file, all decisions traceable, no content duplication in .json
1. Ask initial check:
```javascript
AskUserQuestion({
questions: [{
question: "在生成最终规范之前,是否有前面未澄清的重点需要补充?",
header: "补充确认",
multiSelect: false,
options: [
{ label: "无需补充", description: "前面的讨论已经足够完整" },
{ label: "需要补充", description: "还有重要内容需要澄清" }
]
}]
})
```
2. If "需要补充":
- Analyze user's additional points
- Generate progressive questions (not role-bound, interconnected)
- AskUserQuestion (max 4 per round) → Store to `session.additional_decisions`
- Repeat until user confirms completion
3. If "无需补充": Proceed to Phase 5
**⚠️ CRITICAL OUTPUT SEPARATION**:
- **guidance-specification.md**: Full guidance content (decisions, rationale, integration points)
- **workflow-session.json**: Session metadata ONLY (no guidance content, no decisions, no Q&A pairs)
- **NO content duplication**: Guidance stays in .md, metadata stays in .json
**Progressive Pattern**: Questions interconnected, each round informs next, continue until resolved.
## Output Document Template
### Phase 5: Generate Specification
**Steps**:
1. Load all decisions: `intent_context` + `selected_roles` + `role_decisions` + `cross_role_decisions` + `additional_decisions`
2. Transform Q&A to declarative: Questions → Headers, Answers → CONFIRMED/SELECTED statements
3. Generate `guidance-specification.md`
4. Update `workflow-session.json` (metadata only)
5. Validate: No interrogative sentences, all decisions traceable
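For instance, one Phase 3 answer might be rendered like this in the specification (a hypothetical fragment; the actual layout follows the template below):
```markdown
## 状态同步
**SELECTED**: CRDT
**Rationale**: 去中心化,自动合并 (system-architect, Phase 3)
```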
---
## Question Guidelines
### Core Principle
**Target**: Developers (technically fluent, but questions must start from user needs)
**Question Structure**: `[business scenario / requirement premise] + [technical focus]`
**Option Structure**: `Label: [technical approach] + Description: [business impact] + [technical trade-offs]`
### Quality Rules
**MUST Include**:
- ✅ All questions in Chinese (用中文提问)
- ✅ 业务场景作为问题前提
- ✅ 技术选项的业务影响说明
- ✅ 量化指标和约束条件
**MUST Avoid**:
- ❌ 纯技术选型无业务上下文
- ❌ 过度抽象的用户体验问题
- ❌ 脱离话题的通用架构问题
### Phase-Specific Requirements
| Phase | Focus | Key Requirements |
|-------|-------|------------------|
| 1 | Intent understanding | Reference topic keywords; user scenarios, business constraints, priorities |
| 2 | Role recommendation | Intelligent analysis (NOT keyword mapping), explain relevance |
| 3 | Role questions | Reference Phase 1 keywords, concrete options with trade-offs |
| 4 | Conflict resolution | Reference SPECIFIC Phase 3 choices, explain impact on both roles |
---
## Output & Governance
### Output Template
**File**: `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md`
@@ -478,9 +386,9 @@ FOR each selected role:
## Next Steps
**⚠️ Automatic Continuation** (when called from auto-parallel):
- auto-parallel will assign agents to generate role-specific analysis documents
- Each selected role gets dedicated conceptual-planning-agent
- Agents read this guidance-specification.md for framework context
- auto-parallel assigns agents for role-specific analysis
- Each selected role gets conceptual-planning-agent
- Agents read this guidance-specification.md for context
## Appendix: Decision Tracking
| Decision ID | Category | Question | Selected | Phase | Rationale |
@@ -490,95 +398,19 @@ FOR each selected role:
| D-003+ | [Role] | [Q] | [A] | 3 | [Why] |
```
## Question Generation Guidelines
### Core Principle: Developer-Facing Questions with User Context
**Target Audience**: Developers (they understand the technology but must start from user needs)
**Generation Philosophy**:
1. **Phase 1**: User scenarios, business constraints, priorities (establish context)
2. **Phase 2**: Intelligent role recommendation based on topic analysis (not keyword mapping)
3. **Phase 3**: Business requirements + technology selection (requirement-driven technical decisions)
4. **Phase 4**: Business trade-offs of technical conflicts (help developers understand the impact)
### Universal Quality Rules
**Question Structure** (all phases):
```
[业务场景/需求前提] + [技术关注点]
```
**Option Structure** (all phases):
```
标签:[技术方案简称] + (业务特征)
说明:[业务影响] + [技术权衡]
```
**MUST Include** (all phases):
- ✅ All questions in Chinese (用中文提问)
- ✅ Business scenario as the question premise
- ✅ Business-impact notes for each technical option
- ✅ Quantified metrics and constraints
**MUST Avoid** (all phases):
- ❌ Pure technology choices with no business context
- ❌ Overly abstract user-experience questions
- ❌ Generic architecture questions detached from the topic
### Phase-Specific Requirements
**Phase 1 Requirements**:
- Questions MUST reference topic keywords (NOT generic "Project type?")
- Focus: user scenarios (who uses it? how? how often?), business constraints (budget, timeline, team, compliance)
- Success metrics: performance targets and user-experience goals
- Priority ranking: MVP vs long-term roadmap
**Phase 3 Requirements**:
- Questions MUST reference Phase 1 keywords (e.g., "real-time", "100 users")
- Options MUST be concrete approaches with relevance to topic
- Each option includes trade-offs specific to this use case
- Include business-requirement-driven technical questions with quantified metrics (concurrency, latency, availability)
**Phase 4 Requirements**:
- Questions MUST reference SPECIFIC Phase 3 choices in background context
- Options address the detected conflict directly
- Each option explains impact on both conflicting roles
- NEVER use static "Cross-Role Matrix" - ALWAYS analyze actual Phase 3 answers
- Focus: business trade-offs of technical conflicts; help developers understand the impact of each choice
## Validation Checklist
Generated guidance-specification.md MUST:
- ✅ No interrogative sentences (use CONFIRMED/SELECTED)
- ✅ Every decision traceable to user answer
- ✅ Cross-role conflicts resolved or documented
- ✅ Next steps concrete and specific
- ✅ All Phase 1-4 decisions in session metadata
## Update Mechanism
```
IF guidance-specification.md EXISTS:
    Prompt: "Regenerate completely / Update sections / Cancel"
ELSE:
    Run full Phase 1-5 flow
```
### File Structure
```
.workflow/active/WFS-[topic]/
├── workflow-session.json                 # Metadata ONLY
├── .process/
│   └── context-package.json              # Phase 0 output
└── .brainstorming/
    └── guidance-specification.md         # Full guidance content
```
## Governance Rules
### Session Metadata
**Output Requirements**:
- All decisions MUST use CONFIRMED/SELECTED (NO "?" in decision sections)
- Every decision MUST trace to user answer
- Conflicts MUST be resolved (not marked "TBD")
- Next steps MUST be actionable
- Topic preserved as authoritative reference in session
**CRITICAL**: Guidance is single source of truth for downstream phases. Ambiguity violates governance.
## Storage Validation
**workflow-session.json** (metadata only):
```json
{
"session_id": "WFS-{topic-slug}",
@@ -591,14 +423,31 @@ ELSE:
}
```
**⚠️ Rule**: Session JSON stores ONLY metadata (session_id, selected_roles[], topic, timestamps). All guidance content goes to guidance-specification.md.
**⚠️ Rule**: Session JSON stores ONLY metadata. All guidance content goes to guidance-specification.md.
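A minimal sketch enforcing this rule, using the metadata keys listed in Phase 5 (the whitelist approach itself is an illustration, not mandated):
```javascript
// Reject any session JSON that smuggles guidance content past the metadata whitelist.
const ALLOWED_KEYS = new Set([
  'session_id', 'selected_roles', 'topic', 'timestamp',
  'phase_completed', 'count_parameter',
]);

function assertMetadataOnly(session) {
  const extras = Object.keys(session).filter((k) => !ALLOWED_KEYS.has(k));
  if (extras.length > 0) {
    throw new Error(`Guidance content belongs in guidance-specification.md, not session JSON: ${extras.join(', ')}`);
  }
}
```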
## File Structure
```
.workflow/active/WFS-[topic]/
├── workflow-session.json                 # Session metadata ONLY
└── .brainstorming/
    └── guidance-specification.md         # Full guidance content
```
### Validation Checklist
- ✅ No interrogative sentences (use CONFIRMED/SELECTED)
- ✅ Every decision traceable to user answer
- ✅ Cross-role conflicts resolved or documented
- ✅ Next steps concrete and specific
- ✅ No content duplication between .json and .md
### Update Mechanism
```
IF guidance-specification.md EXISTS:
    Prompt: "Regenerate completely / Update sections / Cancel"
ELSE:
    Run full Phase 0-5 flow
```
### Governance Rules
- All decisions MUST use CONFIRMED/SELECTED (NO "?" in decision sections)
- Every decision MUST trace to user answer
- Conflicts MUST be resolved (not marked "TBD")
- Next steps MUST be actionable
- Topic preserved as authoritative reference
**CRITICAL**: Guidance is single source of truth for downstream phases. Ambiguity violates governance.


@@ -141,26 +141,10 @@ OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/{role}/
TOPIC: {user-provided-topic}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
- Output: topic_framework_content
2. **load_role_template**
- Action: Load {role-name} planning template
- Command: Read(~/.claude/workflows/cli-templates/planning-roles/{role}.md)
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and original user intent
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_context (contains original user prompt as PRIMARY reference)
4. **load_style_skill** (ONLY for ui-designer role when style_skill_package exists)
- Action: Load style SKILL package for design system reference
- Command: Read(.claude/skills/style-{style_skill_package}/SKILL.md) AND Read(.workflow/reference_style/{style_skill_package}/design-tokens.json)
- Output: style_skill_content, design_tokens
- Usage: Apply design tokens in ui-designer analysis and artifacts
1. load_topic_framework → .workflow/active/WFS-{session}/.brainstorming/guidance-specification.md
2. load_role_template → ~/.claude/workflows/cli-templates/planning-roles/{role}.md
3. load_session_metadata → .workflow/active/WFS-{session}/workflow-session.json
4. load_style_skill (ui-designer only, if style_skill_package) → .claude/skills/style-{style_skill_package}/
## Analysis Requirements
**Primary Reference**: Original user prompt from workflow-session.json is authoritative
@@ -170,13 +154,9 @@ TOPIC: {user-provided-topic}
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md**: Comprehensive {role-name} analysis addressing all framework discussion points
- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
- **FORBIDDEN**: Never use `recommendations.md` or any filename not starting with `analysis`
- **Auto-split if large**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files: analysis.md, analysis-1.md, analysis-2.md)
- **Content**: Includes both analysis AND recommendations sections within analysis files
2. **Framework Reference**: Include @../guidance-specification.md reference in analysis
3. **User Intent Alignment**: Validate analysis aligns with original user objectives from session_context
1. **analysis.md** (optionally with analysis-{slug}.md sub-documents)
2. **Framework Reference**: @../guidance-specification.md
3. **User Intent Alignment**: Validate against session_context
## Completion Criteria
- Address each discussion point from guidance-specification.md with {role-name} expertise
@@ -199,10 +179,10 @@ TOPIC: {user-provided-topic}
- guidance-specification.md path
**Validation**:
- Each role creates `.workflow/active/WFS-{topic}/.brainstorming/{role}/analysis.md` (primary file)
- If content is large (>800 lines), may split to `analysis-1.md`, `analysis-2.md` (max 3 files total)
- **File naming pattern**: ALL files MUST start with `analysis` prefix (use `analysis*.md` for globbing)
- **FORBIDDEN naming**: No `recommendations.md`, `recommendations-*.md`, or any non-`analysis` prefixed files
- Each role creates `.workflow/active/WFS-{topic}/.brainstorming/{role}/analysis.md`
- Optionally with `analysis-{slug}.md` sub-documents (max 5)
- **File pattern**: `analysis*.md` for globbing
- **FORBIDDEN**: `recommendations.md` or any non-`analysis` prefixed files
- All N role analyses completed
**TodoWrite Update (Phase 2 agents dispatched - tasks attached in parallel)**:
@@ -453,12 +433,9 @@ CONTEXT_VARS:
├── workflow-session.json # Session metadata ONLY
└── .brainstorming/
├── guidance-specification.md # Framework (Phase 1)
├── {role-1}/
│   └── analysis.md                       # Role analysis (Phase 2)
├── {role-2}/
│   └── analysis.md
├── {role-N}/
│   └── analysis.md
├── {role}/
│   ├── analysis.md                       # Main document (with optional @references)
│   └── analysis-{slug}.md                # Section documents (max 5)
└── synthesis-specification.md # Integration (Phase 3)
```


@@ -2,325 +2,318 @@
name: synthesis
description: Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent
argument-hint: "[optional: --session session-id]"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*), Edit(*), Glob(*)
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*), Edit(*), Glob(*), AskUserQuestion(*)
---
## Overview
Three-phase workflow to eliminate ambiguities and enhance conceptual depth in role analyses:
Six-phase workflow to eliminate ambiguities and enhance conceptual depth in role analyses:
**Phase 1-2 (Main Flow)**: Session detection → File discovery → Path preparation
**Phase 1-2**: Session detection → File discovery → Path preparation
**Phase 3A**: Cross-role analysis agent → Generate recommendations
**Phase 4**: User selects enhancements → User answers clarifications (via AskUserQuestion)
**Phase 5**: Parallel update agents (one per role)
**Phase 6**: Context package update → Metadata update → Completion report
**Phase 3A (Analysis Agent)**: Cross-role analysis → Generate recommendations
**Phase 4 (Main Flow)**: User selects enhancements → User answers clarifications → Build update plan
**Phase 5 (Parallel Update Agents)**: Each agent updates ONE role document → Parallel execution
**Phase 6 (Main Flow)**: Metadata update → Completion report
**Key Features**:
- Multi-agent architecture (analysis agent + parallel update agents)
- Clear separation: Agent analysis vs Main flow interaction
- Parallel document updates (one agent per role)
- User intent alignment validation
All user interactions use AskUserQuestion tool (max 4 questions per call, multi-round).
**Document Flow**:
- Input: `[role]/analysis*.md`, `guidance-specification.md`, session metadata
- Output: Updated `[role]/analysis*.md` with Enhancements + Clarifications sections
---
## Quick Reference
### Phase Summary
| Phase | Goal | Executor | Output |
|-------|------|----------|--------|
| 1 | Session detection | Main flow | session_id, brainstorm_dir |
| 2 | File discovery | Main flow | role_analysis_paths |
| 3A | Cross-role analysis | Agent | enhancement_recommendations |
| 4 | User interaction | Main flow + AskUserQuestion | update_plan |
| 5 | Document updates | Parallel agents | Updated analysis*.md |
| 6 | Finalization | Main flow | context-package.json, report |
### AskUserQuestion Pattern
```javascript
// Enhancement selection (multi-select)
AskUserQuestion({
questions: [{
question: "请选择要应用的改进建议",
header: "改进选择",
multiSelect: true,
options: [
{ label: "EP-001: API Contract", description: "添加详细的请求/响应 schema 定义" },
{ label: "EP-002: User Intent", description: "明确用户需求优先级和验收标准" }
]
}]
})
// Clarification questions (single-select, multi-round)
AskUserQuestion({
questions: [
{
question: "MVP 阶段的核心目标是什么?",
header: "用户意图",
multiSelect: false,
options: [
{ label: "快速验证", description: "最小功能集,快速上线获取反馈" },
{ label: "技术壁垒", description: "完善架构,为长期发展打基础" },
{ label: "功能完整", description: "覆盖所有规划功能,延迟上线" }
]
}
]
})
```
---
## Task Tracking
```json
[
{"content": "Detect session and validate analyses", "status": "in_progress", "activeForm": "Detecting session"},
{"content": "Detect session and validate analyses", "status": "pending", "activeForm": "Detecting session"},
{"content": "Discover role analysis file paths", "status": "pending", "activeForm": "Discovering paths"},
{"content": "Execute analysis agent (cross-role analysis)", "status": "pending", "activeForm": "Executing analysis agent"},
{"content": "Present enhancements for user selection", "status": "pending", "activeForm": "Presenting enhancements"},
{"content": "Generate and present clarification questions", "status": "pending", "activeForm": "Clarifying with user"},
{"content": "Build update plan from user input", "status": "pending", "activeForm": "Building update plan"},
{"content": "Execute parallel update agents (one per role)", "status": "pending", "activeForm": "Updating documents in parallel"},
{"content": "Update session metadata and generate report", "status": "pending", "activeForm": "Finalizing session"}
{"content": "Execute analysis agent (cross-role analysis)", "status": "pending", "activeForm": "Executing analysis"},
{"content": "Present enhancements via AskUserQuestion", "status": "pending", "activeForm": "Selecting enhancements"},
{"content": "Clarification questions via AskUserQuestion", "status": "pending", "activeForm": "Clarifying"},
{"content": "Execute parallel update agents", "status": "pending", "activeForm": "Updating documents"},
{"content": "Update context package and metadata", "status": "pending", "activeForm": "Finalizing"}
]
```
---
## Execution Phases
### Phase 1: Discovery & Validation
1. **Detect Session**: Use `--session` parameter or find `.workflow/active/WFS-*` directories
1. **Detect Session**: Use `--session` parameter or find `.workflow/active/WFS-*`
2. **Validate Files**:
- `guidance-specification.md` (optional, warn if missing)
- `*/analysis*.md` (required, error if empty)
3. **Load User Intent**: Extract from `workflow-session.json` (project/description field)
3. **Load User Intent**: Extract from `workflow-session.json`
### Phase 2: Role Discovery & Path Preparation
**Main flow prepares file paths for Agent**:
1. **Discover Analysis Files**:
- Glob(.workflow/active/WFS-{session}/.brainstorming/*/analysis*.md)
- Supports: analysis.md, analysis-1.md, analysis-2.md, analysis-3.md
- Validate: At least one file exists (error if empty)
- Glob: `.workflow/active/WFS-{session}/.brainstorming/*/analysis*.md`
- Supports: analysis.md + analysis-{slug}.md (max 5)
2. **Extract Role Information**:
- `role_analysis_paths`: Relative paths from brainstorm_dir
- `participating_roles`: Role names extracted from directory paths
- `role_analysis_paths`: Relative paths
- `participating_roles`: Role names from directories
3. **Pass to Agent** (Phase 3):
- `session_id`
- `brainstorm_dir`: .workflow/active/WFS-{session}/.brainstorming/
- `role_analysis_paths`: ["product-manager/analysis.md", "system-architect/analysis-1.md", ...]
- `participating_roles`: ["product-manager", "system-architect", ...]
**Main Flow Responsibility**: File discovery and path preparation only (NO file content reading)
3. Pass to Agent: session_id, brainstorm_dir, role_analysis_paths, participating_roles (see the sketch below)
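A sketch of steps 1–3, assuming Glob returns paths shaped like `{role}/analysis*.md` relative to brainstorm_dir (the shapes are inferred from the step descriptions):
```javascript
// Derive role names from discovered analysis paths (illustrative values).
const roleAnalysisPaths = [
  'product-manager/analysis.md',
  'system-architect/analysis.md',
  'system-architect/analysis-api.md',
];

const participatingRoles = [...new Set(roleAnalysisPaths.map((p) => p.split('/')[0]))];
console.log(participatingRoles); // ['product-manager', 'system-architect']
```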
### Phase 3A: Analysis & Enhancement Agent
**First agent call**: Run cross-role analysis and generate enhancement recommendations
**Agent executes cross-role analysis**:
```bash
Task(conceptual-planning-agent): "
```javascript
Task(conceptual-planning-agent, `
## Agent Mission
Analyze role documents, identify conflicts/gaps, and generate enhancement recommendations
Analyze role documents, identify conflicts/gaps, generate enhancement recommendations
## Input from Main Flow
- brainstorm_dir: {brainstorm_dir}
- role_analysis_paths: {role_analysis_paths}
- participating_roles: {participating_roles}
## Input
- brainstorm_dir: ${brainstorm_dir}
- role_analysis_paths: ${role_analysis_paths}
- participating_roles: ${participating_roles}
## Execution Instructions
[FLOW_CONTROL]
## Flow Control Steps
1. load_session_metadata → Read workflow-session.json
2. load_role_analyses → Read all analysis files
3. cross_role_analysis → Identify consensus, conflicts, gaps, ambiguities
4. generate_recommendations → Format as EP-001, EP-002, ...
### Flow Control Steps
**AGENT RESPONSIBILITY**: Execute these analysis steps sequentially with context accumulation:
1. **load_session_metadata**
- Action: Load original user intent as primary reference
- Command: Read({brainstorm_dir}/../workflow-session.json)
- Output: original_user_intent (from project/description field)
2. **load_role_analyses**
- Action: Load all role analysis documents
- Command: For each path in role_analysis_paths: Read({brainstorm_dir}/{path})
- Output: role_analyses_content_map = {role_name: content}
3. **cross_role_analysis**
- Action: Identify consensus themes, conflicts, gaps, underspecified areas
- Output: consensus_themes, conflicting_views, gaps_list, ambiguities
4. **generate_recommendations**
- Action: Convert cross-role analysis findings into structured enhancement recommendations
- Format: EP-001, EP-002, ... (sequential numbering)
- Fields: id, title, affected_roles, category, current_state, enhancement, rationale, priority
- Taxonomy: Map to 9 categories (User Intent, Requirements, Architecture, UX, Feasibility, Risk, Process, Decisions, Terminology)
- Output: enhancement_recommendations (JSON array)
### Output to Main Flow
Return JSON array:
## Output Format
[
{
\"id\": \"EP-001\",
\"title\": \"API Contract Specification\",
\"affected_roles\": [\"system-architect\", \"api-designer\"],
\"category\": \"Architecture\",
\"current_state\": \"High-level API descriptions\",
\"enhancement\": \"Add detailed contract definitions with request/response schemas\",
\"rationale\": \"Enables precise implementation and testing\",
\"priority\": \"High\"
},
...
"id": "EP-001",
"title": "API Contract Specification",
"affected_roles": ["system-architect", "api-designer"],
"category": "Architecture",
"current_state": "High-level API descriptions",
"enhancement": "Add detailed contract definitions",
"rationale": "Enables precise implementation",
"priority": "High"
}
]
"
`)
```
### Phase 4: Main Flow User Interaction
### Phase 4: User Interaction
**Main flow handles all user interaction via text output**:
**All interactions via AskUserQuestion (Chinese questions)**
**⚠️ CRITICAL**: ALL questions MUST use Chinese (所有问题必须用中文) for better user understanding
#### Step 1: Enhancement Selection
1. **Present Enhancement Options** (multi-select):
```markdown
===== Enhancement 选择 =====
```javascript
// If enhancements > 4, split into multiple rounds
const enhancements = [...]; // from Phase 3A
const BATCH_SIZE = 4;
请选择要应用的改进建议(可多选):
for (let i = 0; i < enhancements.length; i += BATCH_SIZE) {
const batch = enhancements.slice(i, i + BATCH_SIZE);
a) EP-001: API Contract Specification
影响角色:system-architect, api-designer
说明:添加详细的请求/响应 schema 定义
AskUserQuestion({
questions: [{
question: `请选择要应用的改进建议 (第${Math.floor(i/BATCH_SIZE)+1}轮)`,
header: "改进选择",
multiSelect: true,
options: batch.map(ep => ({
label: `${ep.id}: ${ep.title}`,
description: `影响: ${ep.affected_roles.join(', ')} | ${ep.enhancement}`
}))
}]
})
b) EP-002: User Intent Validation
影响角色:product-manager, ux-expert
说明:明确用户需求优先级和验收标准
// Store selections before next round
}
c) EP-003: Error Handling Strategy
影响角色:system-architect
说明:统一异常处理和降级方案
支持格式:1abc 或 1a 1b 1c 或 1a,b,c
请输入选择(可跳过输入 skip
// User can also skip: provide "跳过" option
```
2. **Generate Clarification Questions** (based on analysis agent output):
- ✅ **ALL questions in Chinese (所有问题必须用中文)**
- Use 9-category taxonomy scan results
- Prioritize most critical questions (no hard limit)
- Each with 2-4 options + descriptions
#### Step 2: Clarification Questions
3. **Interactive Clarification Loop** (max 10 questions per round):
```markdown
===== Clarification 问题 (第 1/2 轮) =====
```javascript
// Generate questions based on 9-category taxonomy scan
// Categories: User Intent, Requirements, Architecture, UX, Feasibility, Risk, Process, Decisions, Terminology
【问题1 - 用户意图】MVP 阶段的核心目标是什么?
a) 快速验证市场需求
说明:最小功能集,快速上线获取反馈
b) 建立技术壁垒
说明:完善架构,为长期发展打基础
c) 实现功能完整性
说明:覆盖所有规划功能,延迟上线
const clarifications = [...]; // from analysis
const BATCH_SIZE = 4;
【问题2 - 架构决策】技术栈选择的优先考虑因素?
a) 团队熟悉度
说明:使用现有技术栈,降低学习成本
b) 技术先进性
说明:采用新技术,提升竞争力
c) 生态成熟度
说明:选择成熟方案,保证稳定性
for (let i = 0; i < clarifications.length; i += BATCH_SIZE) {
const batch = clarifications.slice(i, i + BATCH_SIZE);
const currentRound = Math.floor(i / BATCH_SIZE) + 1;
const totalRounds = Math.ceil(clarifications.length / BATCH_SIZE);
...最多10个问题
AskUserQuestion({
questions: batch.map(q => ({
question: q.question,
header: q.category.substring(0, 12),
multiSelect: false,
options: q.options.map(opt => ({
label: opt.label,
description: opt.description
}))
}))
})
请回答 (格式: 1a 2b 3c...)
// Store answers before next round
}
```
Wait for user input → Parse all answers in batch → Continue to next round if needed
### Question Guidelines
4. **Build Update Plan**:
```
**Target**: Developers (they understand the technology but must start from user needs)
**Question Structure**: `[跨角色分析发现] + [需要澄清的决策点]` (cross-role finding + decision point needing clarification)
**Option Structure**: `标签:[具体方案] + 说明:[业务影响] + [技术权衡]` (label: concrete approach; description: business impact + trade-off)
**9-Category Taxonomy**:
| Category | Focus | Example Question Pattern |
|----------|-------|--------------------------|
| User Intent | User goals | "MVP阶段核心目标?" + 验证/壁垒/完整性 |
| Requirements | Requirement refinement | "功能优先级如何排序?" + 核心/增强/可选 |
| Architecture | Architecture decisions | "技术栈选择考量?" + 熟悉度/先进性/成熟度 |
| UX | User experience | "交互复杂度取舍?" + 简洁/丰富/渐进 |
| Feasibility | Feasibility | "资源约束下的范围?" + 最小/标准/完整 |
| Risk | Risk management | "风险容忍度?" + 保守/平衡/激进 |
| Process | Process cadence | "迭代节奏?" + 快速/稳定/灵活 |
| Decisions | Decision confirmation | "冲突解决方案?" + 方案A/方案B/折中 |
| Terminology | Terminology alignment | "统一使用哪个术语?" + 术语A/术语B |
**Quality Rules**:
**MUST Include**:
- ✅ All questions in Chinese (用中文提问)
- ✅ Grounded in specific cross-role findings
- ✅ Options include business-impact notes
- ✅ Resolve an actual ambiguity or conflict
**MUST Avoid**:
- ❌ Generic questions unrelated to the role analyses
- ❌ Repeating content already confirmed in the artifacts phase
- ❌ Implementation-level detail questions
#### Step 3: Build Update Plan
```javascript
update_plan = {
"role1": {
"enhancements": [EP-001, EP-003],
"enhancements": ["EP-001", "EP-003"],
"clarifications": [
{"question": "...", "answer": "...", "category": "..."},
...
{"question": "...", "answer": "...", "category": "..."}
]
},
"role2": {
"enhancements": [EP-002],
"enhancements": ["EP-002"],
"clarifications": [...]
},
...
}
}
```
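A hedged sketch of how Step 3 could fold the Step 1 selections and Step 2 answers into `update_plan`; the input shapes (`affected_roles`, `roles`) are assumptions based on the Phase 3A output format:
```javascript
// Group user-confirmed enhancements and clarifications by affected role.
function buildUpdatePlan(selectedEnhancements, answeredClarifications) {
  const plan = {};
  const forRole = (r) => (plan[r] ||= { enhancements: [], clarifications: [] });
  for (const ep of selectedEnhancements) {
    for (const role of ep.affected_roles) forRole(role).enhancements.push(ep.id);
  }
  for (const c of answeredClarifications) {
    for (const role of c.roles || []) {
      forRole(role).clarifications.push({
        question: c.question, answer: c.answer, category: c.category,
      });
    }
  }
  return plan;
}
```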
### Phase 5: Parallel Document Update Agents
**Parallel agent calls** (one per role needing updates):
**Execute in parallel** (one agent per role):
```bash
# Execute in parallel using single message with multiple Task calls
Task(conceptual-planning-agent): "
```javascript
// Single message with multiple Task calls for parallelism
Task(conceptual-planning-agent, `
## Agent Mission
Apply user-confirmed enhancements and clarifications to {role1} analysis document
Apply enhancements and clarifications to ${role} analysis
## Agent Intent
- **Goal**: Integrate synthesis results into role-specific analysis
- **Scope**: Update ONLY {role1}/analysis.md (isolated, no cross-role dependencies)
- **Constraints**: Preserve original insights, add refinements without deletion
## Input
- role: ${role}
- analysis_path: ${brainstorm_dir}/${role}/analysis.md
- enhancements: ${role_enhancements}
- clarifications: ${role_clarifications}
- original_user_intent: ${intent}
## Input from Main Flow
- role: {role1}
- analysis_path: {brainstorm_dir}/{role1}/analysis.md
- enhancements: [EP-001, EP-003] (user-selected improvements)
- clarifications: [{question, answer, category}, ...] (user-confirmed answers)
- original_user_intent: {from session metadata}
## Flow Control Steps
1. load_current_analysis → Read analysis file
2. add_clarifications_section → Insert Q&A section
3. apply_enhancements → Integrate into relevant sections
4. resolve_contradictions → Remove conflicts
5. enforce_terminology → Align terminology
6. validate_intent → Verify alignment with user intent
7. write_updated_file → Save changes
## Execution Instructions
[FLOW_CONTROL]
### Flow Control Steps
**AGENT RESPONSIBILITY**: Execute these update steps sequentially:
1. **load_current_analysis**
- Action: Load existing role analysis document
- Command: Read({brainstorm_dir}/{role1}/analysis.md)
- Output: current_analysis_content
2. **add_clarifications_section**
- Action: Insert Clarifications section with Q&A
- Format: \"## Clarifications\\n### Session {date}\\n- **Q**: {question} (Category: {category})\\n **A**: {answer}\"
- Output: analysis_with_clarifications
3. **apply_enhancements**
- Action: Integrate EP-001, EP-003 into relevant sections
- Strategy: Locate section by category (Architecture → Architecture section, UX → User Experience section)
- Output: analysis_with_enhancements
4. **resolve_contradictions**
- Action: Remove conflicts between original content and clarifications/enhancements
- Output: contradiction_free_analysis
5. **enforce_terminology_consistency**
- Action: Align all terminology with user-confirmed choices from clarifications
- Output: terminology_consistent_analysis
6. **validate_user_intent_alignment**
- Action: Verify all updates support original_user_intent
- Output: validated_analysis
7. **write_updated_file**
- Action: Save final analysis document
- Command: Write({brainstorm_dir}/{role1}/analysis.md, validated_analysis)
- Output: File update confirmation
### Output
Updated {role1}/analysis.md with Clarifications section + enhanced content
")
Task(conceptual-planning-agent): "
## Agent Mission
Apply user-confirmed enhancements and clarifications to {role2} analysis document
## Agent Intent
- **Goal**: Integrate synthesis results into role-specific analysis
- **Scope**: Update ONLY {role2}/analysis.md (isolated, no cross-role dependencies)
- **Constraints**: Preserve original insights, add refinements without deletion
## Input from Main Flow
- role: {role2}
- analysis_path: {brainstorm_dir}/{role2}/analysis.md
- enhancements: [EP-002] (user-selected improvements)
- clarifications: [{question, answer, category}, ...] (user-confirmed answers)
- original_user_intent: {from session metadata}
## Execution Instructions
[FLOW_CONTROL]
### Flow Control Steps
**AGENT RESPONSIBILITY**: Execute same 7 update steps as {role1} agent (load → clarifications → enhancements → contradictions → terminology → validation → write)
### Output
Updated {role2}/analysis.md with Clarifications section + enhanced content
")
# ... repeat for each role in update_plan
## Output
Updated ${role}/analysis.md
`)
```
**Agent Characteristics**:
- **Intent**: Integrate user-confirmed synthesis results (NOT generate new analysis)
- **Isolation**: Each agent updates exactly ONE role (parallel execution safe)
- **Context**: Minimal - receives only role-specific enhancements + clarifications
- **Dependencies**: Zero cross-agent dependencies (full parallelism)
- **Isolation**: Each agent updates exactly ONE role (parallel safe)
- **Dependencies**: Zero cross-agent dependencies
- **Validation**: All updates must align with original_user_intent
### Phase 6: Completion & Metadata Update
### Phase 6: Finalization
**Main flow finalizes**:
#### Step 1: Update Context Package
```javascript
// Sync updated analyses to context-package.json
const context_pkg = Read(".workflow/active/WFS-{session}/.process/context-package.json")
// Update guidance-specification if exists
// Update synthesis-specification if exists
// Re-read all role analysis files
// Update metadata timestamps
Write(context_pkg_path, JSON.stringify(context_pkg))
```
#### Step 2: Update Session Metadata
1. Wait for all parallel agents to complete
2. Update workflow-session.json:
```json
{
"phases": {
@@ -330,15 +323,13 @@ Updated {role2}/analysis.md with Clarifications section + enhanced content
"completed_at": "timestamp",
"participating_roles": [...],
"clarification_results": {
"enhancements_applied": ["EP-001", "EP-002", ...],
"enhancements_applied": ["EP-001", "EP-002"],
"questions_asked": 3,
"categories_clarified": ["Architecture", "UX", ...],
"roles_updated": ["role1", "role2", ...],
"outstanding_items": []
"categories_clarified": ["Architecture", "UX"],
"roles_updated": ["role1", "role2"]
},
"quality_metrics": {
"user_intent_alignment": "validated",
"requirement_coverage": "comprehensive",
"ambiguity_resolution": "complete",
"terminology_consistency": "enforced"
}
@@ -347,7 +338,8 @@ Updated {role2}/analysis.md with Clarifications section + enhanced content
}
```
3. Generate completion report (show to user):
#### Step 3: Completion Report
```markdown
## ✅ Clarification Complete
@@ -359,9 +351,11 @@ Updated {role2}/analysis.md with Clarifications section + enhanced content
✅ PROCEED: `/workflow:plan --session WFS-{session-id}`
```
---
## Output
**Location**: `.workflow/active/WFS-{session}/.brainstorming/[role]/analysis*.md` (in-place updates)
**Location**: `.workflow/active/WFS-{session}/.brainstorming/[role]/analysis*.md`
**Updated Structure**:
```markdown
@@ -381,116 +375,24 @@ Updated {role2}/analysis.md with Clarifications section + enhanced content
- Ambiguities resolved, placeholders removed
- Consistent terminology
### Phase 6: Update Context Package
**Purpose**: Sync updated role analyses to context-package.json to avoid stale cache
**Operations**:
```bash
context_pkg_path = ".workflow/active/WFS-{session}/.process/context-package.json"
# 1. Read existing package
context_pkg = Read(context_pkg_path)
# 2. Re-read brainstorm artifacts (now with synthesis enhancements)
brainstorm_dir = ".workflow/active/WFS-{session}/.brainstorming"
# 2.1 Update guidance-specification if exists
IF exists({brainstorm_dir}/guidance-specification.md):
context_pkg.brainstorm_artifacts.guidance_specification.content = Read({brainstorm_dir}/guidance-specification.md)
context_pkg.brainstorm_artifacts.guidance_specification.updated_at = NOW()
# 2.2 Update synthesis-specification if exists
IF exists({brainstorm_dir}/synthesis-specification.md):
IF context_pkg.brainstorm_artifacts.synthesis_output:
context_pkg.brainstorm_artifacts.synthesis_output.content = Read({brainstorm_dir}/synthesis-specification.md)
context_pkg.brainstorm_artifacts.synthesis_output.updated_at = NOW()
# 2.3 Re-read all role analysis files
role_analysis_files = Glob({brainstorm_dir}/*/analysis*.md)
context_pkg.brainstorm_artifacts.role_analyses = []
FOR file IN role_analysis_files:
role_name = extract_role_from_path(file) # e.g., "ui-designer"
relative_path = file.replace({brainstorm_dir}/, "")
context_pkg.brainstorm_artifacts.role_analyses.push({
"role": role_name,
"files": [{
"path": relative_path,
"type": "primary",
"content": Read(file),
"updated_at": NOW()
}]
})
# 3. Update metadata
context_pkg.metadata.updated_at = NOW()
context_pkg.metadata.synthesis_timestamp = NOW()
# 4. Write back
Write(context_pkg_path, JSON.stringify(context_pkg, indent=2))
REPORT: "✅ Updated context-package.json with synthesis results"
```
**TodoWrite Update**:
```json
{"content": "Update context package with synthesis results", "status": "completed", "activeForm": "Updating context package"}
```
## Session Metadata
Update `workflow-session.json`:
```json
{
"phases": {
"BRAINSTORM": {
"status": "clarification_completed",
"clarification_completed": true,
"completed_at": "timestamp",
"participating_roles": ["product-manager", "system-architect", ...],
"clarification_results": {
"questions_asked": 3,
"categories_clarified": ["Architecture & Design", ...],
"roles_updated": ["system-architect", "ui-designer", ...],
"outstanding_items": []
},
"quality_metrics": {
"user_intent_alignment": "validated",
"requirement_coverage": "comprehensive",
"ambiguity_resolution": "complete",
"terminology_consistency": "enforced",
"decision_transparency": "documented"
}
}
}
}
```
---
## Quality Checklist
**Content**:
- All role analyses loaded/analyzed
- Cross-role analysis (consensus, conflicts, gaps)
- 9-category ambiguity scan
- Questions prioritized
- Clarifications documented
- All role analyses loaded/analyzed
- Cross-role analysis (consensus, conflicts, gaps)
- 9-category ambiguity scan
- Questions prioritized
**Analysis**:
- User intent validated
- Cross-role synthesis complete
- Ambiguities resolved
- Correct roles updated
- Terminology consistent
- Contradictions removed
- User intent validated
- Cross-role synthesis complete
- Ambiguities resolved
- ✅ Terminology consistent
**Documents**:
- Clarifications section formatted
- Sections reflect answers
- No placeholders (TODO/TBD)
- Valid Markdown
- Cross-references maintained
- Clarifications section formatted
- Sections reflect answers
- No placeholders (TODO/TBD)
- Valid Markdown


@@ -81,6 +81,7 @@ ELSE:
**Framework-Based Analysis** (when guidance-specification.md exists):
```bash
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Generate system architect analysis addressing topic framework
## Framework Integration Required
@@ -136,6 +137,7 @@ Task(subagent_type="conceptual-planning-agent",
# For existing analysis updates
IF update_mode = "incremental":
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Update existing system architect analysis
## Current Analysis Context


@@ -0,0 +1,321 @@
---
name: debug
description: Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved
argument-hint: "\"bug description or error message\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
# Workflow Debug Command (/workflow:debug)
## Overview
Evidence-based interactive debugging command. Systematically identifies root causes through hypothesis-driven logging and iterative verification.
**Core workflow**: Explore → Add Logging → Reproduce → Analyze Log → Fix → Verify
## Usage
```bash
/workflow:debug <BUG_DESCRIPTION>
# Arguments
<bug-description> Bug description, error message, or stack trace (required)
```
## Execution Process
```
Session Detection:
├─ Check if debug session exists for this bug
├─ EXISTS + debug.log has content → Analyze mode
└─ NOT_FOUND or empty log → Explore mode
Explore Mode:
├─ Locate error source in codebase
├─ Generate testable hypotheses (dynamic count)
├─ Add NDJSON logging instrumentation
└─ Output: Hypothesis list + await user reproduction
Analyze Mode:
├─ Parse debug.log, validate each hypothesis
└─ Decision:
├─ Confirmed → Fix root cause
├─ Inconclusive → Add more logging, iterate
└─ All rejected → Generate new hypotheses
Fix & Cleanup:
├─ Apply fix based on confirmed hypothesis
├─ User verifies
├─ Remove debug instrumentation
└─ If not fixed → Return to Analyze mode
```
## Implementation
### Session Setup & Mode Detection
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const bugSlug = bug_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `DBG-${bugSlug}-${dateStr}`
const sessionFolder = `.workflow/.debug/${sessionId}`
const debugLogPath = `${sessionFolder}/debug.log`
// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const logHasContent = sessionExists && fs.existsSync(debugLogPath) && fs.statSync(debugLogPath).size > 0
const mode = logHasContent ? 'analyze' : 'explore'
if (!sessionExists) {
bash(`mkdir -p ${sessionFolder}`)
}
```
---
### Explore Mode
**Step 1.1: Locate Error Source**
```javascript
// Extract keywords from bug description
const keywords = extractErrorKeywords(bug_description)
// e.g., ['Stack Length', '未找到', 'registered 0']
// Search codebase for error locations
for (const keyword of keywords) {
Grep({ pattern: keyword, path: ".", output_mode: "content", "-C": 3 })
}
// Identify affected files and functions
const affectedLocations = [...] // from search results
```
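One plausible shape for `extractErrorKeywords` (a heuristic sketch; the command does not prescribe an implementation):
```javascript
// Pull quoted phrases, identifiers, and numbers out of a bug description.
function extractErrorKeywords(description) {
  const quoted = [...description.matchAll(/"([^"]+)"|'([^']+)'/g)]
    .map((m) => m[1] || m[2]);
  const tokens = description.match(/[A-Za-z_][A-Za-z0-9_]{3,}|\d+/g) || [];
  return [...new Set([...quoted, ...tokens])].slice(0, 5);
}

console.log(extractErrorKeywords("'Stack Length' 未找到, registered 0 handlers"));
// → ["Stack Length", "Stack", "Length", "registered", "handlers"]
```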
**Step 1.2: Generate Hypotheses (Dynamic)**
```javascript
// Hypothesis categories based on error pattern
const HYPOTHESIS_PATTERNS = {
"not found|missing|undefined|未找到": "data_mismatch",
"0|empty|zero|registered 0": "logic_error",
"timeout|connection|sync": "integration_issue",
"type|format|parse": "type_mismatch"
}
// Generate hypotheses based on actual issue (NOT fixed count)
function generateHypotheses(bugDescription, affectedLocations) {
const hypotheses = []
// Analyze bug and create targeted hypotheses
// Each hypothesis has:
// - id: H1, H2, ... (dynamic count)
// - description: What might be wrong
// - testable_condition: What to log
// - logging_point: Where to add instrumentation
return hypotheses // Could be 1, 3, 5, or more
}
const hypotheses = generateHypotheses(bug_description, affectedLocations)
```
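A minimal concrete version of the stub above, reusing `HYPOTHESIS_PATTERNS` and assuming `affectedLocations` entries look like `{ file, line }` (both assumptions):
```javascript
function generateHypotheses(bugDescription, affectedLocations) {
  const hypotheses = [];
  if (affectedLocations.length === 0) return hypotheses;
  for (const [pattern, category] of Object.entries(HYPOTHESIS_PATTERNS)) {
    if (!new RegExp(pattern, 'i').test(bugDescription)) continue;
    const loc = affectedLocations[hypotheses.length % affectedLocations.length];
    hypotheses.push({
      id: `H${hypotheses.length + 1}`,
      description: `Suspected ${category} near ${loc.file}:${loc.line}`,
      testable_condition: `Capture inputs/outputs at ${loc.file}:${loc.line}`,
      logging_point: `${loc.file}:${loc.line}`,
    });
  }
  return hypotheses; // count follows how many patterns matched, not a fixed number
}
```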
**Step 1.3: Add NDJSON Instrumentation**
For each hypothesis, add logging at the relevant location:
**Python template**:
```python
# region debug [H{n}]
try:
import json, time
_dbg = {
"sid": "{sessionId}",
"hid": "H{n}",
"loc": "{file}:{line}",
"msg": "{testable_condition}",
"data": {
# Capture relevant values here
},
"ts": int(time.time() * 1000)
}
with open(r"{debugLogPath}", "a", encoding="utf-8") as _f:
_f.write(json.dumps(_dbg, ensure_ascii=False) + "\n")
except: pass
# endregion
```
**JavaScript/TypeScript template**:
```javascript
// region debug [H{n}]
try {
require('fs').appendFileSync("{debugLogPath}", JSON.stringify({
sid: "{sessionId}",
hid: "H{n}",
loc: "{file}:{line}",
msg: "{testable_condition}",
data: { /* Capture relevant values */ },
ts: Date.now()
}) + "\n");
} catch(_) {}
// endregion
```
**Output to user**:
```
## Hypotheses Generated
Based on error "{bug_description}", generated {n} hypotheses:
{hypotheses.map(h => `
### ${h.id}: ${h.description}
- Logging at: ${h.logging_point}
- Testing: ${h.testable_condition}
`).join('')}
**Debug log**: ${debugLogPath}
**Next**: Run reproduction steps, then come back for analysis.
```
---
### Analyze Mode
```javascript
// Parse NDJSON log
const entries = Read(debugLogPath).split('\n')
.filter(l => l.trim())
.map(l => JSON.parse(l))
// Group by hypothesis
const byHypothesis = groupBy(entries, 'hid')
// Validate each hypothesis
for (const [hid, logs] of Object.entries(byHypothesis)) {
const hypothesis = hypotheses.find(h => h.id === hid)
const latestLog = logs[logs.length - 1]
// Check if evidence confirms or rejects hypothesis
const verdict = evaluateEvidence(hypothesis, latestLog.data)
// Returns: 'confirmed' | 'rejected' | 'inconclusive'
}
```
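Hedged sketches of the two helpers used above; `evaluateEvidence` in particular encodes a toy rule and assumes the logged `data` carries a boolean `found` flag:
```javascript
function groupBy(items, key) {
  return items.reduce((acc, item) => {
    (acc[item[key]] ||= []).push(item);
    return acc;
  }, {});
}

// Toy rule: a hypothesis predicting "X is missing" is confirmed when the
// instrumentation logged found=false, rejected when found=true.
function evaluateEvidence(hypothesis, data) {
  if (data == null || data.found === undefined) return 'inconclusive';
  return data.found === false ? 'confirmed' : 'rejected';
}
```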
**Output**:
```
## Evidence Analysis
Analyzed ${entries.length} log entries.
${results.map(r => `
### ${r.id}: ${r.description}
- **Status**: ${r.verdict}
- **Evidence**: ${JSON.stringify(r.evidence)}
- **Reason**: ${r.reason}
`).join('')}
${confirmedHypothesis ? `
## Root Cause Identified
**${confirmedHypothesis.id}**: ${confirmedHypothesis.description}
Ready to fix.
` : `
## Need More Evidence
Add more logging or refine hypotheses.
`}
```
---
### Fix & Cleanup
```javascript
// Apply fix based on confirmed hypothesis
// ... Edit affected files
// After user verifies fix works:
// Remove debug instrumentation (search for region markers)
const instrumentedFiles = Grep({
pattern: "# region debug|// region debug",
output_mode: "files_with_matches"
})
for (const file of instrumentedFiles) {
// Remove content between region markers
removeDebugRegions(file)
}
console.log(`
## Debug Complete
- Root cause: ${confirmedHypothesis.description}
- Fix applied to: ${modifiedFiles.join(', ')}
- Debug instrumentation removed
`)
```
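A file-level sketch of `removeDebugRegions` (plain Node `fs` here; the actual command would go through the Read/Edit tools):
```javascript
const fs = require('fs');

// Delete every line between "region debug" and "endregion" markers, inclusive.
function removeDebugRegions(file) {
  const lines = fs.readFileSync(file, 'utf8').split('\n');
  const kept = [];
  let inRegion = false;
  for (const line of lines) {
    if (/(#|\/\/)\s*region debug/.test(line)) { inRegion = true; continue; }
    if (inRegion && /(#|\/\/)\s*endregion/.test(line)) { inRegion = false; continue; }
    if (!inRegion) kept.push(line);
  }
  fs.writeFileSync(file, kept.join('\n'));
}
```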
---
## Debug Log Format (NDJSON)
Each line is a JSON object:
```json
{"sid":"DBG-xxx-2025-12-18","hid":"H1","loc":"file.py:func:42","msg":"Check dict keys","data":{"keys":["a","b"],"target":"c","found":false},"ts":1734567890123}
```
| Field | Description |
|-------|-------------|
| `sid` | Session ID |
| `hid` | Hypothesis ID (H1, H2, ...) |
| `loc` | Code location |
| `msg` | What's being tested |
| `data` | Captured values |
| `ts` | Timestamp (ms) |
## Session Folder
```
.workflow/.debug/DBG-{slug}-{date}/
├── debug.log # NDJSON log (main artifact)
└── resolution.md # Summary after fix (optional)
```
## Iteration Flow
```
First Call (/workflow:debug "error"):
├─ No session exists → Explore mode
├─ Extract error keywords, search codebase
├─ Generate hypotheses, add logging
└─ Await user reproduction
After Reproduction (/workflow:debug "error"):
├─ Session exists + debug.log has content → Analyze mode
├─ Parse log, evaluate hypotheses
└─ Decision:
├─ Confirmed → Fix → User verify
│ ├─ Fixed → Cleanup → Done
│ └─ Not fixed → Add logging → Iterate
├─ Inconclusive → Add logging → Iterate
└─ All rejected → New hypotheses → Iterate
Output:
└─ .workflow/.debug/DBG-{slug}-{date}/debug.log
```
## Error Handling
| Situation | Action |
|-----------|--------|
| Empty debug.log | Verify reproduction triggered the code path |
| All hypotheses rejected | Generate new hypotheses with broader scope |
| Fix doesn't work | Iterate with more granular logging |
| >5 iterations | Escalate to `/workflow:lite-fix` with evidence |


@@ -56,6 +56,7 @@ Phase 2: Planning Document Validation
└─ Validate .task/ contains IMPL-*.json files
Phase 3: TodoWrite Generation
├─ Update session status to "active" (Step 0)
├─ Parse TODO_LIST.md for task statuses
├─ Generate TodoWrite for entire workflow
└─ Prepare session context paths
@@ -67,7 +68,9 @@ Phase 4: Execution Strategy & Task Execution
├─ Get next in_progress task from TodoWrite
├─ Lazy load task JSON
├─ Launch agent with task context
├─ Mark task completed
├─ Mark task completed (update IMPL-*.json status)
│ # Quick fix: Update task status for ccw dashboard
│ # TS=$(date -Iseconds) && jq --arg ts "$TS" '.status="completed" | .status_history=(.status_history // [])+[{"from":"in_progress","to":"completed","changed_at":$ts}]' IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
└─ Advance to next task
Phase 5: Completion
@@ -78,6 +81,7 @@ Phase 5: Completion
Resume Mode (--resume-session):
├─ Skip Phase 1 & Phase 2
└─ Entry Point: Phase 3 (TodoWrite Generation)
├─ Update session status to "active" (if not already)
└─ Continue: Phase 4 → Phase 5
```
@@ -177,6 +181,16 @@ bash(cat .workflow/active/${sessionId}/workflow-session.json)
### Phase 3: TodoWrite Generation
**Applies to**: Both normal and resume modes (resume mode entry point)
**Step 0: Update Session Status to Active**
Before generating TodoWrite, update session status from "planning" to "active":
```bash
# Update session status (idempotent - safe to run if already active)
jq '.status = "active" | .execution_started_at = (.execution_started_at // (now | todate))' \
  .workflow/active/${sessionId}/workflow-session.json > tmp.json && \
mv tmp.json .workflow/active/${sessionId}/workflow-session.json
```
This ensures the dashboard shows the session as "ACTIVE" during execution.
**Process**:
1. **Create TodoWrite List**: Generate task list from TODO_LIST.md (not from task JSONs)
- Parse TODO_LIST.md to extract all tasks with current statuses
@@ -378,6 +392,7 @@ TodoWrite({
```bash
Task(subagent_type="{meta.agent}",
run_in_background=false,
prompt="Execute task: {task.title}
{[FLOW_CONTROL]}


@@ -86,22 +86,23 @@ bash(cp .workflow/project.json .workflow/project.json.backup)
```javascript
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description="Deep project analysis",
prompt=`
Analyze project for workflow initialization and generate .workflow/project.json.
## MANDATORY FIRST STEPS
1. Execute: cat ~/.claude/workflows/cli-templates/schemas/project-json-schema.json (get schema reference)
2. Execute: ~/.claude/scripts/get_modules_by_depth.sh (get project structure)
2. Execute: ccw tool exec get_modules_by_depth '{}' (get project structure)
## Task
Generate complete project.json with:
- project_name: ${projectName}
- initialized_at: current ISO timestamp
- overview: {description, technology_stack, architecture, key_components, entry_points, metrics}
- overview: {description, technology_stack, architecture, key_components}
- features: ${regenerate ? 'preserve from backup' : '[] (empty)'}
- development_index: ${regenerate ? 'preserve from backup' : '{feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}'}
- statistics: ${regenerate ? 'preserve from backup' : '{total_features: 0, total_sessions: 0, last_updated}'}
- memory_resources: {skills, documentation, module_docs, gaps, last_scanned}
- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp, analysis_mode}
## Analysis Requirements
@@ -118,28 +119,11 @@ Generate complete project.json with:
- Patterns: singleton, factory, repository
- Key components: 5-10 modules {name, path, description, importance}
**Metrics**:
- total_files: Source files (exclude tests/configs)
- lines_of_code: Use find + wc -l
- module_count: Use ~/.claude/scripts/get_modules_by_depth.sh
- complexity: low | medium | high
**Entry Points**:
- main: index.ts, main.py, main.go
- cli_commands: package.json scripts, Makefile targets
- api_endpoints: HTTP/REST routes (if applicable)
**Memory Resources**:
- skills: Scan .claude/skills/ → [{name, type, path}]
- documentation: Scan .workflow/docs/ → [{name, path, has_readme, has_architecture}]
- module_docs: Find **/CLAUDE.md (exclude node_modules, .git)
- gaps: Identify missing resources
## Execution
1. Structural scan: get_modules_by_depth.sh, find, wc -l
2. Semantic analysis: Gemini for patterns/architecture
3. Synthesis: Merge findings
4. ${regenerate ? 'Merge with preserved features/statistics from .workflow/project.json.backup' : ''}
4. ${regenerate ? 'Merge with preserved features/development_index/statistics from .workflow/project.json.backup' : ''}
5. Write JSON: Write('.workflow/project.json', jsonContent)
6. Report: Return brief completion summary
@@ -168,17 +152,6 @@ Frameworks: ${projectJson.overview.technology_stack.frameworks.join(', ')}
Style: ${projectJson.overview.architecture.style}
Components: ${projectJson.overview.key_components.length} core modules
### Metrics
Files: ${projectJson.overview.metrics.total_files}
LOC: ${projectJson.overview.metrics.lines_of_code}
Complexity: ${projectJson.overview.metrics.complexity}
### Memory Resources
SKILL Packages: ${projectJson.memory_resources.skills.length}
Documentation: ${projectJson.memory_resources.documentation.length}
Module Docs: ${projectJson.memory_resources.module_docs.length}
Gaps: ${projectJson.memory_resources.gaps.join(', ') || 'none'}
---
Project state: .workflow/project.json
${regenerate ? 'Backup: .workflow/project.json.backup' : ''}


@@ -258,6 +258,33 @@ TodoWrite({
### Step 3: Launch Execution
**Executor Resolution** (task-level executor takes precedence over the global setting):
```javascript
// Resolve the task's executor: prefer executorAssignments, then fall back to the global executionMethod
function getTaskExecutor(task) {
const assignments = executionContext?.executorAssignments || {}
if (assignments[task.id]) {
return assignments[task.id].executor // 'gemini' | 'codex' | 'agent'
}
// Fallback: map from the global executionMethod
const method = executionContext?.executionMethod || 'Auto'
if (method === 'Agent') return 'agent'
if (method === 'Codex') return 'codex'
// Auto: decide by task complexity
return planObject.complexity === 'Low' ? 'agent' : 'codex'
}
// Group tasks by executor
function groupTasksByExecutor(tasks) {
const groups = { gemini: [], codex: [], agent: [] }
tasks.forEach(task => {
const executor = getTaskExecutor(task)
groups[executor].push(task)
})
return groups
}
```
**Execution Flow**: Parallel batches concurrently → Sequential batches in order
```javascript
const parallel = executionCalls.filter(c => c.executionType === "parallel")
@@ -283,8 +310,9 @@ for (const call of sequential) {
**Option A: Agent Execution**
When to use:
- `executionMethod = "Agent"`
- `executionMethod = "Auto" AND complexity = "Low"`
- `getTaskExecutor(task) === "agent"`
- `executionMethod = "Agent"` (global fallback)
- `executionMethod = "Auto" AND complexity = "Low"` (global fallback)
**Task Formatting Principle**: Each task is a self-contained checklist. The agent only needs to know what THIS task requires, not its position or relation to other tasks.
@@ -333,6 +361,7 @@ Complete each task according to its "Done when" checklist.
Task(
subagent_type="code-developer",
run_in_background=false,
description=batch.taskSummary,
prompt=formatBatchPrompt({
tasks: batch.tasks,
@@ -399,8 +428,9 @@ function extractRelatedFiles(tasks) {
**Option B: CLI Execution (Codex)**
When to use:
- `executionMethod = "Codex"`
- `executionMethod = "Auto" AND complexity = "Medium" or "High"`
- `getTaskExecutor(task) === "codex"`
- `executionMethod = "Codex"` (global fallback)
- `executionMethod = "Auto" AND complexity = "Medium/High"` (global fallback)
**Task Formatting Principle**: Same as Agent - each task is a self-contained checklist. No task numbering or position awareness.
@@ -473,10 +503,10 @@ Detailed plan: ${executionContext.session.artifacts.plan}`)
return prompt
}
codex --full-auto exec "${buildCLIPrompt(batch)}" --skip-git-repo-check -s danger-full-access
ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write
```
**Execution with tracking**:
**Execution with fixed IDs** (predictable ID pattern):
```javascript
// Launch CLI in foreground (NOT background)
// Timeout based on complexity: Low=40min, Medium=60min, High=100min
@@ -486,15 +516,57 @@ const timeoutByComplexity = {
"High": 6000000 // 100 minutes
}
// Generate fixed execution ID: ${sessionId}-${groupId}
// This enables predictable ID lookup without relying on resume context chains
const sessionId = executionContext?.session?.id || 'standalone'
const fixedExecutionId = `${sessionId}-${batch.groupId}` // e.g., "implement-auth-2025-12-13-P1"
// Check if resuming from previous failed execution
const previousCliId = batch.resumeFromCliId || null
// Build command with fixed ID (and optional resume for continuation)
const cli_command = previousCliId
? `ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId} --resume ${previousCliId}`
: `ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId}`
bash_result = Bash(
command=cli_command,
timeout=timeoutByComplexity[planObject.complexity] || 3600000
)
// Execution ID is now predictable: ${fixedExecutionId}
// Can also extract from output: "ID: implement-auth-2025-12-13-P1"
const cliExecutionId = fixedExecutionId
// Update TodoWrite when execution completes
```
**Result Collection**: After completion, analyze output and collect result following `executionResult` structure
**Resume on Failure** (with fixed ID):
```javascript
// If execution failed or timed out, offer resume option
if (bash_result.status === 'failed' || bash_result.status === 'timeout') {
console.log(`
⚠️ Execution incomplete. Resume available:
Fixed ID: ${fixedExecutionId}
Lookup: ccw cli detail ${fixedExecutionId}
Resume: ccw cli -p "Continue tasks" --resume ${fixedExecutionId} --tool codex --mode write --id ${fixedExecutionId}-retry
`)
// Store for potential retry in same session
batch.resumeFromCliId = fixedExecutionId
}
```
**Result Collection**: After completion, analyze output and collect result following `executionResult` structure (include `cliExecutionId` for resume capability)
**Option C: CLI Execution (Gemini)**
When to use: `getTaskExecutor(task) === "gemini"` (analysis-type tasks)
```bash
# Reuse the same formatBatchPrompt as Option B, switching tool and mode
ccw cli -p "${formatBatchPrompt(batch)}" --tool gemini --mode analysis --id ${sessionId}-${batch.groupId}
```
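Tying Options A–C together, a hedged dispatch sketch over the groups from `groupTasksByExecutor` (Step 3); the actual CLI/Task invocations are elided to comments:
```javascript
const groups = groupTasksByExecutor(planObject.tasks);
for (const [executor, tasks] of Object.entries(groups)) {
  if (tasks.length === 0) continue;
  // executor === 'agent'  → Task(code-developer, formatBatchPrompt(...))   (Option A)
  // executor === 'codex'  → ccw cli --tool codex  --mode write             (Option B)
  // executor === 'gemini' → ccw cli --tool gemini --mode analysis          (Option C)
  console.log(`dispatch ${tasks.length} task(s) via ${executor}`);
}
```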
### Step 4: Progress Tracking
@@ -541,21 +613,88 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-q
# - Report findings directly
# Method 2: Gemini Review (recommended)
gemini -p "[Shared Prompt Template with artifacts]"
ccw cli -p "[Shared Prompt Template with artifacts]" --tool gemini --mode analysis
# CONTEXT includes: @**/* @${plan.json} [@${exploration.json}]
# Method 3: Qwen Review (alternative)
qwen -p "[Shared Prompt Template with artifacts]"
ccw cli -p "[Shared Prompt Template with artifacts]" --tool qwen --mode analysis
# Same prompt as Gemini, different execution engine
# Method 4: Codex Review (autonomous)
codex --full-auto exec "[Verify plan acceptance criteria at ${plan.json}]" --skip-git-repo-check -s danger-full-access
ccw cli -p "[Verify plan acceptance criteria at ${plan.json}]" --tool codex --mode write
```
**Multi-Round Review with Fixed IDs**:
```javascript
// Generate fixed review ID
const reviewId = `${sessionId}-review`
// First review pass with fixed ID
const reviewResult = Bash(`ccw cli -p "[Review prompt]" --tool gemini --mode analysis --id ${reviewId}`)
// If issues found, continue review dialog with fixed ID chain
if (hasUnresolvedIssues(reviewResult)) {
// Resume with follow-up questions
Bash(`ccw cli -p "Clarify the security concerns you mentioned" --resume ${reviewId} --tool gemini --mode analysis --id ${reviewId}-followup`)
}
```
**Implementation Note**: Replace `[Shared Prompt Template with artifacts]` placeholder with actual template content, substituting:
- `@{plan.json}` → `@${executionContext.session.artifacts.plan}`
- `[@{exploration.json}]` → exploration files from artifacts (if exists)
### Step 6: Update Development Index
**Trigger**: After all executions complete (regardless of code review)
**Skip Condition**: Skip if `.workflow/project.json` does not exist
**Operations**:
```javascript
const projectJsonPath = '.workflow/project.json'
if (!fileExists(projectJsonPath)) return // Silent skip
const projectJson = JSON.parse(Read(projectJsonPath))
// Initialize if needed
if (!projectJson.development_index) {
projectJson.development_index = { feature: [], enhancement: [], bugfix: [], refactor: [], docs: [] }
}
// Detect category from keywords
function detectCategory(text) {
text = text.toLowerCase()
if (/\b(fix|bug|error|issue|crash)\b/.test(text)) return 'bugfix'
if (/\b(refactor|cleanup|reorganize)\b/.test(text)) return 'refactor'
if (/\b(doc|readme|comment)\b/.test(text)) return 'docs'
if (/\b(add|new|create|implement)\b/.test(text)) return 'feature'
return 'enhancement'
}
// Detect sub_feature from task file paths
function detectSubFeature(tasks) {
const dirs = tasks.map(t => t.file?.split('/').slice(-2, -1)[0]).filter(Boolean)
const counts = dirs.reduce((a, d) => { a[d] = (a[d] || 0) + 1; return a }, {})
return Object.entries(counts).sort((a, b) => b[1] - a[1])[0]?.[0] || 'general'
}
const category = detectCategory(`${planObject.summary} ${planObject.approach}`)
const entry = {
title: planObject.summary.slice(0, 60),
sub_feature: detectSubFeature(planObject.tasks),
date: new Date().toISOString().split('T')[0],
description: planObject.approach.slice(0, 100),
status: previousExecutionResults.every(r => r.status === 'completed') ? 'completed' : 'partial',
session_id: executionContext?.session?.id || null
}
projectJson.development_index[category].push(entry)
projectJson.statistics.last_updated = new Date().toISOString()
Write(projectJsonPath, JSON.stringify(projectJson, null, 2))
console.log(`✓ Development index: [${category}] ${entry.title}`)
```
## Best Practices
**Input Modes**: In-memory (lite-plan), prompt (standalone), file (JSON/text)
@@ -571,8 +710,10 @@ codex --full-auto exec "[Verify plan acceptance criteria at ${plan.json}]" --ski
| Empty file | File exists but no content | Error: "File is empty: {path}. Provide task description." |
| Invalid Enhanced Task JSON | JSON missing required fields | Warning: "Missing required fields. Treating as plain text." |
| Malformed JSON | JSON parsing fails | Treat as plain text (expected for non-JSON files) |
| Execution failure | Agent/Codex crashes | Display error, save partial progress, suggest retry |
| Execution failure | Agent/Codex crashes | Display error, use fixed ID `${sessionId}-${groupId}` for resume: `ccw cli -p "Continue" --resume <fixed-id> --id <fixed-id>-retry` |
| Execution timeout | CLI exceeded timeout | Use fixed ID for resume with extended timeout |
| Codex unavailable | Codex not installed | Show installation instructions, offer Agent execution |
| Fixed ID not found | Custom ID lookup failed | Check `ccw cli history`, verify date directories |
## Data Structures
@@ -594,10 +735,15 @@ Passed from lite-plan via global variable:
explorationAngles: string[], // List of exploration angles
explorationManifest: {...} | null, // Exploration manifest
clarificationContext: {...} | null,
executionMethod: "Agent" | "Codex" | "Auto",
executionMethod: "Agent" | "Codex" | "Auto", // 全局默认
codeReviewTool: "Skip" | "Gemini Review" | "Agent Review" | string,
originalUserInput: string,
// Task-level executor assignment (takes precedence over executionMethod)
executorAssignments: {
[taskId]: { executor: "gemini" | "codex" | "agent", reason: string }
},
// Session artifacts location (saved by lite-plan)
session: {
id: string, // Session identifier: {taskSlug}-{shortTimestamp}
@@ -627,8 +773,20 @@ Collected after each execution call completes:
tasksSummary: string, // Brief description of tasks handled
completionSummary: string, // What was completed
keyOutputs: string, // Files created/modified, key changes
notes: string, // Important context for next execution
fixedCliId: string | null // Fixed CLI execution ID (e.g., "implement-auth-2025-12-13-P1")
}
```
Appended to `previousExecutionResults` array for context continuity in multi-execution scenarios.
**Fixed ID Pattern**: `${sessionId}-${groupId}` enables predictable lookup without auto-generated timestamps.
**Resume Usage**: If `status` is "partial" or "failed", use `fixedCliId` to resume:
```bash
# Lookup previous execution
ccw cli detail ${fixedCliId}
# Resume with new fixed ID for retry
ccw cli -p "Continue from where we left off" --resume ${fixedCliId} --tool codex --mode write --id ${fixedCliId}-retry
```


@@ -43,11 +43,11 @@ Phase 1: Task Analysis & Exploration
├─ needsExploration=true → Launch parallel cli-explore-agents (1-4 based on complexity)
└─ needsExploration=false → Skip to Phase 2/3
Phase 2: Clarification (optional, multi-round)
├─ Aggregate clarification_needs from all exploration angles
├─ Deduplicate similar questions
└─ Decision:
├─ Has clarifications → AskUserQuestion (max 4 questions per round, multiple rounds allowed)
└─ No clarifications → Skip to Phase 3
Phase 3: Planning (NO CODE EXECUTION - planning only)
@@ -71,15 +71,18 @@ Phase 5: Dispatch
### Phase 1: Intelligent Multi-Angle Exploration
**Session Setup** (MANDATORY - follow exactly):
```javascript
// Helper: Get UTC+8 (China Standard Time) ISO string
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const taskSlug = task_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10) // Format: 2025-11-29
const sessionId = `${taskSlug}-${dateStr}` // e.g., "implement-jwt-refresh-2025-11-29"
const sessionFolder = `.workflow/.lite-plan/${sessionId}`
bash(`mkdir -p ${sessionFolder} && test -d ${sessionFolder} && echo "SUCCESS: ${sessionFolder}" || echo "FAILED: ${sessionFolder}"`)
```
**Exploration Decision Logic**:
@@ -137,11 +140,17 @@ function selectAngles(taskDescription, count) {
const selectedAngles = selectAngles(task_description, complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1))
// Planning strategy determination
const planningStrategy = complexity === 'Low'
? 'Direct Claude Planning'
: 'cli-lite-planning-agent'
console.log(`
## Exploration Plan
Task Complexity: ${complexity}
Selected Angles: ${selectedAngles.join(', ')}
Planning Strategy: ${planningStrategy}
Launching ${selectedAngles.length} parallel explorations...
`)
@@ -149,11 +158,16 @@ Launching ${selectedAngles.length} parallel explorations...
**Launch Parallel Explorations** - Orchestrator assigns angle to each agent:
**⚠️ CRITICAL - NO BACKGROUND EXECUTION**:
- **MUST NOT use `run_in_background: true`** - exploration results are REQUIRED before planning
```javascript
// Launch agents with pre-assigned angles
const explorationTasks = selectedAngles.map((angle, index) =>
Task(
subagent_type="cli-explore-agent",
run_in_background=false, // ⚠️ MANDATORY: Must wait for results
description=`Explore: ${angle}`,
prompt=`
## Task Objective
@@ -167,7 +181,7 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
3. Execute: cat ~/.claude/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)
@@ -196,11 +210,14 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
**Required Fields** (all ${angle} focused):
- project_structure: Modules/architecture relevant to ${angle}
- relevant_files: Files affected from ${angle} perspective
**IMPORTANT**: Use object format with relevance scores for synthesis:
\`[{path: "src/file.ts", relevance: 0.85, rationale: "Core ${angle} logic"}]\`
Scores: 0.7+ high priority, 0.5-0.7 medium, <0.5 low
- patterns: ${angle}-related patterns to follow
- dependencies: Dependencies relevant to ${angle}
- integration_points: Where to integrate from ${angle} viewpoint (include file:line locations)
- constraints: ${angle}-specific limitations/conventions
- clarification_needs: ${angle}-related ambiguities (options array + recommended index)
- _metadata.exploration_angle: "${angle}"
## Success Criteria
@@ -211,7 +228,7 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
- [ ] Integration points include file:line locations
- [ ] Constraints are project-specific to ${angle}
- [ ] JSON output follows schema exactly
- [ ] clarification_needs includes options + recommended
## Output
Write: ${sessionFolder}/exploration-${angle}.json
@@ -234,7 +251,7 @@ const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json"
const explorationManifest = {
session_id: sessionId,
task_description: task_description,
timestamp: getUtc8ISOString(),
complexity: complexity,
exploration_count: explorationCount,
explorations: explorationFiles.map(file => {
@@ -270,10 +287,12 @@ Angles explored: ${explorationManifest.explorations.map(e => e.angle).join(', ')
---
### Phase 2: Clarification (Optional, Multi-Round)
**Skip if**: No exploration or `clarification_needs` is empty across all explorations
**⚠️ CRITICAL**: AskUserQuestion tool limits max 4 questions per call. **MUST execute multiple rounds** to exhaust all clarification needs - do NOT stop at round 1.
**Aggregate clarification needs from all exploration angles**:
```javascript
// Load manifest and all exploration files
@@ -296,32 +315,37 @@ explorations.forEach(exp => {
}
})
// Intelligent deduplication: analyze allClarifications by intent
// - Identify questions with similar intent across different angles
// - Merge similar questions: combine options, consolidate context
// - Produce dedupedClarifications with unique intents only
const dedupedClarifications = intelligentMerge(allClarifications)
// Multi-round clarification: batch questions (max 4 per round)
if (dedupedClarifications.length > 0) {
  const BATCH_SIZE = 4
  const totalRounds = Math.ceil(dedupedClarifications.length / BATCH_SIZE)
  for (let i = 0; i < dedupedClarifications.length; i += BATCH_SIZE) {
    const batch = dedupedClarifications.slice(i, i + BATCH_SIZE)
    const currentRound = Math.floor(i / BATCH_SIZE) + 1
    console.log(`### Clarification Round ${currentRound}/${totalRounds}`)
    AskUserQuestion({
      questions: batch.map(need => ({
        question: `[${need.source_angle}] ${need.question}\n\nContext: ${need.context}`,
        header: need.source_angle.substring(0, 12),
        multiSelect: false,
        options: need.options.map((opt, index) => ({
          label: opt,
          description: need.recommended === index ? `Recommended` : `Use ${opt}`
        }))
      }))
    })
    // Store batch responses in clarificationContext before next round
  }
}
```
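The intent-level merge is performed semantically by Claude rather than by code; a rough mechanical approximation (a sketch only, matching on normalized question text instead of true intent) could look like:
```javascript
// Approximation of intelligentMerge: group clarifications whose normalized
// questions collide, combining options and consolidating context.
// The real merge is semantic (intent-level), not string-level.
function intelligentMerge(clarifications) {
  const byIntent = new Map()
  clarifications.forEach(c => {
    const key = c.question.toLowerCase().replace(/[^a-z0-9 ]/g, '').trim()
    const existing = byIntent.get(key)
    if (existing) {
      existing.options = [...new Set([...existing.options, ...c.options])]
      existing.context = `${existing.context}\n${c.context}`
    } else {
      byIntent.set(key, { ...c })
    }
  })
  return [...byIntent.values()]
}
```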
@@ -335,12 +359,34 @@ if (uniqueClarifications.length > 0) {
**IMPORTANT**: Phase 3 is **planning only** - NO code execution. All execution happens in Phase 5 via lite-execute.
**Executor Assignment** (assigned intelligently by Claude after plan generation):
```javascript
// Assignment rules (highest priority first):
// 1. Explicit user instruction: "analyze with gemini..." → gemini, "implement with codex..." → codex
// 2. Default → agent
const executorAssignments = {} // { taskId: { executor: 'gemini'|'codex'|'agent', reason: string } }
plan.tasks.forEach(task => {
  // Claude semantically analyzes each task against the rules above and assigns an executor
executorAssignments[task.id] = { executor: '...', reason: '...' }
})
```
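A minimal sketch of rule 1 as keyword matching (illustrative only; the actual assignment is semantic analysis by Claude, and `detectExplicitTool` is a hypothetical helper):
```javascript
// Crude keyword approximation of "user explicitly requested a tool"
function detectExplicitTool(text) {
  if (/\bgemini\b/i.test(text)) return 'gemini'
  if (/\bcodex\b/i.test(text)) return 'codex'
  return null
}

const executorAssignments = {}
plan.tasks.forEach(task => {
  const explicit = detectExplicitTool(`${task.title} ${task.description}`)
  executorAssignments[task.id] = explicit
    ? { executor: explicit, reason: 'explicitly requested by user' }
    : { executor: 'agent', reason: 'default executor' }
})
```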
**Low Complexity** - Direct planning by Claude:
```javascript
// Step 1: Read schema
const schema = Bash(`cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`)
// Step 2: ⚠️ MANDATORY - Read and review ALL exploration files
const manifest = JSON.parse(Read(`${sessionFolder}/explorations-manifest.json`))
manifest.explorations.forEach(exp => {
const explorationData = Read(exp.path)
console.log(`\n### Exploration: ${exp.angle}\n${explorationData}`)
})
// Step 3: Generate plan following schema (Claude directly, no agent)
// ⚠️ Plan MUST incorporate insights from exploration files read in Step 2
const plan = {
summary: "...",
approach: "...",
@@ -348,13 +394,13 @@ const plan = {
estimated_time: "...",
recommended_execution: "Agent",
complexity: "Low",
_metadata: { timestamp: getUtc8ISOString(), source: "direct-planning", planning_mode: "direct" }
}
// Step 4: Write plan to session folder
Write(`${sessionFolder}/plan.json`, JSON.stringify(plan, null, 2))
// Step 5: MUST continue to Phase 4 (Confirmation) - DO NOT execute code here
```
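For reference, a filled-in low-complexity plan.json might look like this (values and task fields illustrative; the authoritative shape is plan-json-schema.json):
```json
{
  "summary": "Add rate limiting to the login endpoint to block brute-force attempts.",
  "approach": "Wrap the auth route with a sliding-window limiter and return 429 on breach.",
  "tasks": [
    { "id": "T1", "title": "Implement login rate limiter", "scope": "src/auth/" }
  ],
  "estimated_time": "1h",
  "recommended_execution": "Agent",
  "complexity": "Low",
  "_metadata": {
    "timestamp": "2025-11-29T18:00:00.000Z",
    "source": "direct-planning",
    "planning_mode": "direct"
  }
}
```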
**Medium/High Complexity** - Invoke cli-lite-planning-agent:
@@ -362,6 +408,7 @@ Write(`${sessionFolder}/plan.json`, JSON.stringify(plan, null, 2))
```javascript
Task(
subagent_type="cli-lite-planning-agent",
run_in_background=false,
description="Generate detailed implementation plan",
prompt=`
Generate implementation plan and write plan.json.
@@ -391,23 +438,9 @@ ${JSON.stringify(clarificationContext) || "None"}
${complexity}
## Requirements
Generate plan.json following the schema obtained above. Key constraints:
- tasks: 2-7 structured tasks (**group by feature/module, NOT by file**)
- _metadata.exploration_angles: ${JSON.stringify(manifest.explorations.map(e => e.angle))}
## Task Grouping Rules
1. **Group by feature**: All changes for one feature = one task (even if 3-5 files)
@@ -419,10 +452,10 @@ Generate plan.json with:
7. **Prefer parallel**: Most tasks should be independent (no depends_on)
## Execution
1. Read schema file (cat command above)
2. Execute CLI planning using Gemini (Qwen fallback)
3. Read ALL exploration files for comprehensive context
4. Synthesize findings and generate plan following schema
5. Write JSON: Write('${sessionFolder}/plan.json', jsonContent)
6. Return brief completion summary
`
@@ -519,9 +552,13 @@ executionContext = {
explorationAngles: manifest.explorations.map(e => e.angle),
explorationManifest: manifest,
clarificationContext: clarificationContext || null,
executionMethod: userSelection.execution_method, // Global default; can be overridden by executorAssignments
codeReviewTool: userSelection.code_review_tool,
originalUserInput: task_description,
// Task-level executor assignment (takes precedence over the global executionMethod)
executorAssignments: executorAssignments, // { taskId: { executor, reason } }
session: {
id: sessionId,
folder: sessionFolder,
@@ -546,7 +583,7 @@ SlashCommand(command="/workflow:lite-execute --in-memory")
## Session Folder Structure
```
.workflow/.lite-plan/{task-slug}-{YYYY-MM-DD}/
├── exploration-{angle1}.json # Exploration angle 1
├── exploration-{angle2}.json # Exploration angle 2
├── exploration-{angle3}.json # Exploration angle 3 (if applicable)


@@ -1,7 +1,7 @@
---
name: plan
description: 5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs
argument-hint: "\"text description\"|file.md"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
---
@@ -61,7 +61,7 @@ Phase 2: Context Gathering
├─ Tasks attached: Analyze structure → Identify integration → Generate package
└─ Output: contextPath + conflict_risk
Phase 3: Conflict Resolution
└─ Decision (conflict_risk check):
├─ conflict_risk ≥ medium → Execute /workflow:tools:conflict-resolution
│ ├─ Tasks attached: Detect conflicts → Present to user → Apply strategies
@@ -69,7 +69,7 @@ Phase 3: Conflict Resolution (conditional)
└─ conflict_risk < medium → Skip to Phase 4
Phase 4: Task Generation
└─ /workflow:tools:task-generate-agent --session sessionId
└─ Output: IMPL_PLAN.md, task JSONs, TODO_LIST.md
Return:
@@ -168,7 +168,7 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[st
---
### Phase 3: Conflict Resolution
**Trigger**: Only execute when context-package.json indicates conflict_risk is "medium" or "high"
@@ -185,10 +185,10 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
**Parse Output**:
- Extract: Execution status (success/skipped/failed)
- Verify: conflict-resolution.json file path (if executed)
**Validation**:
- File `.workflow/active/[sessionId]/.process/conflict-resolution.json` exists (if executed)
**Skip Behavior**:
- If conflict_risk is "none" or "low", skip directly to Phase 3.5
@@ -273,15 +273,10 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
**Step 4.1: Dispatch** - Generate implementation plan and task JSONs
```javascript
SlashCommand(command="/workflow:tools:task-generate-agent --session [sessionId]")
```
**CLI Execution Note**: CLI tool usage is now determined semantically by action-planning-agent based on user's task description. If user specifies "use Codex/Gemini/Qwen for X", the agent embeds `command` fields in relevant `implementation_approach` steps.
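For example, if the user asked to "use Gemini to analyze the auth module first", the generated task JSON might embed a command in the relevant step (a hypothetical fragment; field names beyond `implementation_approach` and `command` are illustrative):
```json
{
  "implementation_approach": [
    {
      "step": "Analyze existing auth module patterns",
      "command": "ccw cli -p \"Analyze auth module structure\" --tool gemini --mode analysis"
    },
    { "step": "Implement token refresh endpoint" }
  ]
}
```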
**Input**: `sessionId` from Phase 1
@@ -423,7 +418,7 @@ Phase 3: conflict-resolution [AUTO-TRIGGERED if conflict_risk ≥ medium]
↓ Output: Modified brainstorm artifacts (NO report file)
↓ Skip if conflict_risk is none/low → proceed directly to Phase 4
Phase 4: task-generate-agent --session sessionId
↓ Input: sessionId + resolved brainstorm artifacts + session memory
↓ Output: IMPL_PLAN.md, task JSONs, TODO_LIST.md
@@ -502,11 +497,9 @@ Return summary to user
- Parse context path from Phase 2 output, store in memory
- **Extract conflict_risk from context-package.json**: Determine Phase 3 execution
- **If conflict_risk ≥ medium**: Launch Phase 3 conflict-resolution with sessionId and contextPath
- Wait for Phase 3 to finish executing (if executed), verify conflict-resolution.json created
- **If conflict_risk is none/low**: Skip Phase 3, proceed directly to Phase 4
- **Build Phase 4 command**: `/workflow:tools:task-generate-agent --session [sessionId]`
- Pass session ID to Phase 4 command
- Verify all Phase 4 outputs
- Update TodoWrite after each phase (dynamically adjust for Phase 3 presence)


@@ -46,8 +46,7 @@ Automated fix orchestrator with **two-phase architecture**: AI-powered planning
1. **Intelligent Planning**: AI-powered analysis identifies optimal grouping and execution strategy
2. **Multi-stage Coordination**: Supports complex parallel + serial execution with dependency management
3. **Conservative Safety**: Mandatory test verification with automatic rollback on failure
4. **Resume Support**: Checkpoint-based recovery for interrupted sessions
### Orchestrator Boundary (CRITICAL)
- **ONLY command** for automated review finding fixes
@@ -59,14 +58,14 @@ Automated fix orchestrator with **two-phase architecture**: AI-powered planning
```
Phase 1: Discovery & Initialization
└─ Validate export file, create fix session structure, initialize state files
Phase 2: Planning Coordination (@cli-planning-agent)
├─ Analyze findings for patterns and dependencies
├─ Group by file + dimension + root cause similarity
├─ Determine execution strategy (parallel/serial/hybrid)
├─ Generate fix timeline with stages
└─ Output: fix-plan.json
Phase 3: Execution Orchestration (Stage-based)
For each timeline stage:
@@ -198,12 +197,10 @@ if (result.passRate < 100%) {
- Session creation: Generate fix-session-id (`fix-{timestamp}`)
- Directory structure: Create `{review-dir}/fixes/{fix-session-id}/` with subdirectories
- State files: Initialize active-fix-session.json (session marker)
- TodoWrite initialization: Set up 4-phase tracking
**Phase 2: Planning Coordination**
- Launch @cli-planning-agent with findings data and project context
- Validate fix-plan.json output (schema conformance, includes metadata with session status)
- Load plan into memory for execution phase
- TodoWrite update: Mark planning complete, start execution
@@ -216,7 +213,6 @@ if (result.passRate < 100%) {
- Assign agent IDs (agents update their fix-progress-{N}.json)
- Handle agent failures gracefully (mark group as failed, continue)
- Advance to next stage only when current stage complete
**Phase 4: Completion & Aggregation**
- Collect final status from all fix-progress-{N}.json files
@@ -224,7 +220,7 @@ if (result.passRate < 100%) {
- Update fix-history.json with new session entry
- Remove active-fix-session.json
- TodoWrite completion: Mark all phases done
- Output summary to user
**Phase 5: Session Completion (Optional)**
- If all findings fixed successfully (no failures):
@@ -234,53 +230,12 @@ if (result.passRate < 100%) {
- Output: "Some findings failed. Review fix-summary.md before completing session."
- Do NOT auto-complete session
### Output File Structure
```
.workflow/active/WFS-{session-id}/.review/
├── fix-export-{timestamp}.json # Exported findings (input)
└── fixes/{fix-session-id}/
├── fix-plan.json # Planning agent output (execution plan with metadata)
├── fix-progress-1.json # Group 1 progress (planning agent init → agent updates)
├── fix-progress-2.json # Group 2 progress (planning agent init → agent updates)
@@ -291,10 +246,8 @@ echo " (Press Ctrl+C to stop server when done)"
```
**File Producers**:
- **Planning Agent**: `fix-plan.json` (with metadata), all `fix-progress-*.json` (initial state)
- **Execution Agents**: Update assigned `fix-progress-{N}.json` in real-time
### Agent Invocation Template
@@ -347,7 +300,7 @@ For each group (G1, G2, G3, ...), generate fix-progress-{N}.json following templ
- Flow control: Empty implementation_approach array
- Errors: Empty array
**CRITICAL**: Ensure complete template structure - all fields must be present.
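An illustrative skeleton of that structure (field names inferred from this document, not the authoritative template):
```json
{
  "group_id": "G1",
  "group_name": "auth-module null checks",
  "status": "pending",
  "findings": [],
  "implementation_approach": [],
  "errors": []
}
```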
## Analysis Requirements
@@ -419,7 +372,7 @@ Task({
description: `Fix ${group.findings.length} issues: ${group.group_name}`,
prompt: `
## Task Objective
Execute fixes for code review findings in group ${group.group_id}. Update progress file in real-time with flow control tracking.
## Assignment
- Group ID: ${group.group_id}
@@ -549,7 +502,6 @@ When all findings processed:
### Progress File Updates
- **MUST update after every significant action** (before/after each step)
- **Always maintain complete structure** - never write partial updates
- **Use ISO 8601 timestamps** - e.g., "2025-01-25T14:36:00Z"
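One way to honor both the atomicity and complete-structure rules is a write-then-rename update (a sketch, assuming Node.js; `writeProgressAtomically` is a hypothetical helper):
```javascript
const fs = require('fs')

// Write the full progress object to a temp file, then rename it over the
// target. rename() is atomic on POSIX filesystems, so concurrent readers
// never observe a half-written JSON file.
function writeProgressAtomically(path, progress) {
  const tmp = `${path}.tmp`
  fs.writeFileSync(tmp, JSON.stringify(progress, null, 2))
  fs.renameSync(tmp, path)
}
```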
@@ -638,9 +590,17 @@ TodoWrite({
1. **Trust AI Planning**: Planning agent's grouping and execution strategy are based on dependency analysis
2. **Conservative Approach**: Test verification is mandatory - no fixes kept without passing tests
3. **Parallel Efficiency**: Default 3 concurrent agents balances speed and resource usage
4. **Resume Support**: Fix sessions can resume from checkpoints after interruption
5. **Manual Review**: Always review failed fixes manually - may require architectural changes
6. **Incremental Fixing**: Start with small batches (5-10 findings) before large-scale fixes
## Related Commands
### View Fix Progress
Use `ccw view` to open the workflow dashboard in browser:
```bash
ccw view
```


@@ -51,14 +51,12 @@ Independent multi-dimensional code review orchestrator with **hybrid parallel-it
2. **Session-Integrated**: Review results tracked within workflow session for unified management
3. **Comprehensive Coverage**: Same 7 specialized dimensions as session review
4. **Intelligent Prioritization**: Automatic identification of critical issues and cross-cutting concerns
5. **Unified Archive**: Review results archived with session for historical reference
### Orchestrator Boundary (CRITICAL)
- **ONLY command** for independent multi-dimensional module review
- Manages: dimension coordination, aggregation, iteration control, progress tracking
- Delegates: Code exploration and analysis to @cli-explore-agent, dimension-specific reviews via Deep Scan mode
## How It Works
@@ -66,7 +64,7 @@ Independent multi-dimensional code review orchestrator with **hybrid parallel-it
```
Phase 1: Discovery & Initialization
└─ Resolve file patterns, validate paths, initialize state, create output structure
Phase 2: Parallel Reviews (for each dimension)
├─ Launch 7 review agents simultaneously
@@ -90,7 +88,7 @@ Phase 4: Iterative Deep-Dive (optional)
└─ Loop until no critical findings OR max iterations
Phase 5: Completion
└─ Finalize review-progress.json
```
### Agent Roles
@@ -188,8 +186,8 @@ const CATEGORIES = {
**Step 1: Session Creation**
```javascript
// Create workflow session for this review (type: review)
SlashCommand(command="/workflow:session:start --type review \"Code review for [target_pattern]\"")
// Parse output
const sessionId = output.match(/SESSION_ID: (WFS-[^\s]+)/)[1];
@@ -219,37 +217,9 @@ done
**Step 4: Initialize Review State**
- State initialization: Create `review-state.json` with metadata, dimensions, max_iterations, resolved_files (merged metadata + state)
- Progress tracking: Create `review-progress.json` for progress tracking
**Step 5: TodoWrite Initialization**
- Set up progress tracking with hierarchical structure
- Mark Phase 1 completed, Phase 2 in_progress
@@ -280,7 +250,6 @@ echo " (Press Ctrl+C to stop server when done)"
- Finalize review-progress.json with completion statistics
- Update review-state.json with completion_time and phase=complete
- TodoWrite completion: Mark all tasks done
@@ -301,12 +270,11 @@ echo " (Press Ctrl+C to stop server when done)"
├── iterations/ # Deep-dive results
│ ├── iteration-1-finding-{uuid}.json
│ └── iteration-2-finding-{uuid}.json
└── reports/ # Human-readable reports
    ├── security-analysis.md
    ├── security-cli-output.txt
    ├── deep-dive-1-{uuid}.md
    └── ...
```
**Session Context**:
@@ -423,6 +391,7 @@ echo " (Press Ctrl+C to stop server when done)"
```javascript
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description=`Execute ${dimension} review analysis via Deep Scan`,
prompt=`
## Task Objective
@@ -508,6 +477,7 @@ Task(
```javascript
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description=`Deep-dive analysis for critical finding: ${findingTitle} via Dependency Map + Deep Scan`,
prompt=`
## Task Objective
@@ -772,23 +742,25 @@ TodoWrite({
3. **Use Glob Wisely**: `src/auth/**` is more efficient than `src/**` with lots of irrelevant files
4. **Trust Aggregation Logic**: Auto-selection based on proven heuristics
5. **Monitor Logs**: Check reports/ directory for CLI analysis insights
## Related Commands
### View Review Progress
Use `ccw view` to open the review dashboard in browser:
```bash
ccw view
```
### Automated Fix Workflow
After completing a module review, use the generated findings JSON for automated fixing:
```bash
# Step 1: Complete review (this command)
/workflow:review-module-cycle src/auth/**
# Step 2: Run automated fixes using dimension findings
/workflow:review-fix .workflow/active/WFS-{session-id}/.review/
```
See `/workflow:review-fix` for automated fixing with smart grouping, parallel execution, and test verification.


@@ -45,13 +45,11 @@ Session-based multi-dimensional code review orchestrator with **hybrid parallel-
1. **Comprehensive Coverage**: 7 specialized dimensions analyze all quality aspects simultaneously
2. **Intelligent Prioritization**: Automatic identification of critical issues and cross-cutting concerns
3. **Actionable Insights**: Deep-dive iterations provide step-by-step remediation plans
### Orchestrator Boundary (CRITICAL)
- **ONLY command** for comprehensive multi-dimensional review
- Manages: dimension coordination, aggregation, iteration control, progress tracking
- Delegates: Code exploration and analysis to @cli-explore-agent, dimension-specific reviews via Deep Scan mode
## How It Works
@@ -59,7 +57,7 @@ Session-based multi-dimensional code review orchestrator with **hybrid parallel-
```
Phase 1: Discovery & Initialization
└─ Validate session, initialize state, create output structure
Phase 2: Parallel Reviews (for each dimension)
├─ Launch 7 review agents simultaneously
@@ -83,7 +81,7 @@ Phase 4: Iterative Deep-Dive (optional)
└─ Loop until no critical findings OR max iterations
Phase 5: Completion
└─ Finalize review-progress.json
```
### Agent Roles
@@ -199,36 +197,9 @@ git log --since="${sessionCreatedAt}" --name-only --pretty=format: | sort -u
**Step 5: Initialize Review State**
- State initialization: Create `review-state.json` with metadata, dimensions, max_iterations (merged metadata + state)
- Progress tracking: Create `review-progress.json` for progress tracking
**Step 6: TodoWrite Initialization**
- Set up progress tracking with hierarchical structure
- Mark Phase 1 completed, Phase 2 in_progress
@@ -259,7 +230,6 @@ echo " (Press Ctrl+C to stop server when done)"
- Finalize review-progress.json with completion statistics
- Update review-state.json with completion_time and phase=complete
- TodoWrite completion: Mark all tasks done
@@ -280,12 +250,11 @@ echo " (Press Ctrl+C to stop server when done)"
├── iterations/ # Deep-dive results
│ ├── iteration-1-finding-{uuid}.json
│ └── iteration-2-finding-{uuid}.json
└── reports/ # Human-readable reports
    ├── security-analysis.md
    ├── security-cli-output.txt
    ├── deep-dive-1-{uuid}.md
    └── ...
```
**Session Context**:
@@ -432,6 +401,7 @@ echo " (Press Ctrl+C to stop server when done)"
```javascript
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description=`Execute ${dimension} review analysis via Deep Scan`,
prompt=`
## Task Objective
@@ -518,6 +488,7 @@ Task(
```javascript
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description=`Deep-dive analysis for critical finding: ${findingTitle} via Dependency Map + Deep Scan`,
prompt=`
## Task Objective
@@ -782,23 +753,25 @@ TodoWrite({
2. **Parallel Execution**: ~60 minutes for full initial review (7 dimensions)
3. **Trust Aggregation Logic**: Auto-selection based on proven heuristics
4. **Monitor Logs**: Check reports/ directory for CLI analysis insights
## Related Commands
### View Review Progress
Use `ccw view` to open the review dashboard in browser:
```bash
ccw view
```
### Automated Fix Workflow
After completing a review, use the generated findings JSON for automated fixing:
```bash
# Step 1: Complete review (this command)
/workflow:review-session-cycle
# Step 2: Run automated fixes using dimension findings
/workflow:review-fix .workflow/active/WFS-{session-id}/.review/
```
See `/workflow:review-fix` for automated fixing with smart grouping, parallel execution, and test verification.


@@ -112,11 +112,15 @@ After bash validation, the model takes control to:
1. **Load Context**: Read completed task summaries and changed files
```bash
# Load implementation summaries (iterate through .summaries/ directory)
for summary in .workflow/active/${sessionId}/.summaries/*.md; do
  [ -f "$summary" ] && cat "$summary"
done
# Load test results (if available)
for test_summary in .workflow/active/${sessionId}/.summaries/TEST-FIX-*.md; do
  [ -f "$test_summary" ] && cat "$test_summary"
done
# Get changed files
git log --since="$(cat .workflow/active/${sessionId}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u
@@ -132,51 +136,53 @@ After bash validation, the model takes control to:
```
- Use Gemini for security analysis:
```bash
ccw cli -p "
PURPOSE: Security audit of completed implementation
TASK: Review code for security vulnerabilities, insecure patterns, auth/authz issues
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
EXPECTED: Security findings report with severity levels
RULES: Focus on OWASP Top 10, authentication, authorization, data validation, injection risks
" --approval-mode yolo
" --tool gemini --mode write --cd .workflow/active/${sessionId}
```
**Architecture Review** (`--type=architecture`):
- Use Qwen for architecture analysis:
```bash
ccw cli -p "
PURPOSE: Architecture compliance review
TASK: Evaluate adherence to architectural patterns, identify technical debt, review design decisions
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
EXPECTED: Architecture assessment with recommendations
RULES: Check for patterns, separation of concerns, modularity, scalability
" --approval-mode yolo
" --tool qwen --mode write --cd .workflow/active/${sessionId}
```
**Quality Review** (`--type=quality`):
- Use Gemini for code quality:
```bash
ccw cli -p "
PURPOSE: Code quality and best practices review
TASK: Assess code readability, maintainability, adherence to best practices
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
EXPECTED: Quality assessment with improvement suggestions
RULES: Check for code smells, duplication, complexity, naming conventions
" --approval-mode yolo
" --tool gemini --mode write --cd .workflow/active/${sessionId}
```
**Action Items Review** (`--type=action-items`):
- Verify all requirements and acceptance criteria met:
```bash
# Load task requirements and acceptance criteria
for task_file in .workflow/active/${sessionId}/.task/*.json; do
cat "$task_file" | jq -r '
"Task: " + .id + "\n" +
"Requirements: " + (.context.requirements | join(", ")) + "\n" +
"Acceptance: " + (.context.acceptance | join(", "))
'
done
# Check implementation summaries against requirements
ccw cli -p "
PURPOSE: Verify all requirements and acceptance criteria are met
TASK: Cross-check implementation summaries against original requirements
CONTEXT: @.task/IMPL-*.json,.summaries/IMPL-*.md,../.. @../../CLAUDE.md
@@ -190,7 +196,7 @@ After bash validation, the model takes control to:
- Verify all acceptance criteria are met
- Flag any incomplete or missing action items
- Assess deployment readiness
" --approval-mode yolo
" --tool gemini --mode write --cd .workflow/active/${sessionId}
```


@@ -8,493 +8,146 @@ examples:
# Complete Workflow Session (/workflow:session:complete)
## Overview
Mark the currently active workflow session as complete, archive it, and update manifests.
## Usage
```bash
/workflow:session:complete # Complete current active session
/workflow:session:complete --detailed # Show detailed completion summary
```
## Pre-defined Commands
```bash
# Phase 1: Find active session
SESSION_PATH=$(find .workflow/active/ -maxdepth 1 -name "WFS-*" -type d | head -1)
SESSION_ID=$(basename "$SESSION_PATH")
# Phase 3: Move to archive
mkdir -p .workflow/archives/
mv .workflow/active/$SESSION_ID .workflow/archives/$SESSION_ID
# Cleanup marker
rm -f .workflow/archives/$SESSION_ID/.archiving
```
## Key Files to Read
**For manifest.json generation**, read ONLY these files:
| File | Extract |
|------|---------|
| `$SESSION_PATH/workflow-session.json` | session_id, description, started_at, status |
| `$SESSION_PATH/IMPL_PLAN.md` | title (first # heading), description (first paragraph) |
| `$SESSION_PATH/.tasks/*.json` | count files |
| `$SESSION_PATH/.summaries/*.md` | count files |
| `$SESSION_PATH/.review/dimensions/*.json` | count + findings summary (optional) |
## Execution Flow
### Phase 1: Find Session (2 commands)
```bash
# 1. Find and extract session
SESSION_PATH=$(find .workflow/active/ -maxdepth 1 -name "WFS-*" -type d | head -1)
SESSION_ID=$(basename "$SESSION_PATH")
# 2. Check/create archiving marker
test -f "$SESSION_PATH/.archiving" && echo "RESUMING" || touch "$SESSION_PATH/.archiving"
```
**Output**: `SESSION_ID` = e.g., `WFS-auth-feature`
### Phase 2: Generate Manifest Entry (Read-only)
Read the key files above, then build this structure:
```json
{
  "session_id": "<from workflow-session.json>",
  "description": "<from workflow-session.json>",
  "archived_at": "<current ISO timestamp>",
  "archive_path": ".workflow/archives/<SESSION_ID>",
  "metrics": {
    "duration_hours": "<(completed_at - started_at) / 3600000>",
    "tasks_completed": "<count .tasks/*.json>",
    "summaries_generated": "<count .summaries/*.md>",
    "review_metrics": {
      "dimensions_analyzed": "<count .review/dimensions/*.json>",
      "total_findings": "<sum from dimension JSONs>"
    }
  },
  "tags": ["<3-5 keywords from IMPL_PLAN.md>"],
  "lessons": {
    "successes": ["<key wins>"],
    "challenges": ["<difficulties>"],
    "watch_patterns": ["<patterns to monitor>"]
  }
}
```
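A minimal sketch of how the computed fields above could be derived (assuming Node.js; the session path is illustrative):
```javascript
const fs = require('fs')
const SESSION_PATH = '.workflow/active/WFS-auth-feature' // illustrative

// duration_hours: difference of ISO timestamps, milliseconds → hours
const session = JSON.parse(fs.readFileSync(`${SESSION_PATH}/workflow-session.json`, 'utf8'))
const durationHours =
  (Date.parse(session.completed_at) - Date.parse(session.started_at)) / 3600000

// tasks_completed / summaries_generated: simple file counts
const countFiles = (dir, suffix) =>
  fs.existsSync(dir) ? fs.readdirSync(dir).filter(f => f.endsWith(suffix)).length : 0
const tasksCompleted = countFiles(`${SESSION_PATH}/.tasks`, '.json')
const summariesGenerated = countFiles(`${SESSION_PATH}/.summaries`, '.md')
```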
#### Step 4.4: Output Confirmation
**Lessons Generation**: Use gemini with `~/.claude/workflows/cli-templates/prompts/archive/analysis-simple.txt`
```
✓ Feature "${title}" added to project registry
ID: ${featureId}
Session: ${sessionId}
Location: .workflow/project.json
### Phase 3: Atomic Commit (4 commands)
```bash
# 1. Create archive directory
mkdir -p .workflow/archives/
# 2. Move session
mv .workflow/active/$SESSION_ID .workflow/archives/$SESSION_ID
# 3. Update manifest.json (Read → Append → Write)
# Read: .workflow/archives/manifest.json (or [])
# Append: archive_entry from Phase 2
# Write: updated JSON
# 4. Remove marker
rm -f .workflow/archives/$SESSION_ID/.archiving
```
**Error Handling**:
- If project.json malformed: Output error, skip update
- If feature_metadata missing from agent result: Skip Phase 4
- If extraction fails: Use minimal defaults
**Output**:
```
✓ Session "$SESSION_ID" archived successfully
Location: .workflow/archives/$SESSION_ID/
Manifest: Updated with N total sessions
```
**Phase 4 Total Commands**: 1 bash read + JSON manipulation
### Phase 4: Update project.json (Optional)
**Skip if**: `.workflow/project.json` doesn't exist
```bash
# Check
test -f .workflow/project.json || echo "SKIP"
```
**If exists**, add feature entry:
```json
{
"id": "<slugified title>",
"title": "<from IMPL_PLAN.md>",
"status": "completed",
"tags": ["<from Phase 2>"],
"timeline": { "implemented_at": "<date>" },
"traceability": { "session_id": "<SESSION_ID>", "archive_path": "<path>" }
}
```
**Output**:
```
✓ Feature added to project registry
```
## Quick Reference
```
Phase 1: find session → create .archiving marker
Phase 2: read key files → build manifest entry (no writes)
Phase 3: mkdir → mv → update manifest.json → rm marker
Phase 4: update project.json features array (optional)
```
## Error Recovery
| Phase | Symptom | Recovery |
|-------|---------|----------|
| 1 | No active session | `No active session found` |
| 2 | Analysis fails | Remove marker: `rm $SESSION_PATH/.archiving`, retry |
| 3 | Move fails | Session safe in active/, fix issue, retry |
| 3 | Manifest fails | Session in archives/, manually add entry, remove marker |
### If Agent Fails (Phase 2)
**Symptoms**:
- Agent returns `{"status": "error", ...}`
- Agent crashes or times out
- Analysis incomplete
**Recovery Steps**:
```bash
# Session still in .workflow/active/WFS-session-name
# Remove archiving marker
bash(rm .workflow/active/WFS-session-name/.archiving)
```
**User Notification**:
```
ERROR: Session archival failed during analysis phase
Reason: [error message from agent]
Session remains active in: .workflow/active/WFS-session-name
Recovery:
1. Fix any issues identified in error message
2. Retry: /workflow:session:complete
Session state: SAFE (no changes committed)
```
### If Move Fails (Phase 3)
**Symptoms**:
- `mv` command fails
- Permission denied
- Disk full
**Recovery Steps**:
```bash
# Archiving marker still present
# Session still in .workflow/active/ (move failed)
# No manifest updated yet
```
**User Notification**:
```
ERROR: Session archival failed during move operation
Reason: [mv error message]
Session remains in: .workflow/active/WFS-session-name
Recovery:
1. Fix filesystem issues (permissions, disk space)
2. Retry: /workflow:session:complete
- System will detect .archiving marker
- Will resume from Phase 2 (agent analysis)
Session state: SAFE (analysis complete, ready to retry move)
```
### If Manifest Update Fails (Phase 3)
**Symptoms**:
- JSON parsing error
- Write permission denied
- Session moved but manifest not updated
**Recovery Steps**:
```bash
# Session moved to .workflow/archives/WFS-session-name
# Manifest NOT updated
# Archiving marker still present in archived location
```
**User Notification**:
```
ERROR: Session archived but manifest update failed
Reason: [error message]
Session location: .workflow/archives/WFS-session-name
Recovery:
1. Fix manifest.json issues (syntax, permissions)
2. Manual manifest update:
- Add archive entry from agent output
- Remove .archiving marker: rm .workflow/archives/WFS-session-name/.archiving
Session state: PARTIALLY COMPLETE (session archived, manifest needs update)
```
## Workflow Execution Strategy
### Transactional Four-Phase Approach
**Phase 1: Pre-Archival Preparation** (Marker creation)
- Find active session and extract name
- Check for existing `.archiving` marker (resume detection)
- Create `.archiving` marker if new
- **No data processing** - just state tracking
- **Total**: 2-3 bash commands (find + marker check/create)
**Phase 2: Agent Analysis** (Read-only data processing)
- Extract all session data from active location
- Count tasks and summaries
- Extract review data if .review/ exists (dimension results, findings, fix results)
- Generate lessons learned analysis (including review-specific lessons if applicable)
- Extract feature metadata from IMPL_PLAN.md
- Build complete archive + feature metadata package (with review_metrics if applicable)
- **No file modifications** - pure analysis
- **Total**: 1 agent invocation
**Phase 3: Atomic Commit** (Transactional file operations)
- Create archive directory
- Move session to archive location
- Update manifest.json with archive entry
- Remove `.archiving` marker
- **All-or-nothing**: Either all succeed or session remains in safe state
- **Total**: 4 bash commands + JSON manipulation
**Phase 4: Project Registry Update** (Optional feature tracking)
- Check project.json exists
- Use feature metadata from Phase 2 agent result
- Build feature object with complete traceability
- Update project statistics
- **Independent**: Can fail without affecting archival
- **Total**: 1 bash read + JSON manipulation
### Transactional Guarantees
**State Consistency**:
- Session NEVER in inconsistent state
- `.archiving` marker enables safe resume
- Agent failure leaves session in recoverable state
- Move/manifest operations grouped in Phase 3
**Failure Isolation**:
- Phase 1 failure: No changes made
- Phase 2 failure: Session still active, can retry
- Phase 3 failure: Clear error state, manual recovery documented
- Phase 4 failure: Does not affect archival success
**Resume Capability**:
- Detect interrupted archival via `.archiving` marker
- Resume from Phase 2 (skip marker creation)
- Idempotent operations (safe to retry)
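A minimal sketch of the resume check these guarantees rely on, assuming the Phase 1 marker semantics; `runPhase2` stands in for the agent-analysis entry point:
```javascript
// Sketch: detect an interrupted archival and resume from Phase 2
const marker = `${SESSION_PATH}/.archiving`;
const state = Bash({
  command: `test -f ${marker} && echo RESUMING || echo NEW`,
  description: "Check archiving marker"
}).trim();
if (state === "RESUMING") {
  runPhase2(); // hypothetical: skip marker creation, re-run read-only analysis
} else {
  Bash({ command: `touch ${marker}`, description: "Create archiving marker" });
}
```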

View File

@@ -1,11 +1,13 @@
---
name: start
description: Discover existing sessions or start new workflow session with intelligent session management and conflict detection
argument-hint: [--auto|--new] [optional: task description for new session]
argument-hint: [--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]
examples:
- /workflow:session:start
- /workflow:session:start --auto "implement OAuth2 authentication"
- /workflow:session:start --new "fix login bug"
- /workflow:session:start --type review "Code review for auth module"
- /workflow:session:start --type tdd --auto "implement user authentication"
- /workflow:session:start --type test --new "test payment flow"
---
# Start Workflow Session (/workflow:session:start)
@@ -17,6 +19,23 @@ Manages workflow sessions with three operation modes: discovery (manual), auto (
1. **Project-level initialization** (first-time only): Creates `.workflow/project.json` for feature registry
2. **Session-level initialization** (always): Creates session directory structure
## Session Types
The `--type` parameter classifies sessions for CCW dashboard organization:
| Type | Description | Default For |
|------|-------------|-------------|
| `workflow` | Standard implementation (default) | `/workflow:plan` |
| `review` | Code review sessions | `/workflow:review-module-cycle` |
| `tdd` | TDD-based development | `/workflow:tdd-plan` |
| `test` | Test generation/fix sessions | `/workflow:test-fix-gen` |
| `docs` | Documentation sessions | `/memory:docs` |
**Validation**: If `--type` is provided with invalid value, return error:
```
ERROR: Invalid session type. Valid types: workflow, review, tdd, test, docs
```
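A minimal validation sketch, assuming the parsed arguments are available as `args` (the name is illustrative):
```javascript
// Sketch: validate --type before creating the session
const VALID_TYPES = ['workflow', 'review', 'tdd', 'test', 'docs'];
const sessionType = args.type || 'workflow'; // default when --type is omitted
if (!VALID_TYPES.includes(sessionType)) {
  throw new Error(`Invalid session type. Valid types: ${VALID_TYPES.join(', ')}`);
}
```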
## Step 0: Initialize Project State (First-time Only)
**Executed before all modes** - Ensures project-level state file exists by calling `/workflow:init`.
@@ -86,8 +105,8 @@ bash(mkdir -p .workflow/active/WFS-implement-oauth2-auth/.process)
bash(mkdir -p .workflow/active/WFS-implement-oauth2-auth/.task)
bash(mkdir -p .workflow/active/WFS-implement-oauth2-auth/.summaries)
# Create metadata
bash(echo '{"session_id":"WFS-implement-oauth2-auth","project":"implement OAuth2 auth","status":"planning"}' > .workflow/active/WFS-implement-oauth2-auth/workflow-session.json)
# Create metadata (include type field, default to "workflow" if not specified)
bash(echo '{"session_id":"WFS-implement-oauth2-auth","project":"implement OAuth2 auth","status":"planning","type":"workflow","created_at":"2024-12-04T08:00:00Z"}' > .workflow/active/WFS-implement-oauth2-auth/workflow-session.json)
```
**Output**: `SESSION_ID: WFS-implement-oauth2-auth`
@@ -143,7 +162,8 @@ bash(mkdir -p .workflow/active/WFS-fix-login-bug/.summaries)
### Step 3: Create Metadata
```bash
bash(echo '{"session_id":"WFS-fix-login-bug","project":"fix login bug","status":"planning"}' > .workflow/active/WFS-fix-login-bug/workflow-session.json)
# Include type field from --type parameter (default: "workflow")
bash(echo '{"session_id":"WFS-fix-login-bug","project":"fix login bug","status":"planning","type":"workflow","created_at":"2024-12-04T08:00:00Z"}' > .workflow/active/WFS-fix-login-bug/workflow-session.json)
```
**Output**: `SESSION_ID: WFS-fix-login-bug`

View File

@@ -1,357 +0,0 @@
---
name: workflow:status
description: Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view
argument-hint: "[optional: --project|task-id|--validate|--dashboard]"
---
# Workflow Status Command (/workflow:status)
## Overview
Generates on-demand views from project and session data. Supports multiple modes:
1. **Project Overview** (`--project`): Shows completed features and project statistics
2. **Workflow Tasks** (default): Shows current session task progress
3. **HTML Dashboard** (`--dashboard`): Generates interactive HTML task board with active and archived sessions
No synchronization needed - all views are calculated from current JSON state.
## Usage
```bash
/workflow:status # Show current workflow session overview
/workflow:status --project # Show project-level feature registry
/workflow:status impl-1 # Show specific task details
/workflow:status --validate # Validate workflow integrity
/workflow:status --dashboard # Generate HTML dashboard board
```
## Execution Process
```
Input Parsing:
└─ Decision (mode detection):
├─ --project flag → Project Overview Mode
├─ --dashboard flag → Dashboard Mode
├─ task-id argument → Task Details Mode
└─ No flags → Workflow Session Mode (default)
Project Overview Mode:
├─ Check project.json exists
├─ Read project data
├─ Parse and display overview + features
└─ Show recent archived sessions
Workflow Session Mode (default):
├─ Find active session
├─ Load session data
├─ Scan task files
└─ Display task progress
Dashboard Mode:
├─ Collect active sessions
├─ Collect archived sessions
├─ Generate HTML from template
└─ Write dashboard.html
```
## Implementation Flow
### Mode Selection
**Check for --project flag**:
- If `--project` flag present → Execute **Project Overview Mode**
- Otherwise → Execute **Workflow Session Mode** (default)
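A compact sketch of the full dispatch implied by the usage table (argument handling is illustrative):
```javascript
// Sketch: map the raw argument to one of the status modes
function detectMode(argument) {
  const arg = (argument || '').trim();
  if (arg === '--project') return 'project-overview';
  if (arg === '--dashboard') return 'dashboard';
  if (arg === '--validate') return 'validate';
  if (arg.length > 0) return 'task-details'; // e.g., "impl-1"
  return 'workflow-session';                 // default
}
```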
## Project Overview Mode
### Step 1: Check Project State
```bash
bash(test -f .workflow/project.json && echo "EXISTS" || echo "NOT_FOUND")
```
**If NOT_FOUND**:
```
No project state found.
Run /workflow:session:start to initialize project.
```
### Step 2: Read Project Data
```bash
bash(cat .workflow/project.json)
```
### Step 3: Parse and Display
**Data Processing**:
```javascript
const projectData = JSON.parse(Read('.workflow/project.json'));
const features = projectData.features || [];
const stats = projectData.statistics || {};
const overview = projectData.overview || null;
// Sort features by implementation date (newest first)
const sortedFeatures = features.sort((a, b) =>
new Date(b.implemented_at) - new Date(a.implemented_at)
);
```
**Output Format** (with extended overview):
```
## Project: ${projectData.project_name}
Initialized: ${projectData.initialized_at}
${overview ? `
### Overview
${overview.description}
**Technology Stack**:
${overview.technology_stack.languages.map(l => `- ${l.name}${l.primary ? ' (primary)' : ''}: ${l.file_count} files`).join('\n')}
Frameworks: ${overview.technology_stack.frameworks.join(', ')}
**Architecture**:
Style: ${overview.architecture.style}
Patterns: ${overview.architecture.patterns.join(', ')}
**Key Components** (${overview.key_components.length}):
${overview.key_components.map(c => `- ${c.name} (${c.path})\n ${c.description}`).join('\n')}
**Metrics**:
- Files: ${overview.metrics.total_files}
- Lines of Code: ${overview.metrics.lines_of_code}
- Complexity: ${overview.metrics.complexity}
---
` : ''}
### Completed Features (${stats.total_features})
${sortedFeatures.map(f => `
- ${f.title} (${f.timeline?.implemented_at || f.implemented_at})
${f.description}
Tags: ${f.tags?.join(', ') || 'none'}
Session: ${f.traceability?.session_id || f.session_id}
Archive: ${f.traceability?.archive_path || 'unknown'}
${f.traceability?.commit_hash ? `Commit: ${f.traceability.commit_hash}` : ''}
`).join('\n')}
### Project Statistics
- Total Features: ${stats.total_features}
- Total Sessions: ${stats.total_sessions}
- Last Updated: ${stats.last_updated}
### Quick Access
- View session details: /workflow:status
- Archive query: jq '.archives[] | select(.session_id == "SESSION_ID")' .workflow/archives/manifest.json
- Documentation: .workflow/docs/${projectData.project_name}/
### Query Commands
# Find by tag
cat .workflow/project.json | jq '.features[] | select(.tags[] == "auth")'
# View archive
cat ${feature.traceability.archive_path}/IMPL_PLAN.md
# List all tags
cat .workflow/project.json | jq -r '.features[].tags[]' | sort -u
```
**Empty State**:
```
## Project: ${projectData.project_name}
Initialized: ${projectData.initialized_at}
No features completed yet.
Complete your first workflow session to add features:
1. /workflow:plan "feature description"
2. /workflow:execute
3. /workflow:session:complete
```
### Step 4: Show Recent Sessions (Optional)
```bash
# List 5 most recent archived sessions
bash(ls -1t .workflow/archives/WFS-* 2>/dev/null | head -5 | xargs -I {} basename {})
```
**Output**:
```
### Recent Sessions
- WFS-auth-system (archived)
- WFS-payment-flow (archived)
- WFS-user-dashboard (archived)
Use /workflow:session:complete to archive current session.
```
## Workflow Session Mode (Default)
### Step 1: Find Active Session
```bash
find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1
```
### Step 2: Load Session Data
```bash
cat .workflow/active/WFS-session/workflow-session.json
```
### Step 3: Scan Task Files
```bash
find .workflow/active/WFS-session/.task/ -name "*.json" -type f 2>/dev/null
```
### Step 4: Generate Task Status
```bash
cat .workflow/active/WFS-session/.task/impl-1.json | jq -r '.status'
```
### Step 5: Count Task Progress
```bash
find .workflow/active/WFS-session/.task/ -name "*.json" -type f | wc -l
find .workflow/active/WFS-session/.summaries/ -name "*.md" -type f 2>/dev/null | wc -l
```
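A sketch of how the Step 6 progress line could be derived from these counts and the per-task statuses (paths as in the commands above):
```javascript
// Sketch: compute "N/M tasks completed" for the overview
const taskFiles = bash(
  `find .workflow/active/WFS-session/.task/ -name '*.json' -type f`
).trim().split('\n').filter(Boolean);
const tasks = taskFiles.map(f => JSON.parse(Read(f)));
const completed = tasks.filter(t => t.status === 'completed').length;
console.log(`**Progress**: ${completed}/${tasks.length} tasks completed`);
```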
### Step 6: Display Overview
```markdown
# Workflow Overview
**Session**: WFS-session-name
**Progress**: 3/8 tasks completed
## Active Tasks
- [IN PROGRESS] impl-1: Current task in progress
- [ ] impl-2: Next pending task
## Completed Tasks
- [COMPLETED] impl-0: Setup completed
```
## Dashboard Mode (HTML Board)
### Step 1: Check for --dashboard flag
```bash
# If --dashboard flag present → Execute Dashboard Mode
```
### Step 2: Collect Workflow Data
**Collect Active Sessions**:
```bash
# Find all active sessions
find .workflow/active/ -name "WFS-*" -type d 2>/dev/null
# For each active session, read metadata and tasks
for session in $(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null); do
cat "$session/workflow-session.json"
find "$session/.task/" -name "*.json" -type f 2>/dev/null
done
```
**Collect Archived Sessions**:
```bash
# Find all archived sessions
find .workflow/archives/ -name "WFS-*" -type d 2>/dev/null
# Read manifest if exists
cat .workflow/archives/manifest.json 2>/dev/null
# For each archived session, read metadata
for archive in $(find .workflow/archives/ -name "WFS-*" -type d 2>/dev/null); do
cat "$archive/workflow-session.json" 2>/dev/null
# Count completed tasks
find "$archive/.task/" -name "*.json" -type f 2>/dev/null | wc -l
done
```
### Step 3: Process and Structure Data
**Build data structure for dashboard**:
```javascript
const dashboardData = {
activeSessions: [],
archivedSessions: [],
generatedAt: new Date().toISOString()
};
// Process active sessions
for (const activeSession of activeSessions) {
  const sessionData = JSON.parse(Read(`${activeSession}/workflow-session.json`));
  const tasks = [];
  // Load all tasks for this session
  for (const taskFile of find(`${activeSession}/.task/*.json`)) {
    const taskData = JSON.parse(Read(taskFile));
    tasks.push({
      task_id: taskData.task_id,
      title: taskData.title,
      status: taskData.status,
      type: taskData.type
    });
  }
  dashboardData.activeSessions.push({
    session_id: sessionData.session_id,
    project: sessionData.project,
    status: sessionData.status,
    created_at: sessionData.created_at || sessionData.initialized_at,
    tasks: tasks
  });
}
// Process archived sessions
for (const archivedSession of archivedSessions) {
  const sessionData = JSON.parse(Read(`${archivedSession}/workflow-session.json`));
  const taskCount = bash(`find ${archivedSession}/.task/ -name '*.json' | wc -l`);
  dashboardData.archivedSessions.push({
    session_id: sessionData.session_id,
    project: sessionData.project,
    archived_at: sessionData.completed_at || sessionData.archived_at,
    taskCount: parseInt(taskCount),
    archive_path: archivedSession
  });
}
```
### Step 4: Generate HTML from Template
**Load template and inject data**:
```javascript
// Read the HTML template
const template = Read("~/.claude/templates/workflow-dashboard.html");
// Prepare data for injection
const dataJson = JSON.stringify(dashboardData, null, 2);
// Replace placeholder with actual data
const htmlContent = template.replace('{{WORKFLOW_DATA}}', dataJson);
// Ensure .workflow directory exists
bash(mkdir -p .workflow);
```
### Step 5: Write HTML File
```javascript
// Write the generated HTML to .workflow/dashboard.html
Write({
  file_path: ".workflow/dashboard.html",
  content: htmlContent
})
```
### Step 6: Display Success Message
```markdown
Dashboard generated successfully!
Location: .workflow/dashboard.html
Open in browser:
file://$(pwd)/.workflow/dashboard.html
Features:
- 📊 Active sessions overview
- 📦 Archived sessions history
- 🔍 Search and filter
- 📈 Progress tracking
- 🎨 Dark/light theme
Refresh data: Re-run /workflow:status --dashboard
```

View File

@@ -1,7 +1,7 @@
---
name: tdd-plan
description: TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking
argument-hint: "[--cli-execute] \"feature description\"|file.md"
argument-hint: "\"feature description\"|file.md"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
---
@@ -11,9 +11,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
**This command is a pure orchestrator**: Dispatches 6 slash commands in sequence, parses outputs, passes context, and ensures complete TDD workflow creation with Red-Green-Refactor task generation.
**Execution Modes**:
- **Agent Mode** (default): Use `/workflow:tools:task-generate-tdd` (autonomous agent-driven)
- **CLI Mode** (`--cli-execute`): Use `/workflow:tools:task-generate-tdd --cli-execute` (Gemini/Qwen)
**CLI Tool Selection**: CLI tool usage is determined semantically from user's task description. Include "use Codex/Gemini/Qwen" in your request for CLI execution.
**Task Attachment Model**:
- SlashCommand dispatch **expands workflow** by attaching sub-tasks to current TodoWrite
@@ -46,7 +44,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
**Step 1.1: Dispatch** - Session discovery and initialization
```javascript
SlashCommand(command="/workflow:session:start --auto \"TDD: [structured-description]\"")
SlashCommand(command="/workflow:session:start --type tdd --auto \"TDD: [structured-description]\"")
```
**TDD Structured Format**:
@@ -166,10 +164,10 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
**Parse Output**:
- Extract: Execution status (success/skipped/failed)
- Verify: CONFLICT_RESOLUTION.md file path (if executed)
- Verify: conflict-resolution.json file path (if executed)
**Validation**:
- File `.workflow/active/[sessionId]/.process/CONFLICT_RESOLUTION.md` exists (if executed)
- File `.workflow/active/[sessionId]/.process/conflict-resolution.json` exists (if executed)
**Skip Behavior**:
- If conflict_risk is "none" or "low", skip directly to Phase 5
@@ -235,13 +233,11 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
**Step 5.1: Dispatch** - TDD task generation via action-planning-agent
```javascript
// Agent Mode (default)
SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId]")
// CLI Mode (--cli-execute flag)
SlashCommand(command="/workflow:tools:task-generate-tdd --session [sessionId] --cli-execute")
```
**Note**: CLI tool usage is determined semantically from user's task description.
**Parse**: Extract feature count, task count (not chain count - tasks now contain internal TDD cycles)
**Validate**:
@@ -406,7 +402,7 @@ TDD Workflow Orchestrator
│ ├─ Phase 4.1: Detect conflicts with CLI
│ ├─ Phase 4.2: Present conflicts to user
│ └─ Phase 4.3: Apply resolution strategies
│ └─ Returns: CONFLICT_RESOLUTION.md ← COLLAPSED
│ └─ Returns: conflict-resolution.json ← COLLAPSED
│ ELSE:
│ └─ Skip to Phase 5
@@ -454,8 +450,7 @@ Convert user input to TDD-structured format:
- `/workflow:tools:test-context-gather` - Phase 3: Analyze existing test patterns and coverage
- `/workflow:tools:conflict-resolution` - Phase 4: Detect and resolve conflicts (auto-triggered if conflict_risk ≥ medium)
- `/compact` - Phase 4: Memory optimization (if context approaching limits)
- `/workflow:tools:task-generate-tdd` - Phase 5: Generate TDD tasks with agent-driven approach (default, autonomous)
- `/workflow:tools:task-generate-tdd --cli-execute` - Phase 5: Generate TDD tasks with CLI tools (Gemini/Qwen, when `--cli-execute` flag used)
- `/workflow:tools:task-generate-tdd` - Phase 5: Generate TDD tasks (CLI tool usage determined semantically)
**Follow-up Commands**:
- `/workflow:action-plan-verify` - Recommended: Verify TDD plan quality and structure before execution

View File

@@ -77,18 +77,32 @@ find .workflow/active/ -name "WFS-*" -type d | head -1 | sed 's/.*\///'
```bash
# Load all task JSONs
find .workflow/active/{sessionId}/.task/ -name '*.json'
for task_file in .workflow/active/{sessionId}/.task/*.json; do
cat "$task_file"
done
# Extract task IDs
find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.id' {} \;
for task_file in .workflow/active/{sessionId}/.task/*.json; do
cat "$task_file" | jq -r '.id'
done
# Check dependencies
find .workflow/active/{sessionId}/.task/ -name 'IMPL-*.json' -exec jq -r '.context.depends_on[]?' {} \;
find .workflow/active/{sessionId}/.task/ -name 'REFACTOR-*.json' -exec jq -r '.context.depends_on[]?' {} \;
# Check dependencies - read tasks and filter for IMPL/REFACTOR
for task_file in .workflow/active/{sessionId}/.task/IMPL-*.json; do
cat "$task_file" | jq -r '.context.depends_on[]?'
done
for task_file in .workflow/active/{sessionId}/.task/REFACTOR-*.json; do
cat "$task_file" | jq -r '.context.depends_on[]?'
done
# Check meta fields
find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.tdd_phase' {} \;
find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
for task_file in .workflow/active/{sessionId}/.task/*.json; do
cat "$task_file" | jq -r '.meta.tdd_phase'
done
for task_file in .workflow/active/{sessionId}/.task/*.json; do
cat "$task_file" | jq -r '.meta.agent'
done
```
**Validation**:
@@ -127,7 +141,7 @@ find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent
**Gemini analysis for comprehensive TDD compliance report**
```bash
cd project-root && gemini -p "
ccw cli -p "
PURPOSE: Generate TDD compliance report
TASK: Analyze TDD workflow execution and generate quality report
CONTEXT: @{.workflow/active/{sessionId}/.task/*.json,.workflow/active/{sessionId}/.summaries/*,.workflow/active/{sessionId}/.process/tdd-cycle-report.md}
@@ -139,7 +153,7 @@ EXPECTED:
- Red-Green-Refactor cycle validation
- Best practices adherence assessment
RULES: Focus on TDD best practices and workflow adherence. Be specific about violations and improvements.
" > .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md
" --tool gemini --mode analysis --cd project-root > .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md
```
**Output**: TDD_COMPLIANCE_REPORT.md

View File

@@ -221,6 +221,7 @@ return "conservative";
```javascript
Task(
subagent_type="cli-planning-agent",
run_in_background=false,
description=`Analyze test failures (iteration ${N}) - ${strategy} strategy`,
prompt=`
## Task Objective
@@ -271,6 +272,7 @@ Task(
```javascript
Task(
subagent_type="test-fix-agent",
run_in_background=false,
description=`Execute ${task.meta.type}: ${task.title}`,
prompt=`
## Task Objective

View File

@@ -1,7 +1,7 @@
---
name: test-fix-gen
description: Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning
argument-hint: "[--use-codex] [--cli-execute] (source-session-id | \"feature description\" | /path/to/file.md)"
argument-hint: "(source-session-id | \"feature description\" | /path/to/file.md)"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
---
@@ -43,7 +43,7 @@ fi
- **Session Isolation**: Creates independent `WFS-test-[slug]` session
- **Context-First**: Gathers implementation context via appropriate method
- **Format Reuse**: Creates standard `IMPL-*.json` tasks with `meta.type: "test-fix"`
- **Manual First**: Default to manual fixes, use `--use-codex` for automation
- **Semantic CLI Selection**: CLI tool usage determined from user's task description
- **Automatic Detection**: Input pattern determines execution mode
### Coordinator Role
@@ -79,16 +79,14 @@ This command is a **pure planning coordinator**:
```bash
# Basic syntax
/workflow:test-fix-gen [FLAGS] <INPUT>
# Flags (optional)
--use-codex # Enable Codex automated fixes in IMPL-002
--cli-execute # Enable CLI execution in IMPL-001
/workflow:test-fix-gen <INPUT>
# Input
<INPUT> # Session ID, description, or file path
```
**Note**: CLI tool usage is determined semantically from the task description. To request CLI execution, include it in your description (e.g., "use Codex for automated fixes").
### Usage Examples
#### Session Mode
@@ -96,11 +94,8 @@ This command is a **pure planning coordinator**:
# Test validation for completed implementation
/workflow:test-fix-gen WFS-user-auth-v2
# With automated fixes
/workflow:test-fix-gen --use-codex WFS-api-endpoints
# With CLI execution
/workflow:test-fix-gen --cli-execute --use-codex WFS-payment-flow
# With semantic CLI request
/workflow:test-fix-gen WFS-api-endpoints # Add "use Codex" in description for automated fixes
```
#### Prompt Mode - Text Description
@@ -108,17 +103,14 @@ This command is a **pure planning coordinator**:
# Generate tests from feature description
/workflow:test-fix-gen "Test the user authentication API endpoints in src/auth/api.ts"
# With automated fixes
/workflow:test-fix-gen --use-codex "Test user registration and login flows"
# With CLI execution (semantic)
/workflow:test-fix-gen "Test user registration and login flows, use Codex for automated fixes"
```
#### Prompt Mode - File Reference
```bash
# Generate tests from requirements file
/workflow:test-fix-gen ./docs/api-requirements.md
# With flags
/workflow:test-fix-gen --use-codex --cli-execute ./specs/feature.md
```
### Mode Comparison
@@ -143,7 +135,7 @@ This command is a **pure planning coordinator**:
5. **Complete All Phases**: Do not return until Phase 5 completes
6. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
7. **Automatic Detection**: Mode auto-detected from input pattern
8. **Parse Flags**: Extract `--use-codex` and `--cli-execute` flags for Phase 4
8. **Semantic CLI Detection**: CLI tool usage determined from user's task description for Phase 4
9. **Task Attachment Model**: SlashCommand dispatch **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
10. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
@@ -151,23 +143,35 @@ This command is a **pure planning coordinator**:
#### Phase 1: Create Test Session
**Step 1.1: Dispatch** - Create test workflow session
**Step 1.0: Load Source Session Intent (Session Mode Only)** - Preserve user's original task description for semantic CLI selection
```javascript
// Session Mode
SlashCommand(command="/workflow:session:start --new \"Test validation for [sourceSessionId]\"")
// Session Mode: Read source session metadata to get original task description
Read(".workflow/active/[sourceSessionId]/workflow-session.json")
// OR if context-package exists:
Read(".workflow/active/[sourceSessionId]/.process/context-package.json")
// Prompt Mode
SlashCommand(command="/workflow:session:start --new \"Test generation for: [description]\"")
// Extract: metadata.task_description or project/description field
// This preserves user's CLI tool preferences (e.g., "use Codex for fixes")
```
**Step 1.1: Dispatch** - Create test workflow session with preserved intent
```javascript
// Session Mode - Include original task description to enable semantic CLI selection
SlashCommand(command="/workflow:session:start --type test --new \"Test validation for [sourceSessionId]: [originalTaskDescription]\"")
// Prompt Mode - User's description already contains their intent
SlashCommand(command="/workflow:session:start --type test --new \"Test generation for: [description]\"")
```
**Input**: User argument (session ID, description, or file path)
**Expected Behavior**:
- Creates new session: `WFS-test-[slug]`
- Writes `workflow-session.json` metadata:
- **Session Mode**: Includes `workflow_type: "test_session"`, `source_session_id: "[sourceId]"`
- **Prompt Mode**: Includes `workflow_type: "test_session"` only
- Writes `workflow-session.json` metadata with `type: "test"`
- **Session Mode**: Additionally includes `source_session_id: "[sourceId]"`, description with original user intent
- **Prompt Mode**: Uses user's description (already contains intent)
- Returns new session ID
**Parse Output**:
@@ -283,13 +287,13 @@ For each targeted file/function, Gemini MUST generate:
**Step 4.1: Dispatch** - Generate test task JSONs
```javascript
SlashCommand(command="/workflow:tools:test-task-generate [--use-codex] [--cli-execute] --session [testSessionId]")
SlashCommand(command="/workflow:tools:test-task-generate --session [testSessionId]")
```
**Input**:
- `testSessionId` from Phase 1
- `--use-codex` flag (if present) - Controls IMPL-002 fix mode
- `--cli-execute` flag (if present) - Controls IMPL-001 generation mode
**Note**: CLI tool usage is determined semantically from user's task description.
**Expected Behavior**:
- Parse TEST_ANALYSIS_RESULTS.md from Phase 3 (multi-layered test plan)
@@ -422,7 +426,7 @@ CRITICAL - Next Steps:
- **Phase 2**: Mode-specific context gathering (session summaries vs codebase analysis)
- **Phase 3**: Multi-layered test requirements analysis (L0: Static, L1: Unit, L2: Integration, L3: E2E)
- **Phase 4**: Multi-task generation with quality gate (IMPL-001, IMPL-001.5-review, IMPL-002)
- **Fix Mode Configuration**: `--use-codex` flag controls IMPL-002 fix mode (manual vs automated)
- **Fix Mode Configuration**: CLI tool usage determined semantically from user's task description
---
@@ -521,16 +525,15 @@ If quality gate fails:
- Task ID: `IMPL-002`
- `meta.type: "test-fix"`
- `meta.agent: "@test-fix-agent"`
- `meta.use_codex: true|false` (based on `--use-codex` flag)
- `context.depends_on: ["IMPL-001"]`
- `context.requirements`: Execute and fix tests
**Test-Fix Cycle Specification**:
**Note**: This specification describes what test-cycle-execute orchestrator will do. The agent only executes single tasks.
- **Cycle Pattern** (orchestrator-managed): test → gemini_diagnose → manual_fix (or codex) → retest
- **Cycle Pattern** (orchestrator-managed): test → gemini_diagnose → fix (agent or CLI) → retest
- **Tools Configuration** (orchestrator-controlled):
- Gemini for analysis with bug-fix template → surgical fix suggestions
- Manual fix application (default) OR Codex if `--use-codex` flag (resume mechanism)
- Agent fix application (default) OR CLI if `command` field present in implementation_approach
- **Exit Conditions** (orchestrator-enforced):
- Success: All tests pass
- Failure: Max iterations reached (5)
@@ -576,11 +579,11 @@ WFS-test-[session]/
**File**: `workflow-session.json`
**Session Mode** includes:
- `workflow_type: "test_session"`
- `type: "test"` (set by session:start --type test)
- `source_session_id: "[sourceSessionId]"` (enables automatic cross-session context)
**Prompt Mode** includes:
- `workflow_type: "test_session"`
- `type: "test"` (set by session:start --type test)
- No `source_session_id` field
### Execution Flow Diagram
@@ -674,8 +677,7 @@ Key Points:
4. **Mode Selection**:
- Use **Session Mode** for completed workflow validation
- Use **Prompt Mode** for ad-hoc test generation
- Use `--use-codex` for autonomous fix application
- Use `--cli-execute` for enhanced generation capabilities
- Include "use Codex" in description for autonomous fix application
## Related Commands
@@ -688,9 +690,7 @@ Key Points:
- `/workflow:tools:test-context-gather` - Phase 2 (Session Mode): Gather source session context
- `/workflow:tools:context-gather` - Phase 2 (Prompt Mode): Analyze codebase directly
- `/workflow:tools:test-concept-enhanced` - Phase 3: Generate test requirements using Gemini
- `/workflow:tools:test-task-generate` - Phase 4: Generate test task JSONs using action-planning-agent (autonomous, default)
- `/workflow:tools:test-task-generate --use-codex` - Phase 4: With automated Codex fixes for IMPL-002 (when `--use-codex` flag used)
- `/workflow:tools:test-task-generate --cli-execute` - Phase 4: With CLI execution mode for IMPL-001 test generation (when `--cli-execute` flag used)
- `/workflow:tools:test-task-generate` - Phase 4: Generate test task JSONs (CLI tool usage determined semantically)
**Follow-up Commands**:
- `/workflow:status` - Review generated test tasks

View File

@@ -1,7 +1,7 @@
---
name: test-gen
description: Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks
argument-hint: "[--use-codex] [--cli-execute] source-session-id"
argument-hint: "source-session-id"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
---
@@ -16,7 +16,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
- **Context-First**: Prioritizes gathering code changes and summaries from source session
- **Format Reuse**: Creates standard `IMPL-*.json` task, using `meta.type: "test-fix"` for agent assignment
- **Parameter Simplification**: Tools auto-detect test session type via metadata, no manual cross-session parameters needed
- **Manual First**: Default to manual fixes, use `--use-codex` flag for automated Codex fix application
- **Semantic CLI Selection**: CLI tool usage is determined by user's task description (e.g., "use Codex for fixes")
**Task Attachment Model**:
- SlashCommand dispatch **expands workflow** by attaching sub-tasks to current TodoWrite
@@ -48,7 +48,7 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
5. **Complete All Phases**: Do not return to user until Phase 5 completes (summary returned)
6. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
7. **Automatic Detection**: context-gather auto-detects test session and gathers source session context
8. **Parse --use-codex Flag**: Extract flag from arguments and pass to Phase 4 (test-task-generate)
8. **Semantic CLI Selection**: CLI tool usage determined from user's task description, passed to Phase 4
9. **Command Boundary**: This command ends at Phase 5 summary. Test execution is NOT part of this command.
10. **Task Attachment Model**: SlashCommand dispatch **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
11. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
@@ -57,19 +57,35 @@ allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*)
### Phase 1: Create Test Session
**Step 1.1: Dispatch** - Create new test workflow session
**Step 1.0: Load Source Session Intent** - Preserve user's original task description for semantic CLI selection
```javascript
SlashCommand(command="/workflow:session:start --new \"Test validation for [sourceSessionId]\"")
// Read source session metadata to get original task description
Read(".workflow/active/[sourceSessionId]/workflow-session.json")
// OR if context-package exists:
Read(".workflow/active/[sourceSessionId]/.process/context-package.json")
// Extract: metadata.task_description or project/description field
// This preserves user's CLI tool preferences (e.g., "use Codex for fixes")
```
**Input**: `sourceSessionId` from user argument (e.g., `WFS-user-auth`)
**Step 1.1: Dispatch** - Create new test workflow session with preserved intent
```javascript
// Include original task description to enable semantic CLI selection
SlashCommand(command="/workflow:session:start --new \"Test validation for [sourceSessionId]: [originalTaskDescription]\"")
```
**Input**:
- `sourceSessionId` from user argument (e.g., `WFS-user-auth`)
- `originalTaskDescription` from source session metadata (preserves CLI tool preferences)
**Expected Behavior**:
- Creates new session with pattern `WFS-test-[source-slug]` (e.g., `WFS-test-user-auth`)
- Writes metadata to `workflow-session.json`:
- `workflow_type: "test_session"`
- `source_session_id: "[sourceSessionId]"`
- Description includes original user intent for semantic CLI selection
- Returns new session ID for subsequent phases
**Parse Output**:
@@ -224,13 +240,13 @@ SlashCommand(command="/workflow:tools:test-concept-enhanced --session [testSessi
**Step 4.1: Dispatch** - Generate test task JSON files and planning documents
```javascript
SlashCommand(command="/workflow:tools:test-task-generate [--use-codex] [--cli-execute] --session [testSessionId]")
SlashCommand(command="/workflow:tools:test-task-generate --session [testSessionId]")
```
**Input**:
- `testSessionId` from Phase 1
- `--use-codex` flag (if present in original command) - Controls IMPL-002 fix mode
- `--cli-execute` flag (if present in original command) - Controls IMPL-001 generation mode
**Note**: CLI tool usage for fixes is determined semantically from user's task description (e.g., "use Codex for automated fixes").
**Expected Behavior**:
- Parse TEST_ANALYSIS_RESULTS.md from Phase 3
@@ -260,16 +276,15 @@ SlashCommand(command="/workflow:tools:test-task-generate [--use-codex] [--cli-ex
- Task ID: `IMPL-002`
- `meta.type: "test-fix"`
- `meta.agent: "@test-fix-agent"`
- `meta.use_codex: true|false` (based on --use-codex flag)
- `context.depends_on: ["IMPL-001"]`
- `context.requirements`: Execute and fix tests
- `flow_control.implementation_approach.test_fix_cycle`: Complete cycle specification
- **Cycle pattern**: test → gemini_diagnose → manual_fix (or codex if --use-codex) → retest
- **Tools configuration**: Gemini for analysis with bug-fix template, manual or Codex for fixes
- **Cycle pattern**: test → gemini_diagnose → fix (agent or CLI based on `command` field) → retest
- **Tools configuration**: Gemini for analysis with bug-fix template, agent or CLI for fixes
- **Exit conditions**: Success (all pass) or failure (max iterations)
- `flow_control.implementation_approach.modification_points`: 3-phase execution flow
- Phase 1: Initial test execution
- Phase 2: Iterative Gemini diagnosis + manual/Codex fixes (based on flag)
- Phase 2: Iterative Gemini diagnosis + fixes (agent or CLI based on step's `command` field)
- Phase 3: Final validation and certification
<!-- TodoWrite: When test-task-generate dispatched, INSERT 3 test-task-generate tasks -->
@@ -327,7 +342,7 @@ Artifacts Created:
Test Framework: [detected framework]
Test Files to Generate: [count]
Fix Mode: [Manual|Codex Automated] (based on --use-codex flag)
Fix Mode: [Agent|CLI] (based on `command` field in implementation_approach steps)
Review Generated Artifacts:
- Test plan: .workflow/[testSessionId]/IMPL_PLAN.md
@@ -373,7 +388,7 @@ Ready for execution. Use appropriate workflow commands to proceed.
- **Phase 2**: Cross-session context gathering from source implementation session
- **Phase 3**: Test requirements analysis with Gemini for generation strategy
- **Phase 4**: Dual-task generation (IMPL-001 for test generation, IMPL-002 for test execution)
- **Fix Mode Configuration**: `--use-codex` flag controls IMPL-002 fix mode (manual vs automated)
- **Fix Mode Configuration**: CLI tool usage determined semantically from user's task description
@@ -444,7 +459,7 @@ Generates two task definition files:
- Agent: @test-fix-agent
- Dependency: IMPL-001 must complete first
- Max iterations: 5
- Fix mode: Manual or Codex (based on --use-codex flag)
- Fix mode: Agent or CLI (based on `command` field in implementation_approach)
See `/workflow:tools:test-task-generate` for complete task JSON schemas.
@@ -481,11 +496,10 @@ Created in `.workflow/active/WFS-test-[session]/`:
**IMPL-002.json Structure**:
- `meta.type: "test-fix"`
- `meta.agent: "@test-fix-agent"`
- `meta.use_codex`: true/false (based on --use-codex flag)
- `context.depends_on: ["IMPL-001"]`
- `flow_control.implementation_approach.test_fix_cycle`: Complete cycle specification
- Gemini diagnosis template
- Fix application mode (manual/codex)
- Fix application mode (agent or CLI based on `command` field)
- Max iterations: 5
- `flow_control.implementation_approach.modification_points`: 3-phase flow
@@ -503,13 +517,11 @@ See `/workflow:tools:test-task-generate` for complete JSON schemas.
**Prerequisite Commands**:
- `/workflow:plan` or `/workflow:execute` - Complete implementation session that needs test validation
**Dispatched by This Command** (5 phases):
**Dispatched by This Command** (4 phases):
- `/workflow:session:start` - Phase 1: Create independent test workflow session
- `/workflow:tools:test-context-gather` - Phase 2: Analyze test coverage and gather source session context
- `/workflow:tools:test-concept-enhanced` - Phase 3: Generate test requirements and strategy using Gemini
- `/workflow:tools:test-task-generate` - Phase 4: Generate test task JSONs using action-planning-agent (autonomous, default)
- `/workflow:tools:test-task-generate --use-codex` - Phase 4: With automated Codex fixes for IMPL-002 (when `--use-codex` flag used)
- `/workflow:tools:test-task-generate --cli-execute` - Phase 4: With CLI execution mode for IMPL-001 test generation (when `--cli-execute` flag used)
- `/workflow:tools:test-task-generate` - Phase 4: Generate test task JSONs (CLI tool usage determined semantically)
**Follow-up Commands**:
- `/workflow:status` - Review generated test tasks

View File

@@ -108,42 +108,51 @@ Phase 4: Apply Modifications
**Agent Delegation**:
```javascript
Task(subagent_type="cli-execution-agent", prompt=`
Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
## Context
- Session: {session_id}
- Risk: {conflict_risk}
- Files: {existing_files_list}
## Exploration Context (from context-package.exploration_results)
- Exploration Count: ${contextPackage.exploration_results?.exploration_count || 0}
- Angles Analyzed: ${JSON.stringify(contextPackage.exploration_results?.angles || [])}
- Pre-identified Conflict Indicators: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.conflict_indicators || [])}
- Critical Files: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.critical_files?.map(f => f.path) || [])}
- All Patterns: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.all_patterns || [])}
- All Integration Points: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.all_integration_points || [])}
## Analysis Steps
### 1. Load Context
- Read existing files from conflict_detection.existing_files
- Load plan from .workflow/active/{session_id}/.process/context-package.json
- **NEW**: Load exploration_results and use aggregated_insights for enhanced analysis
- Extract role analyses and requirements
### 2. Execute CLI Analysis (Enhanced with Scenario Uniqueness Detection)
### 2. Execute CLI Analysis (Enhanced with Exploration + Scenario Uniqueness)
Primary (Gemini):
cd {project_root} && gemini -p "
PURPOSE: Detect conflicts between plan and codebase, including module scenario overlaps
ccw cli -p "
PURPOSE: Detect conflicts between plan and codebase, using exploration insights
TASK:
Compare architectures
**Review pre-identified conflict_indicators from exploration results**
• Compare architectures (use exploration key_patterns)
• Identify breaking API changes
• Detect data model incompatibilities
• Assess dependency conflicts
• **NEW: Analyze module scenario uniqueness**
- Extract new module functionality from plan
- Search all existing modules with similar functionality
- Compare scenario coverage and identify overlaps
• **Analyze module scenario uniqueness**
- Use exploration integration_points for precise locations
- Cross-validate with exploration critical_files
- Generate clarification questions for boundary definition
MODE: analysis
CONTEXT: @**/*.ts @**/*.js @**/*.tsx @**/*.jsx @.workflow/active/{session_id}/**/*
EXPECTED: Conflict list with severity ratings, including ModuleOverlap conflicts with:
- Existing module list with scenarios
- Overlap analysis matrix
EXPECTED: Conflict list with severity ratings, including:
- Validation of exploration conflict_indicators
- ModuleOverlap conflicts with overlap_analysis
- Targeted clarification questions
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on breaking changes, migration needs, and functional overlaps | analysis=READ-ONLY
"
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
" --tool gemini --mode analysis --cd {project_root}
Fallback: Qwen (same prompt) → Claude (manual analysis)
@@ -160,7 +169,7 @@ Task(subagent_type="cli-execution-agent", prompt=`
### 4. Return Structured Conflict Data
⚠️ DO NOT generate CONFLICT_RESOLUTION.md file
⚠️ Output to conflict-resolution.json (generated in Phase 4)
Return JSON format for programmatic processing:
@@ -458,14 +467,30 @@ selectedStrategies.forEach(item => {
console.log(`\nApplying ${modifications.length} modifications...`);
// 2. Apply each modification using Edit tool
// 2. Apply each modification using Edit tool (with fallback to context-package.json)
const appliedModifications = [];
const failedModifications = [];
const fallbackConstraints = []; // For files that don't exist
modifications.forEach((mod, idx) => {
try {
console.log(`[${idx + 1}/${modifications.length}] 修改 ${mod.file}...`);
// Check if target file exists (brainstorm files may not exist in lite workflow)
if (!file_exists(mod.file)) {
console.log(`  ⚠️ File does not exist; writing to context-package.json as a constraint`);
fallbackConstraints.push({
source: "conflict-resolution",
conflict_id: mod.conflict_id,
target_file: mod.file,
section: mod.section,
change_type: mod.change_type,
content: mod.new_content,
rationale: mod.rationale
});
return; // Skip to next modification
}
if (mod.change_type === "update") {
Edit({
file_path: mod.file,
@@ -493,14 +518,45 @@ modifications.forEach((mod, idx) => {
}
});
// 3. Update context-package.json with resolution details
// 2b. Generate conflict-resolution.json output file
const resolutionOutput = {
session_id: sessionId,
resolved_at: new Date().toISOString(),
summary: {
total_conflicts: conflicts.length,
resolved_with_strategy: selectedStrategies.length,
custom_handling: customConflicts.length,
fallback_constraints: fallbackConstraints.length
},
resolved_conflicts: selectedStrategies.map(s => ({
conflict_id: s.conflict_id,
strategy_name: s.strategy.name,
strategy_approach: s.strategy.approach,
clarifications: s.clarifications || [],
modifications_applied: s.strategy.modifications?.filter(m =>
appliedModifications.some(am => am.conflict_id === s.conflict_id)
) || []
})),
custom_conflicts: customConflicts.map(c => ({
id: c.id,
brief: c.brief,
category: c.category,
suggestions: c.suggestions,
overlap_analysis: c.overlap_analysis || null
})),
planning_constraints: fallbackConstraints, // Constraints for files that don't exist
failed_modifications: failedModifications
};
const resolutionPath = `.workflow/active/${sessionId}/.process/conflict-resolution.json`;
Write(resolutionPath, JSON.stringify(resolutionOutput, null, 2));
console.log(`\n📄 Conflict resolution saved to: ${resolutionPath}`);
// 3. Update context-package.json with resolution details (reference to JSON file)
const contextPackage = JSON.parse(Read(contextPath));
contextPackage.conflict_detection.conflict_risk = "resolved";
contextPackage.conflict_detection.resolved_conflicts = selectedStrategies.map(s => ({
conflict_id: s.conflict_id,
strategy_name: s.strategy.name,
clarifications: s.clarifications
}));
contextPackage.conflict_detection.resolution_file = resolutionPath; // Reference to detailed JSON
contextPackage.conflict_detection.resolved_conflicts = selectedStrategies.map(s => s.conflict_id);
contextPackage.conflict_detection.custom_conflicts = customConflicts.map(c => c.id);
contextPackage.conflict_detection.resolved_at = new Date().toISOString();
Write(contextPath, JSON.stringify(contextPackage, null, 2));
@@ -573,12 +629,50 @@ return {
✓ Agent log saved to .workflow/active/{session_id}/.chat/
```
## Output Format: Agent JSON Response
## Output Format
### Primary Output: conflict-resolution.json
**Path**: `.workflow/active/{session_id}/.process/conflict-resolution.json`
**Schema**:
```json
{
"session_id": "WFS-xxx",
"resolved_at": "ISO timestamp",
"summary": {
"total_conflicts": 3,
"resolved_with_strategy": 2,
"custom_handling": 1,
"fallback_constraints": 0
},
"resolved_conflicts": [
{
"conflict_id": "CON-001",
"strategy_name": "策略名称",
"strategy_approach": "实现方法",
"clarifications": [],
"modifications_applied": []
}
],
"custom_conflicts": [
{
"id": "CON-002",
"brief": "冲突摘要",
"category": "ModuleOverlap",
"suggestions": ["建议1", "建议2"],
"overlap_analysis": null
}
],
"planning_constraints": [],
"failed_modifications": []
}
```
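A minimal consumer sketch for this file (illustrative; field access follows the schema above):
```javascript
// Sketch: read conflict-resolution.json and surface results downstream
const resolution = JSON.parse(
  Read(`.workflow/active/${sessionId}/.process/conflict-resolution.json`)
);
console.log(`Resolved ${resolution.summary.resolved_with_strategy}/${resolution.summary.total_conflicts} conflicts`);
// planning_constraints hold modifications whose target files did not exist
for (const constraint of resolution.planning_constraints) {
  console.log(`Constraint for ${constraint.target_file}: ${constraint.rationale}`);
}
```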
### Secondary: Agent JSON Response (stdout)
**Focus**: Structured conflict data with actionable modifications for programmatic processing.
**Format**: JSON to stdout (NO file generation)
**Structure**: Defined in Phase 2, Step 4 (agent prompt)
### Key Requirements
@@ -626,11 +720,12 @@ If Edit tool fails mid-application:
- Requires: `conflict_risk ≥ medium`
**Output**:
- Modified files:
- Generated file:
- `.workflow/active/{session_id}/.process/conflict-resolution.json` (primary output)
- Modified files (if exist):
- `.workflow/active/{session_id}/.brainstorm/guidance-specification.md`
- `.workflow/active/{session_id}/.brainstorm/{role}/analysis.md`
- `.workflow/active/{session_id}/.process/context-package.json` (conflict_risk → resolved)
- NO report file generation
- `.workflow/active/{session_id}/.process/context-package.json` (conflict_risk → resolved, resolution_file reference)
**User Interaction**:
- **Iterative conflict processing**: One conflict at a time, not in batches
@@ -658,7 +753,7 @@ If Edit tool fails mid-application:
✓ guidance-specification.md updated with resolved conflicts
✓ Role analyses (*.md) updated with resolved conflicts
✓ context-package.json marked as "resolved" with clarification records
No CONFLICT_RESOLUTION.md file generated
conflict-resolution.json generated with full resolution details
✓ Modification summary includes:
- Total conflicts
- Resolved with strategy (count)

View File

@@ -36,24 +36,23 @@ Step 1: Context-Package Detection
├─ Valid package exists → Return existing (skip execution)
└─ No valid package → Continue to Step 2
Step 2: Invoke Context-Search Agent
├─ Phase 1: Initialization & Pre-Analysis
│  ├─ Load project.json as primary context
│  ├─ Initialize code-index
│  └─ Classify complexity
├─ Phase 2: Multi-Source Discovery
│  ├─ Track 1: Historical archive analysis
│  ├─ Track 2: Reference documentation
│  ├─ Track 3: Web examples (Exa MCP)
│  └─ Track 4: Codebase analysis (5-layer)
└─ Phase 3: Synthesis & Packaging
   ├─ Apply relevance scoring
   ├─ Integrate brainstorm artifacts
   ├─ Perform conflict detection
   └─ Generate context-package.json
Step 2: Complexity Assessment & Parallel Explore (NEW)
├─ Analyze task_description → classify Low/Medium/High
├─ Select exploration angles (1-4 based on complexity)
├─ Launch N cli-explore-agents in parallel
│  └─ Each outputs: exploration-{angle}.json
└─ Generate explorations-manifest.json
Step 3: Output Verification
└─ Verify context-package.json created
Step 3: Invoke Context-Search Agent (with exploration input)
├─ Phase 1: Initialization & Pre-Analysis
├─ Phase 2: Multi-Source Discovery
│  ├─ Track 0: Exploration Synthesis (prioritize & deduplicate)
│  └─ Track 1-4: Existing tracks
└─ Phase 3: Synthesis & Packaging
   └─ Generate context-package.json with exploration_results
Step 4: Output Verification
└─ Verify context-package.json contains exploration_results
```
## Execution Flow
@@ -80,13 +79,144 @@ if (file_exists(contextPackagePath)) {
}
```
### Step 2: Invoke Context-Search Agent
### Step 2: Complexity Assessment & Parallel Explore
**Only execute if Step 1 finds no valid package**
```javascript
// 2.1 Complexity Assessment
function analyzeTaskComplexity(taskDescription) {
const text = taskDescription.toLowerCase();
if (/architect|refactor|restructure|modular|cross-module/.test(text)) return 'High';
if (/multiple|several|integrate|migrate|extend/.test(text)) return 'Medium';
return 'Low';
}
const ANGLE_PRESETS = {
architecture: ['architecture', 'dependencies', 'modularity', 'integration-points'],
security: ['security', 'auth-patterns', 'dataflow', 'validation'],
performance: ['performance', 'bottlenecks', 'caching', 'data-access'],
bugfix: ['error-handling', 'dataflow', 'state-management', 'edge-cases'],
feature: ['patterns', 'integration-points', 'testing', 'dependencies'],
refactor: ['architecture', 'patterns', 'dependencies', 'testing']
};
function selectAngles(taskDescription, complexity) {
const text = taskDescription.toLowerCase();
let preset = 'feature';
if (/refactor|architect|restructure/.test(text)) preset = 'architecture';
else if (/security|auth|permission/.test(text)) preset = 'security';
else if (/performance|slow|optimi/.test(text)) preset = 'performance';
else if (/fix|bug|error|issue/.test(text)) preset = 'bugfix';
const count = complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1);
return ANGLE_PRESETS[preset].slice(0, count);
}
const complexity = analyzeTaskComplexity(task_description);
const selectedAngles = selectAngles(task_description, complexity);
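// Worked example (assumed input): "Refactor the auth module to support multiple providers"
// analyzeTaskComplexity → 'High' (matches /refactor/)
// selectAngles → preset 'architecture', count 4
//   = ['architecture', 'dependencies', 'modularity', 'integration-points']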
const sessionFolder = `.workflow/active/${session_id}/.process`;
// 2.2 Launch Parallel Explore Agents
const explorationTasks = selectedAngles.map((angle, index) =>
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description=`Explore: ${angle}`,
prompt=`
## Task Objective
Execute **${angle}** exploration for task planning context. Analyze codebase from this specific angle to discover relevant structure, patterns, and constraints.
## Assigned Context
- **Exploration Angle**: ${angle}
- **Task Description**: ${task_description}
- **Session ID**: ${session_id}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/exploration-${angle}.json
## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
3. Execute: cat ~/.claude/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)
## Exploration Strategy (${angle} focus)
**Step 1: Structural Scan** (Bash)
- get_modules_by_depth.sh → identify modules related to ${angle}
- find/rg → locate files relevant to ${angle} aspect
- Analyze imports/dependencies from ${angle} perspective
**Step 2: Semantic Analysis** (Gemini CLI)
- How does existing code handle ${angle} concerns?
- What patterns are used for ${angle}?
- Where would new code integrate from ${angle} viewpoint?
**Step 3: Write Output**
- Consolidate ${angle} findings into JSON
- Identify ${angle}-specific clarification needs
## Expected Output
**File**: ${sessionFolder}/exploration-${angle}.json
**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 3, follow schema exactly
**Required Fields** (all ${angle} focused):
- project_structure: Modules/architecture relevant to ${angle}
- relevant_files: Files affected from ${angle} perspective
**IMPORTANT**: Use object format with relevance scores for synthesis:
\`[{path: "src/file.ts", relevance: 0.85, rationale: "Core ${angle} logic"}]\`
Scores: 0.7+ high priority, 0.5-0.7 medium, <0.5 low
- patterns: ${angle}-related patterns to follow
- dependencies: Dependencies relevant to ${angle}
- integration_points: Where to integrate from ${angle} viewpoint (include file:line locations)
- constraints: ${angle}-specific limitations/conventions
- clarification_needs: ${angle}-related ambiguities (options array + recommended index)
- _metadata.exploration_angle: "${angle}"
## Success Criteria
- [ ] Schema obtained via cat explore-json-schema.json
- [ ] get_modules_by_depth executed (via ccw tool)
- [ ] At least 3 relevant files identified with ${angle} rationale
- [ ] Patterns are actionable (code examples, not generic advice)
- [ ] Integration points include file:line locations
- [ ] Constraints are project-specific to ${angle}
- [ ] JSON output follows schema exactly
- [ ] clarification_needs includes options + recommended
## Output
Write: ${sessionFolder}/exploration-${angle}.json
Return: 2-3 sentence summary of ${angle} findings
`
)
);
// 2.3 Generate Manifest after all complete
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`).split('\n').filter(f => f.trim());
const explorationManifest = {
session_id,
task_description,
timestamp: new Date().toISOString(),
complexity,
exploration_count: selectedAngles.length,
angles_explored: selectedAngles,
explorations: explorationFiles.map(file => {
const data = JSON.parse(Read(file));
return { angle: data._metadata.exploration_angle, file: file.split('/').pop(), path: file, index: data._metadata.exploration_index };
})
};
Write(`${sessionFolder}/explorations-manifest.json`, JSON.stringify(explorationManifest, null, 2));
```
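For reference, a minimal sketch of what angle selection and the resulting manifest look like for a concrete task (session ID, timestamp, and task text are illustrative, not prescriptive):
```javascript
// Example: a Medium-complexity auth task matches the 'security' preset and
// selects 3 angles, so Step 2.3 produces a manifest shaped like this.
const angles = selectAngles("Fix permission check in auth middleware", "Medium");
// → ['security', 'auth-patterns', 'dataflow']

const exampleManifest = {
  session_id: "WFS-auth",
  task_description: "Fix permission check in auth middleware",
  timestamp: "2025-12-25T12:00:00.000Z",
  complexity: "Medium",
  exploration_count: 3,
  angles_explored: angles,
  explorations: angles.map((angle, i) => ({
    angle,
    file: `exploration-${angle}.json`,
    path: `.workflow/active/WFS-auth/.process/exploration-${angle}.json`,
    index: i + 1
  }))
};
```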
### Step 3: Invoke Context-Search Agent
**Only execute after Step 2 completes**
```javascript
Task(
subagent_type="context-search-agent",
run_in_background=false,
description="Gather comprehensive context for plan",
prompt=`
## Execution Mode
@@ -97,17 +227,24 @@ Task(
- **Task Description**: ${task_description}
- **Output Path**: .workflow/${session_id}/.process/context-package.json
## Exploration Input (from Step 2)
- **Manifest**: ${sessionFolder}/explorations-manifest.json
- **Exploration Count**: ${explorationManifest.exploration_count}
- **Angles**: ${explorationManifest.angles_explored.join(', ')}
- **Complexity**: ${complexity}
## Mission
Execute complete context-search-agent workflow for implementation planning:
### Phase 1: Initialization & Pre-Analysis
1. **Project State Loading**: Read and parse `.workflow/project.json`. Use its `overview` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components. If file doesn't exist, proceed with fresh analysis.
2. **Detection**: Check for existing context-package (early exit if valid)
3. **Foundation**: Initialize code-index, get project structure, load docs
3. **Foundation**: Initialize CodexLens, get project structure, load docs
4. **Analysis**: Extract keywords, determine scope, classify complexity based on task description and project state
### Phase 2: Multi-Source Context Discovery
Execute all 4 discovery tracks:
Execute all discovery tracks:
- **Track 0**: Exploration Synthesis (load ${sessionFolder}/explorations-manifest.json, prioritize critical_files, deduplicate patterns/integration_points)
- **Track 1**: Historical archive analysis (query manifest.json for lessons learned)
- **Track 2**: Reference documentation (CLAUDE.md, architecture docs)
- **Track 3**: Web examples (use Exa MCP for unfamiliar tech/APIs)
@@ -116,7 +253,7 @@ Execute all 4 discovery tracks:
### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build dependency graph
2. **Synthesize 4-source data**: Merge findings from all sources (archive > docs > code > web). **Prioritize the context from `project.json`** for architecture and tech stack unless code analysis reveals it's outdated.
3. **Populate `project_context`**: Directly use the `overview` from `project.json` to fill the `project_context` section of the output `context-package.json`. Include technology_stack, architecture, key_components, and entry_points.
3. **Populate `project_context`**: Directly use the `overview` from `project.json` to fill the `project_context` section of the output `context-package.json`. Include description, technology_stack, architecture, and key_components.
4. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
5. Perform conflict detection with risk assessment
6. **Inject historical conflicts** from archive analysis into conflict_detection
@@ -125,11 +262,12 @@ Execute all 4 discovery tracks:
## Output Requirements
Complete context-package.json with:
- **metadata**: task_description, keywords, complexity, tech_stack, session_id
- **project_context**: architecture_patterns, coding_conventions, tech_stack (sourced from `project.json` overview)
- **project_context**: description, technology_stack, architecture, key_components (sourced from `project.json` overview)
- **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
- **dependencies**: {internal[], external[]} with dependency graph
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
- **conflict_detection**: {risk_level, risk_factors, affected_modules[], mitigation_strategy, historical_conflicts[]}
- **exploration_results**: {manifest_path, exploration_count, angles, explorations[], aggregated_insights} (from Track 0)
## Quality Validation
Before completion verify:
@@ -146,7 +284,7 @@ Report completion with statistics.
)
```
### Step 3: Output Verification
### Step 4: Output Verification
After agent completes, verify output:
@@ -156,6 +294,12 @@ const outputPath = `.workflow/${session_id}/.process/context-package.json`;
if (!file_exists(outputPath)) {
throw new Error("❌ Agent failed to generate context-package.json");
}
// Verify exploration_results included
const pkg = JSON.parse(Read(outputPath));
if (pkg.exploration_results?.exploration_count > 0) {
console.log(`✅ Exploration results aggregated: ${pkg.exploration_results.exploration_count} angles`);
}
```
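As an optional extension (a sketch, not part of the required flow), the Step 2 manifest can be cross-checked against the aggregated package:
```javascript
// Optional sketch: verify every explored angle reached the context package.
// Assumes `pkg` and `sessionFolder` from the steps above; the `angles` field
// follows the output requirements listed earlier.
const manifest = JSON.parse(Read(`${sessionFolder}/explorations-manifest.json`));
const aggregated = pkg.exploration_results?.angles || [];
const missing = manifest.angles_explored.filter(a => !aggregated.includes(a));
if (missing.length > 0) {
  console.warn(`⚠️ Angles missing from exploration_results: ${missing.join(', ')}`);
}
```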
## Parameter Reference
@@ -176,6 +320,7 @@ Refer to `context-search-agent.md` Phase 3.7 for complete `context-package.json`
- **dependencies**: Internal and external dependency graphs
- **brainstorm_artifacts**: Brainstorm documents with full content (if exists)
- **conflict_detection**: Risk assessment with mitigation strategies and historical conflicts
- **exploration_results**: Aggregated exploration insights (from parallel explore phase)
## Historical Archive Analysis

View File

@@ -1,10 +1,9 @@
---
name: task-generate-agent
description: Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation
argument-hint: "--session WFS-session-id [--cli-execute]"
argument-hint: "--session WFS-session-id"
examples:
- /workflow:tools:task-generate-agent --session WFS-auth
- /workflow:tools:task-generate-agent --session WFS-auth --cli-execute
---
# Generate Implementation Plan Command
@@ -15,8 +14,8 @@ Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.
## Core Philosophy
- **Planning Only**: Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) - does NOT implement code
- **Agent-Driven Document Generation**: Delegate plan generation to action-planning-agent
- **N+1 Parallel Planning**: Auto-detect multi-module projects, enable parallel planning (2+1 or 3+1 mode)
- **Progressive Loading**: Load context incrementally (Core → Selective → On-Demand) due to analysis.md file size
- **Two-Phase Flow**: Discovery (context gathering) → Output (planning document generation)
- **Memory-First**: Reuse loaded documents from conversation memory
- **Smart Selection**: Load synthesis_output OR guidance + relevant role analyses, NOT all role analyses
- **MCP-Enhanced**: Use MCP tools for advanced code analysis and research
@@ -26,25 +25,123 @@ Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.
```
Input Parsing:
├─ Parse flags: --session, --cli-execute
├─ Parse flags: --session
└─ Validation: session_id REQUIRED
Phase 1: Context Preparation (Command)
├─ Assemble session paths (metadata, context package, output dirs)
└─ Provide metadata (session_id, execution_mode, mcp_capabilities)
Phase 0: User Configuration (Interactive)
├─ Question 1: Supplementary materials/guidelines?
├─ Question 2: Execution method preference (Agent/CLI/Hybrid)
├─ Question 3: CLI tool preference (if CLI selected)
└─ Store: userConfig for agent prompt
Phase 2: Planning Document Generation (Agent)
Phase 1: Context Preparation & Module Detection (Command)
├─ Assemble session paths (metadata, context package, output dirs)
├─ Provide metadata (session_id, execution_mode, mcp_capabilities)
├─ Auto-detect modules from context-package + directory structure
└─ Decision:
├─ modules.length == 1 → Single Agent Mode (Phase 2A)
└─ modules.length >= 2 → Parallel Mode (Phase 2B + Phase 3)
Phase 2A: Single Agent Planning (Original Flow)
├─ Load context package (progressive loading strategy)
├─ Generate Task JSON Files (.task/IMPL-*.json)
├─ Create IMPL_PLAN.md
└─ Generate TODO_LIST.md
Phase 2B: N Parallel Planning (Multi-Module)
├─ Launch N action-planning-agents simultaneously (one per module)
├─ Each agent generates module-scoped tasks (IMPL-{prefix}{seq}.json)
├─ Task ID format: IMPL-A1, IMPL-A2... / IMPL-B1, IMPL-B2...
└─ Each module limited to ≤9 tasks
Phase 3: Integration (+1 Coordinator, Multi-Module Only)
├─ Collect all module task JSONs
├─ Resolve cross-module dependencies (CROSS::{module}::{pattern} → actual ID)
├─ Generate unified IMPL_PLAN.md (grouped by module)
└─ Generate TODO_LIST.md (hierarchical: module → tasks)
```
## Document Generation Lifecycle
### Phase 1: Context Preparation (Command Responsibility)
### Phase 0: User Configuration (Interactive)
**Command prepares session paths and metadata for planning document generation.**
**Purpose**: Collect user preferences before task generation to ensure generated tasks match execution expectations.
**User Questions**:
```javascript
AskUserQuestion({
questions: [
{
question: "Do you have supplementary materials or guidelines to include?",
header: "Materials",
multiSelect: false,
options: [
{ label: "No additional materials", description: "Use existing context only" },
{ label: "Provide file paths", description: "I'll specify paths to include" },
{ label: "Provide inline content", description: "I'll paste content directly" }
]
},
{
question: "Select execution method for generated tasks:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent (Recommended)", description: "Claude agent executes tasks directly" },
{ label: "Hybrid", description: "Agent orchestrates, calls CLI for complex steps" },
{ label: "CLI Only", description: "All execution via CLI tools (codex/gemini/qwen)" }
]
},
{
question: "If using CLI, which tool do you prefer?",
header: "CLI Tool",
multiSelect: false,
options: [
{ label: "Codex (Recommended)", description: "Best for implementation tasks" },
{ label: "Gemini", description: "Best for analysis and large context" },
{ label: "Qwen", description: "Alternative analysis tool" },
{ label: "Auto", description: "Let agent decide per-task" }
]
}
]
})
```
**Handle Materials Response**:
```javascript
if (userConfig.materials === "Provide file paths") {
// Follow-up question for file paths
const pathsResponse = AskUserQuestion({
questions: [{
question: "Enter file paths to include (comma-separated or one per line):",
header: "Paths",
multiSelect: false,
options: [
{ label: "Enter paths", description: "Provide paths in text input" }
]
}]
})
userConfig.supplementaryPaths = parseUserPaths(pathsResponse)
}
```
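`parseUserPaths` is referenced above but not defined in this command; a minimal sketch (assuming comma- or newline-separated input, and an assumed `response.text` shape for the AskUserQuestion result) could look like:
```javascript
// Hypothetical helper: normalize the user's free-form path input into an array.
// Accepts comma-separated or one-per-line paths; trims whitespace, drops empties.
function parseUserPaths(response) {
  const raw = response?.text || ''; // assumed result shape, not a documented API
  return raw
    .split(/[\n,]/)
    .map(p => p.trim())
    .filter(p => p.length > 0);
}
```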
**Build userConfig**:
```javascript
const userConfig = {
supplementaryMaterials: {
type: "none|paths|inline",
content: [...], // Parsed paths or inline content
},
executionMethod: "agent|hybrid|cli",
preferredCliTool: "codex|gemini|qwen|auto",
enableResume: true // Always enable resume for CLI executions
}
```
**Pass to Agent**: Include `userConfig` in agent prompt for Phase 2A/2B.
### Phase 1: Context Preparation & Module Detection (Command Responsibility)
**Command prepares session paths, metadata, and detects module structure.**
**Session Path Structure**:
```
@@ -53,8 +150,12 @@ Phase 2: Planning Document Generation (Agent)
├── .process/
│ └── context-package.json # Context package with artifact catalog
├── .task/ # Output: Task JSON files
├── IMPL_PLAN.md # Output: Implementation plan
└── TODO_LIST.md # Output: TODO list
│   ├── IMPL-A1.json          # Multi-module: prefixed by module
│ ├── IMPL-A2.json
│ ├── IMPL-B1.json
│ └── ...
├── IMPL_PLAN.md # Output: Implementation plan (grouped by module)
└── TODO_LIST.md # Output: TODO list (hierarchical)
```
**Command Preparation**:
@@ -65,10 +166,51 @@ Phase 2: Planning Document Generation (Agent)
2. **Provide Metadata** (simple values):
- `session_id`
- `execution_mode` (agent-mode | cli-execute-mode)
- `mcp_capabilities` (available MCP tools)
### Phase 2: Planning Document Generation (Agent Responsibility)
3. **Auto Module Detection** (determines single vs parallel mode):
```javascript
function autoDetectModules(contextPackage, projectRoot) {
// === Complexity Gate: Only parallelize for High complexity ===
const complexity = contextPackage.metadata?.complexity || 'Medium';
if (complexity !== 'High') {
// Force single agent mode for Low/Medium complexity
// This maximizes agent context reuse for related tasks
return [{ name: 'main', prefix: '', paths: ['.'] }];
}
// Priority 1: Explicit frontend/backend separation
if (exists('src/frontend') && exists('src/backend')) {
return [
{ name: 'frontend', prefix: 'A', paths: ['src/frontend'] },
{ name: 'backend', prefix: 'B', paths: ['src/backend'] }
];
}
// Priority 2: Monorepo structure
if (exists('packages/*') || exists('apps/*')) {
return detectMonorepoModules(); // Returns 2-3 main packages
}
// Priority 3: Context-package dependency clustering
const modules = clusterByDependencies(contextPackage.dependencies?.internal);
if (modules.length >= 2) return modules.slice(0, 3);
// Default: Single module (original flow)
return [{ name: 'main', prefix: '', paths: ['.'] }];
}
```
**Decision Logic**:
- `complexity !== 'High'` → Force Phase 2A (Single Agent, maximize context reuse)
- `modules.length == 1` → Phase 2A (Single Agent, original flow)
- `modules.length >= 2 && complexity == 'High'` → Phase 2B + Phase 3 (N+1 Parallel)
**Note**: CLI tool usage is now determined semantically by action-planning-agent based on user's task description, not by flags.
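Put together, the command-side dispatch is roughly the following (a sketch; `runSingleAgentPlanning` and `runParallelPlanning` are placeholders for the Phase 2A and Phase 2B/3 invocations below):
```javascript
// Sketch of the mode decision wired to autoDetectModules (helpers are placeholders).
const modules = autoDetectModules(contextPackage, projectRoot);
if (modules.length === 1) {
  runSingleAgentPlanning(modules[0]);   // Phase 2A: single agent, original flow
} else {
  runParallelPlanning(modules);         // Phase 2B: N agents, then Phase 3 coordinator
}
```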
### Phase 2A: Single Agent Planning (Original Flow)
**Condition**: `modules.length == 1` (no multi-module detected)
**Purpose**: Generate IMPL_PLAN.md, task JSONs, and TODO_LIST.md - planning documents only, NOT code implementation.
@@ -76,6 +218,7 @@ Phase 2: Planning Document Generation (Agent)
```javascript
Task(
subagent_type="action-planning-agent",
run_in_background=false,
description="Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
prompt=`
## TASK OBJECTIVE
@@ -97,15 +240,47 @@ Output:
## CONTEXT METADATA
Session ID: {session-id}
Planning Mode: {agent-mode | cli-execute-mode}
MCP Capabilities: {exa_code, exa_web, code_index}
## USER CONFIGURATION (from Phase 0)
Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
Supplementary Materials: ${userConfig.supplementaryMaterials}
## CLI TOOL SELECTION
Based on userConfig.executionMethod:
- "agent": No command field in implementation_approach steps
- "hybrid": Add command field to complex steps only (agent handles simple steps)
- "cli": Add command field to ALL implementation_approach steps
CLI Resume Support (MANDATORY for all CLI commands):
- Use --resume parameter to continue from previous task execution
- Read previous task's cliExecutionId from session state
- Format: ccw cli -p "[prompt]" --resume ${previousCliId} --tool ${tool} --mode write
## EXPLORATION CONTEXT (from context-package.exploration_results)
- Load exploration_results from context-package.json
- Use aggregated_insights.critical_files for focus_paths generation
- Apply aggregated_insights.constraints to acceptance criteria
- Reference aggregated_insights.all_patterns for implementation approach
- Use aggregated_insights.all_integration_points for precise modification locations
- Use conflict_indicators for risk-aware task sequencing
## CONFLICT RESOLUTION CONTEXT (if exists)
- Check context-package.conflict_detection.resolution_file for conflict-resolution.json path
- If exists, load .process/conflict-resolution.json:
- Apply planning_constraints as task constraints (for brainstorm-less workflows)
- Reference resolved_conflicts for implementation approach alignment
- Handle custom_conflicts with explicit task notes
## EXPECTED DELIVERABLES
1. Task JSON Files (.task/IMPL-*.json)
- 7-field schema (id, title, status, context_package_path, meta, context, flow_control)
- Quantified requirements with explicit counts
- Artifacts integration from context package
- Flow control with pre_analysis steps
- **focus_paths enhanced with exploration critical_files**
- Flow control with pre_analysis steps (include exploration integration_points analysis)
- **CLI Execution IDs and strategies (MANDATORY)**
2. Implementation Plan (IMPL_PLAN.md)
- Context analysis and artifact references
@@ -117,9 +292,30 @@ MCP Capabilities: {exa_code, exa_web, code_index}
- Links to task JSONs and summaries
- Matches task JSON hierarchy
## CLI EXECUTION ID REQUIREMENTS (MANDATORY)
Each task JSON MUST include:
- **cli_execution_id**: Unique ID for CLI execution (format: `{session_id}-{task_id}`)
- **cli_execution**: Strategy object based on depends_on:
- No deps → `{ "strategy": "new" }`
- 1 dep (single child) → `{ "strategy": "resume", "resume_from": "parent-cli-id" }`
- 1 dep (multiple children) → `{ "strategy": "fork", "resume_from": "parent-cli-id" }`
- N deps → `{ "strategy": "merge_fork", "merge_from": ["id1", "id2", ...] }`
**CLI Execution Strategy Rules**:
1. **new**: Task has no dependencies - starts fresh CLI conversation
2. **resume**: Task has 1 parent AND that parent has only this child - continues same conversation
3. **fork**: Task has 1 parent BUT parent has multiple children - creates new branch with parent context
4. **merge_fork**: Task has multiple parents - merges all parent contexts into new conversation
**Execution Command Patterns**:
- new: `ccw cli -p "[prompt]" --tool [tool] --mode write --id [cli_execution_id]`
- resume: `ccw cli -p "[prompt]" --resume [resume_from] --tool [tool] --mode write`
- fork: `ccw cli -p "[prompt]" --resume [resume_from] --id [cli_execution_id] --tool [tool] --mode write`
- merge_fork: `ccw cli -p "[prompt]" --resume [merge_from.join(',')] --id [cli_execution_id] --tool [tool] --mode write`
## QUALITY STANDARDS
Hard Constraints:
- Task count <= 12 (hard limit - request re-scope if exceeded)
- Task count <= 18 (hard limit - request re-scope if exceeded)
- All requirements quantified (explicit counts and enumerated lists)
- Acceptance criteria measurable (include verification commands)
- Artifact references mapped from context package
@@ -135,4 +331,160 @@ Hard Constraints:
)
```
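The strategy rules above can be expressed as a small function (a sketch under the stated rules; `childCount` and `cliIdOf` are hypothetical helpers that would come from the task dependency graph):
```javascript
// Sketch: derive a task's cli_execution object from its depends_on list.
// childCount(id) (hypothetical) returns how many tasks depend on `id`;
// cliIdOf(id) (hypothetical) maps a task ID to its cli_execution_id,
// e.g. `${session_id}-${taskId}`.
function deriveCliExecution(task, childCount) {
  const deps = task.depends_on || [];
  if (deps.length === 0) return { strategy: "new" };
  if (deps.length === 1) {
    const parent = deps[0];
    return childCount(parent) === 1
      ? { strategy: "resume", resume_from: cliIdOf(parent) }
      : { strategy: "fork", resume_from: cliIdOf(parent) };
  }
  return { strategy: "merge_fork", merge_from: deps.map(cliIdOf) };
}
```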
### Phase 2B: N Parallel Planning (Multi-Module)
**Condition**: `modules.length >= 2` (multi-module detected)
**Purpose**: Launch N action-planning-agents simultaneously, one per module, for parallel task JSON generation.
**Note**: Phase 2B agents generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md are generated by Phase 3 Coordinator.
**Parallel Agent Invocation**:
```javascript
// Launch N agents in parallel (one per module)
const planningTasks = modules.map(module =>
Task(
subagent_type="action-planning-agent",
run_in_background=false,
description=`Generate ${module.name} module task JSONs`,
prompt=`
## TASK OBJECTIVE
Generate task JSON files for ${module.name} module within workflow session
IMPORTANT: This is PLANNING ONLY - generate task JSONs; do NOT implement code.
IMPORTANT: Generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md are generated by the Phase 3 Coordinator.
CRITICAL: Follow progressive loading strategy in agent specification
## MODULE SCOPE
- Module: ${module.name} (${module.type})
- Focus Paths: ${module.paths.join(', ')}
- Task ID Prefix: IMPL-${module.prefix}
- Task Limit: ≤9 tasks
- Other Modules: ${modules.filter(m => m.name !== module.name).map(m => m.name).join(', ')}
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{session-id}/workflow-session.json
- Context Package: .workflow/active/{session-id}/.process/context-package.json
Output:
- Task Dir: .workflow/active/{session-id}/.task/
## CONTEXT METADATA
Session ID: {session-id}
MCP Capabilities: {exa_code, exa_web, code_index}
## CROSS-MODULE DEPENDENCIES
- Use placeholder: depends_on: ["CROSS::{module}::{pattern}"]
- Example: depends_on: ["CROSS::B::api-endpoint"]
- Phase 3 Coordinator resolves to actual task IDs
## EXPECTED DELIVERABLES
Task JSON Files (.task/IMPL-${module.prefix}*.json):
- 6-field schema per agent specification
- Task ID format: IMPL-${module.prefix}1, IMPL-${module.prefix}2, ...
- Focus ONLY on ${module.name} module scope
## SUCCESS CRITERIA
- Task JSONs saved to .task/ with IMPL-${module.prefix}* naming
- Cross-module dependencies use CROSS:: placeholder format
- Return task count and brief summary
`
)
);
// Execute all in parallel
await Promise.all(planningTasks);
```
**Output Structure** (direct to .task/):
```
.task/
├── IMPL-A1.json # Module A (e.g., frontend)
├── IMPL-A2.json
├── IMPL-B1.json # Module B (e.g., backend)
├── IMPL-B2.json
└── IMPL-C1.json # Module C (e.g., shared)
```
**Task ID Naming**:
- Format: `IMPL-{prefix}{seq}.json`
- Prefix: A, B, C... (assigned by detection order)
- Sequence: 1, 2, 3... (per-module increment)
### Phase 3: Integration (+1 Coordinator Agent, Multi-Module Only)
**Condition**: Only executed when `modules.length >= 2`
**Purpose**: Collect all module tasks, resolve cross-module dependencies, generate unified IMPL_PLAN.md and TODO_LIST.md documents.
**Coordinator Agent Invocation**:
```javascript
// Wait for all Phase 2B agents to complete
const moduleResults = await Promise.all(planningTasks);
// Launch +1 Coordinator Agent
Task(
subagent_type="action-planning-agent",
run_in_background=false,
description="Integrate module tasks and generate unified documents",
prompt=`
## TASK OBJECTIVE
Integrate all module task JSONs, resolve cross-module dependencies, and generate unified IMPL_PLAN.md and TODO_LIST.md
IMPORTANT: This is INTEGRATION ONLY - consolidate existing task JSONs; do NOT create new tasks.
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{session-id}/workflow-session.json
- Context Package: .workflow/active/{session-id}/.process/context-package.json
- Task JSONs: .workflow/active/{session-id}/.task/IMPL-*.json (from Phase 2B)
Output:
- Updated Task JSONs: .workflow/active/{session-id}/.task/IMPL-*.json (resolved dependencies)
- IMPL_PLAN: .workflow/active/{session-id}/IMPL_PLAN.md
- TODO_LIST: .workflow/active/{session-id}/TODO_LIST.md
## CONTEXT METADATA
Session ID: {session-id}
Modules: ${modules.map(m => m.name + '(' + m.prefix + ')').join(', ')}
Module Count: ${modules.length}
## INTEGRATION STEPS
1. Collect all .task/IMPL-*.json, group by module prefix
2. Resolve CROSS:: dependencies → actual task IDs, update task JSONs
3. Generate IMPL_PLAN.md (multi-module format per agent specification)
4. Generate TODO_LIST.md (hierarchical format per agent specification)
## CROSS-MODULE DEPENDENCY RESOLUTION
- Pattern: CROSS::{module}::{pattern} → IMPL-{module}* matching title/context
- Example: CROSS::B::api-endpoint → IMPL-B1 (if B1 title contains "api-endpoint")
- Log unresolved as warnings
## EXPECTED DELIVERABLES
1. Updated Task JSONs with resolved dependency IDs
2. IMPL_PLAN.md - multi-module format with cross-dependency section
3. TODO_LIST.md - hierarchical by module with cross-dependency section
## SUCCESS CRITERIA
- No CROSS:: placeholders remaining in task JSONs
- IMPL_PLAN.md and TODO_LIST.md generated with multi-module structure
- Return: task count, per-module breakdown, resolved dependency count
`
)
```
**Dependency Resolution Algorithm**:
```javascript
function resolveCrossModuleDependency(placeholder, allTasks) {
const [, targetModule, pattern] = placeholder.match(/CROSS::(\w+)::(.+)/);
const candidates = allTasks.filter(t =>
t.id.startsWith(`IMPL-${targetModule}`) &&
(t.title.toLowerCase().includes(pattern.toLowerCase()) ||
t.context?.description?.toLowerCase().includes(pattern.toLowerCase()))
);
return candidates.length > 0
? candidates.sort((a, b) => a.id.localeCompare(b.id))[0].id
: placeholder; // Keep for manual resolution
}
```
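Applied across all loaded tasks, resolution is a straightforward map over each `depends_on` list (sketch):
```javascript
// Sketch: rewrite CROSS:: placeholders in every task's depends_on in place.
for (const task of allTasks) {
  task.depends_on = (task.depends_on || []).map(dep =>
    dep.startsWith('CROSS::') ? resolveCrossModuleDependency(dep, allTasks) : dep
  );
}
// Any entries still starting with CROSS:: were unresolved and should be logged as warnings.
```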

View File

@@ -1,24 +1,23 @@
---
name: task-generate-tdd
description: Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation
argument-hint: "--session WFS-session-id [--cli-execute]"
argument-hint: "--session WFS-session-id"
examples:
- /workflow:tools:task-generate-tdd --session WFS-auth
- /workflow:tools:task-generate-tdd --session WFS-auth --cli-execute
---
# Autonomous TDD Task Generation Command
## Overview
Autonomous TDD task JSON and IMPL_PLAN.md generation using action-planning-agent with two-phase execution: discovery and document generation. Supports both agent-driven execution (default) and CLI tool execution modes. Generates complete Red-Green-Refactor cycles contained within each task.
Autonomous TDD task JSON and IMPL_PLAN.md generation using action-planning-agent with two-phase execution: discovery and document generation. Generates complete Red-Green-Refactor cycles contained within each task.
## Core Philosophy
- **Agent-Driven**: Delegate execution to action-planning-agent for autonomous operation
- **Two-Phase Flow**: Discovery (context gathering) → Output (document generation)
- **Memory-First**: Reuse loaded documents from conversation memory
- **MCP-Enhanced**: Use MCP tools for advanced code analysis and research
- **Pre-Selected Templates**: Command selects correct TDD template based on `--cli-execute` flag **before** invoking agent
- **Agent Simplicity**: Agent receives pre-selected template and focuses only on content generation
- **Semantic CLI Selection**: CLI tool usage determined from user's task description, not flags
- **Agent Simplicity**: Agent generates content with semantic CLI detection
- **Path Clarity**: All `focus_paths` prefer absolute paths (e.g., `D:\\project\\src\\module`), or clear relative paths from project root (e.g., `./src/module`)
- **TDD-First**: Every feature starts with a failing test (Red phase)
- **Feature-Complete Tasks**: Each task contains complete Red-Green-Refactor cycle
@@ -43,10 +42,10 @@ Autonomous TDD task JSON and IMPL_PLAN.md generation using action-planning-agent
- Different tech stacks or domains within feature
### Task Limits
- **Maximum 10 tasks** (hard limit for TDD workflows)
- **Maximum 18 tasks** (hard limit for TDD workflows)
- **Feature-based**: Complete functional units with internal TDD cycles
- **Hierarchy**: Flat (≤5 simple features) | Two-level (6-10 for complex features with sub-features)
- **Re-scope**: If >10 tasks needed, break project into multiple TDD workflow sessions
- **Re-scope**: If >18 tasks needed, break project into multiple TDD workflow sessions
### TDD Cycle Mapping
- **Old approach**: 1 feature = 3 tasks (TEST-N.M, IMPL-N.M, REFACTOR-N.M)
@@ -57,7 +56,7 @@ Autonomous TDD task JSON and IMPL_PLAN.md generation using action-planning-agent
```
Input Parsing:
├─ Parse flags: --session, --cli-execute
├─ Parse flags: --session
└─ Validation: session_id REQUIRED
Phase 1: Discovery & Context Loading (Memory-First)
@@ -69,7 +68,7 @@ Phase 1: Discovery & Context Loading (Memory-First)
└─ Optional: MCP external research
Phase 2: Agent Execution (Document Generation)
├─ Pre-agent template selection (agent-mode OR cli-execute-mode)
├─ Pre-agent template selection (semantic CLI detection)
├─ Invoke action-planning-agent
├─ Generate TDD Task JSON Files (.task/IMPL-*.json)
│ └─ Each task: complete Red-Green-Refactor cycle internally
@@ -86,11 +85,8 @@ Phase 2: Agent Execution (Document Generation)
```javascript
{
"session_id": "WFS-[session-id]",
"execution_mode": "agent-mode" | "cli-execute-mode", // Determined by flag
"task_json_template_path": "~/.claude/workflows/cli-templates/prompts/workflow/task-json-agent-mode.txt"
| "~/.claude/workflows/cli-templates/prompts/workflow/task-json-cli-mode.txt",
// Path selected by command based on --cli-execute flag, agent reads it
"workflow_type": "tdd",
// Note: CLI tool usage is determined semantically by action-planning-agent based on user's task description
"session_metadata": {
// If in memory: use cached content
// Else: Load from .workflow/active/{session-id}/workflow-session.json
@@ -117,7 +113,7 @@ Phase 2: Agent Execution (Document Generation)
// Existing test patterns and coverage analysis
},
"mcp_capabilities": {
"code_index": true,
"codex_lens": true,
"exa_code": true,
"exa_web": true
}
@@ -156,9 +152,14 @@ Phase 2: Agent Execution (Document Generation)
roleAnalysisPaths.forEach(path => Read(path));
```
5. **Load Conflict Resolution** (from context-package.json, if exists)
5. **Load Conflict Resolution** (from conflict-resolution.json, if exists)
```javascript
if (contextPackage.brainstorm_artifacts.conflict_resolution?.exists) {
// Check for new conflict-resolution.json format
if (contextPackage.conflict_detection?.resolution_file) {
Read(contextPackage.conflict_detection.resolution_file) // .process/conflict-resolution.json
}
// Fallback: legacy brainstorm_artifacts path
else if (contextPackage.brainstorm_artifacts?.conflict_resolution?.exists) {
Read(contextPackage.brainstorm_artifacts.conflict_resolution.path)
}
```
@@ -193,14 +194,14 @@ const templatePath = hasCliExecuteFlag
```javascript
Task(
subagent_type="action-planning-agent",
run_in_background=false,
description="Generate TDD task JSON and implementation plan",
prompt=`
## Execution Context
**Session ID**: WFS-{session-id}
**Workflow Type**: TDD
**Execution Mode**: {agent-mode | cli-execute-mode}
**Task JSON Template Path**: {template_path}
**Note**: CLI tool usage is determined semantically from user's task description
## Phase 1: Discovery Results (Provided Context)
@@ -228,7 +229,7 @@ If conflict_risk was medium/high, modifications have been applied to:
- **guidance-specification.md**: Design decisions updated to resolve conflicts
- **Role analyses (*.md)**: Recommendations adjusted for compatibility
- **context-package.json**: Marked as "resolved" with conflict IDs
- NO separate CONFLICT_RESOLUTION.md file (conflicts resolved in-place)
- Conflict resolution results stored in conflict-resolution.json
### MCP Analysis Results (Optional)
**Code Structure**: {mcp_code_index_results}
@@ -254,7 +255,7 @@ Refer to: @.claude/agents/action-planning-agent.md for:
- Each task executes Red-Green-Refactor phases sequentially
- Task count = Feature count (typically 5 features = 5 tasks)
- Subtasks only when complexity >2500 lines or >6 files per cycle
- **Maximum 10 tasks** (hard limit for TDD workflows)
- **Maximum 18 tasks** (hard limit for TDD workflows)
#### TDD Cycle Mapping
- **Simple features**: IMPL-N with internal Red-Green-Refactor phases
@@ -265,16 +266,15 @@ Refer to: @.claude/agents/action-planning-agent.md for:
##### 1. TDD Task JSON Files (.task/IMPL-*.json)
- **Location**: `.workflow/active/{session-id}/.task/`
- **Template**: Read from `{template_path}` (pre-selected by command based on `--cli-execute` flag)
- **Schema**: 5-field structure with TDD-specific metadata
- `meta.tdd_workflow`: true (REQUIRED)
- `meta.max_iterations`: 3 (Green phase test-fix cycle limit)
- `meta.use_codex`: false (manual fixes by default)
- `context.tdd_cycles`: Array with quantified test cases and coverage
- `flow_control.implementation_approach`: Exactly 3 steps with `tdd_phase` field
1. Red Phase (`tdd_phase: "red"`): Write failing tests
2. Green Phase (`tdd_phase: "green"`): Implement to pass tests
3. Refactor Phase (`tdd_phase: "refactor"`): Improve code quality
- CLI tool usage determined semantically (add `command` field when user requests CLI execution)
- **Details**: See action-planning-agent.md § TDD Task JSON Generation
##### 2. IMPL_PLAN.md (TDD Variant)
@@ -324,7 +324,7 @@ Refer to: @.claude/agents/action-planning-agent.md for:
**Quality Gates** (Full checklist in action-planning-agent.md):
- ✓ Quantification requirements enforced (explicit counts, measurable acceptance, exact targets)
- ✓ Task count ≤10 (hard limit)
- ✓ Task count ≤18 (hard limit)
- ✓ Each task has meta.tdd_workflow: true
- ✓ Each task has exactly 3 implementation steps with tdd_phase field
- ✓ Green phase includes test-fix cycle logic
@@ -339,7 +339,7 @@ Generate all three documents and report completion status:
- TDD cycles configured: N cycles with quantified test cases
- Artifacts integrated: synthesis-spec, guidance-specification, N role analyses
- Test context integrated: existing patterns and coverage
- MCP enhancements: code-index, exa-research
- MCP enhancements: CodexLens, exa-research
- Session ready for TDD execution: /workflow:execute
`
)
@@ -379,10 +379,12 @@ const agentContext = {
.flatMap(role => role.files)
.map(file => Read(file.path)),
// Load conflict resolution if exists (from context package)
conflict_resolution: brainstorm_artifacts.conflict_resolution?.exists
? Read(brainstorm_artifacts.conflict_resolution.path)
: null,
// Load conflict resolution if exists (prefer new JSON format)
conflict_resolution: context_package.conflict_detection?.resolution_file
? Read(context_package.conflict_detection.resolution_file) // .process/conflict-resolution.json
: (brainstorm_artifacts?.conflict_resolution?.exists
? Read(brainstorm_artifacts.conflict_resolution.path)
: null),
// Optional MCP enhancements
mcp_analysis: executeMcpDiscovery()
@@ -414,7 +416,7 @@ This section provides quick reference for TDD task JSON structure. For complete
│ ├── IMPL-3.2.json # Complex feature subtask (if needed)
│ └── ...
└── .process/
├── CONFLICT_RESOLUTION.md # Conflict resolution strategies (if conflict_risk ≥ medium)
├── conflict-resolution.json # Conflict resolution results (if conflict_risk ≥ medium)
├── test-context-package.json # Test coverage analysis
├── context-package.json # Input from context-gather
├── context_package_path # Path to smart context package
@@ -475,16 +477,14 @@ This section provides quick reference for TDD task JSON structure. For complete
**Basic Usage**:
```bash
# Agent mode (default, autonomous execution)
# Standard execution
/workflow:tools:task-generate-tdd --session WFS-auth
# CLI tool mode (use Gemini/Qwen for generation)
/workflow:tools:task-generate-tdd --session WFS-auth --cli-execute
# With semantic CLI request (include in task description)
# e.g., "Generate TDD tasks for auth module, use Codex for implementation"
```
**Execution Modes**:
- **Agent mode** (default): Uses `action-planning-agent` with agent-mode task template
- **CLI mode** (`--cli-execute`): Uses Gemini/Qwen with cli-mode task template
**CLI Tool Selection**: Determined semantically from user's task description. Include "use Codex/Gemini/Qwen" in your request for CLI execution.
**Output**:
- TDD task JSON files in `.task/` directory (IMPL-N.json format)
@@ -513,7 +513,7 @@ IMPL (Green phase) tasks include automatic test-fix cycle:
3. **Success Path**: Tests pass → Complete task
4. **Failure Path**: Tests fail → Enter iterative fix cycle:
- **Gemini Diagnosis**: Analyze failures with bug-fix template
- **Fix Application**: Manual (default) or Codex (if meta.use_codex=true)
- **Fix Application**: Agent (default) or CLI (if `command` field present)
- **Retest**: Verify fix resolves failures
- **Repeat**: Up to max_iterations (default: 3)
5. **Safety Net**: Auto-revert all changes if max iterations reached
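The cycle above corresponds roughly to the following loop (an illustrative sketch; `runTests`, `diagnoseWithGemini`, `applyFix`, and `revertChanges` are hypothetical helpers, not real APIs):
```javascript
// Illustrative test-fix cycle with iteration cap and auto-revert safety net.
async function testFixCycle(task, maxIterations = 3) {
  for (let i = 1; i <= maxIterations; i++) {
    const result = await runTests(task);
    if (result.passed) return { status: 'complete', iterations: i };
    const diagnosis = await diagnoseWithGemini(result.failures); // bug-fix template
    await applyFix(task, diagnosis); // agent by default; CLI if `command` field present
  }
  await revertChanges(task); // safety net: max iterations reached
  return { status: 'reverted', iterations: maxIterations };
}
```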
@@ -522,5 +522,5 @@ IMPL (Green phase) tasks include automatic test-fix cycle:
## Configuration Options
- **meta.max_iterations**: Number of fix attempts (default: 3 for TDD, 5 for test-gen)
- **meta.use_codex**: Enable Codex automated fixes (default: false, manual)
- **CLI tool usage**: Determined semantically from user's task description via `command` field in implementation_approach

View File

@@ -76,6 +76,7 @@ Phase 3: Output Validation (Command)
```javascript
Task(
subagent_type="cli-execution-agent",
run_in_background=false,
description="Analyze test coverage gaps and generate test strategy",
prompt=`
## TASK OBJECTIVE
@@ -89,7 +90,7 @@ Template: ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.t
## EXECUTION STEPS
1. Execute Gemini analysis:
cd .workflow/active/{test_session_id}/.process && gemini -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --approval-mode yolo
ccw cli -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --tool gemini --mode write --cd .workflow/active/{test_session_id}/.process
2. Generate TEST_ANALYSIS_RESULTS.md:
Synthesize gemini-test-analysis.md into standardized format for task generation

View File

@@ -86,6 +86,7 @@ if (file_exists(testContextPath)) {
```javascript
Task(
subagent_type="test-context-search-agent",
run_in_background=false,
description="Gather test coverage context",
prompt=`
You are executing as test-context-search-agent (.claude/agents/test-context-search-agent.md).

View File

@@ -1,11 +1,9 @@
---
name: test-task-generate
description: Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests
argument-hint: "[--use-codex] [--cli-execute] --session WFS-test-session-id"
argument-hint: "--session WFS-test-session-id"
examples:
- /workflow:tools:test-task-generate --session WFS-test-auth
- /workflow:tools:test-task-generate --use-codex --session WFS-test-auth
- /workflow:tools:test-task-generate --cli-execute --session WFS-test-auth
---
# Generate Test Planning Documents Command
@@ -26,17 +24,17 @@ Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) u
### Test Generation (IMPL-001)
- **Agent Mode** (default): @code-developer generates tests within agent context
- **CLI Execute Mode** (`--cli-execute`): Use Codex CLI for autonomous test generation
- **CLI Mode**: Use CLI tools when `command` field present in implementation_approach (determined semantically)
### Test Execution & Fix (IMPL-002+)
- **Manual Mode** (default): Gemini diagnosis → user applies fixes
- **Codex Mode** (`--use-codex`): Gemini diagnosis → Codex applies fixes with resume mechanism
- **Agent Mode** (default): Gemini diagnosis → agent applies fixes
- **CLI Mode**: Gemini diagnosis → CLI applies fixes (when `command` field present in implementation_approach)
## Execution Process
```
Input Parsing:
├─ Parse flags: --session, --use-codex, --cli-execute
├─ Parse flags: --session
└─ Validation: session_id REQUIRED
Phase 1: Context Preparation (Command)
@@ -44,7 +42,7 @@ Phase 1: Context Preparation (Command)
│ ├─ session_metadata_path
│ ├─ test_analysis_results_path (REQUIRED)
│ └─ test_context_package_path
└─ Provide metadata (session_id, execution_mode, use_codex, source_session_id)
└─ Provide metadata (session_id, source_session_id)
Phase 2: Test Document Generation (Agent)
├─ Load TEST_ANALYSIS_RESULTS.md as primary requirements source
@@ -83,11 +81,11 @@ Phase 2: Test Document Generation (Agent)
2. **Provide Metadata** (simple values):
- `session_id`
- `execution_mode` (agent-mode | cli-execute-mode)
- `use_codex` flag (true | false)
- `source_session_id` (if exists)
- `mcp_capabilities` (available MCP tools)
**Note**: CLI tool usage is now determined semantically from user's task description, not by flags.
### Phase 2: Test Document Generation (Agent Responsibility)
**Purpose**: Generate test-specific IMPL_PLAN.md, task JSONs, and TODO_LIST.md - planning documents only, NOT test execution.
@@ -96,6 +94,7 @@ Phase 2: Test Document Generation (Agent)
```javascript
Task(
subagent_type="action-planning-agent",
run_in_background=false,
description="Generate test planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
prompt=`
## TASK OBJECTIVE
@@ -134,11 +133,14 @@ Output:
## CONTEXT METADATA
Session ID: {test-session-id}
Workflow Type: test_session
Planning Mode: {agent-mode | cli-execute-mode}
Use Codex: {true | false}
Source Session: {source-session-id} (if exists)
MCP Capabilities: {exa_code, exa_web, code_index}
## CLI TOOL SELECTION
Determine CLI tool usage per-step based on user's task description:
- If user specifies "use Codex/Gemini/Qwen for X" → Add command field to relevant steps
- Default: Agent execution (no command field) unless user explicitly requests CLI
## TEST-SPECIFIC REQUIREMENTS SUMMARY
(Detailed specifications in your agent definition)
@@ -149,25 +151,26 @@ MCP Capabilities: {exa_code, exa_web, code_index}
Task Configuration:
IMPL-001 (Test Generation):
- meta.type: "test-gen"
- meta.agent: "@code-developer" (agent-mode) OR CLI execution (cli-execute-mode)
- meta.agent: "@code-developer"
- meta.test_framework: Specify existing framework (e.g., "jest", "vitest", "pytest")
- flow_control: Test generation strategy from TEST_ANALYSIS_RESULTS.md
- CLI execution: Add `command` field when user requests (determined semantically)
IMPL-002+ (Test Execution & Fix):
- meta.type: "test-fix"
- meta.agent: "@test-fix-agent"
- meta.use_codex: true/false (based on flag)
- flow_control: Test-fix cycle with iteration limits and diagnosis configuration
- CLI execution: Add `command` field when user requests (determined semantically)
### Test-Fix Cycle Specification (IMPL-002+)
Required flow_control fields:
- max_iterations: 5
- diagnosis_tool: "gemini"
- diagnosis_template: "~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt"
- fix_mode: "manual" OR "codex" (based on use_codex flag)
- cycle_pattern: "test → gemini_diagnose → fix → retest"
- exit_conditions: ["all_tests_pass", "max_iterations_reached"]
- auto_revert_on_failure: true
- CLI fix: Add `command` field when user specifies CLI tool usage
### Automation Framework Configuration
Select automation tools based on test requirements from TEST_ANALYSIS_RESULTS.md:
@@ -191,8 +194,9 @@ PRIMARY requirements source - extract and map to task JSONs:
## EXPECTED DELIVERABLES
1. Test Task JSON Files (.task/IMPL-*.json)
- 6-field schema with quantified requirements from TEST_ANALYSIS_RESULTS.md
- Test-specific metadata: type, agent, use_codex, test_framework, coverage_target
- Test-specific metadata: type, agent, test_framework, coverage_target
- flow_control includes: reusable_test_tools, test_commands (from project config)
- CLI execution via `command` field when user requests (determined semantically)
- Artifact references from test-context-package.json
- Absolute paths in context.files_to_test
@@ -209,13 +213,13 @@ PRIMARY requirements source - extract and map to task JSONs:
## QUALITY STANDARDS
Hard Constraints:
- Task count: minimum 2, maximum 12
- Task count: minimum 2, maximum 18
- All requirements quantified from TEST_ANALYSIS_RESULTS.md
- Test framework matches existing project framework
- flow_control includes reusable_test_tools and test_commands from project
- use_codex flag correctly set in IMPL-002+ tasks
- Absolute paths for all focus_paths
- Acceptance criteria include verification commands
- CLI `command` field added only when user explicitly requests CLI tool usage
## SUCCESS CRITERIA
- All test planning documents generated successfully
@@ -233,21 +237,18 @@ Hard Constraints:
### Usage Examples
```bash
# Agent mode (default)
# Standard execution
/workflow:tools:test-task-generate --session WFS-test-auth
# With automated Codex fixes
/workflow:tools:test-task-generate --use-codex --session WFS-test-auth
# CLI execution mode for test generation
/workflow:tools:test-task-generate --cli-execute --session WFS-test-auth
# With semantic CLI request (include in task description)
# e.g., "Generate tests, use Codex for implementation and fixes"
```
### Flag Behavior
- **No flags**: `meta.use_codex=false` (manual fixes), agent-mode test generation
- **--use-codex**: `meta.use_codex=true` (Codex automated fixes in IMPL-002+)
- **--cli-execute**: CLI tool execution mode for IMPL-001 test generation
- **Both flags**: CLI generation + automated Codex fixes
### CLI Tool Selection
CLI tool usage is determined semantically from user's task description:
- Include "use Codex" for automated fixes
- Include "use Gemini" for analysis
- Default: Agent execution (no `command` field)
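A minimal sketch of how such semantic detection might work (keyword matching is an assumption here; the agent's actual heuristic may be richer):
```javascript
// Hypothetical keyword-based CLI tool detection from the task description.
function detectCliTool(taskDescription) {
  const text = taskDescription.toLowerCase();
  if (/\buse codex\b/.test(text)) return 'codex';
  if (/\buse gemini\b/.test(text)) return 'gemini';
  if (/\buse qwen\b/.test(text)) return 'qwen';
  return null; // default: agent execution, no `command` field added
}
```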
### Output
- Test task JSON files in `.task/` directory (minimum 2)

View File

@@ -320,7 +320,7 @@ Read({base_path}/prototypes/{target}-style-{style_id}-layout-{layout_id}.html)
### Step 1: Run Preview Generation Script
```bash
bash(~/.claude/scripts/ui-generate-preview.sh "{base_path}/prototypes")
bash(ccw tool exec ui_generate_preview '{"prototypesDir":"{base_path}/prototypes"}')
```
**Script generates**:
@@ -432,7 +432,7 @@ bash(test -f {base_path}/prototypes/compare.html && echo "exists")
bash(mkdir -p {base_path}/prototypes)
# Run preview script
bash(~/.claude/scripts/ui-generate-preview.sh "{base_path}/prototypes")
bash(ccw tool exec ui_generate_preview '{"prototypesDir":"{base_path}/prototypes"}')
```
## Output Structure
@@ -467,7 +467,7 @@ ERROR: Agent assembly failed
→ Check inputs exist, validate JSON structure
ERROR: Script permission denied
chmod +x ~/.claude/scripts/ui-generate-preview.sh
Verify ccw tool is available: ccw tool list
```
### Recovery Strategies

View File

@@ -106,7 +106,7 @@ echo " Output: $base_path"
# 3. Discover files using script
discovery_file="${intermediates_dir}/discovered-files.json"
~/.claude/scripts/discover-design-files.sh "$source" "$discovery_file"
ccw tool exec discover_design_files '{"sourceDir":"'"$source"'","outputPath":"'"$discovery_file"'"}'
echo " Output: $discovery_file"
```
@@ -161,6 +161,7 @@ echo "[Phase 1] Starting parallel agent analysis (3 agents)"
```javascript
Task(subagent_type="ui-design-agent",
run_in_background=false,
prompt="[STYLE_TOKENS_EXTRACTION]
Extract visual design tokens from code files using code import extraction pattern.
@@ -180,14 +181,14 @@ Task(subagent_type="ui-design-agent",
- Pattern: rg → Extract values → Compare → If different → Read full context with comments → Record conflict
- Alternative (if many files): Execute CLI analysis for comprehensive report:
\`\`\`bash
cd ${source} && gemini -p \"
ccw cli -p \"
PURPOSE: Detect color token conflicts across all CSS/SCSS/JS files
TASK: • Scan all files for color definitions • Identify conflicting values • Extract semantic comments
MODE: analysis
CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts
EXPECTED: JSON report listing conflicts with file:line, values, semantic context
RULES: Focus on core tokens | Report ALL variants | analysis=READ-ONLY
\"
\" --tool gemini --mode analysis --cd ${source}
\`\`\`
**Step 1: Load file list**
@@ -276,6 +277,7 @@ Task(subagent_type="ui-design-agent",
```javascript
Task(subagent_type="ui-design-agent",
run_in_background=false,
prompt="[ANIMATION_TOKEN_GENERATION_TASK]
Extract animation tokens from code files using code import extraction pattern.
@@ -295,14 +297,14 @@ Task(subagent_type="ui-design-agent",
- Pattern: rg → Identify animation types → Map framework usage → Prioritize extraction targets
- Alternative (if complex framework mix): Execute CLI analysis for comprehensive report:
\`\`\`bash
cd ${source} && gemini -p \"
ccw cli -p \"
PURPOSE: Detect animation frameworks and patterns
TASK: • Identify frameworks • Map animation patterns • Categorize by complexity
MODE: analysis
CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts
EXPECTED: JSON report listing frameworks, animation types, file locations
RULES: Focus on framework consistency | Map all animations | analysis=READ-ONLY
\"
\" --tool gemini --mode analysis --cd ${source}
\`\`\`
**Step 1: Load file list**
@@ -355,6 +357,7 @@ Task(subagent_type="ui-design-agent",
```javascript
Task(subagent_type="ui-design-agent",
run_in_background=false,
prompt="[LAYOUT_TEMPLATE_GENERATION_TASK]
Extract layout patterns from code files using code import extraction pattern.
@@ -374,14 +377,14 @@ Task(subagent_type="ui-design-agent",
- Pattern: rg → Count occurrences → Classify by frequency → Prioritize universal components
- Alternative (if large codebase): Execute CLI analysis for comprehensive categorization:
\`\`\`bash
cd ${source} && gemini -p \"
ccw cli -p \"
PURPOSE: Classify components as universal vs specialized
TASK: • Identify UI components • Classify reusability • Map layout systems
MODE: analysis
CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts @**/*.html
EXPECTED: JSON report categorizing components, layout patterns, naming conventions
RULES: Focus on component reusability | Identify layout systems | analysis=READ-ONLY
\"
\" --tool gemini --mode analysis --cd ${source}
\`\`\`
**Step 1: Load file list**

View File

@@ -1,35 +0,0 @@
#!/bin/bash
# Classify folders by type for documentation generation
# Usage: get_modules_by_depth.sh | classify-folders.sh
# Output: folder_path|folder_type|code:N|dirs:N
while IFS='|' read -r depth_info path_info files_info types_info claude_info; do
# Extract folder path from format "path:./src/modules"
folder_path=$(echo "$path_info" | cut -d':' -f2-)
# Skip if path extraction failed
[[ -z "$folder_path" || ! -d "$folder_path" ]] && continue
# Count code files (maxdepth 1)
code_files=$(find "$folder_path" -maxdepth 1 -type f \
\( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" \
-o -name "*.py" -o -name "*.go" -o -name "*.java" -o -name "*.rs" \
-o -name "*.c" -o -name "*.cpp" -o -name "*.cs" \) \
2>/dev/null | wc -l)
# Count subdirectories
subfolders=$(find "$folder_path" -maxdepth 1 -type d \
-not -path "$folder_path" 2>/dev/null | wc -l)
# Determine folder type
if [[ $code_files -gt 0 ]]; then
folder_type="code" # API.md + README.md
elif [[ $subfolders -gt 0 ]]; then
folder_type="navigation" # README.md only
else
folder_type="skip" # Empty or no relevant content
fi
# Output classification result
echo "${folder_path}|${folder_type}|code:${code_files}|dirs:${subfolders}"
done

View File

@@ -1,225 +0,0 @@
#!/bin/bash
# Convert design-tokens.json to tokens.css with Google Fonts import and global font rules
# Usage: cat design-tokens.json | ./convert_tokens_to_css.sh > tokens.css
# Or: ./convert_tokens_to_css.sh < design-tokens.json > tokens.css
# Read JSON from stdin
json_input=$(cat)
# Extract metadata for header comment
style_name=$(echo "$json_input" | jq -r '.meta.name // "Unknown Style"' 2>/dev/null || echo "Design Tokens")
# Generate header
cat <<EOF
/* ========================================
Design Tokens: ${style_name}
Auto-generated from design-tokens.json
======================================== */
EOF
# ========================================
# Google Fonts Import Generation
# ========================================
# Extract font families and generate Google Fonts import URL
fonts=$(echo "$json_input" | jq -r '
.typography.font_family | to_entries[] | .value
' 2>/dev/null | sed "s/'//g" | cut -d',' -f1 | sort -u)
# Build Google Fonts URL
google_fonts_url="https://fonts.googleapis.com/css2?"
font_params=""
while IFS= read -r font; do
# Skip system fonts and empty lines
if [[ -z "$font" ]] || [[ "$font" =~ ^(system-ui|sans-serif|serif|monospace|cursive|fantasy)$ ]]; then
continue
fi
# Special handling for common web fonts with weights
case "$font" in
"Comic Neue")
font_params+="family=Comic+Neue:wght@300;400;700&"
;;
"Patrick Hand"|"Caveat"|"Dancing Script"|"Architects Daughter"|"Indie Flower"|"Shadows Into Light"|"Permanent Marker")
# URL-encode font name and add common weights
encoded_font=$(echo "$font" | sed 's/ /+/g')
font_params+="family=${encoded_font}:wght@400;700&"
;;
"Segoe Print"|"Bradley Hand"|"Chilanka")
# These are system fonts, skip
;;
*)
# Generic font: add with default weights
encoded_font=$(echo "$font" | sed 's/ /+/g')
font_params+="family=${encoded_font}:wght@400;500;600;700&"
;;
esac
done <<< "$fonts"
# Generate @import if we have fonts
if [[ -n "$font_params" ]]; then
# Remove trailing &
font_params="${font_params%&}"
echo "/* Import Web Fonts */"
echo "@import url('${google_fonts_url}${font_params}&display=swap');"
echo ""
fi
# ========================================
# CSS Custom Properties Generation
# ========================================
echo ":root {"
# Colors - Brand
echo " /* Colors - Brand */"
echo "$json_input" | jq -r '
.colors.brand | to_entries[] |
" --color-brand-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Colors - Surface
echo " /* Colors - Surface */"
echo "$json_input" | jq -r '
.colors.surface | to_entries[] |
" --color-surface-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Colors - Semantic
echo " /* Colors - Semantic */"
echo "$json_input" | jq -r '
.colors.semantic | to_entries[] |
" --color-semantic-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Colors - Text
echo " /* Colors - Text */"
echo "$json_input" | jq -r '
.colors.text | to_entries[] |
" --color-text-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Colors - Border
echo " /* Colors - Border */"
echo "$json_input" | jq -r '
.colors.border | to_entries[] |
" --color-border-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Typography - Font Family
echo " /* Typography - Font Family */"
echo "$json_input" | jq -r '
.typography.font_family | to_entries[] |
" --font-family-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Typography - Font Size
echo " /* Typography - Font Size */"
echo "$json_input" | jq -r '
.typography.font_size | to_entries[] |
" --font-size-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Typography - Font Weight
echo " /* Typography - Font Weight */"
echo "$json_input" | jq -r '
.typography.font_weight | to_entries[] |
" --font-weight-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Typography - Line Height
echo " /* Typography - Line Height */"
echo "$json_input" | jq -r '
.typography.line_height | to_entries[] |
" --line-height-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Typography - Letter Spacing
echo " /* Typography - Letter Spacing */"
echo "$json_input" | jq -r '
.typography.letter_spacing | to_entries[] |
" --letter-spacing-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Spacing
echo " /* Spacing */"
echo "$json_input" | jq -r '
.spacing | to_entries[] |
" --spacing-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Border Radius
echo " /* Border Radius */"
echo "$json_input" | jq -r '
.border_radius | to_entries[] |
" --border-radius-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Shadows
echo " /* Shadows */"
echo "$json_input" | jq -r '
.shadows | to_entries[] |
" --shadow-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Breakpoints
echo " /* Breakpoints */"
echo "$json_input" | jq -r '
.breakpoints | to_entries[] |
" --breakpoint-\(.key): \(.value);"
' 2>/dev/null
echo "}"
echo ""
# ========================================
# Global Font Application
# ========================================
echo "/* ========================================"
echo " Global Font Application"
echo " ======================================== */"
echo ""
echo "body {"
echo " font-family: var(--font-family-body);"
echo " font-size: var(--font-size-base);"
echo " line-height: var(--line-height-normal);"
echo " color: var(--color-text-primary);"
echo " background-color: var(--color-surface-background);"
echo "}"
echo ""
echo "h1, h2, h3, h4, h5, h6, legend {"
echo " font-family: var(--font-family-heading);"
echo "}"
echo ""
echo "/* Reset default margins for better control */"
echo "* {"
echo " margin: 0;"
echo " padding: 0;"
echo " box-sizing: border-box;"
echo "}"

View File

@@ -1,157 +0,0 @@
#!/bin/bash
# Detect modules affected by git changes or recent modifications
# Usage: detect_changed_modules.sh [format]
# format: list|grouped|paths (default: paths)
#
# Features:
# - Respects .gitignore patterns (current directory or git root)
# - Detects git changes (staged, unstaged, or last commit)
# - Falls back to recently modified files (last 24 hours)
# Build exclusion filters from .gitignore
build_exclusion_filters() {
local filters=""
# Common system/cache directories to exclude
local system_excludes=(
".git" "__pycache__" "node_modules" ".venv" "venv" "env"
"dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
"coverage" ".nyc_output" "logs" "tmp" "temp"
)
for exclude in "${system_excludes[@]}"; do
filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
done
# Find and parse .gitignore (current dir first, then git root)
local gitignore_file=""
# Check current directory first
if [ -f ".gitignore" ]; then
gitignore_file=".gitignore"
else
# Try to find git root and check for .gitignore there
local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
gitignore_file="$git_root/.gitignore"
fi
fi
# Parse .gitignore if found
if [ -n "$gitignore_file" ]; then
while IFS= read -r line; do
# Skip empty lines and comments
[[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue
# Remove trailing slash and whitespace
line=$(echo "$line" | sed 's|/$||' | xargs)
# Skip wildcard patterns (too complex for a plain find filter)
[[ "$line" =~ \* ]] && continue
# Add to filters
filters+=" -not -path '*/$line' -not -path '*/$line/*'"
done < "$gitignore_file"
fi
echo "$filters"
}
detect_changed_modules() {
local format="${1:-paths}"
local changed_files=""
local affected_dirs=""
local exclusion_filters=$(build_exclusion_filters)
# Step 1: Try to get git changes (staged + unstaged)
if git rev-parse --git-dir > /dev/null 2>&1; then
changed_files=$(git diff --name-only HEAD 2>/dev/null; git diff --name-only --cached 2>/dev/null)
# If no changes in working directory, check last commit
if [ -z "$changed_files" ]; then
changed_files=$(git diff --name-only HEAD~1 HEAD 2>/dev/null)
fi
fi
# Step 2: If no git changes, find recently modified source files (last 24 hours)
# Apply exclusion filters from .gitignore
if [ -z "$changed_files" ]; then
changed_files=$(eval "find . -type f \( \
-name '*.md' -o \
-name '*.js' -o -name '*.ts' -o -name '*.jsx' -o -name '*.tsx' -o \
-name '*.py' -o -name '*.go' -o -name '*.rs' -o \
-name '*.java' -o -name '*.cpp' -o -name '*.c' -o -name '*.h' -o \
-name '*.sh' -o -name '*.ps1' -o \
-name '*.json' -o -name '*.yaml' -o -name '*.yml' \
\) $exclusion_filters -mtime -1 2>/dev/null")
fi
# Step 3: Extract unique parent directories
if [ -n "$changed_files" ]; then
affected_dirs=$(echo "$changed_files" | \
sed 's|/[^/]*$||' | \
grep -v '^\.$' | \
sort -u)
# Add current directory if files are in root
if echo "$changed_files" | grep -q '^[^/]*$'; then
affected_dirs=$(echo -e ".\n$affected_dirs" | sort -u)
fi
fi
# Step 4: Output in requested format
case "$format" in
"list")
if [ -n "$affected_dirs" ]; then
echo "$affected_dirs" | while read dir; do
if [ -d "$dir" ]; then
local file_count=$(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l)
local depth=$(echo "$dir" | tr -cd '/' | wc -c)
if [ "$dir" = "." ]; then depth=0; fi
local types=$(find "$dir" -maxdepth 1 -type f -name "*.*" 2>/dev/null | \
grep -E '\.[^/]*$' | sed 's/.*\.//' | sort -u | tr '\n' ',' | sed 's/,$//')
local has_claude="no"
[ -f "$dir/CLAUDE.md" ] && has_claude="yes"
echo "depth:$depth|path:$dir|files:$file_count|types:[$types]|has_claude:$has_claude|status:changed"
fi
done
fi
;;
"grouped")
if [ -n "$affected_dirs" ]; then
echo "📊 Affected modules by changes:"
# Group by depth
echo "$affected_dirs" | while read dir; do
if [ -d "$dir" ]; then
local depth=$(echo "$dir" | tr -cd '/' | wc -c)
if [ "$dir" = "." ]; then depth=0; fi
local claude_indicator=""
[ -f "$dir/CLAUDE.md" ] && claude_indicator=" [✓]"
echo "$depth:$dir$claude_indicator"
fi
done | sort -n | awk -F: '
{
if ($1 != prev_depth) {
if (prev_depth != "") print ""
print " 📁 Depth " $1 ":"
prev_depth = $1
}
print " - " $2 " (changed)"
}'
else
echo "📊 No recent changes detected"
fi
;;
"paths"|*)
echo "$affected_dirs"
;;
esac
}
# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
detect_changed_modules "$@"
fi

@@ -1,83 +0,0 @@
#!/usr/bin/env bash
# discover-design-files.sh - Discover design-related files and output JSON
# Usage: discover-design-files.sh <source_dir> <output_json>
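#
# Example (hypothetical paths):
#   discover-design-files.sh ./src design-files.json
# writes counts and file lists for css/js/html into design-files.json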
set -euo pipefail
source_dir="${1:-.}"
output_json="${2:-discovered-files.json}"
# Function to find and format files as JSON array
find_files() {
local pattern="$1"
local files
files=$(eval "find \"$source_dir\" -type f $pattern \
! -path \"*/node_modules/*\" \
! -path \"*/dist/*\" \
! -path \"*/.git/*\" \
! -path \"*/build/*\" \
! -path \"*/coverage/*\" \
2>/dev/null | sort || true")
local count
if [ -z "$files" ]; then
count=0
else
count=$(echo "$files" | grep -c . || echo 0)
fi
local json_files=""
if [ "$count" -gt 0 ]; then
json_files=$(echo "$files" | awk '{printf "\"%s\"%s\n", $0, (NR<'$count'?",":"")}' | tr '\n' ' ')
fi
echo "$count|$json_files"
}
# Discover CSS/SCSS files
css_result=$(find_files '\( -name "*.css" -o -name "*.scss" \)')
css_count=${css_result%%|*}
css_files=${css_result#*|}
# Discover JS/TS files (all framework files)
js_result=$(find_files '\( -name "*.js" -o -name "*.ts" -o -name "*.jsx" -o -name "*.tsx" -o -name "*.mjs" -o -name "*.cjs" -o -name "*.vue" -o -name "*.svelte" \)')
js_count=${js_result%%|*}
js_files=${js_result#*|}
# Discover HTML files
html_result=$(find_files '-name "*.html"')
html_count=${html_result%%|*}
html_files=${html_result#*|}
# Calculate total
total_count=$((css_count + js_count + html_count))
# Generate JSON
cat > "$output_json" << EOF
{
"discovery_time": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
"source_directory": "$(cd "$source_dir" && pwd)",
"file_types": {
"css": {
"count": $css_count,
"files": [${css_files}]
},
"js": {
"count": $js_count,
"files": [${js_files}]
},
"html": {
"count": $html_count,
"files": [${html_files}]
}
},
"total_files": $total_count
}
EOF
# Ensure file is fully written and synchronized to disk
# This prevents race conditions when the file is immediately read by another process
sync "$output_json" 2>/dev/null || sync # Sync specific file, fallback to full sync
sleep 0.1 # Additional safety: 100ms delay for filesystem metadata update
echo "Discovered: CSS=$css_count, JS=$js_count, HTML=$html_count (Total: $total_count)" >&2

@@ -1,243 +0,0 @@
/**
* Animation & Transition Extraction Script
*
* Extracts CSS animations, transitions, and transform patterns from a live web page.
* This script runs in the browser context via Chrome DevTools Protocol.
*
* @returns {Object} Structured animation data
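*
* Example: paste this entire IIFE into the DevTools Console on the target
* page (or pass it to an evaluate-script call); the returned object can be
* inspected directly, e.g. result.summary.total_transitions.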
*/
(() => {
const extractionTimestamp = new Date().toISOString();
const currentUrl = window.location.href;
/**
* Parse transition shorthand or individual properties
*/
function parseTransition(element, computedStyle) {
const transition = computedStyle.transition || computedStyle.webkitTransition;
if (!transition || transition === 'none' || transition === 'all 0s ease 0s') {
return null;
}
// Parse shorthand: "property duration easing delay"
const transitions = [];
const parts = transition.split(/,\s*/);
parts.forEach(part => {
const match = part.match(/^(\S+)\s+([\d.]+m?s)\s+(\S+)(?:\s+([\d.]+m?s))?/);
if (match) {
transitions.push({
property: match[1],
duration: match[2],
easing: match[3],
delay: match[4] || '0s'
});
}
});
return transitions.length > 0 ? transitions : null;
}
/**
* Extract animation name and properties
*/
function parseAnimation(element, computedStyle) {
const animationName = computedStyle.animationName || computedStyle.webkitAnimationName;
if (!animationName || animationName === 'none') {
return null;
}
return {
name: animationName,
duration: computedStyle.animationDuration || computedStyle.webkitAnimationDuration,
easing: computedStyle.animationTimingFunction || computedStyle.webkitAnimationTimingFunction,
delay: computedStyle.animationDelay || computedStyle.webkitAnimationDelay || '0s',
iterationCount: computedStyle.animationIterationCount || computedStyle.webkitAnimationIterationCount || '1',
direction: computedStyle.animationDirection || computedStyle.webkitAnimationDirection || 'normal',
fillMode: computedStyle.animationFillMode || computedStyle.webkitAnimationFillMode || 'none'
};
}
/**
* Extract transform value
*/
function parseTransform(computedStyle) {
const transform = computedStyle.transform || computedStyle.webkitTransform;
if (!transform || transform === 'none') {
return null;
}
return transform;
}
/**
* Get element selector (simplified for readability)
*/
function getSelector(element) {
if (element.id) {
return `#${element.id}`;
}
if (element.className && typeof element.className === 'string') {
const classes = element.className.trim().split(/\s+/).slice(0, 2).join('.');
if (classes) {
return `.${classes}`;
}
}
return element.tagName.toLowerCase();
}
/**
* Extract all stylesheets and find @keyframes rules
*/
function extractKeyframes() {
const keyframes = {};
try {
// Iterate through all stylesheets
Array.from(document.styleSheets).forEach(sheet => {
try {
// Skip external stylesheets due to CORS
if (sheet.href && !sheet.href.startsWith(window.location.origin)) {
return;
}
Array.from(sheet.cssRules || sheet.rules || []).forEach(rule => {
// Check for @keyframes rules
if (rule.type === CSSRule.KEYFRAMES_RULE || rule.type === CSSRule.WEBKIT_KEYFRAMES_RULE) {
const name = rule.name;
const frames = {};
Array.from(rule.cssRules || []).forEach(keyframe => {
const key = keyframe.keyText; // e.g., "0%", "50%", "100%"
frames[key] = keyframe.style.cssText;
});
keyframes[name] = frames;
}
});
} catch (e) {
// Skip stylesheets that can't be accessed (CORS)
console.warn('Cannot access stylesheet:', sheet.href, e.message);
}
});
} catch (e) {
console.error('Error extracting keyframes:', e);
}
return keyframes;
}
/**
* Scan visible elements for animations and transitions
*/
function scanElements() {
const elements = document.querySelectorAll('*');
const transitionData = [];
const animationData = [];
const transformData = [];
const uniqueTransitions = new Set();
const uniqueAnimations = new Set();
const uniqueEasings = new Set();
const uniqueDurations = new Set();
elements.forEach(element => {
// Skip invisible elements
const rect = element.getBoundingClientRect();
if (rect.width === 0 && rect.height === 0) {
return;
}
const computedStyle = window.getComputedStyle(element);
// Extract transitions
const transitions = parseTransition(element, computedStyle);
if (transitions) {
const selector = getSelector(element);
transitions.forEach(t => {
const key = `${t.property}-${t.duration}-${t.easing}`;
if (!uniqueTransitions.has(key)) {
uniqueTransitions.add(key);
transitionData.push({
selector,
...t
});
uniqueEasings.add(t.easing);
uniqueDurations.add(t.duration);
}
});
}
// Extract animations
const animation = parseAnimation(element, computedStyle);
if (animation) {
const selector = getSelector(element);
const key = `${animation.name}-${animation.duration}`;
if (!uniqueAnimations.has(key)) {
uniqueAnimations.add(key);
animationData.push({
selector,
...animation
});
uniqueEasings.add(animation.easing);
uniqueDurations.add(animation.duration);
}
}
// Extract transforms (computed styles reflect only the current state, not hover/active variants)
const transform = parseTransform(computedStyle);
if (transform) {
const selector = getSelector(element);
transformData.push({
selector,
transform
});
}
});
return {
transitions: transitionData,
animations: animationData,
transforms: transformData,
uniqueEasings: Array.from(uniqueEasings),
uniqueDurations: Array.from(uniqueDurations)
};
}
/**
* Main extraction function
*/
function extractAnimations() {
const elementData = scanElements();
const keyframes = extractKeyframes();
return {
metadata: {
timestamp: extractionTimestamp,
url: currentUrl,
method: 'chrome-devtools',
version: '1.0.0'
},
transitions: elementData.transitions,
animations: elementData.animations,
transforms: elementData.transforms,
keyframes: keyframes,
summary: {
total_transitions: elementData.transitions.length,
total_animations: elementData.animations.length,
total_transforms: elementData.transforms.length,
total_keyframes: Object.keys(keyframes).length,
unique_easings: elementData.uniqueEasings,
unique_durations: elementData.uniqueDurations
}
};
}
// Execute extraction
return extractAnimations();
})();

@@ -1,118 +0,0 @@
/**
* Extract Computed Styles from DOM
*
* This script extracts real CSS computed styles from a webpage's DOM
* to provide accurate design tokens for UI replication.
*
* Usage: Execute this function via Chrome DevTools evaluate_script
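*
* Example: the returned object exposes result.tokens.colors,
* result.tokens.fontSizes, etc. as sorted arrays of unique values sampled
* from key UI elements on the page.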
*/
(() => {
/**
* Extract unique values from a set and sort them
*/
const uniqueSorted = (set) => {
return Array.from(set)
.filter(v => v && v !== 'none' && v !== '0px' && v !== 'rgba(0, 0, 0, 0)')
.sort();
};
/**
* Parse rgb/rgba to OKLCH format (placeholder - returns original for now)
*/
const toOKLCH = (color) => {
// TODO: Implement actual RGB to OKLCH conversion
// For now, return the original color with a note
return `${color} /* TODO: Convert to OKLCH */`;
};
/**
* Extract only key styles from an element
*/
const extractKeyStyles = (element) => {
const s = window.getComputedStyle(element);
return {
color: s.color,
bg: s.backgroundColor,
borderRadius: s.borderRadius,
boxShadow: s.boxShadow,
fontSize: s.fontSize,
fontWeight: s.fontWeight,
padding: s.padding,
margin: s.margin
};
};
/**
* Main extraction function - extract all critical design tokens
*/
const extractDesignTokens = () => {
// Include all key UI elements
const selectors = [
'button', '.btn', '[role="button"]',
'input', 'textarea', 'select',
'h1', 'h2', 'h3', 'h4', 'h5', 'h6',
'.card', 'article', 'section',
'a', 'p', 'nav', 'header', 'footer'
];
// Collect all design tokens
const tokens = {
colors: new Set(),
borderRadii: new Set(),
shadows: new Set(),
fontSizes: new Set(),
fontWeights: new Set(),
spacing: new Set()
};
// Extract from all elements
selectors.forEach(selector => {
try {
const elements = document.querySelectorAll(selector);
elements.forEach(element => {
const s = extractKeyStyles(element);
// Collect all tokens (no limits)
if (s.color && s.color !== 'rgba(0, 0, 0, 0)') tokens.colors.add(s.color);
if (s.bg && s.bg !== 'rgba(0, 0, 0, 0)') tokens.colors.add(s.bg);
if (s.borderRadius && s.borderRadius !== '0px') tokens.borderRadii.add(s.borderRadius);
if (s.boxShadow && s.boxShadow !== 'none') tokens.shadows.add(s.boxShadow);
if (s.fontSize) tokens.fontSizes.add(s.fontSize);
if (s.fontWeight) tokens.fontWeights.add(s.fontWeight);
// Extract all spacing values
[s.padding, s.margin].forEach(val => {
if (val && val !== '0px') {
val.split(' ').forEach(v => {
if (v && v !== '0px') tokens.spacing.add(v);
});
}
});
});
} catch (e) {
console.warn(`Error: ${selector}`, e);
}
});
// Return all tokens (no element details to save context)
return {
metadata: {
extractedAt: new Date().toISOString(),
url: window.location.href,
method: 'computed-styles'
},
tokens: {
colors: uniqueSorted(tokens.colors),
borderRadii: uniqueSorted(tokens.borderRadii), // ALL radius values
shadows: uniqueSorted(tokens.shadows), // ALL shadows
fontSizes: uniqueSorted(tokens.fontSizes),
fontWeights: uniqueSorted(tokens.fontWeights),
spacing: uniqueSorted(tokens.spacing)
}
};
};
// Execute and return results
return extractDesignTokens();
})();

@@ -1,411 +0,0 @@
/**
* Extract Layout Structure from DOM - Enhanced Version
*
* Extracts real layout information from DOM to provide accurate
* structural data for UI replication.
*
* Features:
* - Framework detection (Nuxt.js, Next.js, React, Vue, Angular)
* - Multi-strategy container detection (strict → relaxed → class-based → framework-specific)
* - Intelligent main content detection with common class names support
* - Supports modern SPA frameworks
* - Detects non-semantic main containers (.main, .content, etc.)
* - Progressive exploration: Auto-discovers missing selectors when standard patterns fail
* - Suggests new class names to add to script based on actual page structure
*
* Progressive Exploration:
* When fewer than 3 main containers are found, the script automatically:
* 1. Analyzes all large visible containers (≥500×300px)
* 2. Extracts class name patterns (main/content/wrapper/container/page/etc.)
* 3. Suggests new selectors to add to the script
* 4. Returns exploration data in result.exploration
*
* Usage: Execute via Chrome DevTools evaluate_script
* Version: 2.2.0
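*
* Example: when result.exploration is present, result.exploration.suggestedSelectors
* lists class-based selectors (e.g. ".page-content") that can be added to the
* commonClassSelectors list in this script.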
*/
(() => {
/**
* Get element's bounding box relative to viewport
*/
const getBounds = (element) => {
const rect = element.getBoundingClientRect();
return {
x: Math.round(rect.x),
y: Math.round(rect.y),
width: Math.round(rect.width),
height: Math.round(rect.height)
};
};
/**
* Extract layout properties from an element
*/
const extractLayoutProps = (element) => {
const s = window.getComputedStyle(element);
return {
// Core layout
display: s.display,
position: s.position,
// Flexbox
flexDirection: s.flexDirection,
justifyContent: s.justifyContent,
alignItems: s.alignItems,
flexWrap: s.flexWrap,
gap: s.gap,
// Grid
gridTemplateColumns: s.gridTemplateColumns,
gridTemplateRows: s.gridTemplateRows,
gridAutoFlow: s.gridAutoFlow,
// Dimensions
width: s.width,
height: s.height,
maxWidth: s.maxWidth,
minWidth: s.minWidth,
// Spacing
padding: s.padding,
margin: s.margin
};
};
/**
* Identify layout pattern for an element
*/
const identifyPattern = (props) => {
const { display, flexDirection, gridTemplateColumns } = props;
if (display === 'flex' || display === 'inline-flex') {
if (flexDirection === 'column') return 'flex-column';
if (flexDirection === 'row') return 'flex-row';
return 'flex';
}
if (display === 'grid') {
const cols = gridTemplateColumns;
if (cols && cols !== 'none') {
const colCount = cols.split(' ').length;
return `grid-${colCount}col`;
}
return 'grid';
}
if (display === 'block') return 'block';
return display;
};
/**
* Detect frontend framework
*/
const detectFramework = () => {
if (document.querySelector('#__nuxt')) return { name: 'Nuxt.js', version: 'unknown' };
if (document.querySelector('#__next')) return { name: 'Next.js', version: 'unknown' };
if (document.querySelector('[data-reactroot]')) return { name: 'React', version: 'unknown' };
if (document.querySelector('[ng-version]')) return { name: 'Angular', version: 'unknown' };
if (window.Vue) return { name: 'Vue.js', version: window.Vue.version || 'unknown' };
return { name: 'Unknown', version: 'unknown' };
};
/**
* Build layout tree recursively
*/
const buildLayoutTree = (element, depth = 0, maxDepth = 3) => {
if (depth > maxDepth) return null;
const props = extractLayoutProps(element);
const bounds = getBounds(element);
const pattern = identifyPattern(props);
// Get semantic role
const tagName = element.tagName.toLowerCase();
const classes = Array.from(element.classList).slice(0, 3); // Max 3 classes
const role = element.getAttribute('role');
// Build node
const node = {
tag: tagName,
classes: classes,
role: role,
pattern: pattern,
bounds: bounds,
layout: {
display: props.display,
position: props.position
}
};
// Add flex/grid specific properties
if (props.display === 'flex' || props.display === 'inline-flex') {
node.layout.flexDirection = props.flexDirection;
node.layout.justifyContent = props.justifyContent;
node.layout.alignItems = props.alignItems;
node.layout.gap = props.gap;
}
if (props.display === 'grid') {
node.layout.gridTemplateColumns = props.gridTemplateColumns;
node.layout.gridTemplateRows = props.gridTemplateRows;
node.layout.gap = props.gap;
}
// Process children for container elements
if (props.display === 'flex' || props.display === 'grid' || props.display === 'block') {
const children = Array.from(element.children);
if (children.length > 0 && children.length < 50) { // Limit to 50 children
node.children = children
.map(child => buildLayoutTree(child, depth + 1, maxDepth))
.filter(child => child !== null);
}
}
return node;
};
/**
* Find main layout containers with multi-strategy approach
*/
const findMainContainers = () => {
const containers = [];
const found = new Set();
// Strategy 1: Strict selectors (body direct children)
const strictSelectors = [
'body > header',
'body > nav',
'body > main',
'body > footer'
];
// Strategy 2: Relaxed selectors (any level)
const relaxedSelectors = [
'header',
'nav',
'main',
'footer',
'[role="banner"]',
'[role="navigation"]',
'[role="main"]',
'[role="contentinfo"]'
];
// Strategy 3: Common class-based main content selectors
const commonClassSelectors = [
'.main',
'.content',
'.main-content',
'.page-content',
'.container.main',
'.wrapper > .main',
'div[class*="main-wrapper"]',
'div[class*="content-wrapper"]'
];
// Strategy 4: Framework-specific selectors
const frameworkSelectors = [
'#__nuxt header', '#__nuxt .main', '#__nuxt main', '#__nuxt footer',
'#__next header', '#__next .main', '#__next main', '#__next footer',
'#app header', '#app .main', '#app main', '#app footer',
'[data-app] header', '[data-app] .main', '[data-app] main', '[data-app] footer'
];
// Try all strategies
const allSelectors = [...strictSelectors, ...relaxedSelectors, ...commonClassSelectors, ...frameworkSelectors];
allSelectors.forEach(selector => {
try {
const elements = document.querySelectorAll(selector);
elements.forEach(element => {
// Avoid duplicates and invisible elements
if (!found.has(element) && element.offsetParent !== null) {
found.add(element);
const tree = buildLayoutTree(element, 0, 3);
if (tree && tree.bounds.width > 0 && tree.bounds.height > 0) {
containers.push(tree);
}
}
});
} catch (e) {
console.warn(`Selector failed: ${selector}`, e);
}
});
// Fallback: If no containers found, use body's direct children
if (containers.length === 0) {
Array.from(document.body.children).forEach(child => {
if (child.offsetParent !== null && !found.has(child)) {
const tree = buildLayoutTree(child, 0, 2);
if (tree && tree.bounds.width > 100 && tree.bounds.height > 100) {
containers.push(tree);
}
}
});
}
return containers;
};
/**
* Progressive exploration: Discover main containers when standard selectors fail
* Analyzes large visible containers and suggests class name patterns
*/
const exploreMainContainers = () => {
const candidates = [];
const minWidth = 500;
const minHeight = 300;
// Find all large visible divs
const allDivs = document.querySelectorAll('div');
allDivs.forEach(div => {
const rect = div.getBoundingClientRect();
const style = window.getComputedStyle(div);
// Filter: large size, visible, not header/footer
if (rect.width >= minWidth &&
rect.height >= minHeight &&
div.offsetParent !== null &&
!div.closest('header') &&
!div.closest('footer')) {
const classes = Array.from(div.classList);
const area = rect.width * rect.height;
candidates.push({
element: div,
classes: classes,
area: area,
bounds: {
width: Math.round(rect.width),
height: Math.round(rect.height)
},
display: style.display,
depth: getElementDepth(div)
});
}
});
// Sort by area (largest first) and take top candidates
candidates.sort((a, b) => b.area - a.area);
// Extract unique class patterns from top candidates
const classPatterns = new Set();
candidates.slice(0, 20).forEach(c => {
c.classes.forEach(cls => {
// Identify potential main content class patterns
if (cls.match(/main|content|container|wrapper|page|body|layout|app/i)) {
classPatterns.add(cls);
}
});
});
return {
candidates: candidates.slice(0, 10).map(c => ({
classes: c.classes,
bounds: c.bounds,
display: c.display,
depth: c.depth
})),
suggestedSelectors: Array.from(classPatterns).map(cls => `.${cls}`)
};
};
/**
* Get element depth in DOM tree
*/
const getElementDepth = (element) => {
let depth = 0;
let current = element;
while (current.parentElement) {
depth++;
current = current.parentElement;
}
return depth;
};
/**
* Analyze layout patterns
*/
const analyzePatterns = (containers) => {
const patterns = {
flexColumn: 0,
flexRow: 0,
grid: 0,
sticky: 0,
fixed: 0
};
const analyze = (node) => {
if (!node) return;
if (node.pattern === 'flex-column') patterns.flexColumn++;
if (node.pattern === 'flex-row') patterns.flexRow++;
if (node.pattern && node.pattern.startsWith('grid')) patterns.grid++;
if (node.layout.position === 'sticky') patterns.sticky++;
if (node.layout.position === 'fixed') patterns.fixed++;
if (node.children) {
node.children.forEach(analyze);
}
};
containers.forEach(analyze);
return patterns;
};
/**
* Main extraction function with progressive exploration
*/
const extractLayout = () => {
const framework = detectFramework();
const containers = findMainContainers();
const patterns = analyzePatterns(containers);
// Progressive exploration: if too few containers found, explore and suggest
let exploration = null;
const minExpectedContainers = 3; // At least header, main, footer
if (containers.length < minExpectedContainers) {
exploration = exploreMainContainers();
// Add warning message
exploration.warning = `Only ${containers.length} containers found. Consider adding these selectors to the script:`;
exploration.recommendation = exploration.suggestedSelectors.join(', ');
}
const result = {
metadata: {
extractedAt: new Date().toISOString(),
url: window.location.href,
framework: framework,
method: 'layout-structure-enhanced',
version: '2.2.0'
},
statistics: {
totalContainers: containers.length,
patterns: patterns
},
structure: containers
};
// Add exploration results if triggered
if (exploration) {
result.exploration = {
triggered: true,
reason: 'Insufficient containers found with standard selectors',
discoveredCandidates: exploration.candidates,
suggestedSelectors: exploration.suggestedSelectors,
warning: exploration.warning,
recommendation: exploration.recommendation
};
}
return result;
};
// Execute and return results
return extractLayout();
})();

@@ -1,713 +0,0 @@
#!/bin/bash
# Generate documentation for modules and projects with multiple strategies
# Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]
# strategy: full|single|project-readme|project-architecture|http-api
# source_path: Path to the source module directory (or project root for project-level docs)
# project_name: Project name for output path (e.g., "myproject")
# tool: gemini|qwen|codex (default: gemini)
# model: Model name (optional, uses tool defaults)
#
# Default Models:
# gemini: gemini-2.5-flash
# qwen: coder-model
# codex: gpt5-codex
#
# Module-Level Strategies:
# full: Full documentation generation
# - Read: All files in current and subdirectories (@**/*)
# - Generate: API.md + README.md for each directory containing code files
# - Use: Deep directories (Layer 3), comprehensive documentation
#
# single: Single-layer documentation
# - Read: Current directory code + child API.md/README.md files
# - Generate: API.md + README.md only in current directory
# - Use: Upper layers (Layer 1-2), incremental updates
#
# Project-Level Strategies:
# project-readme: Project overview documentation
# - Read: All module API.md and README.md files
# - Generate: README.md (project root)
# - Use: After all module docs are generated
#
# project-architecture: System design documentation
# - Read: All module docs + project README
# - Generate: ARCHITECTURE.md + EXAMPLES.md
# - Use: After project README is generated
#
# http-api: HTTP API documentation
# - Read: API route files + existing docs
# - Generate: api/README.md
# - Use: For projects with HTTP APIs
#
# Output Structure:
# Module docs: .workflow/docs/{project_name}/{source_path}/API.md
# Module docs: .workflow/docs/{project_name}/{source_path}/README.md
# Project docs: .workflow/docs/{project_name}/README.md
# Project docs: .workflow/docs/{project_name}/ARCHITECTURE.md
# Project docs: .workflow/docs/{project_name}/EXAMPLES.md
# API docs: .workflow/docs/{project_name}/api/README.md
#
# Features:
# - Path mirroring: source structure → docs structure
# - Template-driven generation
# - Respects .gitignore patterns
# - Detects code vs navigation folders
# - Tool fallback support
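#
# Example (hypothetical project layout):
#   ./generate_module_docs.sh single ./src/auth myproject gemini
# writes .workflow/docs/myproject/src/auth/API.md and README.md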
# Build exclusion filters from .gitignore
build_exclusion_filters() {
local filters=""
# Common system/cache directories to exclude
local system_excludes=(
".git" "__pycache__" "node_modules" ".venv" "venv" "env"
"dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
"coverage" ".nyc_output" "logs" "tmp" "temp" ".workflow"
)
for exclude in "${system_excludes[@]}"; do
filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
done
# Find and parse .gitignore (current dir first, then git root)
local gitignore_file=""
# Check current directory first
if [ -f ".gitignore" ]; then
gitignore_file=".gitignore"
else
# Try to find git root and check for .gitignore there
local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
gitignore_file="$git_root/.gitignore"
fi
fi
# Parse .gitignore if found
if [ -n "$gitignore_file" ]; then
while IFS= read -r line; do
# Skip empty lines and comments
[[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue
# Remove trailing slash and whitespace
line=$(echo "$line" | sed 's|/$||' | xargs)
# Skip wildcard patterns (too complex for a plain find filter)
[[ "$line" =~ \* ]] && continue
# Add to filters
filters+=" -not -path '*/$line' -not -path '*/$line/*'"
done < "$gitignore_file"
fi
echo "$filters"
}
# Detect folder type (code vs navigation)
detect_folder_type() {
local target_path="$1"
local exclusion_filters="$2"
# Count code files (primary indicators)
local code_count=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)
if [ $code_count -gt 0 ]; then
echo "code"
else
echo "navigation"
fi
}
# Scan directory structure and generate structured information
scan_directory_structure() {
local target_path="$1"
local strategy="$2"
if [ ! -d "$target_path" ]; then
echo "Directory not found: $target_path"
return 1
fi
local exclusion_filters=$(build_exclusion_filters)
local structure_info=""
# Get basic directory info
local dir_name=$(basename "$target_path")
local total_files=$(eval "find \"$target_path\" -type f $exclusion_filters 2>/dev/null" | wc -l)
local total_dirs=$(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null" | wc -l)
local folder_type=$(detect_folder_type "$target_path" "$exclusion_filters")
structure_info+="Directory: $dir_name\n"
structure_info+="Total files: $total_files\n"
structure_info+="Total directories: $total_dirs\n"
structure_info+="Folder type: $folder_type\n\n"
if [ "$strategy" = "full" ]; then
# For full: show all subdirectories with file counts
structure_info+="Subdirectories with files:\n"
while IFS= read -r dir; do
if [ -n "$dir" ] && [ "$dir" != "$target_path" ]; then
local rel_path=${dir#$target_path/}
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
if [ $file_count -gt 0 ]; then
local subdir_type=$(detect_folder_type "$dir" "$exclusion_filters")
structure_info+=" - $rel_path/ ($file_count files, type: $subdir_type)\n"
fi
fi
done < <(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null")
else
# For single: show direct children only
structure_info+="Direct subdirectories:\n"
while IFS= read -r dir; do
if [ -n "$dir" ]; then
local dir_name=$(basename "$dir")
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
local has_api=$([ -f "$dir/API.md" ] && echo " [has API.md]" || echo "")
local has_readme=$([ -f "$dir/README.md" ] && echo " [has README.md]" || echo "")
structure_info+=" - $dir_name/ ($file_count files)$has_api$has_readme\n"
fi
done < <(eval "find \"$target_path\" -maxdepth 1 -type d $exclusion_filters 2>/dev/null" | grep -v "^$target_path$")
fi
# Show main file types in current directory
structure_info+="\nCurrent directory files:\n"
local code_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)
local config_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.json' -o -name '*.yaml' -o -name '*.yml' -o -name '*.toml' \\) $exclusion_filters 2>/dev/null" | wc -l)
local doc_files=$(eval "find \"$target_path\" -maxdepth 1 -type f -name '*.md' $exclusion_filters 2>/dev/null" | wc -l)
structure_info+=" - Code files: $code_files\n"
structure_info+=" - Config files: $config_files\n"
structure_info+=" - Documentation: $doc_files\n"
printf "%b" "$structure_info"
}
# Calculate output path based on source path and project name
calculate_output_path() {
local source_path="$1"
local project_name="$2"
local project_root="$3"
# Get absolute path of source (normalize to Unix-style path)
local abs_source=$(cd "$source_path" && pwd)
# Normalize project root to same format
local norm_project_root=$(cd "$project_root" && pwd)
# Calculate relative path from project root
local rel_path="${abs_source#$norm_project_root}"
# Remove leading slash if present
rel_path="${rel_path#/}"
# If source is project root, use project name directly
if [ "$abs_source" = "$norm_project_root" ] || [ -z "$rel_path" ]; then
echo "$norm_project_root/.workflow/docs/$project_name"
else
echo "$norm_project_root/.workflow/docs/$project_name/$rel_path"
fi
}
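# Example (hypothetical paths): with project root /repo,
#   calculate_output_path /repo/src/auth myproject /repo
# prints /repo/.workflow/docs/myproject/src/auth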
generate_module_docs() {
local strategy="$1"
local source_path="$2"
local project_name="$3"
local tool="${4:-gemini}"
local model="$5"
# Validate parameters
if [ -z "$strategy" ] || [ -z "$source_path" ] || [ -z "$project_name" ]; then
echo "❌ Error: Strategy, source path, and project name are required"
echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
echo "Module strategies: full, single"
echo "Project strategies: project-readme, project-architecture, http-api"
return 1
fi
# Validate strategy
local valid_strategies=("full" "single" "project-readme" "project-architecture" "http-api")
local strategy_valid=false
for valid_strategy in "${valid_strategies[@]}"; do
if [ "$strategy" = "$valid_strategy" ]; then
strategy_valid=true
break
fi
done
if [ "$strategy_valid" = false ]; then
echo "❌ Error: Invalid strategy '$strategy'"
echo "Valid module strategies: full, single"
echo "Valid project strategies: project-readme, project-architecture, http-api"
return 1
fi
if [ ! -d "$source_path" ]; then
echo "❌ Error: Source directory '$source_path' does not exist"
return 1
fi
# Set default models if not specified
if [ -z "$model" ]; then
case "$tool" in
gemini)
model="gemini-2.5-flash"
;;
qwen)
model="coder-model"
;;
codex)
model="gpt5-codex"
;;
*)
model=""
;;
esac
fi
# Build exclusion filters
local exclusion_filters=$(build_exclusion_filters)
# Get project root
local project_root=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
# Determine if this is a project-level strategy
local is_project_level=false
if [[ "$strategy" =~ ^project- ]] || [ "$strategy" = "http-api" ]; then
is_project_level=true
fi
# Calculate output path
local output_path
if [ "$is_project_level" = true ]; then
# Project-level docs go to project root
if [ "$strategy" = "http-api" ]; then
output_path="$project_root/.workflow/docs/$project_name/api"
else
output_path="$project_root/.workflow/docs/$project_name"
fi
else
output_path=$(calculate_output_path "$source_path" "$project_name" "$project_root")
fi
# Create output directory
mkdir -p "$output_path"
# Detect folder type (only for module-level strategies)
local folder_type=""
if [ "$is_project_level" = false ]; then
folder_type=$(detect_folder_type "$source_path" "$exclusion_filters")
fi
# Load templates based on strategy
local api_template=""
local readme_template=""
local template_content=""
if [ "$is_project_level" = true ]; then
# Project-level templates
case "$strategy" in
project-readme)
local proj_readme_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-readme.txt"
if [ -f "$proj_readme_path" ]; then
template_content=$(cat "$proj_readme_path")
echo " 📋 Loaded Project README template: $(wc -l < "$proj_readme_path") lines"
fi
;;
project-architecture)
local arch_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-architecture.txt"
local examples_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-examples.txt"
if [ -f "$arch_path" ]; then
template_content=$(cat "$arch_path")
echo " 📋 Loaded Architecture template: $(wc -l < "$arch_path") lines"
fi
if [ -f "$examples_path" ]; then
template_content="$template_content
EXAMPLES TEMPLATE:
$(cat "$examples_path")"
echo " 📋 Loaded Examples template: $(wc -l < "$examples_path") lines"
fi
;;
http-api)
local api_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
if [ -f "$api_path" ]; then
template_content=$(cat "$api_path")
echo " 📋 Loaded HTTP API template: $(wc -l < "$api_path") lines"
fi
;;
esac
else
# Module-level templates
local api_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
local readme_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/module-readme.txt"
local nav_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/folder-navigation.txt"
if [ "$folder_type" = "code" ]; then
if [ -f "$api_template_path" ]; then
api_template=$(cat "$api_template_path")
echo " 📋 Loaded API template: $(wc -l < "$api_template_path") lines"
fi
if [ -f "$readme_template_path" ]; then
readme_template=$(cat "$readme_template_path")
echo " 📋 Loaded README template: $(wc -l < "$readme_template_path") lines"
fi
else
# Navigation folder uses navigation template
if [ -f "$nav_template_path" ]; then
readme_template=$(cat "$nav_template_path")
echo " 📋 Loaded Navigation template: $(wc -l < "$nav_template_path") lines"
fi
fi
fi
# Scan directory structure (only for module-level strategies)
local structure_info=""
if [ "$is_project_level" = false ]; then
echo " 🔍 Scanning directory structure..."
structure_info=$(scan_directory_structure "$source_path" "$strategy")
fi
# Prepare logging info
local module_name=$(basename "$source_path")
echo "⚡ Generating docs: $source_path$output_path"
echo " Strategy: $strategy | Tool: $tool | Model: $model | Type: $folder_type"
echo " Output: $output_path"
# Build strategy-specific prompt
local final_prompt=""
# Project-level strategies
if [ "$strategy" = "project-readme" ]; then
final_prompt="PURPOSE: Generate comprehensive project overview documentation
PROJECT: $project_name
OUTPUT: Current directory (file will be moved to final location)
Read: @.workflow/docs/$project_name/**/*.md
Context: All module documentation files from the project
Generate ONE documentation file in current directory:
- README.md - Project root documentation
Template:
$template_content
Instructions:
- Create README.md in CURRENT DIRECTORY
- Synthesize information from all module docs
- Include project overview, getting started, and navigation
- Create clear module navigation with links
- Follow template structure exactly"
elif [ "$strategy" = "project-architecture" ]; then
final_prompt="PURPOSE: Generate system design and usage examples documentation
PROJECT: $project_name
OUTPUT: Current directory (files will be moved to final location)
Read: @.workflow/docs/$project_name/**/*.md
Context: All project documentation including module docs and project README
Generate TWO documentation files in current directory:
1. ARCHITECTURE.md - System architecture and design patterns
2. EXAMPLES.md - End-to-end usage examples
Template:
$template_content
Instructions:
- Create both ARCHITECTURE.md and EXAMPLES.md in CURRENT DIRECTORY
- Synthesize architectural patterns from module documentation
- Document system structure, module relationships, and design decisions
- Provide practical code examples and usage scenarios
- Follow template structure for both files"
elif [ "$strategy" = "http-api" ]; then
final_prompt="PURPOSE: Generate HTTP API reference documentation
PROJECT: $project_name
OUTPUT: Current directory (file will be moved to final location)
Read: @**/*.{ts,js,py,go,rs} @.workflow/docs/$project_name/**/*.md
Context: API route files and existing documentation
Generate ONE documentation file in current directory:
- README.md - HTTP API documentation (in api/ subdirectory)
Template:
$template_content
Instructions:
- Create README.md in CURRENT DIRECTORY
- Document all HTTP endpoints (routes, methods, parameters, responses)
- Include authentication requirements and error codes
- Provide request/response examples
- Follow template structure (Part B: HTTP API documentation)"
# Module-level strategies
elif [ "$strategy" = "full" ]; then
# Full strategy: read all files, generate for each directory
if [ "$folder_type" = "code" ]; then
final_prompt="PURPOSE: Generate comprehensive API and module documentation
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (files will be moved to final location)
Read: @**/*
Generate TWO documentation files in current directory:
1. API.md - Code API documentation (functions, classes, interfaces)
Template:
$api_template
2. README.md - Module overview documentation
Template:
$readme_template
Instructions:
- Generate both API.md and README.md in CURRENT DIRECTORY
- If subdirectories contain code files, generate their docs too (recursive)
- Work bottom-up: deepest directories first
- Follow template structure exactly
- Use structure analysis for context"
else
# Navigation folder - README only
final_prompt="PURPOSE: Generate navigation documentation for folder structure
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (file will be moved to final location)
Read: @**/*
Generate ONE documentation file in current directory:
- README.md - Navigation and folder overview
Template:
$readme_template
Instructions:
- Create README.md in CURRENT DIRECTORY
- Focus on folder structure and navigation
- Link to subdirectory documentation
- Use structure analysis for context"
fi
else
# Single strategy: read current + child docs only
if [ "$folder_type" = "code" ]; then
final_prompt="PURPOSE: Generate API and module documentation for current directory
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (files will be moved to final location)
Read: @*/API.md @*/README.md @*.ts @*.tsx @*.js @*.jsx @*.py @*.sh @*.go @*.rs @*.md @*.json @*.yaml @*.yml
Generate TWO documentation files in current directory:
1. API.md - Code API documentation
Template:
$api_template
2. README.md - Module overview
Template:
$readme_template
Instructions:
- Generate both API.md and README.md in CURRENT DIRECTORY
- Reference child documentation, do not duplicate
- Follow template structure
- Use structure analysis for current directory context"
else
# Navigation folder - README only
final_prompt="PURPOSE: Generate navigation documentation
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (file will be moved to final location)
Read: @*/API.md @*/README.md @*.md
Generate ONE documentation file in current directory:
- README.md - Navigation and overview
Template:
$readme_template
Instructions:
- Create README.md in CURRENT DIRECTORY
- Link to child documentation
- Use structure analysis for navigation context"
fi
fi
# Execute documentation generation
local start_time=$(date +%s)
echo " 🔄 Starting documentation generation..."
if cd "$source_path" 2>/dev/null; then
local tool_result=0
# Store current output path for CLI context
export DOC_OUTPUT_PATH="$output_path"
# Record git HEAD before CLI execution (to detect unwanted auto-commits)
local git_head_before=""
if git rev-parse --git-dir >/dev/null 2>&1; then
git_head_before=$(git rev-parse HEAD 2>/dev/null)
fi
# Execute with selected tool
case "$tool" in
qwen)
if [ "$model" = "coder-model" ]; then
qwen -p "$final_prompt" --yolo 2>&1
else
qwen -p "$final_prompt" -m "$model" --yolo 2>&1
fi
tool_result=$?
;;
codex)
codex --full-auto exec "$final_prompt" -m "$model" --skip-git-repo-check -s danger-full-access 2>&1
tool_result=$?
;;
gemini)
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
tool_result=$?
;;
*)
echo " ⚠️ Unknown tool: $tool, defaulting to gemini"
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
tool_result=$?
;;
esac
# Move generated files to output directory
local docs_created=0
local moved_files=""
if [ $tool_result -eq 0 ]; then
if [ "$is_project_level" = true ]; then
# Project-level documentation files
case "$strategy" in
project-readme)
if [ -f "README.md" ]; then
mv "README.md" "$output_path/README.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="README.md "
}
fi
;;
project-architecture)
if [ -f "ARCHITECTURE.md" ]; then
mv "ARCHITECTURE.md" "$output_path/ARCHITECTURE.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="ARCHITECTURE.md "
}
fi
if [ -f "EXAMPLES.md" ]; then
mv "EXAMPLES.md" "$output_path/EXAMPLES.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="EXAMPLES.md "
}
fi
;;
http-api)
if [ -f "README.md" ]; then
mv "README.md" "$output_path/README.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="api/README.md "
}
fi
;;
esac
else
# Module-level documentation files
# Check and move API.md if it exists
if [ "$folder_type" = "code" ] && [ -f "API.md" ]; then
mv "API.md" "$output_path/API.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="API.md "
}
fi
# Check and move README.md if it exists
if [ -f "README.md" ]; then
mv "README.md" "$output_path/README.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="README.md "
}
fi
fi
fi
# Check if CLI tool auto-committed (and revert if needed)
if [ -n "$git_head_before" ]; then
local git_head_after=$(git rev-parse HEAD 2>/dev/null)
if [ "$git_head_before" != "$git_head_after" ]; then
echo " ⚠️ Detected unwanted auto-commit by CLI tool, reverting..."
git reset --soft "$git_head_before" 2>/dev/null
echo " ✅ Auto-commit reverted (files remain staged)"
fi
fi
if [ $docs_created -gt 0 ]; then
local end_time=$(date +%s)
local duration=$((end_time - start_time))
echo " ✅ Generated $docs_created doc(s) in ${duration}s: $moved_files"
cd - > /dev/null
return 0
else
echo " ❌ Documentation generation failed for $source_path"
cd - > /dev/null
return 1
fi
else
echo " ❌ Cannot access directory: $source_path"
return 1
fi
}
# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
# Show help if no arguments or help requested
if [ $# -eq 0 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
echo ""
echo "Module-Level Strategies:"
echo " full - Generate docs for all subdirectories with code"
echo " single - Generate docs only for current directory"
echo ""
echo "Project-Level Strategies:"
echo " project-readme - Generate project root README.md"
echo " project-architecture - Generate ARCHITECTURE.md + EXAMPLES.md"
echo " http-api - Generate HTTP API documentation (api/README.md)"
echo ""
echo "Tools: gemini (default), qwen, codex"
echo "Models: Use tool defaults if not specified"
echo ""
echo "Module Examples:"
echo " ./generate_module_docs.sh full ./src/auth myproject"
echo " ./generate_module_docs.sh single ./components myproject gemini"
echo ""
echo "Project Examples:"
echo " ./generate_module_docs.sh project-readme . myproject"
echo " ./generate_module_docs.sh project-architecture . myproject qwen"
echo " ./generate_module_docs.sh http-api . myproject"
exit 0
fi
generate_module_docs "$@"
fi

@@ -1,166 +0,0 @@
#!/bin/bash
# Get modules organized by directory depth (deepest first)
# Usage: get_modules_by_depth.sh [format]
# format: list|grouped|json (default: list)
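#
# Example invocations:
#   ./get_modules_by_depth.sh grouped   # human-readable tree, deepest first
#   ./get_modules_by_depth.sh json      # {"max_depth":N,"modules":{"<depth>":[...]}}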
# Parse .gitignore patterns and build exclusion filters
build_exclusion_filters() {
local filters=""
# Always exclude these system/cache directories and common web dev packages
local system_excludes=(
# Version control and IDE
".git" ".gitignore" ".gitmodules" ".gitattributes"
".svn" ".hg" ".bzr"
".history" ".vscode" ".idea" ".vs" ".vscode-test"
".sublime-text" ".atom"
# Python
"__pycache__" ".pytest_cache" ".mypy_cache" ".tox"
".coverage" "htmlcov" ".nox" ".venv" "venv" "env"
".egg-info" "*.egg-info" ".eggs" ".wheel"
"site-packages" ".python-version" ".pyc"
# Node.js/JavaScript
"node_modules" ".npm" ".yarn" ".pnpm" "yarn-error.log"
".nyc_output" "coverage" ".next" ".nuxt"
".cache" ".parcel-cache" ".vite" "dist" "build"
".turbo" ".vercel" ".netlify"
# Package managers
".pnpm-store" "pnpm-lock.yaml" "yarn.lock" "package-lock.json"
".bundle" "vendor/bundle" "Gemfile.lock"
".gradle" "gradle" "gradlew" "gradlew.bat"
".mvn" "target" ".m2"
# Build/compile outputs
"dist" "build" "out" "output" "_site" "public"
".output" ".generated" "generated" "gen"
"bin" "obj" "Debug" "Release"
# Testing
".pytest_cache" ".coverage" "htmlcov" "test-results"
".nyc_output" "junit.xml" "test_results"
"cypress/screenshots" "cypress/videos"
"playwright-report" ".playwright"
# Logs and temp files
"logs" "*.log" "log" "tmp" "temp" ".tmp" ".temp"
".env" ".env.local" ".env.*.local"
".DS_Store" "Thumbs.db" "*.tmp" "*.swp" "*.swo"
# Documentation build outputs
"_book" "_site" "docs/_build" "site" "gh-pages"
".docusaurus" ".vuepress" ".gitbook"
# Database files
"*.sqlite" "*.sqlite3" "*.db" "data.db"
# OS and editor files
".DS_Store" "Thumbs.db" "desktop.ini"
"*.stackdump" "*.core"
# Cloud and deployment
".serverless" ".terraform" "terraform.tfstate"
".aws" ".azure" ".gcp"
# Mobile development
".gradle" "build" ".expo" ".metro"
"android/app/build" "ios/build" "DerivedData"
# Game development
"Library" "Temp" "ProjectSettings"
"Logs" "MemoryCaptures" "UserSettings"
)
for exclude in "${system_excludes[@]}"; do
filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
done
# Parse .gitignore if it exists
if [ -f ".gitignore" ]; then
while IFS= read -r line; do
# Skip empty lines and comments
[[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue
# Remove trailing slash and whitespace
line=$(echo "$line" | sed 's|/$||' | xargs)
# Add to filters
filters+=" -not -path '*/$line' -not -path '*/$line/*'"
done < .gitignore
fi
echo "$filters"
}
get_modules_by_depth() {
local format="${1:-list}"
local exclusion_filters=$(build_exclusion_filters)
local max_depth=$(eval "find . -type d $exclusion_filters 2>/dev/null" | awk -F/ '{print NF-1}' | sort -n | tail -1)
case "$format" in
"grouped")
echo "📊 Modules by depth (deepest first):"
for depth in $(seq $max_depth -1 0); do
local dirs=$(eval "find . -mindepth $depth -maxdepth $depth -type d $exclusion_filters 2>/dev/null" | \
while read dir; do
if [ $(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l) -gt 0 ]; then
local claude_indicator=""
[ -f "$dir/CLAUDE.md" ] && claude_indicator=" [✓]"
echo "$dir$claude_indicator"
fi
done)
if [ -n "$dirs" ]; then
echo " 📁 Depth $depth:"
echo "$dirs" | sed 's/^/ - /'
fi
done
;;
"json")
echo "{"
echo " \"max_depth\": $max_depth,"
echo " \"modules\": {"
local first_entry=true
for depth in $(seq $max_depth -1 0); do
local dirs=$(eval "find . -mindepth $depth -maxdepth $depth -type d $exclusion_filters 2>/dev/null" | \
while read dir; do
if [ $(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l) -gt 0 ]; then
local has_claude="false"
[ -f "$dir/CLAUDE.md" ] && has_claude="true"
echo "{\"path\":\"$dir\",\"has_claude\":$has_claude}"
fi
done | tr '\n' ',')
if [ -n "$dirs" ]; then
dirs=${dirs%,} # Remove trailing comma
# Print the comma separator before each entry after the first, so the
# JSON stays valid even when lower depths contain no modules
if [ "$first_entry" = true ]; then first_entry=false; else echo ","; fi
printf ' "%s": [%s]' "$depth" "$dirs"
fi
done
echo ""
echo " }"
echo "}"
;;
"list"|*)
# Simple list format (deepest first)
for depth in $(seq $max_depth -1 0); do
eval "find . -mindepth $depth -maxdepth $depth -type d $exclusion_filters 2>/dev/null" | \
while read dir; do
if [ $(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l) -gt 0 ]; then
local file_count=$(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l)
local types=$(find "$dir" -maxdepth 1 -type f -name "*.*" 2>/dev/null | \
grep -E '\.[^/]*$' | sed 's/.*\.//' | sort -u | tr '\n' ',' | sed 's/,$//')
local has_claude="no"
[ -f "$dir/CLAUDE.md" ] && has_claude="yes"
echo "depth:$depth|path:$dir|files:$file_count|types:[$types]|has_claude:$has_claude"
fi
done
done
;;
esac
}
# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
get_modules_by_depth "$@"
fi

@@ -1,391 +0,0 @@
#!/bin/bash
#
# UI Generate Preview v2.0 - Template-Based Preview Generation
# Purpose: Generate compare.html and index.html using template substitution
# Template: ~/.claude/workflows/_template-compare-matrix.html
#
# Usage: ui-generate-preview.sh <prototypes_dir> [--template <path>]
#
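# Example (hypothetical directory):
#   ui-generate-preview.sh ./prototypes
#   ui-generate-preview.sh ./prototypes --template ./my-matrix-template.html
#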
set -e
# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Default template path
TEMPLATE_PATH="$HOME/.claude/workflows/_template-compare-matrix.html"
# Parse arguments
prototypes_dir="${1:-.}"
shift || true
while [[ $# -gt 0 ]]; do
case $1 in
--template)
TEMPLATE_PATH="$2"
shift 2
;;
*)
echo -e "${RED}Unknown option: $1${NC}"
exit 1
;;
esac
done
if [[ ! -d "$prototypes_dir" ]]; then
echo -e "${RED}Error: Directory not found: $prototypes_dir${NC}"
exit 1
fi
cd "$prototypes_dir" || exit 1
echo -e "${GREEN}📊 Auto-detecting matrix dimensions...${NC}"
# Auto-detect styles, layouts, targets from file patterns
# Pattern: {target}-style-{s}-layout-{l}.html
styles=$(find . -maxdepth 1 -name "*-style-*-layout-*.html" | \
sed 's/.*-style-\([0-9]\+\)-.*/\1/' | sort -un)
layouts=$(find . -maxdepth 1 -name "*-style-*-layout-*.html" | \
sed 's/.*-layout-\([0-9]\+\)\.html/\1/' | sort -un)
targets=$(find . -maxdepth 1 -name "*-style-*-layout-*.html" | \
sed 's/\.\///; s/-style-.*//' | sort -u)
# Count non-empty lines with grep -c . (wc -l reports 1 even for empty input);
# "|| true" keeps set -e from aborting when grep matches nothing
S=$(echo "$styles" | grep -c . || true)
L=$(echo "$layouts" | grep -c . || true)
T=$(echo "$targets" | grep -c . || true)
echo -e " Detected: ${GREEN}${S}${NC} styles × ${GREEN}${L}${NC} layouts × ${GREEN}${T}${NC} targets"
if [[ $S -eq 0 ]] || [[ $L -eq 0 ]] || [[ $T -eq 0 ]]; then
echo -e "${RED}Error: No prototype files found matching pattern {target}-style-{s}-layout-{l}.html${NC}"
exit 1
fi
# ============================================================================
# Generate compare.html from template
# ============================================================================
echo -e "${YELLOW}🎨 Generating compare.html from template...${NC}"
if [[ ! -f "$TEMPLATE_PATH" ]]; then
echo -e "${RED}Error: Template not found: $TEMPLATE_PATH${NC}"
exit 1
fi
# Build pages/targets JSON array
PAGES_JSON="["
first=true
for target in $targets; do
if [[ "$first" == true ]]; then
first=false
else
PAGES_JSON+=", "
fi
PAGES_JSON+="\"$target\""
done
PAGES_JSON+="]"
# Generate metadata
RUN_ID="run-$(date +%Y%m%d-%H%M%S)"
SESSION_ID="standalone"
TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u +"%Y-%m-%d")
# Replace placeholders in template
cat "$TEMPLATE_PATH" | \
sed "s|{{run_id}}|${RUN_ID}|g" | \
sed "s|{{session_id}}|${SESSION_ID}|g" | \
sed "s|{{timestamp}}|${TIMESTAMP}|g" | \
sed "s|{{style_variants}}|${S}|g" | \
sed "s|{{layout_variants}}|${L}|g" | \
sed "s|{{pages_json}}|${PAGES_JSON}|g" \
> compare.html
echo -e "${GREEN} ✓ Generated compare.html from template${NC}"
# ============================================================================
# Generate index.html
# ============================================================================
echo -e "${YELLOW}📋 Generating index.html...${NC}"
cat > index.html << 'EOF'
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>UI Prototypes Index</title>
<style>
* { margin: 0; padding: 0; box-sizing: border-box; }
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
max-width: 1200px;
margin: 0 auto;
padding: 40px 20px;
background: #f5f5f5;
}
h1 { margin-bottom: 10px; color: #333; }
.subtitle { color: #666; margin-bottom: 30px; }
.cta {
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
color: white;
padding: 20px;
border-radius: 8px;
margin-bottom: 30px;
box-shadow: 0 4px 6px rgba(0,0,0,0.1);
}
.cta h2 { margin-bottom: 10px; }
.cta a {
display: inline-block;
background: white;
color: #667eea;
padding: 10px 20px;
border-radius: 6px;
text-decoration: none;
font-weight: 600;
margin-top: 10px;
}
.cta a:hover { background: #f8f9fa; }
.style-section {
background: white;
padding: 20px;
border-radius: 8px;
margin-bottom: 20px;
box-shadow: 0 2px 4px rgba(0,0,0,0.1);
}
.style-section h2 {
color: #495057;
margin-bottom: 15px;
padding-bottom: 10px;
border-bottom: 2px solid #e9ecef;
}
.target-group {
margin-bottom: 20px;
}
.target-group h3 {
color: #6c757d;
font-size: 16px;
margin-bottom: 10px;
}
.link-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
gap: 10px;
}
.prototype-link {
padding: 12px 16px;
background: #f8f9fa;
border: 1px solid #dee2e6;
border-radius: 6px;
text-decoration: none;
color: #495057;
display: flex;
justify-content: space-between;
align-items: center;
transition: all 0.2s;
}
.prototype-link:hover {
background: #e9ecef;
border-color: #667eea;
transform: translateX(2px);
}
.prototype-link .label { font-weight: 500; }
.prototype-link .icon { color: #667eea; }
</style>
</head>
<body>
<h1>🎨 UI Prototypes Index</h1>
<p class="subtitle">Generated __S__×__L__×__T__ = __TOTAL__ prototypes</p>
<div class="cta">
<h2>📊 Interactive Comparison</h2>
<p>View all styles and layouts side-by-side in an interactive matrix</p>
<a href="compare.html">Open Matrix View →</a>
</div>
<h2>📂 All Prototypes</h2>
__CONTENT__
</body>
</html>
EOF
# Build content HTML
CONTENT=""
for style in $styles; do
CONTENT+="<div class='style-section'>"$'\n'
CONTENT+="<h2>Style ${style}</h2>"$'\n'
for target in $targets; do
target_capitalized="$(echo ${target:0:1} | tr '[:lower:]' '[:upper:]')${target:1}"
CONTENT+="<div class='target-group'>"$'\n'
CONTENT+="<h3>${target_capitalized}</h3>"$'\n'
CONTENT+="<div class='link-grid'>"$'\n'
for layout in $layouts; do
html_file="${target}-style-${style}-layout-${layout}.html"
if [[ -f "$html_file" ]]; then
CONTENT+="<a href='${html_file}' class='prototype-link' target='_blank'>"$'\n'
CONTENT+="<span class='label'>Layout ${layout}</span>"$'\n'
CONTENT+="<span class='icon'>↗</span>"$'\n'
CONTENT+="</a>"$'\n'
fi
done
CONTENT+="</div></div>"$'\n'
done
CONTENT+="</div>"$'\n'
done
# Calculate total
TOTAL_PROTOTYPES=$((S * L * T))
# Replace placeholders (using a temp file for complex replacement)
{
echo "$CONTENT" > /tmp/content_tmp.txt
sed "s|__S__|${S}|g" index.html | \
sed "s|__L__|${L}|g" | \
sed "s|__T__|${T}|g" | \
sed "s|__TOTAL__|${TOTAL_PROTOTYPES}|g" | \
sed -e "/__CONTENT__/r /tmp/content_tmp.txt" -e "/__CONTENT__/d" > /tmp/index_tmp.html
mv /tmp/index_tmp.html index.html
rm -f /tmp/content_tmp.txt
}
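# How the splice above works: sed's 's' command cannot insert multi-line text,
# so '/__CONTENT__/r file' appends the file's lines after each marker line and
# '/__CONTENT__/d' then deletes the marker itself, leaving only the content.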
echo -e "${GREEN} ✓ Generated index.html${NC}"
# ============================================================================
# Generate PREVIEW.md
# ============================================================================
echo -e "${YELLOW}📝 Generating PREVIEW.md...${NC}"
cat > PREVIEW.md << EOF
# UI Prototypes Preview Guide
Generated: $(date +"%Y-%m-%d %H:%M:%S")
## 📊 Matrix Dimensions
- **Styles**: ${S}
- **Layouts**: ${L}
- **Targets**: ${T}
- **Total Prototypes**: $((S*L*T))
## 🌐 How to View
### Option 1: Interactive Matrix (Recommended)
Open \`compare.html\` in your browser to see all prototypes in an interactive matrix view.
**Features**:
- Side-by-side comparison of all styles and layouts
- Switch between targets using the dropdown
- Adjust grid columns for better viewing
- Direct links to full-page views
- Selection system with export to JSON
- Fullscreen mode for detailed inspection
### Option 2: Simple Index
Open \`index.html\` for a simple list of all prototypes with direct links.
### Option 3: Direct File Access
Each prototype can be opened directly:
- Pattern: \`{target}-style-{s}-layout-{l}.html\`
- Example: \`dashboard-style-1-layout-1.html\`
## 📁 File Structure
\`\`\`
prototypes/
├── compare.html # Interactive matrix view
├── index.html # Simple navigation index
├── PREVIEW.md # This file
EOF
for style in $styles; do
for target in $targets; do
for layout in $layouts; do
echo "├── ${target}-style-${style}-layout-${layout}.html" >> PREVIEW.md
echo "├── ${target}-style-${style}-layout-${layout}.css" >> PREVIEW.md
done
done
done
cat >> PREVIEW.md << 'EOF2'
```
## 🎨 Style Variants
EOF2
for style in $styles; do
cat >> PREVIEW.md << EOF3
### Style ${style}
EOF3
style_guide="../style-extraction/style-${style}/style-guide.md"
if [[ -f "$style_guide" ]]; then
head -n 10 "$style_guide" | tail -n +2 >> PREVIEW.md 2>/dev/null || echo "Design philosophy and tokens" >> PREVIEW.md
else
echo "Design system ${style}" >> PREVIEW.md
fi
echo "" >> PREVIEW.md
done
cat >> PREVIEW.md << 'EOF4'
## 🎯 Targets
EOF4
for target in $targets; do
target_capitalized="$(echo ${target:0:1} | tr '[:lower:]' '[:upper:]')${target:1}"
echo "- **${target_capitalized}**: ${L} layouts × ${S} styles = $((L*S)) variations" >> PREVIEW.md
done
cat >> PREVIEW.md << 'EOF5'
## 💡 Tips
1. **Comparison**: Use compare.html to see how different styles affect the same layout
2. **Navigation**: Use index.html for quick access to specific prototypes
3. **Selection**: Mark favorites in compare.html using star icons
4. **Export**: Download selection JSON for implementation planning
5. **Inspection**: Open browser DevTools to inspect HTML structure and CSS
6. **Sharing**: All files are standalone - can be shared or deployed directly
## 📝 Next Steps
1. Review prototypes in compare.html
2. Select preferred style × layout combinations
3. Export selections as JSON
4. Provide feedback for refinement
5. Use selected designs for implementation
---
Generated by /workflow:ui-design:generate-v2 (Style-Centric Architecture)
EOF5
echo -e "${GREEN} ✓ Generated PREVIEW.md${NC}"
# ============================================================================
# Completion Summary
# ============================================================================
echo ""
echo -e "${GREEN}✅ Preview generation complete!${NC}"
echo -e " Files created: compare.html, index.html, PREVIEW.md"
echo -e " Matrix: ${S} styles × ${L} layouts × ${T} targets = $((S*L*T)) prototypes"
echo ""
echo -e "${YELLOW}🌐 Next Steps:${NC}"
echo -e " 1. Open compare.html for interactive matrix view"
echo -e " 2. Open index.html for simple navigation"
echo -e " 3. Read PREVIEW.md for detailed usage guide"
echo ""

View File

@@ -1,811 +0,0 @@
#!/bin/bash
# UI Prototype Instantiation Script with Preview Generation (v3.0 - Auto-detect)
# Purpose: Generate S × L × P final prototypes from templates, plus interactive preview files
# Usage:
# Simple: ui-instantiate-prototypes.sh <prototypes_dir>
# Full: ui-instantiate-prototypes.sh <base_path> <pages> <style_variants> <layout_variants> [options]
# Use safer error handling
set -o pipefail
# ============================================================================
# Helper Functions
# ============================================================================
log_info() {
echo "$1"
}
log_success() {
echo "$1"
}
log_error() {
echo "$1"
}
log_warning() {
echo "⚠️ $1"
}
# Auto-detect pages from templates directory
auto_detect_pages() {
local templates_dir="$1/_templates"
if [ ! -d "$templates_dir" ]; then
log_error "Templates directory not found: $templates_dir"
return 1
fi
# Find unique page names from template files (e.g., login-layout-1.html -> login)
local pages=$(find "$templates_dir" -name "*-layout-*.html" -type f | \
sed 's|.*/||' | \
sed 's|-layout-[0-9]*\.html||' | \
sort -u | \
tr '\n' ',' | \
sed 's/,$//')
echo "$pages"
}
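# Sketch of the detection above (hypothetical files):
#   _templates/login-layout-1.html + _templates/dashboard-layout-2.html
#   -> pages="dashboard,login" (sorted, comma-separated)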
# Auto-detect style variants count
auto_detect_style_variants() {
local base_path="$1"
local style_dir="$base_path/../style-extraction"
if [ ! -d "$style_dir" ]; then
log_warning "Style consolidation directory not found: $style_dir"
echo "3" # Default
return
fi
# Count style-* directories
local count=$(find "$style_dir" -maxdepth 1 -type d -name "style-*" | wc -l)
if [ "$count" -eq 0 ]; then
echo "3" # Default
else
echo "$count"
fi
}
# Auto-detect layout variants count
auto_detect_layout_variants() {
local templates_dir="$1/_templates"
if [ ! -d "$templates_dir" ]; then
echo "3" # Default
return
fi
# Find the first page and count its layouts
local first_page=$(find "$templates_dir" -name "*-layout-1.html" -type f | head -1 | sed 's|.*/||' | sed 's|-layout-1\.html||')
if [ -z "$first_page" ]; then
echo "3" # Default
return
fi
# Count layout files for this page
local count=$(find "$templates_dir" -name "${first_page}-layout-*.html" -type f | wc -l)
if [ "$count" -eq 0 ]; then
echo "3" # Default
else
echo "$count"
fi
}
# ============================================================================
# Parse Arguments
# ============================================================================
show_usage() {
cat <<'EOF'
Usage:
Simple (auto-detect): ui-instantiate-prototypes.sh <prototypes_dir> [options]
Full: ui-instantiate-prototypes.sh <base_path> <pages> <style_variants> <layout_variants> [options]
Simple Mode (Recommended):
prototypes_dir Path to prototypes directory (auto-detects everything)
Full Mode:
base_path Base path to prototypes directory
pages Comma-separated list of pages/components
style_variants Number of style variants (1-5)
layout_variants Number of layout variants (1-5)
Options:
--run-id <id> Run ID (default: auto-generated)
--session-id <id> Session ID (default: standalone)
--mode <page|component> Exploration mode (default: page)
--template <path> Path to compare.html template (default: ~/.claude/workflows/_template-compare-matrix.html)
--no-preview Skip preview file generation
--help Show this help message
Examples:
# Simple usage (auto-detect everything)
ui-instantiate-prototypes.sh .workflow/design-run-*/prototypes
# With options
ui-instantiate-prototypes.sh .workflow/design-run-*/prototypes --session-id WFS-auth
# Full manual mode
ui-instantiate-prototypes.sh .workflow/design-run-*/prototypes "login,dashboard" 3 3 --session-id WFS-auth
EOF
}
# Default values
BASE_PATH=""
PAGES=""
STYLE_VARIANTS=""
LAYOUT_VARIANTS=""
RUN_ID="run-$(date +%Y%m%d-%H%M%S)"
SESSION_ID="standalone"
MODE="page"
TEMPLATE_PATH="$HOME/.claude/workflows/_template-compare-matrix.html"
GENERATE_PREVIEW=true
AUTO_DETECT=false
# Parse arguments
if [ $# -lt 1 ]; then
log_error "Missing required arguments"
show_usage
exit 1
fi
# Check if using simple mode (only 1 positional arg before options)
if [ $# -eq 1 ] || [[ "$2" == --* ]]; then
# Simple mode - auto-detect
AUTO_DETECT=true
BASE_PATH="$1"
shift 1
else
# Full mode - manual parameters
if [ $# -lt 4 ]; then
log_error "Full mode requires 4 positional arguments"
show_usage
exit 1
fi
BASE_PATH="$1"
PAGES="$2"
STYLE_VARIANTS="$3"
LAYOUT_VARIANTS="$4"
shift 4
fi
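# Examples of the dispatch above (hypothetical invocations):
#   ui-instantiate-prototypes.sh ./prototypes --session-id X    -> simple mode (auto-detect)
#   ui-instantiate-prototypes.sh ./prototypes "login,dash" 3 3  -> full mode (manual)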
# Parse optional arguments
while [[ $# -gt 0 ]]; do
case $1 in
--run-id)
RUN_ID="$2"
shift 2
;;
--session-id)
SESSION_ID="$2"
shift 2
;;
--mode)
MODE="$2"
shift 2
;;
--template)
TEMPLATE_PATH="$2"
shift 2
;;
--no-preview)
GENERATE_PREVIEW=false
shift
;;
--help)
show_usage
exit 0
;;
*)
log_error "Unknown option: $1"
show_usage
exit 1
;;
esac
done
# ============================================================================
# Auto-detection (if enabled)
# ============================================================================
if [ "$AUTO_DETECT" = true ]; then
log_info "🔍 Auto-detecting configuration from directory..."
# Detect pages
PAGES=$(auto_detect_pages "$BASE_PATH")
if [ -z "$PAGES" ]; then
log_error "Could not auto-detect pages from templates"
exit 1
fi
log_info " Pages: $PAGES"
# Detect style variants
STYLE_VARIANTS=$(auto_detect_style_variants "$BASE_PATH")
log_info " Style variants: $STYLE_VARIANTS"
# Detect layout variants
LAYOUT_VARIANTS=$(auto_detect_layout_variants "$BASE_PATH")
log_info " Layout variants: $LAYOUT_VARIANTS"
echo ""
fi
# ============================================================================
# Validation
# ============================================================================
# Validate base path
if [ ! -d "$BASE_PATH" ]; then
log_error "Base path not found: $BASE_PATH"
exit 1
fi
# Validate style and layout variants
if [ "$STYLE_VARIANTS" -lt 1 ] || [ "$STYLE_VARIANTS" -gt 5 ]; then
log_error "Style variants must be between 1 and 5 (got: $STYLE_VARIANTS)"
exit 1
fi
if [ "$LAYOUT_VARIANTS" -lt 1 ] || [ "$LAYOUT_VARIANTS" -gt 5 ]; then
log_error "Layout variants must be between 1 and 5 (got: $LAYOUT_VARIANTS)"
exit 1
fi
# Validate STYLE_VARIANTS against actual style directories
if [ "$STYLE_VARIANTS" -gt 0 ]; then
style_dir="$BASE_PATH/../style-extraction"
if [ ! -d "$style_dir" ]; then
log_error "Style consolidation directory not found: $style_dir"
log_info "Run /workflow:ui-design:consolidate first"
exit 1
fi
actual_styles=$(find "$style_dir" -maxdepth 1 -type d -name "style-*" 2>/dev/null | wc -l)
if [ "$actual_styles" -eq 0 ]; then
log_error "No style directories found in: $style_dir"
log_info "Run /workflow:ui-design:consolidate first to generate style design systems"
exit 1
fi
if [ "$STYLE_VARIANTS" -gt "$actual_styles" ]; then
log_warning "Requested $STYLE_VARIANTS style variants, but only found $actual_styles directories"
log_info "Available style directories:"
find "$style_dir" -maxdepth 1 -type d -name "style-*" 2>/dev/null | sed 's|.*/||' | sort
log_info "Auto-correcting to $actual_styles style variants"
STYLE_VARIANTS=$actual_styles
fi
fi
# Parse pages into array
IFS=',' read -ra PAGE_ARRAY <<< "$PAGES"
if [ ${#PAGE_ARRAY[@]} -eq 0 ]; then
log_error "No pages found"
exit 1
fi
# ============================================================================
# Header Output
# ============================================================================
echo "========================================="
echo "UI Prototype Instantiation & Preview"
if [ "$AUTO_DETECT" = true ]; then
echo "(Auto-detected configuration)"
fi
echo "========================================="
echo "Base Path: $BASE_PATH"
echo "Mode: $MODE"
echo "Pages/Components: $PAGES"
echo "Style Variants: $STYLE_VARIANTS"
echo "Layout Variants: $LAYOUT_VARIANTS"
echo "Run ID: $RUN_ID"
echo "Session ID: $SESSION_ID"
echo "========================================="
echo ""
# Change to base path
cd "$BASE_PATH" || exit 1
# ============================================================================
# Phase 1: Instantiate Prototypes
# ============================================================================
log_info "🚀 Phase 1: Instantiating prototypes from templates..."
echo ""
total_generated=0
total_failed=0
for page in "${PAGE_ARRAY[@]}"; do
# Trim whitespace
page=$(echo "$page" | xargs)
log_info "Processing page/component: $page"
for s in $(seq 1 "$STYLE_VARIANTS"); do
for l in $(seq 1 "$LAYOUT_VARIANTS"); do
# Define file paths
TEMPLATE_HTML="_templates/${page}-layout-${l}.html"
STRUCTURAL_CSS="_templates/${page}-layout-${l}.css"
TOKEN_CSS="../style-extraction/style-${s}/tokens.css"
OUTPUT_HTML="${page}-style-${s}-layout-${l}.html"
# Copy template and replace placeholders
if [ -f "$TEMPLATE_HTML" ]; then
cp "$TEMPLATE_HTML" "$OUTPUT_HTML" || {
log_error "Failed to copy template: $TEMPLATE_HTML"
((total_failed++))
continue
}
# Replace CSS placeholders (GNU sed in-place syntax, as used by Git Bash on Windows)
sed -i "s|{{STRUCTURAL_CSS}}|${STRUCTURAL_CSS}|g" "$OUTPUT_HTML" || true
sed -i "s|{{TOKEN_CSS}}|${TOKEN_CSS}|g" "$OUTPUT_HTML" || true
log_success "Created: $OUTPUT_HTML"
((total_generated++))
# Create implementation notes (simplified)
NOTES_FILE="${page}-style-${s}-layout-${l}-notes.md"
# Generate notes with simple heredoc
cat > "$NOTES_FILE" <<NOTESEOF
# Implementation Notes: ${page}-style-${s}-layout-${l}
## Generation Details
- **Template**: ${TEMPLATE_HTML}
- **Structural CSS**: ${STRUCTURAL_CSS}
- **Style Tokens**: ${TOKEN_CSS}
- **Layout Strategy**: Layout ${l}
- **Style Variant**: Style ${s}
- **Mode**: ${MODE}
## Template Reuse
This prototype was generated from a shared layout template to ensure consistency
across all style variants. The HTML structure is identical for all ${page}-layout-${l}
prototypes, with only the design tokens (colors, fonts, spacing) varying.
## Design System Reference
Refer to \`../style-extraction/style-${s}/style-guide.md\` for:
- Design philosophy
- Token usage guidelines
- Component patterns
- Accessibility requirements
## Customization
To modify this prototype:
1. Edit the layout template: \`${TEMPLATE_HTML}\` (affects all styles)
2. Edit the structural CSS: \`${STRUCTURAL_CSS}\` (affects all styles)
3. Edit design tokens: \`${TOKEN_CSS}\` (affects only this style variant)
## Run Information
- **Run ID**: ${RUN_ID}
- **Session ID**: ${SESSION_ID}
- **Generated**: $(date -u +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u +%Y-%m-%d)
NOTESEOF
else
log_error "Template not found: $TEMPLATE_HTML"
((total_failed++))
fi
done
done
done
echo ""
log_success "Phase 1 complete: Generated ${total_generated} prototypes"
if [ $total_failed -gt 0 ]; then
log_warning "Failed: ${total_failed} prototypes"
fi
echo ""
# ============================================================================
# Phase 2: Generate Preview Files (if enabled)
# ============================================================================
if [ "$GENERATE_PREVIEW" = false ]; then
log_info "⏭️ Skipping preview generation (--no-preview flag)"
exit 0
fi
log_info "🎨 Phase 2: Generating preview files..."
echo ""
# ============================================================================
# 2a. Generate compare.html from template
# ============================================================================
if [ ! -f "$TEMPLATE_PATH" ]; then
log_warning "Template not found: $TEMPLATE_PATH"
log_info " Skipping compare.html generation"
else
log_info "📄 Generating compare.html from template..."
# Convert page array to JSON format
PAGES_JSON="["
for i in "${!PAGE_ARRAY[@]}"; do
page=$(echo "${PAGE_ARRAY[$i]}" | xargs)
PAGES_JSON+="\"$page\""
if [ $i -lt $((${#PAGE_ARRAY[@]} - 1)) ]; then
PAGES_JSON+=", "
fi
done
PAGES_JSON+="]"
TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u +%Y-%m-%d)
# Read template and replace placeholders
cat "$TEMPLATE_PATH" | \
sed "s|{{run_id}}|${RUN_ID}|g" | \
sed "s|{{session_id}}|${SESSION_ID}|g" | \
sed "s|{{timestamp}}|${TIMESTAMP}|g" | \
sed "s|{{style_variants}}|${STYLE_VARIANTS}|g" | \
sed "s|{{layout_variants}}|${LAYOUT_VARIANTS}|g" | \
sed "s|{{pages_json}}|${PAGES_JSON}|g" \
> compare.html
log_success "Generated: compare.html"
fi
# ============================================================================
# 2b. Generate index.html
# ============================================================================
log_info "📄 Generating index.html..."
# Calculate total prototypes
TOTAL_PROTOTYPES=$((STYLE_VARIANTS * LAYOUT_VARIANTS * ${#PAGE_ARRAY[@]}))
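# e.g. 3 styles × 3 layouts × 2 pages = 18 prototypes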
# Generate index.html with simple heredoc
cat > index.html <<'INDEXEOF'
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>UI Prototypes - __MODE__ Mode - __RUN_ID__</title>
<style>
body {
font-family: system-ui, -apple-system, sans-serif;
max-width: 900px;
margin: 2rem auto;
padding: 0 2rem;
background: #f9fafb;
}
.header {
background: white;
padding: 2rem;
border-radius: 0.75rem;
box-shadow: 0 1px 3px rgba(0,0,0,0.1);
margin-bottom: 2rem;
}
h1 {
color: #2563eb;
margin-bottom: 0.5rem;
font-size: 2rem;
}
.meta {
color: #6b7280;
font-size: 0.875rem;
margin-top: 0.5rem;
}
.info {
background: #f3f4f6;
padding: 1.5rem;
border-radius: 0.5rem;
margin: 1.5rem 0;
border-left: 4px solid #2563eb;
}
.cta {
display: inline-block;
background: #2563eb;
color: white;
padding: 1rem 2rem;
border-radius: 0.5rem;
text-decoration: none;
font-weight: 600;
margin: 1rem 0;
transition: background 0.2s;
}
.cta:hover {
background: #1d4ed8;
}
.stats {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
gap: 1rem;
margin: 1.5rem 0;
}
.stat {
background: white;
border: 1px solid #e5e7eb;
padding: 1.5rem;
border-radius: 0.5rem;
text-align: center;
box-shadow: 0 1px 2px rgba(0,0,0,0.05);
}
.stat-value {
font-size: 2.5rem;
font-weight: bold;
color: #2563eb;
margin-bottom: 0.25rem;
}
.stat-label {
color: #6b7280;
font-size: 0.875rem;
}
.section {
background: white;
padding: 2rem;
border-radius: 0.75rem;
margin-bottom: 2rem;
box-shadow: 0 1px 3px rgba(0,0,0,0.1);
}
h2 {
color: #1f2937;
margin-bottom: 1rem;
font-size: 1.5rem;
}
ul {
line-height: 1.8;
color: #374151;
}
.pages-list {
list-style: none;
padding: 0;
}
.pages-list li {
background: #f9fafb;
padding: 0.75rem 1rem;
margin: 0.5rem 0;
border-radius: 0.375rem;
border-left: 3px solid #2563eb;
}
.badge {
display: inline-block;
background: #dbeafe;
color: #1e40af;
padding: 0.25rem 0.75rem;
border-radius: 0.25rem;
font-size: 0.75rem;
font-weight: 600;
margin-left: 0.5rem;
}
</style>
</head>
<body>
<div class="header">
<h1>🎨 UI Prototype __MODE__ Mode</h1>
<div class="meta">
<strong>Run ID:</strong> __RUN_ID__ |
<strong>Session:</strong> __SESSION_ID__ |
<strong>Generated:</strong> __TIMESTAMP__
</div>
</div>
<div class="info">
<p><strong>Matrix Configuration:</strong> __STYLE_VARIANTS__ styles × __LAYOUT_VARIANTS__ layouts × __PAGE_COUNT__ __MODE__s</p>
<p><strong>Total Prototypes:</strong> __TOTAL_PROTOTYPES__ interactive HTML files</p>
</div>
<a href="compare.html" class="cta">🔍 Open Interactive Matrix Comparison →</a>
<div class="stats">
<div class="stat">
<div class="stat-value">__STYLE_VARIANTS__</div>
<div class="stat-label">Style Variants</div>
</div>
<div class="stat">
<div class="stat-value">__LAYOUT_VARIANTS__</div>
<div class="stat-label">Layout Options</div>
</div>
<div class="stat">
<div class="stat-value">__PAGE_COUNT__</div>
<div class="stat-label">__MODE__s</div>
</div>
<div class="stat">
<div class="stat-value">__TOTAL_PROTOTYPES__</div>
<div class="stat-label">Total Prototypes</div>
</div>
</div>
<div class="section">
<h2>🌟 Features</h2>
<ul>
<li><strong>Interactive Matrix View:</strong> __STYLE_VARIANTS__×__LAYOUT_VARIANTS__ grid with synchronized scrolling</li>
<li><strong>Flexible Zoom:</strong> 25%, 50%, 75%, 100% viewport scaling</li>
<li><strong>Fullscreen Mode:</strong> Detailed view for individual prototypes</li>
<li><strong>Selection System:</strong> Mark favorites with export to JSON</li>
<li><strong>__MODE__ Switcher:</strong> Compare different __MODE__s side-by-side</li>
<li><strong>Persistent State:</strong> Selections saved in localStorage</li>
</ul>
</div>
<div class="section">
<h2>📄 Generated __MODE__s</h2>
<ul class="pages-list">
__PAGES_LIST__
</ul>
</div>
<div class="section">
<h2>📚 Next Steps</h2>
<ol>
<li>Open <code>compare.html</code> to explore all variants in matrix view</li>
<li>Use zoom and sync scroll controls to compare details</li>
<li>Select your preferred style×layout combinations</li>
<li>Export selections as JSON for implementation planning</li>
<li>Review implementation notes in <code>*-notes.md</code> files</li>
</ol>
</div>
</body>
</html>
INDEXEOF
# Build pages list HTML
PAGES_LIST_HTML=""
for page in "${PAGE_ARRAY[@]}"; do
page=$(echo "$page" | xargs)
VARIANT_COUNT=$((STYLE_VARIANTS * LAYOUT_VARIANTS))
PAGES_LIST_HTML+=" <li>\n"
PAGES_LIST_HTML+=" <strong>${page}</strong>\n"
PAGES_LIST_HTML+=" <span class=\"badge\">${STYLE_VARIANTS}×${LAYOUT_VARIANTS} = ${VARIANT_COUNT} variants</span>\n"
PAGES_LIST_HTML+=" </li>\n"
done
# Replace all placeholders in index.html
MODE_UPPER=$(echo "$MODE" | awk '{print toupper(substr($0,1,1)) tolower(substr($0,2))}')
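# e.g. MODE="page" -> MODE_UPPER="Page"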
sed -i "s|__RUN_ID__|${RUN_ID}|g" index.html
sed -i "s|__SESSION_ID__|${SESSION_ID}|g" index.html
sed -i "s|__TIMESTAMP__|${TIMESTAMP}|g" index.html
sed -i "s|__MODE__|${MODE_UPPER}|g" index.html
sed -i "s|__STYLE_VARIANTS__|${STYLE_VARIANTS}|g" index.html
sed -i "s|__LAYOUT_VARIANTS__|${LAYOUT_VARIANTS}|g" index.html
sed -i "s|__PAGE_COUNT__|${#PAGE_ARRAY[@]}|g" index.html
sed -i "s|__TOTAL_PROTOTYPES__|${TOTAL_PROTOTYPES}|g" index.html
sed -i "s|__PAGES_LIST__|${PAGES_LIST_HTML}|g" index.html
log_success "Generated: index.html"
# ============================================================================
# 2c. Generate PREVIEW.md
# ============================================================================
log_info "📄 Generating PREVIEW.md..."
cat > PREVIEW.md <<PREVIEWEOF
# UI Prototype Preview Guide
## Quick Start
1. Open \`index.html\` for overview and navigation
2. Open \`compare.html\` for interactive matrix comparison
3. Use browser developer tools to inspect responsive behavior
## Configuration
- **Exploration Mode:** ${MODE_UPPER}
- **Run ID:** ${RUN_ID}
- **Session ID:** ${SESSION_ID}
- **Style Variants:** ${STYLE_VARIANTS}
- **Layout Options:** ${LAYOUT_VARIANTS}
- **${MODE_UPPER}s:** ${PAGES}
- **Total Prototypes:** ${TOTAL_PROTOTYPES}
- **Generated:** ${TIMESTAMP}
## File Naming Convention
\`\`\`
{${MODE}}-style-{s}-layout-{l}.html
\`\`\`
**Example:** \`dashboard-style-1-layout-2.html\`
- ${MODE_UPPER}: dashboard
- Style: Design system 1
- Layout: Layout variant 2
## Interactive Features (compare.html)
### Matrix View
- **Grid Layout:** ${STYLE_VARIANTS}×${LAYOUT_VARIANTS} table with all prototypes visible
- **Synchronized Scroll:** All iframes scroll together (toggle with button)
- **Zoom Controls:** Adjust viewport scale (25%, 50%, 75%, 100%)
- **${MODE_UPPER} Selector:** Switch between different ${MODE}s instantly
### Prototype Actions
- **⭐ Selection:** Click star icon to mark favorites
- **⛶ Fullscreen:** View prototype in fullscreen overlay
- **↗ New Tab:** Open prototype in dedicated browser tab
### Selection Export
1. Select preferred prototypes using star icons
2. Click "Export Selection" button
3. Downloads JSON file: \`selection-${RUN_ID}.json\`
4. Use exported file for implementation planning
## Design System References
Each prototype references a specific style design system:
PREVIEWEOF
# Add style references
for s in $(seq 1 "$STYLE_VARIANTS"); do
cat >> PREVIEW.md <<STYLEEOF
### Style ${s}
- **Tokens:** \`../style-extraction/style-${s}/design-tokens.json\`
- **CSS Variables:** \`../style-extraction/style-${s}/tokens.css\`
- **Style Guide:** \`../style-extraction/style-${s}/style-guide.md\`
STYLEEOF
done
cat >> PREVIEW.md <<'FOOTEREOF'
## Responsive Testing
All prototypes are mobile-first responsive. Test at these breakpoints:
- **Mobile:** 375px - 767px
- **Tablet:** 768px - 1023px
- **Desktop:** 1024px+
Use browser DevTools responsive mode for testing.
## Accessibility Features
- Semantic HTML5 structure
- ARIA attributes for screen readers
- Keyboard navigation support
- Proper heading hierarchy
- Focus indicators
## Next Steps
1. **Review:** Open `compare.html` and explore all variants
2. **Select:** Mark preferred prototypes using star icons
3. **Export:** Download selection JSON for implementation
4. **Implement:** Use `/workflow:ui-design:update` to integrate selected designs
5. **Plan:** Run `/workflow:plan` to generate implementation tasks
---
**Generated by:** `ui-instantiate-prototypes.sh`
**Version:** 3.0 (auto-detect mode)
FOOTEREOF
log_success "Generated: PREVIEW.md"
# ============================================================================
# Completion Summary
# ============================================================================
echo ""
echo "========================================="
echo "✅ Generation Complete!"
echo "========================================="
echo ""
echo "📊 Summary:"
echo " Prototypes: ${total_generated} generated"
if [ $total_failed -gt 0 ]; then
echo " Failed: ${total_failed}"
fi
echo " Preview Files: compare.html, index.html, PREVIEW.md"
echo " Matrix: ${STYLE_VARIANTS}×${LAYOUT_VARIANTS} (${#PAGE_ARRAY[@]} ${MODE}s)"
echo " Total Files: ${TOTAL_PROTOTYPES} prototypes + preview files"
echo ""
echo "🌐 Next Steps:"
echo " 1. Open: ${BASE_PATH}/index.html"
echo " 2. Explore: ${BASE_PATH}/compare.html"
echo " 3. Review: ${BASE_PATH}/PREVIEW.md"
echo ""
echo "Performance: Template-based approach with ${STYLE_VARIANTS}× speedup"
echo "========================================="

View File

@@ -1,333 +0,0 @@
#!/bin/bash
# Update CLAUDE.md for modules with two strategies
# Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]
# strategy: single-layer|multi-layer
# module_path: Path to the module directory
# tool: gemini|qwen|codex (default: gemini)
# model: Model name (optional, uses tool defaults)
#
# Default Models:
# gemini: gemini-2.5-flash
# qwen: coder-model
# codex: gpt5-codex
#
# Strategies:
# single-layer: Upward aggregation
# - Read: Current directory code + child CLAUDE.md files
# - Generate: Single ./CLAUDE.md in current directory
# - Use: Large projects, incremental bottom-up updates
#
# multi-layer: Downward distribution
# - Read: All files in current and subdirectories
# - Generate: CLAUDE.md for each directory containing files
# - Use: Small projects, full documentation generation
#
# Features:
# - Minimal prompts based on unified template
# - Respects .gitignore patterns
# - Path-focused processing (script only cares about paths)
# - Template-driven generation
# Build exclusion filters from .gitignore
build_exclusion_filters() {
local filters=""
# Common system/cache directories to exclude
local system_excludes=(
".git" "__pycache__" "node_modules" ".venv" "venv" "env"
"dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
"coverage" ".nyc_output" "logs" "tmp" "temp"
)
for exclude in "${system_excludes[@]}"; do
filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
done
# Find and parse .gitignore (current dir first, then git root)
local gitignore_file=""
# Check current directory first
if [ -f ".gitignore" ]; then
gitignore_file=".gitignore"
else
# Try to find git root and check for .gitignore there
local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
gitignore_file="$git_root/.gitignore"
fi
fi
# Parse .gitignore if found
if [ -n "$gitignore_file" ]; then
while IFS= read -r line; do
# Skip empty lines and comments
[[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue
# Remove trailing slash and whitespace
line=$(echo "$line" | sed 's|/$||' | xargs)
# Skip wildcard patterns (too complex for simple find filters)
[[ "$line" =~ \* ]] && continue
# Add to filters
filters+=" -not -path '*/$line' -not -path '*/$line/*'"
done < "$gitignore_file"
fi
echo "$filters"
}
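# The returned string embeds single-quoted patterns, e.g.
#   " -not -path '*/.git' -not -path '*/.git/*' ..."
# which is why callers run it through eval: plain word-splitting would pass
# the quote characters to find as literal parts of each pattern.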
# Scan directory structure and generate structured information
scan_directory_structure() {
local target_path="$1"
local strategy="$2"
if [ ! -d "$target_path" ]; then
echo "Directory not found: $target_path"
return 1
fi
local exclusion_filters=$(build_exclusion_filters)
local structure_info=""
# Get basic directory info
local dir_name=$(basename "$target_path")
local total_files=$(eval "find \"$target_path\" -type f $exclusion_filters 2>/dev/null" | wc -l)
local total_dirs=$(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null" | wc -l)
structure_info+="Directory: $dir_name\n"
structure_info+="Total files: $total_files\n"
structure_info+="Total directories: $total_dirs\n\n"
if [ "$strategy" = "multi-layer" ]; then
# For multi-layer: show all subdirectories with file counts
structure_info+="Subdirectories with files:\n"
while IFS= read -r dir; do
if [ -n "$dir" ] && [ "$dir" != "$target_path" ]; then
local rel_path=${dir#$target_path/}
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
if [ $file_count -gt 0 ]; then
structure_info+=" - $rel_path/ ($file_count files)\n"
fi
fi
done < <(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null")
else
# For single-layer: show direct children only
structure_info+="Direct subdirectories:\n"
while IFS= read -r dir; do
if [ -n "$dir" ]; then
local dir_name=$(basename "$dir")
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
local has_claude=$([ -f "$dir/CLAUDE.md" ] && echo " [has CLAUDE.md]" || echo "")
structure_info+=" - $dir_name/ ($file_count files)$has_claude\n"
fi
done < <(eval "find \"$target_path\" -maxdepth 1 -type d $exclusion_filters 2>/dev/null" | grep -v "^$target_path$")
fi
# Show main file types in current directory
structure_info+="\nCurrent directory files:\n"
local code_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' \\) $exclusion_filters 2>/dev/null" | wc -l)
local config_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.json' -o -name '*.yaml' -o -name '*.yml' -o -name '*.toml' \\) $exclusion_filters 2>/dev/null" | wc -l)
local doc_files=$(eval "find \"$target_path\" -maxdepth 1 -type f -name '*.md' $exclusion_filters 2>/dev/null" | wc -l)
structure_info+=" - Code files: $code_files\n"
structure_info+=" - Config files: $config_files\n"
structure_info+=" - Documentation: $doc_files\n"
printf "%b" "$structure_info"
}
update_module_claude() {
local strategy="$1"
local module_path="$2"
local tool="${3:-gemini}"
local model="$4"
# Validate parameters
if [ -z "$strategy" ] || [ -z "$module_path" ]; then
echo "❌ Error: Strategy and module path are required"
echo "Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]"
echo "Strategies: single-layer|multi-layer"
return 1
fi
# Validate strategy
if [ "$strategy" != "single-layer" ] && [ "$strategy" != "multi-layer" ]; then
echo "❌ Error: Invalid strategy '$strategy'"
echo "Valid strategies: single-layer, multi-layer"
return 1
fi
if [ ! -d "$module_path" ]; then
echo "❌ Error: Directory '$module_path' does not exist"
return 1
fi
# Set default models if not specified
if [ -z "$model" ]; then
case "$tool" in
gemini)
model="gemini-2.5-flash"
;;
qwen)
model="coder-model"
;;
codex)
model="gpt5-codex"
;;
*)
model=""
;;
esac
fi
# Build exclusion filters from .gitignore
local exclusion_filters=$(build_exclusion_filters)
# Check if directory has files (excluding gitignored paths)
local file_count=$(eval "find \"$module_path\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
if [ $file_count -eq 0 ]; then
echo "⚠️ Skipping '$module_path' - no files found (after .gitignore filtering)"
return 0
fi
# Use unified template for all modules
local template_path="$HOME/.claude/workflows/cli-templates/prompts/memory/02-document-module-structure.txt"
# Read template content directly
local template_content=""
if [ -f "$template_path" ]; then
template_content=$(cat "$template_path")
echo " 📋 Loaded template: $(wc -l < "$template_path") lines"
else
echo " ⚠️ Template not found: $template_path"
echo " Using fallback template..."
template_content="Create comprehensive CLAUDE.md documentation following standard structure with Purpose, Structure, Components, Dependencies, Integration, and Implementation sections."
fi
# Scan directory structure first
echo " 🔍 Scanning directory structure..."
local structure_info=$(scan_directory_structure "$module_path" "$strategy")
# Prepare logging info
local module_name=$(basename "$module_path")
echo "⚡ Updating: $module_path"
echo " Strategy: $strategy | Tool: $tool | Model: $model | Files: $file_count"
echo " Template: $(basename "$template_path") ($(echo "$template_content" | wc -l) lines)"
echo " Structure: Scanned $(echo "$structure_info" | wc -l) lines of structure info"
# Build minimal strategy-specific prompt with explicit paths and structure info
local final_prompt=""
if [ "$strategy" = "multi-layer" ]; then
# multi-layer strategy: read all, generate for each directory
final_prompt="Directory Structure Analysis:
$structure_info
Read: @**/*
Generate CLAUDE.md files:
- Primary: ./CLAUDE.md (current directory)
- Additional: CLAUDE.md in each subdirectory containing files
Template Guidelines:
$template_content
Instructions:
- Work bottom-up: deepest directories first
- Parent directories reference children
- Each CLAUDE.md file must be in its respective directory
- Follow the template guidelines above for consistent structure
- Use the structure analysis to understand directory hierarchy"
else
# single-layer strategy: read current + child CLAUDE.md, generate current only
final_prompt="Directory Structure Analysis:
$structure_info
Read: @*/CLAUDE.md @*.ts @*.tsx @*.js @*.jsx @*.py @*.sh @*.md @*.json @*.yaml @*.yml
Generate single file: ./CLAUDE.md
Template Guidelines:
$template_content
Instructions:
- Create exactly one CLAUDE.md file in the current directory
- Reference child CLAUDE.md files, do not duplicate their content
- Follow the template guidelines above for consistent structure
- Use the structure analysis to understand the current directory context"
fi
# Execute update
local start_time=$(date +%s)
echo " 🔄 Starting update..."
if cd "$module_path" 2>/dev/null; then
local tool_result=0
# Execute with selected tool
# NOTE: Model parameter (-m) is placed AFTER the prompt
case "$tool" in
qwen)
if [ "$model" = "coder-model" ]; then
# coder-model is default, -m is optional
qwen -p "$final_prompt" --yolo 2>&1
else
qwen -p "$final_prompt" -m "$model" --yolo 2>&1
fi
tool_result=$?
;;
codex)
codex --full-auto exec "$final_prompt" -m "$model" --skip-git-repo-check -s danger-full-access 2>&1
tool_result=$?
;;
gemini)
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
tool_result=$?
;;
*)
echo " ⚠️ Unknown tool: $tool, defaulting to gemini"
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
tool_result=$?
;;
esac
if [ $tool_result -eq 0 ]; then
local end_time=$(date +%s)
local duration=$((end_time - start_time))
echo " ✅ Completed in ${duration}s"
cd - > /dev/null
return 0
else
echo " ❌ Update failed for $module_path"
cd - > /dev/null
return 1
fi
else
echo " ❌ Cannot access directory: $module_path"
return 1
fi
}
# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
# Show help if no arguments or help requested
if [ $# -eq 0 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
echo "Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]"
echo ""
echo "Strategies:"
echo " single-layer - Read current dir code + child CLAUDE.md, generate ./CLAUDE.md"
echo " multi-layer - Read all files, generate CLAUDE.md for each directory"
echo ""
echo "Tools: gemini (default), qwen, codex"
echo "Models: Use tool defaults if not specified"
echo ""
echo "Examples:"
echo " ./update_module_claude.sh single-layer ./src/auth"
echo " ./update_module_claude.sh multi-layer ./components gemini gemini-2.5-flash"
exit 0
fi
update_module_claude "$@"
fi

View File

@@ -1,13 +1,13 @@
---
name: command-guide
description: Workflow command guide for Claude DMS3 (78 commands). Search/browse commands, get next-step recommendations, view documentation, report issues. Triggers "CCW-help", "CCW-issue", "ccw-help", "ccw-issue", "ccw"
description: Workflow command guide for Claude Code Workflow (78 commands). Search/browse commands, get next-step recommendations, view documentation, report issues. Triggers "CCW-help", "CCW-issue", "ccw-help", "ccw-issue", "ccw"
allowed-tools: Read, Grep, Glob, AskUserQuestion
version: 5.8.0
---
# Command Guide Skill
Comprehensive command guide for Claude DMS3 workflow system covering 78 commands across 5 categories (workflow, cli, memory, task, general).
Comprehensive command guide for Claude Code Workflow (CCW) system covering 78 commands across 5 categories (workflow, cli, memory, task, general).
## 🆕 What's New in v5.8.0
@@ -200,21 +200,21 @@ Comprehensive command guide for Claude DMS3 workflow system covering 78 commands
**Complex Query** (CLI-assisted analysis):
1. **Detect complexity indicators** (multi-command comparisons, workflow analysis, best-practice questions)
2. **Design targeted analysis prompt** for gemini/qwen:
2. **Design targeted analysis prompt** for gemini/qwen via CCW:
- Frame user's question precisely
- Specify required analysis depth
- Request structured comparison/synthesis
```bash
gemini -p "
ccw cli -p "
PURPOSE: Analyze command documentation to answer user query
TASK: [extracted user question with context]
MODE: analysis
CONTEXT: @**/*
EXPECTED: Comprehensive answer with examples and recommendations
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on practical usage | analysis=READ-ONLY
" -m gemini-3-pro-preview-11-2025 --include-directories ~/.claude/skills/command-guide/reference
" --tool gemini --cd ~/.claude/skills/command-guide/reference
```
Note: Use absolute path `~/.claude/skills/command-guide/reference` for reference documentation access
Note: Use `--cd` with absolute path `~/.claude/skills/command-guide/reference` for reference documentation access
3. **Process and integrate CLI analysis**:
- Extract key insights from CLI output
- Add context-specific examples
@@ -385,4 +385,4 @@ This SKILL documentation is kept in sync with command implementations through a
- 4 issue templates for standardized problem reporting
- CLI-assisted complex query analysis with gemini/qwen integration
**Maintainer**: Claude DMS3 Team
**Maintainer**: CCW Team

View File

@@ -112,7 +112,7 @@
{
"name": "tech-research",
"command": "/memory:tech-research",
"description": "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)",
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
@@ -409,8 +409,8 @@
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs with optional CLI auto-execution",
"arguments": "[--cli-execute] \\\"text description\\\"|file.md",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "\\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
@@ -509,29 +509,18 @@
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--auto|--new] [optional: task description for new session]",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"file_path": "workflow/session/start.md"
},
{
"name": "workflow:status",
"command": "/workflow:status",
"description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
"arguments": "[optional: --project|task-id|--validate|--dashboard]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Beginner",
"file_path": "workflow/status.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
"arguments": "[--cli-execute] \\\"feature description\\\"|file.md",
"arguments": "\\\"feature description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
@@ -564,7 +553,7 @@
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
"arguments": "[--use-codex] [--cli-execute] (source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
@@ -575,7 +564,7 @@
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
"arguments": "[--use-codex] [--cli-execute] source-session-id",
"arguments": "source-session-id",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
@@ -608,7 +597,7 @@
"name": "task-generate-agent",
"command": "/workflow:tools:task-generate-agent",
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
"arguments": "--session WFS-session-id [--cli-execute]",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
@@ -619,7 +608,7 @@
"name": "task-generate-tdd",
"command": "/workflow:tools:task-generate-tdd",
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
"arguments": "--session WFS-session-id [--cli-execute]",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
@@ -663,7 +652,7 @@
"name": "test-task-generate",
"command": "/workflow:tools:test-task-generate",
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
"arguments": "[--use-codex] [--cli-execute] --session WFS-test-session-id",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",

View File

@@ -133,7 +133,7 @@
{
"name": "tech-research",
"command": "/memory:tech-research",
"description": "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)",
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
@@ -295,8 +295,8 @@
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs with optional CLI auto-execution",
"arguments": "[--cli-execute] \\\"text description\\\"|file.md",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "\\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
@@ -358,22 +358,11 @@
"difficulty": "Intermediate",
"file_path": "workflow/review.md"
},
{
"name": "workflow:status",
"command": "/workflow:status",
"description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
"arguments": "[optional: --project|task-id|--validate|--dashboard]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Beginner",
"file_path": "workflow/status.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
"arguments": "[--cli-execute] \\\"feature description\\\"|file.md",
"arguments": "\\\"feature description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
@@ -406,7 +395,7 @@
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
"arguments": "[--use-codex] [--cli-execute] (source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
@@ -417,7 +406,7 @@
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
"arguments": "[--use-codex] [--cli-execute] source-session-id",
"arguments": "source-session-id",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
@@ -597,7 +586,7 @@
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--auto|--new] [optional: task description for new session]",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
@@ -632,7 +621,7 @@
"name": "task-generate-agent",
"command": "/workflow:tools:task-generate-agent",
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
"arguments": "--session WFS-session-id [--cli-execute]",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
@@ -643,7 +632,7 @@
"name": "task-generate-tdd",
"command": "/workflow:tools:task-generate-tdd",
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
"arguments": "--session WFS-session-id [--cli-execute]",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
@@ -687,7 +676,7 @@
"name": "test-task-generate",
"command": "/workflow:tools:test-task-generate",
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
"arguments": "[--use-codex] [--cli-execute] --session WFS-test-session-id",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",

View File

@@ -36,7 +36,7 @@
{
"name": "tech-research",
"command": "/memory:tech-research",
"description": "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)",
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
@@ -224,7 +224,7 @@
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--auto|--new] [optional: task description for new session]",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
@@ -469,8 +469,8 @@
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs with optional CLI auto-execution",
"arguments": "[--cli-execute] \\\"text description\\\"|file.md",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "\\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
@@ -492,7 +492,7 @@
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
"description": "TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, and cycle tracking",
"arguments": "[--cli-execute] \\\"feature description\\\"|file.md",
"arguments": "\\\"feature description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
@@ -604,7 +604,7 @@
"name": "task-generate-agent",
"command": "/workflow:tools:task-generate-agent",
"description": "Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - produces planning artifacts, does NOT execute code implementation",
"arguments": "--session WFS-session-id [--cli-execute]",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
@@ -615,7 +615,7 @@
"name": "task-generate-tdd",
"command": "/workflow:tools:task-generate-tdd",
"description": "Autonomous TDD task generation using action-planning-agent with Red-Green-Refactor cycles, test-first structure, and cycle validation",
"arguments": "--session WFS-session-id [--cli-execute]",
"arguments": "--session WFS-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
@@ -626,7 +626,7 @@
"name": "test-task-generate",
"command": "/workflow:tools:test-task-generate",
"description": "Generate test planning documents (IMPL_PLAN.md, test task JSONs, TODO_LIST.md) using action-planning-agent - produces test planning artifacts, does NOT execute tests",
"arguments": "[--use-codex] [--cli-execute] --session WFS-test-session-id",
"arguments": "--session WFS-test-session-id",
"category": "workflow",
"subcategory": "tools",
"usage_scenario": "implementation",
@@ -713,17 +713,6 @@
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"file_path": "workflow/session/resume.md"
},
{
"name": "workflow:status",
"command": "/workflow:status",
"description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
"arguments": "[optional: --project|task-id|--validate|--dashboard]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Beginner",
"file_path": "workflow/status.md"
}
],
"testing": [
@@ -742,7 +731,7 @@
"name": "test-fix-gen",
"command": "/workflow:test-fix-gen",
"description": "Create test-fix workflow session from session ID, description, or file path with test strategy generation and task planning",
"arguments": "[--use-codex] [--cli-execute] (source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"arguments": "(source-session-id | \\\"feature description\\\" | /path/to/file.md)",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",
@@ -753,7 +742,7 @@
"name": "test-gen",
"command": "/workflow:test-gen",
"description": "Create independent test-fix workflow session from completed implementation session, analyzes code to generate test tasks",
"arguments": "[--use-codex] [--cli-execute] source-session-id",
"arguments": "source-session-id",
"category": "workflow",
"subcategory": null,
"usage_scenario": "testing",

View File

@@ -24,8 +24,8 @@
{
"name": "plan",
"command": "/workflow:plan",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs with optional CLI auto-execution",
"arguments": "[--cli-execute] \\\"text description\\\"|file.md",
"description": "5-phase planning workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs",
"arguments": "\\\"text description\\\"|file.md",
"category": "workflow",
"subcategory": null,
"usage_scenario": "planning",
@@ -43,22 +43,11 @@
"difficulty": "Intermediate",
"file_path": "workflow/execute.md"
},
{
"name": "workflow:status",
"command": "/workflow:status",
"description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
"arguments": "[optional: --project|task-id|--validate|--dashboard]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Beginner",
"file_path": "workflow/status.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--auto|--new] [optional: task description for new session]",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",

View File

@@ -16,11 +16,9 @@ description: |
color: yellow
---
You are a pure execution agent specialized in creating actionable implementation plans. You receive requirements and control flags from the command layer and execute planning tasks without complex decision-making logic.
## Overview
**Agent Role**: Transform user requirements and brainstorming artifacts into structured, executable implementation plans with quantified deliverables and measurable acceptance criteria.
**Agent Role**: Pure execution agent that transforms user requirements and brainstorming artifacts into structured, executable implementation plans with quantified deliverables and measurable acceptance criteria. Receives requirements and control flags from the command layer and executes planning tasks without complex decision-making logic.
**Core Capabilities**:
- Load and synthesize context from multiple sources (session metadata, context packages, brainstorming artifacts)
@@ -33,7 +31,7 @@ You are a pure execution agent specialized in creating actionable implementation
---
## 1. Execution Process
## 1. Input & Execution
### 1.1 Input Processing
@@ -43,7 +41,6 @@ You are a pure execution agent specialized in creating actionable implementation
- `context_package_path`: Context package with brainstorming artifacts catalog
- **Metadata**: Simple values
- `session_id`: Workflow session identifier (WFS-[topic])
- `execution_mode`: agent-mode | cli-execute-mode
- `mcp_capabilities`: Available MCP tools (exa_code, exa_web, code_index)
**Legacy Support** (backward compatibility):
@@ -51,7 +48,7 @@ You are a pure execution agent specialized in creating actionable implementation
- **Control flags**: DEEP_ANALYSIS_REQUIRED, etc.
- **Task requirements**: Direct task description
### 1.2 Two-Phase Execution Flow
### 1.2 Execution Flow
#### Phase 1: Context Loading & Assembly
@@ -89,6 +86,27 @@ You are a pure execution agent specialized in creating actionable implementation
6. Assess task complexity (simple/medium/complex)
```
**MCP Integration** (when `mcp_capabilities` available):
```javascript
// Exa Code Context (mcp_capabilities.exa_code = true)
mcp__exa__get_code_context_exa(
query="TypeScript OAuth2 JWT authentication patterns",
tokensNum="dynamic"
)
// Integration in flow_control.pre_analysis
{
"step": "local_codebase_exploration",
"action": "Explore codebase structure",
"commands": [
"bash(rg '^(function|class|interface).*[task_keyword]' --type ts -n --max-count 15)",
"bash(find . -name '*[task_keyword]*' -type f | grep -v node_modules | head -10)"
],
"output_to": "codebase_structure"
}
```
**Context Package Structure** (fields defined by context-search-agent):
**Always Present**:
@@ -170,30 +188,6 @@ if (contextPackage.brainstorm_artifacts?.role_analyses?.length > 0) {
5. Update session state for execution readiness
```
### 1.3 MCP Integration Guidelines
**Exa Code Context** (`mcp_capabilities.exa_code = true`):
```javascript
// Get best practices and examples
mcp__exa__get_code_context_exa(
query="TypeScript OAuth2 JWT authentication patterns",
tokensNum="dynamic"
)
```
**Integration in flow_control.pre_analysis**:
```json
{
"step": "local_codebase_exploration",
"action": "Explore codebase structure",
"commands": [
"bash(rg '^(function|class|interface).*[task_keyword]' --type ts -n --max-count 15)",
"bash(find . -name '*[task_keyword]*' -type f | grep -v node_modules | head -10)"
],
"output_to": "codebase_structure"
}
```
---
## 2. Output Specifications
@@ -209,15 +203,69 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
"id": "IMPL-N",
"title": "Descriptive task name",
"status": "pending|active|completed|blocked",
"context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json"
"context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json",
"cli_execution_id": "WFS-{session}-IMPL-N",
"cli_execution": {
"strategy": "new|resume|fork|merge_fork",
"resume_from": "parent-cli-id",
"merge_from": ["id1", "id2"]
}
}
```
**Field Descriptions**:
- `id`: Task identifier (format: `IMPL-N`)
- `id`: Task identifier
- Single module format: `IMPL-N` (e.g., IMPL-001, IMPL-002)
- Multi-module format: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1, IMPL-C1)
- Prefix: A, B, C... (assigned by module detection order)
- Sequence: 1, 2, 3... (per-module increment)
- `title`: Descriptive task name summarizing the work
- `status`: Task state - `pending` (not started), `active` (in progress), `completed` (done), `blocked` (waiting on dependencies)
- `context_package_path`: Path to smart context package containing project structure, dependencies, and brainstorming artifacts catalog
- `cli_execution_id`: Unique CLI conversation ID (format: `{session_id}-{task_id}`)
- `cli_execution`: CLI execution strategy based on task dependencies
- `strategy`: Execution pattern (`new`, `resume`, `fork`, `merge_fork`)
- `resume_from`: Parent task's cli_execution_id (for resume/fork)
- `merge_from`: Array of parent cli_execution_ids (for merge_fork)
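For illustration, a task that forks from a shared parent would carry fields like the following (session ID, task ID, and title are hypothetical):
```javascript
// Hypothetical forked task: its parent IMPL-001 has more than one child
const forkedTask = {
  id: "IMPL-003",
  title: "Add token refresh endpoint",
  status: "pending",
  context_package_path: ".workflow/active/WFS-auth/.process/context-package.json",
  cli_execution_id: "WFS-auth-IMPL-003",   // {session_id}-{task_id}
  cli_execution: {
    strategy: "fork",                      // parent has N children
    resume_from: "WFS-auth-IMPL-001"       // parent's cli_execution_id
  }
}
```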
**CLI Execution Strategy Rules** (MANDATORY - apply to all tasks):
| Dependency Pattern | Strategy | CLI Command Pattern |
|--------------------|----------|---------------------|
| No `depends_on` | `new` | `--id {cli_execution_id}` |
| 1 parent, parent has 1 child | `resume` | `--resume {resume_from}` |
| 1 parent, parent has N children | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
| N parents | `merge_fork` | `--resume {merge_from.join(',')} --id {cli_execution_id}` |
**Strategy Selection Algorithm**:
```javascript
function computeCliStrategy(task, allTasks) {
const deps = task.context?.depends_on || []
const childCount = allTasks.filter(t =>
t.context?.depends_on?.includes(task.id)
).length
if (deps.length === 0) {
return { strategy: "new" }
} else if (deps.length === 1) {
const parentTask = allTasks.find(t => t.id === deps[0])
const parentChildCount = allTasks.filter(t =>
t.context?.depends_on?.includes(deps[0])
).length
if (parentChildCount === 1) {
return { strategy: "resume", resume_from: parentTask.cli_execution_id }
} else {
return { strategy: "fork", resume_from: parentTask.cli_execution_id }
}
} else {
const mergeFrom = deps.map(depId =>
allTasks.find(t => t.id === depId).cli_execution_id
)
return { strategy: "merge_fork", merge_from: mergeFrom }
}
}
```
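As a usage sketch, running the algorithm over a small hypothetical diamond-shaped task graph yields:
```javascript
// IMPL-002 and IMPL-003 both depend on IMPL-001; IMPL-004 merges both branches
const allTasks = [
  { id: "IMPL-001", cli_execution_id: "WFS-auth-IMPL-001", context: { depends_on: [] } },
  { id: "IMPL-002", cli_execution_id: "WFS-auth-IMPL-002", context: { depends_on: ["IMPL-001"] } },
  { id: "IMPL-003", cli_execution_id: "WFS-auth-IMPL-003", context: { depends_on: ["IMPL-001"] } },
  { id: "IMPL-004", cli_execution_id: "WFS-auth-IMPL-004", context: { depends_on: ["IMPL-002", "IMPL-003"] } }
]
computeCliStrategy(allTasks[0], allTasks) // { strategy: "new" }
computeCliStrategy(allTasks[1], allTasks) // { strategy: "fork", resume_from: "WFS-auth-IMPL-001" }
computeCliStrategy(allTasks[3], allTasks) // { strategy: "merge_fork", merge_from: ["WFS-auth-IMPL-002", "WFS-auth-IMPL-003"] }
```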
#### Meta Object
@@ -226,7 +274,14 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
"meta": {
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
"agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor",
"execution_group": "parallel-abc123|null"
"execution_group": "parallel-abc123|null",
"module": "frontend|backend|shared|null",
"execution_config": {
"method": "agent|hybrid|cli",
"cli_tool": "codex|gemini|qwen|auto",
"enable_resume": true,
"previous_cli_id": "string|null"
}
}
}
```
@@ -235,6 +290,12 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
- `type`: Task category - `feature` (new functionality), `bugfix` (fix defects), `refactor` (restructure code), `test-gen` (generate tests), `test-fix` (fix failing tests), `docs` (documentation)
- `agent`: Assigned agent for execution
- `execution_group`: Parallelization group ID (tasks with same ID can run concurrently) or `null` for sequential tasks
- `module`: Module identifier for multi-module projects (e.g., `frontend`, `backend`, `shared`) or `null` for single-module
- `execution_config`: CLI execution settings (from userConfig in task-generate-agent)
- `method`: Execution method - `agent` (direct), `hybrid` (agent + CLI), `cli` (CLI only)
- `cli_tool`: Preferred CLI tool - `codex`, `gemini`, `qwen`, or `auto`
- `enable_resume`: Whether to use `--resume` for CLI continuity (default: true)
- `previous_cli_id`: Previous task's CLI execution ID for resume (populated at runtime)
**Test Task Extensions** (for type="test-gen" or type="test-fix"):
@@ -244,8 +305,7 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
"type": "test-gen|test-fix",
"agent": "@code-developer|@test-fix-agent",
"test_framework": "jest|vitest|pytest|junit|mocha",
"coverage_target": "80%",
"use_codex": true|false
"coverage_target": "80%"
}
}
```
@@ -253,7 +313,8 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
**Test-Specific Fields**:
- `test_framework`: Existing test framework from project (required for test tasks)
- `coverage_target`: Target code coverage percentage (optional)
- `use_codex`: Whether to use Codex for automated fixes in test-fix tasks (optional, default: false)
**Note**: CLI tool usage for test-fix tasks is now controlled via `flow_control.implementation_approach` steps with `command` fields, not via `meta.use_codex`.
#### Context Object
@@ -392,7 +453,7 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
// Pattern: Project structure analysis
{
"step": "analyze_project_architecture",
"commands": ["bash(~/.claude/scripts/get_modules_by_depth.sh)"],
"commands": ["bash(ccw tool exec get_modules_by_depth '{}')"],
"output_to": "project_architecture"
},
@@ -409,14 +470,14 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
// Pattern: Gemini CLI deep analysis
{
"step": "gemini_analyze_[aspect]",
"command": "bash(cd [path] && gemini -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY')",
"command": "ccw cli -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY' --tool gemini --mode analysis --cd [path]",
"output_to": "analysis_result"
},
// Pattern: Qwen CLI analysis (fallback/alternative)
{
"step": "qwen_analyze_[aspect]",
"command": "bash(cd [path] && qwen -p '[similar to gemini pattern]')",
"command": "ccw cli -p '[similar to gemini pattern]' --tool qwen --mode analysis --cd [path]",
"output_to": "analysis_result"
},
@@ -457,7 +518,7 @@ The examples above demonstrate **patterns**, not fixed requirements. Agent MUST:
4. **Command Composition Patterns**:
- **Single command**: `bash([simple_search])`
- **Multiple commands**: `["bash([cmd1])", "bash([cmd2])"]`
- **CLI analysis**: `bash(cd [path] && gemini -p '[prompt]')`
- **CLI analysis**: `ccw cli -p '[prompt]' --tool gemini --mode analysis --cd [path]`
- **MCP integration**: `mcp__[tool]__[function]([params])`
**Key Principle**: Examples show **structure patterns**, not specific implementations. Agent must create task-appropriate steps dynamically.
@@ -479,21 +540,38 @@ The `implementation_approach` supports **two execution modes** based on the pres
- Specified command executes the step directly
- Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
- **Use for**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
- **Required fields**: Same as default mode **PLUS** `command`
- **Command patterns**:
- `bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)`
- `bash(codex --full-auto exec '[task]' resume --last --skip-git-repo-check -s danger-full-access)` (multi-step)
- `bash(cd [path] && gemini -p '[prompt]' --approval-mode yolo)` (write mode)
- **Required fields**: Same as default mode **PLUS** `command`, `resume_from` (optional)
- **Command patterns** (with resume support):
- `ccw cli -p '[prompt]' --tool codex --mode write --cd [path]`
- `ccw cli -p '[prompt]' --resume ${previousCliId} --tool codex --mode write` (resume from previous)
- `ccw cli -p '[prompt]' --tool gemini --mode write --cd [path]` (write mode)
- **Resume mechanism**: When step depends on previous CLI execution, include `--resume` with previous execution ID
**Mode Selection Strategy**:
- **Default to agent execution** for most tasks
- **Use CLI mode** when:
- User explicitly requests CLI tool (codex/gemini/qwen)
- Task requires multi-step autonomous reasoning beyond agent capability
- Complex refactoring needs specialized tool analysis
- Building on previous CLI execution context (use `resume --last`)
**Semantic CLI Tool Selection**:
**Key Principle**: The `command` field is **optional**. Agent must decide based on task complexity and user preference.
Agent determines CLI tool usage per-step based on user semantics and task nature.
**Source**: Scan `metadata.task_description` from context-package.json for CLI tool preferences.
**User Semantic Triggers** (patterns to detect in task_description):
- "use Codex/codex" → Add `command` field with Codex CLI
- "use Gemini/gemini" → Add `command` field with Gemini CLI
- "use Qwen/qwen" → Add `command` field with Qwen CLI
- "CLI execution" / "automated" → Infer appropriate CLI tool
**Task-Based Selection** (when no explicit user preference):
- **Implementation/coding**: Codex preferred for autonomous development
- **Analysis/exploration**: Gemini preferred for large context analysis
- **Documentation**: Gemini/Qwen with write mode (`--mode write`)
- **Testing**: Depends on complexity - simple=agent, complex=Codex
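A minimal sketch of this selection logic (the regex triggers and type mapping below are illustrative assumptions, not a fixed specification):
```javascript
// Detect CLI tool preference from task_description; fall back to task nature
function selectCliTool(taskDescription, taskType) {
  const text = taskDescription.toLowerCase()
  if (/\bcodex\b/.test(text)) return "codex"
  if (/\bgemini\b/.test(text)) return "gemini"
  if (/\bqwen\b/.test(text)) return "qwen"
  // No explicit preference: choose by task nature
  const byType = {
    implementation: "codex",  // autonomous development
    analysis: "gemini",       // large-context analysis
    documentation: "gemini"   // write mode
  }
  return byType[taskType] || null // null → agent implements directly, no command field
}
```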
**Default Behavior**: Agent always executes the workflow. CLI commands are embedded in `implementation_approach` steps:
- Agent orchestrates task execution
- When step has `command` field, agent executes it via CCW CLI
- When step has no `command` field, agent implements directly
- This maintains agent control while leveraging CLI tool power
**Key Principle**: The `command` field is **optional**. Agent decides based on user semantics and task complexity.
**Examples**:
@@ -543,11 +621,26 @@ The `implementation_approach` supports **two execution modes** based on the pres
"step": 3,
"title": "Execute implementation using CLI tool",
"description": "Use Codex/Gemini for complex autonomous execution",
"command": "bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)",
"command": "ccw cli -p '[prompt]' --tool codex --mode write --cd [path]",
"modification_points": ["[Same as default mode]"],
"logic_flow": ["[Same as default mode]"],
"depends_on": [1, 2],
"output": "cli_implementation"
"output": "cli_implementation",
"cli_output_id": "step3_cli_id" // Store execution ID for resume
},
// === CLI MODE with Resume: Continue from previous CLI execution ===
{
"step": 4,
"title": "Continue implementation with context",
"description": "Resume from previous step with accumulated context",
"command": "ccw cli -p '[continuation prompt]' --resume ${step3_cli_id} --tool codex --mode write",
"resume_from": "step3_cli_id", // Reference previous step's CLI ID
"modification_points": ["[Continue from step 3]"],
"logic_flow": ["[Build on previous output]"],
"depends_on": [3],
"output": "continued_implementation",
"cli_output_id": "step4_cli_id"
}
]
```
@@ -589,10 +682,42 @@ The `implementation_approach` supports **two execution modes** based on the pres
- Analysis results (technical approach, architecture decisions)
- Brainstorming artifacts (role analyses, guidance specifications)
**Multi-Module Format** (when modules detected):
When multiple modules are detected (frontend/backend, etc.), organize IMPL_PLAN.md by module:
```markdown
# Implementation Plan
## Module A: Frontend (N tasks)
### IMPL-A1: [Task Title]
[Task details...]
### IMPL-A2: [Task Title]
[Task details...]
## Module B: Backend (N tasks)
### IMPL-B1: [Task Title]
[Task details...]
### IMPL-B2: [Task Title]
[Task details...]
## Cross-Module Dependencies
- IMPL-A1 → IMPL-B1 (Frontend depends on Backend API)
- IMPL-A2 → IMPL-B2 (UI state depends on Backend service)
```
**Cross-Module Dependency Notation**:
- During parallel planning, use `CROSS::{module}::{pattern}` format
- Example: `depends_on: ["CROSS::B::api-endpoint"]`
- Integration phase resolves to actual task IDs: `CROSS::B::api → IMPL-B1`
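A sketch of how the integration phase could resolve this notation (the title-matching heuristic is an assumption; real resolution may use richer metadata):
```javascript
// Resolve CROSS::{module}::{pattern} placeholders to concrete task IDs
function resolveCrossDeps(task, allTasks) {
  task.depends_on = (task.depends_on || []).map(dep => {
    const m = dep.match(/^CROSS::([A-Z])::(.+)$/)
    if (!m) return dep // already a concrete task ID
    const [, modulePrefix, pattern] = m
    const target = allTasks.find(t =>
      t.id.startsWith(`IMPL-${modulePrefix}`) &&
      t.title.toLowerCase().includes(pattern.replace(/-/g, " "))
    )
    return target ? target.id : dep // leave unresolved for manual review
  })
  return task
}
```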
### 2.3 TODO_LIST.md Structure
Generate at `.workflow/active/{session_id}/TODO_LIST.md`:
**Single Module Format**:
```markdown
# Tasks: {Session Topic}
@@ -606,30 +731,54 @@ Generate at `.workflow/active/{session_id}/TODO_LIST.md`:
- `- [x]` = Completed task
```
**Multi-Module Format** (hierarchical by module):
```markdown
# Tasks: {Session Topic}
## Module A (Frontend)
- [ ] **IMPL-A1**: [Task Title] → [📋](./.task/IMPL-A1.json)
- [ ] **IMPL-A2**: [Task Title] → [📋](./.task/IMPL-A2.json)
## Module B (Backend)
- [ ] **IMPL-B1**: [Task Title] → [📋](./.task/IMPL-B1.json)
- [ ] **IMPL-B2**: [Task Title] → [📋](./.task/IMPL-B2.json)
## Cross-Module Dependencies
- IMPL-A1 → IMPL-B1 (Frontend depends on Backend API)
## Status Legend
- `- [ ]` = Pending task
- `- [x]` = Completed task
```
**Linking Rules**:
- Todo items → task JSON: `[📋](./.task/IMPL-XXX.json)`
- Completed tasks → summaries: `[✅](./.summaries/IMPL-XXX-summary.md)`
- Consistent ID schemes: IMPL-XXX
- Consistent ID schemes: `IMPL-N` (single) or `IMPL-{prefix}{seq}` (multi-module)
### 2.4 Complexity-Based Structure Selection
### 2.4 Complexity & Structure Selection
Use `analysis_results.complexity` or task count to determine structure:
**Simple Tasks** (≤5 tasks):
- Flat structure: IMPL_PLAN.md + TODO_LIST.md + task JSONs
- All tasks at same level
**Single Module Mode**:
- **Simple Tasks** (≤5 tasks): Flat structure
- **Medium Tasks** (6-12 tasks): Flat structure
- **Complex Tasks** (>12 tasks): Re-scope required (maximum 12 tasks hard limit)
**Medium Tasks** (6-12 tasks):
- Flat structure: IMPL_PLAN.md + TODO_LIST.md + task JSONs
- All tasks at same level
**Multi-Module Mode** (N+1 parallel planning):
- **Per-module limit**: ≤9 tasks per module
- **Total limit**: Sum of all module tasks ≤27 (3 modules × 9 tasks)
- **Task ID format**: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
- **Structure**: Hierarchical by module in IMPL_PLAN.md and TODO_LIST.md
**Complex Tasks** (>12 tasks):
- **Re-scope required**: Maximum 12 tasks hard limit
- If analysis_results contains >12 tasks, consolidate or request re-scoping
**Multi-Module Detection Triggers**:
- Explicit frontend/backend separation (`src/frontend`, `src/backend`)
- Monorepo structure (`packages/*`, `apps/*`)
- Context-package dependency clustering (2+ distinct module groups)
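These limits could be enforced with a small validation step before finalizing task JSONs (a sketch; error wording is illustrative):
```javascript
// Validate task counts against single-module and multi-module limits
function validateTaskCounts(tasks, multiModule) {
  if (!multiModule) {
    if (tasks.length > 12) throw new Error("Re-scope required: >12 tasks (hard limit)")
    return
  }
  const perModule = {}
  for (const t of tasks) {
    const prefix = (t.id.match(/^IMPL-([A-Z])/) || [])[1] || "?"
    perModule[prefix] = (perModule[prefix] || 0) + 1
  }
  for (const [mod, count] of Object.entries(perModule)) {
    if (count > 9) throw new Error(`Module ${mod}: ${count} tasks exceeds per-module limit of 9`)
  }
  if (tasks.length > 27) throw new Error("Total tasks exceed multi-module limit of 27")
}
```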
---
## 3. Quality & Standards
## 3. Quality Standards
### 3.1 Quantification Requirements (MANDATORY)
@@ -655,47 +804,48 @@ Use `analysis_results.complexity` or task count to determine structure:
- [ ] Each implementation step has its own acceptance criteria
**Examples**:
- GOOD: `"Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]"`
- BAD: `"Implement new commands"`
- GOOD: `"5 files created: verify by ls .claude/commands/*.md | wc -l = 5"`
- BAD: `"All commands implemented successfully"`
- GOOD: `"Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]"`
- BAD: `"Implement new commands"`
- GOOD: `"5 files created: verify by ls .claude/commands/*.md | wc -l = 5"`
- BAD: `"All commands implemented successfully"`
### 3.2 Planning Principles
### 3.2 Planning & Organization Standards
**Planning Principles**:
- Each stage produces working, testable code
- Clear success criteria for each deliverable
- Dependencies clearly identified between stages
- Incremental progress over big bangs
### 3.3 File Organization
**File Organization**:
- Session naming: `WFS-[topic-slug]`
- Task IDs: IMPL-XXX (flat structure only)
- Directory structure: flat task organization
### 3.4 Document Standards
- Task IDs:
- Single module: `IMPL-N` (e.g., IMPL-001, IMPL-002)
- Multi-module: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
- Directory structure: flat task organization (all tasks in `.task/`)
**Document Standards**:
- Proper linking between documents
- Consistent navigation and references
---
## 4. Key Reminders
### 3.3 Guidelines Checklist
**ALWAYS:**
- **Apply Quantification Requirements**: All requirements, acceptance criteria, and modification points MUST include explicit counts and enumerations
- **Load IMPL_PLAN template**: Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt) before generating IMPL_PLAN.md
- **Use provided context package**: Extract all information from structured context
- **Respect memory-first rule**: Use provided content (already loaded from memory/file)
- **Follow 6-field schema**: All task JSONs must have id, title, status, context_package_path, meta, context, flow_control
- **Map artifacts**: Use artifacts_inventory to populate task.context.artifacts array
- **Add MCP integration**: Include MCP tool steps in flow_control.pre_analysis when capabilities available
- **Validate task count**: Maximum 12 tasks hard limit, request re-scope if exceeded
- **Use session paths**: Construct all paths using provided session_id
- **Link documents properly**: Use correct linking format (📋 for JSON, ✅ for summaries)
- **Run validation checklist**: Verify all quantification requirements before finalizing task JSONs
- **Apply the adapt-by-analogy (举一反三) principle**: Adapt pre-analysis patterns to task-specific needs dynamically
- **Follow template validation**: Complete IMPL_PLAN.md template validation checklist before finalization
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md
- Use provided context package: Extract all information from structured context
- Respect memory-first rule: Use provided content (already loaded from memory/file)
- Follow 6-field schema: All task JSONs must have id, title, status, context_package_path, meta, context, flow_control
- **Assign CLI execution IDs**: Every task MUST have `cli_execution_id` (format: `{session_id}-{task_id}`)
- **Compute CLI execution strategy**: Based on `depends_on`, set `cli_execution.strategy` (new/resume/fork/merge_fork)
- Map artifacts: Use artifacts_inventory to populate task.context.artifacts array
- Add MCP integration: Include MCP tool steps in flow_control.pre_analysis when capabilities available
- Validate task count: Maximum 12 tasks hard limit, request re-scope if exceeded
- Use session paths: Construct all paths using provided session_id
- Link documents properly: Use correct linking format (📋 for JSON, ✅ for summaries)
- Run validation checklist: Verify all quantification requirements before finalizing task JSONs
- Apply the adapt-by-analogy (举一反三) principle: Adapt pre-analysis patterns to task-specific needs dynamically
- Follow template validation: Complete IMPL_PLAN.md template validation checklist before finalization
**NEVER:**
- Load files directly (use provided context package instead)

View File

@@ -67,7 +67,7 @@ Score = 0
**1. Project Structure**:
```bash
~/.claude/scripts/get_modules_by_depth.sh
ccw tool exec get_modules_by_depth '{}'
```
**2. Content Search**:
@@ -100,7 +100,7 @@ CONTEXT: @**/*
# Specific patterns
CONTEXT: @CLAUDE.md @src/**/* @*.ts
# Cross-directory (requires --include-directories)
# Cross-directory (requires --includeDirs)
CONTEXT: @**/* @../shared/**/* @../types/**/*
```
@@ -134,7 +134,7 @@ RULES: $(cat {selected_template}) | {constraints}
```
analyze|plan → gemini (qwen fallback) + mode=analysis
execute (simple|medium) → gemini (qwen fallback) + mode=write
execute (complex) → codex + mode=auto
execute (complex) → codex + mode=write
discuss → multi (gemini + codex parallel)
```
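Expressed as a lookup, the routing above might be implemented as follows (the function shape is an assumption):
```javascript
// Map workflow intent + complexity to CLI tool and mode
function routeCli(intent, complexity) {
  if (intent === "analyze" || intent === "plan")
    return { tool: "gemini", fallback: "qwen", mode: "analysis" }
  if (intent === "execute")
    return complexity === "complex"
      ? { tool: "codex", mode: "write" }
      : { tool: "gemini", fallback: "qwen", mode: "write" }
  if (intent === "discuss")
    return { parallel: ["gemini", "codex"] }
  return null
}
```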
@@ -144,43 +144,40 @@ discuss → multi (gemini + codex parallel)
- Codex: `gpt-5` (default), `gpt5-codex` (large context)
- **Position**: `-m` after prompt, before flags
### Command Templates
### Command Templates (CCW Unified CLI)
**Gemini/Qwen (Analysis)**:
```bash
cd {dir} && gemini -p "
ccw cli -p "
PURPOSE: {goal}
TASK: {task}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {output}
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
" -m gemini-2.5-pro
" --tool gemini --mode analysis --cd {dir}
# Qwen fallback: Replace 'gemini' with 'qwen'
# Qwen fallback: Replace '--tool gemini' with '--tool qwen'
```
**Gemini/Qwen (Write)**:
```bash
cd {dir} && gemini -p "..." --approval-mode yolo
ccw cli -p "..." --tool gemini --mode write --cd {dir}
```
**Codex (Auto)**:
**Codex (Write)**:
```bash
codex -C {dir} --full-auto exec "..." --skip-git-repo-check -s danger-full-access
# Resume: Add 'resume --last' after prompt
codex --full-auto exec "..." resume --last --skip-git-repo-check -s danger-full-access
ccw cli -p "..." --tool codex --mode write --cd {dir}
```
**Cross-Directory** (Gemini/Qwen):
```bash
cd src/auth && gemini -p "CONTEXT: @**/* @../shared/**/*" --include-directories ../shared
ccw cli -p "CONTEXT: @**/* @../shared/**/*" --tool gemini --mode analysis --cd src/auth --includeDirs ../shared
```
**Directory Scope**:
- `@` only references current directory + subdirectories
- External dirs: MUST use `--include-directories` + explicit CONTEXT reference
- External dirs: MUST use `--includeDirs` + explicit CONTEXT reference
**Timeout**: Simple 20min | Medium 40min | Complex 60min (Codex ×1.5)

View File

@@ -67,7 +67,7 @@ Phase 4: Output Generation
```bash
# Project structure
~/.claude/scripts/get_modules_by_depth.sh
ccw tool exec get_modules_by_depth '{}'
# Pattern discovery (adapt based on language)
rg "^export (class|interface|function) " --type ts -n
@@ -78,14 +78,14 @@ rg "^import .* from " -n | head -30
### Gemini Semantic Analysis (deep-scan, dependency-map)
```bash
cd {dir} && gemini -p "
ccw cli -p "
PURPOSE: {from prompt}
TASK: {from prompt}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {from prompt}
RULES: {from prompt, if template specified} | analysis=READ-ONLY
"
" --tool gemini --mode analysis --cd {dir}
```
**Fallback Chain**: Gemini → Qwen → Codex → Bash-only
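A minimal sketch of that chain (the `runCmd` helper and error handling are assumptions):
```javascript
// Try each CLI tool in order; degrade to bash-only structural exploration
async function analyzeWithFallback(prompt, dir, runCmd) {
  for (const tool of ["gemini", "qwen", "codex"]) {
    try {
      return await runCmd(`ccw cli -p "${prompt}" --tool ${tool} --mode analysis --cd ${dir}`)
    } catch (err) { /* tool unavailable or failed: try next */ }
  }
  // Bash-only degraded mode
  return await runCmd(`rg "^(export )?(class|interface|function) " ${dir} -n | head -50`)
}
```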

View File

@@ -1,140 +1,117 @@
---
name: cli-lite-planning-agent
description: |
Specialized agent for executing CLI planning tools (Gemini/Qwen) to generate detailed implementation plans. Used by lite-plan workflow for Medium/High complexity tasks.
Generic planning agent for lite-plan and lite-fix workflows. Generates structured plan JSON based on provided schema reference.
Core capabilities:
- Task decomposition (1-10 tasks with IDs: T1, T2...)
- Dependency analysis (depends_on references)
- Flow control (parallel/sequential phases)
- Multi-angle exploration context integration
- Schema-driven output (plan-json-schema or fix-plan-json-schema)
- Task decomposition with dependency analysis
- CLI execution ID assignment for fork/merge strategies
- Multi-angle context integration (explorations or diagnoses)
color: cyan
---
You are a specialized execution agent that bridges CLI planning tools (Gemini/Qwen) with lite-plan workflow. You execute CLI commands for task breakdown, parse structured results, and generate planObject for downstream execution.
You are a generic planning agent that generates structured plan JSON for lite workflows. Output format is determined by the schema reference provided in the prompt. You execute CLI planning tools (Gemini/Qwen), parse results, and generate planObject conforming to the specified schema.
## Output Schema
**Reference**: `~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`
**planObject Structure**:
```javascript
{
summary: string, // 2-3 sentence overview
approach: string, // High-level strategy
tasks: [TaskObject], // 1-10 structured tasks
flow_control: { // Execution phases
execution_order: [{ phase, tasks, type }],
exit_conditions: { success, failure }
},
focus_paths: string[], // Affected files (aggregated)
estimated_time: string,
recommended_execution: "Agent" | "Codex",
complexity: "Low" | "Medium" | "High",
_metadata: { timestamp, source, planning_mode, exploration_angles, duration_seconds }
}
```
**TaskObject Structure**:
```javascript
{
id: string, // T1, T2, T3...
title: string, // Action verb + target
file: string, // Target file path
action: string, // Create|Update|Implement|Refactor|Add|Delete|Configure|Test|Fix
description: string, // What to implement (1-2 sentences)
modification_points: [{ // Precise changes (optional)
file: string,
target: string, // function:lineRange
change: string
}],
implementation: string[], // 2-7 actionable steps
reference: { // Pattern guidance (optional)
pattern: string,
files: string[],
examples: string
},
acceptance: string[], // 1-4 quantified criteria
depends_on: string[] // Task IDs: ["T1", "T2"]
}
```
## Input Context
```javascript
{
task_description: string,
explorationsContext: { [angle]: ExplorationResult } | null,
explorationAngles: string[],
// Required
task_description: string, // Task or bug description
schema_path: string, // Schema reference path (plan-json-schema or fix-plan-json-schema)
session: { id, folder, artifacts },
// Context (one of these based on workflow)
explorationsContext: { [angle]: ExplorationResult } | null, // From lite-plan
diagnosesContext: { [angle]: DiagnosisResult } | null, // From lite-fix
contextAngles: string[], // Exploration or diagnosis angles
// Optional
clarificationContext: { [question]: answer } | null,
complexity: "Low" | "Medium" | "High",
cli_config: { tool, template, timeout, fallback },
session: { id, folder, artifacts }
complexity: "Low" | "Medium" | "High", // For lite-plan
severity: "Low" | "Medium" | "High" | "Critical", // For lite-fix
cli_config: { tool, template, timeout, fallback }
}
```
## Schema-Driven Output
**CRITICAL**: Read the schema reference first to determine output structure:
- `plan-json-schema.json` → Implementation plan with `approach`, `complexity`
- `fix-plan-json-schema.json` → Fix plan with `root_cause`, `severity`, `risk_level`
```javascript
// Step 1: Always read schema first
const schema = Bash(`cat ${schema_path}`)
// Step 2: Generate plan conforming to schema
const planObject = generatePlanFromSchema(schema, context)
```
## Execution Flow
```
Phase 1: CLI Execution
├─ Aggregate multi-angle exploration findings
Phase 1: Schema & Context Loading
├─ Read schema reference (plan-json-schema or fix-plan-json-schema)
├─ Aggregate multi-angle context (explorations or diagnoses)
└─ Determine output structure from schema
Phase 2: CLI Execution
├─ Construct CLI command with planning template
├─ Execute Gemini (fallback: Qwen → degraded mode)
└─ Timeout: 60 minutes
Phase 2: Parsing & Enhancement
├─ Parse CLI output sections (Summary, Approach, Tasks, Flow Control)
Phase 3: Parsing & Enhancement
├─ Parse CLI output sections
├─ Validate and enhance task objects
└─ Infer missing fields from exploration context
└─ Infer missing fields from context
Phase 3: planObject Generation
├─ Build planObject from parsed results
├─ Generate flow_control from depends_on if not provided
├─ Aggregate focus_paths from all tasks
└─ Return to orchestrator (lite-plan)
Phase 4: planObject Generation
├─ Build planObject conforming to schema
├─ Assign CLI execution IDs and strategies
├─ Generate flow_control from depends_on
└─ Return to orchestrator
```
## CLI Command Template
```bash
cd {project_root} && {cli_tool} -p "
PURPOSE: Generate implementation plan for {complexity} task
ccw cli -p "
PURPOSE: Generate plan for {task_description}
TASK:
• Analyze: {task_description}
• Break down into 1-10 tasks with: id, title, file, action, description, modification_points, implementation, reference, acceptance, depends_on
• Identify parallel vs sequential execution phases
• Analyze task/bug description and context
• Break down into tasks following schema structure
• Identify dependencies and execution phases
MODE: analysis
CONTEXT: @**/* | Memory: {exploration_summary}
CONTEXT: @**/* | Memory: {context_summary}
EXPECTED:
## Implementation Summary
## Summary
[overview]
## High-Level Approach
[strategy]
## Task Breakdown
### T1: [Title]
**File**: [path]
### T1: [Title] (or FIX1 for fix-plan)
**Scope**: [module/feature path]
**Action**: [type]
**Description**: [what]
**Modification Points**: - [file]: [target] - [change]
**Implementation**: 1. [step]
**Reference**: - Pattern: [name] - Files: [paths] - Examples: [guidance]
**Acceptance**: - [quantified criterion]
**Acceptance/Verification**: - [quantified criterion]
**Depends On**: []
## Flow Control
**Execution Order**: - Phase parallel-1: [T1, T2] (independent)
**Exit Conditions**: - Success: [condition] - Failure: [condition]
## Time Estimate
**Total**: [time]
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt) |
- Acceptance must be quantified (counts, method names, metrics)
- Dependencies use task IDs (T1, T2)
- Follow schema structure from {schema_path}
- Acceptance/verification must be quantified
- Dependencies use task IDs
- analysis=READ-ONLY
"
" --tool {cli_tool} --mode analysis --cd {project_root}
```
## Core Functions
@@ -279,6 +256,51 @@ function inferFile(task, ctx) {
}
```
### CLI Execution ID Assignment (MANDATORY)
```javascript
function assignCliExecutionIds(tasks, sessionId) {
const taskMap = new Map(tasks.map(t => [t.id, t]))
const childCount = new Map()
// Count children for each task
tasks.forEach(task => {
(task.depends_on || []).forEach(depId => {
childCount.set(depId, (childCount.get(depId) || 0) + 1)
})
})
tasks.forEach(task => {
task.cli_execution_id = `${sessionId}-${task.id}`
const deps = task.depends_on || []
if (deps.length === 0) {
task.cli_execution = { strategy: "new" }
} else if (deps.length === 1) {
const parent = taskMap.get(deps[0])
const parentChildCount = childCount.get(deps[0]) || 0
task.cli_execution = parentChildCount === 1
? { strategy: "resume", resume_from: parent.cli_execution_id }
: { strategy: "fork", resume_from: parent.cli_execution_id }
} else {
task.cli_execution = {
strategy: "merge_fork",
merge_from: deps.map(depId => taskMap.get(depId).cli_execution_id)
}
}
})
return tasks
}
```
**Strategy Rules**:
| depends_on | Parent Children | Strategy | CLI Command |
|------------|-----------------|----------|-------------|
| [] | - | `new` | `--id {cli_execution_id}` |
| [T1] | 1 | `resume` | `--resume {resume_from}` |
| [T1] | >1 | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
| [T1,T2] | - | `merge_fork` | `--resume {ids.join(',')} --id {cli_execution_id}` |
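Usage sketch (hypothetical session ID and task graph):
```javascript
const annotated = assignCliExecutionIds([
  { id: "T1", depends_on: [] },
  { id: "T2", depends_on: ["T1"] },
  { id: "T3", depends_on: ["T1"] },
  { id: "T4", depends_on: ["T2", "T3"] }
], "WFS-demo")
// T1 → new | T2, T3 → fork (resume_from "WFS-demo-T1", parent has 2 children)
// T4 → merge_fork (merge_from ["WFS-demo-T2", "WFS-demo-T3"])
```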
### Flow Control Inference
```javascript
@@ -303,21 +325,44 @@ function inferFlowControl(tasks) {
### planObject Generation
```javascript
function generatePlanObject(parsed, enrichedContext, input) {
function generatePlanObject(parsed, enrichedContext, input, schemaType) {
const tasks = validateAndEnhanceTasks(parsed.raw_tasks, enrichedContext)
assignCliExecutionIds(tasks, input.session.id) // MANDATORY: Assign CLI execution IDs
const flow_control = parsed.flow_control?.execution_order?.length > 0 ? parsed.flow_control : inferFlowControl(tasks)
const focus_paths = [...new Set(tasks.flatMap(t => [t.file, ...t.modification_points.map(m => m.file)]).filter(Boolean))]
const focus_paths = [...new Set(tasks.flatMap(t => [t.file || t.scope, ...t.modification_points.map(m => m.file)]).filter(Boolean))]
return {
summary: parsed.summary || `Implementation plan for: ${input.task_description.slice(0, 100)}`,
approach: parsed.approach || "Step-by-step implementation",
// Base fields (common to both schemas)
const base = {
summary: parsed.summary || `Plan for: ${input.task_description.slice(0, 100)}`,
tasks,
flow_control,
focus_paths,
estimated_time: parsed.time_estimate || `${tasks.length * 30} minutes`,
recommended_execution: input.complexity === "Low" ? "Agent" : "Codex",
complexity: input.complexity,
_metadata: { timestamp: new Date().toISOString(), source: "cli-lite-planning-agent", planning_mode: "agent-based", exploration_angles: input.explorationAngles || [], duration_seconds: Math.round((Date.now() - startTime) / 1000) }
recommended_execution: (input.complexity === "Low" || input.severity === "Low") ? "Agent" : "Codex",
_metadata: {
timestamp: new Date().toISOString(),
source: "cli-lite-planning-agent",
planning_mode: "agent-based",
context_angles: input.contextAngles || [],
duration_seconds: Math.round((Date.now() - startTime) / 1000)
}
}
// Schema-specific fields
if (schemaType === 'fix-plan') {
return {
...base,
root_cause: parsed.root_cause || "Root cause from diagnosis",
strategy: parsed.strategy || "comprehensive_fix",
severity: input.severity || "Medium",
risk_level: parsed.risk_level || "medium"
}
} else {
return {
...base,
approach: parsed.approach || "Step-by-step implementation",
complexity: input.complexity || "Medium"
}
}
}
```
@@ -383,9 +428,12 @@ function validateTask(task) {
## Key Reminders
**ALWAYS**:
- Generate task IDs (T1, T2, T3...)
- **Read schema first** to determine output structure
- Generate task IDs (T1/T2 for plan, FIX1/FIX2 for fix-plan)
- Include depends_on (even if empty [])
- Quantify acceptance criteria
- **Assign cli_execution_id** (`{sessionId}-{taskId}`)
- **Compute cli_execution strategy** based on depends_on
- Quantify acceptance/verification criteria
- Generate flow_control from dependencies
- Handle CLI errors with fallback chain
@@ -394,3 +442,5 @@ function validateTask(task) {
- Use vague acceptance criteria
- Create circular dependencies
- Skip task validation
- **Skip CLI execution ID assignment**
- **Ignore schema structure**

View File

@@ -66,8 +66,7 @@ You are a specialized execution agent that bridges CLI analysis tools with task
"task_config": {
"agent": "@test-fix-agent",
"type": "test-fix-iteration",
"max_iterations": 5,
"use_codex": false
"max_iterations": 5
}
}
```
@@ -108,7 +107,7 @@ Phase 3: Task JSON Generation
**Template-Based Command Construction with Test Layer Awareness**:
```bash
cd {project_root} && {cli_tool} -p "
ccw cli -p "
PURPOSE: Analyze {test_type} test failures and generate fix strategy for iteration {iteration}
TASK:
• Review {failed_tests.length} {test_type} test failures: [{test_names}]
@@ -135,7 +134,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |
- Consider previous iteration failures
- Validate fix doesn't introduce new vulnerabilities
- analysis=READ-ONLY
" {timeout_flag}
" --tool {cli_tool} --mode analysis --cd {project_root} --timeout {timeout_value}
```
**Layer-Specific Guidance Injection**:
@@ -263,7 +262,6 @@ function extractModificationPoints() {
"analysis_report": ".process/iteration-{iteration}-analysis.md",
"cli_output": ".process/iteration-{iteration}-cli-output.txt",
"max_iterations": "{task_config.max_iterations}",
"use_codex": "{task_config.use_codex}",
"parent_task": "{parent_task_id}",
"created_by": "@cli-planning-agent",
"created_at": "{timestamp}"
@@ -529,9 +527,9 @@ See: `.process/iteration-{iteration}-cli-output.txt`
1. **Detect test_type**: "integration" → Apply integration-specific diagnosis
2. **Execute CLI**:
```bash
gemini -p "PURPOSE: Analyze integration test failure...
ccw cli -p "PURPOSE: Analyze integration test failure...
TASK: Examine component interactions, data flow, interface contracts...
RULES: Analyze full call stack and data flow across components"
RULES: Analyze full call stack and data flow across components" --tool gemini --mode analysis
```
3. **Parse Output**: Extract the RCA, fix recommendation (修复建议), and verification recommendation (验证建议) sections
4. **Generate Task JSON** (IMPL-fix-1.json):

View File

@@ -24,8 +24,6 @@ You are a code execution specialist focused on implementing high-quality, produc
- **Context-driven** - Use provided context and existing code patterns
- **Quality over speed** - Write boring, reliable code that works
## Execution Process
### 1. Context Assessment
@@ -36,10 +34,11 @@ You are a code execution specialist focused on implementing high-quality, produc
- **context-package.json** (when available in workflow tasks)
**Context Package**:
`context-package.json` provides artifact paths - extract dynamically using `jq`:
`context-package.json` provides artifact paths - read using Read tool or ccw session:
```bash
# Get role analysis paths from context package
jq -r '.brainstorm_artifacts.role_analyses[].files[].path' context-package.json
# Get context package content from session using Read tool
Read(.workflow/active/${SESSION_ID}/.process/context-package.json)
# Returns parsed JSON with brainstorm_artifacts, focus_paths, etc.
```
**Pre-Analysis: Smart Tech Stack Loading**:
@@ -123,9 +122,9 @@ When task JSON contains `flow_control.implementation_approach` array:
- If `command` field present, execute it; otherwise use agent capabilities
**CLI Command Execution (CLI Execute Mode)**:
When step contains `command` field with Codex CLI, execute via Bash tool. For Codex resume:
- First task (`depends_on: []`): `codex -C [path] --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
- Subsequent tasks (has `depends_on`): Add `resume --last` flag to maintain session context
When step contains `command` field with Codex CLI, execute via CCW CLI. For Codex resume:
- First task (`depends_on: []`): `ccw cli -p "..." --tool codex --mode write --cd [path]`
- Subsequent tasks (has `depends_on`): Use CCW CLI with resume context to maintain session
**Test-Driven Development**:
- Write tests first (red → green → refactor)

View File

@@ -109,7 +109,7 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
3. **load_session_metadata**
- Action: Load session metadata
- Command: bash(cat .workflow/WFS-{session}/workflow-session.json)
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_metadata
```
@@ -119,17 +119,6 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
- No dependency management
- Used for temporary context preparation
### NOT Handled by This Agent
**JSON format** (used by code-developer, test-fix-agent):
```json
"flow_control": {
"pre_analysis": [...],
"implementation_approach": [...]
}
```
This complete JSON format is stored in `.task/IMPL-*.json` files and handled by implementation agents, not conceptual-planning-agent.
### Role-Specific Analysis Dimensions
@@ -146,14 +135,14 @@ This complete JSON format is stored in `.task/IMPL-*.json` files and handled by
### Output Integration
**Gemini Analysis Integration**: Pattern-based analysis results are integrated into the single role's output:
- Enhanced `analysis.md` with codebase insights and architectural patterns
**Gemini Analysis Integration**: Pattern-based analysis results are integrated into role output documents:
- Enhanced analysis documents with codebase insights and architectural patterns
- Role-specific technical recommendations based on existing conventions
- Pattern-based best practices from actual code examination
- Realistic feasibility assessments based on current implementation
**Codex Analysis Integration**: Autonomous analysis results provide comprehensive insights:
- Enhanced `analysis.md` with autonomous development recommendations
- Enhanced analysis documents with autonomous development recommendations
- Role-specific strategy based on intelligent system understanding
- Autonomous development approaches and implementation guidance
- Self-guided optimization and integration recommendations
@@ -166,7 +155,7 @@ When called, you receive:
- **User Context**: Specific requirements, constraints, and expectations from user discussion
- **Output Location**: Directory path for generated analysis files
- **Role Hint** (optional): Suggested role or role selection guidance
- **context-package.json** (CCW Workflow): Artifact paths catalog - extract using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
- **context-package.json** (CCW Workflow): Artifact paths catalog - use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- **ASSIGNED_ROLE** (optional): Specific role assignment
- **ANALYSIS_DIMENSIONS** (optional): Role-specific analysis dimensions
@@ -229,26 +218,23 @@ Generate documents according to loaded role template specifications:
**Output Location**: `.workflow/WFS-[session]/.brainstorming/[assigned-role]/`
**Required Files**:
- **analysis.md**: Main role perspective analysis incorporating user context and role template
- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
**Output Files**:
- **analysis.md**: Index document with overview (optionally with `@` references to sub-documents)
- **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
- **Auto-split if large**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files: analysis.md, analysis-1.md, analysis-2.md)
- **Content**: Includes both analysis AND recommendations sections within analysis files
- **[role-deliverables]/**: Directory for specialized role outputs as defined in planning role template (optional)
- **analysis-{slug}.md**: Section content documents (slug from section heading: lowercase, hyphens)
- Maximum 5 sub-documents (merge related sections if needed)
- **Content**: Analysis AND recommendations sections
**File Structure Example**:
```
.workflow/WFS-[session]/.brainstorming/system-architect/
├── analysis.md # Main system architecture analysis with recommendations
├── analysis-1.md # (Optional) Continuation if content >800 lines
└── deliverables/ # (Optional) Additional role-specific outputs
├── technical-architecture.md # System design specifications
├── technology-stack.md # Technology selection rationale
└── scalability-plan.md # Scaling strategy
├── analysis.md # Index with overview + @references
├── analysis-architecture-assessment.md # Section content
├── analysis-technology-evaluation.md # Section content
├── analysis-integration-strategy.md # Section content
└── analysis-recommendations.md # Section content (max 5 sub-docs total)
NOTE: ALL brainstorming output files MUST start with 'analysis' prefix
FORBIDDEN: recommendations.md, recommendations-*.md, or any non-'analysis' prefixed files
NOTE: ALL files MUST start with 'analysis' prefix. Max 5 sub-documents.
```
## Role-Specific Planning Process
@@ -268,14 +254,10 @@ FORBIDDEN: recommendations.md, recommendations-*.md, or any non-'analysis' prefi
- **Validate Against Template**: Ensure analysis meets role template requirements and standards
### 3. Brainstorming Documentation Phase
- **Create analysis.md**: Generate comprehensive role perspective analysis in designated output directory
- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
- **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
- **Content**: Include both analysis AND recommendations sections within analysis files
- **Auto-split**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files total)
- **Generate Role Deliverables**: Create specialized outputs as defined in planning role template (optional)
- **Create analysis.md**: Main document with overview (optionally with `@` references)
- **Create sub-documents**: `analysis-{slug}.md` for major sections (max 5)
- **Validate Output Structure**: Ensure all files saved to correct `.brainstorming/[role]/` directory
- **Naming Validation**: Verify NO files with `recommendations` prefix exist
- **Naming Validation**: Verify ALL files start with `analysis` prefix
- **Quality Review**: Ensure outputs meet role template standards and user requirements
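The naming validation might look like this (paths and the `Glob`-style helper follow the tool conventions used elsewhere in these agents; the 6-file ceiling is analysis.md plus at most 5 sub-documents):
```javascript
// Post-generation check: every brainstorming output must carry the 'analysis' prefix
const files = Glob(`.workflow/WFS-${session}/.brainstorming/${role}/*.md`)
const invalid = files.filter(f => !f.split("/").pop().startsWith("analysis"))
if (invalid.length > 0 || files.length > 6) {
  // Rename or merge offending files before finalizing output
}
```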
## Role-Specific Analysis Framework
@@ -324,5 +306,3 @@ When analysis is complete, ensure:
- **Relevance**: Directly addresses user's specified requirements
- **Actionability**: Provides concrete next steps and recommendations
### Windows Path Format Guidelines
- **Quick Ref**: `C:\Users` → MCP: `C:\\Users` | Bash: `/c/Users` or `C:/Users`

View File

@@ -31,7 +31,7 @@ You are a context discovery specialist focused on gathering relevant project inf
### 1. Reference Documentation (Project Standards)
**Tools**:
- `Read()` - Load CLAUDE.md, README.md, architecture docs
- `Bash(~/.claude/scripts/get_modules_by_depth.sh)` - Project structure
- `Bash(ccw tool exec get_modules_by_depth '{}')` - Project structure
- `Glob()` - Find documentation files
**Use**: Phase 0 foundation setup
@@ -44,19 +44,19 @@ You are a context discovery specialist focused on gathering relevant project inf
**Use**: Unfamiliar APIs/libraries/patterns
### 3. Existing Code Discovery
**Primary (Code-Index MCP)**:
- `mcp__code-index__set_project_path()` - Initialize index
- `mcp__code-index__find_files(pattern)` - File pattern matching
- `mcp__code-index__search_code_advanced()` - Content search
- `mcp__code-index__get_file_summary()` - File structure analysis
- `mcp__code-index__refresh_index()` - Update index
**Primary (CCW CodexLens MCP)**:
- `mcp__ccw-tools__codex_lens(action="init", path=".")` - Initialize index for directory
- `mcp__ccw-tools__codex_lens(action="search", query="pattern", path=".")` - Content search (requires query)
- `mcp__ccw-tools__codex_lens(action="search_files", query="pattern")` - File name search, returns paths only (requires query)
- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Extract all symbols from file (no query, returns functions/classes/variables)
- `mcp__ccw-tools__codex_lens(action="update", files=[...])` - Update index for specific files
**Fallback (CLI)**:
- `rg` (ripgrep) - Fast content search
- `find` - File discovery
- `Grep` - Pattern matching
**Priority**: Code-Index MCP > ripgrep > find > grep
**Priority**: CodexLens MCP > ripgrep > find > grep
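A sketch of this priority order, using the same call notation as the snippets below (availability detection is an assumption):
```javascript
// Prefer CodexLens MCP; degrade to ripgrep, then grep
function searchCode(query) {
  try {
    return mcp__ccw-tools__codex_lens({ action: "search", query, path: "." })
  } catch (unavailable) {
    return bash(`rg "${query}" -n --max-count 50 || grep -rn "${query}" . | head -50`)
  }
}
```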
## Simplified Execution Process (3 Phases)
@@ -77,12 +77,11 @@ if (file_exists(contextPackagePath)) {
**1.2 Foundation Setup**:
```javascript
// 1. Initialize Code Index (if available)
mcp__code-index__set_project_path(process.cwd())
mcp__code-index__refresh_index()
// 1. Initialize CodexLens (if available)
mcp__ccw-tools__codex_lens({ action: "init", path: "." })
// 2. Project Structure
bash(~/.claude/scripts/get_modules_by_depth.sh)
bash(ccw tool exec get_modules_by_depth '{}')
// 3. Load Documentation (if not in memory)
if (!memory.has("CLAUDE.md")) Read(CLAUDE.md)
@@ -100,10 +99,88 @@ if (!memory.has("README.md")) Read(README.md)
### Phase 2: Multi-Source Context Discovery
Execute all 3 tracks in parallel for comprehensive coverage.
Execute all tracks in parallel for comprehensive coverage.
**Note**: Historical archive analysis (querying `.workflow/archives/manifest.json`) is optional and should be performed if the manifest exists. Inject findings into `conflict_detection.historical_conflicts[]`.
#### Track 0: Exploration Synthesis (Optional)
**Trigger**: When `explorations-manifest.json` exists in session `.process/` folder
**Purpose**: Transform raw exploration data into prioritized, deduplicated insights. This is NOT simple aggregation - it synthesizes `critical_files` (priority-ranked), deduplicates patterns/integration_points, and generates `conflict_indicators`.
```javascript
// Check for exploration results from context-gather parallel explore phase
const manifestPath = `.workflow/active/${session_id}/.process/explorations-manifest.json`;
if (file_exists(manifestPath)) {
const manifest = JSON.parse(Read(manifestPath));
// Load full exploration data from each file
const explorationData = manifest.explorations.map(exp => ({
...exp,
data: JSON.parse(Read(exp.path))
}));
// Build explorations array with summaries
const explorations = explorationData.map(exp => ({
angle: exp.angle,
file: exp.file,
path: exp.path,
index: exp.data._metadata?.exploration_index || exp.index,
summary: {
relevant_files_count: exp.data.relevant_files?.length || 0,
key_patterns: exp.data.patterns,
integration_points: exp.data.integration_points
}
}));
// SYNTHESIS (not aggregation): Transform raw data into prioritized insights
const aggregated_insights = {
// CRITICAL: Synthesize priority-ranked critical_files from multiple relevant_files lists
// - Deduplicate by path
// - Rank by: mention count across angles + individual relevance scores
// - Top 10-15 files only (focused, actionable)
critical_files: synthesizeCriticalFiles(explorationData.flatMap(e => e.data.relevant_files || [])),
// SYNTHESIS: Generate conflict indicators from pattern mismatches, constraint violations
conflict_indicators: synthesizeConflictIndicators(explorationData),
// Deduplicate clarification questions (merge similar questions)
clarification_needs: deduplicateQuestions(explorationData.flatMap(e => e.data.clarification_needs || [])),
// Preserve source attribution for traceability
constraints: explorationData.map(e => ({ constraint: e.data.constraints, source_angle: e.angle })).filter(c => c.constraint),
// Deduplicate patterns across angles (merge identical patterns)
all_patterns: deduplicatePatterns(explorationData.map(e => ({ patterns: e.data.patterns, source_angle: e.angle }))),
// Deduplicate integration points (merge by file:line location)
all_integration_points: deduplicateIntegrationPoints(explorationData.map(e => ({ points: e.data.integration_points, source_angle: e.angle })))
};
// Store for Phase 3 packaging
exploration_results = { manifest_path: manifestPath, exploration_count: manifest.exploration_count,
complexity: manifest.complexity, angles: manifest.angles_explored,
explorations, aggregated_insights };
}
// Synthesis helper functions (conceptual)
function synthesizeCriticalFiles(allRelevantFiles) {
  // 1-3. Group by path, count mentions across angles, average relevance scores
  const byPath = new Map()
  for (const f of allRelevantFiles) {
    const e = byPath.get(f.path) || { path: f.path, mentions: 0, sum: 0, angles: new Set() }
    e.mentions++; e.sum += f.relevance || 0
    if (f.source_angle) e.angles.add(f.source_angle)
    byPath.set(f.path, e)
  }
  // 4-5. Rank by (mention_count * 0.6) + (avg_relevance * 0.4), return top 10-15
  return [...byPath.values()]
    .map(e => ({ path: e.path, relevance: e.sum / e.mentions,
                 mentioned_by_angles: [...e.angles],
                 score: e.mentions * 0.6 + (e.sum / e.mentions) * 0.4 }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 15)
}
function synthesizeConflictIndicators(explorationData) {
// 1. Detect pattern mismatches across angles
// 2. Identify constraint violations
// 3. Flag files mentioned with conflicting integration approaches
// 4. Assign severity: critical/high/medium/low
}
```
#### Track 1: Reference Documentation
Extract from Phase 0 loaded docs:
@@ -134,18 +211,18 @@ mcp__exa__web_search_exa({
**Layer 1: File Pattern Discovery**
```javascript
// Primary: Code-Index MCP
const files = mcp__code-index__find_files("*{keyword}*")
// Primary: CodexLens MCP
const files = mcp__ccw-tools__codex_lens({ action: "search_files", query: "*{keyword}*" })
// Fallback: find . -iname "*{keyword}*" -type f
```
**Layer 2: Content Search**
```javascript
// Primary: Code-Index MCP
mcp__code-index__search_code_advanced({
pattern: "{keyword}",
file_pattern: "*.ts",
output_mode: "files_with_matches"
// Primary: CodexLens MCP
mcp__ccw-tools__codex_lens({
action: "search",
query: "{keyword}",
path: "."
})
// Fallback: rg "{keyword}" -t ts --files-with-matches
```
@@ -153,11 +230,10 @@ mcp__code-index__search_code_advanced({
**Layer 3: Semantic Patterns**
```javascript
// Find definitions (class, interface, function)
mcp__code-index__search_code_advanced({
pattern: "^(export )?(class|interface|type|function) .*{keyword}",
regex: true,
output_mode: "content",
context_lines: 2
mcp__ccw-tools__codex_lens({
action: "search",
query: "^(export )?(class|interface|type|function) .*{keyword}",
path: "."
})
```
@@ -165,21 +241,22 @@ mcp__code-index__search_code_advanced({
```javascript
// Get file summaries for imports/exports
for (const file of discovered_files) {
const summary = mcp__code-index__get_file_summary(file)
// summary: {imports, functions, classes, line_count}
const summary = mcp__ccw-tools__codex_lens({ action: "symbol", file: file })
// summary: {symbols: [{name, type, line}]}
}
```
**Layer 5: Config & Tests**
```javascript
// Config files
mcp__code-index__find_files("*.config.*")
mcp__code-index__find_files("package.json")
mcp__ccw-tools__codex_lens({ action: "search_files", query: "*.config.*" })
mcp__ccw-tools__codex_lens({ action: "search_files", query: "package.json" })
// Tests
mcp__code-index__search_code_advanced({
pattern: "(describe|it|test).*{keyword}",
file_pattern: "*.{test,spec}.*"
mcp__ccw-tools__codex_lens({
action: "search",
query: "(describe|it|test).*{keyword}",
path: "."
})
```
@@ -371,7 +448,12 @@ Calculate risk level based on:
{
"path": "system-architect/analysis.md",
"type": "primary",
"content": "# System Architecture Analysis\n\n## Overview\n..."
"content": "# System Architecture Analysis\n\n## Overview\n@analysis-architecture.md\n@analysis-recommendations.md"
},
{
"path": "system-architect/analysis-architecture.md",
"type": "supplementary",
"content": "# Architecture Assessment\n\n..."
}
]
}
@@ -393,10 +475,39 @@ Calculate risk level based on:
},
"affected_modules": ["auth", "user-model", "middleware"],
"mitigation_strategy": "Incremental refactoring with backward compatibility"
},
"exploration_results": {
"manifest_path": ".workflow/active/{session}/.process/explorations-manifest.json",
"exploration_count": 3,
"complexity": "Medium",
"angles": ["architecture", "dependencies", "testing"],
"explorations": [
{
"angle": "architecture",
"file": "exploration-architecture.json",
"path": ".workflow/active/{session}/.process/exploration-architecture.json",
"index": 1,
"summary": {
"relevant_files_count": 5,
"key_patterns": "Service layer with DI",
"integration_points": "Container.registerService:45-60"
}
}
],
"aggregated_insights": {
"critical_files": [{"path": "src/auth/AuthService.ts", "relevance": 0.95, "mentioned_by_angles": ["architecture"]}],
"conflict_indicators": [{"type": "pattern_mismatch", "description": "...", "source_angle": "architecture", "severity": "medium"}],
"clarification_needs": [{"question": "...", "context": "...", "options": [], "source_angle": "architecture"}],
"constraints": [{"constraint": "Must follow existing DI pattern", "source_angle": "architecture"}],
"all_patterns": [{"patterns": "Service layer with DI", "source_angle": "architecture"}],
"all_integration_points": [{"points": "Container.registerService:45-60", "source_angle": "architecture"}]
}
}
}
```
**Note**: `exploration_results` is populated when exploration files exist (from context-gather parallel explore phase). If no explorations, this field is omitted or empty.
## Quality Validation
@@ -448,14 +559,14 @@ Output: .workflow/session/{session}/.process/context-package.json
- Expose sensitive data (credentials, keys)
- Exceed file limits (50 total)
- Include binaries/generated files
- Use ripgrep if code-index available
- Use ripgrep if CodexLens available
**ALWAYS**:
- Initialize code-index in Phase 0
- Initialize CodexLens in Phase 0
- Execute `ccw tool exec get_modules_by_depth '{}'`
- Load CLAUDE.md/README.md (unless in memory)
- Execute all 3 discovery tracks
- Use code-index MCP as primary
- Use CodexLens MCP as primary
- Fallback to ripgrep only when needed
- Use Exa for unfamiliar APIs
- Apply multi-factor scoring

View File

@@ -61,9 +61,9 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
**Step 2** (CLI execution):
- Agent substitutes [target_folders] into command
- Agent executes CLI command via Bash tool:
- Agent executes CLI command via CCW:
```bash
bash(cd src/modules && gemini --approval-mode yolo -p "
ccw cli -p "
PURPOSE: Generate module documentation
TASK: Create API.md and README.md for each module
MODE: write
@@ -71,7 +71,7 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
./src/modules/api|code|code:3|dirs:0
EXPECTED: Documentation files in .workflow/docs/my_project/src/modules/
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt) | Mirror source structure
")
" --tool gemini --mode write --cd src/modules
```
4. **CLI Execution** (Gemini CLI):
@@ -216,7 +216,7 @@ Before completion, verify:
{
"step": "analyze_module_structure",
"action": "Deep analysis of module structure and API",
"command": "bash(cd src/auth && gemini \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\")",
"command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\" --tool gemini --mode analysis --cd src/auth",
"output_to": "module_analysis",
"on_error": "fail"
}

View File

@@ -8,7 +8,7 @@ You are a documentation update coordinator for complex projects. Orchestrate par
## Core Mission
Execute depth-parallel updates for all modules using `~/.claude/scripts/update_module_claude.sh`. **Every module path must be processed**.
Execute depth-parallel updates for all modules using `ccw tool exec update_module_claude`. **Every module path must be processed**.
## Input Context
@@ -42,12 +42,12 @@ TodoWrite([
# 3. Launch parallel jobs (max 4)
# Depth 5 example (Layer 3 - use multi-layer):
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/analysis" "gemini" &
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/development" "gemini" &
ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/analysis","tool":"gemini"}' &
ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/development","tool":"gemini"}' &
# Depth 1 example (Layer 2 - use single-layer):
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/auth" "gemini" &
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/api" "gemini" &
ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/auth","tool":"gemini"}' &
ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/api","tool":"gemini"}' &
# ... up to 4 concurrent jobs
# 4. Wait for all depth jobs to complete
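# Sketch (assumptions: MODULE_PATHS is a hypothetical array of module paths
# gathered earlier; `wait -n` needs bash 4.3+): cap concurrency at 4, then block.
for path in "${MODULE_PATHS[@]}"; do
  while [ "$(jobs -rp | wc -l)" -ge 4 ]; do wait -n; done
  ccw tool exec update_module_claude "{\"strategy\":\"single-layer\",\"path\":\"$path\",\"tool\":\"gemini\"}" &
done
wait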

View File

@@ -36,10 +36,10 @@ You are a test context discovery specialist focused on gathering test coverage i
**Use**: Phase 1 source context loading
### 2. Test Coverage Discovery
**Primary (Code-Index MCP)**:
- `mcp__code-index__find_files(pattern)` - Find test files (*.test.*, *.spec.*)
- `mcp__code-index__search_code_advanced()` - Search test patterns
- `mcp__code-index__get_file_summary()` - Analyze test structure
**Primary (CCW CodexLens MCP)**:
- `mcp__ccw-tools__codex_lens(action="search_files", query="*.test.*")` - Find test files
- `mcp__ccw-tools__codex_lens(action="search", query="pattern")` - Search test patterns
- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Analyze test structure
**Fallback (CLI)**:
- `rg` (ripgrep) - Fast test pattern search
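For example, the fallback might locate and scan test files in one pass (a sketch; the glob and regex mirror the MCP queries above):
```bash
# List files whose names match test conventions and that contain test declarations
rg -l --glob '**/*.{test,spec}.*' '(describe|it|test)\(' .
```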
@@ -120,9 +120,10 @@ for (const summary_path of summaries) {
**2.1 Existing Test Discovery**:
```javascript
// Method 1: Code-Index MCP (preferred)
const test_files = mcp__code-index__find_files({
patterns: ["*.test.*", "*.spec.*", "*test_*.py", "*_test.go"]
// Method 1: CodexLens MCP (preferred)
const test_files = mcp__ccw-tools__codex_lens({
action: "search_files",
query: "*.test.* OR *.spec.* OR test_*.py OR *_test.go"
});
// Method 2: Fallback CLI

View File

@@ -83,17 +83,18 @@ When task JSON contains implementation_approach array:
- L1 (Unit): `*.test.*`, `*.spec.*` in `__tests__/`, `tests/unit/`
- L2 (Integration): `tests/integration/`, `*.integration.test.*`
- L3 (E2E): `tests/e2e/`, `*.e2e.test.*`, `cypress/`, `playwright/`
- **context-package.json** (CCW Workflow): Extract artifact paths using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
- **context-package.json** (CCW Workflow): Use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- Identify test commands from project configuration
```bash
# Detect test framework and multi-layered commands
if [ -f "package.json" ]; then
# Extract layer-specific test commands
LINT_CMD=$(cat package.json | jq -r '.scripts.lint // "eslint ."')
UNIT_CMD=$(cat package.json | jq -r '.scripts["test:unit"] // .scripts.test')
INTEGRATION_CMD=$(cat package.json | jq -r '.scripts["test:integration"] // ""')
E2E_CMD=$(cat package.json | jq -r '.scripts["test:e2e"] // ""')
# Extract layer-specific test commands using Read tool or jq
PKG_JSON=$(cat package.json)
LINT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts.lint // "eslint ."')
UNIT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:unit"] // .scripts.test')
INTEGRATION_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:integration"] // ""')
E2E_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:e2e"] // ""')
elif [ -f "pytest.ini" ] || [ -f "setup.py" ]; then
LINT_CMD="ruff check . || flake8 ."
UNIT_CMD="pytest tests/unit/"
@@ -142,9 +143,9 @@ run_test_layer "L1-unit" "$UNIT_CMD"
### 3. Failure Diagnosis & Fixing Loop
**Execution Modes**:
**Execution Modes** (determined by `flow_control.implementation_approach`):
**A. Manual Mode (Default, meta.use_codex=false)**:
**A. Agent Mode (Default, no `command` field in steps)**:
```
WHILE tests are failing AND iterations < max_iterations:
1. Use Gemini to diagnose failure (bug-fix template)
@@ -155,17 +156,17 @@ WHILE tests are failing AND iterations < max_iterations:
END WHILE
```
**B. Codex Mode (meta.use_codex=true)**:
**B. CLI Mode (`command` field present in implementation_approach steps)**:
```
WHILE tests are failing AND iterations < max_iterations:
1. Use Gemini to diagnose failure (bug-fix template)
2. Use Codex to apply fixes automatically with resume mechanism
2. Execute `command` field (e.g., Codex) to apply fixes automatically
3. Re-run test suite
4. Verify fix doesn't break other tests
END WHILE
```
**Codex Resume in Test-Fix Cycle** (when `meta.use_codex=true`):
**Codex Resume in Test-Fix Cycle** (when step has `command` with Codex):
- First iteration: Start new Codex session with full context
- Subsequent iterations: Use `resume --last` to maintain fix history and apply consistent strategies
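A condensed sketch of the CLI-mode loop under these assumptions: `$TEST_CMD` and `$MAX_ITER` are placeholders, the `ccw cli` flags follow the patterns shown earlier, and the session-resume behavior is the mechanism described above (its exact syntax is not specified here):
```bash
iter=0
until $TEST_CMD || [ "$iter" -ge "$MAX_ITER" ]; do
  # 1. Diagnose the failure with Gemini (bug-fix template)
  ccw cli -p "PURPOSE: Diagnose failing tests ..." --tool gemini --mode analysis
  # 2. Apply fixes via the step's command field (Codex); later iterations
  #    reuse the last Codex session per the resume mechanism above
  ccw cli -p "PURPOSE: Fix failing tests ..." --tool codex --mode write
  iter=$((iter + 1))
done
```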

View File

@@ -191,7 +191,7 @@ target/
### Step 2: Workspace Analysis (MANDATORY FIRST)
```bash
# Analyze workspace structure
bash(~/.claude/scripts/get_modules_by_depth.sh json)
bash(ccw tool exec get_modules_by_depth '{"format":"json"}')
```
### Step 3: Technology Detection

View File

@@ -101,10 +101,10 @@ src/ (depth 1) → SINGLE STRATEGY
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});
// Get module structure with classification
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});
// OR with path parameter
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
Bash({command: "cd <target-path> && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|type:<code|navigation>|...` to extract module paths, types, and count.
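One way to consume those records, assuming the pipe-delimited `key:value` format shown (a sketch, not the canonical parser):
```bash
ccw tool exec get_modules_by_depth '{"format":"list"}' | ccw tool exec classify_folders '{}' |
  while IFS='|' read -r depth path type _rest; do
    echo "module=${path#path:} depth=${depth#depth:} type=${type#type:}"
  done
```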
@@ -200,7 +200,7 @@ for (let layer of [3, 2, 1]) {
let strategy = module.depth >= 3 ? "full" : "single";
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "${strategy}" "." "${project_name}" "${tool}"`,
command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"${strategy}","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -263,7 +263,7 @@ MODULES:
TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}
EXECUTION SCRIPT: ~/.claude/scripts/generate_module_docs.sh
EXECUTION SCRIPT: ccw tool exec generate_module_docs
- Accepts strategy parameter: full | single
- Accepts folder type detection: code | navigation
- Tool execution via direct CLI commands (gemini/qwen/codex)
@@ -273,7 +273,7 @@ EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "{{strategy}}" "." "{{project_name}}" "${tool}"`,
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"{{strategy}}","sourcePath":".","projectName":"{{project_name}}","tool":"${tool}"}'`,
run_in_background: false
})
exit_code=$?
@@ -322,7 +322,7 @@ let project_root = get_project_root();
report("Generating project README.md...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-readme" "." "${project_name}" "${tool}"`,
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"project-readme","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -335,7 +335,7 @@ for (let tool of tool_order) {
report("Generating ARCHITECTURE.md and EXAMPLES.md...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-architecture" "." "${project_name}" "${tool}"`,
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"project-architecture","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -350,7 +350,7 @@ if (bash_result.stdout.includes("API_FOUND")) {
report("Generating HTTP API documentation...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "http-api" "." "${project_name}" "${tool}"`,
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"http-api","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {

View File

@@ -51,7 +51,7 @@ Orchestrates context-aware documentation generation/update for changed modules u
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});
// Detect changed modules
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
@@ -123,7 +123,7 @@ for (let depth of sorted_depths.reverse()) { // N → 0
return async () => {
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "single" "." "${project_name}" "${tool}"`,
command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -207,21 +207,21 @@ EXECUTION:
For each module above:
1. Try tool 1:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_1}}"`,
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_1}}"}'`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_1}}", proceed to next module
→ Failure: Try tool 2
2. Try tool 2:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_2}}"`,
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_2}}"}'`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_2}}", proceed to next module
→ Failure: Try tool 3
3. Try tool 3:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_3}}"`,
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_3}}"}'`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_3}}", proceed to next module

View File

@@ -64,12 +64,17 @@ Lightweight planner that analyzes project structure, decomposes documentation wo
```bash
# Get target path, project name, and root
bash(pwd && basename "$(pwd)" && git rev-parse --show-toplevel 2>/dev/null || pwd && date +%Y%m%d-%H%M%S)
```
# Create session directories (replace timestamp)
bash(mkdir -p .workflow/active/WFS-docs-{timestamp}/.{task,process,summaries})
```javascript
// Create docs session (type: docs)
SlashCommand(command="/workflow:session:start --type docs --new \"{project_name}-docs-{timestamp}\"")
// Parse output to get sessionId
```
# Create workflow-session.json (replace values)
bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentation","status":"planning","timestamp":"2024-01-20T14:30:22+08:00","path":".","target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' | jq '.' > .workflow/active/WFS-docs-{timestamp}/workflow-session.json)
```bash
# Update workflow-session.json with docs-specific fields
ccw session {sessionId} write workflow-session.json '{"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}'
```
### Phase 2: Analyze Structure
@@ -80,10 +85,10 @@ bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentati
```bash
# 1. Run folder analysis
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh)
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}')
# 2. Get top-level directories (first 2 path levels)
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}' | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)
# 3. Find existing docs (if directory exists)
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null; fi)
@@ -131,7 +136,8 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
```bash
# Count existing docs from doc-planning-data.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
ccw session WFS-docs-{timestamp} read .process/doc-planning-data.json --raw | jq '.existing_docs.file_list | length'
# Or read entire process file and parse
```
**Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
@@ -185,10 +191,10 @@ Large Projects (single dir >10 docs):
```bash
# 1. Get top-level directories from doc-planning-data.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
ccw session WFS-docs-{timestamp} read .process/doc-planning-data.json --raw | jq -r '.top_level_dirs[]'
# 2. Get mode from workflow-session.json
bash(cat .workflow/active/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
ccw session WFS-docs-{timestamp} read workflow-session.json --raw | jq -r '.mode // "full"'
# 3. Check for HTTP API
bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo "NO_API")
@@ -217,7 +223,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
**Task ID Calculation**:
```bash
group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json)
group_count=$(ccw session WFS-docs-{timestamp} read .process/doc-planning-data.json --raw | jq '.groups.count')
readme_id=$((group_count + 1)) # Next ID after groups
arch_id=$((group_count + 2))
api_id=$((group_count + 3))
@@ -230,12 +236,12 @@ api_id=$((group_count + 3))
| Mode | cli_execute | Placement | CLI MODE | Approval Flag | Agent Role |
|------|-------------|-----------|----------|---------------|------------|
| **Agent** | false | pre_analysis | analysis | (none) | Generate docs in implementation_approach |
| **CLI** | true | implementation_approach | write | --approval-mode yolo | Execute CLI commands, validate output |
| **CLI** | true | implementation_approach | write | --mode write | Execute CLI commands, validate output |
**Command Patterns**:
- Gemini/Qwen: `cd dir && gemini -p "..."`
- CLI Mode: `cd dir && gemini --approval-mode yolo -p "..."`
- Codex: `codex -C dir --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
- Gemini/Qwen: `ccw cli -p "..." --tool gemini --mode analysis --cd dir`
- CLI Mode: `ccw cli -p "..." --tool gemini --mode write --cd dir`
- Codex: `ccw cli -p "..." --tool codex --mode write --cd dir`
**Generation Process**:
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
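For step 1, the values can be read back with `ccw session` (a sketch; the key names come from the session JSON shown earlier):
```bash
tool=$(ccw session "$SESSION_ID" read workflow-session.json --raw | jq -r '.tool // "gemini"')
mode=$(ccw session "$SESSION_ID" read workflow-session.json --raw | jq -r '.mode // "full"')
cli_execute=$(ccw session "$SESSION_ID" read workflow-session.json --raw | jq -r '.cli_execute // false')
```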
@@ -280,8 +286,8 @@ api_id=$((group_count + 3))
"step": "load_precomputed_data",
"action": "Load Phase 2 analysis and extract group directories",
"commands": [
"bash(cat ${session_dir}/.process/doc-planning-data.json)",
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
"ccw session ${session_id} read .process/doc-planning-data.json",
"ccw session ${session_id} read .process/doc-planning-data.json --raw | jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories'"
],
"output_to": "phase2_context",
"note": "Single JSON file contains all Phase 2 analysis results"
@@ -326,7 +332,7 @@ api_id=$((group_count + 3))
{
"step": 2,
"title": "Batch generate documentation via CLI",
"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
"command": "ccw cli -p 'PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure' --tool gemini --mode write --cd ${dirs_from_group}",
"depends_on": [1],
"output": "generated_docs"
}
@@ -358,7 +364,7 @@ api_id=$((group_count + 3))
},
{
"step": "analyze_project",
"command": "bash(gemini \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\")",
"command": "bash(ccw cli -p \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\" --tool gemini --mode analysis)",
"output_to": "project_outline"
}
],
@@ -398,7 +404,7 @@ api_id=$((group_count + 3))
"pre_analysis": [
{"step": "load_existing_docs", "command": "bash(cat .workflow/docs/${project_name}/{ARCHITECTURE,EXAMPLES}.md 2>/dev/null || echo 'No existing docs')", "output_to": "existing_arch_examples"},
{"step": "load_all_docs", "command": "bash(cat .workflow/docs/${project_name}/README.md && find .workflow/docs/${project_name} -type f -name '*.md' ! -path '*/README.md' ! -path '*/ARCHITECTURE.md' ! -path '*/EXAMPLES.md' ! -path '*/api/*' | xargs cat)", "output_to": "all_docs"},
{"step": "analyze_architecture", "command": "bash(gemini \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\")", "output_to": "arch_examples_outline"}
{"step": "analyze_architecture", "command": "bash(ccw cli -p \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\" --tool gemini --mode analysis)", "output_to": "arch_examples_outline"}
],
"implementation_approach": [
{
@@ -435,7 +441,7 @@ api_id=$((group_count + 3))
"pre_analysis": [
{"step": "discover_api", "command": "bash(rg 'router\\.| @(Get|Post)' -g '*.{ts,js}')", "output_to": "endpoint_discovery"},
{"step": "load_existing_api", "command": "bash(cat .workflow/docs/${project_name}/api/README.md 2>/dev/null || echo 'No existing API docs')", "output_to": "existing_api_docs"},
{"step": "analyze_api", "command": "bash(gemini \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\")", "output_to": "api_outline"}
{"step": "analyze_api", "command": "bash(ccw cli -p \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\" --tool gemini --mode analysis)", "output_to": "api_outline"}
],
"implementation_approach": [
{
@@ -596,7 +602,7 @@ api_id=$((group_count + 3))
| Mode | CLI Placement | CLI MODE | Approval Flag | Agent Role |
|------|---------------|----------|---------------|------------|
| **Agent (default)** | pre_analysis | analysis | (none) | Generates documentation content |
| **CLI (--cli-execute)** | implementation_approach | write | --approval-mode yolo | Executes CLI commands, validates output |
| **CLI (--cli-execute)** | implementation_approach | write | --mode write | Executes CLI commands, validates output |
**Execution Flow**:
- **Phase 2**: Unified analysis once, results in `.process/`

View File

@@ -5,7 +5,7 @@ argument-hint: "[--tool gemini|qwen] \"task context description\""
allowed-tools: Task(*), Bash(*)
examples:
- /memory:load "在当前前端基础上开发用户认证功能"
- /memory:load --tool qwen -p "重构支付模块API"
- /memory:load --tool qwen "重构支付模块API"
---
# Memory Load Command (/memory:load)
@@ -39,7 +39,7 @@ The command fully delegates to **universal-executor agent**, which autonomously:
1. **Analyzes Project Structure**: Executes `get_modules_by_depth.sh` to understand architecture
2. **Loads Documentation**: Reads CLAUDE.md, README.md and other key docs
3. **Extracts Keywords**: Derives core keywords from task description
4. **Discovers Files**: Uses MCP code-index or rg/find to locate relevant files
4. **Discovers Files**: Uses CodexLens MCP or rg/find to locate relevant files
5. **CLI Deep Analysis**: Executes Gemini/Qwen CLI for deep context analysis
6. **Generates Content Package**: Returns structured JSON core content package
@@ -109,7 +109,7 @@ Task(
1. **Project Structure**
\`\`\`bash
bash(~/.claude/scripts/get_modules_by_depth.sh)
bash(ccw tool exec get_modules_by_depth '{}')
\`\`\`
2. **Core Documentation**
@@ -136,7 +136,7 @@ Task(
Execute Gemini/Qwen CLI for deep analysis (saves main thread tokens):
\`\`\`bash
cd . && ${tool} -p "
ccw cli -p "
PURPOSE: Extract project core context for task: ${task_description}
TASK: Analyze project architecture, tech stack, key patterns, relevant files
MODE: analysis
@@ -147,7 +147,7 @@ RULES:
- Identify key architecture patterns and technical constraints
- Extract integration points and development standards
- Output concise, structured format
"
" --tool ${tool} --mode analysis
\`\`\`
### Step 4: Generate Core Content Package
@@ -212,7 +212,7 @@ Before returning:
### Example 2: Using Qwen Tool
```bash
/memory:load --tool qwen -p "Refactor the payment module API"
/memory:load --tool qwen "Refactor the payment module API"
```
Agent uses Qwen CLI for analysis, returns same structured package.

View File

@@ -1,477 +1,314 @@
---
name: tech-research
description: 3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)
description: "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)"
argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---
# Tech Stack Research SKILL Generator
# Tech Stack Rules Generator
## Overview
**Pure Orchestrator with Agent Delegation**: Prepares context paths and delegates ALL work to agent. Agent produces files directly.
**Purpose**: Generate multi-layered, path-conditional rules that Claude Code automatically loads based on file context.
**Auto-Continue Workflow**: Runs fully autonomously once triggered. Each phase completes and automatically triggers the next phase.
**Key Difference from SKILL Memory**:
- **SKILL**: Manual loading via `Skill(command: "tech-name")`
- **Rules**: Automatic loading when working with matching file paths
**Execution Paths**:
- **Full Path**: All 3 phases (no existing SKILL OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 3 (existing SKILL found AND no `--regenerate` flag)
- **Phase 3 Always Executes**: SKILL index is always generated or updated
**Output Structure**:
```
.claude/rules/tech/{tech-stack}/
├── core.md # paths: **/*.{ext} - Core principles
├── patterns.md # paths: src/**/*.{ext} - Implementation patterns
├── testing.md # paths: **/*.{test,spec}.{ext} - Testing rules
├── config.md # paths: *.config.* - Configuration rules
├── api.md # paths: **/api/**/* - API rules (backend only)
├── components.md # paths: **/components/**/* - Component rules (frontend only)
└── metadata.json # Generation metadata
```
**Agent Responsibility**:
- Agent does ALL the work: context reading, Exa research, content synthesis, file writing
- Orchestrator only provides context paths and waits for completion
**Templates Location**: `~/.claude/workflows/cli-templates/prompts/rules/`
---
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **Context Path Delegation**: Pass session directory or tech stack name to agent, let agent do discovery
3. **Agent Produces Files**: Agent directly writes all module files, orchestrator does NOT parse agent output
4. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
5. **No User Prompts**: Never ask user questions or wait for input between phases
6. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
7. **Lightweight Index**: Phase 3 only generates SKILL.md index by reading existing files
1. **Start Immediately**: First action is TodoWrite initialization
2. **Path-Conditional Output**: Every rule file includes `paths` frontmatter
3. **Template-Driven**: Agent reads templates before generating content
4. **Agent Produces Files**: Agent writes all rule files directly
5. **No Manual Loading**: Rules auto-activate when Claude works with matching files
---
## 3-Phase Execution
### Phase 1: Prepare Context Paths
### Phase 1: Prepare Context & Detect Tech Stack
**Goal**: Detect input mode, prepare context paths for agent, check existing SKILL
**Goal**: Detect input mode, extract tech stack info, determine file extensions
**Input Mode Detection**:
```bash
# Get input parameter
input="$1"
# Detect mode
if [[ "$input" == WFS-* ]]; then
MODE="session"
SESSION_ID="$input"
CONTEXT_PATH=".workflow/${SESSION_ID}"
# Read workflow-session.json to extract tech stack
else
MODE="direct"
TECH_STACK_NAME="$input"
CONTEXT_PATH="$input" # Pass tech stack name as context
fi
```
**Check Existing SKILL**:
```bash
# For session mode, peek at session to get tech stack name
if [[ "$MODE" == "session" ]]; then
bash(test -f ".workflow/${SESSION_ID}/workflow-session.json")
Read(.workflow/${SESSION_ID}/workflow-session.json)
# Extract tech_stack_name (minimal extraction)
fi
**Tech Stack Analysis**:
```javascript
// Decompose composite tech stacks
// "typescript-react-nextjs" → ["typescript", "react", "nextjs"]
# Normalize and check
const TECH_EXTENSIONS = {
"typescript": "{ts,tsx}",
"javascript": "{js,jsx}",
"python": "py",
"rust": "rs",
"go": "go",
"java": "java",
"csharp": "cs",
"ruby": "rb",
"php": "php"
};
const FRAMEWORK_TYPE = {
"react": "frontend",
"vue": "frontend",
"angular": "frontend",
"nextjs": "fullstack",
"nuxt": "fullstack",
"fastapi": "backend",
"express": "backend",
"django": "backend",
"rails": "backend"
};
```
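A minimal sketch of the decomposition itself (the `-` delimiter is the convention stated above; the lookup tables then supply `FILE_EXT` and `FRAMEWORK_TYPE`):
```bash
input="typescript-react-nextjs"
IFS='-' read -ra COMPONENTS <<< "$input"
PRIMARY_LANG="${COMPONENTS[0]}"            # → typescript
echo "primary=$PRIMARY_LANG components=${COMPONENTS[*]}"
# TECH_EXTENSIONS["typescript"] → {ts,tsx}; FRAMEWORK_TYPE["nextjs"] → fullstack
```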
**Check Existing Rules**:
```bash
normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
bash(test -d ".claude/skills/${normalized_name}" && echo "exists" || echo "not_exists")
bash(find ".claude/skills/${normalized_name}" -name "*.md" 2>/dev/null | wc -l || echo 0)
rules_dir=".claude/rules/tech/${normalized_name}"
existing_count=$(find "${rules_dir}" -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Skip Decision**:
```javascript
if (existing_files > 0 && !regenerate_flag) {
SKIP_GENERATION = true
message = "Tech stack SKILL already exists, skipping Phase 2. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
bash(rm -rf ".claude/skills/${normalized_name}")
SKIP_GENERATION = false
message = "Regenerating tech stack SKILL from scratch."
} else {
SKIP_GENERATION = false
message = "No existing SKILL found, generating new tech stack documentation."
}
```
- If `existing_count > 0` AND no `--regenerate` → `SKIP_GENERATION = true`
- If `--regenerate` → Delete existing and regenerate
**Output Variables**:
- `MODE`: `session` or `direct`
- `SESSION_ID`: Session ID (if session mode)
- `CONTEXT_PATH`: Path to session directory OR tech stack name
- `TECH_STACK_NAME`: Extracted or provided tech stack name
- `SKIP_GENERATION`: Boolean - whether to skip Phase 2
- `TECH_STACK_NAME`: Normalized name
- `PRIMARY_LANG`: Primary language
- `FILE_EXT`: File extension pattern
- `FRAMEWORK_TYPE`: frontend | backend | fullstack | library
- `COMPONENTS`: Array of tech components
- `SKIP_GENERATION`: Boolean
**TodoWrite**:
- If skipping: Mark phase 1 completed, phase 2 completed, phase 3 in_progress
- If not skipping: Mark phase 1 completed, phase 2 in_progress
**TodoWrite**: Mark phase 1 completed
---
### Phase 2: Agent Produces All Files
### Phase 2: Agent Produces Path-Conditional Rules
**Skip Condition**: Skipped if `SKIP_GENERATION = true`
**Goal**: Delegate EVERYTHING to agent - context reading, Exa research, content synthesis, and file writing
**Agent Task Specification**:
**Goal**: Delegate to agent for Exa research and rule file generation
**Template Files**:
```
Task(
~/.claude/workflows/cli-templates/prompts/rules/
├── tech-rules-agent-prompt.txt # Agent instructions
├── rule-core.txt # Core principles template
├── rule-patterns.txt # Implementation patterns template
├── rule-testing.txt # Testing rules template
├── rule-config.txt # Configuration rules template
├── rule-api.txt # API rules template (backend)
└── rule-components.txt # Component rules template (frontend)
```
**Agent Task**:
```javascript
Task({
subagent_type: "general-purpose",
description: "Generate tech stack SKILL: {CONTEXT_PATH}",
prompt: "
Generate a complete tech stack SKILL package with Exa research.
description: `Generate tech stack rules: ${TECH_STACK_NAME}`,
prompt: `
You are generating path-conditional rules for Claude Code.
**Context Provided**:
- Mode: {MODE}
- Context Path: {CONTEXT_PATH}
## Context
- Tech Stack: ${TECH_STACK_NAME}
- Primary Language: ${PRIMARY_LANG}
- File Extensions: ${FILE_EXT}
- Framework Type: ${FRAMEWORK_TYPE}
- Components: ${JSON.stringify(COMPONENTS)}
- Output Directory: .claude/rules/tech/${TECH_STACK_NAME}/
**Templates Available**:
- Module Format: ~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt
- SKILL Index: ~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt
## Instructions
**Your Responsibilities**:
Read the agent prompt template for detailed instructions:
$(cat ~/.claude/workflows/cli-templates/prompts/rules/tech-rules-agent-prompt.txt)
1. **Extract Tech Stack Information**:
## Execution Steps
IF MODE == 'session':
- Read `.workflow/active/{session_id}/workflow-session.json`
- Read `.workflow/active/{session_id}/.process/context-package.json`
- Extract tech_stack: {language, frameworks, libraries}
- Build tech stack name: \"{language}-{framework1}-{framework2}\"
- Example: \"typescript-react-nextjs\"
1. Execute Exa research queries (see agent prompt)
2. Read each rule template
3. Generate rule files following template structure
4. Write files to output directory
5. Write metadata.json
6. Report completion
IF MODE == 'direct':
- Tech stack name = CONTEXT_PATH
- Parse composite: split by '-' delimiter
- Example: \"typescript-react-nextjs\" → [\"typescript\", \"react\", \"nextjs\"]
## Variable Substitutions
2. **Execute Exa Research** (4-6 parallel queries):
Base Queries (always execute):
- mcp__exa__get_code_context_exa(query: \"{tech} core principles best practices 2025\", tokensNum: 8000)
- mcp__exa__get_code_context_exa(query: \"{tech} common patterns architecture examples\", tokensNum: 7000)
- mcp__exa__web_search_exa(query: \"{tech} configuration setup tooling 2025\", numResults: 5)
- mcp__exa__get_code_context_exa(query: \"{tech} testing strategies\", tokensNum: 5000)
Component Queries (if composite):
- For each additional component:
mcp__exa__get_code_context_exa(query: \"{main_tech} {component} integration\", tokensNum: 5000)
3. **Read Module Format Template**:
Read template for structure guidance:
```bash
Read(~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt)
```
4. **Synthesize Content into 6 Modules**:
Follow template structure from tech-module-format.txt:
- **principles.md** - Core concepts, philosophies (~3K tokens)
- **patterns.md** - Implementation patterns with code examples (~5K tokens)
- **practices.md** - Best practices, anti-patterns, pitfalls (~4K tokens)
- **testing.md** - Testing strategies, frameworks (~3K tokens)
- **config.md** - Setup, configuration, tooling (~3K tokens)
- **frameworks.md** - Framework integration (only if composite, ~4K tokens)
Each module follows template format:
- Frontmatter (YAML)
- Main sections with clear headings
- Code examples from Exa research
- Best practices sections
- References to Exa sources
5. **Write Files Directly**:
```javascript
// Create directory
bash(mkdir -p \".claude/skills/{tech_stack_name}\")
// Write each module file using Write tool
Write({ file_path: \".claude/skills/{tech_stack_name}/principles.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/patterns.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/practices.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/testing.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/config.md\", content: ... })
// Write frameworks.md only if composite
// Write metadata.json
Write({
file_path: \".claude/skills/{tech_stack_name}/metadata.json\",
content: JSON.stringify({
tech_stack_name,
components,
is_composite,
generated_at: timestamp,
source: \"exa-research\",
research_summary: { total_queries, total_sources }
})
})
```
6. **Report Completion**:
Provide summary:
- Tech stack name
- Files created (count)
- Exa queries executed
- Sources consulted
**CRITICAL**:
- MUST read external template files before generating content (step 3 for modules, step 4 for index)
- You have FULL autonomy - read files, execute Exa, synthesize content, write files
- Do NOT return JSON or structured data - produce actual .md files
- Handle errors gracefully (Exa failures, missing files, template read failures)
- If tech stack cannot be determined, ask orchestrator to clarify
"
)
Replace in templates:
- {TECH_STACK_NAME} → ${TECH_STACK_NAME}
- {PRIMARY_LANG} → ${PRIMARY_LANG}
- {FILE_EXT} → ${FILE_EXT}
- {FRAMEWORK_TYPE} → ${FRAMEWORK_TYPE}
`
})
```
**Completion Criteria**:
- Agent task executed successfully
- 5-6 modular files written to `.claude/skills/{tech_stack_name}/`
- 4-6 rule files written with proper `paths` frontmatter
- metadata.json written
- Agent reports completion
- Agent reports files created
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
**TodoWrite**: Mark phase 2 completed
---
### Phase 3: Generate SKILL.md Index
### Phase 3: Verify & Report
**Note**: This phase **ALWAYS executes** - generates or updates the SKILL index.
**Goal**: Read generated module files and create SKILL.md index with loading recommendations
**Goal**: Verify generated files and provide usage summary
**Steps**:
1. **Verify Generated Files**:
1. **Verify Files**:
```bash
bash(find ".claude/skills/${TECH_STACK_NAME}" -name "*.md" -type f | sort)
find ".claude/rules/tech/${TECH_STACK_NAME}" -name "*.md" -type f
```
2. **Read metadata.json**:
2. **Validate Frontmatter**:
```bash
head -5 ".claude/rules/tech/${TECH_STACK_NAME}/core.md"
```
3. **Read Metadata**:
```javascript
Read(.claude/skills/${TECH_STACK_NAME}/metadata.json)
// Extract: tech_stack_name, components, is_composite, research_summary
Read(`.claude/rules/tech/${TECH_STACK_NAME}/metadata.json`)
```
3. **Read Module Headers** (optional, first 20 lines):
```javascript
Read(.claude/skills/${TECH_STACK_NAME}/principles.md, limit: 20)
// Repeat for other modules
4. **Generate Summary Report**:
```
Tech Stack Rules Generated
4. **Read SKILL Index Template**:
Tech Stack: {TECH_STACK_NAME}
Location: .claude/rules/tech/{TECH_STACK_NAME}/
```javascript
Read(~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt)
Files Created:
├── core.md → paths: **/*.{ext}
├── patterns.md → paths: src/**/*.{ext}
├── testing.md → paths: **/*.{test,spec}.{ext}
├── config.md → paths: *.config.*
├── api.md → paths: **/api/**/* (if backend)
└── components.md → paths: **/components/**/* (if frontend)
Auto-Loading:
- Rules apply automatically when editing matching files
- No manual loading required
Example Activation:
- Edit src/components/Button.tsx → core.md + patterns.md + components.md
- Edit tests/api.test.ts → core.md + testing.md
- Edit package.json → config.md
```
5. **Generate SKILL.md Index**:
Follow template from tech-skill-index.txt with variable substitutions:
- `{TECH_STACK_NAME}`: From metadata.json
- `{MAIN_TECH}`: Primary technology
- `{ISO_TIMESTAMP}`: Current timestamp
- `{QUERY_COUNT}`: From research_summary
- `{SOURCE_COUNT}`: From research_summary
- Conditional sections for composite tech stacks
Template provides structure for:
- Frontmatter with metadata
- Overview and tech stack description
- Module organization (Core/Practical/Config sections)
- Loading recommendations (Quick/Implementation/Complete)
- Usage guidelines and auto-trigger keywords
- Research metadata and version history
6. **Write SKILL.md**:
```javascript
Write({
file_path: `.claude/skills/${TECH_STACK_NAME}/SKILL.md`,
content: generatedIndexMarkdown
})
```
**Completion Criteria**:
- SKILL.md index written
- All module files verified
- Loading recommendations included
**TodoWrite**: Mark phase 3 completed
**Final Report**:
```
Tech Stack SKILL Package Complete
Tech Stack: {TECH_STACK_NAME}
Location: .claude/skills/{TECH_STACK_NAME}/
Files: SKILL.md + 5-6 modules + metadata.json
Exa Research: {queries} queries, {sources} sources
Usage: Skill(command: "{TECH_STACK_NAME}")
```
---
## Implementation Details
## Path Pattern Reference
### TodoWrite Patterns
| Pattern | Matches |
|---------|---------|
| `**/*.ts` | All .ts files |
| `src/**/*` | All files under src/ |
| `*.config.*` | Config files in root |
| `**/*.{ts,tsx}` | .ts and .tsx files |
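Each generated rule file carries one of these patterns in its `paths` frontmatter; a minimal sketch of such a file follows (the rule text is illustrative):
```bash
mkdir -p .claude/rules/tech/typescript
cat > .claude/rules/tech/typescript/core.md <<'EOF'
---
paths: "**/*.{ts,tsx}"
---
# TypeScript Core Principles
- Prefer strict typing; avoid `any`
EOF
```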
**Initialization** (Before Phase 1):
```javascript
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "in_progress", "activeForm": "Preparing context paths"},
{"content": "Agent produces all module files", "status": "pending", "activeForm": "Agent producing files"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL index"}
]})
```
**Full Path** (SKIP_GENERATION = false):
```javascript
// After Phase 1
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "in_progress", ...},
{"content": "Generate SKILL.md index", "status": "pending", ...}
]})
// After Phase 2
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
// After Phase 3
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "completed", ...}
]})
```
**Skip Path** (SKIP_GENERATION = true):
```javascript
// After Phase 1 (skip Phase 2)
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "completed", ...}, // Skipped
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
```
### Execution Flow
**Full Path**:
```
User → TodoWrite Init → Phase 1 (prepare) → Phase 2 (agent writes files) → Phase 3 (write index) → Report
```
**Skip Path**:
```
User → TodoWrite Init → Phase 1 (detect existing) → Phase 3 (update index) → Report
```
### Error Handling
**Phase 1 Errors**:
- Invalid session ID: Report error, verify session exists
- Missing context-package: Warn, fall back to direct mode
- No tech stack detected: Ask user to specify tech stack name
**Phase 2 Errors (Agent)**:
- Agent task fails: Retry once, report if fails again
- Exa API failures: Agent handles internally with retries
- Incomplete results: Warn user, proceed with partial data if minimum sections available
**Phase 3 Errors**:
- Write failures: Report which files failed
- Missing files: Note in SKILL.md, suggest regeneration
| Tech Stack | Core Pattern | Test Pattern |
|------------|--------------|--------------|
| TypeScript | `**/*.{ts,tsx}` | `**/*.{test,spec}.{ts,tsx}` |
| Python | `**/*.py` | `**/test_*.py, **/*_test.py` |
| Rust | `**/*.rs` | `**/tests/**/*.rs` |
| Go | `**/*.go` | `**/*_test.go` |
---
## Parameters
```bash
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate] [--tool <gemini|qwen>]
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate]
```
**Arguments**:
- **session-id | tech-stack-name**: Input source (auto-detected by WFS- prefix)
- Session mode: `WFS-user-auth-v2` - Extract tech stack from workflow
- Direct mode: `"typescript"`, `"typescript-react-nextjs"` - User specifies
- **--regenerate**: Force regenerate existing SKILL (deletes and recreates)
- **--tool**: Reserved for future CLI integration (default: gemini)
- **session-id**: `WFS-*` format - Extract from workflow session
- **tech-stack-name**: Direct input - `"typescript"`, `"typescript-react"`
- **--regenerate**: Force regenerate existing rules
---
## Examples
**Generated File Structure** (for all examples):
```
.claude/skills/{tech-stack}/
├── SKILL.md # Index (Phase 3)
├── principles.md # Agent (Phase 2)
├── patterns.md # Agent
├── practices.md # Agent
├── testing.md # Agent
├── config.md # Agent
├── frameworks.md # Agent (if composite)
└── metadata.json # Agent
```
### Direct Mode - Single Stack
### Single Language
```bash
/memory:tech-research "typescript"
```
**Workflow**:
1. Phase 1: Detects direct mode, checks existing SKILL
2. Phase 2: Agent executes 4 Exa queries, writes 5 modules
3. Phase 3: Generates SKILL.md index
**Output**: `.claude/rules/tech/typescript/` with 4 rule files
### Direct Mode - Composite Stack
### Frontend Stack
```bash
/memory:tech-research "typescript-react-nextjs"
/memory:tech-research "typescript-react"
```
**Workflow**:
1. Phase 1: Decomposes into ["typescript", "react", "nextjs"]
2. Phase 2: Agent executes 6 Exa queries (4 base + 2 components), writes 6 modules (adds frameworks.md)
3. Phase 3: Generates SKILL.md index with framework integration
**Output**: `.claude/rules/tech/typescript-react/` with 5 rule files (includes components.md)
### Session Mode - Extract from Workflow
### Backend Stack
```bash
/memory:tech-research "python-fastapi"
```
**Output**: `.claude/rules/tech/python-fastapi/` with 5 rule files (includes api.md)
### From Session
```bash
/memory:tech-research WFS-user-auth-20251104
```
**Workflow**:
1. Phase 1: Reads session, extracts tech stack: `python-fastapi-sqlalchemy`
2. Phase 2: Agent researches Python + FastAPI + SQLAlchemy, writes 6 modules
3. Phase 3: Generates SKILL.md index
**Workflow**: Extract tech stack from session → Generate rules
### Regenerate Existing
---
```bash
/memory:tech-research "react" --regenerate
```
**Workflow**:
1. Phase 1: Deletes existing SKILL due to --regenerate
2. Phase 2: Agent executes fresh Exa research (latest 2025 practices)
3. Phase 3: Generates updated SKILL.md
### Skip Path - Fast Update
```bash
/memory:tech-research "python"
```
**Scenario**: SKILL already exists with 7 files
**Workflow**:
1. Phase 1: Detects existing SKILL, sets SKIP_GENERATION = true
2. Phase 2: **SKIPPED**
3. Phase 3: Updates SKILL.md index only (5-10x faster)
## Comparison: Rules vs SKILL
| Aspect | SKILL Memory | Rules |
|--------|--------------|-------|
| Loading | Manual: `Skill("tech")` | Automatic by path |
| Scope | All files when loaded | Only matching files |
| Granularity | Monolithic packages | Per-file-type |
| Context | Full package | Only relevant rules |
**When to Use**:
- **Rules**: Tech stack conventions per file type
- **SKILL**: Reference docs, APIs, examples for manual lookup

View File

@@ -99,10 +99,10 @@ src/ (depth 1) → SINGLE-LAYER STRATEGY
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
// Get module structure
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});
// OR with --path
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
Bash({command: "cd <target-path> && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|...` to extract module paths and count.
@@ -185,7 +185,7 @@ for (let layer of [3, 2, 1]) {
let strategy = module.depth >= 3 ? "multi-layer" : "single-layer";
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "${strategy}" "." "${tool}"`,
command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"${strategy}","path":".","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -244,7 +244,7 @@ MODULES:
TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}
EXECUTION SCRIPT: ~/.claude/scripts/update_module_claude.sh
EXECUTION SCRIPT: ccw tool exec update_module_claude
- Accepts strategy parameter: multi-layer | single-layer
- Tool execution via direct CLI commands (gemini/qwen/codex)
@@ -252,7 +252,7 @@ EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "{{strategy}}" "." "${tool}"`,
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"{{strategy}}","path":".","tool":"${tool}"}'`,
run_in_background: false
})
exit_code=$?

View File

@@ -41,7 +41,7 @@ Orchestrates context-aware CLAUDE.md updates for changed modules using batched a
```javascript
// Detect changed modules
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
@@ -102,7 +102,7 @@ for (let depth of sorted_depths.reverse()) { // N → 0
return async () => {
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "${tool}"`,
command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -184,21 +184,21 @@ EXECUTION:
For each module above:
1. Try tool 1:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_1}}"`,
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_1}}"}'`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_1}}", proceed to next module
→ Failure: Try tool 2
2. Try tool 2:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_2}}"`,
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_2}}"}'`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_2}}", proceed to next module
→ Failure: Try tool 3
3. Try tool 3:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_3}}"`,
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_3}}"}'`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_3}}", proceed to next module

View File

@@ -187,7 +187,7 @@ Objectives:
3. Use Gemini for aggregation (optional):
Command pattern:
cd .workflow/.archives/{session_id} && gemini -p "
ccw cli -p "
PURPOSE: Extract lessons and conflicts from workflow session
TASK:
• Analyze IMPL_PLAN and lessons from manifest
@@ -198,7 +198,7 @@ Objectives:
CONTEXT: @IMPL_PLAN.md @workflow-session.json
EXPECTED: Structured lessons and conflicts in JSON format
RULES: Template reference from skill-aggregation.txt
"
" --tool gemini --mode analysis --cd .workflow/.archives/{session_id}
3.5. **Generate SKILL.md Description** (CRITICAL for auto-loading):
@@ -334,7 +334,7 @@ Objectives:
- Sort sessions by date
2. Use Gemini for final aggregation:
gemini -p "
ccw cli -p "
PURPOSE: Aggregate lessons and conflicts from all workflow sessions
TASK:
• Group successes by functional domain
@@ -345,7 +345,7 @@ Objectives:
CONTEXT: [Provide aggregated JSON data]
EXPECTED: Final aggregated structure for SKILL documents
RULES: Template reference from skill-aggregation.txt
"
" --tool gemini --mode analysis
3. Read templates for formatting (same 4 templates as single mode)

View File

@@ -101,7 +101,7 @@ Load only minimal necessary context from each artifact:
- Dependencies (depends_on, blocks)
- Context (requirements, focus_paths, acceptance, artifacts)
- Flow control (pre_analysis, implementation_approach)
- Meta (complexity, priority, use_codex)
- Meta (complexity, priority)
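A jq sketch of that minimal extraction (field names are taken from the list above; the task JSON layout is otherwise assumed):
```bash
jq '{depends_on, blocks, context, flow_control,
     meta: (.meta | {complexity, priority})}' task.json
```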
### 3. Build Semantic Models

Some files were not shown because too many files have changed in this diff.