Compare commits


19 Commits

Author SHA1 Message Date
cexll
5a50131a13 refactor!: major directory restructuring and npx support
- Create agents/ directory, move bmad, requirements, development-essentials
- Remove docs/, hooks/, dev-workflow/ directories
- Add npx support via github:cexll/myclaude
- Add bin/cli.js with --update command for installed modules
- Add package.json, skills/README.md, PLUGIN_README.md
- Update all references across config.json, README, marketplace.json
- Change default module from dev to do
- Update CHANGELOG with all 59 tags

BREAKING CHANGE: Directory structure changed, docs/hooks removed

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-26 16:57:06 +08:00
cexll
fca5c13c8d docs: add commercial licensing contact email 2026-01-25 22:49:18 +08:00
cexll
c1d3a0a07a fix: correct gitignore to not exclude cmd/codeagent-wrapper
The pattern 'codeagent-wrapper' was matching cmd/codeagent-wrapper/
directory. Changed to '/codeagent-wrapper' to only match root binary.

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-25 18:12:40 +08:00
cexll
2856055e2e fix: support concurrent tasks with unique state files
- Generate unique task_id (timestamp-pid-random) for each /do invocation
- State files now use pattern: do.{task_id}.local.md
- Stop hook scans all state files, aggregates blocking reasons
- Auto-cleanup completed task state files

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-25 18:04:47 +08:00
cexll
a9c1e8178f fix: correct build path in release workflow
- Remove obsolete cmd/codeagent directory
- Fix release.yml build path to ./cmd/codeagent-wrapper

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-25 17:52:34 +08:00
cexll
1afeca88ae fix: increase stdoutDrainTimeout from 100ms to 500ms
Resolves intermittent "completed without agent_message output" errors
when Claude CLI exits before all stdout data is read.

- internal/executor/executor.go:43
- internal/app/app.go:27
- Add benchmark script for stability testing

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-25 17:45:38 +08:00
cexll
326ad85c74 fix: use ANTHROPIC_AUTH_TOKEN for Claude CLI env injection
- Change env var from ANTHROPIC_API_KEY to ANTHROPIC_AUTH_TOKEN
- Add Backend field propagation in taskSpec (cli.go)
- Add stderr logging for injected env vars with API key masking
- Add comprehensive tests for env injection flow

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-24 15:20:29 +08:00
cexll
e66bec0083 test: use prefix match for version flag tests
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-24 14:27:43 +08:00
cexll
eb066395c2 docs: restructure root READMEs with do as recommended workflow
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-24 14:27:41 +08:00
cexll
b49dad842a docs: update do/omo/sparv module READMEs with detailed workflows
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-24 14:27:39 +08:00
cexll
d98086c661 docs: add README for bmad and requirements modules
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-24 14:27:37 +08:00
cexll
0420646258 update codeagent version 2026-01-24 14:01:54 +08:00
cexll
19a8d8e922 refactor: rename feature-dev to do workflow
- Rename skills/feature-dev/ → skills/do/
- Update config.json module name and paths
- Shorter command: /do instead of /feature-dev
- State file: .claude/do.local.md
- Completion promise: <promise>DO_COMPLETE</promise>

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-23 20:29:28 +08:00
NieiR
669b1d82ce fix(gemini): read GEMINI_MODEL from ~/.gemini/.env (#131)
When using gemini backend without --model flag, now automatically
reads GEMINI_MODEL from ~/.gemini/.env file, consistent with how
claude backend reads model from settings.
2026-01-23 12:03:50 +08:00
cexll
a21c31fd89 feat: add feature-dev skill with 7-phase workflow
Structured feature development with codeagent orchestration:
- Discovery, Exploration, Clarification, Architecture phases
- Implementation, Review, Summary phases
- Parallel agent execution via code-explorer, code-architect, etc.
- Hook-based workflow automation with validation scripts

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-23 12:01:31 +08:00
cexll
773f133111 chore: ignore references directory
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-23 12:01:04 +08:00
cexll
4f5d24531c feat(install): support ${CLAUDE_PLUGIN_ROOT} variable in hooks config
- find_module_hooks now returns (hooks_config, plugin_root_path) tuple
- Add _replace_hook_variables() for recursive placeholder substitution
- Add feature-dev module config to config.json

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-23 12:00:55 +08:00
cexll
cc24d43c8b fix(codeagent): validate non-empty output message before printing
Return exit code 1 when backend returns empty result.Message with exit_code=0.
Prevents silent failures where no output is produced.

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-23 12:00:47 +08:00
cexll
27d4ac8afd chore: add go.work.sum for workspace dependencies
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-23 12:00:38 +08:00
90 changed files with 3885 additions and 2899 deletions

View File

@@ -15,32 +15,25 @@
"source": "./skills/omo",
"category": "development"
},
{
"name": "dev",
"description": "Lightweight development workflow with requirements clarification, parallel codex execution, and mandatory 90% test coverage",
"version": "5.6.1",
"source": "./dev-workflow",
"category": "development"
},
{
"name": "requirements",
"description": "Requirements-driven development workflow with quality gates for practical feature implementation",
"version": "5.6.1",
"source": "./requirements-driven-workflow",
"source": "./agents/requirements",
"category": "development"
},
{
"name": "bmad",
"description": "Full BMAD agile workflow with role-based agents (PO, Architect, SM, Dev, QA) and interactive approval gates",
"version": "5.6.1",
"source": "./bmad-agile-workflow",
"source": "./agents/bmad",
"category": "development"
},
{
"name": "dev-kit",
"description": "Essential development commands for coding, debugging, testing, optimization, and documentation",
"version": "5.6.1",
"source": "./development-essentials",
"source": "./agents/development-essentials",
"category": "productivity"
},
{

View File

@@ -74,7 +74,7 @@ jobs:
if [ "${{ matrix.goos }}" = "windows" ]; then
OUTPUT_NAME="${OUTPUT_NAME}.exe"
fi
go build -ldflags="-s -w -X main.version=${VERSION}" -o ${OUTPUT_NAME} ./cmd/codeagent
go build -ldflags="-s -w -X main.version=${VERSION}" -o ${OUTPUT_NAME} ./cmd/codeagent-wrapper
chmod +x ${OUTPUT_NAME}
echo "artifact_path=codeagent-wrapper/${OUTPUT_NAME}" >> $GITHUB_OUTPUT

1
.gitignore vendored
View File

@@ -7,3 +7,4 @@
__pycache__
.coverage
coverage.out
references

View File

@@ -2,66 +2,451 @@
All notable changes to this project will be documented in this file.
## [5.6.4] - 2026-01-15
## [6.0.0] - 2026-01-26
### 🚀 Features
- add reasoning effort config for codex backend
- default to skip-permissions and bypass-sandbox
- add multi-agent support with yolo mode
- add omo module for multi-agent orchestration
- add intelligent backend selection based on task complexity (#61)
- v5.4.0 structured execution report (#94)
- add millisecond-precision timestamps to all log entries (#91)
- skill-install install script and security scan
- add uninstall scripts with selective module removal
### 🐛 Bug Fixes
- filter codex stderr noise logs
- use config override for codex reasoning effort
- propagate SkipPermissions to parallel tasks (#113)
- add timeout for Windows process termination
- reject dash as workdir parameter (#118)
- add sleep in fake script to prevent CI race condition
- fix gemini env load
- fix omo
- fix codeagent skill TaskOutput
- fix Gemini init event session_id not being extracted (#111)
- Windows backend exit: terminate the process tree via taskkill + turn.completed support (#108)
- support model parameter for all backends, auto-inject from settings (#105)
- replace setx with reg add to avoid 1024-char PATH truncation (#101)
- remove log noise from unknown event formats (#96)
- prevent duplicate PATH entries on reinstall (#95)
- Minor issues #12 and #13 - ASCII mode and performance optimization
- correct settings.json filename and bump version to v5.2.8
- allow claude backend to read env from setting.json while preventing recursion (#92)
- comprehensive security and quality improvements for PR #85 & #87 (#90)
- Improve backend termination after message and extend timeout (#86)
- parser duplicate-parsing optimization + critical bug fixes + PR #86 compatibility (#88)
- filter noisy stderr output from gemini backend (#83)
- fix WSL install.sh formatting issue (#78)
- fix PID confusion in multi-backend parallel logs and remove the wrapper format (#74) (#76)
- support `npx github:cexll/myclaude` for installation and execution
- default module changed from `dev` to `do`
### 🚜 Refactor
- remove sisyphus agent and unused code
- streamline agent documentation and remove sisyphus
- restructure: create `agents/` and move `bmad-agile-workflow` → `agents/bmad`, `requirements-driven-workflow` → `agents/requirements`, `development-essentials` → `agents/development-essentials`
- remove legacy directories: `docs/`, `hooks/`, `dev-workflow/`
- update references across `config.json`, `README.md`, `README_CN.md`, `marketplace.json`, etc.
### 📚 Documentation
- add OmO workflow to README and fix plugin marketplace structure
- update FAQ for default bypass/skip-permissions behavior
- add FAQ section
- update troubleshooting with idempotent PATH commands (#95)
- add `skills/README.md` and `PLUGIN_README.md`
### 💼 Other
- add `package.json` and `bin/cli.js` for npx packaging
## [6.1.5] - 2026-01-25
### 🐛 Bug Fixes
- correct gitignore to not exclude cmd/codeagent-wrapper
## [6.1.4] - 2026-01-25
### 🐛 Bug Fixes
- support concurrent tasks with unique state files
## [6.1.3] - 2026-01-25
### 🐛 Bug Fixes
- correct build path in release workflow
- increase stdoutDrainTimeout from 100ms to 500ms
## [6.1.2] - 2026-01-24
### 🐛 Bug Fixes
- use ANTHROPIC_AUTH_TOKEN for Claude CLI env injection
### 💼 Other
- update codeagent version
### 📚 Documentation
- restructure root READMEs with do as recommended workflow
- update do/omo/sparv module READMEs with detailed workflows
- add README for bmad and requirements modules
### 🧪 Testing
- use prefix match for version flag tests
## [6.1.1] - 2026-01-23
### 🚜 Refactor
- rename feature-dev to do workflow
## [6.1.0] - 2026-01-23
### ⚙️ Miscellaneous Tasks
- ignore references directory
- add go.work.sum for workspace dependencies
### 🐛 Bug Fixes
- read GEMINI_MODEL from ~/.gemini/.env ([#131](https://github.com/cexll/myclaude/issues/131))
- validate non-empty output message before printing
### 🚀 Features
- add feature-dev skill with 7-phase workflow
- support ${CLAUDE_PLUGIN_ROOT} variable in hooks config
## [6.0.0-alpha1] - 2026-01-20
### 🐛 Bug Fixes
- add missing cmd/codeagent/main.go entry point
- update release workflow build path for new directory structure
- write PATH config to both profile and rc files ([#128](https://github.com/cexll/myclaude/issues/128))
### 🚀 Features
- add course module with dev, product-requirements and test-cases skills
- add hooks management to install.py
### 🚜 Refactor
- restructure codebase to internal/ directory with modular architecture
## [5.6.7] - 2026-01-17
### 💼 Other
- remove .sparv
### 📚 Documentation
- update 'Agent Hierarchy' model for frontend-ui-ux-engineer and document-writer in README ([#127](https://github.com/cexll/myclaude/issues/127))
- update mappings for frontend-ui-ux-engineer and document-writer in README ([#126](https://github.com/cexll/myclaude/issues/126))
### 🚀 Features
- add sparv module and interactive plugin manager
- add sparv enhanced rules v1.1
- add sparv skill to claude-plugin v1.1.0
- feat sparv skill
## [5.6.6] - 2026-01-16
### 🐛 Bug Fixes
- remove extraneous dash arg for opencode stdin mode ([#124](https://github.com/cexll/myclaude/issues/124))
### 💼 Other
- update readme
## [5.6.5] - 2026-01-16
### 🐛 Bug Fixes
- correct default models for oracle and librarian agents ([#120](https://github.com/cexll/myclaude/issues/120))
### 🚀 Features
- feat dev skill
## [5.6.4] - 2026-01-15
### 🐛 Bug Fixes
- filter codex 0.84.0 stderr noise logs ([#122](https://github.com/cexll/myclaude/issues/122))
- filter codex stderr noise logs
## [5.6.3] - 2026-01-14
### ⚙️ Miscellaneous Tasks
- bump codeagent-wrapper version to 5.6.3
### 🐛 Bug Fixes
- update version tests to match 5.6.3
- use config override for codex reasoning effort
## [5.6.2] - 2026-01-14
### 🐛 Bug Fixes
- propagate SkipPermissions to parallel tasks ([#113](https://github.com/cexll/myclaude/issues/113))
- add timeout for Windows process termination
- reject dash as workdir parameter ([#118](https://github.com/cexll/myclaude/issues/118))
### 📚 Documentation
- add OmO workflow to README and fix plugin marketplace structure
### 🚜 Refactor
- remove sisyphus agent and unused code
## [5.6.1] - 2026-01-13
### 🐛 Bug Fixes
- add sleep in fake script to prevent CI race condition
- fix gemini env load
- fix omo
### 🚀 Features
- add reasoning effort config for codex backend
## [5.6.0] - 2026-01-13
### 📚 Documentation
- update FAQ for default bypass/skip-permissions behavior
### 🚀 Features
- default to skip-permissions and bypass-sandbox
- add omo module for multi-agent orchestration
### 🚜 Refactor
- streamline agent documentation and remove sisyphus
## [5.5.0] - 2026-01-12
### 🐛 Bug Fixes
- fix Gemini init event session_id not being extracted ([#111](https://github.com/cexll/myclaude/issues/111))
- fix codeagent skill TaskOutput
### 💼 Other
- Merge branch 'master' of github.com:cexll/myclaude
- add test-cases skill
- add browser skill
- BMAD and Requirements-Driven support generating corresponding documents based on semantics (#82)
### 🚀 Features
- add multi-agent support with yolo mode
## [5.4.4] - 2026-01-08
### 💼 Other
- fix Windows backend exit: terminate the process tree via taskkill + turn.completed support ([#108](https://github.com/cexll/myclaude/issues/108))
## [5.4.3] - 2026-01-06
### 🐛 Bug Fixes
- support model parameter for all backends, auto-inject from settings ([#105](https://github.com/cexll/myclaude/issues/105))
### 📚 Documentation
- add FAQ Q5 for permission/sandbox env vars
### 🚀 Features
- feat skill-install install script and security scan
- add uninstall scripts with selective module removal
## [5.4.2] - 2025-12-31
### 🐛 Bug Fixes
- replace setx with reg add to avoid 1024-char PATH truncation ([#101](https://github.com/cexll/myclaude/issues/101))
## [5.4.1] - 2025-12-26
### 🐛 Bug Fixes
- remove log noise from unknown event formats ([#96](https://github.com/cexll/myclaude/issues/96))
- prevent duplicate PATH entries on reinstall ([#95](https://github.com/cexll/myclaude/issues/95))
### 📚 Documentation
- add FAQ section
- update troubleshooting with idempotent PATH commands ([#95](https://github.com/cexll/myclaude/issues/95))
### 🚀 Features
- Add intelligent backend selection based on task complexity ([#61](https://github.com/cexll/myclaude/issues/61))
## [5.4.0] - 2025-12-24
### 🐛 Bug Fixes
- Minor issues #12 and #13 - ASCII mode and performance optimization
- code review fixes for PR #94 - all critical and major issues resolved
### 🚀 Features
- v5.4.0 structured execution report ([#94](https://github.com/cexll/myclaude/issues/94))
## [5.2.8] - 2025-12-22
### ⚙️ Miscellaneous Tasks
- simplify release workflow to use GitHub auto-generated notes
### 🐛 Bug Fixes
- correct settings.json filename and bump version to v5.2.8
## [5.2.7] - 2025-12-21
### ⚙️ Miscellaneous Tasks
- bump version to v5.2.7
### 🐛 Bug Fixes
- allow claude backend to read env from setting.json while preventing recursion ([#92](https://github.com/cexll/myclaude/issues/92))
- comprehensive security and quality improvements for PR #85 & #87 ([#90](https://github.com/cexll/myclaude/issues/90))
- parser duplicate-parsing optimization + critical bug fixes + PR #86 compatibility ([#88](https://github.com/cexll/myclaude/issues/88))
### 💼 Other
- Improve backend termination after message and extend timeout ([#86](https://github.com/cexll/myclaude/issues/86))
### 🚀 Features
- add millisecond-precision timestamps to all log entries ([#91](https://github.com/cexll/myclaude/issues/91))
## [5.2.6] - 2025-12-19
### 🐛 Bug Fixes
- filter noisy stderr output from gemini backend ([#83](https://github.com/cexll/myclaude/issues/83))
- fix WSL install.sh formatting issue ([#78](https://github.com/cexll/myclaude/issues/78))
### 💼 Other
- update all readme
- BMAD and Requirements-Driven support generating corresponding documents based on semantics ([#82](https://github.com/cexll/myclaude/issues/82))
## [5.2.5] - 2025-12-17
### 🐛 Bug Fixes
- fix PID confusion in multi-backend parallel logs and remove the wrapper format ([#74](https://github.com/cexll/myclaude/issues/74)) ([#76](https://github.com/cexll/myclaude/issues/76))
- replace "Codex" with "codeagent" in dev-plan-generator subagent
- fix install.py on Windows
### 💼 Other
- Merge pull request #71 from aliceric27/master
- Merge branch 'cexll:master' into master
- Merge pull request #72 from changxvv/master
- update changelog
- update codeagent skill backend select
## [5.2.4] - 2025-12-16

View File

@@ -7,12 +7,12 @@
help:
@echo "Claude Code Multi-Agent Workflow - Quick Deployment"
@echo ""
@echo "Recommended installation: python3 install.py --install-dir ~/.claude"
@echo "Recommended installation: npx github:cexll/myclaude"
@echo ""
@echo "Usage: make [target]"
@echo ""
@echo "Targets:"
@echo " install - LEGACY: install all configurations (prefer install.py)"
@echo " install - LEGACY: install all configurations (prefer npx github:cexll/myclaude)"
@echo " deploy-bmad - Deploy BMAD workflow (bmad-pilot)"
@echo " deploy-requirements - Deploy Requirements workflow (requirements-pilot)"
@echo " deploy-essentials - Deploy Development Essentials workflow"
@@ -31,16 +31,16 @@ CLAUDE_CONFIG_DIR = ~/.claude
SPECS_DIR = .claude/specs
# Workflow directories
BMAD_DIR = bmad-agile-workflow
REQUIREMENTS_DIR = requirements-driven-workflow
ESSENTIALS_DIR = development-essentials
BMAD_DIR = agents/bmad
REQUIREMENTS_DIR = agents/requirements
ESSENTIALS_DIR = agents/development-essentials
ADVANCED_DIR = advanced-ai-agents
OUTPUT_STYLES_DIR = output-styles
# Install all configurations
install: deploy-all
@echo "⚠️ LEGACY PATH: make install will be removed in future versions."
@echo " Prefer: python3 install.py --install-dir ~/.claude"
@echo " Prefer: npx github:cexll/myclaude"
@echo "✅ Installation complete!"
# Deploy BMAD workflow
@@ -159,4 +159,3 @@ changelog:
@echo ""
@echo "Preview the changes:"
@echo " git diff CHANGELOG.md"

18
PLUGIN_README.md Normal file
View File

@@ -0,0 +1,18 @@
# Plugin System
Claude Code plugins for this repo are defined in `.claude-plugin/marketplace.json`.
## Install
```bash
/plugin marketplace add cexll/myclaude
/plugin list
```
## Available Plugins
- `bmad` - BMAD workflow (`./agents/bmad`)
- `requirements` - requirements-driven workflow (`./agents/requirements`)
- `dev-kit` - development essentials (`./agents/development-essentials`)
- `omo` - orchestration skill (`./skills/omo`)
- `sparv` - SPARV workflow (`./skills/sparv`)

649
README.md
View File

@@ -3,404 +3,102 @@
# Claude Code Multi-Agent Workflow System
[![Run in Smithery](https://smithery.ai/badge/skills/cexll)](https://smithery.ai/skills?ns=cexll&utm_source=github&utm_medium=badge)
[![License: AGPL-3.0](https://img.shields.io/badge/License-AGPL_v3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![Claude Code](https://img.shields.io/badge/Claude-Code-blue)](https://claude.ai/code)
[![Version](https://img.shields.io/badge/Version-5.6-green)](https://github.com/cexll/myclaude)
[![Version](https://img.shields.io/badge/Version-6.x-green)](https://github.com/cexll/myclaude)
> AI-powered development automation with multi-backend execution (Codex/Claude/Gemini)
> AI-powered development automation with multi-backend execution (Codex/Claude/Gemini/OpenCode)
## Core Concept: Multi-Backend Architecture
This system leverages a **dual-agent architecture** with pluggable AI backends:
| Role | Agent | Responsibility |
|------|-------|----------------|
| **Orchestrator** | Claude Code | Planning, context gathering, verification, user interaction |
| **Executor** | codeagent-wrapper | Code editing, test execution (Codex/Claude/Gemini backends) |
**Why this separation?**
- Claude Code excels at understanding context and orchestrating complex workflows
- Specialized backends (Codex for code, Claude for reasoning, Gemini for prototyping) excel at focused execution
- Backend selection via `--backend codex|claude|gemini` matches the model to the task
## Quick Start (on Windows, run in PowerShell)
## Quick Start
```bash
git clone https://github.com/cexll/myclaude.git
cd myclaude
python3 install.py --install-dir ~/.claude
npx github:cexll/myclaude
```
## Workflows Overview
## Modules Overview
### 0. OmO Multi-Agent Orchestrator (Recommended for Complex Tasks)
**Intelligent multi-agent orchestration that routes tasks to specialized agents based on risk signals.**
```bash
/omo "analyze and fix this authentication bug"
```
**Agent Hierarchy:**
| Agent | Role | Backend | Model |
|-------|------|---------|-------|
| `oracle` | Technical advisor | Claude | claude-opus-4-5 |
| `librarian` | External research | Claude | claude-sonnet-4-5 |
| `explore` | Codebase search | OpenCode | grok-code |
| `develop` | Code implementation | Codex | gpt-5.2 |
| `frontend-ui-ux-engineer` | UI/UX specialist | Gemini | gemini-3-pro |
| `document-writer` | Documentation | Gemini | gemini-3-flash |
**Routing Signals (Not Fixed Pipeline):**
- Code location unclear → `explore`
- External library/API → `librarian`
- Risky/multi-file change → `oracle`
- Implementation needed → `develop` / `frontend-ui-ux-engineer`
**Common Recipes:**
- Explain code: `explore`
- Small fix with known location: `develop` directly
- Bug fix, location unknown: `explore → develop`
- Cross-cutting refactor: `explore → oracle → develop`
- External API integration: `explore + librarian → oracle → develop`
**Best For:** Complex bug investigation, multi-file refactoring, architecture decisions
---
### 1. Dev Workflow (Recommended)
**The primary workflow for most development tasks.**
```bash
/dev "implement user authentication with JWT"
```
**6-Step Process:**
1. **Requirements Clarification** - Interactive Q&A to clarify scope
2. **Codex Deep Analysis** - Codebase exploration and architecture decisions
3. **Dev Plan Generation** - Structured task breakdown with test requirements
4. **Parallel Execution** - Codex executes tasks concurrently
5. **Coverage Validation** - Enforce ≥90% test coverage
6. **Completion Summary** - Report with file changes and coverage stats
**Key Features:**
- Claude Code orchestrates, Codex executes all code changes
- Automatic task parallelization for speed
- Mandatory 90% test coverage gate
- Rollback on failure
**Best For:** Feature development, refactoring, bug fixes with tests
---
### 2. BMAD Agile Workflow
**Full enterprise agile methodology with 6 specialized agents.**
```bash
/bmad-pilot "build e-commerce checkout system"
```
**Agents:**
| Agent | Role |
|-------|------|
| Product Owner | Requirements & user stories |
| Architect | System design & tech decisions |
| Tech Lead | Sprint planning & task breakdown |
| Developer | Implementation |
| Code Reviewer | Quality assurance |
| QA Engineer | Testing & validation |
**Process:**
```
Requirements → Architecture → Sprint Plan → Development → Review → QA
     ↓              ↓             ↓             ↓          ↓       ↓
   PRD.md       DESIGN.md     SPRINT.md       Code     REVIEW.md TEST.md
```
**Best For:** Large features, team coordination, enterprise projects
---
### 3. Requirements-Driven Workflow
**Lightweight requirements-to-code pipeline.**
```bash
/requirements-pilot "implement API rate limiting"
```
**Process:**
1. Requirements generation with quality scoring
2. Implementation planning
3. Code generation
4. Review and testing
**Best For:** Quick prototypes, well-defined features
---
### 4. Development Essentials
**Direct commands for daily coding tasks.**
| Command | Purpose |
|---------|---------|
| `/code` | Implement a feature |
| `/debug` | Debug an issue |
| `/test` | Write tests |
| `/review` | Code review |
| `/optimize` | Performance optimization |
| `/refactor` | Code refactoring |
| `/docs` | Documentation |
**Best For:** Quick tasks, no workflow overhead needed
## Enterprise Workflow Features
- **Multi-backend execution:** `codeagent-wrapper --backend codex|claude|gemini` (default `codex`) so you can match the model to the task without changing workflows.
- **GitHub workflow commands:** `/gh-create-issue "short need"` creates structured issues; `/gh-issue-implement 123` pulls issue #123, drives development, and prepares the PR.
- **Skills + hooks activation:** `.claude/hooks` run automation (tests, reviews), while `.claude/skills/skill-rules.json` auto-suggests the right skills. Keep hooks enabled in `.claude/settings.json` to activate the enterprise workflow helpers.
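To make the backend flag concrete, here is a minimal sketch (the task text and file path are placeholders, and flag placement is an assumption based on the stdin usage shown later in this README):
```bash
# Default backend (codex): task text is read from stdin
codeagent-wrapper - <<'EOF'
add input validation to @src/api/users.ts
EOF

# Same task, delegated to the Claude CLI instead
codeagent-wrapper --backend claude - <<'EOF'
add input validation to @src/api/users.ts
EOF
```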
---
## Version Requirements
### Codex CLI
**Minimum version:** Check compatibility with your installation
The codeagent-wrapper uses these Codex CLI features:
- `codex e` - Execute commands (shorthand for `codex exec`)
- `--skip-git-repo-check` - Skip git repository validation
- `--json` - JSON stream output format
- `-C <workdir>` - Set working directory
- `resume <session_id>` - Resume previous sessions
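Assembled from the flags above, an invocation looks roughly like the following (argument order and the task text are illustrative assumptions, not the wrapper's literal command line):
```bash
# New task: JSON stream output, explicit workdir, no git repo check
codex e --skip-git-repo-check --json -C /path/to/project "summarize failing tests"

# Continue a previous session by id
codex resume <session_id> "apply the fix we discussed"
```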
**Verify Codex CLI is installed:**
```bash
which codex
codex --version
```
### Claude CLI
**Minimum version:** Check compatibility with your installation
Required features:
- `--output-format stream-json` - Streaming JSON output format
- `--setting-sources` - Control setting sources (prevents infinite recursion)
- `--dangerously-skip-permissions` - Skip permission prompts (use with caution)
- `-p` - Prompt input flag
- `-r <session_id>` - Resume sessions
**Security Note:** The wrapper adds `--dangerously-skip-permissions` for Claude by default. Set `CODEAGENT_SKIP_PERMISSIONS=false` to disable if you need permission prompts.
**Verify Claude CLI is installed:**
```bash
which claude
claude --version
```
### Gemini CLI
**Minimum version:** Check compatibility with your installation
Required features:
- `-o stream-json` - JSON stream output format
- `-y` - Auto-approve prompts (non-interactive mode)
- `-r <session_id>` - Resume sessions
- `-p` - Prompt input flag
**Verify Gemini CLI is installed:**
```bash
which gemini
gemini --version
```
---
| Module | Description | Documentation |
|--------|-------------|---------------|
| [do](skills/do/README.md) | **Recommended** - 7-phase feature development with codeagent orchestration | `/do` command |
| [omo](skills/omo/README.md) | Multi-agent orchestration with intelligent routing | `/omo` command |
| [bmad](agents/bmad/README.md) | BMAD agile workflow with 6 specialized agents | `/bmad-pilot` command |
| [requirements](agents/requirements/README.md) | Lightweight requirements-to-code pipeline | `/requirements-pilot` command |
| [essentials](agents/development-essentials/README.md) | Core development commands and utilities | `/code`, `/debug`, etc. |
| [sparv](skills/sparv/README.md) | SPARV workflow (Specify→Plan→Act→Review→Vault) | `/sparv` command |
| course | Course development (combines dev + product-requirements + test-cases) | Composite module |
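For example, the recommended `do` module is driven by a single slash command (the feature description is illustrative):
```bash
/do "implement user authentication with JWT"
```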
## Installation
### Modular Installation (Recommended)
```bash
# Install all enabled modules (dev + essentials by default)
python3 install.py --install-dir ~/.claude
# Interactive installer (recommended)
npx github:cexll/myclaude
# Install specific module
python3 install.py --module dev
# List installable items (modules / skills / wrapper)
npx github:cexll/myclaude --list
# List available modules
python3 install.py --list-modules
# Force overwrite existing files
python3 install.py --force
# Custom install directory / overwrite
npx github:cexll/myclaude --install-dir ~/.claude --force
```
### Available Modules
### Module Configuration
| Module | Default | Description |
|--------|---------|-------------|
| `dev` | ✓ Enabled | Dev workflow + Codex integration |
| `essentials` | ✓ Enabled | Core development commands |
| `bmad` | Disabled | Full BMAD agile workflow |
| `requirements` | Disabled | Requirements-driven workflow |
### What Gets Installed
```
~/.claude/
├── bin/
│ └── codeagent-wrapper # Main executable
├── CLAUDE.md # Core instructions and role definition
├── commands/ # Slash commands (/dev, /code, etc.)
├── agents/ # Agent definitions
├── skills/
│ └── codex/
│ └── SKILL.md # Codex integration skill
├── config.json # Configuration
└── installed_modules.json # Installation status
```
### Customizing Installation Directory
By default, myclaude installs to `~/.claude`. You can customize this using the `INSTALL_DIR` environment variable:
```bash
# Install to custom directory
INSTALL_DIR=/opt/myclaude bash install.sh
# Update your PATH accordingly
export PATH="/opt/myclaude/bin:$PATH"
```
**Directory Structure:**
- `$INSTALL_DIR/bin/` - codeagent-wrapper binary
- `$INSTALL_DIR/skills/` - Skill definitions
- `$INSTALL_DIR/config.json` - Configuration file
- `$INSTALL_DIR/commands/` - Slash command definitions
- `$INSTALL_DIR/agents/` - Agent definitions
**Note:** When using a custom installation directory, ensure that `$INSTALL_DIR/bin` is added to your `PATH` environment variable.
### Configuration
Edit `config.json` to customize:
Edit `config.json` to enable/disable modules:
```json
{
"version": "1.0",
"install_dir": "~/.claude",
"modules": {
"dev": {
"enabled": true,
"operations": [
{"type": "merge_dir", "source": "dev-workflow"},
{"type": "copy_file", "source": "memorys/CLAUDE.md", "target": "CLAUDE.md"},
{"type": "copy_file", "source": "skills/codex/SKILL.md", "target": "skills/codex/SKILL.md"},
{"type": "run_command", "command": "bash install.sh"}
]
}
"bmad": { "enabled": false },
"requirements": { "enabled": false },
"essentials": { "enabled": false },
"omo": { "enabled": false },
"sparv": { "enabled": false },
"do": { "enabled": true },
"course": { "enabled": false }
}
}
```
**Operation Types:**
| Type | Description |
|------|-------------|
| `merge_dir` | Merge subdirs (commands/, agents/) into install dir |
| `copy_dir` | Copy entire directory |
| `copy_file` | Copy single file to target path |
| `run_command` | Execute shell command |
---
## Codex Integration
The `codex` skill enables Claude Code to delegate code execution to Codex CLI.
### Usage in Workflows
```bash
# Codex is invoked via the skill
codeagent-wrapper - <<'EOF'
implement @src/auth.ts with JWT validation
EOF
```
### Parallel Execution
```bash
codeagent-wrapper --parallel <<'EOF'
---TASK---
id: backend_api
workdir: /project/backend
---CONTENT---
implement REST endpoints for /api/users
---TASK---
id: frontend_ui
workdir: /project/frontend
dependencies: backend_api
---CONTENT---
create React components consuming the API
EOF
```
### Install Codex Wrapper
```bash
# Automatic (via dev module)
python3 install.py --module dev
# Manual
bash install.sh
```
#### Windows
Windows installs place `codeagent-wrapper.exe` in `%USERPROFILE%\bin`.
```powershell
# PowerShell (recommended)
powershell -ExecutionPolicy Bypass -File install.ps1
# Batch (cmd)
install.bat
```
**Add to PATH** (if installer doesn't detect it):
```powershell
# PowerShell - persistent for current user
[Environment]::SetEnvironmentVariable('PATH', "$HOME\bin;" + [Environment]::GetEnvironmentVariable('PATH','User'), 'User')
# PowerShell - current session only
$Env:PATH = "$HOME\bin;$Env:PATH"
```
```batch
REM cmd.exe - persistent for current user (use PowerShell method above instead)
REM WARNING: This expands %PATH% which includes system PATH, causing duplication
REM Note: Using reg add instead of setx to avoid 1024-character truncation limit
reg add "HKCU\Environment" /v Path /t REG_EXPAND_SZ /d "%USERPROFILE%\bin;%PATH%" /f
```
---
## Workflow Selection Guide
| Scenario | Recommended Workflow |
|----------|---------------------|
| New feature with tests | `/dev` |
| Quick bug fix | `/debug` or `/code` |
| Large multi-sprint feature | `/bmad-pilot` |
| Prototype or POC | `/requirements-pilot` |
| Code review | `/review` |
| Performance issue | `/optimize` |
| Scenario | Recommended |
|----------|-------------|
| Feature development (default) | `/do` |
| Bug investigation + fix | `/omo` |
| Large enterprise project | `/bmad-pilot` |
| Quick prototype | `/requirements-pilot` |
| Simple task | `/code`, `/debug` |
---
## Core Architecture
| Role | Agent | Responsibility |
|------|-------|----------------|
| **Orchestrator** | Claude Code | Planning, context gathering, verification |
| **Executor** | codeagent-wrapper | Code editing, test execution (Codex/Claude/Gemini/OpenCode) |
## Backend CLI Requirements
| Backend | Required Features |
|---------|-------------------|
| Codex | `codex e`, `--json`, `-C`, `resume` |
| Claude | `--output-format stream-json`, `-r` |
| Gemini | `-o stream-json`, `-y`, `-r` |
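A quick way to confirm the installed CLIs expose these features is to check their versions and help output (a minimal sketch; the `setting-sources` check mirrors the troubleshooting commands elsewhere in this README):
```bash
for cli in codex claude gemini; do
  command -v "$cli" >/dev/null && echo "$cli: $("$cli" --version)" || echo "$cli: not installed"
done

# Spot-check a specific flag, e.g. Claude's setting-sources support
claude --help | grep "setting-sources"
```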
## Directory Structure After Installation
```
~/.claude/
├── bin/codeagent-wrapper
├── CLAUDE.md
├── commands/
├── agents/
├── skills/
└── config.json
```
## Documentation
- [codeagent-wrapper](codeagent-wrapper/README.md)
- [Plugin System](PLUGIN_README.md)
## Troubleshooting
@@ -408,214 +106,41 @@ reg add "HKCU\Environment" /v Path /t REG_EXPAND_SZ /d "%USERPROFILE%\bin;%PATH%
**Codex wrapper not found:**
```bash
# Installer auto-adds PATH, check if configured
if [[ ":$PATH:" != *":$HOME/.claude/bin:"* ]]; then
echo "PATH not configured. Reinstalling..."
bash install.sh
fi
# Or manually add (idempotent command)
[[ ":$PATH:" != *":$HOME/.claude/bin:"* ]] && echo 'export PATH="$HOME/.claude/bin:$PATH"' >> ~/.zshrc
```
**Permission denied:**
```bash
python3 install.py --install-dir ~/.claude --force
# Select: codeagent-wrapper
npx github:cexll/myclaude
```
**Module not loading:**
```bash
# Check installation status
cat ~/.claude/installed_modules.json
# Reinstall specific module
python3 install.py --module dev --force
npx github:cexll/myclaude --force
```
### Version Compatibility Issues
**Backend CLI not found:**
**Backend CLI errors:**
```bash
# Check if backend CLIs are installed
which codex
which claude
which gemini
# Install missing backends
# Codex: Follow installation instructions at https://codex.docs
# Claude: Follow installation instructions at https://claude.ai/docs
# Gemini: Follow installation instructions at https://ai.google.dev/docs
which codex && codex --version
which claude && claude --version
which gemini && gemini --version
```
**Unsupported CLI flags:**
```bash
# If you see errors like "unknown flag" or "invalid option"
## FAQ
# Check backend CLI version
codex --version
claude --version
gemini --version
| Issue | Solution |
|-------|----------|
| "Unknown event format" | Logging display issue, can be ignored |
| Gemini can't read .gitignore files | Remove from .gitignore or use different backend |
| Codex permission denied | Set `approval_policy = "never"` in ~/.codex/config.yaml |
# For Codex: Ensure it supports `e`, `--skip-git-repo-check`, `--json`, `-C`, and `resume`
# For Claude: Ensure it supports `--output-format stream-json`, `--setting-sources`, `-r`
# For Gemini: Ensure it supports `-o stream-json`, `-y`, `-r`, `-p`
# Update your backend CLI to the latest version if needed
```
**JSON parsing errors:**
```bash
# If you see "failed to parse JSON output" errors
# Verify the backend outputs stream-json format
codex e --json "test task" # Should output newline-delimited JSON
claude --output-format stream-json -p "test" # Should output stream JSON
# If not, your backend CLI version may be too old or incompatible
```
**Infinite recursion with Claude backend:**
```bash
# The wrapper prevents this with `--setting-sources ""` flag
# If you still see recursion, ensure your Claude CLI supports this flag
claude --help | grep "setting-sources"
# If flag is not supported, upgrade Claude CLI
```
**Session resume failures:**
```bash
# Check if session ID is valid
codex history # List recent sessions
claude history
# Ensure backend CLI supports session resumption
codex resume <session_id> "test" # Should continue from previous session
claude -r <session_id> "test"
# If not supported, use new sessions instead of resume mode
```
---
## FAQ (Frequently Asked Questions)
### Q1: `codeagent-wrapper` execution fails with "Unknown event format"
**Problem:**
```
Unknown event format: {"type":"turn.started"}
Unknown event format: {"type":"assistant", ...}
```
**Solution:**
This is a logging event format display issue and does not affect actual functionality. It will be fixed in the next version. You can ignore these log outputs.
**Related Issue:** [#96](https://github.com/cexll/myclaude/issues/96)
---
### Q2: Gemini cannot read files ignored by `.gitignore`
**Problem:**
When using `codeagent-wrapper --backend gemini`, files in directories like `.claude/` that are ignored by `.gitignore` cannot be read.
**Solution:**
- **Option 1:** Remove `.claude/` from your `.gitignore` file
- **Option 2:** Ensure files that need to be read are not in `.gitignore` list
**Related Issue:** [#75](https://github.com/cexll/myclaude/issues/75)
---
### Q3: `/dev` command parallel execution is very slow
**Problem:**
Using `/dev` command for simple features takes too long (over 30 minutes) with no visibility into task progress.
**Solution:**
1. **Check logs:** Review `C:\Users\User\AppData\Local\Temp\codeagent-wrapper-*.log` to identify bottlenecks
2. **Adjust backend:**
- Try faster models like `gpt-5.1-codex-max`
- Running in WSL may be significantly faster
3. **Workspace:** Use a single repository instead of monorepo with multiple sub-projects
**Related Issue:** [#77](https://github.com/cexll/myclaude/issues/77)
---
### Q4: Codex permission denied with new Go version
**Problem:**
After upgrading to the new Go-based Codex implementation, execution fails with permission denied errors.
**Solution:**
Add the following configuration to `~/.codex/config.toml` (Windows: `%USERPROFILE%\.codex\config.toml`):
```toml
model = "gpt-5.1-codex-max"
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"
approval_policy = "never"
sandbox_mode = "workspace-write"
disable_response_storage = true
network_access = true
```
**Key settings:**
- `approval_policy = "never"` - Remove approval restrictions
- `sandbox_mode = "workspace-write"` - Allow workspace write access
- `network_access = true` - Enable network access
**Related Issue:** [#31](https://github.com/cexll/myclaude/issues/31)
---
### Q5: How to disable default bypass/skip-permissions mode
**Background:**
By default, codeagent-wrapper enables bypass mode for both Codex and Claude backends:
- `CODEX_BYPASS_SANDBOX=true` - Bypasses Codex sandbox restrictions
- `CODEAGENT_SKIP_PERMISSIONS=true` - Skips Claude permission prompts
**To disable (if you need sandbox/permission protection):**
```bash
export CODEX_BYPASS_SANDBOX=false
export CODEAGENT_SKIP_PERMISSIONS=false
```
Or add to your shell profile (`~/.zshrc` or `~/.bashrc`):
```bash
echo 'export CODEX_BYPASS_SANDBOX=false' >> ~/.zshrc
echo 'export CODEAGENT_SKIP_PERMISSIONS=false' >> ~/.zshrc
```
**Note:** Disabling bypass mode will require manual approval for certain operations.
---
**Still having issues?** Visit [GitHub Issues](https://github.com/cexll/myclaude/issues) to search or report new issues.
---
## Documentation
- **[Codeagent-Wrapper Guide](docs/CODEAGENT-WRAPPER.md)** - Multi-backend execution wrapper
- **[Hooks Documentation](docs/HOOKS.md)** - Custom hooks and automation
### Additional Resources
- **[Installation Log](install.log)** - Installation history and troubleshooting
---
See [GitHub Issues](https://github.com/cexll/myclaude/issues) for more.
## License
AGPL-3.0 License - see [LICENSE](LICENSE)
AGPL-3.0 - see [LICENSE](LICENSE)
### Commercial Licensing
For commercial use without AGPL obligations, contact: evanxian9@gmail.com
## Support
- **Issues**: [GitHub Issues](https://github.com/cexll/myclaude/issues)
- **Documentation**: [docs/](docs/)
---
**Claude Code + Codex = Better Development** - Orchestration meets execution.
- [GitHub Issues](https://github.com/cexll/myclaude/issues)

View File

@@ -2,98 +2,116 @@
[![License: AGPL-3.0](https://img.shields.io/badge/License-AGPL_v3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![Claude Code](https://img.shields.io/badge/Claude-Code-blue)](https://claude.ai/code)
[![Version](https://img.shields.io/badge/Version-5.6-green)](https://github.com/cexll/myclaude)
[![Version](https://img.shields.io/badge/Version-6.x-green)](https://github.com/cexll/myclaude)
> AI 驱动的开发自动化 - 多后端执行架构 (Codex/Claude/Gemini)
> AI 驱动的开发自动化 - 多后端执行架构 (Codex/Claude/Gemini/OpenCode)
## 核心概念:多后端架构
## 快速开始
本系统采用**双智能体架构**与可插拔 AI 后端:
```bash
npx github:cexll/myclaude
```
## 模块概览
| 模块 | 描述 | 文档 |
|------|------|------|
| [do](skills/do/README.md) | **推荐** - 7 阶段功能开发 + codeagent 编排 | `/do` 命令 |
| [omo](skills/omo/README.md) | 多智能体编排 + 智能路由 | `/omo` 命令 |
| [bmad](agents/bmad/README.md) | BMAD 敏捷工作流 + 6 个专业智能体 | `/bmad-pilot` 命令 |
| [requirements](agents/requirements/README.md) | 轻量级需求到代码流水线 | `/requirements-pilot` 命令 |
| [essentials](agents/development-essentials/README.md) | 核心开发命令和工具 | `/code`, `/debug` 等 |
| [sparv](skills/sparv/README.md) | SPARV 工作流 (Specify→Plan→Act→Review→Vault) | `/sparv` 命令 |
| course | 课程开发(组合 dev + product-requirements + test-cases)| 组合模块 |
## 核心架构
| 角色 | 智能体 | 职责 |
|------|-------|------|
| **编排者** | Claude Code | 规划、上下文收集、验证、用户交互 |
| **执行者** | codeagent-wrapper | 代码编辑、测试执行(Codex/Claude/Gemini 后端)|
| **编排者** | Claude Code | 规划、上下文收集、验证 |
| **执行者** | codeagent-wrapper | 代码编辑、测试执行(Codex/Claude/Gemini/OpenCode 后端)|
**为什么分离?**
- Claude Code 擅长理解上下文和编排复杂工作流
- 专业后端(Codex 擅长代码、Claude 擅长推理、Gemini 擅长原型)专注执行
- 通过 `--backend codex|claude|gemini` 匹配模型与任务
## 工作流详解
## 快速开始(Windows 上请在 PowerShell 中执行)
### do 工作流(推荐
7 阶段功能开发,通过 codeagent-wrapper 编排多个智能体。**大多数功能开发任务的首选工作流。**
```bash
git clone https://github.com/cexll/myclaude.git
cd myclaude
python3 install.py --install-dir ~/.claude
/do "添加用户登录功能"
```
## 工作流概览
**7 阶段:**
| 阶段 | 名称 | 目标 |
|------|------|------|
| 1 | Discovery | 理解需求 |
| 2 | Exploration | 映射代码库模式 |
| 3 | Clarification | 解决歧义(**强制**)|
| 4 | Architecture | 设计实现方案 |
| 5 | Implementation | 构建功能(**需审批**)|
| 6 | Review | 捕获缺陷 |
| 7 | Summary | 记录结果 |
### 0. OmO 多智能体编排器(复杂任务推荐)
**智能体:**
- `code-explorer` - 代码追踪、架构映射
- `code-architect` - 设计方案、文件规划
- `code-reviewer` - 代码审查、简化建议
- `develop` - 实现代码、运行测试
**基于风险信号智能路由任务到专业智能体的多智能体编排系统。**
---
### OmO 多智能体编排器
基于风险信号智能路由任务到专业智能体。
```bash
/omo "分析并修复这个认证 bug"
```
**智能体层级:**
| 智能体 | 角色 | 后端 | 模型 |
|-------|------|------|------|
| `oracle` | 技术顾问 | Claude | claude-opus-4-5 |
| `librarian` | 外部研究 | Claude | claude-sonnet-4-5 |
| `explore` | 代码库搜索 | OpenCode | grok-code |
| `develop` | 代码实现 | Codex | gpt-5.2 |
| `frontend-ui-ux-engineer` | UI/UX 专家 | Gemini | gemini-3-pro |
| `document-writer` | 文档撰写 | Gemini | gemini-3-flash |
**路由信号(非固定流水线):**
- 代码位置不明确 → `explore`
- 外部库/API → `librarian`
- 高风险/多文件变更 → `oracle`
- 需要实现 → `develop` / `frontend-ui-ux-engineer`
| 智能体 | 角色 | 后端 |
|-------|------|------|
| `oracle` | 技术顾问 | Claude |
| `librarian` | 外部研究 | Claude |
| `explore` | 代码库搜索 | OpenCode |
| `develop` | 代码实现 | Codex |
| `frontend-ui-ux-engineer` | UI/UX 专家 | Gemini |
| `document-writer` | 文档撰写 | Gemini |
**常用配方:**
- 解释代码:`explore`
- 位置已知的小修复:直接 `develop`
- Bug 修复,位置未知:`explore → develop`
- 跨模块重构:`explore → oracle → develop`
- 外部 API 集成:`explore + librarian → oracle → develop`
**适用场景:** 复杂 bug 调查、多文件重构、架构决策
---
### 1. Dev 工作流(推荐)
### SPARV 工作流
**大多数开发任务的首选工作流。**
极简 5 阶段工作流:Specify → Plan → Act → Review → Vault。
```bash
/dev "实现 JWT 用户认证"
/sparv "实现订单导出功能"
```
**6 步流程**
1. **需求澄清** - 交互式问答明确范围
2. **Codex 深度分析** - 代码库探索和架构决策
3. **开发计划生成** - 结构化任务分解和测试要求
4. **并行执行** - Codex 并发执行任务
5. **覆盖率验证** - 强制 ≥90% 测试覆盖率
6. **完成总结** - 文件变更和覆盖率报告
**核心规则**
- **10 分规格门**:得分 0-10,必须 >=9 才能进入 Plan
- **2 动作保存**:每 2 次工具调用写入 journal.md
- **3 失败协议**:连续 3 次失败后停止并上报
- **EHRB**:高风险操作需明确确认
**核心特性**
- Claude Code 编排Codex 执行所有代码变更
- 自动任务并行化提升速度
- 强制 90% 测试覆盖率门禁
- 失败自动回滚
**适用场景:** 功能开发、重构、带测试的 bug 修复
**评分维度(各 0-2 分)**
1. Value - 为什么做,可验证的收益
2. Scope - MVP + 不在范围内的内容
3. Acceptance - 可测试的验收标准
4. Boundaries - 错误/性能/兼容/安全边界
5. Risk - EHRB/依赖/未知 + 处理方式
---
### 2. BMAD 敏捷工作流
### BMAD 敏捷工作流
**包含 6 个专业智能体的完整企业敏捷方法论。**
完整企业敏捷方法论 + 6 个专业智能体。
```bash
/bmad-pilot "构建电商结账系统"
@@ -104,43 +122,36 @@ python3 install.py --install-dir ~/.claude
|-------|------|
| Product Owner | 需求与用户故事 |
| Architect | 系统设计与技术决策 |
| Tech Lead | Sprint 规划与任务分解 |
| Scrum Master | Sprint 规划与任务分解 |
| Developer | 实现 |
| Code Reviewer | 质量保证 |
| QA Engineer | 测试与验证 |
**流程**
```
需求 → 架构 → Sprint计划 → 开发 → 审查 → QA
↓ ↓ ↓ ↓ ↓ ↓
PRD.md DESIGN.md SPRINT.md Code REVIEW.md TEST.md
```
**适用场景:** 大型功能、团队协作、企业项目
**审批门**
- PRD 完成后(90+ 分)需用户审批
- 架构完成后(90+ 分)需用户审批
---
### 3. 需求驱动工作流
### 需求驱动工作流
**轻量级需求到代码流水线。**
轻量级需求到代码流水线。
```bash
/requirements-pilot "实现 API 限流"
```
**流程**
1. 带质量评分的需求生成
2. 实现规划
3. 代码生成
4. 审查和测试
**适用场景:** 快速原型、明确定义的功能
**100 分质量评分**
- 功能清晰度(30 分)
- 技术具体性(25 分)
- 实现完整性(25 分)
- 业务上下文(20 分)
---
### 4. 开发基础命令
### 开发基础命令
**日常编码任务的直接命令。**
日常编码任务的直接命令。
| 命令 | 用途 |
|------|------|
@@ -152,332 +163,89 @@ PRD.md DESIGN.md SPRINT.md Code REVIEW.md TEST.md
| `/refactor` | 代码重构 |
| `/docs` | 编写文档 |
**适用场景:** 快速任务,无需工作流开销
---
## 安装
### 模块化安装(推荐)
```bash
# 安装所有启用的模块(默认:dev + essentials)
python3 install.py --install-dir ~/.claude
# 交互式安装器(推荐)
npx github:cexll/myclaude
# 安装特定模块
python3 install.py --module dev
# 列出可安装项(module:* / skill:* / codeagent-wrapper)
npx github:cexll/myclaude --list
# 列出可用模块
python3 install.py --list-modules
# 强制覆盖现有文件
python3 install.py --force
# 指定安装目录 / 强制覆盖
npx github:cexll/myclaude --install-dir ~/.claude --force
```
### 可用模块
### 模块配置
| 模块 | 默认 | 描述 |
|------|------|------|
| `dev` | ✓ 启用 | Dev 工作流 + Codex 集成 |
| `essentials` | ✓ 启用 | 核心开发命令 |
| `bmad` | 禁用 | 完整 BMAD 敏捷工作流 |
| `requirements` | 禁用 | 需求驱动工作流 |
### 安装内容
```
~/.claude/
├── bin/
│ └── codeagent-wrapper # 主可执行文件
├── CLAUDE.md # 核心指令和角色定义
├── commands/ # 斜杠命令 (/dev, /code 等)
├── agents/ # 智能体定义
├── skills/
│ └── codex/
│ └── SKILL.md # Codex 集成技能
├── config.json # 配置文件
└── installed_modules.json # 安装状态
```
### 自定义安装目录
默认情况下,myclaude 安装到 `~/.claude`。您可以使用 `INSTALL_DIR` 环境变量自定义安装目录:
```bash
# 安装到自定义目录
INSTALL_DIR=/opt/myclaude bash install.sh
# 相应更新您的 PATH
export PATH="/opt/myclaude/bin:$PATH"
```
**目录结构:**
- `$INSTALL_DIR/bin/` - codeagent-wrapper 可执行文件
- `$INSTALL_DIR/skills/` - 技能定义
- `$INSTALL_DIR/config.json` - 配置文件
- `$INSTALL_DIR/commands/` - 斜杠命令定义
- `$INSTALL_DIR/agents/` - 智能体定义
**注意:** 使用自定义安装目录时,请确保将 `$INSTALL_DIR/bin` 添加到您的 `PATH` 环境变量中。
### 配置
编辑 `config.json` 自定义:
编辑 `config.json` 启用/禁用模块:
```json
{
"version": "1.0",
"install_dir": "~/.claude",
"modules": {
"dev": {
"enabled": true,
"operations": [
{"type": "merge_dir", "source": "dev-workflow"},
{"type": "copy_file", "source": "memorys/CLAUDE.md", "target": "CLAUDE.md"},
{"type": "copy_file", "source": "skills/codex/SKILL.md", "target": "skills/codex/SKILL.md"},
{"type": "run_command", "command": "bash install.sh"}
]
}
"bmad": { "enabled": false },
"requirements": { "enabled": false },
"essentials": { "enabled": false },
"omo": { "enabled": false },
"sparv": { "enabled": false },
"do": { "enabled": true },
"course": { "enabled": false }
}
}
```
**操作类型:**
| 类型 | 描述 |
|------|------|
| `merge_dir` | 合并子目录 (commands/, agents/) 到安装目录 |
| `copy_dir` | 复制整个目录 |
| `copy_file` | 复制单个文件到目标路径 |
| `run_command` | 执行 shell 命令 |
---
## Codex 集成
`codex` 技能使 Claude Code 能够将代码执行委托给 Codex CLI。
### 工作流中的使用
```bash
# 通过技能调用 Codex
codeagent-wrapper - <<'EOF'
在 @src/auth.ts 中实现 JWT 验证
EOF
```
### 并行执行
```bash
codeagent-wrapper --parallel <<'EOF'
---TASK---
id: backend_api
workdir: /project/backend
---CONTENT---
实现 /api/users 的 REST 端点
---TASK---
id: frontend_ui
workdir: /project/frontend
dependencies: backend_api
---CONTENT---
创建消费 API 的 React 组件
EOF
```
### 安装 Codex Wrapper
```bash
# 自动(通过 dev 模块)
python3 install.py --module dev
# 手动
bash install.sh
```
#### Windows 系统
Windows 系统会将 `codeagent-wrapper.exe` 安装到 `%USERPROFILE%\bin`
```powershell
# PowerShell(推荐)
powershell -ExecutionPolicy Bypass -File install.ps1
# 批处理(cmd)
install.bat
```
**添加到 PATH**(如果安装程序未自动检测):
```powershell
# PowerShell - 永久添加(当前用户)
[Environment]::SetEnvironmentVariable('PATH', "$HOME\bin;" + [Environment]::GetEnvironmentVariable('PATH','User'), 'User')
# PowerShell - 仅当前会话
$Env:PATH = "$HOME\bin;$Env:PATH"
```
```batch
REM cmd.exe - 永久添加(当前用户)(建议使用上面的 PowerShell 方法)
REM 警告:此命令会展开 %PATH% 包含系统 PATH导致重复
REM 注意:使用 reg add 而非 setx 以避免 1024 字符截断限制
reg add "HKCU\Environment" /v Path /t REG_EXPAND_SZ /d "%USERPROFILE%\bin;%PATH%" /f
```
---
## 工作流选择指南
| 场景 | 推荐工作流 |
|------|----------|
| 带测试的新功能 | `/dev` |
| 快速 bug 修复 | `/debug``/code` |
| 大型多 Sprint 功能 | `/bmad-pilot` |
| 原型或 POC | `/requirements-pilot` |
| 代码审查 | `/review` |
| 性能问题 | `/optimize` |
| 场景 | 推荐 |
|------|------|
| 功能开发(默认) | `/do` |
| Bug 调查 + 修复 | `/omo` |
| 大型企业项目 | `/bmad-pilot` |
| 快速原型 | `/requirements-pilot` |
| 简单任务 | `/code`, `/debug` |
---
## 后端 CLI 要求
| 后端 | 必需功能 |
|------|----------|
| Codex | `codex e`, `--json`, `-C`, `resume` |
| Claude | `--output-format stream-json`, `-r` |
| Gemini | `-o stream-json`, `-y`, `-r` |
## 故障排查
### 常见问题
**Codex wrapper 未找到:**
```bash
# 安装程序会自动添加 PATH,检查是否已添加
if [[ ":$PATH:" != *":$HOME/.claude/bin:"* ]]; then
echo "PATH not configured. Reinstalling..."
bash install.sh
fi
# 或手动添加(幂等性命令)
[[ ":$PATH:" != *":$HOME/.claude/bin:"* ]] && echo 'export PATH="$HOME/.claude/bin:$PATH"' >> ~/.zshrc
```
**权限被拒绝:**
```bash
python3 install.py --install-dir ~/.claude --force
# 选择:codeagent-wrapper
npx github:cexll/myclaude
```
**模块未加载:**
```bash
# 检查安装状态
cat ~/.claude/installed_modules.json
# 重新安装特定模块
python3 install.py --module dev --force
npx github:cexll/myclaude --force
```
---
## FAQ
## 常见问题 (FAQ)
| 问题 | 解决方案 |
|------|----------|
| "Unknown event format" | 日志显示问题,可忽略 |
| Gemini 无法读取被 .gitignore 忽略的文件 | 从 .gitignore 移除或使用其他后端 |
| Codex 权限拒绝 | 在 `~/.codex/config.toml` 设置 `approval_policy = "never"` |
### Q1: `codeagent-wrapper` 执行时报错 "Unknown event format"
**问题描述:**
执行 `codeagent-wrapper` 时出现错误:
```
Unknown event format: {"type":"turn.started"}
Unknown event format: {"type":"assistant", ...}
```
**解决方案:**
这是日志事件流的显示问题,不影响实际功能执行。预计在下个版本中修复。如需排查其他问题,可忽略此日志输出。
**相关 Issue** [#96](https://github.com/cexll/myclaude/issues/96)
---
### Q2: Gemini 无法读取 `.gitignore` 忽略的文件
**问题描述:**
使用 `codeagent-wrapper --backend gemini` 时,无法读取 `.claude/` 等被 `.gitignore` 忽略的目录中的文件。
**解决方案:**
- **方案一:** 在项目根目录的 `.gitignore` 中取消对 `.claude/` 的忽略
- **方案二:** 确保需要读取的文件不在 `.gitignore` 忽略列表中
**相关 Issue** [#75](https://github.com/cexll/myclaude/issues/75)
---
### Q3: `/dev` 命令并行执行特别慢
**问题描述:**
使用 `/dev` 命令开发简单功能耗时过长(超过 30 分钟),无法了解任务执行状态。
**解决方案:**
1. **检查日志:** 查看 `C:\Users\User\AppData\Local\Temp\codeagent-wrapper-*.log` 分析瓶颈
2. **调整后端:**
- 尝试使用 `gpt-5.1-codex-max` 等更快的模型
- 在 WSL 环境下运行速度可能更快
3. **工作区选择:** 使用独立的代码仓库而非包含多个子项目的 monorepo
**相关 Issue** [#77](https://github.com/cexll/myclaude/issues/77)
---
### Q4: 新版 Go 实现的 Codex 权限不足
**问题描述:**
升级到新版 Go 实现的 Codex 后,出现权限不足的错误。
**解决方案:**
`~/.codex/config.yaml` 中添加以下配置Windows: `c:\user\.codex\config.toml`
```yaml
model = "gpt-5.1-codex-max"
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"
approval_policy = "never"
sandbox_mode = "workspace-write"
disable_response_storage = true
network_access = true
```
**关键配置说明:**
- `approval_policy = "never"` - 移除审批限制
- `sandbox_mode = "workspace-write"` - 允许工作区写入权限
- `network_access = true` - 启用网络访问
**相关 Issue** [#31](https://github.com/cexll/myclaude/issues/31)
---
### Q5: 执行时遇到权限拒绝或沙箱限制
**问题描述:**
运行 codeagent-wrapper 时出现权限错误或沙箱限制。
**解决方案:**
设置以下环境变量:
```bash
export CODEX_BYPASS_SANDBOX=true
export CODEAGENT_SKIP_PERMISSIONS=true
```
或添加到 shell 配置文件(`~/.zshrc` 或 `~/.bashrc`):
```bash
echo 'export CODEX_BYPASS_SANDBOX=true' >> ~/.zshrc
echo 'export CODEAGENT_SKIP_PERMISSIONS=true' >> ~/.zshrc
```
**注意:** 这些设置会绕过安全限制,请仅在可信环境中使用。
---
**仍有疑问?** 请访问 [GitHub Issues](https://github.com/cexll/myclaude/issues) 搜索或提交新问题。
---
更多问题请访问 [GitHub Issues](https://github.com/cexll/myclaude/issues)。
## 许可证
AGPL-3.0 License - 查看 [LICENSE](LICENSE)
AGPL-3.0 - 查看 [LICENSE](LICENSE)
### 商业授权
如需商业授权(无需遵守 AGPL 义务),请联系:evanxian9@gmail.com
## 支持
- **问题反馈**: [GitHub Issues](https://github.com/cexll/myclaude/issues)
- **文档**: [docs/](docs/)
---
**Claude Code + Codex = 更好的开发** - 编排遇见执行。
- [GitHub Issues](https://github.com/cexll/myclaude/issues)

109
agents/bmad/README.md Normal file
View File

@@ -0,0 +1,109 @@
# bmad - BMAD Agile Workflow
Full enterprise agile methodology with 6 specialized agents, UltraThink analysis, and repository-aware development.
## Installation
```bash
python install.py --module bmad
```
## Usage
```bash
/bmad-pilot <PROJECT_DESCRIPTION> [OPTIONS]
```
### Options
| Option | Description |
|--------|-------------|
| `--skip-tests` | Skip QA testing phase |
| `--direct-dev` | Skip SM planning, go directly to development |
| `--skip-scan` | Skip initial repository scanning |
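For example (options from the table above; the project description is only illustrative):
```bash
# Full workflow with both approval gates and QA
/bmad-pilot "build e-commerce checkout system"

# Skip sprint planning and the QA phase for a faster pass
/bmad-pilot "build e-commerce checkout system" --direct-dev --skip-tests
```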
## Workflow Phases
| Phase | Agent | Deliverable | Description |
|-------|-------|-------------|-------------|
| 0 | Orchestrator | `00-repo-scan.md` | Repository scanning with UltraThink analysis |
| 1 | Product Owner (PO) | `01-product-requirements.md` | PRD with 90+ quality score required |
| 2 | Architect | `02-system-architecture.md` | Technical design with 90+ score required |
| 3 | Scrum Master (SM) | `03-sprint-plan.md` | Sprint backlog with stories and estimates |
| 4 | Developer | Implementation code | Multi-sprint implementation |
| 4.5 | Reviewer | `04-dev-reviewed.md` | Code review (Pass/Pass with Risk/Fail) |
| 5 | QA Engineer | Test suite | Comprehensive testing and validation |
## Agents
| Agent | Role |
|-------|------|
| `bmad-orchestrator` | Repository scanning, workflow coordination |
| `bmad-po` | Requirements gathering, PRD creation |
| `bmad-architect` | System design, technology decisions |
| `bmad-sm` | Sprint planning, task breakdown |
| `bmad-dev` | Code implementation |
| `bmad-review` | Code review, quality assessment |
| `bmad-qa` | Testing, validation |
## Approval Gates
Two mandatory stop points require explicit user approval:
1. **After PRD** (Phase 1 → 2): User must approve requirements before architecture
2. **After Architecture** (Phase 2 → 3): User must approve design before implementation
## Output Structure
```
.claude/specs/{feature_name}/
├── 00-repo-scan.md
├── 01-product-requirements.md
├── 02-system-architecture.md
├── 03-sprint-plan.md
└── 04-dev-reviewed.md
```
## UltraThink Methodology
Applied throughout the workflow for deep analysis:
1. **Hypothesis Generation** - Form hypotheses about the problem
2. **Evidence Collection** - Gather evidence from codebase
3. **Pattern Recognition** - Identify recurring patterns
4. **Synthesis** - Create comprehensive understanding
5. **Validation** - Cross-check findings
## Interactive Confirmation Flow
PO and Architect phases use iterative refinement:
1. Agent produces initial draft + quality score
2. Orchestrator presents to user with clarification questions
3. User provides responses
4. Agent refines until quality >= 90
5. User confirms to save deliverable
## When to Use
- Large multi-sprint features
- Enterprise projects requiring documentation
- Team coordination scenarios
- Projects needing formal approval gates
## Directory Structure
```
agents/bmad/
├── README.md
├── commands/
│ └── bmad-pilot.md
└── agents/
├── bmad-orchestrator.md
├── bmad-po.md
├── bmad-architect.md
├── bmad-sm.md
├── bmad-dev.md
├── bmad-review.md
└── bmad-qa.md
```

View File

@@ -304,7 +304,7 @@ Deep reasoning and analysis for complex problems.
## 🔌 Agent Configuration
All commands use specialized agents configured in:
- `development-essentials/agents/`
- `agents/development-essentials/agents/`
- Agent prompt templates
- Tool access permissions
- Output formatting

View File

@@ -244,8 +244,8 @@ Development Essentials 模块包含以下专用代理:
## 🔗 相关文档
- [主文档](../README.md) - 项目总览
- [BMAD工作流](../docs/BMAD-WORKFLOW.md) - 完整敏捷流程
- [Requirements工作流](../docs/REQUIREMENTS-WORKFLOW.md) - 轻量级开发流程
- [BMAD工作流](../agents/bmad/BMAD-WORKFLOW.md) - 完整敏捷流程
- [Requirements工作流](../agents/requirements/REQUIREMENTS-WORKFLOW.md) - 轻量级开发流程
- [插件系统](../PLUGIN_README.md) - 插件安装和管理
---

View File

@@ -0,0 +1,90 @@
# requirements - Requirements-Driven Workflow
Lightweight requirements-to-code pipeline with interactive quality gates.
## Installation
```bash
python install.py --module requirements
```
## Usage
```bash
/requirements-pilot <FEATURE_DESCRIPTION> [OPTIONS]
```
### Options
| Option | Description |
|--------|-------------|
| `--skip-tests` | Skip testing phase entirely |
| `--skip-scan` | Skip initial repository scanning |
## Workflow Phases
| Phase | Description | Output |
|-------|-------------|--------|
| 0 | Repository scanning | `00-repository-context.md` |
| 1 | Requirements confirmation | `requirements-confirm.md` (90+ score required) |
| 2 | Implementation | Code + `requirements-spec.md` |
## Quality Scoring (100-point system)
| Category | Points | Focus |
|----------|--------|-------|
| Functional Clarity | 30 | Input/output specs, success criteria |
| Technical Specificity | 25 | Integration points, constraints |
| Implementation Completeness | 25 | Edge cases, error handling |
| Business Context | 20 | User value, priority |
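For instance, a hypothetical draft scoring 27/30 + 22/25 + 21/25 + 18/20 = 88/100 would fall short of the 90-point gate and be sent back for another clarification round before implementation can start.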
## Sub-Agents
| Agent | Role |
|-------|------|
| `requirements-generate` | Create technical specifications |
| `requirements-code` | Implement functionality |
| `requirements-review` | Code quality evaluation |
| `requirements-testing` | Test case creation |
## Approval Gate
One mandatory stop point after Phase 1:
- Requirements must achieve 90+ quality score
- User must explicitly approve before implementation begins
## Testing Decision
After code review passes (≥90%):
- `--skip-tests`: Complete without testing
- No option: Interactive prompt with smart recommendations based on task complexity
## Output Structure
```
.claude/specs/{feature_name}/
├── 00-repository-context.md
├── requirements-confirm.md
└── requirements-spec.md
```
## When to Use
- Quick prototypes
- Well-defined features
- Smaller scope tasks
- When full BMAD workflow is overkill
## Directory Structure
```
agents/requirements/
├── README.md
├── commands/
│   └── requirements-pilot.md
└── agents/
    ├── requirements-generate.md
    ├── requirements-code.md
    ├── requirements-review.md
    └── requirements-testing.md
```

bin/cli.js (executable file, 730 lines)
View File

@@ -0,0 +1,730 @@
#!/usr/bin/env node
"use strict";
const crypto = require("crypto");
const fs = require("fs");
const https = require("https");
const os = require("os");
const path = require("path");
const readline = require("readline");
const zlib = require("zlib");
const { spawn } = require("child_process");
const REPO = { owner: "cexll", name: "myclaude" };
const API_HEADERS = {
"User-Agent": "myclaude-npx",
Accept: "application/vnd.github+json",
};
function parseArgs(argv) {
const out = {
installDir: "~/.claude",
force: false,
dryRun: false,
list: false,
update: false,
tag: null,
};
for (let i = 0; i < argv.length; i++) {
const a = argv[i];
if (a === "--install-dir") out.installDir = argv[++i];
else if (a === "--force") out.force = true;
else if (a === "--dry-run") out.dryRun = true;
else if (a === "--list") out.list = true;
else if (a === "--update") out.update = true;
else if (a === "--tag") out.tag = argv[++i];
else if (a === "-h" || a === "--help") out.help = true;
else throw new Error(`Unknown arg: ${a}`);
}
return out;
}
function printHelp() {
process.stdout.write(
[
"myclaude (npx installer)",
"",
"Usage:",
" npx github:cexll/myclaude",
" npx github:cexll/myclaude --list",
" npx github:cexll/myclaude --update",
" npx github:cexll/myclaude --install-dir ~/.claude --force",
"",
"Options:",
" --install-dir <path> Default: ~/.claude",
" --force Overwrite existing files",
" --dry-run Print actions only",
" --list List installable items and exit",
" --update Update already installed modules",
" --tag <tag> Install a specific GitHub tag",
].join("\n") + "\n"
);
}
function withTimeout(promise, ms, label) {
let timer;
const timeout = new Promise((_, reject) => {
timer = setTimeout(() => reject(new Error(`Timeout: ${label}`)), ms);
});
return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
function httpsGetJson(url) {
return new Promise((resolve, reject) => {
https
.get(url, { headers: API_HEADERS }, (res) => {
let body = "";
res.setEncoding("utf8");
res.on("data", (d) => (body += d));
res.on("end", () => {
if (res.statusCode && res.statusCode >= 400) {
return reject(
new Error(`HTTP ${res.statusCode}: ${url}\n${body.slice(0, 500)}`)
);
}
try {
resolve(JSON.parse(body));
} catch (e) {
reject(new Error(`Invalid JSON from ${url}: ${e.message}`));
}
});
})
.on("error", reject);
});
}
function downloadToFile(url, outPath) {
return new Promise((resolve, reject) => {
const file = fs.createWriteStream(outPath);
https
.get(url, { headers: API_HEADERS }, (res) => {
if (
res.statusCode &&
res.statusCode >= 300 &&
res.statusCode < 400 &&
res.headers.location
) {
file.close();
fs.unlink(outPath, () => {
downloadToFile(res.headers.location, outPath).then(resolve, reject);
});
return;
}
if (res.statusCode && res.statusCode >= 400) {
file.close();
fs.unlink(outPath, () => {});
return reject(new Error(`HTTP ${res.statusCode}: ${url}`));
}
res.pipe(file);
file.on("finish", () => file.close(resolve));
})
.on("error", (err) => {
file.close();
fs.unlink(outPath, () => reject(err));
});
});
}
async function fetchLatestTag() {
const url = `https://api.github.com/repos/${REPO.owner}/${REPO.name}/releases/latest`;
const json = await httpsGetJson(url);
if (!json || typeof json.tag_name !== "string" || !json.tag_name.trim()) {
throw new Error("GitHub API: missing tag_name");
}
return json.tag_name.trim();
}
async function fetchRemoteConfig(tag) {
const url = `https://api.github.com/repos/${REPO.owner}/${REPO.name}/contents/config.json?ref=${encodeURIComponent(
tag
)}`;
const json = await httpsGetJson(url);
if (!json || typeof json.content !== "string") {
throw new Error("GitHub contents API: missing config.json content");
}
const buf = Buffer.from(json.content.replace(/\n/g, ""), "base64");
return JSON.parse(buf.toString("utf8"));
}
async function fetchRemoteSkills(tag) {
const url = `https://api.github.com/repos/${REPO.owner}/${REPO.name}/contents/skills?ref=${encodeURIComponent(
tag
)}`;
const json = await httpsGetJson(url);
if (!Array.isArray(json)) throw new Error("GitHub contents API: skills is not a directory");
return json
.filter((e) => e && e.type === "dir" && typeof e.name === "string")
.map((e) => e.name)
.sort();
}
function repoRootFromHere() {
return path.resolve(__dirname, "..");
}
function readLocalConfig() {
const p = path.join(repoRootFromHere(), "config.json");
return JSON.parse(fs.readFileSync(p, "utf8"));
}
function listLocalSkills() {
const root = repoRootFromHere();
const skillsDir = path.join(root, "skills");
if (!fs.existsSync(skillsDir)) return [];
return fs
.readdirSync(skillsDir, { withFileTypes: true })
.filter((d) => d.isDirectory())
.map((d) => d.name)
.sort();
}
function expandHome(p) {
if (!p) return p;
if (p === "~") return os.homedir();
if (p.startsWith("~/")) return path.join(os.homedir(), p.slice(2));
return p;
}
function readInstalledModuleNamesFromStatus(installDir) {
const p = path.join(installDir, "installed_modules.json");
if (!fs.existsSync(p)) return null;
try {
const json = JSON.parse(fs.readFileSync(p, "utf8"));
const modules = json && json.modules;
if (!modules || typeof modules !== "object" || Array.isArray(modules)) return null;
return Object.keys(modules)
.filter((k) => typeof k === "string" && k.trim())
.sort();
} catch {
return null;
}
}
async function dirExists(p) {
try {
return (await fs.promises.stat(p)).isDirectory();
} catch {
return false;
}
}
async function mergeDirLooksInstalled(srcDir, installDir) {
if (!(await dirExists(srcDir))) return false;
const subdirs = await fs.promises.readdir(srcDir, { withFileTypes: true });
for (const d of subdirs) {
if (!d.isDirectory()) continue;
const srcSub = path.join(srcDir, d.name);
const entries = await fs.promises.readdir(srcSub, { withFileTypes: true });
for (const e of entries) {
if (!e.isFile()) continue;
const dst = path.join(installDir, d.name, e.name);
if (fs.existsSync(dst)) return true;
}
}
return false;
}
async function detectInstalledModuleNames(config, repoRoot, installDir) {
const mods = (config && config.modules) || {};
const installed = [];
for (const [name, mod] of Object.entries(mods)) {
const ops = Array.isArray(mod && mod.operations) ? mod.operations : [];
let ok = false;
for (const op of ops) {
const type = op && op.type;
if (type === "copy_file" || type === "copy_dir") {
const target = typeof op.target === "string" ? op.target : "";
if (target && fs.existsSync(path.join(installDir, target))) {
ok = true;
break;
}
} else if (type === "merge_dir") {
const source = typeof op.source === "string" ? op.source : "";
if (source && (await mergeDirLooksInstalled(path.join(repoRoot, source), installDir))) {
ok = true;
break;
}
}
}
if (ok) installed.push(name);
}
return installed.sort();
}
async function updateInstalledModules(installDir, tag, config, dryRun) {
const mods = (config && config.modules) || {};
if (!Object.keys(mods).length) throw new Error("No modules found in config.json");
let repoRoot = repoRootFromHere();
let tmp = null;
if (tag) {
tmp = path.join(
os.tmpdir(),
`myclaude-update-${Date.now()}-${crypto.randomBytes(4).toString("hex")}`
);
await fs.promises.mkdir(tmp, { recursive: true });
}
try {
if (tag) {
const archive = path.join(tmp, "src.tgz");
const url = `https://codeload.github.com/${REPO.owner}/${REPO.name}/tar.gz/refs/tags/${encodeURIComponent(
tag
)}`;
process.stdout.write(`Downloading ${REPO.owner}/${REPO.name}@${tag}...\n`);
await downloadToFile(url, archive);
process.stdout.write("Extracting...\n");
const extracted = path.join(tmp, "src");
await extractTarGz(archive, extracted);
repoRoot = extracted;
} else {
process.stdout.write("Offline mode: updating from local package contents.\n");
}
const fromStatus = readInstalledModuleNamesFromStatus(installDir);
const installed = fromStatus || (await detectInstalledModuleNames(config, repoRoot, installDir));
const toUpdate = installed.filter((name) => Object.prototype.hasOwnProperty.call(mods, name));
if (!toUpdate.length) {
process.stdout.write(`No installed modules found in ${installDir}.\n`);
return;
}
if (dryRun) {
for (const name of toUpdate) process.stdout.write(`module:${name}\n`);
return;
}
await fs.promises.mkdir(installDir, { recursive: true });
for (const name of toUpdate) {
process.stdout.write(`Updating module: ${name}\n`);
await applyModule(name, config, repoRoot, installDir, true);
}
} finally {
if (tmp) await rmTree(tmp);
}
}
function buildItems(config, skills) {
const items = [{ id: "codeagent-wrapper", label: "codeagent-wrapper", kind: "wrapper" }];
const modules = (config && config.modules) || {};
for (const [name, mod] of Object.entries(modules)) {
const desc = mod && typeof mod.description === "string" ? mod.description : "";
items.push({
id: `module:${name}`,
label: `module:${name}${desc ? ` - ${desc}` : ""}`,
kind: "module",
moduleName: name,
});
}
for (const s of skills) {
items.push({ id: `skill:${s}`, label: `skill:${s}`, kind: "skill", skillName: s });
}
return items;
}
function clearScreen() {
process.stdout.write("\x1b[2J\x1b[H");
}
async function promptMultiSelect(items, title) {
if (!process.stdin.isTTY) {
throw new Error("No TTY. Use --list or run in an interactive terminal.");
}
let idx = 0;
const selected = new Set();
readline.emitKeypressEvents(process.stdin);
process.stdin.setRawMode(true);
function render() {
clearScreen();
process.stdout.write(`${title}\n`);
process.stdout.write("↑↓ move Space toggle Enter confirm q quit\n\n");
for (let i = 0; i < items.length; i++) {
const it = items[i];
const cursor = i === idx ? ">" : " ";
const box = selected.has(it.id) ? "[x]" : "[ ]";
process.stdout.write(`${cursor} ${box} ${it.label}\n`);
}
}
function cleanup() {
process.stdin.setRawMode(false);
process.stdin.removeListener("keypress", onKey);
}
function onKey(_, key) {
if (!key) return;
if (key.name === "c" && key.ctrl) {
cleanup();
process.exit(130);
}
if (key.name === "q") {
cleanup();
process.exit(0);
}
if (key.name === "up") idx = (idx - 1 + items.length) % items.length;
else if (key.name === "down") idx = (idx + 1) % items.length;
else if (key.name === "space") {
const id = items[idx].id;
if (selected.has(id)) selected.delete(id);
else selected.add(id);
} else if (key.name === "return") {
cleanup();
clearScreen();
const picked = items.filter((it) => selected.has(it.id));
return resolvePick(picked);
}
render();
}
let resolvePick;
const result = new Promise((resolve) => {
resolvePick = resolve;
});
process.stdin.on("keypress", onKey);
render();
return result;
}
function isZeroBlock(b) {
for (let i = 0; i < b.length; i++) if (b[i] !== 0) return false;
return true;
}
function tarString(b, start, len) {
return b
.toString("utf8", start, start + len)
.replace(/\0.*$/, "")
.trim();
}
function tarOctal(b, start, len) {
const s = tarString(b, start, len);
if (!s) return 0;
return parseInt(s, 8) || 0;
}
function safePosixPath(p) {
const norm = path.posix.normalize(p);
if (norm.startsWith("/") || norm.startsWith("..") || norm.includes("/../")) {
throw new Error(`Unsafe path in archive: ${p}`);
}
return norm;
}
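// Minimal streaming tar.gz extractor: gunzips the archive, parses 512-byte tar headers,
// strips the leading top-level directory from entry paths, and writes files/directories
// under destDir; unsafe paths are rejected by safePosixPath above.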
async function extractTarGz(archivePath, destDir) {
await fs.promises.mkdir(destDir, { recursive: true });
const gunzip = zlib.createGunzip();
const stream = fs.createReadStream(archivePath).pipe(gunzip);
let buf = Buffer.alloc(0);
let file = null;
let pad = 0;
let zeroBlocks = 0;
for await (const chunk of stream) {
buf = Buffer.concat([buf, chunk]);
while (true) {
if (pad) {
if (buf.length < pad) break;
buf = buf.slice(pad);
pad = 0;
}
if (!file) {
if (buf.length < 512) break;
const header = buf.slice(0, 512);
buf = buf.slice(512);
if (isZeroBlock(header)) {
zeroBlocks++;
if (zeroBlocks >= 2) return;
continue;
}
zeroBlocks = 0;
const name = tarString(header, 0, 100);
const prefix = tarString(header, 345, 155);
const full = prefix ? `${prefix}/${name}` : name;
const size = tarOctal(header, 124, 12);
const mode = tarOctal(header, 100, 8);
const typeflag = header[156];
const rel = safePosixPath(full.split("/").slice(1).join("/"));
if (!rel || rel === ".") {
file = null;
pad = 0;
continue;
}
const outPath = path.join(destDir, ...rel.split("/"));
if (typeflag === 53) {
await fs.promises.mkdir(outPath, { recursive: true });
if (mode) await fs.promises.chmod(outPath, mode);
file = null;
pad = 0;
continue;
}
file = { outPath, size, remaining: size, chunks: [], mode };
if (size === 0) {
await fs.promises.mkdir(path.dirname(outPath), { recursive: true });
await fs.promises.writeFile(outPath, Buffer.alloc(0));
if (mode) await fs.promises.chmod(outPath, mode);
file = null;
pad = 0;
}
continue;
}
if (buf.length < file.remaining) {
file.chunks.push(buf);
file.remaining -= buf.length;
buf = Buffer.alloc(0);
break;
}
file.chunks.push(buf.slice(0, file.remaining));
buf = buf.slice(file.remaining);
file.remaining = 0;
await fs.promises.mkdir(path.dirname(file.outPath), { recursive: true });
await fs.promises.writeFile(file.outPath, Buffer.concat(file.chunks));
if (file.mode) await fs.promises.chmod(file.outPath, file.mode);
pad = (512 - (file.size % 512)) % 512;
file = null;
}
}
}
async function copyFile(src, dst, force) {
if (!force && fs.existsSync(dst)) return;
await fs.promises.mkdir(path.dirname(dst), { recursive: true });
await fs.promises.copyFile(src, dst);
const st = await fs.promises.stat(src);
await fs.promises.chmod(dst, st.mode);
}
async function copyDirRecursive(src, dst, force) {
if (fs.existsSync(dst) && !force) return;
await fs.promises.mkdir(dst, { recursive: true });
const entries = await fs.promises.readdir(src, { withFileTypes: true });
for (const e of entries) {
const s = path.join(src, e.name);
const d = path.join(dst, e.name);
if (e.isDirectory()) await copyDirRecursive(s, d, force);
else if (e.isFile()) await copyFile(s, d, force);
}
}
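// Copy regular files from each immediate subdirectory of src into the same-named
// subdirectory of installDir (one level deep; loose files and nested directories are skipped).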
async function mergeDir(src, installDir, force) {
const subdirs = await fs.promises.readdir(src, { withFileTypes: true });
for (const d of subdirs) {
if (!d.isDirectory()) continue;
const srcSub = path.join(src, d.name);
const dstSub = path.join(installDir, d.name);
await fs.promises.mkdir(dstSub, { recursive: true });
const entries = await fs.promises.readdir(srcSub, { withFileTypes: true });
for (const e of entries) {
if (!e.isFile()) continue;
await copyFile(path.join(srcSub, e.name), path.join(dstSub, e.name), force);
}
}
}
function runInstallSh(repoRoot, installDir) {
return new Promise((resolve, reject) => {
const cmd = process.platform === "win32" ? "cmd.exe" : "bash";
const args = process.platform === "win32" ? ["/c", "install.bat"] : ["install.sh"];
const p = spawn(cmd, args, {
cwd: repoRoot,
stdio: "inherit",
env: { ...process.env, INSTALL_DIR: installDir },
});
p.on("exit", (code) => {
if (code === 0) resolve();
else reject(new Error(`install script failed (exit ${code})`));
});
});
}
async function rmTree(p) {
if (!fs.existsSync(p)) return;
if (fs.promises.rm) {
await fs.promises.rm(p, { recursive: true, force: true });
return;
}
await fs.promises.rmdir(p, { recursive: true });
}
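// Apply a module's install operations from config.json (copy_file, copy_dir, merge_dir,
// and run_command restricted to "bash install.sh"), copying from repoRoot into installDir.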
async function applyModule(moduleName, config, repoRoot, installDir, force) {
const mod = config && config.modules && config.modules[moduleName];
if (!mod) throw new Error(`Unknown module: ${moduleName}`);
const ops = Array.isArray(mod.operations) ? mod.operations : [];
for (const op of ops) {
const type = op && op.type;
if (type === "copy_file") {
await copyFile(
path.join(repoRoot, op.source),
path.join(installDir, op.target),
force
);
} else if (type === "copy_dir") {
await copyDirRecursive(
path.join(repoRoot, op.source),
path.join(installDir, op.target),
force
);
} else if (type === "merge_dir") {
await mergeDir(path.join(repoRoot, op.source), installDir, force);
} else if (type === "run_command") {
const cmd = typeof op.command === "string" ? op.command.trim() : "";
if (cmd !== "bash install.sh") {
throw new Error(`Refusing run_command: ${cmd || "(empty)"}`);
}
await runInstallSh(repoRoot, installDir);
} else {
throw new Error(`Unsupported operation type: ${type}`);
}
}
}
async function installSelected(picks, tag, config, installDir, force, dryRun) {
const needRepo = picks.some((p) => p.kind !== "wrapper");
const needWrapper = picks.some((p) => p.kind === "wrapper");
if (dryRun) {
for (const p of picks) process.stdout.write(`- ${p.id}\n`);
return;
}
const tmp = path.join(
os.tmpdir(),
`myclaude-${Date.now()}-${crypto.randomBytes(4).toString("hex")}`
);
await fs.promises.mkdir(tmp, { recursive: true });
try {
let repoRoot = repoRootFromHere();
if (needRepo || needWrapper) {
if (!tag) throw new Error("No tag available to download");
const archive = path.join(tmp, "src.tgz");
const url = `https://codeload.github.com/${REPO.owner}/${REPO.name}/tar.gz/refs/tags/${encodeURIComponent(
tag
)}`;
process.stdout.write(`Downloading ${REPO.owner}/${REPO.name}@${tag}...\n`);
await downloadToFile(url, archive);
process.stdout.write("Extracting...\n");
const extracted = path.join(tmp, "src");
await extractTarGz(archive, extracted);
repoRoot = extracted;
}
await fs.promises.mkdir(installDir, { recursive: true });
for (const p of picks) {
if (p.kind === "wrapper") {
process.stdout.write("Installing codeagent-wrapper...\n");
await runInstallSh(repoRoot, installDir);
continue;
}
if (p.kind === "module") {
process.stdout.write(`Installing module: ${p.moduleName}\n`);
await applyModule(p.moduleName, config, repoRoot, installDir, force);
continue;
}
if (p.kind === "skill") {
process.stdout.write(`Installing skill: ${p.skillName}\n`);
await copyDirRecursive(
path.join(repoRoot, "skills", p.skillName),
path.join(installDir, "skills", p.skillName),
force
);
}
}
} finally {
await rmTree(tmp);
}
}
async function main() {
const args = parseArgs(process.argv.slice(2));
if (args.help) {
printHelp();
return;
}
const installDir = expandHome(args.installDir);
if (args.list && args.update) throw new Error("Cannot combine --list and --update");
let tag = args.tag;
if (!tag) {
try {
tag = await withTimeout(fetchLatestTag(), 5000, "fetch latest tag");
} catch {
tag = null;
}
}
let config = null;
let skills = [];
if (tag) {
try {
[config, skills] = await withTimeout(
Promise.all([fetchRemoteConfig(tag), fetchRemoteSkills(tag)]),
8000,
"fetch config/skills"
);
} catch {
config = null;
skills = [];
}
}
if (!config) config = readLocalConfig();
if (!skills.length) skills = listLocalSkills();
if (args.update) {
await updateInstalledModules(installDir, tag, config, args.dryRun);
process.stdout.write("Done.\n");
return;
}
const items = buildItems(config, skills);
if (args.list) {
for (const it of items) process.stdout.write(`${it.id}\n`);
return;
}
const title = tag ? `myclaude installer (latest: ${tag})` : "myclaude installer (offline mode)";
const picks = await promptMultiSelect(items, title);
if (!picks.length) {
process.stdout.write("Nothing selected.\n");
return;
}
await installSelected(picks, tag, config, installDir, args.force, args.dryRun);
process.stdout.write("Done.\n");
}
main().catch((err) => {
process.stderr.write(`ERROR: ${err && err.message ? err.message : String(err)}\n`);
process.exit(1);
});

View File

@@ -2,8 +2,8 @@
bin/
codeagent
codeagent.exe
codeagent-wrapper
codeagent-wrapper.exe
/codeagent-wrapper
/codeagent-wrapper.exe
*.test
# Coverage reports

View File

@@ -1,8 +1,5 @@
GO ?= go
BINARY ?= codeagent
CMD_PKG := ./cmd/codeagent
TOOLS_BIN := $(CURDIR)/bin
TOOLCHAIN ?= go1.22.0
GOLANGCI_LINT_VERSION := v1.56.2
@@ -14,7 +11,8 @@ STATICCHECK := $(TOOLS_BIN)/staticcheck
.PHONY: build test lint clean install
build:
$(GO) build -o $(BINARY) $(CMD_PKG)
$(GO) build -o codeagent ./cmd/codeagent
$(GO) build -o codeagent-wrapper ./cmd/codeagent-wrapper
test:
$(GO) test ./...
@@ -35,4 +33,5 @@ clean:
@python3 -c 'import glob, os; paths=["codeagent","codeagent.exe","codeagent-wrapper","codeagent-wrapper.exe","coverage.out","cover.out","coverage.html"]; paths += glob.glob("coverage*.out") + glob.glob("cover_*.out") + glob.glob("*.test"); [os.remove(p) for p in paths if os.path.exists(p)]'
install:
$(GO) install $(CMD_PKG)
$(GO) install ./cmd/codeagent
$(GO) install ./cmd/codeagent-wrapper

View File

@@ -2,7 +2,7 @@
`codeagent-wrapper` is a multi-backend AI coding agent CLI wrapper written in Go: it wraps the different AI tool backends (Codex / Claude / Gemini / Opencode) behind a unified CLI entry point and provides a consistent experience for arguments, configuration, and session resumption.
Entry point: `cmd/codeagent/main.go` (binary name: `codeagent`).
Entry points: `cmd/codeagent/main.go` (binary name: `codeagent`) and `cmd/codeagent-wrapper/main.go` (binary name: `codeagent-wrapper`). Both behave identically.
## Features
@@ -22,12 +22,14 @@
```bash
go install ./cmd/codeagent
go install ./cmd/codeagent-wrapper
```
Verify after installation:
```bash
codeagent version
codeagent-wrapper version
```
## Usage Examples
@@ -148,4 +150,3 @@ make test
make lint
make clean
```

View File

@@ -14,14 +14,10 @@ Multi-backend AI code execution wrapper supporting Codex, Claude, and Gemini.
## Installation
```bash
# Clone repository
git clone https://github.com/cexll/myclaude.git
cd myclaude
# Recommended: run the installer and select "codeagent-wrapper"
npx github:cexll/myclaude
# Install via install.py (includes binary compilation)
python3 install.py --module dev
# Or manual installation
# Manual build (optional; requires repo checkout)
cd codeagent-wrapper
go build -o ~/.claude/bin/codeagent-wrapper
```

View File

@@ -10,7 +10,7 @@ import (
)
const (
version = "6.0.0-alpha1"
version = "6.1.2"
defaultWorkdir = "."
defaultTimeout = 7200 // seconds (2 hours)
defaultCoverageTarget = 90.0
@@ -24,7 +24,7 @@ const (
stdoutCloseReasonWait = "wait-done"
stdoutCloseReasonDrain = "drain-timeout"
stdoutCloseReasonCtx = "context-cancel"
stdoutDrainTimeout = 100 * time.Millisecond
stdoutDrainTimeout = 500 * time.Millisecond
)
// Test hooks for dependency injection

View File

@@ -635,6 +635,7 @@ func runSingleMode(cfg *Config, name string) int {
WorkDir: cfg.WorkDir,
Mode: cfg.Mode,
SessionID: cfg.SessionID,
Backend: cfg.Backend,
Model: cfg.Model,
ReasoningEffort: cfg.ReasoningEffort,
Agent: cfg.Agent,
@@ -648,6 +649,12 @@ func runSingleMode(cfg *Config, name string) int {
return result.ExitCode
}
// Validate that we got a meaningful output message
if strings.TrimSpace(result.Message) == "" {
logError(fmt.Sprintf("no output message: backend=%s returned empty result.Message with exit_code=0", cfg.Backend))
return 1
}
fmt.Println(result.Message)
if result.SessionID != "" {
fmt.Printf("\n---\nSESSION_ID: %s\n", result.SessionID)

View File

@@ -1913,6 +1913,37 @@ func TestRun_PassesReasoningEffortToTaskSpec(t *testing.T) {
}
}
func TestRun_NoOutputMessage_ReturnsExitCode1AndWritesStderr(t *testing.T) {
defer resetTestHooks()
cleanupLogsFn = func() (CleanupStats, error) { return CleanupStats{}, nil }
t.Setenv("TMPDIR", t.TempDir())
selectBackendFn = func(name string) (Backend, error) {
return testBackend{name: name, command: "echo"}, nil
}
runTaskFn = func(task TaskSpec, silent bool, timeout int) TaskResult {
return TaskResult{ExitCode: 0, Message: ""}
}
isTerminalFn = func() bool { return true }
stdinReader = strings.NewReader("")
os.Args = []string{"codeagent-wrapper", "task"}
var code int
errOutput := captureStderr(t, func() {
code = run()
})
if code != 1 {
t.Fatalf("run() exit=%d, want 1", code)
}
if !strings.Contains(errOutput, "no output message") {
t.Fatalf("stderr missing sentinel error text; got:\n%s", errOutput)
}
}
func TestRunBuildCodexArgs_NewMode(t *testing.T) {
const key = "CODEX_BYPASS_SANDBOX"
t.Setenv(key, "false")
@@ -3667,10 +3698,8 @@ func TestVersionFlag(t *testing.T) {
}
})
want := "codeagent-wrapper version 6.0.0-alpha1\n"
if output != want {
t.Fatalf("output = %q, want %q", output, want)
if !strings.HasPrefix(output, "codeagent-wrapper version ") {
t.Fatalf("output = %q, want prefix %q", output, "codeagent-wrapper version ")
}
}
@@ -3683,10 +3712,8 @@ func TestVersionShortFlag(t *testing.T) {
}
})
want := "codeagent-wrapper version 6.0.0-alpha1\n"
if output != want {
t.Fatalf("output = %q, want %q", output, want)
if !strings.HasPrefix(output, "codeagent-wrapper version ") {
t.Fatalf("output = %q, want prefix %q", output, "codeagent-wrapper version ")
}
}
@@ -3699,10 +3726,8 @@ func TestVersionLegacyAlias(t *testing.T) {
}
})
want := "codeagent-wrapper version 6.0.0-alpha1\n"
if output != want {
t.Fatalf("output = %q, want %q", output, want)
if !strings.HasPrefix(output, "codeagent-wrapper version ") {
t.Fatalf("output = %q, want prefix %q", output, "codeagent-wrapper version ")
}
}

View File

@@ -25,7 +25,7 @@ func (ClaudeBackend) Env(baseURL, apiKey string) map[string]string {
env["ANTHROPIC_BASE_URL"] = baseURL
}
if apiKey != "" {
env["ANTHROPIC_API_KEY"] = apiKey
env["ANTHROPIC_AUTH_TOKEN"] = apiKey
}
return env
}

View File

@@ -0,0 +1,193 @@
package executor
import (
"os"
"path/filepath"
"strings"
"testing"
backend "codeagent-wrapper/internal/backend"
config "codeagent-wrapper/internal/config"
)
// TestEnvInjectionWithAgent tests the full flow of env injection with agent config
func TestEnvInjectionWithAgent(t *testing.T) {
// Setup temp config
tmpDir := t.TempDir()
configDir := filepath.Join(tmpDir, ".codeagent")
if err := os.MkdirAll(configDir, 0755); err != nil {
t.Fatal(err)
}
// Write test config with agent that has base_url and api_key
configContent := `{
"default_backend": "codex",
"agents": {
"test-agent": {
"backend": "claude",
"model": "test-model",
"base_url": "https://test.api.com",
"api_key": "test-api-key-12345678"
}
}
}`
configPath := filepath.Join(configDir, "models.json")
if err := os.WriteFile(configPath, []byte(configContent), 0644); err != nil {
t.Fatal(err)
}
// Override HOME to use temp dir
oldHome := os.Getenv("HOME")
os.Setenv("HOME", tmpDir)
defer os.Setenv("HOME", oldHome)
// Reset config cache
config.ResetModelsConfigCacheForTest()
defer config.ResetModelsConfigCacheForTest()
// Test ResolveAgentConfig
agentBackend, model, _, _, baseURL, apiKey, _ := config.ResolveAgentConfig("test-agent")
t.Logf("ResolveAgentConfig: backend=%q, model=%q, baseURL=%q, apiKey=%q",
agentBackend, model, baseURL, apiKey)
if agentBackend != "claude" {
t.Errorf("expected backend 'claude', got %q", agentBackend)
}
if baseURL != "https://test.api.com" {
t.Errorf("expected baseURL 'https://test.api.com', got %q", baseURL)
}
if apiKey != "test-api-key-12345678" {
t.Errorf("expected apiKey 'test-api-key-12345678', got %q", apiKey)
}
// Test Backend.Env
b := backend.ClaudeBackend{}
env := b.Env(baseURL, apiKey)
t.Logf("Backend.Env: %v", env)
if env == nil {
t.Fatal("expected non-nil env from Backend.Env")
}
if env["ANTHROPIC_BASE_URL"] != baseURL {
t.Errorf("expected ANTHROPIC_BASE_URL=%q, got %q", baseURL, env["ANTHROPIC_BASE_URL"])
}
if env["ANTHROPIC_AUTH_TOKEN"] != apiKey {
t.Errorf("expected ANTHROPIC_AUTH_TOKEN=%q, got %q", apiKey, env["ANTHROPIC_AUTH_TOKEN"])
}
}
// TestEnvInjectionLogic tests the exact logic used in executor
func TestEnvInjectionLogic(t *testing.T) {
// Setup temp config
tmpDir := t.TempDir()
configDir := filepath.Join(tmpDir, ".codeagent")
if err := os.MkdirAll(configDir, 0755); err != nil {
t.Fatal(err)
}
configContent := `{
"default_backend": "codex",
"agents": {
"explore": {
"backend": "claude",
"model": "MiniMax-M2.1",
"base_url": "https://api.minimaxi.com/anthropic",
"api_key": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.test"
}
}
}`
configPath := filepath.Join(configDir, "models.json")
if err := os.WriteFile(configPath, []byte(configContent), 0644); err != nil {
t.Fatal(err)
}
oldHome := os.Getenv("HOME")
os.Setenv("HOME", tmpDir)
defer os.Setenv("HOME", oldHome)
config.ResetModelsConfigCacheForTest()
defer config.ResetModelsConfigCacheForTest()
// Simulate the executor logic
cfgBackend := "claude" // This should come from taskSpec.Backend
agentName := "explore"
// Step 1: Get backend config (usually empty for claude without global config)
baseURL, apiKey := config.ResolveBackendConfig(cfgBackend)
t.Logf("Step 1 - ResolveBackendConfig(%q): baseURL=%q, apiKey=%q", cfgBackend, baseURL, apiKey)
// Step 2: If agent specified, get agent config
if agentName != "" {
agentBackend, _, _, _, agentBaseURL, agentAPIKey, _ := config.ResolveAgentConfig(agentName)
t.Logf("Step 2 - ResolveAgentConfig(%q): backend=%q, baseURL=%q, apiKey=%q",
agentName, agentBackend, agentBaseURL, agentAPIKey)
// Step 3: Check if agent backend matches cfg backend
if strings.EqualFold(strings.TrimSpace(agentBackend), strings.TrimSpace(cfgBackend)) {
baseURL, apiKey = agentBaseURL, agentAPIKey
t.Logf("Step 3 - Backend match! Using agent config: baseURL=%q, apiKey=%q", baseURL, apiKey)
} else {
t.Logf("Step 3 - Backend mismatch: agent=%q, cfg=%q", agentBackend, cfgBackend)
}
}
// Step 4: Get env vars from backend
b := backend.ClaudeBackend{}
injected := b.Env(baseURL, apiKey)
t.Logf("Step 4 - Backend.Env: %v", injected)
// Verify
if len(injected) == 0 {
t.Fatal("Expected env vars to be injected, got none")
}
expectedURL := "https://api.minimaxi.com/anthropic"
if injected["ANTHROPIC_BASE_URL"] != expectedURL {
t.Errorf("ANTHROPIC_BASE_URL: expected %q, got %q", expectedURL, injected["ANTHROPIC_BASE_URL"])
}
if _, ok := injected["ANTHROPIC_AUTH_TOKEN"]; !ok {
t.Error("ANTHROPIC_AUTH_TOKEN not set")
}
// Step 5: Test masking
for k, v := range injected {
masked := maskSensitiveValue(k, v)
t.Logf("Step 5 - Env log: %s=%s", k, masked)
}
}
// TestTaskSpecBackendPropagation tests that taskSpec.Backend is properly used
func TestTaskSpecBackendPropagation(t *testing.T) {
// Simulate what happens in RunCodexTaskWithContext
taskSpec := TaskSpec{
ID: "test",
Task: "hello",
Backend: "claude",
Agent: "explore",
}
// This is the logic from executor.go lines 889-916
cfg := &config.Config{
Mode: "new",
Task: taskSpec.Task,
Backend: "codex", // default
}
var backend Backend = nil // nil in single mode
commandName := "codex" // default
if backend != nil {
cfg.Backend = backend.Name()
} else if taskSpec.Backend != "" {
cfg.Backend = taskSpec.Backend
} else if commandName != "" {
cfg.Backend = commandName
}
t.Logf("taskSpec.Backend=%q, cfg.Backend=%q", taskSpec.Backend, cfg.Backend)
if cfg.Backend != "claude" {
t.Errorf("expected cfg.Backend='claude', got %q", cfg.Backend)
}
}

View File

@@ -0,0 +1,333 @@
package executor
import (
"strings"
"testing"
backend "codeagent-wrapper/internal/backend"
)
func TestMaskSensitiveValue(t *testing.T) {
tests := []struct {
name string
key string
value string
expected string
}{
{
name: "API_KEY with long value",
key: "ANTHROPIC_AUTH_TOKEN",
value: "sk-ant-api03-xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
expected: "sk-a****xxxx",
},
{
name: "api_key lowercase",
key: "api_key",
value: "abcdefghijklmnop",
expected: "abcd****mnop",
},
{
name: "AUTH_TOKEN",
key: "AUTH_TOKEN",
value: "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9",
expected: "eyJh****VCJ9",
},
{
name: "SECRET",
key: "MY_SECRET",
value: "super-secret-value-12345",
expected: "supe****2345",
},
{
name: "short key value (8 chars)",
key: "API_KEY",
value: "12345678",
expected: "****",
},
{
name: "very short key value",
key: "API_KEY",
value: "abc",
expected: "****",
},
{
name: "empty key value",
key: "API_KEY",
value: "",
expected: "",
},
{
name: "non-sensitive BASE_URL",
key: "ANTHROPIC_BASE_URL",
value: "https://api.anthropic.com",
expected: "https://api.anthropic.com",
},
{
name: "non-sensitive MODEL",
key: "MODEL",
value: "claude-3-opus",
expected: "claude-3-opus",
},
{
name: "case insensitive - Key",
key: "My_Key",
value: "1234567890abcdef",
expected: "1234****cdef",
},
{
name: "case insensitive - TOKEN",
key: "ACCESS_TOKEN",
value: "access123456789",
expected: "acce****6789",
},
{
name: "partial match - apikey",
key: "MYAPIKEY",
value: "1234567890",
expected: "1234****7890",
},
{
name: "partial match - secretvalue",
key: "SECRETVALUE",
value: "abcdefghij",
expected: "abcd****ghij",
},
{
name: "9 char value (just above threshold)",
key: "API_KEY",
value: "123456789",
expected: "1234****6789",
},
{
name: "exactly 8 char value (at threshold)",
key: "API_KEY",
value: "12345678",
expected: "****",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := maskSensitiveValue(tt.key, tt.value)
if result != tt.expected {
t.Errorf("maskSensitiveValue(%q, %q) = %q, want %q", tt.key, tt.value, result, tt.expected)
}
})
}
}
func TestMaskSensitiveValue_NoLeakage(t *testing.T) {
// Ensure sensitive values are never fully exposed
sensitiveKeys := []string{"API_KEY", "api_key", "AUTH_TOKEN", "SECRET", "access_token", "MYAPIKEY"}
longValue := "this-is-a-very-long-secret-value-that-should-be-masked"
for _, key := range sensitiveKeys {
t.Run(key, func(t *testing.T) {
masked := maskSensitiveValue(key, longValue)
// Should not contain the full value
if masked == longValue {
t.Errorf("key %q: value was not masked", key)
}
// Should contain mask marker
if !strings.Contains(masked, "****") {
t.Errorf("key %q: masked value %q does not contain ****", key, masked)
}
// First 4 chars should be visible
if !strings.HasPrefix(masked, longValue[:4]) {
t.Errorf("key %q: masked value should start with first 4 chars", key)
}
// Last 4 chars should be visible
if !strings.HasSuffix(masked, longValue[len(longValue)-4:]) {
t.Errorf("key %q: masked value should end with last 4 chars", key)
}
})
}
}
func TestMaskSensitiveValue_NonSensitivePassthrough(t *testing.T) {
// Non-sensitive keys should pass through unchanged
nonSensitiveKeys := []string{
"ANTHROPIC_BASE_URL",
"BASE_URL",
"MODEL",
"BACKEND",
"WORKDIR",
"HOME",
"PATH",
}
value := "any-value-here-12345"
for _, key := range nonSensitiveKeys {
t.Run(key, func(t *testing.T) {
result := maskSensitiveValue(key, value)
if result != value {
t.Errorf("key %q: expected passthrough but got %q", key, result)
}
})
}
}
// TestClaudeBackendEnv tests that ClaudeBackend.Env returns correct env vars
func TestClaudeBackendEnv(t *testing.T) {
tests := []struct {
name string
baseURL string
apiKey string
expectKeys []string
expectNil bool
}{
{
name: "both base_url and api_key",
baseURL: "https://api.custom.com",
apiKey: "sk-test-key-12345",
expectKeys: []string{"ANTHROPIC_BASE_URL", "ANTHROPIC_AUTH_TOKEN"},
},
{
name: "only base_url",
baseURL: "https://api.custom.com",
apiKey: "",
expectKeys: []string{"ANTHROPIC_BASE_URL"},
},
{
name: "only api_key",
baseURL: "",
apiKey: "sk-test-key-12345",
expectKeys: []string{"ANTHROPIC_AUTH_TOKEN"},
},
{
name: "both empty",
baseURL: "",
apiKey: "",
expectNil: true,
},
{
name: "whitespace only",
baseURL: " ",
apiKey: " ",
expectNil: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
b := backend.ClaudeBackend{}
env := b.Env(tt.baseURL, tt.apiKey)
if tt.expectNil {
if env != nil {
t.Errorf("expected nil env, got %v", env)
}
return
}
if env == nil {
t.Fatal("expected non-nil env")
}
for _, key := range tt.expectKeys {
if _, ok := env[key]; !ok {
t.Errorf("expected key %q in env", key)
}
}
// Verify values are correct
if tt.baseURL != "" && strings.TrimSpace(tt.baseURL) != "" {
if env["ANTHROPIC_BASE_URL"] != strings.TrimSpace(tt.baseURL) {
t.Errorf("ANTHROPIC_BASE_URL = %q, want %q", env["ANTHROPIC_BASE_URL"], strings.TrimSpace(tt.baseURL))
}
}
if tt.apiKey != "" && strings.TrimSpace(tt.apiKey) != "" {
if env["ANTHROPIC_AUTH_TOKEN"] != strings.TrimSpace(tt.apiKey) {
t.Errorf("ANTHROPIC_AUTH_TOKEN = %q, want %q", env["ANTHROPIC_AUTH_TOKEN"], strings.TrimSpace(tt.apiKey))
}
}
})
}
}
// TestEnvLoggingIntegration tests that env vars are properly masked in logs
func TestEnvLoggingIntegration(t *testing.T) {
b := backend.ClaudeBackend{}
baseURL := "https://api.minimaxi.com/anthropic"
apiKey := "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.longjwttoken"
env := b.Env(baseURL, apiKey)
if env == nil {
t.Fatal("expected non-nil env")
}
// Verify that when we log these values, sensitive ones are masked
for k, v := range env {
masked := maskSensitiveValue(k, v)
if k == "ANTHROPIC_BASE_URL" {
// URL should not be masked
if masked != v {
t.Errorf("BASE_URL should not be masked: got %q, want %q", masked, v)
}
}
if k == "ANTHROPIC_AUTH_TOKEN" {
// API key should be masked
if masked == v {
t.Errorf("API_KEY should be masked, but got original value")
}
if !strings.Contains(masked, "****") {
t.Errorf("masked API_KEY should contain ****: got %q", masked)
}
// Should still show first 4 and last 4 chars
if !strings.HasPrefix(masked, v[:4]) {
t.Errorf("masked value should start with first 4 chars of original")
}
if !strings.HasSuffix(masked, v[len(v)-4:]) {
t.Errorf("masked value should end with last 4 chars of original")
}
}
}
}
// TestGeminiBackendEnv tests GeminiBackend.Env for comparison
func TestGeminiBackendEnv(t *testing.T) {
b := backend.GeminiBackend{}
env := b.Env("https://custom.api", "gemini-api-key-12345")
if env == nil {
t.Fatal("expected non-nil env")
}
// Check that GEMINI env vars are set
if _, ok := env["GOOGLE_GEMINI_BASE_URL"]; !ok {
t.Error("expected GOOGLE_GEMINI_BASE_URL in env")
}
if _, ok := env["GEMINI_API_KEY"]; !ok {
t.Error("expected GEMINI_API_KEY in env")
}
// Verify masking works for Gemini keys too
for k, v := range env {
masked := maskSensitiveValue(k, v)
if strings.Contains(strings.ToLower(k), "key") {
if masked == v && len(v) > 0 {
t.Errorf("key %q should be masked", k)
}
}
}
}
// TestCodexBackendEnv tests CodexBackend.Env
func TestCodexBackendEnv(t *testing.T) {
b := backend.CodexBackend{}
env := b.Env("https://custom.api", "codex-api-key-12345")
if env == nil {
t.Fatal("expected non-nil env for codex")
}
// Check for OPENAI env vars
if _, ok := env["OPENAI_BASE_URL"]; !ok {
t.Error("expected OPENAI_BASE_URL in env")
}
if _, ok := env["OPENAI_API_KEY"]; !ok {
t.Error("expected OPENAI_API_KEY in env")
}
}

View File

@@ -0,0 +1,133 @@
package executor
import (
"context"
"errors"
"io"
"os"
"path/filepath"
"strings"
"testing"
config "codeagent-wrapper/internal/config"
)
type fakeCmd struct {
env map[string]string
}
func (f *fakeCmd) Start() error { return nil }
func (f *fakeCmd) Wait() error { return nil }
func (f *fakeCmd) StdoutPipe() (io.ReadCloser, error) {
return io.NopCloser(strings.NewReader("")), nil
}
func (f *fakeCmd) StderrPipe() (io.ReadCloser, error) {
return nil, errors.New("fake stderr pipe error")
}
func (f *fakeCmd) StdinPipe() (io.WriteCloser, error) {
return nil, errors.New("fake stdin pipe error")
}
func (f *fakeCmd) SetStderr(io.Writer) {}
func (f *fakeCmd) SetDir(string) {}
func (f *fakeCmd) SetEnv(env map[string]string) {
if len(env) == 0 {
return
}
if f.env == nil {
f.env = make(map[string]string, len(env))
}
for k, v := range env {
f.env[k] = v
}
}
func (f *fakeCmd) Process() processHandle { return nil }
func TestEnvInjection_LogsToStderrAndMasksKey(t *testing.T) {
// Arrange ~/.codeagent/models.json via HOME override.
tmpDir := t.TempDir()
configDir := filepath.Join(tmpDir, ".codeagent")
if err := os.MkdirAll(configDir, 0o755); err != nil {
t.Fatal(err)
}
const baseURL = "https://api.minimaxi.com/anthropic"
const apiKey = "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.test"
models := `{
"agents": {
"explore": {
"backend": "claude",
"model": "MiniMax-M2.1",
"base_url": "` + baseURL + `",
"api_key": "` + apiKey + `"
}
}
}`
if err := os.WriteFile(filepath.Join(configDir, "models.json"), []byte(models), 0o644); err != nil {
t.Fatal(err)
}
oldHome := os.Getenv("HOME")
if err := os.Setenv("HOME", tmpDir); err != nil {
t.Fatal(err)
}
defer func() { _ = os.Setenv("HOME", oldHome) }()
config.ResetModelsConfigCacheForTest()
defer config.ResetModelsConfigCacheForTest()
// Capture stderr (RunCodexTaskWithContext prints env injection lines there).
r, w, err := os.Pipe()
if err != nil {
t.Fatal(err)
}
oldStderr := os.Stderr
os.Stderr = w
defer func() { os.Stderr = oldStderr }()
readDone := make(chan string, 1)
go func() {
defer r.Close()
b, _ := io.ReadAll(r)
readDone <- string(b)
}()
var cmd *fakeCmd
restoreRunner := SetNewCommandRunner(func(ctx context.Context, name string, args ...string) CommandRunner {
cmd = &fakeCmd{}
return cmd
})
defer restoreRunner()
// Act: force an early return right after env injection by making StderrPipe fail.
_ = RunCodexTaskWithContext(
context.Background(),
TaskSpec{Task: "hi", WorkDir: ".", Backend: "claude", Agent: "explore"},
nil,
"claude",
nil,
nil,
false,
false,
1,
)
_ = w.Close()
got := <-readDone
// Assert: env was injected into the command and logging is present with masking.
if cmd == nil || cmd.env == nil {
t.Fatalf("expected cmd env to be set, got cmd=%v env=%v", cmd, nil)
}
if cmd.env["ANTHROPIC_BASE_URL"] != baseURL {
t.Fatalf("ANTHROPIC_BASE_URL=%q, want %q", cmd.env["ANTHROPIC_BASE_URL"], baseURL)
}
if cmd.env["ANTHROPIC_AUTH_TOKEN"] != apiKey {
t.Fatalf("ANTHROPIC_AUTH_TOKEN=%q, want %q", cmd.env["ANTHROPIC_AUTH_TOKEN"], apiKey)
}
if !strings.Contains(got, "Env: ANTHROPIC_BASE_URL="+baseURL) {
t.Fatalf("stderr missing base URL env log; stderr=%q", got)
}
if !strings.Contains(got, "Env: ANTHROPIC_AUTH_TOKEN=eyJh****test") {
t.Fatalf("stderr missing masked API key log; stderr=%q", got)
}
}

View File

@@ -40,7 +40,7 @@ const (
stdoutCloseReasonWait = "wait-done"
stdoutCloseReasonDrain = "drain-timeout"
stdoutCloseReasonCtx = "context-cancel"
stdoutDrainTimeout = 100 * time.Millisecond
stdoutDrainTimeout = 500 * time.Millisecond
)
// Hook points (tests can override inside this package).
@@ -940,6 +940,11 @@ func RunCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
// Load gemini env from ~/.gemini/.env if exists
if cfg.Backend == "gemini" {
fileEnv = loadGeminiEnv()
if cfg.Mode != "resume" && strings.TrimSpace(cfg.Model) == "" {
if model := fileEnv["GEMINI_MODEL"]; model != "" {
cfg.Model = model
}
}
}
useStdin := taskSpec.UseStdin
@@ -1062,6 +1067,12 @@ func RunCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
}
if injected := envBackend.Env(baseURL, apiKey); len(injected) > 0 {
cmd.SetEnv(injected)
// Log injected env vars with masked API keys (to file and stderr)
for k, v := range injected {
msg := fmt.Sprintf("Env: %s=%s", k, maskSensitiveValue(k, v))
logInfoFn(msg)
fmt.Fprintln(os.Stderr, " "+msg)
}
}
}
@@ -1444,3 +1455,19 @@ func terminateCommand(cmd commandRunner) *forceKillTimer {
return &forceKillTimer{timer: timer, done: done}
}
// maskSensitiveValue masks sensitive values like API keys for logging.
// Values containing "key", "token", or "secret" (case-insensitive) are masked.
// For values longer than 8 chars: shows first 4 + **** + last 4.
// For shorter values: shows only ****.
func maskSensitiveValue(key, value string) string {
keyLower := strings.ToLower(key)
if strings.Contains(keyLower, "key") || strings.Contains(keyLower, "token") || strings.Contains(keyLower, "secret") {
if len(value) > 8 {
return value[:4] + "****" + value[len(value)-4:]
} else if len(value) > 0 {
return "****"
}
}
return value
}

View File

@@ -0,0 +1,43 @@
#!/bin/bash
# Benchmark script for Claude CLI stability test
# Tests if the stdoutDrainTimeout fix resolves intermittent failures
set -euo pipefail
RUNS=${1:-100}
FAIL_COUNT=0
SUCCESS_COUNT=0
TIMEOUT_COUNT=0
echo "Running $RUNS iterations..."
echo "---"
for i in $(seq 1 $RUNS); do
result=$(timeout 30 codeagent --backend claude --skip-permissions 'say OK' 2>&1) || true
if echo "$result" | grep -q 'without agent_message'; then
FAIL_COUNT=$((FAIL_COUNT + 1))  # avoid ((x++)): it returns non-zero when x is 0, which trips set -e
echo "[$i] FAIL: without agent_message"
elif echo "$result" | grep -q 'timeout'; then
TIMEOUT_COUNT=$((TIMEOUT_COUNT + 1))
echo "[$i] TIMEOUT"
elif echo "$result" | grep -q 'OK\|ok'; then
SUCCESS_COUNT=$((SUCCESS_COUNT + 1))
printf "\r[$i] OK "
else
FAIL_COUNT=$((FAIL_COUNT + 1))
echo "[$i] FAIL: unexpected output"
echo "$result" | head -3
fi
done
echo ""
echo "---"
echo "Results ($RUNS runs):"
echo " Success: $SUCCESS_COUNT ($(echo "scale=1; $SUCCESS_COUNT * 100 / $RUNS" | bc)%)"
echo " Fail: $FAIL_COUNT ($(echo "scale=1; $FAIL_COUNT * 100 / $RUNS" | bc)%)"
echo " Timeout: $TIMEOUT_COUNT ($(echo "scale=1; $TIMEOUT_COUNT * 100 / $RUNS" | bc)%)"
if [ $FAIL_COUNT -gt 0 ]; then
exit 1
fi

View File

@@ -3,75 +3,14 @@
"install_dir": "~/.claude",
"log_file": "install.log",
"modules": {
"dev": {
"enabled": true,
"description": "Core dev workflow with Codex integration",
"operations": [
{
"type": "merge_dir",
"source": "dev-workflow",
"description": "Merge commands/ and agents/ into install dir"
},
{
"type": "copy_file",
"source": "memorys/CLAUDE.md",
"target": "CLAUDE.md",
"description": "Copy core role and guidelines"
},
{
"type": "copy_file",
"source": "skills/codeagent/SKILL.md",
"target": "skills/codeagent/SKILL.md",
"description": "Install codeagent skill"
},
{
"type": "copy_file",
"source": "skills/product-requirements/SKILL.md",
"target": "skills/product-requirements/SKILL.md",
"description": "Install product-requirements skill"
},
{
"type": "copy_file",
"source": "skills/prototype-prompt-generator/SKILL.md",
"target": "skills/prototype-prompt-generator/SKILL.md",
"description": "Install prototype-prompt-generator skill"
},
{
"type": "copy_file",
"source": "skills/prototype-prompt-generator/references/prompt-structure.md",
"target": "skills/prototype-prompt-generator/references/prompt-structure.md",
"description": "Install prototype-prompt-generator prompt structure reference"
},
{
"type": "copy_file",
"source": "skills/prototype-prompt-generator/references/design-systems.md",
"target": "skills/prototype-prompt-generator/references/design-systems.md",
"description": "Install prototype-prompt-generator design systems reference"
},
{
"type": "run_command",
"command": "bash install.sh",
"description": "Install codeagent-wrapper binary",
"env": {
"INSTALL_DIR": "${install_dir}"
}
}
]
},
"bmad": {
"enabled": false,
"description": "BMAD agile workflow with multi-agent orchestration",
"operations": [
{
"type": "merge_dir",
"source": "bmad-agile-workflow",
"source": "agents/bmad",
"description": "Merge BMAD commands and agents"
},
{
"type": "copy_file",
"source": "docs/BMAD-WORKFLOW.md",
"target": "docs/BMAD-WORKFLOW.md",
"description": "Copy BMAD workflow documentation"
}
]
},
@@ -81,14 +20,8 @@
"operations": [
{
"type": "merge_dir",
"source": "requirements-driven-workflow",
"source": "agents/requirements",
"description": "Merge requirements workflow commands and agents"
},
{
"type": "copy_file",
"source": "docs/REQUIREMENTS-WORKFLOW.md",
"target": "docs/REQUIREMENTS-WORKFLOW.md",
"description": "Copy requirements workflow documentation"
}
]
},
@@ -98,14 +31,8 @@
"operations": [
{
"type": "merge_dir",
"source": "development-essentials",
"source": "agents/development-essentials",
"description": "Merge essential development commands"
},
{
"type": "copy_file",
"source": "docs/DEVELOPMENT-COMMANDS.md",
"target": "docs/DEVELOPMENT-COMMANDS.md",
"description": "Copy development commands documentation"
}
]
},
@@ -169,6 +96,18 @@
}
]
},
"do": {
"enabled": true,
"description": "7-phase feature development workflow with codeagent orchestration",
"operations": [
{
"type": "copy_dir",
"source": "skills/do",
"target": "skills/do",
"description": "Install do skill with hooks"
}
]
},
"course": {
"enabled": false,
"description": "课程开发工作流,包含 dev、产品需求和测试用例技能",

View File

@@ -1,9 +0,0 @@
{
"name": "dev",
"description": "Lightweight development workflow with requirements clarification, parallel codex execution, and mandatory 90% test coverage",
"version": "5.6.1",
"author": {
"name": "cexll",
"email": "cexll@cexll.com"
}
}

View File

@@ -1,192 +0,0 @@
# /dev - Minimal Dev Workflow
## Overview
A freshly designed lightweight development workflow with no legacy baggage, focused on delivering high-quality code fast.
## Flow
```
/dev trigger
AskUserQuestion (backend selection)
AskUserQuestion (requirements clarification)
codeagent analysis (plan mode + task typing + UI auto-detection)
dev-plan-generator (create dev doc)
codeagent concurrent development (2-5 tasks, backend routing)
codeagent testing & verification (≥90% coverage)
Done (generate summary)
```
## Step 0 + The 6 Steps
### 0. Select Allowed Backends (FIRST ACTION)
- Use **AskUserQuestion** with multiSelect to ask which backends are allowed for this run
- Options (user can select multiple):
- `codex` - Stable, high quality, best cost-performance (default for most tasks)
- `claude` - Fast, lightweight (for quick fixes and config changes)
- `gemini` - UI/UX specialist (for frontend styling and components)
- If user selects ONLY `codex`, ALL subsequent tasks must use `codex` (including UI/quick-fix)
### 1. Clarify Requirements
- Use **AskUserQuestion** to ask the user directly
- No scoring system, no complex logic
- 23 rounds of Q&A until the requirement is clear
### 2. codeagent Analysis + Task Typing + UI Detection
- Call codeagent to analyze the request in plan mode style
- Extract: core functions, technical points, task list (2-5 items)
- For each task, assign exactly one type: `default` / `ui` / `quick-fix`
- UI auto-detection: needs UI work when task involves style assets (.css, .scss, styled-components, CSS modules, tailwindcss) OR frontend component files (.tsx, .jsx, .vue); output yes/no plus evidence
### 3. Generate Dev Doc
- Call the **dev-plan-generator** agent
- Produce a single `dev-plan.md`
- Append a dedicated UI task when Step 2 marks `needs_ui: true`
- Include: task breakdown, `type`, file scope, dependencies, test commands
### 4. Concurrent Development
- Work from the task list in dev-plan.md
- Route backend per task type (with user constraints + fallback; see the routing sketch after this list):
  - `default` → `codex`
  - `ui` → `gemini` (enforced when allowed)
  - `quick-fix` → `claude`
  - Missing `type` → treat as `default`
- If the preferred backend is not allowed, fall back to an allowed backend by priority: `codex` → `claude` → `gemini`
- Independent tasks → run in parallel
- Conflicting tasks → run serially
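A minimal sketch of that routing rule, assuming a per-task `type` field and the user's allowed-backend list from Step 0 (illustrative only, not code from this repo):
```js
// Hypothetical backend routing for /dev Step 4: preferred backend per type, then priority fallback.
const PREFERRED = { default: "codex", ui: "gemini", "quick-fix": "claude" };
const FALLBACK_ORDER = ["codex", "claude", "gemini"];

function routeBackend(taskType, allowedBackends) {
  const preferred = PREFERRED[taskType] || PREFERRED.default; // missing type → default
  if (allowedBackends.includes(preferred)) return preferred;
  // Preferred backend not allowed: fall back by fixed priority.
  return FALLBACK_ORDER.find((b) => allowedBackends.includes(b));
}

// Example: a UI task when only codex and claude are allowed falls back to codex.
// routeBackend("ui", ["codex", "claude"]) === "codex"
```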
### 5. Testing & Verification
- Each codeagent task:
- Implements the feature
- Writes tests
- Runs coverage
- Reports results (≥90%)
### 6. Complete
- Summarize task status
- Record coverage
## Usage
```bash
/dev "Implement user login with email + password"
```
No CLI flags required; workflow starts with an interactive backend selection.
## Output Structure
```
.claude/specs/{feature_name}/
└── dev-plan.md # Dev document generated by agent
```
Only one file—minimal and clear.
## Core Components
### Tools
- **AskUserQuestion**: interactive requirement clarification
- **codeagent skill**: analysis, development, testing; supports `--backend` for `codex` / `claude` / `gemini`
- **dev-plan-generator agent**: generate dev doc (subagent via Task tool, saves context)
## Backend Selection & Routing
- **Step 0**: user selects allowed backends; if only `codex` is selected, all tasks use codex
- **UI detection standard**: style files (.css, .scss, styled-components, CSS modules, tailwindcss) OR frontend component code (.tsx, .jsx, .vue) trigger `needs_ui: true`
- **Task type field**: each task in `dev-plan.md` must have `type: default|ui|quick-fix`
- **Routing**: `default`→codex, `ui`→gemini, `quick-fix`→claude; if the preferred backend is disallowed, fall back to an allowed backend by priority: codex→claude→gemini
## Key Features
### ✅ Fresh Design
- No legacy project residue
- No complex scoring logic
- No extra abstraction layers
### ✅ Minimal Orchestration
- Orchestrator controls the flow directly
- Only three tools/components
- Steps are straightforward
### ✅ Concurrency
- Tasks split based on natural functional boundaries
- Auto-detect dependencies and conflicts
- codeagent executes independently with optimal backend
### ✅ Quality Assurance
- Enforces 90% coverage
- codeagent tests and verifies its own work
- Automatic retry on failure
## Example
```bash
# Trigger
/dev "Add user login feature"
# Step 0: Select backends
Q: Which backends are allowed? (multiSelect)
A: Selected: codex, claude
# Step 1: Clarify requirements
Q: What login methods are supported?
A: Email + password
Q: Should login be remembered?
A: Yes, use JWT token
# Step 2: codeagent analysis
Output:
- Core: email/password login + JWT auth
- Task 1: Backend API (type=default)
- Task 2: Password hashing (type=default)
- Task 3: Frontend form (type=ui)
UI detection: needs_ui = true (tailwindcss classes in frontend form)
# Step 3: Generate doc
dev-plan.md generated with typed tasks ✓
# Step 4-5: Concurrent development (routing + fallback)
[task-1] Backend API (codex) → tests → 92% ✓
[task-2] Password hashing (codex) → tests → 95% ✓
[task-3] Frontend form (fallback to codex; gemini not allowed) → tests → 91% ✓
```
## Directory Structure
```
dev-workflow/
├── README.md # This doc
├── commands/
│   └── dev.md # /dev workflow orchestrator definition
└── agents/
    └── dev-plan-generator.md # Dev plan document generator agent
```
Minimal structure, only three files.
## When to Use
**Good for**:
- Any feature size
- Fast iterations
- High test coverage needs
- Wanting concurrent speed-up
## Design Principles
1. **KISS**: keep it simple
2. **Disposable**: no persistent config
3. **Quality first**: enforce 90% coverage
4. **Concurrency first**: leverage codeagent
5. **No legacy baggage**: clean-slate design
---
**Philosophy**: zero tolerance for complexity—ship the smallest usable solution, like Linus would.

View File

@@ -1,124 +0,0 @@
---
name: dev-plan-generator
description: Use this agent when you need to generate a structured development plan document (`dev-plan.md`) that breaks down a feature into concrete implementation tasks with testing requirements and acceptance criteria. This agent should be called after requirements analysis and before actual implementation begins.\n\n<example>\nContext: User is orchestrating a feature development workflow and needs to create a development plan after codeagent analysis is complete.\nuser: "Create a development plan for the user authentication feature based on the requirements and analysis"\nassistant: "I'll use the Task tool to launch the dev-plan-generator agent to create the structured development plan document."\n<commentary>\nThe user needs a dev-plan.md document generated from requirements and analysis. Use the dev-plan-generator agent to create the structured task breakdown.\n</commentary>\n</example>\n\n<example>\nContext: Orchestrator has completed requirements gathering and codeagent analysis for a new feature and needs to generate the development plan before moving to implementation.\nuser: "We've completed the analysis for the payment integration feature. Generate the development plan."\nassistant: "I'm going to use the Task tool to launch the dev-plan-generator agent to create the dev-plan.md document with task breakdown and testing requirements."\n<commentary>\nThis is the step in the workflow where the development plan document needs to be generated. Use the dev-plan-generator agent to create the structured plan.\n</commentary>\n</example>\n\n<example>\nContext: User is working through a requirements-driven workflow and has just approved the technical specifications.\nuser: "The specs look good. Let's move forward with creating the implementation plan."\nassistant: "I'll use the Task tool to launch the dev-plan-generator agent to generate the dev-plan.md document with the task breakdown."\n<commentary>\nAfter spec approval, the next step is generating the development plan. Use the dev-plan-generator agent to create the structured document.\n</commentary>\n</example>
tools: Glob, Grep, Read, Edit, Write, TodoWrite
model: sonnet
color: green
---
You are a specialized Development Plan Document Generator. Your sole responsibility is to create structured, actionable development plan documents (`dev-plan.md`) that break down features into concrete implementation tasks.
## Your Role
You receive context from an orchestrator including:
- Feature requirements description
- codeagent analysis results (feature highlights, task decomposition, UI detection flag, and task typing hints)
- Feature name (in kebab-case format)
Your output is a single file: `./.claude/specs/{feature_name}/dev-plan.md`
## Document Structure You Must Follow
```markdown
# {Feature Name} - Development Plan
## Overview
[One-sentence description of core functionality]
## Task Breakdown
### Task 1: [Task Name]
- **ID**: task-1
- **type**: default|ui|quick-fix
- **Description**: [What needs to be done]
- **File Scope**: [Directories or files involved, e.g., src/auth/**, tests/auth/]
- **Dependencies**: [None or depends on task-x]
- **Test Command**: [e.g., pytest tests/auth --cov=src/auth --cov-report=term]
- **Test Focus**: [Scenarios to cover]
### Task 2: [Task Name]
...
(Tasks based on natural functional boundaries, typically 2-5)
## Acceptance Criteria
- [ ] Feature point 1
- [ ] Feature point 2
- [ ] All unit tests pass
- [ ] Code coverage ≥90%
## Technical Notes
- [Key technical decisions]
- [Constraints to be aware of]
```
## Generation Rules You Must Enforce
1. **Task Count**: Generate tasks based on natural functional boundaries (no artificial limits)
- Typical range: 2-5 tasks
- Quality over quantity: prefer fewer well-scoped tasks over excessive fragmentation
- Each task should be independently completable by one agent
2. **Task Requirements**: Each task MUST include:
- Clear ID (task-1, task-2, etc.)
- A single task type field: `type: default|ui|quick-fix`
- Specific description of what needs to be done
- Explicit file scope (directories or files affected)
- Dependency declaration ("None" or "depends on task-x")
- Complete test command with coverage parameters
- Testing focus points (scenarios to cover)
3. **Task Independence**: Design tasks to be as independent as possible to enable parallel execution
4. **Test Commands**: Must include coverage parameters (e.g., `--cov=module --cov-report=term` for pytest, `--coverage` for Jest via `npm test --`); see the sketch after this list
5. **Coverage Threshold**: Always require ≥90% code coverage in acceptance criteria
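For illustration, test commands of the following shape satisfy rule 4; the module paths and thresholds are placeholders to adapt to the project's actual stack:
```bash
# pytest with pytest-cov, failing the run if coverage drops below 90%
pytest tests/auth --cov=src/auth --cov-report=term --cov-fail-under=90

# Jest via npm, enforcing a global line-coverage threshold
npm test -- --coverage --coverageThreshold='{"global":{"lines":90}}'

# Go, emitting a coverage profile for later inspection
go test ./internal/auth/... -cover -coverprofile=coverage.out
```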
## Your Workflow
1. **Analyze Input**: Review the requirements description and codeagent analysis results (including `needs_ui` and any task typing hints)
2. **Identify Tasks**: Break down the feature into 2-5 logical, independent tasks
3. **Determine Dependencies**: Map out which tasks depend on others (minimize dependencies)
4. **Assign Task Type**: For each task, set exactly one `type`:
- `ui`: touches UI/style/component work (e.g., .css/.scss/.tsx/.jsx/.vue, tailwind, design tweaks)
- `quick-fix`: small, fast changes (config tweaks, small bug fix, minimal scope); do NOT use for UI work
- `default`: everything else
- Note: `/dev` Step 4 routes backend by `type` (default→codex, ui→gemini, quick-fix→claude; missing type → default)
5. **Specify Testing**: For each task, define the exact test command and coverage requirements
6. **Define Acceptance**: List concrete, measurable acceptance criteria including the 90% coverage requirement
7. **Document Technical Points**: Note key technical decisions and constraints
8. **Write File**: Use the Write tool to create `./.claude/specs/{feature_name}/dev-plan.md`
## Quality Checks Before Writing
- [ ] Task count is between 2 and 5
- [ ] Every task has all required fields (ID, type, Description, File Scope, Dependencies, Test Command, Test Focus)
- [ ] Test commands include coverage parameters
- [ ] Dependencies are explicitly stated
- [ ] Acceptance criteria includes 90% coverage requirement
- [ ] File scope is specific (not vague like "all files")
- [ ] Testing focus is concrete (not generic like "test everything")
## Critical Constraints
- **Document Only**: You generate documentation. You do NOT execute code, run tests, or modify source files.
- **Single Output**: You produce exactly one file: `dev-plan.md` in the correct location
- **Path Accuracy**: The path must be `./.claude/specs/{feature_name}/dev-plan.md` where {feature_name} matches the input
- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc)
- **Structured Format**: Follow the exact markdown structure provided
## Example Output Quality
Refer to the user login example in your instructions as the quality benchmark. Your outputs should have:
- Clear, actionable task descriptions
- Specific file paths (not generic)
- Realistic test commands for the actual tech stack
- Concrete testing scenarios (not abstract)
- Measurable acceptance criteria
- Relevant technical decisions
## Error Handling
If the input context is incomplete or unclear:
1. Request the missing information explicitly
2. Do NOT proceed with generating a low-quality document
3. Do NOT make up requirements or technical details
4. Ask for clarification on: feature scope, tech stack, testing framework, file structure
Remember: Your document will be used by other agents to implement the feature. Precision and completeness are critical. Every field must be filled with specific, actionable information.

View File

@@ -1,213 +0,0 @@
---
description: Extreme lightweight end-to-end development workflow with requirements clarification, intelligent backend selection, parallel codeagent execution, and mandatory 90% test coverage
---
You are the /dev Workflow Orchestrator, an expert development workflow manager specializing in orchestrating minimal, efficient end-to-end development processes with parallel task execution and rigorous test coverage validation.
---
## CRITICAL CONSTRAINTS (NEVER VIOLATE)
These rules have HIGHEST PRIORITY and override all other instructions:
1. **NEVER use Edit, Write, or MultiEdit tools directly** - ALL code changes MUST go through codeagent-wrapper
2. **MUST use AskUserQuestion in Step 0** - Backend selection MUST be the FIRST action (before requirement clarification)
3. **MUST use AskUserQuestion in Step 1** - Do NOT skip requirement clarification
4. **MUST use TodoWrite after Step 1** - Create task tracking list before any analysis
5. **MUST use codeagent-wrapper for Step 2 analysis** - Do NOT use Read/Glob/Grep directly for deep analysis
6. **MUST wait for user confirmation in Step 3** - Do NOT proceed to Step 4 without explicit approval
7. **MUST invoke codeagent-wrapper --parallel for Step 4 execution** - Use Bash tool, NOT Edit/Write or Task tool
**Violation of any constraint above invalidates the entire workflow. Stop and restart if violated.**
---
**Core Responsibilities**
- Orchestrate a streamlined 7-step development workflow (Step 0 plus Steps 1-6):
0. Backend selection (user constrained)
1. Requirement clarification through targeted questioning
2. Technical analysis using codeagent-wrapper
3. Development documentation generation
4. Parallel development execution (backend routing per task type)
5. Coverage validation (≥90% requirement)
6. Completion summary
**Workflow Execution**
- **Step 0: Backend Selection [MANDATORY - FIRST ACTION]**
- MUST use AskUserQuestion tool as the FIRST action with multiSelect enabled
- Ask which backends are allowed for this /dev run
- Options (user can select multiple):
- `codex` - Stable, high quality, best cost-performance (default for most tasks)
- `claude` - Fast, lightweight (for quick fixes and config changes)
- `gemini` - UI/UX specialist (for frontend styling and components)
- Store the selected backends as `allowed_backends` set for routing in Step 4
- Special rule: if user selects ONLY `codex`, then ALL subsequent tasks (including UI/quick-fix) MUST use `codex` (no exceptions)
- **Step 1: Requirement Clarification [MANDATORY - DO NOT SKIP]**
- MUST use AskUserQuestion tool
- Focus questions on functional boundaries, inputs/outputs, constraints, testing, and required unit-test coverage levels
- Iterate 2-3 rounds until clear; rely on judgment; keep questions concise
- After clarification complete: MUST use TodoWrite to create task tracking list with workflow steps
- **Step 2: codeagent-wrapper Deep Analysis (Plan Mode Style) [USE CODEAGENT-WRAPPER ONLY]**
MUST use Bash tool to invoke `codeagent-wrapper` for deep analysis. Do NOT use Read/Glob/Grep tools directly - delegate all exploration to codeagent-wrapper.
**How to invoke for analysis**:
```bash
# analysis_backend selection:
# - prefer codex if it is in allowed_backends
# - otherwise pick the first backend in allowed_backends
codeagent-wrapper --backend {analysis_backend} - <<'EOF'
Analyze the codebase for implementing [feature name].
Requirements:
- [requirement 1]
- [requirement 2]
Deliverables:
1. Explore codebase structure and existing patterns
2. Evaluate implementation options with trade-offs
3. Make architectural decisions
4. Break down into 2-5 parallelizable tasks with dependencies and file scope
5. Classify each task with a single `type`: `default` / `ui` / `quick-fix`
6. Determine if UI work is needed (check for .css/.tsx/.vue files)
Output the analysis following the structure below.
EOF
```
**When Deep Analysis is Needed** (any condition triggers):
- Multiple valid approaches exist (e.g., Redis vs in-memory vs file-based caching)
- Significant architectural decisions required (e.g., WebSockets vs SSE vs polling)
- Large-scale changes touching many files or systems
- Unclear scope requiring exploration first
**UI Detection Requirements**:
- During analysis, output whether the task needs UI work (yes/no) and the evidence
- UI criteria: presence of style assets (.css, .scss, styled-components, CSS modules, tailwindcss) OR frontend component files (.tsx, .jsx, .vue)
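A rough sketch of that check (illustrative only; the actual detection is performed by the analysis backend and should be restricted to the task's file scope):
```bash
# Hypothetical check: flag UI work when tracked style or frontend component files exist
if git ls-files | grep -Eq '\.(css|scss|tsx|jsx|vue)$'; then
  needs_ui=true
else
  needs_ui=false
fi
echo "needs_ui: $needs_ui"
```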
**What the AI backend does in Analysis Mode** (when invoked via codeagent-wrapper):
1. **Explore Codebase**: Use Glob, Grep, Read to understand structure, patterns, architecture
2. **Identify Existing Patterns**: Find how similar features are implemented, reuse conventions
3. **Evaluate Options**: When multiple approaches exist, list trade-offs (complexity, performance, security, maintainability)
4. **Make Architectural Decisions**: Choose patterns, APIs, data models with justification
5. **Design Task Breakdown**: Produce parallelizable tasks based on natural functional boundaries with file scope and dependencies
**Analysis Output Structure**:
```
## Context & Constraints
[Tech stack, existing patterns, constraints discovered]
## Codebase Exploration
[Key files, modules, patterns found via Glob/Grep/Read]
## Implementation Options (if multiple approaches)
| Option | Pros | Cons | Recommendation |
## Technical Decisions
[API design, data models, architecture choices made]
## Task Breakdown
[2-5 tasks with: ID, description, file scope, dependencies, test command, type(default|ui|quick-fix)]
## UI Determination
needs_ui: [true/false]
evidence: [files and reasoning tied to style + component criteria]
```
**Skip Deep Analysis When**:
- Simple, straightforward implementation with obvious approach
- Small changes confined to 1-2 files
- Clear requirements with single implementation path
- **Step 3: Generate Development Documentation**
- Invoke the dev-plan-generator agent
- When creating `dev-plan.md`, ensure every task has `type: default|ui|quick-fix`
- Append a dedicated UI task if Step 2 marked `needs_ui: true` but no UI task exists
- Output a brief summary of dev-plan.md:
- Number of tasks and their IDs
- Task type for each task
- File scope for each task
- Dependencies between tasks
- Test commands
- Use AskUserQuestion to confirm with user:
- Question: "Proceed with this development plan?" (state backend routing rules and any forced fallback due to allowed_backends)
- Options: "Confirm and execute" / "Need adjustments"
- If user chooses "Need adjustments", return to Step 1 or Step 2 based on feedback
- **Step 4: Parallel Development Execution [CODEAGENT-WRAPPER ONLY - NO DIRECT EDITS]**
- MUST use Bash tool to invoke `codeagent-wrapper --parallel` for ALL code changes
- NEVER use Edit, Write, MultiEdit, or Task tools to modify code directly
- Backend routing (must be deterministic and enforceable):
- Task field: `type: default|ui|quick-fix` (missing → treat as `default`)
- Preferred backend by type:
- `default` → `codex`
- `ui` → `gemini` (enforced when allowed)
- `quick-fix` → `claude`
- If the user selected only `codex`: all tasks MUST use `codex`
- Otherwise, if preferred backend is not in `allowed_backends`, fallback to the first available backend by priority: `codex` → `claude` → `gemini`
- Build ONE `--parallel` config that includes all tasks in `dev-plan.md` and submit it once via Bash tool:
```bash
# One shot submission - wrapper handles topology + concurrency
codeagent-wrapper --parallel <<'EOF'
---TASK---
id: [task-id-1]
backend: [routed-backend-from-type-and-allowed_backends]
workdir: .
dependencies: [optional, comma-separated ids]
---CONTENT---
Task: [task-id-1]
Reference: @.claude/specs/{feature_name}/dev-plan.md
Scope: [task file scope]
Test: [test command]
Deliverables: code + unit tests + coverage ≥90% + coverage summary
---TASK---
id: [task-id-2]
backend: [routed-backend-from-type-and-allowed_backends]
workdir: .
dependencies: [optional, comma-separated ids]
---CONTENT---
Task: [task-id-2]
Reference: @.claude/specs/{feature_name}/dev-plan.md
Scope: [task file scope]
Test: [test command]
Deliverables: code + unit tests + coverage ≥90% + coverage summary
EOF
```
- **Note**: Use `workdir: .` (current directory) for all tasks unless specific subdirectory is required
- Execute independent tasks concurrently; serialize conflicting ones; track coverage reports
- Backend is routed deterministically based on task `type`, no manual intervention needed
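A minimal sketch of that routing decision, for illustration only (the function and variable names are hypothetical; in practice the orchestrator applies these rules when building the `--parallel` config):
```bash
# Hypothetical sketch of the routing rule above; allowed_backends is assumed to be
# a space-separated list collected in Step 0, e.g. "codex claude"
route_backend() {
  local type="${1:-default}" allowed="$2" preferred
  case "$type" in
    ui)        preferred="gemini" ;;
    quick-fix) preferred="claude" ;;
    *)         preferred="codex" ;;
  esac
  # Only codex selected in Step 0: every task runs on codex
  if [ "$allowed" = "codex" ]; then echo "codex"; return; fi
  case " $allowed " in
    *" $preferred "*) echo "$preferred" ;;
    *)
      # Preferred backend not allowed: fall back by priority codex → claude → gemini
      for b in codex claude gemini; do
        case " $allowed " in *" $b "*) echo "$b"; return ;; esac
      done
      ;;
  esac
}

# Example: a ui task with only codex and claude allowed falls back to codex
route_backend ui "codex claude"   # prints: codex
```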
- **Step 5: Coverage Validation**
- Validate each task's coverage:
- All ≥90% → pass
- Any <90% → request more tests (max 2 rounds)
- **Step 6: Completion Summary**
- Provide completed task list, coverage per task, key file changes
**Error Handling**
- **codeagent-wrapper failure**: Retry once with same input; if still fails, log error and ask user for guidance
- **Insufficient coverage (<90%)**: Request more tests from the failed task (max 2 rounds); if still fails, report to user
- **Dependency conflicts**:
- Circular dependencies: codeagent-wrapper will detect and fail with error; revise task breakdown to remove cycles
- Missing dependencies: Ensure all task IDs referenced in `dependencies` field exist
- **Parallel execution timeout**: Individual tasks timeout after 2 hours (configurable via CODEX_TIMEOUT); failed tasks can be retried individually
- **Backend unavailable**: If a routed backend is unavailable, fallback to another backend in `allowed_backends` (priority: codex → claude → gemini); if none works, fail with a clear error message
**Quality Standards**
- Code coverage ≥90%
- Tasks based on natural functional boundaries (typically 2-5)
- Each task has exactly one `type: default|ui|quick-fix`
- Backend routed by `type`: `default`→codex, `ui`→gemini, `quick-fix`→claude (with allowed_backends fallback)
- Documentation must be minimal yet actionable
- No verbose implementations; only essential code
**Communication Style**
- Be direct and concise
- Report progress at each workflow step
- Highlight blockers immediately
- Provide actionable next steps when coverage fails
- Prioritize speed via parallelization while enforcing coverage validation

View File

@@ -1,197 +0,0 @@
# Claude Code Hooks Guide
Hooks are shell scripts or commands that execute in response to Claude Code events.
## Available Hook Types
### 1. UserPromptSubmit
Runs after user submits a prompt, before Claude processes it.
**Use cases:**
- Auto-activate skills based on keywords
- Add context injection
- Log user requests
### 2. PostToolUse
Runs after Claude uses a tool.
**Use cases:**
- Validate tool outputs
- Run additional checks (linting, formatting)
- Log tool usage
### 3. Stop
Runs when Claude Code session ends.
**Use cases:**
- Cleanup temporary files
- Generate session reports
- Commit changes automatically
## Configuration
Hooks are configured in `.claude/settings.json`:
```json
{
"hooks": {
"UserPromptSubmit": [
{
"hooks": [
{
"type": "command",
"command": "$CLAUDE_PROJECT_DIR/hooks/skill-activation-prompt.sh"
}
]
}
],
"PostToolUse": [
{
"hooks": [
{
"type": "command",
"command": "$CLAUDE_PROJECT_DIR/hooks/post-tool-check.sh"
}
]
}
]
}
}
```
## Creating Custom Hooks
### Example: Pre-Commit Hook
**File:** `hooks/pre-commit.sh`
```bash
#!/bin/bash
set -e
# Get staged files
STAGED_FILES=$(git diff --cached --name-only --diff-filter=ACM)
# Run tests on Go files
GO_FILES=$(echo "$STAGED_FILES" | grep '\.go$' || true)
if [ -n "$GO_FILES" ]; then
go test ./... -short || exit 1
fi
# Validate JSON files
JSON_FILES=$(echo "$STAGED_FILES" | grep '\.json$' || true)
if [ -n "$JSON_FILES" ]; then
for file in $JSON_FILES; do
jq empty "$file" || exit 1
done
fi
echo "✅ Pre-commit checks passed"
```
**Register in settings.json:**
```json
{
"hooks": {
"PostToolUse": [
{
"hooks": [
{
"type": "command",
"command": "$CLAUDE_PROJECT_DIR/hooks/pre-commit.sh"
}
]
}
]
}
}
```
### Example: Auto-Format Hook
**File:** `hooks/auto-format.sh`
```bash
#!/bin/bash
# Format Go files
find . -name "*.go" -exec gofmt -w {} \;
# Format JSON files (jq writes to stdout, so write to a temp file and replace the original)
find . -name "*.json" -print0 | while IFS= read -r -d '' file; do
    jq --indent 2 . "$file" > "$file.tmp" && mv "$file.tmp" "$file"
done
echo "✅ Files formatted"
```
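### Example: Session Cleanup Hook
A sketch of a `Stop` hook that removes temporary artifacts when a session ends; the file name and patterns below are illustrative.
**File:** `hooks/session-cleanup.sh`
```bash
#!/bin/bash
# Clean up temporary files under the project root when the session ends
# (adjust the patterns to your project)
set -e
find "$CLAUDE_PROJECT_DIR" -name "*.tmp" -type f -delete
echo "✅ Session cleanup done"
```
Register it under the `Stop` key in `.claude/settings.json`, using the same structure as the configurations above.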
## Environment Variables
Hooks have access to:
- `$CLAUDE_PROJECT_DIR` - Project root directory
- `$PWD` - Current working directory
- All shell environment variables
## Best Practices
1. **Keep hooks fast** - Slow hooks block Claude Code
2. **Handle errors gracefully** - Return non-zero on failure
3. **Use absolute paths** - Reference `$CLAUDE_PROJECT_DIR`
4. **Make scripts executable** - `chmod +x hooks/script.sh`
5. **Test independently** - Run hooks manually first
6. **Document behavior** - Add comments explaining logic
## Debugging Hooks
Enable verbose logging:
```bash
# Add to your hook
set -x # Print commands
set -e # Exit on error
```
Test manually:
```bash
cd /path/to/project
./hooks/your-hook.sh
echo $? # Check exit code
```
## Built-in Hooks
This repository includes:
| Hook | File | Purpose |
|------|------|---------|
| Skill Activation | `skill-activation-prompt.sh` | Auto-suggest skills |
| Pre-commit | `pre-commit.sh` | Code quality checks |
## Disabling Hooks
Remove the hook configuration from `.claude/settings.json` or set an empty array:
```json
{
"hooks": {
"UserPromptSubmit": []
}
}
```
## Troubleshooting
**Hook not running?**
- Check `.claude/settings.json` syntax
- Verify script is executable: `ls -l hooks/`
- Check script path is correct
**Hook failing silently?**
- Add `set -e` to script
- Check exit codes: `echo $?`
- Add logging: `echo "debug" >> /tmp/hook.log`
## Further Reading
- [Claude Code Hooks Documentation](https://docs.anthropic.com/claude-code/hooks)
- [Bash Scripting Guide](https://www.gnu.org/software/bash/manual/)

View File

@@ -1,348 +0,0 @@
# Plugin System Guide
> Native Claude Code plugin support for modular workflow installation
## 🎯 Overview
This repository provides 4 ready-to-use Claude Code plugins that can be installed individually or as a complete suite.
## 📦 Available Plugins
### 1. bmad-agile-workflow
**Complete BMAD methodology with 6 specialized agents**
**Commands**:
- `/bmad-pilot` - Full agile workflow orchestration
**Agents**:
- `bmad-po` - Product Owner (Sarah)
- `bmad-architect` - System Architect (Winston)
- `bmad-sm` - Scrum Master (Mike)
- `bmad-dev` - Developer (Alex)
- `bmad-review` - Code Reviewer
- `bmad-qa` - QA Engineer (Emma)
- `bmad-orchestrator` - Main orchestrator
**Use for**: Enterprise projects, complex features, full agile process
### 2. requirements-driven-workflow
**Streamlined requirements-to-code workflow**
**Commands**:
- `/requirements-pilot` - Requirements-driven development flow
**Agents**:
- `requirements-generate` - Requirements generation
- `requirements-code` - Code implementation
- `requirements-review` - Code review
- `requirements-testing` - Testing strategy
**Use for**: Quick prototyping, simple features, rapid development
### 3. development-essentials
**Core development slash commands**
**Commands**:
- `/code` - Direct implementation
- `/debug` - Systematic debugging
- `/test` - Testing strategy
- `/optimize` - Performance tuning
- `/bugfix` - Bug resolution
- `/refactor` - Code improvement
- `/review` - Code validation
- `/ask` - Technical consultation
- `/docs` - Documentation
- `/think` - Advanced analysis
**Agents**:
- `code` - Code implementation
- `bugfix` - Bug fixing
- `debug` - Debugging
- `develop` - General development
**Use for**: Daily coding tasks, quick implementations
### 4. advanced-ai-agents
**GPT-5 deep reasoning integration**
**Commands**: None (agent-only)
**Agents**:
- `gpt5` - Deep reasoning and analysis
**Use for**: Complex architectural decisions, strategic planning
## 🚀 Installation Methods
### Method 1: Plugin Commands (Recommended)
```bash
# List all available plugins
/plugin list
# Get detailed information about a plugin
/plugin info bmad-agile-workflow
# Install a specific plugin
/plugin install bmad-agile-workflow
# Install all plugins
/plugin install bmad-agile-workflow
/plugin install requirements-driven-workflow
/plugin install development-essentials
/plugin install advanced-ai-agents
# Remove an installed plugin
/plugin remove development-essentials
```
### Method 2: Repository Reference
```bash
# Install from GitHub repository
/plugin marketplace add cexll/myclaude
```
This will present all available plugins from the repository.
### Method 3: Make Commands
For traditional installation or selective deployment:
```bash
# Install everything
make install
# Deploy specific workflows
make deploy-bmad # BMAD workflow only
make deploy-requirements # Requirements workflow only
make deploy-commands # All slash commands
make deploy-agents # All agents
# Deploy everything
make deploy-all
# View all options
make help
```
### Method 4: Manual Installation
Copy files to Claude Code configuration directories:
**Commands**:
```bash
cp bmad-agile-workflow/commands/*.md ~/.config/claude/commands/
cp requirements-driven-workflow/commands/*.md ~/.config/claude/commands/
cp development-essentials/commands/*.md ~/.config/claude/commands/
```
**Agents**:
```bash
cp bmad-agile-workflow/agents/*.md ~/.config/claude/agents/
cp requirements-driven-workflow/agents/*.md ~/.config/claude/agents/
cp development-essentials/agents/*.md ~/.config/claude/agents/
cp advanced-ai-agents/agents/*.md ~/.config/claude/agents/
```
**Output Styles** (optional):
```bash
cp output-styles/*.md ~/.config/claude/output-styles/
```
## 📋 Plugin Configuration
Plugins are defined in `.claude-plugin/marketplace.json` following the Claude Code plugin specification.
### Plugin Metadata Structure
```json
{
"name": "plugin-name",
"displayName": "Human Readable Name",
"description": "Plugin description",
"version": "1.0.0",
"author": "Author Name",
"category": "workflow|development|analysis",
"keywords": ["keyword1", "keyword2"],
"commands": ["command1", "command2"],
"agents": ["agent1", "agent2"]
}
```
## 🔧 Plugin Management
### Check Installed Plugins
```bash
/plugin list
```
Shows all installed plugins with their status.
### Plugin Information
```bash
/plugin info <plugin-name>
```
Displays detailed information:
- Description
- Version
- Commands provided
- Agents included
- Author and keywords
### Update Plugins
Plugins are updated when you pull the latest repository changes:
```bash
git pull origin main
make install
```
### Uninstall Plugins
```bash
/plugin remove <plugin-name>
```
Or manually remove files:
```bash
# Remove commands
rm ~/.config/claude/commands/<command-name>.md
# Remove agents
rm ~/.config/claude/agents/<agent-name>.md
```
## 🎯 Plugin Selection Guide
### Install Everything (Recommended for New Users)
```bash
make install
```
Provides complete functionality with all workflows and commands.
### Selective Installation
**For Agile Teams**:
```bash
/plugin install bmad-agile-workflow
```
**For Rapid Development**:
```bash
/plugin install requirements-driven-workflow
/plugin install development-essentials
```
**For Individual Developers**:
```bash
/plugin install development-essentials
/plugin install advanced-ai-agents
```
**For Code Quality Focus**:
```bash
/plugin install development-essentials # Includes /review
/plugin install bmad-agile-workflow # Includes bmad-review
```
## 📁 Directory Structure
```
myclaude/
├── .claude-plugin/
│ └── marketplace.json # Plugin registry
├── bmad-agile-workflow/
│ ├── commands/
│ │ └── bmad-pilot.md
│ └── agents/
│ ├── bmad-po.md
│ ├── bmad-architect.md
│ ├── bmad-sm.md
│ ├── bmad-dev.md
│ ├── bmad-review.md
│ ├── bmad-qa.md
│ └── bmad-orchestrator.md
├── requirements-driven-workflow/
│ ├── commands/
│ │ └── requirements-pilot.md
│ └── agents/
│ ├── requirements-generate.md
│ ├── requirements-code.md
│ ├── requirements-review.md
│ └── requirements-testing.md
├── development-essentials/
│ ├── commands/
│ │ ├── code.md
│ │ ├── debug.md
│ │ ├── test.md
│ │ └── ... (more commands)
│ └── agents/
│ ├── code.md
│ ├── bugfix.md
│ ├── debug.md
│ └── develop.md
├── advanced-ai-agents/
│ └── agents/
│ └── gpt5.md
└── output-styles/
└── bmad-phase-context.md
```
## 🔄 Plugin Dependencies
**No Dependencies**: All plugins work independently
**Complementary Combinations**:
- BMAD + Advanced Agents (enhanced reviews)
- Requirements + Development Essentials (complete toolkit)
- All four plugins (full suite)
## 🛠️ Makefile Reference
```bash
# Installation
make install # Install all plugins
make deploy-all # Deploy all configurations
# Selective Deployment
make deploy-bmad # BMAD workflow only
make deploy-requirements # Requirements workflow only
make deploy-commands # All slash commands only
make deploy-agents # All agents only
# Testing
make test-bmad # Test BMAD workflow
make test-requirements # Test Requirements workflow
# Cleanup
make clean # Remove generated artifacts
make help # Show all available commands
```
## 📚 Related Documentation
- **[BMAD Workflow](BMAD-WORKFLOW.md)** - Complete BMAD guide
- **[Requirements Workflow](REQUIREMENTS-WORKFLOW.md)** - Lightweight workflow guide
- **[Development Commands](DEVELOPMENT-COMMANDS.md)** - Command reference
- **[Quick Start Guide](QUICK-START.md)** - Get started quickly
## 🔗 External Resources
- **[Claude Code Plugin Docs](https://docs.claude.com/en/docs/claude-code/plugins)** - Official plugin documentation
- **[Claude Code CLI](https://claude.ai/code)** - Claude Code interface
---
**Modular Installation** - Install only what you need, when you need it.

View File

@@ -1,326 +0,0 @@
# Quick Start Guide
> Get started with Claude Code Multi-Agent Workflow System in 5 minutes
## 🚀 Installation (2 minutes)
### Option 1: Plugin System (Fastest)
```bash
# Install everything with one command
/plugin marketplace add cexll/myclaude
```
### Option 2: Make Install
```bash
git clone https://github.com/cexll/myclaude.git
cd myclaude
make install
```
### Option 3: Selective Install
```bash
# Install only what you need
/plugin install bmad-agile-workflow # Full agile workflow
/plugin install development-essentials # Daily coding commands
```
## 🎯 Your First Workflow (3 minutes)
### Try BMAD Workflow
Complete agile development automation:
```bash
/bmad-pilot "Build a simple todo list API with CRUD operations"
```
**What happens**:
1. **Product Owner** generates requirements (PRD)
2. **Architect** designs system architecture
3. **Scrum Master** creates sprint plan
4. **Developer** implements code
5. **Reviewer** performs code review
6. **QA** runs tests
All documents saved to `.claude/specs/todo-list-api/`
### Try Requirements Workflow
Fast prototyping:
```bash
/requirements-pilot "Add user authentication to existing API"
```
**What happens**:
1. Generate functional requirements
2. Implement code
3. Review implementation
4. Create tests
### Try Direct Commands
Quick coding without workflow:
```bash
# Implement a feature
/code "Add input validation for email fields"
# Debug an issue
/debug "API returns 500 on missing parameters"
# Add tests
/test "Create unit tests for validation logic"
```
## 📋 Common Use Cases
### 1. New Feature Development
**Complex Feature** (use BMAD):
```bash
/bmad-pilot "User authentication system with OAuth2, MFA, and role-based access control"
```
**Simple Feature** (use Requirements):
```bash
/requirements-pilot "Add pagination to user list endpoint"
```
**Tiny Feature** (use direct command):
```bash
/code "Add created_at timestamp to user model"
```
### 2. Bug Fixing
**Complex Bug** (use debug):
```bash
/debug "Memory leak in background job processor"
```
**Simple Bug** (use bugfix):
```bash
/bugfix "Login button not working on mobile Safari"
```
### 3. Code Quality
**Full Review**:
```bash
/review "Review authentication module for security issues"
```
**Refactoring**:
```bash
/refactor "Simplify user validation logic and remove duplication"
```
**Optimization**:
```bash
/optimize "Reduce database queries in dashboard API"
```
## 🎨 Workflow Selection Guide
```
┌─────────────────────────────────────────────────────────┐
│ Choose Your Workflow │
└─────────────────────────────────────────────────────────┘
Complex Business Feature + Architecture Needed
🏢 Use BMAD Workflow
/bmad-pilot "description"
• 6 specialized agents
• Quality gates (PRD ≥90, Design ≥90)
• Complete documentation
• Sprint planning included
────────────────────────────────────────────────────────
Clear Requirements + Fast Iteration Needed
⚡ Use Requirements Workflow
/requirements-pilot "description"
• 4 phases: Requirements → Code → Review → Test
• Quality gate (Requirements ≥90)
• Minimal documentation
• Direct to implementation
────────────────────────────────────────────────────────
Well-Defined Task + No Workflow Overhead
🔧 Use Direct Commands
/code | /debug | /test | /optimize
• Single-purpose commands
• Immediate execution
• No documentation overhead
• Perfect for daily tasks
```
## 💡 Tips for Success
### 1. Be Specific
**❌ Bad**:
```bash
/bmad-pilot "Build an app"
```
**✅ Good**:
```bash
/bmad-pilot "Build a task management API with user authentication, task CRUD,
task assignment, and real-time notifications via WebSocket"
```
### 2. Provide Context
Include relevant technical details:
```bash
/code "Add Redis caching to user profile endpoint, cache TTL 5 minutes,
invalidate on profile update"
```
### 3. Engage with Agents
During BMAD workflow, provide feedback at quality gates:
```
PO: "Here's the PRD (Score: 85/100)"
You: "Add mobile app support and offline mode requirements"
PO: "Updated PRD (Score: 94/100) ✅"
```
### 4. Review Generated Artifacts
Check documents before confirming:
- `.claude/specs/{feature}/01-product-requirements.md`
- `.claude/specs/{feature}/02-system-architecture.md`
- `.claude/specs/{feature}/03-sprint-plan.md`
### 5. Chain Commands for Complex Tasks
Break down complex work:
```bash
/ask "Best approach for implementing real-time chat"
/bmad-pilot "Real-time chat system with message history and typing indicators"
/test "Add integration tests for chat message delivery"
/docs "Document chat API endpoints and WebSocket events"
```
## 🎓 Learning Path
**Day 1**: Try direct commands
```bash
/code "simple task"
/test "add some tests"
/review "check my code"
```
**Day 2**: Try Requirements workflow
```bash
/requirements-pilot "small feature"
```
**Week 2**: Try BMAD workflow
```bash
/bmad-pilot "larger feature"
```
**Week 3**: Combine workflows
```bash
# Use BMAD for planning
/bmad-pilot "new module" --direct-dev
# Use Requirements for sprint tasks
/requirements-pilot "individual task from sprint"
# Use commands for daily work
/code "quick fix"
/test "add test"
```
## 📚 Next Steps
### Explore Documentation
- **[BMAD Workflow Guide](BMAD-WORKFLOW.md)** - Deep dive into full agile workflow
- **[Requirements Workflow Guide](REQUIREMENTS-WORKFLOW.md)** - Learn lightweight development
- **[Development Commands Reference](DEVELOPMENT-COMMANDS.md)** - All command details
- **[Plugin System Guide](PLUGIN-SYSTEM.md)** - Plugin management
### Try Advanced Features
**BMAD Options**:
```bash
# Skip testing for prototype
/bmad-pilot "prototype" --skip-tests
# Skip sprint planning for quick dev
/bmad-pilot "feature" --direct-dev
# Skip repo scan (if context exists)
/bmad-pilot "feature" --skip-scan
```
**Individual Agents**:
```bash
# Just requirements
/bmad-po "feature requirements"
# Just architecture
/bmad-architect "system design"
# Just orchestration
/bmad-orchestrator "complex project coordination"
```
### Check Quality
Run tests and validation:
```bash
make test-bmad # Test BMAD workflow
make test-requirements # Test Requirements workflow
```
## 🆘 Troubleshooting
**Commands not found**?
```bash
# Verify installation
/plugin list
# Reinstall if needed
make install
```
**Agents not working**?
```bash
# Check agent configuration
ls ~/.config/claude/agents/
# Redeploy agents
make deploy-agents
```
**Output styles missing**?
```bash
# Deploy output styles
cp output-styles/*.md ~/.config/claude/output-styles/
```
## 📞 Get Help
- **Issues**: [GitHub Issues](https://github.com/cexll/myclaude/issues)
- **Documentation**: [docs/](.)
- **Examples**: Check `.claude/specs/` after running workflows
- **Make Help**: Run `make help` for all commands
---
**You're ready!** Start with `/code "your first task"` and explore from there.

143
go.work.sum Normal file
View File

@@ -0,0 +1,143 @@
cloud.google.com/go v0.112.1 h1:uJSeirPke5UNZHIb4SxfZklVSiWWVqW4oXlETwZziwM=
cloud.google.com/go v0.112.1/go.mod h1:+Vbu+Y1UU+I1rjmzeMOb/8RfkKJK2Gyxi1X6jJCZLo4=
cloud.google.com/go/compute v1.24.0 h1:phWcR2eWzRJaL/kOiJwfFsPs4BaKq1j6vnpZrc1YlVg=
cloud.google.com/go/compute v1.24.0/go.mod h1:kw1/T+h/+tK2LJK0wiPPx1intgdAM3j/g3hFDlscY40=
cloud.google.com/go/compute/metadata v0.2.3 h1:mg4jlk7mCAj6xXp9UJ4fjI9VUI5rubuGBW5aJ7UnBMY=
cloud.google.com/go/compute/metadata v0.2.3/go.mod h1:VAV5nSsACxMJvgaAuX6Pk2AawlZn8kiOGuCv6gTkwuA=
cloud.google.com/go/firestore v1.15.0 h1:/k8ppuWOtNuDHt2tsRV42yI21uaGnKDEQnRFeBpbFF8=
cloud.google.com/go/firestore v1.15.0/go.mod h1:GWOxFXcv8GZUtYpWHw/w6IuYNux/BtmeVTMmjrm4yhk=
cloud.google.com/go/iam v1.1.5 h1:1jTsCu4bcsNsE4iiqNT5SHwrDRCfRmIaaaVFhRveTJI=
cloud.google.com/go/iam v1.1.5/go.mod h1:rB6P/Ic3mykPbFio+vo7403drjlgvoWfYpJhMXEbzv8=
cloud.google.com/go/longrunning v0.5.5 h1:GOE6pZFdSrTb4KAiKnXsJBtlE6mEyaW44oKyMILWnOg=
cloud.google.com/go/longrunning v0.5.5/go.mod h1:WV2LAxD8/rg5Z1cNW6FJ/ZpX4E4VnDnoTk0yawPBB7s=
cloud.google.com/go/storage v1.35.1 h1:B59ahL//eDfx2IIKFBeT5Atm9wnNmj3+8xG/W4WB//w=
cloud.google.com/go/storage v1.35.1/go.mod h1:M6M/3V/D3KpzMTJyPOR/HU6n2Si5QdaXYEsng2xgOs8=
github.com/armon/go-metrics v0.4.1 h1:hR91U9KYmb6bLBYLQjyM+3j+rcd/UhE+G78SFnF8gJA=
github.com/armon/go-metrics v0.4.1/go.mod h1:E6amYzXo6aW1tqzoZGT755KkbgrJsSdpwZ+3JqfkOG4=
github.com/coreos/go-semver v0.3.0 h1:wkHLiw0WNATZnSG7epLsujiMCgPAc9xhjJ4tgnAxmfM=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/cpuguy83/go-md2man/v2 v2.0.4 h1:wfIWP927BUkWJb2NmU/kNDYIBTh/ziUX91+lVfRxZq4=
github.com/fatih/color v1.14.1 h1:qfhVLaG5s+nCROl1zJsZRxFeYrHLqWroPOQ8BWiNb4w=
github.com/fatih/color v1.14.1/go.mod h1:2oHN61fhTpgcxD3TSWCgKDiH1+x4OiDVVGH8WlgGZGg=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/go-logr/logr v1.4.1 h1:pKouT5E8xu9zeFC39JXRDukb6JFQPXM5p5I91188VAQ=
github.com/go-logr/logr v1.4.1/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/godbus/dbus/v5 v5.0.4 h1:9349emZab16e7zQvpmsbtjc18ykshndd8y2PG3sgJbA=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/s2a-go v0.1.7 h1:60BLSyTrOV4/haCDW4zb1guZItoSq8foHCXrAnjBo/o=
github.com/google/s2a-go v0.1.7/go.mod h1:50CgR4k1jNlWBu4UfS4AcfhVe1r6pdZPygJ3R8F0Qdw=
github.com/google/uuid v1.4.0 h1:MtMxsa51/r9yyhkyLsVeVt0B+BGQZzpQiTQ4eHZ8bc4=
github.com/google/uuid v1.4.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/enterprise-certificate-proxy v0.3.2 h1:Vie5ybvEvT75RniqhfFxPRy3Bf7vr3h0cechB90XaQs=
github.com/googleapis/enterprise-certificate-proxy v0.3.2/go.mod h1:VLSiSSBs/ksPL8kq3OBOQ6WRI2QnaFynd1DCjZ62+V0=
github.com/googleapis/gax-go/v2 v2.12.3 h1:5/zPPDvw8Q1SuXjrqrZslrqT7dL/uJT2CQii/cLCKqA=
github.com/googleapis/gax-go/v2 v2.12.3/go.mod h1:AKloxT6GtNbaLm8QTNSidHUVsHYcBHwWRvkNFJUQcS4=
github.com/googleapis/google-cloud-go-testing v0.0.0-20210719221736-1c9a4c676720 h1:zC34cGQu69FG7qzJ3WiKW244WfhDC3xxYMeNOX2gtUQ=
github.com/googleapis/google-cloud-go-testing v0.0.0-20210719221736-1c9a4c676720/go.mod h1:dvDLG8qkwmyD9a/MJJN3XJcT3xFxOKAvTZGvuZmac9g=
github.com/hashicorp/consul/api v1.28.2 h1:mXfkRHrpHN4YY3RqL09nXU1eHKLNiuAN4kHvDQ16k/8=
github.com/hashicorp/consul/api v1.28.2/go.mod h1:KyzqzgMEya+IZPcD65YFoOVAgPpbfERu4I/tzG6/ueE=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ=
github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
github.com/hashicorp/go-hclog v1.5.0 h1:bI2ocEMgcVlz55Oj1xZNBsVi900c7II+fWDyV9o+13c=
github.com/hashicorp/go-hclog v1.5.0/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M=
github.com/hashicorp/go-immutable-radix v1.3.1 h1:DKHmCUm2hRBK510BaiZlwvpD40f8bJFeZnpfm2KLowc=
github.com/hashicorp/go-immutable-radix v1.3.1/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/go-rootcerts v1.0.2 h1:jzhAVGtqPKbwpyCPELlgNWhE1znq+qwJtW5Oi2viEzc=
github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
github.com/hashicorp/golang-lru v0.5.4 h1:YDjusn29QI/Das2iO9M0BHnIbxPeyuCHsjMW+lJfyTc=
github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
github.com/hashicorp/serf v0.10.1 h1:Z1H2J60yRKvfDYAOZLd2MU0ND4AH/WDz7xYHDWQsIPY=
github.com/hashicorp/serf v0.10.1/go.mod h1:yL2t6BqATOLGc5HF7qbFkTfXoPIY0WZdWHfEvMqbG+4=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/klauspost/compress v1.17.2 h1:RlWWUY/Dr4fL8qk9YG7DTZ7PDgME2V4csBXA8L/ixi4=
github.com/klauspost/compress v1.17.2/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
github.com/kr/fs v0.1.0 h1:Jskdu9ieNAYnjxsi0LbQp1ulIKZV1LAFgK1tWhpZgl8=
github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/nats-io/nats.go v1.34.0 h1:fnxnPCNiwIG5w08rlMcEKTUw4AV/nKyGCOJE8TdhSPk=
github.com/nats-io/nats.go v1.34.0/go.mod h1:Ubdu4Nh9exXdSz0RVWRFBbRfrbSxOYd26oF0wkWclB8=
github.com/nats-io/nkeys v0.4.7 h1:RwNJbbIdYCoClSDNY7QVKZlyb/wfT6ugvFCiKy6vDvI=
github.com/nats-io/nkeys v0.4.7/go.mod h1:kqXRgRDPlGy7nGaEDMuYzmiJCIAAWDK0IMBtDmGD0nc=
github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/sftp v1.13.6 h1:JFZT4XbOU7l77xGSpOdW+pwIMqP044IyjXX6FGyEKFo=
github.com/pkg/sftp v1.13.6/go.mod h1:tz1ryNURKu77RL+GuCzmoJYxQczL3wLNNpPWagdg4Qk=
github.com/rs/xid v1.6.0 h1:fV591PaemRlL6JfRxGDEPl69wICngIQ3shQtzfy2gxU=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/sagikazarmark/crypt v0.19.0 h1:WMyLTjHBo64UvNcWqpzY3pbZTYgnemZU8FBZigKc42E=
github.com/sagikazarmark/crypt v0.19.0/go.mod h1:c6vimRziqqERhtSe0MhIvzE1w54FrCHtrXb5NH/ja78=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
go.etcd.io/etcd/api/v3 v3.5.12 h1:W4sw5ZoU2Juc9gBWuLk5U6fHfNVyY1WC5g9uiXZio/c=
go.etcd.io/etcd/api/v3 v3.5.12/go.mod h1:Ot+o0SWSyT6uHhA56al1oCED0JImsRiU9Dc26+C2a+4=
go.etcd.io/etcd/client/pkg/v3 v3.5.12 h1:EYDL6pWwyOsylrQyLp2w+HkQ46ATiOvoEdMarindU2A=
go.etcd.io/etcd/client/pkg/v3 v3.5.12/go.mod h1:seTzl2d9APP8R5Y2hFL3NVlD6qC/dOT+3kvrqPyTas4=
go.etcd.io/etcd/client/v2 v2.305.12 h1:0m4ovXYo1CHaA/Mp3X/Fak5sRNIWf01wk/X1/G3sGKI=
go.etcd.io/etcd/client/v2 v2.305.12/go.mod h1:aQ/yhsxMu+Oht1FOupSr60oBvcS9cKXHrzBpDsPTf9E=
go.etcd.io/etcd/client/v3 v3.5.12 h1:v5lCPXn1pf1Uu3M4laUE2hp/geOTc5uPcYYsNe1lDxg=
go.etcd.io/etcd/client/v3 v3.5.12/go.mod h1:tSbBCakoWmmddL+BKVAJHa9km+O/E+bumDe9mSbPiqw=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0 h1:4Pp6oUg3+e/6M4C0A/3kJ2VYa++dsWVTtGgLVj5xtHg=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0/go.mod h1:Mjt1i1INqiaoZOMGR1RIUJN+i3ChKoFRqzrRQhlkbs0=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 h1:jq9TW8u3so/bN+JPT166wjOI6/vQPF6Xe7nMNIltagk=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0/go.mod h1:p8pYQP+m5XfbZm9fxtSKAbM6oIllS7s2AfxrChvc7iw=
go.opentelemetry.io/otel v1.24.0 h1:0LAOdjNmQeSTzGBzduGe/rU4tZhMwL5rWgtp9Ku5Jfo=
go.opentelemetry.io/otel v1.24.0/go.mod h1:W7b9Ozg4nkF5tWI5zsXkaKKDjdVjpD4oAt9Qi/MArHo=
go.opentelemetry.io/otel/metric v1.24.0 h1:6EhoGWWK28x1fbpA4tYTOWBkPefTDQnb8WSGXlc88kI=
go.opentelemetry.io/otel/metric v1.24.0/go.mod h1:VYhLe1rFfxuTXLgj4CBiyz+9WYBA8pNGJgDcSFRKBco=
go.opentelemetry.io/otel/trace v1.24.0 h1:CsKnnL4dUAr/0llH9FKuc698G04IrpWV0MQA/Y1YELI=
go.opentelemetry.io/otel/trace v1.24.0/go.mod h1:HPc3Xr/cOApsBI154IU0OI0HJexz+aw5uPdbs3UCjNU=
go.uber.org/zap v1.21.0 h1:WefMeulhovoZ2sYXz7st6K0sLj7bBhpiFaud4r4zST8=
go.uber.org/zap v1.21.0/go.mod h1:wjWOCqI0f2ZZrJF/UufIOkiC8ii6tm1iqIsLo76RfJw=
golang.org/x/crypto v0.21.0 h1:X31++rzVUdKhX5sWmSOFZxx8UW/ldWx55cbf08iNAMA=
golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs=
golang.org/x/mod v0.12.0 h1:rmsUpXtvNzj340zd98LZ4KntptpfRHwpFOHG188oHXc=
golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.23.0 h1:7EYJ93RZ9vYSZAIb2x3lnuvqO5zneoD6IvWjuhfxjTs=
golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
golang.org/x/oauth2 v0.18.0 h1:09qnuIAgzdx1XplqJvW6CQqMCtGZykZWcXzPMPUusvI=
golang.org/x/oauth2 v0.18.0/go.mod h1:Wf7knwG0MPoWIMMBgFlEaSUDaKskp0dCfrlJRJXbBi8=
golang.org/x/sync v0.6.0 h1:5BMeUDZ7vkXGfEr1x9B4bRcTH4lpkTkpdh0T/J+qjbQ=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.13.0 h1:Iey4qkscZuv0VvIt8E0neZjtPVQFSc870HQ448QgEmQ=
golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 h1:H2TDz8ibqkAF6YGhCdN3jS9O0/s90v0rJh3X/OLHEUk=
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
google.golang.org/api v0.171.0 h1:w174hnBPqut76FzW5Qaupt7zY8Kql6fiVjgys4f58sU=
google.golang.org/api v0.171.0/go.mod h1:Hnq5AHm4OTMt2BUVjael2CWZFD6vksJdWCWiUAmjC9o=
google.golang.org/appengine v1.6.8 h1:IhEN5q69dyKagZPYMSdIjS2HqprW324FRQZJcGqPAsM=
google.golang.org/appengine v1.6.8/go.mod h1:1jJ3jBArFh5pcgW8gCtRJnepW8FzD1V44FJffLiz/Ds=
google.golang.org/genproto v0.0.0-20240213162025-012b6fc9bca9 h1:9+tzLLstTlPTRyJTh+ah5wIMsBW5c4tQwGTN3thOW9Y=
google.golang.org/genproto v0.0.0-20240213162025-012b6fc9bca9/go.mod h1:mqHbVIp48Muh7Ywss/AD6I5kNVKZMmAa/QEW58Gxp2s=
google.golang.org/genproto/googleapis/api v0.0.0-20240311132316-a219d84964c2 h1:rIo7ocm2roD9DcFIX67Ym8icoGCKSARAiPljFhh5suQ=
google.golang.org/genproto/googleapis/api v0.0.0-20240311132316-a219d84964c2/go.mod h1:O1cOfN1Cy6QEYr7VxtjOyP5AdAuR0aJ/MYZaaof623Y=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240314234333-6e1732d8331c h1:lfpJ/2rWPa/kJgxyyXM8PrNnfCzcmxJ265mADgwmvLI=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240314234333-6e1732d8331c/go.mod h1:WtryC6hu0hhx87FDGxWCDptyssuo68sk10vYjF+T9fY=
google.golang.org/grpc v1.62.1 h1:B4n+nfKzOICUXMgyrNd19h/I9oH0L1pizfk1d4zSgTk=
google.golang.org/grpc v1.62.1/go.mod h1:IWTG0VlJLCh1SkC58F7np9ka9mx/WNkjl4PGJaiq+QE=
google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=

View File

@@ -1,12 +0,0 @@
{
"UserPromptSubmit": [
{
"hooks": [
{
"type": "command",
"command": "$CLAUDE_PROJECT_DIR/hooks/skill-activation-prompt.sh"
}
]
}
]
}

View File

@@ -1,82 +0,0 @@
#!/bin/bash
# Example pre-commit hook
# This hook runs before git commit to validate code quality
set -e
# Get staged files
STAGED_FILES="$(git diff --cached --name-only --diff-filter=ACM)"
if [ -z "$STAGED_FILES" ]; then
echo "No files to validate"
exit 0
fi
echo "Running pre-commit checks..."
# Check Go files
GO_FILES="$(printf '%s\n' "$STAGED_FILES" | grep '\.go$' || true)"
if [ -n "$GO_FILES" ]; then
echo "Checking Go files..."
if ! command -v gofmt &> /dev/null; then
echo "❌ gofmt not found. Please install Go (gofmt is included with the Go toolchain)."
exit 1
fi
# Format check
GO_FILE_ARGS=()
while IFS= read -r file; do
if [ -n "$file" ]; then
GO_FILE_ARGS+=("$file")
fi
done <<< "$GO_FILES"
if [ "${#GO_FILE_ARGS[@]}" -gt 0 ]; then
UNFORMATTED="$(gofmt -l "${GO_FILE_ARGS[@]}")"
if [ -n "$UNFORMATTED" ]; then
echo "❌ The following files need formatting:"
echo "$UNFORMATTED"
echo "Run: gofmt -w <file>"
exit 1
fi
fi
# Run tests
if command -v go &> /dev/null; then
echo "Running go tests..."
go test ./... -short || {
echo "❌ Tests failed"
exit 1
}
fi
fi
# Check JSON files
JSON_FILES="$(printf '%s\n' "$STAGED_FILES" | grep '\.json$' || true)"
if [ -n "$JSON_FILES" ]; then
echo "Validating JSON files..."
if ! command -v jq &> /dev/null; then
echo "❌ jq not found. Please install jq to validate JSON files."
exit 1
fi
while IFS= read -r file; do
if [ -z "$file" ]; then
continue
fi
if ! jq empty "$file" 2>/dev/null; then
echo "❌ Invalid JSON: $file"
exit 1
fi
done <<< "$JSON_FILES"
fi
# Check Markdown files
MD_FILES="$(printf '%s\n' "$STAGED_FILES" | grep '\.md$' || true)"
if [ -n "$MD_FILES" ]; then
echo "Checking markdown files..."
# Add markdown linting if needed
fi
echo "✅ All pre-commit checks passed"
exit 0

View File

@@ -1,85 +0,0 @@
#!/usr/bin/env node
const fs = require("fs");
const path = require("path");
function readInput() {
const raw = fs.readFileSync(0, "utf8").trim();
if (!raw) return {};
try {
return JSON.parse(raw);
} catch (_err) {
return {};
}
}
function extractPrompt(payload) {
return (
payload.prompt ||
payload.text ||
payload.userPrompt ||
(payload.data && payload.data.prompt) ||
""
).toString();
}
function loadRules() {
const rulesPath = path.resolve(__dirname, "../skills/skill-rules.json");
try {
const file = fs.readFileSync(rulesPath, "utf8");
return JSON.parse(file);
} catch (_err) {
return { skills: {} };
}
}
function matchSkill(prompt, rule, skillName) {
const triggers = (rule && rule.promptTriggers) || {};
const keywords = [...(triggers.keywords || []), skillName].filter(Boolean);
const patterns = triggers.intentPatterns || [];
const promptLower = prompt.toLowerCase();
const keyword = keywords.find((k) => promptLower.includes(k.toLowerCase()));
if (keyword) {
return `Matched keyword "${keyword}"`;
}
for (const pattern of patterns) {
try {
if (new RegExp(pattern, "i").test(prompt)) {
return `Matched pattern /${pattern}/`;
}
} catch (_err) {
continue;
}
}
return null;
}
function main() {
const payload = readInput();
const prompt = extractPrompt(payload);
if (!prompt.trim()) {
console.log(JSON.stringify({ suggestedSkills: [] }, null, 2));
return;
}
const rules = loadRules();
const suggestions = [];
for (const [name, rule] of Object.entries(rules.skills || {})) {
const matchReason = matchSkill(prompt, rule, name);
if (matchReason) {
suggestions.push({
skill: name,
enforcement: rule.enforcement || "suggest",
priority: rule.priority || "normal",
reason: matchReason
});
}
}
console.log(JSON.stringify({ suggestedSkills: suggestions }, null, 2));
}
main();

View File

@@ -1,12 +0,0 @@
#!/usr/bin/env bash
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SCRIPT="$SCRIPT_DIR/skill-activation-prompt.js"
if command -v node >/dev/null 2>&1; then
node "$SCRIPT" "$@" || true
else
echo '{"suggestedSkills":[],"meta":{"warning":"node not found"}}'
fi
exit 0

View File

@@ -1,77 +0,0 @@
#!/usr/bin/env bash
# Simple test runner for skill-activation-prompt hook.
# Each case feeds JSON to the hook and validates suggested skills.
set -uo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
HOOK_SCRIPT="$SCRIPT_DIR/skill-activation-prompt.sh"
parse_skills() {
node -e 'const data = JSON.parse(require("fs").readFileSync(0, "utf8")); const skills = (data.suggestedSkills || []).map(s => s.skill); console.log(skills.join(" "));'
}
run_case() {
local name="$1"
local input="$2"
shift 2
local expected=("$@")
local output skills
output="$("$HOOK_SCRIPT" <<<"$input")"
skills="$(printf "%s" "$output" | parse_skills)"
local pass=0
if [[ ${#expected[@]} -eq 1 && ${expected[0]} == "none" ]]; then
[[ -z "$skills" ]] && pass=1
else
pass=1
for need in "${expected[@]}"; do
if [[ " $skills " != *" $need "* ]]; then
pass=0
break
fi
done
fi
if [[ $pass -eq 1 ]]; then
echo "PASS: $name"
else
echo "FAIL: $name"
echo " input: $input"
echo " expected skills: ${expected[*]}"
echo " actual skills: ${skills:-<empty>}"
return 1
fi
}
main() {
local status=0
run_case "keyword 'issue' => gh-workflow" \
'{"prompt":"Please open an issue for this bug"}' \
"gh-workflow" || status=1
run_case "keyword 'codex' => codex" \
'{"prompt":"codex please handle this change"}' \
"codex" || status=1
run_case "no matching keywords => none" \
'{"prompt":"Just saying hello"}' \
"none" || status=1
run_case "multiple keywords => codex & gh-workflow" \
'{"prompt":"codex refactor then open an issue"}' \
"codex" "gh-workflow" || status=1
if [[ $status -eq 0 ]]; then
echo "All tests passed."
else
echo "Some tests failed."
fi
exit "$status"
}
main "$@"

View File

@@ -69,6 +69,11 @@ def parse_args(argv: Optional[Iterable[str]] = None) -> argparse.Namespace:
action="store_true",
help="Uninstall specified modules",
)
parser.add_argument(
"--update",
action="store_true",
help="Update already installed modules",
)
parser.add_argument(
"--force",
action="store_true",
@@ -121,8 +126,11 @@ def save_settings(ctx: Dict[str, Any], settings: Dict[str, Any]) -> None:
_save_json(settings_path, settings)
def find_module_hooks(module_name: str, cfg: Dict[str, Any], ctx: Dict[str, Any]) -> Optional[Dict[str, Any]]:
"""Find hooks.json for a module if it exists."""
def find_module_hooks(module_name: str, cfg: Dict[str, Any], ctx: Dict[str, Any]) -> Optional[tuple]:
"""Find hooks.json for a module if it exists.
Returns tuple of (hooks_config, plugin_root_path) or None.
"""
# Check for hooks in operations (copy_dir targets)
for op in cfg.get("operations", []):
if op.get("type") == "copy_dir":
@@ -130,18 +138,19 @@ def find_module_hooks(module_name: str, cfg: Dict[str, Any], ctx: Dict[str, Any]
hooks_file = target_dir / "hooks" / "hooks.json"
if hooks_file.exists():
try:
return _load_json(hooks_file)
return (_load_json(hooks_file), str(target_dir))
except (ValueError, FileNotFoundError):
pass
# Also check source directory during install
for op in cfg.get("operations", []):
if op.get("type") == "copy_dir":
target_dir = ctx["install_dir"] / op["target"]
source_dir = ctx["config_dir"] / op["source"]
hooks_file = source_dir / "hooks" / "hooks.json"
if hooks_file.exists():
try:
return _load_json(hooks_file)
return (_load_json(hooks_file), str(target_dir))
except (ValueError, FileNotFoundError):
pass
@@ -153,7 +162,18 @@ def _create_hook_marker(module_name: str) -> str:
return f"__module:{module_name}__"
def merge_hooks_to_settings(module_name: str, hooks_config: Dict[str, Any], ctx: Dict[str, Any]) -> None:
def _replace_hook_variables(obj: Any, plugin_root: str) -> Any:
"""Recursively replace ${CLAUDE_PLUGIN_ROOT} in hook config."""
if isinstance(obj, str):
return obj.replace("${CLAUDE_PLUGIN_ROOT}", plugin_root)
elif isinstance(obj, dict):
return {k: _replace_hook_variables(v, plugin_root) for k, v in obj.items()}
elif isinstance(obj, list):
return [_replace_hook_variables(item, plugin_root) for item in obj]
return obj
def merge_hooks_to_settings(module_name: str, hooks_config: Dict[str, Any], ctx: Dict[str, Any], plugin_root: str = "") -> None:
"""Merge module hooks into settings.json."""
settings = load_settings(ctx)
settings.setdefault("hooks", {})
@@ -161,6 +181,10 @@ def merge_hooks_to_settings(module_name: str, hooks_config: Dict[str, Any], ctx:
module_hooks = hooks_config.get("hooks", {})
marker = _create_hook_marker(module_name)
# Replace ${CLAUDE_PLUGIN_ROOT} with actual path
if plugin_root:
module_hooks = _replace_hook_variables(module_hooks, plugin_root)
for hook_type, hook_entries in module_hooks.items():
settings["hooks"].setdefault(hook_type, [])
@@ -333,6 +357,19 @@ def check_module_installed(name: str, cfg: Dict[str, Any], ctx: Dict[str, Any])
target = (install_dir / op["target"]).expanduser().resolve()
if target.exists():
return True
elif op_type == "merge_dir":
src = (ctx["config_dir"] / op["source"]).expanduser().resolve()
if not src.exists() or not src.is_dir():
continue
for subdir in src.iterdir():
if not subdir.is_dir():
continue
for f in subdir.iterdir():
if not f.is_file():
continue
candidate = (install_dir / subdir.name / f.name).expanduser().resolve()
if candidate.exists():
return True
return False
@@ -707,10 +744,11 @@ def execute_module(name: str, cfg: Dict[str, Any], ctx: Dict[str, Any]) -> Dict[
raise
# Handle hooks: find and merge module hooks into settings.json
hooks_config = find_module_hooks(name, cfg, ctx)
if hooks_config:
hooks_result = find_module_hooks(name, cfg, ctx)
if hooks_result:
hooks_config, plugin_root = hooks_result
try:
merge_hooks_to_settings(name, hooks_config, ctx)
merge_hooks_to_settings(name, hooks_config, ctx, plugin_root)
result["operations"].append({"type": "merge_hooks", "status": "success"})
result["has_hooks"] = True
except Exception as exc:
@@ -1043,6 +1081,74 @@ def main(argv: Optional[Iterable[str]] = None) -> int:
print(f"\n✓ Uninstall complete")
return 0
# Handle --update
if getattr(args, "update", False):
try:
ensure_install_dir(ctx["install_dir"])
except Exception as exc:
print(f"Failed to prepare install dir: {exc}", file=sys.stderr)
return 1
installed_status = get_installed_modules(config, ctx)
if args.module:
selected = select_modules(config, args.module)
modules = {k: v for k, v in selected.items() if installed_status.get(k, False)}
else:
modules = {
k: v
for k, v in config.get("modules", {}).items()
if installed_status.get(k, False)
}
if not modules:
print("No installed modules to update.")
return 0
ctx["force"] = True
prepare_status_backup(ctx)
total = len(modules)
print(f"Updating {total} module(s) in {ctx['install_dir']}...")
results: List[Dict[str, Any]] = []
for idx, (name, cfg) in enumerate(modules.items(), 1):
print(f"[{idx}/{total}] Updating module: {name}...")
try:
results.append(execute_module(name, cfg, ctx))
print(f"{name} updated successfully")
except Exception as exc: # noqa: BLE001
print(f"{name} failed: {exc}", file=sys.stderr)
rollback(ctx)
if not args.force:
return 1
results.append(
{
"module": name,
"status": "failed",
"operations": [],
"installed_at": datetime.now().isoformat(),
}
)
break
current_status = load_installed_status(ctx)
for r in results:
if r.get("status") == "success":
current_status.setdefault("modules", {})[r["module"]] = r
current_status["updated_at"] = datetime.now().isoformat()
with Path(ctx["status_file"]).open("w", encoding="utf-8") as fh:
json.dump(current_status, fh, indent=2, ensure_ascii=False)
success = sum(1 for r in results if r.get("status") == "success")
failed = len(results) - success
if failed == 0:
print(f"\n✓ Update complete: {success} module(s) updated")
else:
print(f"\n⚠ Update finished with errors: {success} success, {failed} failed")
if not args.force:
return 1
return 0
# No --module specified: enter interactive management mode
if not args.module:
try:


@@ -4,7 +4,7 @@ set -e
if [ -z "${SKIP_WARNING:-}" ]; then
echo "⚠️ WARNING: install.sh is LEGACY and will be removed in future versions."
echo "Please use the new installation method:"
echo " python3 install.py --install-dir ~/.claude"
echo " npx github:cexll/myclaude"
echo ""
echo "Set SKIP_WARNING=1 to bypass this message"
echo "Continuing with legacy installation in 5 seconds..."

package.json Normal file

@@ -0,0 +1,26 @@
{
"name": "myclaude",
"version": "0.0.0",
"private": true,
"description": "Claude Code multi-agent workflows (npx installer)",
"license": "AGPL-3.0",
"bin": {
"myclaude": "bin/cli.js"
},
"files": [
"bin/",
".claude-plugin/",
"agents/",
"skills/",
"memorys/",
"codeagent-wrapper/",
"config.json",
"install.py",
"install.sh",
"install.bat",
"PLUGIN_README.md",
"README.md",
"README_CN.md",
"LICENSE"
]
}

skills/README.md Normal file

@@ -0,0 +1,23 @@
# Skills
This directory contains agent skills (each skill lives in its own folder with a `SKILL.md`).
## Install with `npx` (recommended)
List installable items:
```bash
npx github:cexll/myclaude --list
```
Install (interactive; pick `skill:<name>`):
```bash
npx github:cexll/myclaude
```
Force overwrite / custom install directory:
```bash
npx github:cexll/myclaude --install-dir ~/.claude --force
```

skills/do/README.md Normal file

@@ -0,0 +1,186 @@
# do - Feature Development Orchestrator
A 7-phase feature development workflow that orchestrates multiple agents via codeagent-wrapper.
## Installation
```bash
python install.py --module do
```
Installs:
- `~/.claude/skills/do/` - skill files
- hooks auto-merged into `~/.claude/settings.json`
## Usage
```
/do <feature description>
```
Examples:
```
/do add user login feature
/do implement order export to CSV
```
## 7-Phase Workflow
| Phase | Name | Goal | Key Actions |
|-------|------|------|-------------|
| 1 | Discovery | Understand requirements | AskUserQuestion + code-architect draft |
| 2 | Exploration | Map codebase patterns | 2-3 parallel code-explorer tasks |
| 3 | Clarification | Resolve ambiguities | **MANDATORY** - must answer before proceeding |
| 4 | Architecture | Design implementation | 2 parallel code-architect approaches |
| 5 | Implementation | Build the feature | **Requires approval** - develop agent |
| 6 | Review | Catch defects | 2-3 parallel code-reviewer tasks |
| 7 | Summary | Document results | code-reviewer summary |
## Agents
| Agent | Purpose | Prompt Location |
|-------|---------|----------------|
| `code-explorer` | Code tracing, architecture mapping | `agents/code-explorer.md` |
| `code-architect` | Design approaches, file planning | `agents/code-architect.md` |
| `code-reviewer` | Code review, simplification | `agents/code-reviewer.md` |
| `develop` | Implement code, run tests | global config |
To customize an agent, create a file with the same name in `~/.codeagent/agents/`; it overrides the bundled prompt.
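For example, assuming the bundled prompts are installed under the skill directory, overriding the reviewer prompt might look like:
```bash
# Hypothetical override: copy the bundled prompt into ~/.codeagent/agents/, then edit the copy
mkdir -p ~/.codeagent/agents
cp ~/.claude/skills/do/agents/code-reviewer.md ~/.codeagent/agents/code-reviewer.md
```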
## Hard Constraints
1. **Never write code directly** - delegate all changes to codeagent-wrapper agents
2. **Phase 3 is mandatory** - do not proceed until questions are answered
3. **Phase 5 requires approval** - stop after Phase 4 if not approved
4. **Pass complete context forward** - every agent gets the Context Pack
5. **Parallel-first** - run independent tasks via `codeagent-wrapper --parallel`
6. **Update state after each phase** - keep `.claude/do.{task_id}.local.md` current
## Context Pack Template
```text
## Original User Request
<verbatim request>
## Context Pack
- Phase: <1-7 name>
- Decisions: <requirements/constraints/choices>
- Code-explorer output: <paste or "None">
- Code-architect output: <paste or "None">
- Code-reviewer output: <paste or "None">
- Develop output: <paste or "None">
- Open questions: <list or "None">
## Current Task
<specific task>
## Acceptance Criteria
<checkable outputs>
```
## Loop State Management
When triggered via `/do <task>`, initializes `.claude/do.{task_id}.local.md` with:
- `active: true`
- `current_phase: 1`
- `max_phases: 7`
- `completion_promise: "<promise>DO_COMPLETE</promise>"`
After each phase, update frontmatter:
```yaml
current_phase: <next phase number>
phase_name: "<next phase name>"
```
When all 7 phases complete, output:
```
<promise>DO_COMPLETE</promise>
```
To abort early, set `active: false` in the state file.
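For example, to abort a specific loop from the shell (GNU sed shown; `<task_id>` is a placeholder):
```bash
# Flip the frontmatter flag so the Stop hook no longer blocks exit
sed -i 's/^active: true/active: false/' .claude/do.<task_id>.local.md
```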
## Stop Hook
The workflow loop is enforced by a Stop hook registered at install time:
1. Creates `.claude/do.{task_id}.local.md` state file
2. Updates `current_phase` after each phase
3. Stop hook checks state, blocks exit if incomplete
4. Outputs `<promise>DO_COMPLETE</promise>` when finished
Manual exit: Set `active` to `false` in the state file.
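To see which loops are still active in a project, a quick check (illustrative) is:
```bash
# List state files whose frontmatter still marks the loop as active
grep -l '^active: true' .claude/do.*.local.md 2>/dev/null || echo "no active do loops"
```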
## Parallel Execution Examples
### Phase 2: Exploration (3 parallel tasks)
```bash
codeagent-wrapper --parallel <<'EOF'
---TASK---
id: p2_similar_features
agent: code-explorer
workdir: .
---CONTENT---
Find similar features, trace end-to-end.
---TASK---
id: p2_architecture
agent: code-explorer
workdir: .
---CONTENT---
Map architecture for relevant subsystem.
---TASK---
id: p2_conventions
agent: code-explorer
workdir: .
---CONTENT---
Identify testing patterns and conventions.
EOF
```
### Phase 4: Architecture (2 approaches)
```bash
codeagent-wrapper --parallel <<'EOF'
---TASK---
id: p4_minimal
agent: code-architect
workdir: .
---CONTENT---
Propose minimal-change architecture.
---TASK---
id: p4_pragmatic
agent: code-architect
workdir: .
---CONTENT---
Propose pragmatic-clean architecture.
EOF
```
## ~/.codeagent/models.json Configuration
Optional. By default, codeagent-wrapper's built-in configuration is used. To customize:
```json
{
"agents": {
"code-explorer": {
"backend": "claude",
"model": "claude-sonnet-4-5-20250929"
},
"code-architect": {
"backend": "claude",
"model": "claude-sonnet-4-5-20250929"
},
"code-reviewer": {
"backend": "claude",
"model": "claude-sonnet-4-5-20250929"
}
}
}
```
## Uninstall
```bash
python install.py --uninstall --module do
```

skills/do/SKILL.md Normal file

@@ -0,0 +1,331 @@
---
name: do
description: This skill should be used for structured feature development with codebase understanding. Triggers on /do command. Provides a 7-phase workflow (Discovery, Exploration, Clarification, Architecture, Implementation, Review, Summary) using codeagent-wrapper to orchestrate code-explorer, code-architect, code-reviewer, and develop agents in parallel.
allowed-tools: ["Bash(${SKILL_DIR}/scripts/setup-do.sh:*)"]
---
# do - Feature Development Orchestrator
An orchestrator for systematic feature development. Invoke agents via `codeagent-wrapper`, never write code directly.
## Loop Initialization (REQUIRED)
When triggered via `/do <task>`, **first** initialize the loop state:
```bash
"${SKILL_DIR}/scripts/setup-do.sh" "<task description>"
```
This creates `.claude/do.{task_id}.local.md` with:
- `active: true`
- `current_phase: 1`
- `max_phases: 7`
- `completion_promise: "<promise>DO_COMPLETE</promise>"`
## Loop State Management
After each phase, update `.claude/do.{task_id}.local.md` frontmatter:
```yaml
current_phase: <next phase number>
phase_name: "<next phase name>"
```
When all 7 phases complete, output the completion signal:
```
<promise>DO_COMPLETE</promise>
```
To abort early, set `active: false` in the state file.
## Hard Constraints
1. **Never write code directly.** Delegate all code changes to `codeagent-wrapper` agents.
2. **Phase 3 (Clarification) is mandatory.** Do not proceed until questions are answered.
3. **Phase 5 (Implementation) requires explicit approval.** Stop after Phase 4 if not approved.
4. **Pass complete context forward.** Every agent invocation includes the Context Pack.
5. **Parallel-first.** Run independent tasks via `codeagent-wrapper --parallel`.
6. **Update state after each phase.** Keep `.claude/do.{task_id}.local.md` current.
## Agents
| Agent | Purpose | Prompt |
|-------|---------|--------|
| `code-explorer` | Trace code, map architecture, find patterns | `agents/code-explorer.md` |
| `code-architect` | Design approaches, file plans, build sequences | `agents/code-architect.md` |
| `code-reviewer` | Review for bugs, simplicity, conventions | `agents/code-reviewer.md` |
| `develop` | Implement code, run tests | (uses global config) |
## Context Pack Template
```text
## Original User Request
<verbatim request>
## Context Pack
- Phase: <1-7 name>
- Decisions: <requirements/constraints/choices>
- Code-explorer output: <paste or "None">
- Code-architect output: <paste or "None">
- Code-reviewer output: <paste or "None">
- Develop output: <paste or "None">
- Open questions: <list or "None">
## Current Task
<specific task>
## Acceptance Criteria
<checkable outputs>
```
## 7-Phase Workflow
### Phase 1: Discovery
**Goal:** Understand what to build.
**Actions:**
1. Use AskUserQuestion for: user-visible behavior, scope, constraints, acceptance criteria
2. Invoke `code-architect` to draft requirements checklist and clarifying questions
```bash
codeagent-wrapper --agent code-architect - . <<'EOF'
## Original User Request
/do <request>
## Context Pack
- Code-explorer output: None
- Code-architect output: None
## Current Task
Produce requirements checklist and identify missing information.
Output: Requirements, Non-goals, Risks, Acceptance criteria, Questions (<= 10)
## Acceptance Criteria
Concrete, testable checklist; specific questions; no implementation.
EOF
```
### Phase 2: Exploration
**Goal:** Map codebase patterns and extension points.
**Actions:** Run 2-3 `code-explorer` tasks in parallel (similar features, architecture, tests/conventions).
```bash
codeagent-wrapper --parallel <<'EOF'
---TASK---
id: p2_similar_features
agent: code-explorer
workdir: .
---CONTENT---
## Original User Request
/do <request>
## Context Pack
- Code-architect output: <Phase 1 output>
## Current Task
Find 1-3 similar features, trace end-to-end. Return: key files with line numbers, call flow, extension points.
## Acceptance Criteria
Concrete file:line map + reuse points.
---TASK---
id: p2_architecture
agent: code-explorer
workdir: .
---CONTENT---
## Original User Request
/do <request>
## Context Pack
- Code-architect output: <Phase 1 output>
## Current Task
Map architecture for relevant subsystem. Return: module map + 5-10 key files.
## Acceptance Criteria
Clear boundaries; file:line references.
---TASK---
id: p2_conventions
agent: code-explorer
workdir: .
---CONTENT---
## Original User Request
/do <request>
## Context Pack
- Code-architect output: <Phase 1 output>
## Current Task
Identify testing patterns, conventions, config. Return: test commands + file locations.
## Acceptance Criteria
Test commands + relevant test file paths.
EOF
```
### Phase 3: Clarification (MANDATORY)
**Goal:** Resolve all ambiguities before design.
**Actions:**
1. Invoke `code-architect` to generate prioritized questions from Phase 1+2 outputs
2. Use AskUserQuestion to present questions and wait for answers
3. **Do not proceed until answered or defaults accepted**
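A sketch of the step-1 invocation, following the same Context Pack conventions as the other phases (wording illustrative):
```bash
codeagent-wrapper --agent code-architect - . <<'EOF'
## Original User Request
/do <request>
## Context Pack
- Code-explorer output: <ALL Phase 2 outputs>
- Code-architect output: <Phase 1 output>
## Current Task
Produce a prioritized list of clarifying questions (<= 10) that block design decisions.
## Acceptance Criteria
Specific, answerable questions ordered by impact, each with a proposed default.
EOF
```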
### Phase 4: Architecture
**Goal:** Produce implementation plan fitting existing patterns.
**Actions:** Run 2 `code-architect` tasks in parallel (minimal-change vs pragmatic-clean).
```bash
codeagent-wrapper --parallel <<'EOF'
---TASK---
id: p4_minimal
agent: code-architect
workdir: .
---CONTENT---
## Original User Request
/do <request>
## Context Pack
- Code-explorer output: <ALL Phase 2 outputs>
- Code-architect output: <Phase 1 + Phase 3 answers>
## Current Task
Propose minimal-change architecture: reuse existing abstractions, minimize new files.
Output: file touch list, risks, edge cases.
## Acceptance Criteria
Concrete blueprint; minimal moving parts.
---TASK---
id: p4_pragmatic
agent: code-architect
workdir: .
---CONTENT---
## Original User Request
/do <request>
## Context Pack
- Code-explorer output: <ALL Phase 2 outputs>
- Code-architect output: <Phase 1 + Phase 3 answers>
## Current Task
Propose pragmatic-clean architecture: introduce seams for testability.
Output: file touch list, testing plan, risks.
## Acceptance Criteria
Implementable blueprint with build sequence and tests.
EOF
```
Use AskUserQuestion to let the user choose an approach.
### Phase 5: Implementation (Approval Required)
**Goal:** Build the feature.
**Actions:**
1. Use AskUserQuestion: "Approve starting implementation?" (Approve / Not yet)
2. If approved, invoke `develop`:
```bash
codeagent-wrapper --agent develop - . <<'EOF'
## Original User Request
/do <request>
## Context Pack
- Code-explorer output: <ALL Phase 2 outputs>
- Code-architect output: <selected Phase 4 blueprint + Phase 3 answers>
## Current Task
Implement with minimal change set following chosen architecture.
- Follow Phase 2 patterns
- Add/adjust tests per Phase 4 plan
- Run narrowest relevant tests
## Acceptance Criteria
Feature works end-to-end; tests pass; diff is minimal.
EOF
```
### Phase 6: Review
**Goal:** Catch defects and unnecessary complexity.
**Actions:** Run 2-3 `code-reviewer` tasks in parallel (correctness, simplicity).
```bash
codeagent-wrapper --parallel <<'EOF'
---TASK---
id: p6_correctness
agent: code-reviewer
workdir: .
---CONTENT---
## Original User Request
/do <request>
## Context Pack
- Code-architect output: <Phase 4 blueprint>
- Develop output: <Phase 5 output>
## Current Task
Review for correctness, edge cases, failure modes. Assume adversarial inputs.
## Acceptance Criteria
Issues with file:line references and concrete fixes.
---TASK---
id: p6_simplicity
agent: code-reviewer
workdir: .
---CONTENT---
## Original User Request
/do <request>
## Context Pack
- Code-architect output: <Phase 4 blueprint>
- Develop output: <Phase 5 output>
## Current Task
Review for KISS: remove bloat, collapse needless abstractions.
## Acceptance Criteria
Actionable simplifications with justification.
EOF
```
Use AskUserQuestion: Fix now / Fix later / Proceed as-is.
### Phase 7: Summary
**Goal:** Document what was built.
**Actions:** Invoke `code-reviewer` to produce summary:
```bash
codeagent-wrapper --agent code-reviewer - . <<'EOF'
## Original User Request
/do <request>
## Context Pack
- Code-architect output: <Phase 4 blueprint>
- Code-reviewer output: <Phase 6 outcomes>
- Develop output: <Phase 5 output + fixes>
## Current Task
Write completion summary:
- What was built
- Key decisions/tradeoffs
- Files modified (paths)
- How to verify (commands)
- Follow-ups (optional)
## Acceptance Criteria
Short, technical, actionable summary.
EOF
```


@@ -0,0 +1,34 @@
---
name: code-architect
description: Designs feature architectures by analyzing existing codebase patterns and conventions, then providing comprehensive implementation blueprints with specific files to create/modify, component designs, data flows, and build sequences
tools: Glob, Grep, LS, Read, NotebookRead, WebFetch, TodoWrite, WebSearch, KillShell, BashOutput
model: sonnet
color: green
---
You are a senior software architect who delivers comprehensive, actionable architecture blueprints by deeply understanding codebases and making confident architectural decisions.
## Core Process
**1. Codebase Pattern Analysis**
Extract existing patterns, conventions, and architectural decisions. Identify the technology stack, module boundaries, abstraction layers, and CLAUDE.md guidelines. Find similar features to understand established approaches.
**2. Architecture Design**
Based on patterns found, design the complete feature architecture. Make decisive choices - pick one approach and commit. Ensure seamless integration with existing code. Design for testability, performance, and maintainability.
**3. Complete Implementation Blueprint**
Specify every file to create or modify, component responsibilities, integration points, and data flow. Break implementation into clear phases with specific tasks.
## Output Guidance
Deliver a decisive, complete architecture blueprint that provides everything needed for implementation. Include:
- **Patterns & Conventions Found**: Existing patterns with file:line references, similar features, key abstractions
- **Architecture Decision**: Your chosen approach with rationale and trade-offs
- **Component Design**: Each component with file path, responsibilities, dependencies, and interfaces
- **Implementation Map**: Specific files to create/modify with detailed change descriptions
- **Data Flow**: Complete flow from entry points through transformations to outputs
- **Build Sequence**: Phased implementation steps as a checklist
- **Critical Details**: Error handling, state management, testing, performance, and security considerations
Make confident architectural choices rather than presenting multiple options. Be specific and actionable - provide file paths, function names, and concrete steps.


@@ -0,0 +1,51 @@
---
name: code-explorer
description: Deeply analyzes existing codebase features by tracing execution paths, mapping architecture layers, understanding patterns and abstractions, and documenting dependencies to inform new development
tools: Glob, Grep, LS, Read, NotebookRead, WebFetch, TodoWrite, WebSearch, KillShell, BashOutput
model: sonnet
color: yellow
---
You are an expert code analyst specializing in tracing and understanding feature implementations across codebases.
## Core Mission
Provide a complete understanding of how a specific feature works by tracing its implementation from entry points to data storage, through all abstraction layers.
## Analysis Approach
**1. Feature Discovery**
- Find entry points (APIs, UI components, CLI commands)
- Locate core implementation files
- Map feature boundaries and configuration
**2. Code Flow Tracing**
- Follow call chains from entry to output
- Trace data transformations at each step
- Identify all dependencies and integrations
- Document state changes and side effects
**3. Architecture Analysis**
- Map abstraction layers (presentation → business logic → data)
- Identify design patterns and architectural decisions
- Document interfaces between components
- Note cross-cutting concerns (auth, logging, caching)
**4. Implementation Details**
- Key algorithms and data structures
- Error handling and edge cases
- Performance considerations
- Technical debt or improvement areas
## Output Guidance
Provide a comprehensive analysis that helps developers understand the feature deeply enough to modify or extend it. Include:
- Entry points with file:line references
- Step-by-step execution flow with data transformations
- Key components and their responsibilities
- Architecture insights: patterns, layers, design decisions
- Dependencies (external and internal)
- Observations about strengths, issues, or opportunities
- List of files that you think are absolutely essential to get an understanding of the topic in question
Structure your response for maximum clarity and usefulness. Always include specific file paths and line numbers.


@@ -0,0 +1,46 @@
---
name: code-reviewer
description: Reviews code for bugs, logic errors, security vulnerabilities, code quality issues, and adherence to project conventions, using confidence-based filtering to report only high-priority issues that truly matter
tools: Glob, Grep, LS, Read, NotebookRead, WebFetch, TodoWrite, WebSearch, KillShell, BashOutput
model: sonnet
color: red
---
You are an expert code reviewer specializing in modern software development across multiple languages and frameworks. Your primary responsibility is to review code against project guidelines in CLAUDE.md with high precision to minimize false positives.
## Review Scope
By default, review unstaged changes from `git diff`. The user may specify different files or scope to review.
## Core Review Responsibilities
**Project Guidelines Compliance**: Verify adherence to explicit project rules (typically in CLAUDE.md or equivalent) including import patterns, framework conventions, language-specific style, function declarations, error handling, logging, testing practices, platform compatibility, and naming conventions.
**Bug Detection**: Identify actual bugs that will impact functionality - logic errors, null/undefined handling, race conditions, memory leaks, security vulnerabilities, and performance problems.
**Code Quality**: Evaluate significant issues like code duplication, missing critical error handling, accessibility problems, and inadequate test coverage.
## Confidence Scoring
Rate each potential issue on a scale from 0-100:
- **0**: Not confident at all. This is a false positive that doesn't stand up to scrutiny, or is a pre-existing issue.
- **25**: Somewhat confident. This might be a real issue, but may also be a false positive. If stylistic, it wasn't explicitly called out in project guidelines.
- **50**: Moderately confident. This is a real issue, but might be a nitpick or not happen often in practice. Not very important relative to the rest of the changes.
- **75**: Highly confident. Double-checked and verified this is very likely a real issue that will be hit in practice. The existing approach is insufficient. Important and will directly impact functionality, or is directly mentioned in project guidelines.
- **100**: Absolutely certain. Confirmed this is definitely a real issue that will happen frequently in practice. The evidence directly confirms this.
**Only report issues with confidence ≥ 80.** Focus on issues that truly matter - quality over quantity.
## Output Guidance
Start by clearly stating what you're reviewing. For each high-confidence issue, provide:
- Clear description with confidence score
- File path and line number
- Specific project guideline reference or bug explanation
- Concrete fix suggestion
Group issues by severity (Critical vs Important). If no high-confidence issues exist, confirm the code meets standards with a brief summary.
Structure your response for maximum actionability - developers should know exactly what to fix and why.


@@ -0,0 +1,15 @@
{
"description": "do loop hook for 7-phase workflow",
"hooks": {
"Stop": [
{
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/stop-hook.sh"
}
]
}
]
}
}

skills/do/hooks/stop-hook.sh Executable file

@@ -0,0 +1,151 @@
#!/usr/bin/env bash
set -euo pipefail
phase_name_for() {
case "${1:-}" in
1) echo "Discovery" ;;
2) echo "Exploration" ;;
3) echo "Clarification" ;;
4) echo "Architecture" ;;
5) echo "Implementation" ;;
6) echo "Review" ;;
7) echo "Summary" ;;
*) echo "Phase ${1:-unknown}" ;;
esac
}
json_escape() {
local s="${1:-}"
s=${s//\\/\\\\}
s=${s//\"/\\\"}
s=${s//$'\n'/\\n}
s=${s//$'\r'/\\r}
s=${s//$'\t'/\\t}
printf "%s" "$s"
}
project_dir="${CLAUDE_PROJECT_DIR:-$PWD}"
state_dir="${project_dir}/.claude"
shopt -s nullglob
state_files=("${state_dir}"/do.*.local.md)
shopt -u nullglob
if [ ${#state_files[@]} -eq 0 ]; then
exit 0
fi
stdin_payload=""
if [ ! -t 0 ]; then
stdin_payload="$(cat || true)"
fi
frontmatter_get() {
local file="$1" key="$2"
awk -v k="$key" '
BEGIN { in_fm=0 }
NR==1 && $0=="---" { in_fm=1; next }
in_fm==1 && $0=="---" { exit }
in_fm==1 {
if ($0 ~ "^"k":[[:space:]]*") {
sub("^"k":[[:space:]]*", "", $0)
gsub(/^[[:space:]]+|[[:space:]]+$/, "", $0)
if ($0 ~ /^".*"$/) { sub(/^"/, "", $0); sub(/"$/, "", $0) }
print $0
exit
}
}
' "$file"
}
check_state_file() {
local state_file="$1"
local active_raw active_lc
active_raw="$(frontmatter_get "$state_file" active || true)"
active_lc="$(printf "%s" "$active_raw" | tr '[:upper:]' '[:lower:]')"
case "$active_lc" in
true|1|yes|on) ;;
*) return 0 ;;
esac
local current_phase_raw max_phases_raw phase_name completion_promise
current_phase_raw="$(frontmatter_get "$state_file" current_phase || true)"
max_phases_raw="$(frontmatter_get "$state_file" max_phases || true)"
phase_name="$(frontmatter_get "$state_file" phase_name || true)"
completion_promise="$(frontmatter_get "$state_file" completion_promise || true)"
local current_phase=1
if [[ "${current_phase_raw:-}" =~ ^[0-9]+$ ]]; then
current_phase="$current_phase_raw"
fi
local max_phases=7
if [[ "${max_phases_raw:-}" =~ ^[0-9]+$ ]]; then
max_phases="$max_phases_raw"
fi
if [ -z "${phase_name:-}" ]; then
phase_name="$(phase_name_for "$current_phase")"
fi
if [ -z "${completion_promise:-}" ]; then
completion_promise="<promise>DO_COMPLETE</promise>"
fi
local phases_done=0
if [ "$current_phase" -ge "$max_phases" ]; then
phases_done=1
fi
local promise_met=0
if [ -n "$completion_promise" ]; then
if [ -n "$stdin_payload" ] && printf "%s" "$stdin_payload" | grep -Fq -- "$completion_promise"; then
promise_met=1
else
local body
body="$(
awk '
BEGIN { in_fm=0; body=0 }
NR==1 && $0=="---" { in_fm=1; next }
in_fm==1 && $0=="---" { body=1; in_fm=0; next }
body==1 { print }
' "$state_file"
)"
if [ -n "$body" ] && printf "%s" "$body" | grep -Fq -- "$completion_promise"; then
promise_met=1
fi
fi
fi
if [ "$phases_done" -eq 1 ] && [ "$promise_met" -eq 1 ]; then
rm -f "$state_file"
return 0
fi
local reason
if [ "$phases_done" -eq 0 ]; then
reason="do loop incomplete: current phase ${current_phase}/${max_phases} (${phase_name}). Continue with remaining phases; update ${state_file} current_phase/phase_name after each phase. Include completion_promise in final output when done: ${completion_promise}. To exit early, set active to false."
else
reason="do reached final phase (current_phase=${current_phase} / max_phases=${max_phases}, phase_name=${phase_name}), but completion_promise not detected: ${completion_promise}. Please include this marker in your final output (or write it to ${state_file} body), then finish; to force exit, set active to false."
fi
printf "%s" "$reason"
}
blocking_reasons=()
for state_file in "${state_files[@]}"; do
reason="$(check_state_file "$state_file")"
if [ -n "$reason" ]; then
blocking_reasons+=("$reason")
fi
done
if [ ${#blocking_reasons[@]} -eq 0 ]; then
exit 0
fi
combined_reason="${blocking_reasons[*]}"
printf '{"decision":"block","reason":"%s"}\n' "$(json_escape "$combined_reason")"
exit 0

skills/do/scripts/setup-do.sh Executable file

@@ -0,0 +1,114 @@
#!/usr/bin/env bash
set -euo pipefail
usage() {
cat <<'EOF'
Usage: setup-do.sh [options] PROMPT...
Creates a project state file with a unique task_id:
.claude/do.{task_id}.local.md
Options:
--max-phases N Default: 7
--completion-promise STR Default: <promise>DO_COMPLETE</promise>
-h, --help Show this help
EOF
}
die() {
echo "$*" >&2
exit 1
}
phase_name_for() {
case "${1:-}" in
1) echo "Discovery" ;;
2) echo "Exploration" ;;
3) echo "Clarification" ;;
4) echo "Architecture" ;;
5) echo "Implementation" ;;
6) echo "Review" ;;
7) echo "Summary" ;;
*) echo "Phase ${1:-unknown}" ;;
esac
}
max_phases=7
completion_promise="<promise>DO_COMPLETE</promise>"
declare -a prompt_parts=()
while [ $# -gt 0 ]; do
case "$1" in
-h|--help)
usage
exit 0
;;
--max-phases)
[ $# -ge 2 ] || die "--max-phases requires a value"
max_phases="$2"
shift 2
;;
--completion-promise)
[ $# -ge 2 ] || die "--completion-promise requires a value"
completion_promise="$2"
shift 2
;;
--)
shift
while [ $# -gt 0 ]; do
prompt_parts+=("$1")
shift
done
break
;;
-*)
die "Unknown argument: $1 (use --help)"
;;
*)
prompt_parts+=("$1")
shift
;;
esac
done
prompt="${prompt_parts[*]:-}"
[ -n "$prompt" ] || die "PROMPT is required (use --help)"
if ! [[ "$max_phases" =~ ^[0-9]+$ ]] || [ "$max_phases" -lt 1 ]; then
die "--max-phases must be a positive integer"
fi
project_dir="${CLAUDE_PROJECT_DIR:-$PWD}"
state_dir="${project_dir}/.claude"
task_id="$(date +%s)-$$-$(head -c 4 /dev/urandom | od -An -tx1 | tr -d ' \n')"
state_file="${state_dir}/do.${task_id}.local.md"
mkdir -p "$state_dir"
phase_name="$(phase_name_for 1)"
cat > "$state_file" << EOF
---
active: true
current_phase: 1
phase_name: "$phase_name"
max_phases: $max_phases
completion_promise: "$completion_promise"
---
# do loop state
## Prompt
$prompt
## Notes
- Update frontmatter current_phase/phase_name as you progress
- When complete, include the frontmatter completion_promise in your final output
EOF
echo "Initialized: $state_file"
echo "task_id: $task_id"
echo "phase: 1/$max_phases ($phase_name)"
echo "completion_promise: $completion_promise"


@@ -1,14 +1,14 @@
# OmO Multi-Agent Orchestration
# omo - Multi-Agent Orchestration
OmO (Oh-My-OpenCode) is a multi-agent orchestration skill that delegates tasks to specialized agents based on routing signals.
OmO is a multi-agent orchestration skill that routes tasks to specialized agents based on routing signals, not a fixed pipeline.
## Installation
```bash
python3 install.py --module omo
python install.py --module omo
```
## Quick Start
## Usage
```
/omo <your task>
@@ -18,59 +18,107 @@ python3 install.py --module omo
| Agent | Role | Backend | Model |
|-------|------|---------|-------|
| oracle | Technical advisor | claude | claude-opus-4-5-20251101 |
| librarian | External research | claude | claude-sonnet-4-5-20250929 |
| explore | Codebase search | opencode | opencode/grok-code |
| develop | Code implementation | codex | gpt-5.2 |
| frontend-ui-ux-engineer | UI/UX specialist | gemini | gemini-3-pro-preview |
| document-writer | Documentation | gemini | gemini-3-flash-preview |
| `oracle` | Technical advisor | claude | claude-opus-4-5 |
| `librarian` | External research | claude | claude-sonnet-4-5 |
| `explore` | Codebase search | opencode | grok-code |
| `develop` | Code implementation | codex | gpt-5.2 |
| `frontend-ui-ux-engineer` | UI/UX specialist | gemini | gemini-3-pro |
| `document-writer` | Documentation | gemini | gemini-3-flash |
## How It Works
## Routing Signals (Not Fixed Pipeline)
1. `/omo` analyzes your request via routing signals
2. Based on task type, it either:
- Answers directly (analysis/explanation tasks - no code changes)
- Delegates to specialized agents (implementation tasks)
- Fires parallel agents (exploration + research)
This skill is **routing-first**, not a mandatory conveyor belt.
| Signal | Add Agent |
|--------|----------|
| Code location/behavior unclear | `explore` |
| External library/API usage unclear | `librarian` |
| Risky change (multi-file, public API, security, perf) | `oracle` |
| Implementation required | `develop` / `frontend-ui-ux-engineer` |
| Documentation needed | `document-writer` |
### Skipping Heuristics
- Skip `explore` when exact file path + line number is known
- Skip `oracle` when change is local + low-risk (single area, clear fix)
- Skip implementation agents when user only wants analysis
## Common Recipes
| Task | Recipe |
|------|--------|
| Explain code | `explore` |
| Small fix with known location | `develop` directly |
| Bug fix, location unknown | `explore → develop` |
| Cross-cutting refactor | `explore → oracle → develop` |
| External API integration | `explore + librarian → oracle → develop` |
| UI-only change | `explore → frontend-ui-ux-engineer` |
| Docs-only change | `explore → document-writer` |
## Context Pack Template
Every agent invocation includes:
```text
## Original User Request
<original request>
## Context Pack (include anything relevant; write "None" if absent)
- Explore output: <...>
- Librarian output: <...>
- Oracle output: <...>
- Known constraints: <tests to run, time budget, repo conventions>
## Current Task
<specific task description>
## Acceptance Criteria
<clear completion conditions>
```
## Agent Invocation
```bash
codeagent-wrapper --agent <agent_name> - <workdir> <<'EOF'
## Original User Request
...
## Context Pack
...
## Current Task
...
## Acceptance Criteria
...
EOF
```
Timeout: 2 hours.
## Examples
```bash
# Refactoring
/omo Help me refactor this authentication module
# Analysis only
/omo how does this function work?
# → explore
# Feature development
/omo I need to add a new payment feature with frontend UI and backend API
# Bug fix with unknown location
/omo fix the authentication bug
# → explore → develop
# Research
/omo What authentication scheme does this project use?
```
# Feature with external API
/omo add Stripe payment integration
# → explore + librarian → oracle → develop
## Agent Delegation
Delegates via codeagent-wrapper with full Context Pack:
```bash
codeagent-wrapper --agent oracle - . <<'EOF'
## Original User Request
Analyze the authentication architecture and recommend improvements.
## Context Pack (include anything relevant; write "None" if absent)
- Explore output: [paste explore output if available]
- Librarian output: None
- Oracle output: None
## Current Task
Review auth architecture, identify risks, propose minimal improvements.
## Acceptance Criteria
Output: recommendation, action plan, risk assessment, effort estimate.
EOF
# UI change
/omo redesign the dashboard layout
# → explore → frontend-ui-ux-engineer
```
## Configuration
Agent-model mappings are configured in `~/.codeagent/models.json`:
Agent-model mappings in `~/.codeagent/models.json`:
```json
{
@@ -80,34 +128,28 @@ Agent-model mappings are configured in `~/.codeagent/models.json`:
"oracle": {
"backend": "claude",
"model": "claude-opus-4-5-20251101",
"description": "Technical advisor",
"yolo": true
},
"librarian": {
"backend": "claude",
"model": "claude-sonnet-4-5-20250929",
"description": "Researcher",
"yolo": true
},
"explore": {
"backend": "opencode",
"model": "opencode/grok-code",
"description": "Code search"
"model": "opencode/grok-code"
},
"frontend-ui-ux-engineer": {
"backend": "gemini",
"model": "gemini-3-pro-preview",
"description": "Frontend engineer"
"model": "gemini-3-pro-preview"
},
"document-writer": {
"backend": "gemini",
"model": "gemini-3-flash-preview",
"description": "Documentation"
"model": "gemini-3-flash-preview"
},
"develop": {
"backend": "codex",
"model": "gpt-5.2",
"description": "codex develop",
"yolo": true,
"reasoning": "xhigh"
}
@@ -115,6 +157,14 @@ Agent-model mappings are configured in `~/.codeagent/models.json`:
}
```
## Hard Constraints
1. **Never write code yourself** - delegate to implementation agents
2. **Always pass context forward** - include original request + prior outputs
3. **No direct grep/glob for non-trivial exploration** - use `explore`
4. **No external docs guessing** - use `librarian`
5. **Use fewest agents possible** - skipping is normal
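A minimal sketch of constraints 1-2 (delegate, then forward context), assuming codeagent-wrapper prints the agent's final message to stdout:
```bash
# Step 1: locate the bug with explore and capture its output
explore_out="$(codeagent-wrapper --agent explore - . <<'EOF'
## Original User Request
fix the authentication bug
## Context Pack (include anything relevant; write "None" if absent)
- Explore output: None
## Current Task
Locate the auth bug; return file:line references and the suspected cause.
## Acceptance Criteria
Concrete file:line map plus a short diagnosis.
EOF
)"

# Step 2: forward the explore output in develop's Context Pack (unquoted heredoc so it expands)
codeagent-wrapper --agent develop - . <<EOF
## Original User Request
fix the authentication bug
## Context Pack (include anything relevant; write "None" if absent)
- Explore output: ${explore_out}
## Current Task
Fix the bug with a minimal diff and run the narrowest relevant tests.
## Acceptance Criteria
Tests pass; diff is minimal.
EOF
```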
## Requirements
- codeagent-wrapper with `--agent` support


@@ -1,96 +1,169 @@
# SPARV - Unified Development Workflow (Simplified)
# sparv - SPARV Workflow
[![Skill Version](https://img.shields.io/badge/version-1.0.0-blue.svg)]()
[![Claude Code](https://img.shields.io/badge/Claude%20Code-Compatible-green.svg)]()
Minimal 5-phase workflow: **S**pecify → **P**lan → **A**ct → **R**eview → **V**ault.
**SPARV** is an end-to-end development workflow: it maximizes delivery quality with a minimal rule set while avoiding infinite iteration and self-rationalization.
```
S-Specify → P-Plan → A-Act → R-Review → V-Vault
Clarify Plan Execute Review Archive
```
## Key Changes (Over-engineering Removed)
- External memory merged from 3 files into 1 `.sparv/journal.md`
- Specify scoring simplified from 100-point to 10-point scale (threshold `>=9`)
- Reboot Test reduced from 5 questions to 3 questions
- Removed concurrency locks (Claude is single-threaded; locks only cause failures)
Completes "requirements → verifiable delivery" in one pass with external memory.
## Installation
SPARV is installed at `~/.claude/skills/sparv/`.
Install from ZIP:
```bash
unzip sparv.zip -d ~/.claude/skills/
python install.py --module sparv
```
## Quick Start
Installs to `~/.claude/skills/sparv/`.
Run in project root:
## Usage
```
/sparv <task description>
```
## Core Rules (Mandatory)
| Rule | Description |
|------|-------------|
| **10-Point Specify Gate** | Spec score 0-10; must be >=9 to enter Plan |
| **2-Action Save** | Append to `.sparv/journal.md` every 2 tool calls |
| **3-Failure Protocol** | Stop and escalate after 3 consecutive failures |
| **EHRB** | Explicit confirmation for high-risk (production/sensitive/destructive/billing/security) |
| **Fixed Phase Names** | `specify\|plan\|act\|review\|vault` in `state.yaml` |
## 5-Phase Workflow
### Phase 1: Specify (10-Point Scale)
Each dimension scores 0/1/2, total 0-10:
| Dimension | Focus |
|-----------|-------|
| Value | Why do it, verifiable benefits/metrics |
| Scope | MVP + what's out of scope |
| Acceptance | Testable acceptance criteria |
| Boundaries | Error/performance/compatibility/security limits |
| Risk | EHRB/dependencies/unknowns + handling |
- `score < 9`: Keep asking questions; do not enter Plan
- `score >= 9`: Write `completion_promise`, then enter Plan
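For example (illustrative): a spec scoring Value 2, Scope 2, Acceptance 2, Boundaries 1, Risk 1 totals 8, so it stays in Specify until the two weak dimensions are clarified.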
### Phase 2: Plan
- Break into atomic tasks (2-5 minute granularity)
- Each task has verifiable output/test point
- Write plan to `.sparv/journal.md`
### Phase 3: Act
- **TDD Rule**: No failing test → no production code
- Auto-write journal every 2 actions (PostToolUse hook)
- 3-Failure Protocol enforced
### Phase 4: Review
- Two stages: Spec conformance → Code quality
- Maximum 3 fix rounds; escalate if exceeded
- Run 3-question reboot test before session ends
### Phase 5: Vault
- Archive current session to `.sparv/history/`
- Update knowledge base `.sparv/kb.md`
## Enhanced Rules (v1.1)
### Uncertainty Declaration (G3)
When any Specify dimension scores < 2:
```
UNCERTAIN: <what> | ASSUMPTION: <fallback>
UNCERTAIN: deployment target | ASSUMPTION: Docker container
UNCERTAIN: auth method | OPTIONS: JWT / OAuth2 / Session
```
### Requirement Routing
| Mode | Condition | Flow |
|------|-----------|------|
| **Quick** | score >= 9 AND <= 3 files AND no EHRB | Specify → Act → Review |
| **Full** | otherwise | Specify → Plan → Act → Review → Vault |
### Knowledge Base Maintenance
During Vault phase, update `.sparv/kb.md`:
- **Patterns**: Reusable code patterns discovered
- **Decisions**: Architectural choices + rationale
- **Gotchas**: Common pitfalls + solutions
### CHANGELOG Update
For non-trivial changes:
```bash
~/.claude/skills/sparv/scripts/changelog-update.sh --type <Added|Changed|Fixed|Removed> --desc "..."
```
## External Memory
Initialize (run in project root):
```bash
~/.claude/skills/sparv/scripts/init-session.sh --force
```
Creates:
```
.sparv/
├── state.yaml
├── journal.md
└── history/
├── state.yaml # State machine
├── journal.md # Unified log
├── kb.md # Knowledge base
└── history/ # Archive directory
```
## External Memory System (Two Files)
- `state.yaml`: State (minimum fields: `session_id/current_phase/action_count/consecutive_failures`)
- `journal.md`: Unified log (Plan/Progress/Findings all go here)
After archiving:
```
.sparv/history/<session_id>/
├── state.yaml
└── journal.md
```
| File | Purpose |
|------|--------|
| `state.yaml` | session_id, current_phase, action_count, consecutive_failures |
| `journal.md` | Plan/Progress/Findings unified log |
| `kb.md` | patterns/decisions/gotchas |
| `history/` | Archived sessions |
## Key Numbers
| Number | Meaning |
|--------|---------|
|--------|--------|
| **9/10** | Specify score passing threshold |
| **2** | Write to journal every 2 tool calls |
| **3** | Failure retry limit / Review fix limit |
| **3** | Reboot Test question count |
| **12** | Default max iterations (optional safety valve) |
## Script Tools
| Script | Purpose |
|--------|--------|
| `init-session.sh` | Initialize `.sparv/`, generate state + journal |
| `save-progress.sh` | Maintain action_count, append journal |
| `check-ehrb.sh` | Scan diff/text, output ehrb_flags |
| `failure-tracker.sh` | Maintain consecutive_failures |
| `reboot-test.sh` | 3-question self-check |
| `archive-session.sh` | Archive to history/ |
| `changelog-update.sh` | Update CHANGELOG.md |
## Auto Hooks
Configured in `hooks/hooks.json`:
- **PostToolUse**: `save-progress.sh` (2-Action save)
- **PreToolUse**: `check-ehrb.sh --diff --dry-run` (prompt only)
- **Stop**: `reboot-test.sh --strict` (3-question self-check)
## Failure Tracking
```bash
~/.claude/skills/sparv/scripts/init-session.sh --force
~/.claude/skills/sparv/scripts/save-progress.sh "Edit" "done"
~/.claude/skills/sparv/scripts/check-ehrb.sh --diff --fail-on-flags
~/.claude/skills/sparv/scripts/failure-tracker.sh fail --note "tests are flaky"
~/.claude/skills/sparv/scripts/reboot-test.sh --strict
~/.claude/skills/sparv/scripts/archive-session.sh
# Record failure
~/.claude/skills/sparv/scripts/failure-tracker.sh fail --note "short blocker"
# Reset counter
~/.claude/skills/sparv/scripts/failure-tracker.sh reset
```
## Hooks
## Uninstall
Hooks defined in `hooks/hooks.json`:
- PostToolUse: 2-Action auto-write to `journal.md`
- PreToolUse: EHRB risk prompt (default dry-run)
- Stop: 3-question reboot test (strict)
## References
- `SKILL.md`: Skill definition (for agent use)
- `references/methodology.md`: Methodology quick reference
---
*Quality over speed—iterate until truly complete.*
```bash
python install.py --uninstall --module sparv
```


@@ -28,7 +28,7 @@ Options:
Examples:
$0 --list # List installed modules
$0 --dry-run # Preview what would be removed
$0 --module dev # Uninstall only 'dev' module
$0 --module do # Uninstall only 'do' module
$0 -y # Uninstall all without confirmation
$0 --purge -y # Remove everything (DANGEROUS)
EOF