Compare commits


32 Commits

Author SHA1 Message Date
cexll
5a50131a13 refactor!: major directory restructuring and npx support
- Create agents/ directory, move bmad, requirements, development-essentials
- Remove docs/, hooks/, dev-workflow/ directories
- Add npx support via github:cexll/myclaude
- Add bin/cli.js with --update command for installed modules
- Add package.json, skills/README.md, PLUGIN_README.md
- Update all references across config.json, README, marketplace.json
- Change default module from dev to do
- Update CHANGELOG with all 59 tags

BREAKING CHANGE: Directory structure changed, docs/hooks removed

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-26 16:57:06 +08:00
cexll
fca5c13c8d docs: add commercial licensing contact email 2026-01-25 22:49:18 +08:00
cexll
c1d3a0a07a fix: correct gitignore to not exclude cmd/codeagent-wrapper
The pattern 'codeagent-wrapper' was matching cmd/codeagent-wrapper/
directory. Changed to '/codeagent-wrapper' to only match root binary.

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-25 18:12:40 +08:00
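
A minimal illustration of the pattern semantics this commit describes (paths are assumed for the example; only the two quoted patterns come from the commit):

```bash
# Old pattern: matches the cmd/codeagent-wrapper/ directory as well as the root binary
echo 'codeagent-wrapper' > .gitignore
git check-ignore -v cmd/codeagent-wrapper/main.go   # matches -> source tree ignored

# New pattern: anchored to the repo root, so only the built binary is ignored
echo '/codeagent-wrapper' > .gitignore
git check-ignore -v cmd/codeagent-wrapper/main.go   # no match (exit code 1)
git check-ignore -v codeagent-wrapper               # matches the root binary only
```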
cexll
2856055e2e fix: support concurrent tasks with unique state files
- Generate unique task_id (timestamp-pid-random) for each /do invocation
- State files now use pattern: do.{task_id}.local.md
- Stop hook scans all state files, aggregates blocking reasons
- Auto-cleanup completed task state files

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-25 18:04:47 +08:00
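
A rough bash sketch of the scheme these bullets describe; the ID format follows the timestamp-pid-random wording above, and the `BLOCKED` marker is purely hypothetical (the real aggregation lives in the Stop hook scripts):

```bash
# Generate a unique task id and its per-task state file
task_id="$(date +%s)-$$-$RANDOM"
state_file=".claude/do.${task_id}.local.md"
touch "$state_file"

# Stop-hook-style pass: scan all task state files and collect blocking reasons
for f in .claude/do.*.local.md; do
  [ -e "$f" ] || continue
  grep -H 'BLOCKED' "$f" || true
done
```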
cexll
a9c1e8178f fix: correct build path in release workflow
- Remove obsolete cmd/codeagent directory
- Fix release.yml build path to ./cmd/codeagent-wrapper

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-25 17:52:34 +08:00
cexll
1afeca88ae fix: increase stdoutDrainTimeout from 100ms to 500ms
Resolves intermittent "completed without agent_message output" errors
when Claude CLI exits before all stdout data is read.

- internal/executor/executor.go:43
- internal/app/app.go:27
- Add benchmark script for stability testing

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-25 17:45:38 +08:00
cexll
326ad85c74 fix: use ANTHROPIC_AUTH_TOKEN for Claude CLI env injection
- Change env var from ANTHROPIC_API_KEY to ANTHROPIC_AUTH_TOKEN
- Add Backend field propagation in taskSpec (cli.go)
- Add stderr logging for injected env vars with API key masking
- Add comprehensive tests for env injection flow

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-24 15:20:29 +08:00
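
An illustrative shell equivalent of the injection and masking behavior listed above; the token value is a placeholder and the masking width is an assumption, only the variable name and CLI flags come from this repository's docs:

```bash
export ANTHROPIC_AUTH_TOKEN="sk-xxxxxxxxxxxxxxxx"                       # placeholder token
echo "inject ANTHROPIC_AUTH_TOKEN=${ANTHROPIC_AUTH_TOKEN:0:6}****" >&2  # masked stderr log
claude --output-format stream-json -p "hello"                           # wrapper-style call
```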
cexll
e66bec0083 test: use prefix match for version flag tests
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-24 14:27:43 +08:00
cexll
eb066395c2 docs: restructure root READMEs with do as recommended workflow
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-24 14:27:41 +08:00
cexll
b49dad842a docs: update do/omo/sparv module READMEs with detailed workflows
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-24 14:27:39 +08:00
cexll
d98086c661 docs: add README for bmad and requirements modules
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-24 14:27:37 +08:00
cexll
0420646258 update codeagent version 2026-01-24 14:01:54 +08:00
cexll
19a8d8e922 refactor: rename feature-dev to do workflow
- Rename skills/feature-dev/ → skills/do/
- Update config.json module name and paths
- Shorter command: /do instead of /feature-dev
- State file: .claude/do.local.md
- Completion promise: <promise>DO_COMPLETE</promise>

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-23 20:29:28 +08:00
NieiR
669b1d82ce fix(gemini): read GEMINI_MODEL from ~/.gemini/.env (#131)
When using gemini backend without --model flag, now automatically
reads GEMINI_MODEL from ~/.gemini/.env file, consistent with how
claude backend reads model from settings.
2026-01-23 12:03:50 +08:00
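
A hedged sketch of the fallback in #131, assuming a plain KEY=value `.env` format; the actual parsing happens inside the Go wrapper, and passing the model via `-m` is an assumption for illustration:

```bash
MODEL=""   # would normally come from --model
if [ -z "$MODEL" ] && [ -f "$HOME/.gemini/.env" ]; then
  # take the value after the first '=' on the GEMINI_MODEL line
  MODEL="$(grep -E '^GEMINI_MODEL=' "$HOME/.gemini/.env" | cut -d= -f2-)"
fi
if [ -n "$MODEL" ]; then
  gemini -o stream-json -y -m "$MODEL" -p "hello"
else
  gemini -o stream-json -y -p "hello"
fi
```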
cexll
a21c31fd89 feat: add feature-dev skill with 7-phase workflow
Structured feature development with codeagent orchestration:
- Discovery, Exploration, Clarification, Architecture phases
- Implementation, Review, Summary phases
- Parallel agent execution via code-explorer, code-architect, etc.
- Hook-based workflow automation with validation scripts

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-23 12:01:31 +08:00
cexll
773f133111 chore: ignore references directory
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-23 12:01:04 +08:00
cexll
4f5d24531c feat(install): support \${CLAUDE_PLUGIN_ROOT} variable in hooks config
- find_module_hooks now returns (hooks_config, plugin_root_path) tuple
- Add _replace_hook_variables() for recursive placeholder substitution
- Add feature-dev module config to config.json

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-23 12:00:55 +08:00
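
The real work is done by `_replace_hook_variables()` in install.py; the command below is only an equivalent shell illustration of the placeholder expansion, with a hypothetical resolved plugin path:

```bash
PLUGIN_ROOT="$HOME/.claude/plugins/do"                      # hypothetical resolved path
sed "s|\${CLAUDE_PLUGIN_ROOT}|$PLUGIN_ROOT|g" hooks.json    # expand the placeholder
```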
cexll
cc24d43c8b fix(codeagent): validate non-empty output message before printing
Return exit code 1 when backend returns empty result.Message with exit_code=0.
Prevents silent failures where no output is produced.

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-23 12:00:47 +08:00
cexll
27d4ac8afd chore: add go.work.sum for workspace dependencies
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-23 12:00:38 +08:00
cexll
2e5d12570d fix: add missing cmd/codeagent/main.go entry point
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-20 17:45:50 +08:00
cexll
7c89c40e8f fix(ci): update release workflow build path for new directory structure
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-20 17:42:10 +08:00
cexll
fa617d1599 refactor: restructure codebase to internal/ directory with modular architecture
- Move all source files to internal/{app,backend,config,executor,logger,parser,utils}
- Integrate third-party libraries: zerolog, goccy/go-json, gopsutil, cobra/viper
- Add comprehensive unit tests for utils package (94.3% coverage)
- Add performance benchmarks for string operations
- Fix error display: cleanup warnings no longer pollute Recent Errors
- Add GitHub Actions CI workflow
- Add Makefile for build automation
- Add README documentation

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-20 17:34:26 +08:00
cexll
90c630e30e fix: write PATH config to both profile and rc files (#128)
Previously install.sh only wrote PATH to ~/.bashrc (or ~/.zshrc),
which doesn't work for non-interactive login shells like `bash -lc`
used by Claude Code, because Ubuntu's ~/.bashrc exits early for
non-interactive shells.

Now writes to both:
- bash: ~/.bashrc + ~/.profile
- zsh: ~/.zshrc + ~/.zprofile

Each file is checked independently for idempotency.

Closes #128

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-17 22:22:41 +08:00
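
A condensed sketch of the idempotent dual-file write described above (bash case; the zsh case targets `~/.zshrc` and `~/.zprofile`):

```bash
line='export PATH="$HOME/.claude/bin:$PATH"'
for rc in "$HOME/.bashrc" "$HOME/.profile"; do
  grep -qsF "$line" "$rc" || echo "$line" >> "$rc"   # append only if not already present
done
```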
cexll
25bbbc32a7 feat: add course module with dev, product-requirements and test-cases skills
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-17 16:28:59 +08:00
cexll
d8304bf2b9 feat: add hooks management to install.py
- Add merge_hooks_to_settings: merge module hooks into settings.json
- Add unmerge_hooks_from_settings: remove module hooks from settings.json
- Add find_module_hooks: detect hooks.json in module directories
- Update execute_module: auto-merge hooks after install
- Update uninstall_module: auto-remove hooks on uninstall
- Add __module__ marker to track hook ownership

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-17 14:16:16 +08:00
cexll
7240e08900 feat: add sparv module and interactive plugin manager
- Add sparv module to config.json (SPARV workflow v1.1)
- Disable essentials module by default
- Add --status to show installation status of all modules
- Add --uninstall to remove installed modules
- Add interactive management mode (install/uninstall via menu)
- Add filesystem-based installation detection
- Support both module numbers and names in selection
- Merge install status instead of overwriting

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-17 13:38:52 +08:00
cexll
e122d8ff25 feat: add sparv enhanced rules v1.1
- Add Uncertainty Declaration (G3): declare assumptions when score < 2
- Add Requirement Routing: Quick/Full mode based on scope
- Add Context Acquisition: optional kb.md check before Specify
- Add Knowledge Base: .sparv/kb.md for cross-session patterns
- Add changelog-update.sh: maintain CHANGELOG by type

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-17 13:12:43 +08:00
马雨
6985a30a6a docs: update 'Agent Hierarchy' model for frontend-ui-ux-engineer and document-writer in README (#127)
* docs: update mappings for frontend-ui-ux-engineer and document-writer in README

* docs: update 'Agent Hierarchy' model for frontend-ui-ux-engineer and document-writer in README
2026-01-17 12:32:32 +08:00
马雨
dd4c12b8e2 docs: update mappings for frontend-ui-ux-engineer and document-writer in README (#126) 2026-01-17 12:04:12 +08:00
cexll
a88315d92d feat: add sparv skill to claude-plugin v1.1.0
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-16 22:26:31 +08:00
cexll
d1f13b3379 remove .sparv 2026-01-16 14:35:11 +08:00
cexll
5d362852ab feat sparv skill 2026-01-16 14:34:03 +08:00
176 changed files with 10444 additions and 6561 deletions


@@ -15,33 +15,33 @@
"source": "./skills/omo",
"category": "development"
},
{
"name": "dev",
"description": "Lightweight development workflow with requirements clarification, parallel codex execution, and mandatory 90% test coverage",
"version": "5.6.1",
"source": "./dev-workflow",
"category": "development"
},
{
"name": "requirements",
"description": "Requirements-driven development workflow with quality gates for practical feature implementation",
"version": "5.6.1",
"source": "./requirements-driven-workflow",
"source": "./agents/requirements",
"category": "development"
},
{
"name": "bmad",
"description": "Full BMAD agile workflow with role-based agents (PO, Architect, SM, Dev, QA) and interactive approval gates",
"version": "5.6.1",
"source": "./bmad-agile-workflow",
"source": "./agents/bmad",
"category": "development"
},
{
"name": "dev-kit",
"description": "Essential development commands for coding, debugging, testing, optimization, and documentation",
"version": "5.6.1",
"source": "./development-essentials",
"source": "./agents/development-essentials",
"category": "productivity"
},
{
"name": "sparv",
"description": "Minimal SPARV workflow (Specify→Plan→Act→Review→Vault) with 10-point spec gate, unified journal, 2-action saves, 3-failure protocol, and EHRB risk detection",
"version": "1.1.0",
"source": "./skills/sparv",
"category": "development"
}
]
}


@@ -74,7 +74,7 @@ jobs:
if [ "${{ matrix.goos }}" = "windows" ]; then
OUTPUT_NAME="${OUTPUT_NAME}.exe"
fi
go build -ldflags="-s -w -X main.version=${VERSION}" -o ${OUTPUT_NAME} .
go build -ldflags="-s -w -X main.version=${VERSION}" -o ${OUTPUT_NAME} ./cmd/codeagent-wrapper
chmod +x ${OUTPUT_NAME}
echo "artifact_path=codeagent-wrapper/${OUTPUT_NAME}" >> $GITHUB_OUTPUT

.gitignore

@@ -7,3 +7,4 @@
__pycache__
.coverage
coverage.out
references


@@ -2,66 +2,451 @@
All notable changes to this project will be documented in this file.
## [5.6.4] - 2026-01-15
## [6.0.0] - 2026-01-26
### 🚀 Features
- add reasoning effort config for codex backend
- default to skip-permissions and bypass-sandbox
- add multi-agent support with yolo mode
- add omo module for multi-agent orchestration
- add intelligent backend selection based on task complexity (#61)
- v5.4.0 structured execution report (#94)
- add millisecond-precision timestamps to all log entries (#91)
- skill-install install script and security scan
- add uninstall scripts with selective module removal
### 🐛 Bug Fixes
- filter codex stderr noise logs
- use config override for codex reasoning effort
- propagate SkipPermissions to parallel tasks (#113)
- add timeout for Windows process termination
- reject dash as workdir parameter (#118)
- add sleep in fake script to prevent CI race condition
- fix gemini env load
- fix omo
- fix codeagent skill TaskOutput
- fix Gemini init event session_id not being extracted (#111)
- Windows backend exit: terminate process tree via taskkill + turn.completed support (#108)
- support model parameter for all backends, auto-inject from settings (#105)
- replace setx with reg add to avoid 1024-char PATH truncation (#101)
- remove log noise from unknown event formats (#96)
- prevent duplicate PATH entries on reinstall (#95)
- Minor issues #12 and #13 - ASCII mode and performance optimization
- correct settings.json filename and bump version to v5.2.8
- allow claude backend to read env from setting.json while preventing recursion (#92)
- comprehensive security and quality improvements for PR #85 & #87 (#90)
- Improve backend termination after message and extend timeout (#86)
- Parser重复解析优化 + 严重bug修复 + PR #86兼容性 (#88)
- filter noisy stderr output from gemini backend (#83)
- fix WSL install.sh formatting issue (#78)
- fix PID confusion in multi-backend parallel logs and remove wrapper format (#74) (#76)
- support `npx github:cexll/myclaude` for installation and execution
- default module changed from `dev` to `do`
### 🚜 Refactor
- remove sisyphus agent and unused code
- streamline agent documentation and remove sisyphus
- restructure: create `agents/` and move `bmad-agile-workflow` → `agents/bmad`, `requirements-driven-workflow` → `agents/requirements`, `development-essentials` → `agents/development-essentials`
- remove legacy directories: `docs/`, `hooks/`, `dev-workflow/`
- update references across `config.json`, `README.md`, `README_CN.md`, `marketplace.json`, etc.
### 📚 Documentation
- add OmO workflow to README and fix plugin marketplace structure
- update FAQ for default bypass/skip-permissions behavior
- add FAQ section
- update troubleshooting with idempotent PATH commands (#95)
- add `skills/README.md` and `PLUGIN_README.md`
### 💼 Other
- add `package.json` and `bin/cli.js` for npx packaging
## [6.1.5] - 2026-01-25
### 🐛 Bug Fixes
- correct gitignore to not exclude cmd/codeagent-wrapper
## [6.1.4] - 2026-01-25
### 🐛 Bug Fixes
- support concurrent tasks with unique state files
## [6.1.3] - 2026-01-25
### 🐛 Bug Fixes
- correct build path in release workflow
- increase stdoutDrainTimeout from 100ms to 500ms
## [6.1.2] - 2026-01-24
### 🐛 Bug Fixes
- use ANTHROPIC_AUTH_TOKEN for Claude CLI env injection
### 💼 Other
- update codeagent version
### 📚 Documentation
- restructure root READMEs with do as recommended workflow
- update do/omo/sparv module READMEs with detailed workflows
- add README for bmad and requirements modules
### 🧪 Testing
- use prefix match for version flag tests
## [6.1.1] - 2026-01-23
### 🚜 Refactor
- rename feature-dev to do workflow
## [6.1.0] - 2026-01-23
### ⚙️ Miscellaneous Tasks
- ignore references directory
- add go.work.sum for workspace dependencies
### 🐛 Bug Fixes
- read GEMINI_MODEL from ~/.gemini/.env ([#131](https://github.com/cexll/myclaude/issues/131))
- validate non-empty output message before printing
### 🚀 Features
- add feature-dev skill with 7-phase workflow
- support \${CLAUDE_PLUGIN_ROOT} variable in hooks config
## [6.0.0-alpha1] - 2026-01-20
### 🐛 Bug Fixes
- add missing cmd/codeagent/main.go entry point
- update release workflow build path for new directory structure
- write PATH config to both profile and rc files ([#128](https://github.com/cexll/myclaude/issues/128))
### 🚀 Features
- add course module with dev, product-requirements and test-cases skills
- add hooks management to install.py
### 🚜 Refactor
- restructure codebase to internal/ directory with modular architecture
## [5.6.7] - 2026-01-17
### 💼 Other
- remove .sparv
### 📚 Documentation
- update 'Agent Hierarchy' model for frontend-ui-ux-engineer and document-writer in README ([#127](https://github.com/cexll/myclaude/issues/127))
- update mappings for frontend-ui-ux-engineer and document-writer in README ([#126](https://github.com/cexll/myclaude/issues/126))
### 🚀 Features
- add sparv module and interactive plugin manager
- add sparv enhanced rules v1.1
- add sparv skill to claude-plugin v1.1.0
- feat sparv skill
## [5.6.6] - 2026-01-16
### 🐛 Bug Fixes
- remove extraneous dash arg for opencode stdin mode ([#124](https://github.com/cexll/myclaude/issues/124))
### 💼 Other
- update readme
## [5.6.5] - 2026-01-16
### 🐛 Bug Fixes
- correct default models for oracle and librarian agents ([#120](https://github.com/cexll/myclaude/issues/120))
### 🚀 Features
- feat dev skill
## [5.6.4] - 2026-01-15
### 🐛 Bug Fixes
- filter codex 0.84.0 stderr noise logs ([#122](https://github.com/cexll/myclaude/issues/122))
- filter codex stderr noise logs
## [5.6.3] - 2026-01-14
### ⚙️ Miscellaneous Tasks
- bump codeagent-wrapper version to 5.6.3
### 🐛 Bug Fixes
- update version tests to match 5.6.3
- use config override for codex reasoning effort
## [5.6.2] - 2026-01-14
### 🐛 Bug Fixes
- propagate SkipPermissions to parallel tasks ([#113](https://github.com/cexll/myclaude/issues/113))
- add timeout for Windows process termination
- reject dash as workdir parameter ([#118](https://github.com/cexll/myclaude/issues/118))
### 📚 Documentation
- add OmO workflow to README and fix plugin marketplace structure
### 🚜 Refactor
- remove sisyphus agent and unused code
## [5.6.1] - 2026-01-13
### 🐛 Bug Fixes
- add sleep in fake script to prevent CI race condition
- fix gemini env load
- fix omo
### 🚀 Features
- add reasoning effort config for codex backend
## [5.6.0] - 2026-01-13
### 📚 Documentation
- update FAQ for default bypass/skip-permissions behavior
### 🚀 Features
- default to skip-permissions and bypass-sandbox
- add omo module for multi-agent orchestration
### 🚜 Refactor
- streamline agent documentation and remove sisyphus
## [5.5.0] - 2026-01-12
### 🐛 Bug Fixes
- fix Gemini init event session_id not being extracted ([#111](https://github.com/cexll/myclaude/issues/111))
- fix codeagent skill TaskOutput
### 💼 Other
- Merge branch 'master' of github.com:cexll/myclaude
- add test-cases skill
- add browser skill
- BMAD and Requirements-Driven support generating documents based on semantics (#82)
### 🚀 Features
- add multi-agent support with yolo mode
## [5.4.4] - 2026-01-08
### 💼 Other
- fix Windows backend exit: terminate process tree via taskkill + turn.completed support ([#108](https://github.com/cexll/myclaude/issues/108))
## [5.4.3] - 2026-01-06
### 🐛 Bug Fixes
- support model parameter for all backends, auto-inject from settings ([#105](https://github.com/cexll/myclaude/issues/105))
### 📚 Documentation
- add FAQ Q5 for permission/sandbox env vars
### 🚀 Features
- feat skill-install install script and security scan
- add uninstall scripts with selective module removal
## [5.4.2] - 2025-12-31
### 🐛 Bug Fixes
- replace setx with reg add to avoid 1024-char PATH truncation ([#101](https://github.com/cexll/myclaude/issues/101))
## [5.4.1] - 2025-12-26
### 🐛 Bug Fixes
- remove log noise from unknown event formats ([#96](https://github.com/cexll/myclaude/issues/96))
- prevent duplicate PATH entries on reinstall ([#95](https://github.com/cexll/myclaude/issues/95))
### 📚 Documentation
- add FAQ section
- update troubleshooting with idempotent PATH commands ([#95](https://github.com/cexll/myclaude/issues/95))
### 🚀 Features
- Add intelligent backend selection based on task complexity ([#61](https://github.com/cexll/myclaude/issues/61))
## [5.4.0] - 2025-12-24
### 🐛 Bug Fixes
- Minor issues #12 and #13 - ASCII mode and performance optimization
- code review fixes for PR #94 - all critical and major issues resolved
### 🚀 Features
- v5.4.0 structured execution report ([#94](https://github.com/cexll/myclaude/issues/94))
## [5.2.8] - 2025-12-22
### ⚙️ Miscellaneous Tasks
- simplify release workflow to use GitHub auto-generated notes
### 🐛 Bug Fixes
- correct settings.json filename and bump version to v5.2.8
## [5.2.7] - 2025-12-21
### ⚙️ Miscellaneous Tasks
- bump version to v5.2.7
### 🐛 Bug Fixes
- allow claude backend to read env from setting.json while preventing recursion ([#92](https://github.com/cexll/myclaude/issues/92))
- comprehensive security and quality improvements for PR #85 & #87 ([#90](https://github.com/cexll/myclaude/issues/90))
- Parser duplicate-parse optimization + critical bug fixes + PR #86 compatibility ([#88](https://github.com/cexll/myclaude/issues/88))
### 💼 Other
- Improve backend termination after message and extend timeout ([#86](https://github.com/cexll/myclaude/issues/86))
### 🚀 Features
- add millisecond-precision timestamps to all log entries ([#91](https://github.com/cexll/myclaude/issues/91))
## [5.2.6] - 2025-12-19
### 🐛 Bug Fixes
- filter noisy stderr output from gemini backend ([#83](https://github.com/cexll/myclaude/issues/83))
- fix WSL install.sh formatting issue ([#78](https://github.com/cexll/myclaude/issues/78))
### 💼 Other
- update all readme
- BMAD and Requirements-Driven support generating documents based on semantics ([#82](https://github.com/cexll/myclaude/issues/82))
## [5.2.5] - 2025-12-17
### 🐛 Bug Fixes
- fix PID confusion in multi-backend parallel logs and remove wrapper format ([#74](https://github.com/cexll/myclaude/issues/74)) ([#76](https://github.com/cexll/myclaude/issues/76))
- replace "Codex" to "codeagent" in dev-plan-generator subagent
- fix Windows python install.py
### 💼 Other
- Merge pull request #71 from aliceric27/master
- Merge branch 'cexll:master' into master
- Merge pull request #72 from changxvv/master
- update changelog
- update codeagent skill backend select
## [5.2.4] - 2025-12-16


@@ -7,12 +7,12 @@
help:
@echo "Claude Code Multi-Agent Workflow - Quick Deployment"
@echo ""
@echo "Recommended installation: python3 install.py --install-dir ~/.claude"
@echo "Recommended installation: npx github:cexll/myclaude"
@echo ""
@echo "Usage: make [target]"
@echo ""
@echo "Targets:"
@echo " install - LEGACY: install all configurations (prefer install.py)"
@echo " install - LEGACY: install all configurations (prefer npx github:cexll/myclaude)"
@echo " deploy-bmad - Deploy BMAD workflow (bmad-pilot)"
@echo " deploy-requirements - Deploy Requirements workflow (requirements-pilot)"
@echo " deploy-essentials - Deploy Development Essentials workflow"
@@ -31,16 +31,16 @@ CLAUDE_CONFIG_DIR = ~/.claude
SPECS_DIR = .claude/specs
# Workflow directories
BMAD_DIR = bmad-agile-workflow
REQUIREMENTS_DIR = requirements-driven-workflow
ESSENTIALS_DIR = development-essentials
BMAD_DIR = agents/bmad
REQUIREMENTS_DIR = agents/requirements
ESSENTIALS_DIR = agents/development-essentials
ADVANCED_DIR = advanced-ai-agents
OUTPUT_STYLES_DIR = output-styles
# Install all configurations
install: deploy-all
@echo "⚠️ LEGACY PATH: make install will be removed in future versions."
@echo " Prefer: python3 install.py --install-dir ~/.claude"
@echo " Prefer: npx github:cexll/myclaude"
@echo "✅ Installation complete!"
# Deploy BMAD workflow
@@ -159,4 +159,3 @@ changelog:
@echo ""
@echo "Preview the changes:"
@echo " git diff CHANGELOG.md"

PLUGIN_README.md

@@ -0,0 +1,18 @@
# Plugin System
Claude Code plugins for this repo are defined in `.claude-plugin/marketplace.json`.
## Install
```bash
/plugin marketplace add cexll/myclaude
/plugin list
```
## Available Plugins
- `bmad` - BMAD workflow (`./agents/bmad`)
- `requirements` - requirements-driven workflow (`./agents/requirements`)
- `dev-kit` - development essentials (`./agents/development-essentials`)
- `omo` - orchestration skill (`./skills/omo`)
- `sparv` - SPARV workflow (`./skills/sparv`)

README.md

@@ -3,404 +3,102 @@
# Claude Code Multi-Agent Workflow System
[![Run in Smithery](https://smithery.ai/badge/skills/cexll)](https://smithery.ai/skills?ns=cexll&utm_source=github&utm_medium=badge)
[![License: AGPL-3.0](https://img.shields.io/badge/License-AGPL_v3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![Claude Code](https://img.shields.io/badge/Claude-Code-blue)](https://claude.ai/code)
[![Version](https://img.shields.io/badge/Version-5.6-green)](https://github.com/cexll/myclaude)
[![Version](https://img.shields.io/badge/Version-6.x-green)](https://github.com/cexll/myclaude)
> AI-powered development automation with multi-backend execution (Codex/Claude/Gemini)
> AI-powered development automation with multi-backend execution (Codex/Claude/Gemini/OpenCode)
## Core Concept: Multi-Backend Architecture
This system leverages a **dual-agent architecture** with pluggable AI backends:
| Role | Agent | Responsibility |
|------|-------|----------------|
| **Orchestrator** | Claude Code | Planning, context gathering, verification, user interaction |
| **Executor** | codeagent-wrapper | Code editing, test execution (Codex/Claude/Gemini backends) |
**Why this separation?**
- Claude Code excels at understanding context and orchestrating complex workflows
- Specialized backends (Codex for code, Claude for reasoning, Gemini for prototyping) excel at focused execution
- Backend selection via `--backend codex|claude|gemini` matches the model to the task
## Quick Start (Please execute in PowerShell on Windows)
## Quick Start
```bash
git clone https://github.com/cexll/myclaude.git
cd myclaude
python3 install.py --install-dir ~/.claude
npx github:cexll/myclaude
```
## Workflows Overview
## Modules Overview
### 0. OmO Multi-Agent Orchestrator (Recommended for Complex Tasks)
**Intelligent multi-agent orchestration that routes tasks to specialized agents based on risk signals.**
```bash
/omo "analyze and fix this authentication bug"
```
**Agent Hierarchy:**
| Agent | Role | Backend | Model |
|-------|------|---------|-------|
| `oracle` | Technical advisor | Claude | claude-opus-4-5 |
| `librarian` | External research | Claude | claude-sonnet-4-5 |
| `explore` | Codebase search | OpenCode | grok-code |
| `develop` | Code implementation | Codex | gpt-5.2 |
| `frontend-ui-ux-engineer` | UI/UX specialist | Gemini | gemini-3-pro |
| `document-writer` | Documentation | Gemini | gemini-3-flash |
**Routing Signals (Not Fixed Pipeline):**
- Code location unclear → `explore`
- External library/API → `librarian`
- Risky/multi-file change → `oracle`
- Implementation needed → `develop` / `frontend-ui-ux-engineer`
**Common Recipes:**
- Explain code: `explore`
- Small fix with known location: `develop` directly
- Bug fix, location unknown: `explore → develop`
- Cross-cutting refactor: `explore → oracle → develop`
- External API integration: `explore + librarian → oracle → develop`
**Best For:** Complex bug investigation, multi-file refactoring, architecture decisions
---
### 1. Dev Workflow (Recommended)
**The primary workflow for most development tasks.**
```bash
/dev "implement user authentication with JWT"
```
**6-Step Process:**
1. **Requirements Clarification** - Interactive Q&A to clarify scope
2. **Codex Deep Analysis** - Codebase exploration and architecture decisions
3. **Dev Plan Generation** - Structured task breakdown with test requirements
4. **Parallel Execution** - Codex executes tasks concurrently
5. **Coverage Validation** - Enforce ≥90% test coverage
6. **Completion Summary** - Report with file changes and coverage stats
**Key Features:**
- Claude Code orchestrates, Codex executes all code changes
- Automatic task parallelization for speed
- Mandatory 90% test coverage gate
- Rollback on failure
**Best For:** Feature development, refactoring, bug fixes with tests
---
### 2. BMAD Agile Workflow
**Full enterprise agile methodology with 6 specialized agents.**
```bash
/bmad-pilot "build e-commerce checkout system"
```
**Agents:**
| Agent | Role |
|-------|------|
| Product Owner | Requirements & user stories |
| Architect | System design & tech decisions |
| Tech Lead | Sprint planning & task breakdown |
| Developer | Implementation |
| Code Reviewer | Quality assurance |
| QA Engineer | Testing & validation |
**Process:**
```
Requirements → Architecture → Sprint Plan → Development → Review → QA
↓ ↓ ↓ ↓ ↓ ↓
PRD.md DESIGN.md SPRINT.md Code REVIEW.md TEST.md
```
**Best For:** Large features, team coordination, enterprise projects
---
### 3. Requirements-Driven Workflow
**Lightweight requirements-to-code pipeline.**
```bash
/requirements-pilot "implement API rate limiting"
```
**Process:**
1. Requirements generation with quality scoring
2. Implementation planning
3. Code generation
4. Review and testing
**Best For:** Quick prototypes, well-defined features
---
### 4. Development Essentials
**Direct commands for daily coding tasks.**
| Command | Purpose |
|---------|---------|
| `/code` | Implement a feature |
| `/debug` | Debug an issue |
| `/test` | Write tests |
| `/review` | Code review |
| `/optimize` | Performance optimization |
| `/refactor` | Code refactoring |
| `/docs` | Documentation |
**Best For:** Quick tasks, no workflow overhead needed
## Enterprise Workflow Features
- **Multi-backend execution:** `codeagent-wrapper --backend codex|claude|gemini` (default `codex`) so you can match the model to the task without changing workflows.
- **GitHub workflow commands:** `/gh-create-issue "short need"` creates structured issues; `/gh-issue-implement 123` pulls issue #123, drives development, and prepares the PR.
- **Skills + hooks activation:** .claude/hooks run automation (tests, reviews), while `.claude/skills/skill-rules.json` auto-suggests the right skills. Keep hooks enabled in `.claude/settings.json` to activate the enterprise workflow helpers.
---
## Version Requirements
### Codex CLI
**Minimum version:** Check compatibility with your installation
The codeagent-wrapper uses these Codex CLI features:
- `codex e` - Execute commands (shorthand for `codex exec`)
- `--skip-git-repo-check` - Skip git repository validation
- `--json` - JSON stream output format
- `-C <workdir>` - Set working directory
- `resume <session_id>` - Resume previous sessions
**Verify Codex CLI is installed:**
```bash
which codex
codex --version
```
### Claude CLI
**Minimum version:** Check compatibility with your installation
Required features:
- `--output-format stream-json` - Streaming JSON output format
- `--setting-sources` - Control setting sources (prevents infinite recursion)
- `--dangerously-skip-permissions` - Skip permission prompts (use with caution)
- `-p` - Prompt input flag
- `-r <session_id>` - Resume sessions
**Security Note:** The wrapper adds `--dangerously-skip-permissions` for Claude by default. Set `CODEAGENT_SKIP_PERMISSIONS=false` to disable if you need permission prompts.
**Verify Claude CLI is installed:**
```bash
which claude
claude --version
```
### Gemini CLI
**Minimum version:** Check compatibility with your installation
Required features:
- `-o stream-json` - JSON stream output format
- `-y` - Auto-approve prompts (non-interactive mode)
- `-r <session_id>` - Resume sessions
- `-p` - Prompt input flag
**Verify Gemini CLI is installed:**
```bash
which gemini
gemini --version
```
---
| Module | Description | Documentation |
|--------|-------------|---------------|
| [do](skills/do/README.md) | **Recommended** - 7-phase feature development with codeagent orchestration | `/do` command |
| [omo](skills/omo/README.md) | Multi-agent orchestration with intelligent routing | `/omo` command |
| [bmad](agents/bmad/README.md) | BMAD agile workflow with 6 specialized agents | `/bmad-pilot` command |
| [requirements](agents/requirements/README.md) | Lightweight requirements-to-code pipeline | `/requirements-pilot` command |
| [essentials](agents/development-essentials/README.md) | Core development commands and utilities | `/code`, `/debug`, etc. |
| [sparv](skills/sparv/README.md) | SPARV workflow (Specify→Plan→Act→Review→Vault) | `/sparv` command |
| course | Course development (combines dev + product-requirements + test-cases) | Composite module |
## Installation
### Modular Installation (Recommended)
```bash
# Install all enabled modules (dev + essentials by default)
python3 install.py --install-dir ~/.claude
# Interactive installer (recommended)
npx github:cexll/myclaude
# Install specific module
python3 install.py --module dev
# List installable items (modules / skills / wrapper)
npx github:cexll/myclaude --list
# List available modules
python3 install.py --list-modules
# Force overwrite existing files
python3 install.py --force
# Custom install directory / overwrite
npx github:cexll/myclaude --install-dir ~/.claude --force
```
### Available Modules
### Module Configuration
| Module | Default | Description |
|--------|---------|-------------|
| `dev` | ✓ Enabled | Dev workflow + Codex integration |
| `essentials` | ✓ Enabled | Core development commands |
| `bmad` | Disabled | Full BMAD agile workflow |
| `requirements` | Disabled | Requirements-driven workflow |
### What Gets Installed
```
~/.claude/
├── bin/
│ └── codeagent-wrapper # Main executable
├── CLAUDE.md # Core instructions and role definition
├── commands/ # Slash commands (/dev, /code, etc.)
├── agents/ # Agent definitions
├── skills/
│ └── codex/
│ └── SKILL.md # Codex integration skill
├── config.json # Configuration
└── installed_modules.json # Installation status
```
### Customizing Installation Directory
By default, myclaude installs to `~/.claude`. You can customize this using the `INSTALL_DIR` environment variable:
```bash
# Install to custom directory
INSTALL_DIR=/opt/myclaude bash install.sh
# Update your PATH accordingly
export PATH="/opt/myclaude/bin:$PATH"
```
**Directory Structure:**
- `$INSTALL_DIR/bin/` - codeagent-wrapper binary
- `$INSTALL_DIR/skills/` - Skill definitions
- `$INSTALL_DIR/config.json` - Configuration file
- `$INSTALL_DIR/commands/` - Slash command definitions
- `$INSTALL_DIR/agents/` - Agent definitions
**Note:** When using a custom installation directory, ensure that `$INSTALL_DIR/bin` is added to your `PATH` environment variable.
### Configuration
Edit `config.json` to customize:
Edit `config.json` to enable/disable modules:
```json
{
"version": "1.0",
"install_dir": "~/.claude",
"modules": {
"dev": {
"enabled": true,
"operations": [
{"type": "merge_dir", "source": "dev-workflow"},
{"type": "copy_file", "source": "memorys/CLAUDE.md", "target": "CLAUDE.md"},
{"type": "copy_file", "source": "skills/codex/SKILL.md", "target": "skills/codex/SKILL.md"},
{"type": "run_command", "command": "bash install.sh"}
]
}
"bmad": { "enabled": false },
"requirements": { "enabled": false },
"essentials": { "enabled": false },
"omo": { "enabled": false },
"sparv": { "enabled": false },
"do": { "enabled": true },
"course": { "enabled": false }
}
}
```
**Operation Types:**
| Type | Description |
|------|-------------|
| `merge_dir` | Merge subdirs (commands/, agents/) into install dir |
| `copy_dir` | Copy entire directory |
| `copy_file` | Copy single file to target path |
| `run_command` | Execute shell command |
---
## Codex Integration
The `codex` skill enables Claude Code to delegate code execution to Codex CLI.
### Usage in Workflows
```bash
# Codex is invoked via the skill
codeagent-wrapper - <<'EOF'
implement @src/auth.ts with JWT validation
EOF
```
### Parallel Execution
```bash
codeagent-wrapper --parallel <<'EOF'
---TASK---
id: backend_api
workdir: /project/backend
---CONTENT---
implement REST endpoints for /api/users
---TASK---
id: frontend_ui
workdir: /project/frontend
dependencies: backend_api
---CONTENT---
create React components consuming the API
EOF
```
### Install Codex Wrapper
```bash
# Automatic (via dev module)
python3 install.py --module dev
# Manual
bash install.sh
```
#### Windows
Windows installs place `codeagent-wrapper.exe` in `%USERPROFILE%\bin`.
```powershell
# PowerShell (recommended)
powershell -ExecutionPolicy Bypass -File install.ps1
# Batch (cmd)
install.bat
```
**Add to PATH** (if installer doesn't detect it):
```powershell
# PowerShell - persistent for current user
[Environment]::SetEnvironmentVariable('PATH', "$HOME\bin;" + [Environment]::GetEnvironmentVariable('PATH','User'), 'User')
# PowerShell - current session only
$Env:PATH = "$HOME\bin;$Env:PATH"
```
```batch
REM cmd.exe - persistent for current user (use PowerShell method above instead)
REM WARNING: This expands %PATH% which includes system PATH, causing duplication
REM Note: Using reg add instead of setx to avoid 1024-character truncation limit
reg add "HKCU\Environment" /v Path /t REG_EXPAND_SZ /d "%USERPROFILE%\bin;%PATH%" /f
```
---
## Workflow Selection Guide
| Scenario | Recommended Workflow |
|----------|---------------------|
| New feature with tests | `/dev` |
| Quick bug fix | `/debug` or `/code` |
| Large multi-sprint feature | `/bmad-pilot` |
| Prototype or POC | `/requirements-pilot` |
| Code review | `/review` |
| Performance issue | `/optimize` |
| Scenario | Recommended |
|----------|-------------|
| Feature development (default) | `/do` |
| Bug investigation + fix | `/omo` |
| Large enterprise project | `/bmad-pilot` |
| Quick prototype | `/requirements-pilot` |
| Simple task | `/code`, `/debug` |
---
## Core Architecture
| Role | Agent | Responsibility |
|------|-------|----------------|
| **Orchestrator** | Claude Code | Planning, context gathering, verification |
| **Executor** | codeagent-wrapper | Code editing, test execution (Codex/Claude/Gemini/OpenCode) |
## Backend CLI Requirements
| Backend | Required Features |
|---------|-------------------|
| Codex | `codex e`, `--json`, `-C`, `resume` |
| Claude | `--output-format stream-json`, `-r` |
| Gemini | `-o stream-json`, `-y`, `-r` |
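As a quick compatibility check, the flags in the table can be exercised directly (prompt text is arbitrary); each command should emit stream / newline-delimited JSON on a compatible CLI:
```bash
codex e --json -C . "say ok"
claude --output-format stream-json -p "say ok"
gemini -o stream-json -y -p "say ok"
```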
## Directory Structure After Installation
```
~/.claude/
├── bin/codeagent-wrapper
├── CLAUDE.md
├── commands/
├── agents/
├── skills/
└── config.json
```
## Documentation
- [codeagent-wrapper](codeagent-wrapper/README.md)
- [Plugin System](PLUGIN_README.md)
## Troubleshooting
@@ -408,214 +106,41 @@ reg add "HKCU\Environment" /v Path /t REG_EXPAND_SZ /d "%USERPROFILE%\bin;%PATH%
**Codex wrapper not found:**
```bash
# Installer auto-adds PATH, check if configured
if [[ ":$PATH:" != *":$HOME/.claude/bin:"* ]]; then
echo "PATH not configured. Reinstalling..."
bash install.sh
fi
# Or manually add (idempotent command)
[[ ":$PATH:" != *":$HOME/.claude/bin:"* ]] && echo 'export PATH="$HOME/.claude/bin:$PATH"' >> ~/.zshrc
```
**Permission denied:**
```bash
python3 install.py --install-dir ~/.claude --force
# Select: codeagent-wrapper
npx github:cexll/myclaude
```
**Module not loading:**
```bash
# Check installation status
cat ~/.claude/installed_modules.json
# Reinstall specific module
python3 install.py --module dev --force
npx github:cexll/myclaude --force
```
### Version Compatibility Issues
**Backend CLI not found:**
**Backend CLI errors:**
```bash
# Check if backend CLIs are installed
which codex
which claude
which gemini
# Install missing backends
# Codex: Follow installation instructions at https://codex.docs
# Claude: Follow installation instructions at https://claude.ai/docs
# Gemini: Follow installation instructions at https://ai.google.dev/docs
which codex && codex --version
which claude && claude --version
which gemini && gemini --version
```
**Unsupported CLI flags:**
```bash
# If you see errors like "unknown flag" or "invalid option"
## FAQ
# Check backend CLI version
codex --version
claude --version
gemini --version
| Issue | Solution |
|-------|----------|
| "Unknown event format" | Logging display issue, can be ignored |
| Gemini can't read files ignored by .gitignore | Remove them from .gitignore or use a different backend |
| Codex permission denied | Set `approval_policy = "never"` in ~/.codex/config.toml |
# For Codex: Ensure it supports `e`, `--skip-git-repo-check`, `--json`, `-C`, and `resume`
# For Claude: Ensure it supports `--output-format stream-json`, `--setting-sources`, `-r`
# For Gemini: Ensure it supports `-o stream-json`, `-y`, `-r`, `-p`
# Update your backend CLI to the latest version if needed
```
**JSON parsing errors:**
```bash
# If you see "failed to parse JSON output" errors
# Verify the backend outputs stream-json format
codex e --json "test task" # Should output newline-delimited JSON
claude --output-format stream-json -p "test" # Should output stream JSON
# If not, your backend CLI version may be too old or incompatible
```
**Infinite recursion with Claude backend:**
```bash
# The wrapper prevents this with `--setting-sources ""` flag
# If you still see recursion, ensure your Claude CLI supports this flag
claude --help | grep "setting-sources"
# If flag is not supported, upgrade Claude CLI
```
**Session resume failures:**
```bash
# Check if session ID is valid
codex history # List recent sessions
claude history
# Ensure backend CLI supports session resumption
codex resume <session_id> "test" # Should continue from previous session
claude -r <session_id> "test"
# If not supported, use new sessions instead of resume mode
```
---
## FAQ (Frequently Asked Questions)
### Q1: `codeagent-wrapper` execution fails with "Unknown event format"
**Problem:**
```
Unknown event format: {"type":"turn.started"}
Unknown event format: {"type":"assistant", ...}
```
**Solution:**
This is a logging event format display issue and does not affect actual functionality. It will be fixed in the next version. You can ignore these log outputs.
**Related Issue:** [#96](https://github.com/cexll/myclaude/issues/96)
---
### Q2: Gemini cannot read files ignored by `.gitignore`
**Problem:**
When using `codeagent-wrapper --backend gemini`, files in directories like `.claude/` that are ignored by `.gitignore` cannot be read.
**Solution:**
- **Option 1:** Remove `.claude/` from your `.gitignore` file
- **Option 2:** Ensure files that need to be read are not in `.gitignore` list
**Related Issue:** [#75](https://github.com/cexll/myclaude/issues/75)
---
### Q3: `/dev` command parallel execution is very slow
**Problem:**
Using `/dev` command for simple features takes too long (over 30 minutes) with no visibility into task progress.
**Solution:**
1. **Check logs:** Review `C:\Users\User\AppData\Local\Temp\codeagent-wrapper-*.log` to identify bottlenecks
2. **Adjust backend:**
- Try faster models like `gpt-5.1-codex-max`
- Running in WSL may be significantly faster
3. **Workspace:** Use a single repository instead of monorepo with multiple sub-projects
**Related Issue:** [#77](https://github.com/cexll/myclaude/issues/77)
---
### Q4: Codex permission denied with new Go version
**Problem:**
After upgrading to the new Go-based Codex implementation, execution fails with permission denied errors.
**Solution:**
Add the following configuration to `~/.codex/config.toml` (Windows: `c:\user\.codex\config.toml`):
```toml
model = "gpt-5.1-codex-max"
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"
approval_policy = "never"
sandbox_mode = "workspace-write"
disable_response_storage = true
network_access = true
```
**Key settings:**
- `approval_policy = "never"` - Remove approval restrictions
- `sandbox_mode = "workspace-write"` - Allow workspace write access
- `network_access = true` - Enable network access
**Related Issue:** [#31](https://github.com/cexll/myclaude/issues/31)
---
### Q5: How to disable default bypass/skip-permissions mode
**Background:**
By default, codeagent-wrapper enables bypass mode for both Codex and Claude backends:
- `CODEX_BYPASS_SANDBOX=true` - Bypasses Codex sandbox restrictions
- `CODEAGENT_SKIP_PERMISSIONS=true` - Skips Claude permission prompts
**To disable (if you need sandbox/permission protection):**
```bash
export CODEX_BYPASS_SANDBOX=false
export CODEAGENT_SKIP_PERMISSIONS=false
```
Or add to your shell profile (`~/.zshrc` or `~/.bashrc`):
```bash
echo 'export CODEX_BYPASS_SANDBOX=false' >> ~/.zshrc
echo 'export CODEAGENT_SKIP_PERMISSIONS=false' >> ~/.zshrc
```
**Note:** Disabling bypass mode will require manual approval for certain operations.
---
**Still having issues?** Visit [GitHub Issues](https://github.com/cexll/myclaude/issues) to search or report new issues.
---
## Documentation
- **[Codeagent-Wrapper Guide](docs/CODEAGENT-WRAPPER.md)** - Multi-backend execution wrapper
- **[Hooks Documentation](docs/HOOKS.md)** - Custom hooks and automation
### Additional Resources
- **[Installation Log](install.log)** - Installation history and troubleshooting
---
See [GitHub Issues](https://github.com/cexll/myclaude/issues) for more.
## License
AGPL-3.0 License - see [LICENSE](LICENSE)
AGPL-3.0 - see [LICENSE](LICENSE)
### Commercial Licensing
For commercial use without AGPL obligations, contact: evanxian9@gmail.com
## Support
- **Issues**: [GitHub Issues](https://github.com/cexll/myclaude/issues)
- **Documentation**: [docs/](docs/)
---
**Claude Code + Codex = Better Development** - Orchestration meets execution.
- [GitHub Issues](https://github.com/cexll/myclaude/issues)


@@ -2,98 +2,116 @@
[![License: AGPL-3.0](https://img.shields.io/badge/License-AGPL_v3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![Claude Code](https://img.shields.io/badge/Claude-Code-blue)](https://claude.ai/code)
[![Version](https://img.shields.io/badge/Version-5.6-green)](https://github.com/cexll/myclaude)
[![Version](https://img.shields.io/badge/Version-6.x-green)](https://github.com/cexll/myclaude)
> AI 驱动的开发自动化 - 多后端执行架构 (Codex/Claude/Gemini)
> AI 驱动的开发自动化 - 多后端执行架构 (Codex/Claude/Gemini/OpenCode)
## 核心概念:多后端架构
## 快速开始
本系统采用**双智能体架构**与可插拔 AI 后端:
```bash
npx github:cexll/myclaude
```
## 模块概览
| 模块 | 描述 | 文档 |
|------|------|------|
| [do](skills/do/README.md) | **推荐** - 7 阶段功能开发 + codeagent 编排 | `/do` 命令 |
| [omo](skills/omo/README.md) | 多智能体编排 + 智能路由 | `/omo` 命令 |
| [bmad](agents/bmad/README.md) | BMAD 敏捷工作流 + 6 个专业智能体 | `/bmad-pilot` 命令 |
| [requirements](agents/requirements/README.md) | 轻量级需求到代码流水线 | `/requirements-pilot` 命令 |
| [essentials](agents/development-essentials/README.md) | 核心开发命令和工具 | `/code`, `/debug` 等 |
| [sparv](skills/sparv/README.md) | SPARV 工作流 (Specify→Plan→Act→Review→Vault) | `/sparv` 命令 |
| course | 课程开发(组合 dev + product-requirements + test-cases| 组合模块 |
## 核心架构
| 角色 | 智能体 | 职责 |
|------|-------|------|
| **编排者** | Claude Code | 规划、上下文收集、验证、用户交互 |
| **执行者** | codeagent-wrapper | 代码编辑、测试执行(Codex/Claude/Gemini 后端)|
| **编排者** | Claude Code | 规划、上下文收集、验证 |
| **执行者** | codeagent-wrapper | 代码编辑、测试执行(Codex/Claude/Gemini/OpenCode 后端)|
**为什么分离?**
- Claude Code 擅长理解上下文和编排复杂工作流
- 专业后端Codex 擅长代码、Claude 擅长推理、Gemini 擅长原型)专注执行
- 通过 `--backend codex|claude|gemini` 匹配模型与任务
## 工作流详解
## 快速开始(Windows 上请在 PowerShell 中执行)
### do 工作流(推荐
7 阶段功能开发,通过 codeagent-wrapper 编排多个智能体。**大多数功能开发任务的首选工作流。**
```bash
git clone https://github.com/cexll/myclaude.git
cd myclaude
python3 install.py --install-dir ~/.claude
/do "添加用户登录功能"
```
## 工作流概览
**7 阶段:**
| 阶段 | 名称 | 目标 |
|------|------|------|
| 1 | Discovery | 理解需求 |
| 2 | Exploration | 映射代码库模式 |
| 3 | Clarification | 解决歧义(**强制**)|
| 4 | Architecture | 设计实现方案 |
| 5 | Implementation | 构建功能(**需审批**)|
| 6 | Review | 捕获缺陷 |
| 7 | Summary | 记录结果 |
### 0. OmO 多智能体编排器(复杂任务推荐)
**智能体:**
- `code-explorer` - 代码追踪、架构映射
- `code-architect` - 设计方案、文件规划
- `code-reviewer` - 代码审查、简化建议
- `develop` - 实现代码、运行测试
**基于风险信号智能路由任务到专业智能体的多智能体编排系统。**
---
### OmO 多智能体编排器
基于风险信号智能路由任务到专业智能体。
```bash
/omo "分析并修复这个认证 bug"
```
**智能体层级:**
| 智能体 | 角色 | 后端 | 模型 |
|-------|------|------|------|
| `oracle` | 技术顾问 | Claude | claude-opus-4-5 |
| `librarian` | 外部研究 | Claude | claude-sonnet-4-5 |
| `explore` | 代码库搜索 | OpenCode | grok-code |
| `develop` | 代码实现 | Codex | gpt-5.2 |
| `frontend-ui-ux-engineer` | UI/UX 专家 | Gemini | gemini-3-pro |
| `document-writer` | 文档撰写 | Gemini | gemini-3-flash |
**路由信号(非固定流水线):**
- 代码位置不明确 → `explore`
- 外部库/API → `librarian`
- 高风险/多文件变更 → `oracle`
- 需要实现 → `develop` / `frontend-ui-ux-engineer`
| 智能体 | 角色 | 后端 |
|-------|------|------|
| `oracle` | 技术顾问 | Claude |
| `librarian` | 外部研究 | Claude |
| `explore` | 代码库搜索 | OpenCode |
| `develop` | 代码实现 | Codex |
| `frontend-ui-ux-engineer` | UI/UX 专家 | Gemini |
| `document-writer` | 文档撰写 | Gemini |
**常用配方:**
- 解释代码:`explore`
- 位置已知的小修复:直接 `develop`
- Bug 修复位置未知:`explore → develop`
- Bug 修复位置未知`explore → develop`
- 跨模块重构:`explore → oracle → develop`
- 外部 API 集成:`explore + librarian → oracle → develop`
**适用场景:** 复杂 bug 调查、多文件重构、架构决策
---
### 1. Dev 工作流(推荐)
### SPARV 工作流
**大多数开发任务的首选工作流。**
极简 5 阶段工作流Specify → Plan → Act → Review → Vault。
```bash
/dev "实现 JWT 用户认证"
/sparv "实现订单导出功能"
```
**6 步流程**
1. **需求澄清** - 交互式问答明确范围
2. **Codex 深度分析** - 代码库探索和架构决策
3. **开发计划生成** - 结构化任务分解和测试要求
4. **并行执行** - Codex 并发执行任务
5. **覆盖率验证** - 强制 ≥90% 测试覆盖率
6. **完成总结** - 文件变更和覆盖率报告
**核心规则**
- **10 分规格门**:得分 0-10,必须 >=9 才能进入 Plan
- **2 动作保存**:每 2 次工具调用写入 journal.md
- **3 失败协议**:连续 3 次失败后停止并上报
- **EHRB**:高风险操作需明确确认
**核心特性**
- Claude Code 编排Codex 执行所有代码变更
- 自动任务并行化提升速度
- 强制 90% 测试覆盖率门禁
- 失败自动回滚
**适用场景:** 功能开发、重构、带测试的 bug 修复
**评分维度(各 0-2 分)**
1. Value - 为什么做,可验证的收益
2. Scope - MVP + 不在范围内的内容
3. Acceptance - 可测试的验收标准
4. Boundaries - 错误/性能/兼容/安全边界
5. Risk - EHRB/依赖/未知 + 处理方式
---
### 2. BMAD 敏捷工作流
### BMAD 敏捷工作流
**包含 6 个专业智能体的完整企业敏捷方法论。**
完整企业敏捷方法论 + 6 个专业智能体。
```bash
/bmad-pilot "构建电商结账系统"
@@ -104,43 +122,36 @@ python3 install.py --install-dir ~/.claude
|-------|------|
| Product Owner | 需求与用户故事 |
| Architect | 系统设计与技术决策 |
| Tech Lead | Sprint 规划与任务分解 |
| Scrum Master | Sprint 规划与任务分解 |
| Developer | 实现 |
| Code Reviewer | 质量保证 |
| QA Engineer | 测试与验证 |
**流程**
```
需求 → 架构 → Sprint计划 → 开发 → 审查 → QA
↓ ↓ ↓ ↓ ↓ ↓
PRD.md DESIGN.md SPRINT.md Code REVIEW.md TEST.md
```
**适用场景:** 大型功能、团队协作、企业项目
**审批门**
- PRD 完成后(90+ 分)需用户审批
- 架构完成后(90+ 分)需用户审批
---
### 3. 需求驱动工作流
### 需求驱动工作流
**轻量级需求到代码流水线。**
轻量级需求到代码流水线。
```bash
/requirements-pilot "实现 API 限流"
```
**流程**
1. 带质量评分的需求生成
2. 实现规划
3. 代码生成
4. 审查和测试
**适用场景:** 快速原型、明确定义的功能
**100 分质量评分**
- 功能清晰度(30 分)
- 技术具体性(25 分)
- 实现完整性(25 分)
- 业务上下文(20 分)
---
### 4. 开发基础命令
### 开发基础命令
**日常编码任务的直接命令。**
日常编码任务的直接命令。
| 命令 | 用途 |
|------|------|
@@ -152,332 +163,89 @@ PRD.md DESIGN.md SPRINT.md Code REVIEW.md TEST.md
| `/refactor` | 代码重构 |
| `/docs` | 编写文档 |
**适用场景:** 快速任务,无需工作流开销
---
## 安装
### 模块化安装(推荐)
```bash
# 安装所有启用的模块(默认:dev + essentials)
python3 install.py --install-dir ~/.claude
# 交互式安装器(推荐)
npx github:cexll/myclaude
# 安装特定模块
python3 install.py --module dev
# 列出可安装项(module:* / skill:* / codeagent-wrapper)
npx github:cexll/myclaude --list
# 列出可用模块
python3 install.py --list-modules
# 强制覆盖现有文件
python3 install.py --force
# 指定安装目录 / 强制覆盖
npx github:cexll/myclaude --install-dir ~/.claude --force
```
### 可用模块
### 模块配置
| 模块 | 默认 | 描述 |
|------|------|------|
| `dev` | ✓ 启用 | Dev 工作流 + Codex 集成 |
| `essentials` | ✓ 启用 | 核心开发命令 |
| `bmad` | 禁用 | 完整 BMAD 敏捷工作流 |
| `requirements` | 禁用 | 需求驱动工作流 |
### 安装内容
```
~/.claude/
├── bin/
│ └── codeagent-wrapper # 主可执行文件
├── CLAUDE.md # 核心指令和角色定义
├── commands/ # 斜杠命令 (/dev, /code 等)
├── agents/ # 智能体定义
├── skills/
│ └── codex/
│ └── SKILL.md # Codex 集成技能
├── config.json # 配置文件
└── installed_modules.json # 安装状态
```
### 自定义安装目录
默认情况下,myclaude 安装到 `~/.claude`。您可以使用 `INSTALL_DIR` 环境变量自定义安装目录:
```bash
# 安装到自定义目录
INSTALL_DIR=/opt/myclaude bash install.sh
# 相应更新您的 PATH
export PATH="/opt/myclaude/bin:$PATH"
```
**目录结构:**
- `$INSTALL_DIR/bin/` - codeagent-wrapper 可执行文件
- `$INSTALL_DIR/skills/` - 技能定义
- `$INSTALL_DIR/config.json` - 配置文件
- `$INSTALL_DIR/commands/` - 斜杠命令定义
- `$INSTALL_DIR/agents/` - 智能体定义
**注意:** 使用自定义安装目录时,请确保将 `$INSTALL_DIR/bin` 添加到您的 `PATH` 环境变量中。
### 配置
编辑 `config.json` 自定义:
编辑 `config.json` 启用/禁用模块:
```json
{
"version": "1.0",
"install_dir": "~/.claude",
"modules": {
"dev": {
"enabled": true,
"operations": [
{"type": "merge_dir", "source": "dev-workflow"},
{"type": "copy_file", "source": "memorys/CLAUDE.md", "target": "CLAUDE.md"},
{"type": "copy_file", "source": "skills/codex/SKILL.md", "target": "skills/codex/SKILL.md"},
{"type": "run_command", "command": "bash install.sh"}
]
}
"bmad": { "enabled": false },
"requirements": { "enabled": false },
"essentials": { "enabled": false },
"omo": { "enabled": false },
"sparv": { "enabled": false },
"do": { "enabled": true },
"course": { "enabled": false }
}
}
```
**操作类型:**
| 类型 | 描述 |
|------|------|
| `merge_dir` | 合并子目录 (commands/, agents/) 到安装目录 |
| `copy_dir` | 复制整个目录 |
| `copy_file` | 复制单个文件到目标路径 |
| `run_command` | 执行 shell 命令 |
---
## Codex 集成
`codex` 技能使 Claude Code 能够将代码执行委托给 Codex CLI。
### 工作流中的使用
```bash
# 通过技能调用 Codex
codeagent-wrapper - <<'EOF'
在 @src/auth.ts 中实现 JWT 验证
EOF
```
### 并行执行
```bash
codeagent-wrapper --parallel <<'EOF'
---TASK---
id: backend_api
workdir: /project/backend
---CONTENT---
实现 /api/users 的 REST 端点
---TASK---
id: frontend_ui
workdir: /project/frontend
dependencies: backend_api
---CONTENT---
创建消费 API 的 React 组件
EOF
```
### 安装 Codex Wrapper
```bash
# 自动(通过 dev 模块)
python3 install.py --module dev
# 手动
bash install.sh
```
#### Windows 系统
Windows 系统会将 `codeagent-wrapper.exe` 安装到 `%USERPROFILE%\bin`
```powershell
# PowerShell(推荐)
powershell -ExecutionPolicy Bypass -File install.ps1
# 批处理(cmd)
install.bat
```
**添加到 PATH**(如果安装程序未自动检测):
```powershell
# PowerShell - 永久添加(当前用户)
[Environment]::SetEnvironmentVariable('PATH', "$HOME\bin;" + [Environment]::GetEnvironmentVariable('PATH','User'), 'User')
# PowerShell - 仅当前会话
$Env:PATH = "$HOME\bin;$Env:PATH"
```
```batch
REM cmd.exe - 永久添加(当前用户)(建议使用上面的 PowerShell 方法)
REM 警告:此命令会展开 %PATH%,其中包含系统 PATH,导致重复
REM 注意:使用 reg add 而非 setx 以避免 1024 字符截断限制
reg add "HKCU\Environment" /v Path /t REG_EXPAND_SZ /d "%USERPROFILE%\bin;%PATH%" /f
```
---
## 工作流选择指南
| 场景 | 推荐工作流 |
|------|----------|
| 带测试的新功能 | `/dev` |
| 快速 bug 修复 | `/debug` 或 `/code` |
| 大型多 Sprint 功能 | `/bmad-pilot` |
| 原型或 POC | `/requirements-pilot` |
| 代码审查 | `/review` |
| 性能问题 | `/optimize` |
| 场景 | 推荐 |
|------|------|
| 功能开发(默认) | `/do` |
| Bug 调查 + 修复 | `/omo` |
| 大型企业项目 | `/bmad-pilot` |
| 快速原型 | `/requirements-pilot` |
| 简单任务 | `/code`, `/debug` |
---
## 后端 CLI 要求
| 后端 | 必需功能 |
|------|----------|
| Codex | `codex e`, `--json`, `-C`, `resume` |
| Claude | `--output-format stream-json`, `-r` |
| Gemini | `-o stream-json`, `-y`, `-r` |
## 故障排查
### 常见问题
**Codex wrapper 未找到:**
```bash
# 安装程序会自动添加 PATH,检查是否已添加
if [[ ":$PATH:" != *":$HOME/.claude/bin:"* ]]; then
echo "PATH not configured. Reinstalling..."
bash install.sh
fi
# 或手动添加(幂等性命令)
[[ ":$PATH:" != *":$HOME/.claude/bin:"* ]] && echo 'export PATH="$HOME/.claude/bin:$PATH"' >> ~/.zshrc
```
**权限被拒绝:**
```bash
python3 install.py --install-dir ~/.claude --force
# 选择:codeagent-wrapper
npx github:cexll/myclaude
```
**模块未加载:**
```bash
# 检查安装状态
cat ~/.claude/installed_modules.json
# 重新安装特定模块
python3 install.py --module dev --force
npx github:cexll/myclaude --force
```
---
## FAQ
## 常见问题 (FAQ)
| 问题 | 解决方案 |
|------|----------|
| "Unknown event format" | 日志显示问题,可忽略 |
| Gemini 无法读取被 .gitignore 忽略的文件 | 从 .gitignore 中移除或使用其他后端 |
| Codex 权限拒绝 | 在 ~/.codex/config.toml 设置 `approval_policy = "never"` |
### Q1: `codeagent-wrapper` 执行时报错 "Unknown event format"
**问题描述:**
执行 `codeagent-wrapper` 时出现错误:
```
Unknown event format: {"type":"turn.started"}
Unknown event format: {"type":"assistant", ...}
```
**解决方案:**
这是日志事件流的显示问题,不影响实际功能执行。预计在下个版本中修复。如需排查其他问题,可忽略此日志输出。
**相关 Issue** [#96](https://github.com/cexll/myclaude/issues/96)
---
### Q2: Gemini 无法读取 `.gitignore` 忽略的文件
**问题描述:**
使用 `codeagent-wrapper --backend gemini` 时,无法读取 `.claude/` 等被 `.gitignore` 忽略的目录中的文件。
**解决方案:**
- **方案一:** 在项目根目录的 `.gitignore` 中取消对 `.claude/` 的忽略
- **方案二:** 确保需要读取的文件不在 `.gitignore` 忽略列表中
**相关 Issue** [#75](https://github.com/cexll/myclaude/issues/75)
---
### Q3: `/dev` 命令并行执行特别慢
**问题描述:**
使用 `/dev` 命令开发简单功能耗时过长(超过 30 分钟),无法了解任务执行状态。
**解决方案:**
1. **检查日志:** 查看 `C:\Users\User\AppData\Local\Temp\codeagent-wrapper-*.log` 分析瓶颈
2. **调整后端:**
- 尝试使用 `gpt-5.1-codex-max` 等更快的模型
- 在 WSL 环境下运行速度可能更快
3. **工作区选择:** 使用独立的代码仓库而非包含多个子项目的 monorepo
**相关 Issue** [#77](https://github.com/cexll/myclaude/issues/77)
---
### Q4: 新版 Go 实现的 Codex 权限不足
**问题描述:**
升级到新版 Go 实现的 Codex 后,出现权限不足的错误。
**解决方案:**
`~/.codex/config.yaml` 中添加以下配置Windows: `c:\user\.codex\config.toml`
```yaml
model = "gpt-5.1-codex-max"
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"
approval_policy = "never"
sandbox_mode = "workspace-write"
disable_response_storage = true
network_access = true
```
**关键配置说明:**
- `approval_policy = "never"` - 移除审批限制
- `sandbox_mode = "workspace-write"` - 允许工作区写入权限
- `network_access = true` - 启用网络访问
**相关 Issue** [#31](https://github.com/cexll/myclaude/issues/31)
---
### Q5: 执行时遇到权限拒绝或沙箱限制
**问题描述:**
运行 codeagent-wrapper 时出现权限错误或沙箱限制。
**解决方案:**
设置以下环境变量:
```bash
export CODEX_BYPASS_SANDBOX=true
export CODEAGENT_SKIP_PERMISSIONS=true
```
或添加到 shell 配置文件(`~/.zshrc` 或 `~/.bashrc`):
```bash
echo 'export CODEX_BYPASS_SANDBOX=true' >> ~/.zshrc
echo 'export CODEAGENT_SKIP_PERMISSIONS=true' >> ~/.zshrc
```
**注意:** 这些设置会绕过安全限制,请仅在可信环境中使用。
---
**仍有疑问?** 请访问 [GitHub Issues](https://github.com/cexll/myclaude/issues) 搜索或提交新问题。
---
更多问题请访问 [GitHub Issues](https://github.com/cexll/myclaude/issues)。
## 许可证
AGPL-3.0 License - 查看 [LICENSE](LICENSE)
AGPL-3.0 - 查看 [LICENSE](LICENSE)
### 商业授权
如需商业授权(无需遵守 AGPL 义务),请联系:evanxian9@gmail.com
## 支持
- **问题反馈**: [GitHub Issues](https://github.com/cexll/myclaude/issues)
- **文档**: [docs/](docs/)
---
**Claude Code + Codex = 更好的开发** - 编排遇见执行。

agents/bmad/README.md Normal file

@@ -0,0 +1,109 @@
# bmad - BMAD Agile Workflow
Full enterprise agile methodology with 6 specialized agents, UltraThink analysis, and repository-aware development.
## Installation
```bash
python install.py --module bmad
```
## Usage
```bash
/bmad-pilot <PROJECT_DESCRIPTION> [OPTIONS]
```
### Options
| Option | Description |
|--------|-------------|
| `--skip-tests` | Skip QA testing phase |
| `--direct-dev` | Skip SM planning, go directly to development |
| `--skip-scan` | Skip initial repository scanning |
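For example, a hypothetical invocation (the project description below is illustrative):
```bash
/bmad-pilot "Build an order management dashboard with role-based access" --skip-tests
```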
## Workflow Phases
| Phase | Agent | Deliverable | Description |
|-------|-------|-------------|-------------|
| 0 | Orchestrator | `00-repo-scan.md` | Repository scanning with UltraThink analysis |
| 1 | Product Owner (PO) | `01-product-requirements.md` | PRD with 90+ quality score required |
| 2 | Architect | `02-system-architecture.md` | Technical design with 90+ score required |
| 3 | Scrum Master (SM) | `03-sprint-plan.md` | Sprint backlog with stories and estimates |
| 4 | Developer | Implementation code | Multi-sprint implementation |
| 4.5 | Reviewer | `04-dev-reviewed.md` | Code review (Pass/Pass with Risk/Fail) |
| 5 | QA Engineer | Test suite | Comprehensive testing and validation |
## Agents
| Agent | Role |
|-------|------|
| `bmad-orchestrator` | Repository scanning, workflow coordination |
| `bmad-po` | Requirements gathering, PRD creation |
| `bmad-architect` | System design, technology decisions |
| `bmad-sm` | Sprint planning, task breakdown |
| `bmad-dev` | Code implementation |
| `bmad-review` | Code review, quality assessment |
| `bmad-qa` | Testing, validation |
## Approval Gates
Two mandatory stop points require explicit user approval:
1. **After PRD** (Phase 1 → 2): User must approve requirements before architecture
2. **After Architecture** (Phase 2 → 3): User must approve design before implementation
## Output Structure
```
.claude/specs/{feature_name}/
├── 00-repo-scan.md
├── 01-product-requirements.md
├── 02-system-architecture.md
├── 03-sprint-plan.md
└── 04-dev-reviewed.md
```
## UltraThink Methodology
Applied throughout the workflow for deep analysis:
1. **Hypothesis Generation** - Form hypotheses about the problem
2. **Evidence Collection** - Gather evidence from codebase
3. **Pattern Recognition** - Identify recurring patterns
4. **Synthesis** - Create comprehensive understanding
5. **Validation** - Cross-check findings
## Interactive Confirmation Flow
PO and Architect phases use iterative refinement:
1. Agent produces initial draft + quality score
2. Orchestrator presents to user with clarification questions
3. User provides responses
4. Agent refines until quality >= 90
5. User confirms to save deliverable
## When to Use
- Large multi-sprint features
- Enterprise projects requiring documentation
- Team coordination scenarios
- Projects needing formal approval gates
## Directory Structure
```
agents/bmad/
├── README.md
├── commands/
│ └── bmad-pilot.md
└── agents/
├── bmad-orchestrator.md
├── bmad-po.md
├── bmad-architect.md
├── bmad-sm.md
├── bmad-dev.md
├── bmad-review.md
└── bmad-qa.md
```


@@ -304,7 +304,7 @@ Deep reasoning and analysis for complex problems.
## 🔌 Agent Configuration
All commands use specialized agents configured in:
- `development-essentials/agents/`
- `agents/development-essentials/agents/`
- Agent prompt templates
- Tool access permissions
- Output formatting


@@ -244,8 +244,8 @@ The Development Essentials module includes the following specialized agents:
## 🔗 Related Documentation
- [Main documentation](../README.md) - Project overview
- [BMAD workflow](../docs/BMAD-WORKFLOW.md) - Full agile process
- [Requirements workflow](../docs/REQUIREMENTS-WORKFLOW.md) - Lightweight development process
- [BMAD workflow](../agents/bmad/BMAD-WORKFLOW.md) - Full agile process
- [Requirements workflow](../agents/requirements/REQUIREMENTS-WORKFLOW.md) - Lightweight development process
- [Plugin system](../PLUGIN_README.md) - Plugin installation and management
---


@@ -0,0 +1,90 @@
# requirements - Requirements-Driven Workflow
Lightweight requirements-to-code pipeline with interactive quality gates.
## Installation
```bash
python install.py --module requirements
```
## Usage
```bash
/requirements-pilot <FEATURE_DESCRIPTION> [OPTIONS]
```
### Options
| Option | Description |
|--------|-------------|
| `--skip-tests` | Skip testing phase entirely |
| `--skip-scan` | Skip initial repository scanning |
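For example (the feature description below is illustrative):
```bash
/requirements-pilot "Add CSV export to the reports page" --skip-scan
```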
## Workflow Phases
| Phase | Description | Output |
|-------|-------------|--------|
| 0 | Repository scanning | `00-repository-context.md` |
| 1 | Requirements confirmation | `requirements-confirm.md` (90+ score required) |
| 2 | Implementation | Code + `requirements-spec.md` |
## Quality Scoring (100-point system)
| Category | Points | Focus |
|----------|--------|-------|
| Functional Clarity | 30 | Input/output specs, success criteria |
| Technical Specificity | 25 | Integration points, constraints |
| Implementation Completeness | 25 | Edge cases, error handling |
| Business Context | 20 | User value, priority |
## Sub-Agents
| Agent | Role |
|-------|------|
| `requirements-generate` | Create technical specifications |
| `requirements-code` | Implement functionality |
| `requirements-review` | Code quality evaluation |
| `requirements-testing` | Test case creation |
## Approval Gate
One mandatory stop point after Phase 1:
- Requirements must achieve 90+ quality score
- User must explicitly approve before implementation begins
## Testing Decision
After code review passes (≥90%):
- `--skip-tests`: Complete without testing
- No option: Interactive prompt with smart recommendations based on task complexity
## Output Structure
```
.claude/specs/{feature_name}/
├── 00-repository-context.md
├── requirements-confirm.md
└── requirements-spec.md
```
## When to Use
- Quick prototypes
- Well-defined features
- Smaller scope tasks
- When full BMAD workflow is overkill
## Directory Structure
```
agents/requirements/
├── README.md
├── commands/
│ └── requirements-pilot.md
└── agents/
├── requirements-generate.md
├── requirements-code.md
├── requirements-review.md
└── requirements-testing.md
```

bin/cli.js Executable file

@@ -0,0 +1,730 @@
#!/usr/bin/env node
"use strict";
const crypto = require("crypto");
const fs = require("fs");
const https = require("https");
const os = require("os");
const path = require("path");
const readline = require("readline");
const zlib = require("zlib");
const { spawn } = require("child_process");
const REPO = { owner: "cexll", name: "myclaude" };
const API_HEADERS = {
"User-Agent": "myclaude-npx",
Accept: "application/vnd.github+json",
};
function parseArgs(argv) {
const out = {
installDir: "~/.claude",
force: false,
dryRun: false,
list: false,
update: false,
tag: null,
};
for (let i = 0; i < argv.length; i++) {
const a = argv[i];
if (a === "--install-dir") out.installDir = argv[++i];
else if (a === "--force") out.force = true;
else if (a === "--dry-run") out.dryRun = true;
else if (a === "--list") out.list = true;
else if (a === "--update") out.update = true;
else if (a === "--tag") out.tag = argv[++i];
else if (a === "-h" || a === "--help") out.help = true;
else throw new Error(`Unknown arg: ${a}`);
}
return out;
}
function printHelp() {
process.stdout.write(
[
"myclaude (npx installer)",
"",
"Usage:",
" npx github:cexll/myclaude",
" npx github:cexll/myclaude --list",
" npx github:cexll/myclaude --update",
" npx github:cexll/myclaude --install-dir ~/.claude --force",
"",
"Options:",
" --install-dir <path> Default: ~/.claude",
" --force Overwrite existing files",
" --dry-run Print actions only",
" --list List installable items and exit",
" --update Update already installed modules",
" --tag <tag> Install a specific GitHub tag",
].join("\n") + "\n"
);
}
function withTimeout(promise, ms, label) {
let timer;
const timeout = new Promise((_, reject) => {
timer = setTimeout(() => reject(new Error(`Timeout: ${label}`)), ms);
});
return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
function httpsGetJson(url) {
return new Promise((resolve, reject) => {
https
.get(url, { headers: API_HEADERS }, (res) => {
let body = "";
res.setEncoding("utf8");
res.on("data", (d) => (body += d));
res.on("end", () => {
if (res.statusCode && res.statusCode >= 400) {
return reject(
new Error(`HTTP ${res.statusCode}: ${url}\n${body.slice(0, 500)}`)
);
}
try {
resolve(JSON.parse(body));
} catch (e) {
reject(new Error(`Invalid JSON from ${url}: ${e.message}`));
}
});
})
.on("error", reject);
});
}
function downloadToFile(url, outPath) {
return new Promise((resolve, reject) => {
const file = fs.createWriteStream(outPath);
https
.get(url, { headers: API_HEADERS }, (res) => {
if (
res.statusCode &&
res.statusCode >= 300 &&
res.statusCode < 400 &&
res.headers.location
) {
file.close();
fs.unlink(outPath, () => {
downloadToFile(res.headers.location, outPath).then(resolve, reject);
});
return;
}
if (res.statusCode && res.statusCode >= 400) {
file.close();
fs.unlink(outPath, () => {});
return reject(new Error(`HTTP ${res.statusCode}: ${url}`));
}
res.pipe(file);
file.on("finish", () => file.close(resolve));
})
.on("error", (err) => {
file.close();
fs.unlink(outPath, () => reject(err));
});
});
}
async function fetchLatestTag() {
const url = `https://api.github.com/repos/${REPO.owner}/${REPO.name}/releases/latest`;
const json = await httpsGetJson(url);
if (!json || typeof json.tag_name !== "string" || !json.tag_name.trim()) {
throw new Error("GitHub API: missing tag_name");
}
return json.tag_name.trim();
}
async function fetchRemoteConfig(tag) {
const url = `https://api.github.com/repos/${REPO.owner}/${REPO.name}/contents/config.json?ref=${encodeURIComponent(
tag
)}`;
const json = await httpsGetJson(url);
if (!json || typeof json.content !== "string") {
throw new Error("GitHub contents API: missing config.json content");
}
const buf = Buffer.from(json.content.replace(/\n/g, ""), "base64");
return JSON.parse(buf.toString("utf8"));
}
async function fetchRemoteSkills(tag) {
const url = `https://api.github.com/repos/${REPO.owner}/${REPO.name}/contents/skills?ref=${encodeURIComponent(
tag
)}`;
const json = await httpsGetJson(url);
if (!Array.isArray(json)) throw new Error("GitHub contents API: skills is not a directory");
return json
.filter((e) => e && e.type === "dir" && typeof e.name === "string")
.map((e) => e.name)
.sort();
}
function repoRootFromHere() {
return path.resolve(__dirname, "..");
}
function readLocalConfig() {
const p = path.join(repoRootFromHere(), "config.json");
return JSON.parse(fs.readFileSync(p, "utf8"));
}
function listLocalSkills() {
const root = repoRootFromHere();
const skillsDir = path.join(root, "skills");
if (!fs.existsSync(skillsDir)) return [];
return fs
.readdirSync(skillsDir, { withFileTypes: true })
.filter((d) => d.isDirectory())
.map((d) => d.name)
.sort();
}
function expandHome(p) {
if (!p) return p;
if (p === "~") return os.homedir();
if (p.startsWith("~/")) return path.join(os.homedir(), p.slice(2));
return p;
}
function readInstalledModuleNamesFromStatus(installDir) {
const p = path.join(installDir, "installed_modules.json");
if (!fs.existsSync(p)) return null;
try {
const json = JSON.parse(fs.readFileSync(p, "utf8"));
const modules = json && json.modules;
if (!modules || typeof modules !== "object" || Array.isArray(modules)) return null;
return Object.keys(modules)
.filter((k) => typeof k === "string" && k.trim())
.sort();
} catch {
return null;
}
}
async function dirExists(p) {
try {
return (await fs.promises.stat(p)).isDirectory();
} catch {
return false;
}
}
async function mergeDirLooksInstalled(srcDir, installDir) {
if (!(await dirExists(srcDir))) return false;
const subdirs = await fs.promises.readdir(srcDir, { withFileTypes: true });
for (const d of subdirs) {
if (!d.isDirectory()) continue;
const srcSub = path.join(srcDir, d.name);
const entries = await fs.promises.readdir(srcSub, { withFileTypes: true });
for (const e of entries) {
if (!e.isFile()) continue;
const dst = path.join(installDir, d.name, e.name);
if (fs.existsSync(dst)) return true;
}
}
return false;
}
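// Infer which modules are already installed by checking whether any of a
// module's copy targets (or merge_dir files) already exist under installDir.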
async function detectInstalledModuleNames(config, repoRoot, installDir) {
const mods = (config && config.modules) || {};
const installed = [];
for (const [name, mod] of Object.entries(mods)) {
const ops = Array.isArray(mod && mod.operations) ? mod.operations : [];
let ok = false;
for (const op of ops) {
const type = op && op.type;
if (type === "copy_file" || type === "copy_dir") {
const target = typeof op.target === "string" ? op.target : "";
if (target && fs.existsSync(path.join(installDir, target))) {
ok = true;
break;
}
} else if (type === "merge_dir") {
const source = typeof op.source === "string" ? op.source : "";
if (source && (await mergeDirLooksInstalled(path.join(repoRoot, source), installDir))) {
ok = true;
break;
}
}
}
if (ok) installed.push(name);
}
return installed.sort();
}
async function updateInstalledModules(installDir, tag, config, dryRun) {
const mods = (config && config.modules) || {};
if (!Object.keys(mods).length) throw new Error("No modules found in config.json");
let repoRoot = repoRootFromHere();
let tmp = null;
if (tag) {
tmp = path.join(
os.tmpdir(),
`myclaude-update-${Date.now()}-${crypto.randomBytes(4).toString("hex")}`
);
await fs.promises.mkdir(tmp, { recursive: true });
}
try {
if (tag) {
const archive = path.join(tmp, "src.tgz");
const url = `https://codeload.github.com/${REPO.owner}/${REPO.name}/tar.gz/refs/tags/${encodeURIComponent(
tag
)}`;
process.stdout.write(`Downloading ${REPO.owner}/${REPO.name}@${tag}...\n`);
await downloadToFile(url, archive);
process.stdout.write("Extracting...\n");
const extracted = path.join(tmp, "src");
await extractTarGz(archive, extracted);
repoRoot = extracted;
} else {
process.stdout.write("Offline mode: updating from local package contents.\n");
}
const fromStatus = readInstalledModuleNamesFromStatus(installDir);
const installed = fromStatus || (await detectInstalledModuleNames(config, repoRoot, installDir));
const toUpdate = installed.filter((name) => Object.prototype.hasOwnProperty.call(mods, name));
if (!toUpdate.length) {
process.stdout.write(`No installed modules found in ${installDir}.\n`);
return;
}
if (dryRun) {
for (const name of toUpdate) process.stdout.write(`module:${name}\n`);
return;
}
await fs.promises.mkdir(installDir, { recursive: true });
for (const name of toUpdate) {
process.stdout.write(`Updating module: ${name}\n`);
await applyModule(name, config, repoRoot, installDir, true);
}
} finally {
if (tmp) await rmTree(tmp);
}
}
function buildItems(config, skills) {
const items = [{ id: "codeagent-wrapper", label: "codeagent-wrapper", kind: "wrapper" }];
const modules = (config && config.modules) || {};
for (const [name, mod] of Object.entries(modules)) {
const desc = mod && typeof mod.description === "string" ? mod.description : "";
items.push({
id: `module:${name}`,
label: `module:${name}${desc ? ` - ${desc}` : ""}`,
kind: "module",
moduleName: name,
});
}
for (const s of skills) {
items.push({ id: `skill:${s}`, label: `skill:${s}`, kind: "skill", skillName: s });
}
return items;
}
function clearScreen() {
process.stdout.write("\x1b[2J\x1b[H");
}
async function promptMultiSelect(items, title) {
if (!process.stdin.isTTY) {
throw new Error("No TTY. Use --list or run in an interactive terminal.");
}
let idx = 0;
const selected = new Set();
readline.emitKeypressEvents(process.stdin);
process.stdin.setRawMode(true);
function render() {
clearScreen();
process.stdout.write(`${title}\n`);
process.stdout.write("↑↓ move Space toggle Enter confirm q quit\n\n");
for (let i = 0; i < items.length; i++) {
const it = items[i];
const cursor = i === idx ? ">" : " ";
const box = selected.has(it.id) ? "[x]" : "[ ]";
process.stdout.write(`${cursor} ${box} ${it.label}\n`);
}
}
function cleanup() {
process.stdin.setRawMode(false);
process.stdin.removeListener("keypress", onKey);
}
function onKey(_, key) {
if (!key) return;
if (key.name === "c" && key.ctrl) {
cleanup();
process.exit(130);
}
if (key.name === "q") {
cleanup();
process.exit(0);
}
if (key.name === "up") idx = (idx - 1 + items.length) % items.length;
else if (key.name === "down") idx = (idx + 1) % items.length;
else if (key.name === "space") {
const id = items[idx].id;
if (selected.has(id)) selected.delete(id);
else selected.add(id);
} else if (key.name === "return") {
cleanup();
clearScreen();
const picked = items.filter((it) => selected.has(it.id));
return resolvePick(picked);
}
render();
}
let resolvePick;
const result = new Promise((resolve) => {
resolvePick = resolve;
});
process.stdin.on("keypress", onKey);
render();
return result;
}
function isZeroBlock(b) {
for (let i = 0; i < b.length; i++) if (b[i] !== 0) return false;
return true;
}
function tarString(b, start, len) {
return b
.toString("utf8", start, start + len)
.replace(/\0.*$/, "")
.trim();
}
function tarOctal(b, start, len) {
const s = tarString(b, start, len);
if (!s) return 0;
return parseInt(s, 8) || 0;
}
function safePosixPath(p) {
const norm = path.posix.normalize(p);
if (norm.startsWith("/") || norm.startsWith("..") || norm.includes("/../")) {
throw new Error(`Unsafe path in archive: ${p}`);
}
return norm;
}
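// Minimal streaming .tar.gz extractor: gunzips the archive, walks 512-byte
// ustar headers, strips the top-level directory from entry paths, and writes
// files/directories under destDir (restoring modes when present).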
async function extractTarGz(archivePath, destDir) {
await fs.promises.mkdir(destDir, { recursive: true });
const gunzip = zlib.createGunzip();
const stream = fs.createReadStream(archivePath).pipe(gunzip);
let buf = Buffer.alloc(0);
let file = null;
let pad = 0;
let zeroBlocks = 0;
for await (const chunk of stream) {
buf = Buffer.concat([buf, chunk]);
while (true) {
if (pad) {
if (buf.length < pad) break;
buf = buf.slice(pad);
pad = 0;
}
if (!file) {
if (buf.length < 512) break;
const header = buf.slice(0, 512);
buf = buf.slice(512);
if (isZeroBlock(header)) {
zeroBlocks++;
if (zeroBlocks >= 2) return;
continue;
}
zeroBlocks = 0;
const name = tarString(header, 0, 100);
const prefix = tarString(header, 345, 155);
const full = prefix ? `${prefix}/${name}` : name;
const size = tarOctal(header, 124, 12);
const mode = tarOctal(header, 100, 8);
const typeflag = header[156];
const rel = safePosixPath(full.split("/").slice(1).join("/"));
if (!rel || rel === ".") {
file = null;
pad = 0;
continue;
}
const outPath = path.join(destDir, ...rel.split("/"));
if (typeflag === 53) {
await fs.promises.mkdir(outPath, { recursive: true });
if (mode) await fs.promises.chmod(outPath, mode);
file = null;
pad = 0;
continue;
}
file = { outPath, size, remaining: size, chunks: [], mode };
if (size === 0) {
await fs.promises.mkdir(path.dirname(outPath), { recursive: true });
await fs.promises.writeFile(outPath, Buffer.alloc(0));
if (mode) await fs.promises.chmod(outPath, mode);
file = null;
pad = 0;
}
continue;
}
if (buf.length < file.remaining) {
file.chunks.push(buf);
file.remaining -= buf.length;
buf = Buffer.alloc(0);
break;
}
file.chunks.push(buf.slice(0, file.remaining));
buf = buf.slice(file.remaining);
file.remaining = 0;
await fs.promises.mkdir(path.dirname(file.outPath), { recursive: true });
await fs.promises.writeFile(file.outPath, Buffer.concat(file.chunks));
if (file.mode) await fs.promises.chmod(file.outPath, file.mode);
pad = (512 - (file.size % 512)) % 512;
file = null;
}
}
}
async function copyFile(src, dst, force) {
if (!force && fs.existsSync(dst)) return;
await fs.promises.mkdir(path.dirname(dst), { recursive: true });
await fs.promises.copyFile(src, dst);
const st = await fs.promises.stat(src);
await fs.promises.chmod(dst, st.mode);
}
async function copyDirRecursive(src, dst, force) {
if (fs.existsSync(dst) && !force) return;
await fs.promises.mkdir(dst, { recursive: true });
const entries = await fs.promises.readdir(src, { withFileTypes: true });
for (const e of entries) {
const s = path.join(src, e.name);
const d = path.join(dst, e.name);
if (e.isDirectory()) await copyDirRecursive(s, d, force);
else if (e.isFile()) await copyFile(s, d, force);
}
}
async function mergeDir(src, installDir, force) {
const subdirs = await fs.promises.readdir(src, { withFileTypes: true });
for (const d of subdirs) {
if (!d.isDirectory()) continue;
const srcSub = path.join(src, d.name);
const dstSub = path.join(installDir, d.name);
await fs.promises.mkdir(dstSub, { recursive: true });
const entries = await fs.promises.readdir(srcSub, { withFileTypes: true });
for (const e of entries) {
if (!e.isFile()) continue;
await copyFile(path.join(srcSub, e.name), path.join(dstSub, e.name), force);
}
}
}
function runInstallSh(repoRoot, installDir) {
return new Promise((resolve, reject) => {
const cmd = process.platform === "win32" ? "cmd.exe" : "bash";
const args = process.platform === "win32" ? ["/c", "install.bat"] : ["install.sh"];
const p = spawn(cmd, args, {
cwd: repoRoot,
stdio: "inherit",
env: { ...process.env, INSTALL_DIR: installDir },
});
p.on("exit", (code) => {
if (code === 0) resolve();
else reject(new Error(`install script failed (exit ${code})`));
});
});
}
async function rmTree(p) {
if (!fs.existsSync(p)) return;
if (fs.promises.rm) {
await fs.promises.rm(p, { recursive: true, force: true });
return;
}
await fs.promises.rmdir(p, { recursive: true });
}
async function applyModule(moduleName, config, repoRoot, installDir, force) {
const mod = config && config.modules && config.modules[moduleName];
if (!mod) throw new Error(`Unknown module: ${moduleName}`);
const ops = Array.isArray(mod.operations) ? mod.operations : [];
for (const op of ops) {
const type = op && op.type;
if (type === "copy_file") {
await copyFile(
path.join(repoRoot, op.source),
path.join(installDir, op.target),
force
);
} else if (type === "copy_dir") {
await copyDirRecursive(
path.join(repoRoot, op.source),
path.join(installDir, op.target),
force
);
} else if (type === "merge_dir") {
await mergeDir(path.join(repoRoot, op.source), installDir, force);
} else if (type === "run_command") {
const cmd = typeof op.command === "string" ? op.command.trim() : "";
if (cmd !== "bash install.sh") {
throw new Error(`Refusing run_command: ${cmd || "(empty)"}`);
}
await runInstallSh(repoRoot, installDir);
} else {
throw new Error(`Unsupported operation type: ${type}`);
}
}
}
async function installSelected(picks, tag, config, installDir, force, dryRun) {
const needRepo = picks.some((p) => p.kind !== "wrapper");
const needWrapper = picks.some((p) => p.kind === "wrapper");
if (dryRun) {
for (const p of picks) process.stdout.write(`- ${p.id}\n`);
return;
}
const tmp = path.join(
os.tmpdir(),
`myclaude-${Date.now()}-${crypto.randomBytes(4).toString("hex")}`
);
await fs.promises.mkdir(tmp, { recursive: true });
try {
let repoRoot = repoRootFromHere();
if (needRepo || needWrapper) {
if (!tag) throw new Error("No tag available to download");
const archive = path.join(tmp, "src.tgz");
const url = `https://codeload.github.com/${REPO.owner}/${REPO.name}/tar.gz/refs/tags/${encodeURIComponent(
tag
)}`;
process.stdout.write(`Downloading ${REPO.owner}/${REPO.name}@${tag}...\n`);
await downloadToFile(url, archive);
process.stdout.write("Extracting...\n");
const extracted = path.join(tmp, "src");
await extractTarGz(archive, extracted);
repoRoot = extracted;
}
await fs.promises.mkdir(installDir, { recursive: true });
for (const p of picks) {
if (p.kind === "wrapper") {
process.stdout.write("Installing codeagent-wrapper...\n");
await runInstallSh(repoRoot, installDir);
continue;
}
if (p.kind === "module") {
process.stdout.write(`Installing module: ${p.moduleName}\n`);
await applyModule(p.moduleName, config, repoRoot, installDir, force);
continue;
}
if (p.kind === "skill") {
process.stdout.write(`Installing skill: ${p.skillName}\n`);
await copyDirRecursive(
path.join(repoRoot, "skills", p.skillName),
path.join(installDir, "skills", p.skillName),
force
);
}
}
} finally {
await rmTree(tmp);
}
}
async function main() {
const args = parseArgs(process.argv.slice(2));
if (args.help) {
printHelp();
return;
}
const installDir = expandHome(args.installDir);
if (args.list && args.update) throw new Error("Cannot combine --list and --update");
let tag = args.tag;
if (!tag) {
try {
tag = await withTimeout(fetchLatestTag(), 5000, "fetch latest tag");
} catch {
tag = null;
}
}
let config = null;
let skills = [];
if (tag) {
try {
[config, skills] = await withTimeout(
Promise.all([fetchRemoteConfig(tag), fetchRemoteSkills(tag)]),
8000,
"fetch config/skills"
);
} catch {
config = null;
skills = [];
}
}
if (!config) config = readLocalConfig();
if (!skills.length) skills = listLocalSkills();
if (args.update) {
await updateInstalledModules(installDir, tag, config, args.dryRun);
process.stdout.write("Done.\n");
return;
}
const items = buildItems(config, skills);
if (args.list) {
for (const it of items) process.stdout.write(`${it.id}\n`);
return;
}
const title = tag ? `myclaude installer (latest: ${tag})` : "myclaude installer (offline mode)";
const picks = await promptMultiSelect(items, title);
if (!picks.length) {
process.stdout.write("Nothing selected.\n");
return;
}
await installSelected(picks, tag, config, installDir, args.force, args.dryRun);
process.stdout.write("Done.\n");
}
main().catch((err) => {
process.stderr.write(`ERROR: ${err && err.message ? err.message : String(err)}\n`);
process.exit(1);
});


@@ -0,0 +1,39 @@
name: CI
on:
push:
branches: [main, master]
pull_request:
permissions:
contents: read
jobs:
test:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
go-version: ["1.21", "1.22"]
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go-version }}
cache: true
- name: Test
run: make test
- name: Build
run: make build
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: "1.22"
cache: true
- name: Lint
run: make lint


@@ -1,6 +1,9 @@
# Build artifacts
codeagent-wrapper
codeagent-wrapper.exe
bin/
codeagent
codeagent.exe
/codeagent-wrapper
/codeagent-wrapper.exe
*.test
# Coverage reports
@@ -9,3 +12,12 @@ coverage*.out
cover.out
cover_*.out
coverage.html
# Logs
*.log
# Temp files
*.tmp
*.swp
*~
.DS_Store


@@ -0,0 +1,37 @@
GO ?= go
TOOLS_BIN := $(CURDIR)/bin
TOOLCHAIN ?= go1.22.0
GOLANGCI_LINT_VERSION := v1.56.2
STATICCHECK_VERSION := v0.4.7
GOLANGCI_LINT := $(TOOLS_BIN)/golangci-lint
STATICCHECK := $(TOOLS_BIN)/staticcheck
.PHONY: build test lint clean install
build:
$(GO) build -o codeagent ./cmd/codeagent
$(GO) build -o codeagent-wrapper ./cmd/codeagent-wrapper
test:
$(GO) test ./...
$(GOLANGCI_LINT):
@mkdir -p $(TOOLS_BIN)
GOTOOLCHAIN=$(TOOLCHAIN) GOBIN=$(TOOLS_BIN) $(GO) install github.com/golangci/golangci-lint/cmd/golangci-lint@$(GOLANGCI_LINT_VERSION)
$(STATICCHECK):
@mkdir -p $(TOOLS_BIN)
GOTOOLCHAIN=$(TOOLCHAIN) GOBIN=$(TOOLS_BIN) $(GO) install honnef.co/go/tools/cmd/staticcheck@$(STATICCHECK_VERSION)
lint: $(GOLANGCI_LINT) $(STATICCHECK)
GOTOOLCHAIN=$(TOOLCHAIN) $(GOLANGCI_LINT) run ./...
GOTOOLCHAIN=$(TOOLCHAIN) $(STATICCHECK) ./...
clean:
@python3 -c 'import glob, os; paths=["codeagent","codeagent.exe","codeagent-wrapper","codeagent-wrapper.exe","coverage.out","cover.out","coverage.html"]; paths += glob.glob("coverage*.out") + glob.glob("cover_*.out") + glob.glob("*.test"); [os.remove(p) for p in paths if os.path.exists(p)]'
install:
$(GO) install ./cmd/codeagent
$(GO) install ./cmd/codeagent-wrapper

codeagent-wrapper/README.md Normal file

@@ -0,0 +1,152 @@
# codeagent-wrapper
`codeagent-wrapper` is a multi-backend AI coding-agent CLI wrapper written in Go: it puts different AI tool backends (Codex / Claude / Gemini / Opencode) behind a single CLI entry point and provides a consistent experience for arguments, configuration, and session resumption.
Entry points: `cmd/codeagent/main.go` (builds the `codeagent` binary) and `cmd/codeagent-wrapper/main.go` (builds the `codeagent-wrapper` binary). Both behave identically.
## Features
- Multi-backend support: `codex` / `claude` / `gemini` / `opencode`
- Unified CLI: `codeagent [flags] <task>` / `codeagent resume <session_id> <task> [workdir]`
- Automatic stdin: tasks containing newlines, special characters, or very long text are passed via stdin automatically to avoid shell-quoting pain; `-` can also be used explicitly
- Configuration merging: supports config files and `CODEAGENT_*` environment variables (via viper)
- Agent presets: reads backend/model/prompt presets from `~/.codeagent/models.json`
- Parallel execution: `--parallel` reads a multi-task config from stdin and runs tasks concurrently according to their dependency graph
- Log cleanup: `codeagent cleanup` removes old logs (logs are written to the system temp directory)
## Installation
Requires Go 1.21+.
From the repository root:
```bash
go install ./cmd/codeagent
go install ./cmd/codeagent-wrapper
```
Verify the installation:
```bash
codeagent version
codeagent-wrapper version
```
## Usage Examples
Simplest usage (default backend: `codex`):
```bash
codeagent "Analyze the entry logic in internal/app/cli.go and suggest improvements"
```
Specify a backend:
```bash
codeagent --backend claude "Explain the parallel config format in internal/executor/parallel_config.go"
```
Specify a working directory (second positional argument):
```bash
codeagent "Search the current repo for potential data races" .
```
Read the task explicitly from stdin (use `-`):
```bash
cat task.txt | codeagent -
```
Resume a session:
```bash
codeagent resume <session_id> "Continue the previous task"
```
Parallel mode (task config is read from stdin; positional arguments are not allowed):
```bash
codeagent --parallel <<'EOF'
---TASK---
id: t1
workdir: .
backend: codex
---CONTENT---
List the main modules of this project and their responsibilities.
---TASK---
id: t2
dependencies: t1
backend: claude
---CONTENT---
Based on t1's conclusions, identify refactoring risks and recommendations.
EOF
```
## Configuration
### Config file
Default lookup path (when `--config` is not set):
- `$HOME/.codeagent/config.(yaml|yml|json|toml|...)`
Example (YAML):
```yaml
backend: codex
model: gpt-4.1
skip-permissions: false
```
You can also point to a specific file with `--config /path/to/config.yaml`.
### Environment variables (`CODEAGENT_*`)
Read via viper, with `-` automatically mapped to `_`. Common variables:
- `CODEAGENT_BACKEND`: `codex|claude|gemini|opencode`
- `CODEAGENT_MODEL`
- `CODEAGENT_AGENT`
- `CODEAGENT_PROMPT_FILE`
- `CODEAGENT_REASONING_EFFORT`
- `CODEAGENT_SKIP_PERMISSIONS`
- `CODEAGENT_FULL_OUTPUT` (legacy output in parallel mode)
- `CODEAGENT_MAX_PARALLEL_WORKERS` (0 means unlimited; capped at 100)
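A minimal sketch of overriding a few of these for a single run (the model name below is taken from the default agent presets and is illustrative):
```bash
export CODEAGENT_BACKEND=claude
export CODEAGENT_MODEL=claude-sonnet-4-5-20250929  # illustrative; use whatever your Claude CLI supports
export CODEAGENT_MAX_PARALLEL_WORKERS=4            # 0 = unlimited, capped at 100
codeagent "Summarize the responsibilities of each package in this repo"
```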
### Agent presets (`~/.codeagent/models.json`)
Define agent → backend/model/prompt mappings in `~/.codeagent/models.json` and select one with `--agent <name>`:
```json
{
"default_backend": "opencode",
"default_model": "opencode/grok-code",
"agents": {
"develop": {
"backend": "codex",
"model": "gpt-4.1",
"prompt_file": "~/.codeagent/prompts/develop.md",
"description": "Code development"
}
}
}
```
## Supported Backends
This project does not bundle any model capabilities; it relies on the corresponding CLIs being installed locally and available on `PATH`:
- `codex`: runs `codex e ...` (adds `--dangerously-bypass-approvals-and-sandbox` by default; set `CODEX_BYPASS_SANDBOX=false` to disable)
- `claude`: runs `claude -p ... --output-format stream-json` (skips permission prompts by default; set `CODEAGENT_SKIP_PERMISSIONS=false` to re-enable them)
- `gemini`: runs `gemini ... -o stream-json` (can load environment variables from `~/.gemini/.env`)
- `opencode`: runs `opencode run --format json`
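A small sketch of keeping the safety prompts enabled for one-off runs by flipping the environment toggles mentioned above:
```bash
# Keep Codex's approval/sandbox flow for this invocation only
CODEX_BYPASS_SANDBOX=false codeagent --backend codex "Review the uncommitted changes"
# Keep Claude's permission prompts for this invocation only
CODEAGENT_SKIP_PERMISSIONS=false codeagent --backend claude "Review the uncommitted changes"
```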
## Development
```bash
make build
make test
make lint
make clean
```


@@ -14,14 +14,10 @@ Multi-backend AI code execution wrapper supporting Codex, Claude, and Gemini.
## Installation
```bash
# Clone repository
git clone https://github.com/cexll/myclaude.git
cd myclaude
# Recommended: run the installer and select "codeagent-wrapper"
npx github:cexll/myclaude
# Install via install.py (includes binary compilation)
python3 install.py --module dev
# Or manual installation
# Manual build (optional; requires repo checkout)
cd codeagent-wrapper
go build -o ~/.claude/bin/codeagent-wrapper
```


@@ -1,79 +0,0 @@
package main
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
)
type AgentModelConfig struct {
Backend string `json:"backend"`
Model string `json:"model"`
PromptFile string `json:"prompt_file,omitempty"`
Description string `json:"description,omitempty"`
Yolo bool `json:"yolo,omitempty"`
Reasoning string `json:"reasoning,omitempty"`
}
type ModelsConfig struct {
DefaultBackend string `json:"default_backend"`
DefaultModel string `json:"default_model"`
Agents map[string]AgentModelConfig `json:"agents"`
}
var defaultModelsConfig = ModelsConfig{
DefaultBackend: "opencode",
DefaultModel: "opencode/grok-code",
Agents: map[string]AgentModelConfig{
"oracle": {Backend: "claude", Model: "claude-opus-4-5-20251101", PromptFile: "~/.claude/skills/omo/references/oracle.md", Description: "Technical advisor"},
"librarian": {Backend: "claude", Model: "claude-sonnet-4-5-20250929", PromptFile: "~/.claude/skills/omo/references/librarian.md", Description: "Researcher"},
"explore": {Backend: "opencode", Model: "opencode/grok-code", PromptFile: "~/.claude/skills/omo/references/explore.md", Description: "Code search"},
"develop": {Backend: "codex", Model: "", PromptFile: "~/.claude/skills/omo/references/develop.md", Description: "Code development"},
"frontend-ui-ux-engineer": {Backend: "gemini", Model: "", PromptFile: "~/.claude/skills/omo/references/frontend-ui-ux-engineer.md", Description: "Frontend engineer"},
"document-writer": {Backend: "gemini", Model: "", PromptFile: "~/.claude/skills/omo/references/document-writer.md", Description: "Documentation"},
},
}
func loadModelsConfig() *ModelsConfig {
home, err := os.UserHomeDir()
if err != nil {
logWarn(fmt.Sprintf("Failed to resolve home directory for models config: %v; using defaults", err))
return &defaultModelsConfig
}
configPath := filepath.Join(home, ".codeagent", "models.json")
data, err := os.ReadFile(configPath)
if err != nil {
if !os.IsNotExist(err) {
logWarn(fmt.Sprintf("Failed to read models config %s: %v; using defaults", configPath, err))
}
return &defaultModelsConfig
}
var cfg ModelsConfig
if err := json.Unmarshal(data, &cfg); err != nil {
logWarn(fmt.Sprintf("Failed to parse models config %s: %v; using defaults", configPath, err))
return &defaultModelsConfig
}
// Merge with defaults
for name, agent := range defaultModelsConfig.Agents {
if _, exists := cfg.Agents[name]; !exists {
if cfg.Agents == nil {
cfg.Agents = make(map[string]AgentModelConfig)
}
cfg.Agents[name] = agent
}
}
return &cfg
}
func resolveAgentConfig(agentName string) (backend, model, promptFile, reasoning string, yolo bool) {
cfg := loadModelsConfig()
if agent, ok := cfg.Agents[agentName]; ok {
return agent.Backend, agent.Model, agent.PromptFile, agent.Reasoning, agent.Yolo
}
return cfg.DefaultBackend, cfg.DefaultModel, "", "", false
}


@@ -1,240 +0,0 @@
package main
import (
"encoding/json"
"os"
"path/filepath"
"strings"
)
// Backend defines the contract for invoking different AI CLI backends.
// Each backend is responsible for supplying the executable command and
// building the argument list based on the wrapper config.
type Backend interface {
Name() string
BuildArgs(cfg *Config, targetArg string) []string
Command() string
}
type CodexBackend struct{}
func (CodexBackend) Name() string { return "codex" }
func (CodexBackend) Command() string {
return "codex"
}
func (CodexBackend) BuildArgs(cfg *Config, targetArg string) []string {
return buildCodexArgs(cfg, targetArg)
}
type ClaudeBackend struct{}
func (ClaudeBackend) Name() string { return "claude" }
func (ClaudeBackend) Command() string {
return "claude"
}
func (ClaudeBackend) BuildArgs(cfg *Config, targetArg string) []string {
return buildClaudeArgs(cfg, targetArg)
}
const maxClaudeSettingsBytes = 1 << 20 // 1MB
type minimalClaudeSettings struct {
Env map[string]string
Model string
}
// loadMinimalClaudeSettings extracts only a safe minimal subset from ~/.claude/settings.json:
// - env: only string-typed values are accepted
// - model: only a string value is accepted
// A missing file, parse failure, or oversized file yields an empty result.
func loadMinimalClaudeSettings() minimalClaudeSettings {
home, err := os.UserHomeDir()
if err != nil || home == "" {
return minimalClaudeSettings{}
}
settingPath := filepath.Join(home, ".claude", "settings.json")
info, err := os.Stat(settingPath)
if err != nil || info.Size() > maxClaudeSettingsBytes {
return minimalClaudeSettings{}
}
data, err := os.ReadFile(settingPath)
if err != nil {
return minimalClaudeSettings{}
}
var cfg struct {
Env map[string]any `json:"env"`
Model any `json:"model"`
}
if err := json.Unmarshal(data, &cfg); err != nil {
return minimalClaudeSettings{}
}
out := minimalClaudeSettings{}
if model, ok := cfg.Model.(string); ok {
out.Model = strings.TrimSpace(model)
}
if len(cfg.Env) == 0 {
return out
}
env := make(map[string]string, len(cfg.Env))
for k, v := range cfg.Env {
s, ok := v.(string)
if !ok {
continue
}
env[k] = s
}
if len(env) == 0 {
return out
}
out.Env = env
return out
}
// loadMinimalEnvSettings is kept for backwards tests; prefer loadMinimalClaudeSettings.
func loadMinimalEnvSettings() map[string]string {
settings := loadMinimalClaudeSettings()
if len(settings.Env) == 0 {
return nil
}
return settings.Env
}
// loadGeminiEnv loads environment variables from ~/.gemini/.env
// Supports GEMINI_API_KEY, GEMINI_MODEL, GOOGLE_GEMINI_BASE_URL
// Also sets GEMINI_API_KEY_AUTH_MECHANISM=bearer for third-party API compatibility
func loadGeminiEnv() map[string]string {
home, err := os.UserHomeDir()
if err != nil || home == "" {
return nil
}
envPath := filepath.Join(home, ".gemini", ".env")
data, err := os.ReadFile(envPath)
if err != nil {
return nil
}
env := make(map[string]string)
for _, line := range strings.Split(string(data), "\n") {
line = strings.TrimSpace(line)
if line == "" || strings.HasPrefix(line, "#") {
continue
}
idx := strings.IndexByte(line, '=')
if idx <= 0 {
continue
}
key := strings.TrimSpace(line[:idx])
value := strings.TrimSpace(line[idx+1:])
if key != "" && value != "" {
env[key] = value
}
}
// Set bearer auth mechanism for third-party API compatibility
if _, ok := env["GEMINI_API_KEY"]; ok {
if _, hasAuth := env["GEMINI_API_KEY_AUTH_MECHANISM"]; !hasAuth {
env["GEMINI_API_KEY_AUTH_MECHANISM"] = "bearer"
}
}
if len(env) == 0 {
return nil
}
return env
}
func buildClaudeArgs(cfg *Config, targetArg string) []string {
if cfg == nil {
return nil
}
args := []string{"-p"}
// Default to skip permissions unless CODEAGENT_SKIP_PERMISSIONS=false
if cfg.SkipPermissions || cfg.Yolo || envFlagDefaultTrue("CODEAGENT_SKIP_PERMISSIONS") {
args = append(args, "--dangerously-skip-permissions")
}
// Prevent infinite recursion: disable all setting sources (user, project, local)
// This ensures a clean execution environment without CLAUDE.md or skills that would trigger codeagent
args = append(args, "--setting-sources", "")
if model := strings.TrimSpace(cfg.Model); model != "" {
args = append(args, "--model", model)
}
if cfg.Mode == "resume" {
if cfg.SessionID != "" {
// Claude CLI uses -r <session_id> for resume.
args = append(args, "-r", cfg.SessionID)
}
}
// Note: claude CLI doesn't support -C flag; workdir set via cmd.Dir
args = append(args, "--output-format", "stream-json", "--verbose", targetArg)
return args
}
type GeminiBackend struct{}
func (GeminiBackend) Name() string { return "gemini" }
func (GeminiBackend) Command() string {
return "gemini"
}
func (GeminiBackend) BuildArgs(cfg *Config, targetArg string) []string {
return buildGeminiArgs(cfg, targetArg)
}
type OpencodeBackend struct{}
func (OpencodeBackend) Name() string { return "opencode" }
func (OpencodeBackend) Command() string { return "opencode" }
func (OpencodeBackend) BuildArgs(cfg *Config, targetArg string) []string {
args := []string{"run"}
if model := strings.TrimSpace(cfg.Model); model != "" {
args = append(args, "-m", model)
}
if cfg.Mode == "resume" && cfg.SessionID != "" {
args = append(args, "-s", cfg.SessionID)
}
args = append(args, "--format", "json")
if targetArg != "-" {
args = append(args, targetArg)
}
return args
}
func buildGeminiArgs(cfg *Config, targetArg string) []string {
if cfg == nil {
return nil
}
args := []string{"-o", "stream-json", "-y"}
if model := strings.TrimSpace(cfg.Model); model != "" {
args = append(args, "-m", model)
}
if cfg.Mode == "resume" {
if cfg.SessionID != "" {
args = append(args, "-r", cfg.SessionID)
}
}
// Note: gemini CLI doesn't support -C flag; workdir set via cmd.Dir
// Use positional argument instead of deprecated -p flag
// For stdin mode ("-"), use -p to read from stdin
if targetArg == "-" {
args = append(args, "-p", targetArg)
} else {
args = append(args, targetArg)
}
return args
}


@@ -1,39 +0,0 @@
package main
import (
"testing"
)
// BenchmarkLoggerWrite measures sequential log write performance
func BenchmarkLoggerWrite(b *testing.B) {
logger, err := NewLogger()
if err != nil {
b.Fatal(err)
}
defer logger.Close()
b.ResetTimer()
for i := 0; i < b.N; i++ {
logger.Info("benchmark log message")
}
b.StopTimer()
logger.Flush()
}
// BenchmarkLoggerConcurrentWrite measures concurrent log write performance
func BenchmarkLoggerConcurrentWrite(b *testing.B) {
logger, err := NewLogger()
if err != nil {
b.Fatal(err)
}
defer logger.Close()
b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
logger.Info("concurrent benchmark log message")
}
})
b.StopTimer()
logger.Flush()
}


@@ -0,0 +1,7 @@
package main
import app "codeagent-wrapper/internal/app"
func main() {
app.Run()
}


@@ -1,473 +0,0 @@
package main
import (
"bytes"
"context"
"fmt"
"os"
"strconv"
"strings"
)
// Config holds CLI configuration
type Config struct {
Mode string // "new" or "resume"
Task string
SessionID string
WorkDir string
Model string
ReasoningEffort string
ExplicitStdin bool
Timeout int
Backend string
Agent string
PromptFile string
PromptFileExplicit bool
SkipPermissions bool
Yolo bool
MaxParallelWorkers int
}
// ParallelConfig defines the JSON schema for parallel execution
type ParallelConfig struct {
Tasks []TaskSpec `json:"tasks"`
GlobalBackend string `json:"backend,omitempty"`
}
// TaskSpec describes an individual task entry in the parallel config
type TaskSpec struct {
ID string `json:"id"`
Task string `json:"task"`
WorkDir string `json:"workdir,omitempty"`
Dependencies []string `json:"dependencies,omitempty"`
SessionID string `json:"session_id,omitempty"`
Backend string `json:"backend,omitempty"`
Model string `json:"model,omitempty"`
ReasoningEffort string `json:"reasoning_effort,omitempty"`
Agent string `json:"agent,omitempty"`
PromptFile string `json:"prompt_file,omitempty"`
SkipPermissions bool `json:"skip_permissions,omitempty"`
Mode string `json:"-"`
UseStdin bool `json:"-"`
Context context.Context `json:"-"`
}
// TaskResult captures the execution outcome of a task
type TaskResult struct {
TaskID string `json:"task_id"`
ExitCode int `json:"exit_code"`
Message string `json:"message"`
SessionID string `json:"session_id"`
Error string `json:"error"`
LogPath string `json:"log_path"`
// Structured report fields
Coverage string `json:"coverage,omitempty"` // extracted coverage percentage (e.g., "92%")
CoverageNum float64 `json:"coverage_num,omitempty"` // numeric coverage for comparison
CoverageTarget float64 `json:"coverage_target,omitempty"` // target coverage (default 90)
FilesChanged []string `json:"files_changed,omitempty"` // list of changed files
KeyOutput string `json:"key_output,omitempty"` // brief summary of what was done
TestsPassed int `json:"tests_passed,omitempty"` // number of tests passed
TestsFailed int `json:"tests_failed,omitempty"` // number of tests failed
sharedLog bool
}
var backendRegistry = map[string]Backend{
"codex": CodexBackend{},
"claude": ClaudeBackend{},
"gemini": GeminiBackend{},
"opencode": OpencodeBackend{},
}
func selectBackend(name string) (Backend, error) {
key := strings.ToLower(strings.TrimSpace(name))
if key == "" {
key = defaultBackendName
}
if backend, ok := backendRegistry[key]; ok {
return backend, nil
}
return nil, fmt.Errorf("unsupported backend %q", name)
}
func envFlagEnabled(key string) bool {
val, ok := os.LookupEnv(key)
if !ok {
return false
}
val = strings.TrimSpace(strings.ToLower(val))
switch val {
case "", "0", "false", "no", "off":
return false
default:
return true
}
}
func parseBoolFlag(val string, defaultValue bool) bool {
val = strings.TrimSpace(strings.ToLower(val))
switch val {
case "1", "true", "yes", "on":
return true
case "0", "false", "no", "off":
return false
default:
return defaultValue
}
}
// envFlagDefaultTrue returns true unless the env var is explicitly set to false/0/no/off.
func envFlagDefaultTrue(key string) bool {
val, ok := os.LookupEnv(key)
if !ok {
return true
}
return parseBoolFlag(val, true)
}
func validateAgentName(name string) error {
if strings.TrimSpace(name) == "" {
return fmt.Errorf("agent name is empty")
}
for _, r := range name {
switch {
case r >= 'a' && r <= 'z':
case r >= 'A' && r <= 'Z':
case r >= '0' && r <= '9':
case r == '-', r == '_':
default:
return fmt.Errorf("agent name %q contains invalid character %q", name, r)
}
}
return nil
}
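// parseParallelConfig parses the ---TASK--- / ---CONTENT--- blocks read from stdin
// in --parallel mode into a ParallelConfig, validating ids, workdirs, session ids,
// and agent names along the way.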
func parseParallelConfig(data []byte) (*ParallelConfig, error) {
trimmed := bytes.TrimSpace(data)
if len(trimmed) == 0 {
return nil, fmt.Errorf("parallel config is empty")
}
tasks := strings.Split(string(trimmed), "---TASK---")
var cfg ParallelConfig
seen := make(map[string]struct{})
taskIndex := 0
for _, taskBlock := range tasks {
taskBlock = strings.TrimSpace(taskBlock)
if taskBlock == "" {
continue
}
taskIndex++
parts := strings.SplitN(taskBlock, "---CONTENT---", 2)
if len(parts) != 2 {
return nil, fmt.Errorf("task block #%d missing ---CONTENT--- separator", taskIndex)
}
meta := strings.TrimSpace(parts[0])
content := strings.TrimSpace(parts[1])
task := TaskSpec{WorkDir: defaultWorkdir}
agentSpecified := false
for _, line := range strings.Split(meta, "\n") {
line = strings.TrimSpace(line)
if line == "" {
continue
}
kv := strings.SplitN(line, ":", 2)
if len(kv) != 2 {
continue
}
key := strings.TrimSpace(kv[0])
value := strings.TrimSpace(kv[1])
switch key {
case "id":
task.ID = value
case "workdir":
// Validate workdir: "-" is not a valid directory
if value == "-" {
return nil, fmt.Errorf("task block #%d has invalid workdir: '-' is not a valid directory path", taskIndex)
}
task.WorkDir = value
case "session_id":
task.SessionID = value
task.Mode = "resume"
case "backend":
task.Backend = value
case "model":
task.Model = value
case "reasoning_effort":
task.ReasoningEffort = value
case "agent":
agentSpecified = true
task.Agent = value
case "skip_permissions", "skip-permissions":
if value == "" {
task.SkipPermissions = true
continue
}
task.SkipPermissions = parseBoolFlag(value, false)
case "dependencies":
for _, dep := range strings.Split(value, ",") {
dep = strings.TrimSpace(dep)
if dep != "" {
task.Dependencies = append(task.Dependencies, dep)
}
}
}
}
if task.Mode == "" {
task.Mode = "new"
}
if agentSpecified {
if strings.TrimSpace(task.Agent) == "" {
return nil, fmt.Errorf("task block #%d has empty agent field", taskIndex)
}
if err := validateAgentName(task.Agent); err != nil {
return nil, fmt.Errorf("task block #%d invalid agent name: %w", taskIndex, err)
}
backend, model, promptFile, reasoning, _ := resolveAgentConfig(task.Agent)
if task.Backend == "" {
task.Backend = backend
}
if task.Model == "" {
task.Model = model
}
if task.ReasoningEffort == "" {
task.ReasoningEffort = reasoning
}
task.PromptFile = promptFile
}
if task.ID == "" {
return nil, fmt.Errorf("task block #%d missing id field", taskIndex)
}
if content == "" {
return nil, fmt.Errorf("task block #%d (%q) missing content", taskIndex, task.ID)
}
if task.Mode == "resume" && strings.TrimSpace(task.SessionID) == "" {
return nil, fmt.Errorf("task block #%d (%q) has empty session_id", taskIndex, task.ID)
}
if _, exists := seen[task.ID]; exists {
return nil, fmt.Errorf("task block #%d has duplicate id: %s", taskIndex, task.ID)
}
task.Task = content
cfg.Tasks = append(cfg.Tasks, task)
seen[task.ID] = struct{}{}
}
if len(cfg.Tasks) == 0 {
return nil, fmt.Errorf("no tasks found")
}
return &cfg, nil
}
func parseArgs() (*Config, error) {
args := os.Args[1:]
if len(args) == 0 {
return nil, fmt.Errorf("task required")
}
backendName := defaultBackendName
model := ""
reasoningEffort := ""
agentName := ""
promptFile := ""
promptFileExplicit := false
yolo := false
skipPermissions := envFlagEnabled("CODEAGENT_SKIP_PERMISSIONS")
filtered := make([]string, 0, len(args))
for i := 0; i < len(args); i++ {
arg := args[i]
switch {
case arg == "--agent":
if i+1 >= len(args) {
return nil, fmt.Errorf("--agent flag requires a value")
}
value := strings.TrimSpace(args[i+1])
if value == "" {
return nil, fmt.Errorf("--agent flag requires a value")
}
if err := validateAgentName(value); err != nil {
return nil, fmt.Errorf("--agent flag invalid value: %w", err)
}
resolvedBackend, resolvedModel, resolvedPromptFile, resolvedReasoning, resolvedYolo := resolveAgentConfig(value)
backendName = resolvedBackend
model = resolvedModel
if !promptFileExplicit {
promptFile = resolvedPromptFile
}
if reasoningEffort == "" {
reasoningEffort = resolvedReasoning
}
yolo = resolvedYolo
agentName = value
i++
continue
case strings.HasPrefix(arg, "--agent="):
value := strings.TrimSpace(strings.TrimPrefix(arg, "--agent="))
if value == "" {
return nil, fmt.Errorf("--agent flag requires a value")
}
if err := validateAgentName(value); err != nil {
return nil, fmt.Errorf("--agent flag invalid value: %w", err)
}
resolvedBackend, resolvedModel, resolvedPromptFile, resolvedReasoning, resolvedYolo := resolveAgentConfig(value)
backendName = resolvedBackend
model = resolvedModel
if !promptFileExplicit {
promptFile = resolvedPromptFile
}
if reasoningEffort == "" {
reasoningEffort = resolvedReasoning
}
yolo = resolvedYolo
agentName = value
continue
case arg == "--prompt-file":
if i+1 >= len(args) {
return nil, fmt.Errorf("--prompt-file flag requires a value")
}
value := strings.TrimSpace(args[i+1])
if value == "" {
return nil, fmt.Errorf("--prompt-file flag requires a value")
}
promptFile = value
promptFileExplicit = true
i++
continue
case strings.HasPrefix(arg, "--prompt-file="):
value := strings.TrimSpace(strings.TrimPrefix(arg, "--prompt-file="))
if value == "" {
return nil, fmt.Errorf("--prompt-file flag requires a value")
}
promptFile = value
promptFileExplicit = true
continue
case arg == "--backend":
if i+1 >= len(args) {
return nil, fmt.Errorf("--backend flag requires a value")
}
backendName = args[i+1]
i++
continue
case strings.HasPrefix(arg, "--backend="):
value := strings.TrimPrefix(arg, "--backend=")
if value == "" {
return nil, fmt.Errorf("--backend flag requires a value")
}
backendName = value
continue
case arg == "--skip-permissions", arg == "--dangerously-skip-permissions":
skipPermissions = true
continue
case arg == "--model":
if i+1 >= len(args) {
return nil, fmt.Errorf("--model flag requires a value")
}
model = args[i+1]
i++
continue
case strings.HasPrefix(arg, "--model="):
value := strings.TrimPrefix(arg, "--model=")
if value == "" {
return nil, fmt.Errorf("--model flag requires a value")
}
model = value
continue
case arg == "--reasoning-effort":
if i+1 >= len(args) {
return nil, fmt.Errorf("--reasoning-effort flag requires a value")
}
value := strings.TrimSpace(args[i+1])
if value == "" {
return nil, fmt.Errorf("--reasoning-effort flag requires a value")
}
reasoningEffort = value
i++
continue
case strings.HasPrefix(arg, "--reasoning-effort="):
value := strings.TrimSpace(strings.TrimPrefix(arg, "--reasoning-effort="))
if value == "" {
return nil, fmt.Errorf("--reasoning-effort flag requires a value")
}
reasoningEffort = value
continue
case strings.HasPrefix(arg, "--skip-permissions="):
skipPermissions = parseBoolFlag(strings.TrimPrefix(arg, "--skip-permissions="), skipPermissions)
continue
case strings.HasPrefix(arg, "--dangerously-skip-permissions="):
skipPermissions = parseBoolFlag(strings.TrimPrefix(arg, "--dangerously-skip-permissions="), skipPermissions)
continue
}
filtered = append(filtered, arg)
}
if len(filtered) == 0 {
return nil, fmt.Errorf("task required")
}
args = filtered
cfg := &Config{WorkDir: defaultWorkdir, Backend: backendName, Agent: agentName, PromptFile: promptFile, PromptFileExplicit: promptFileExplicit, SkipPermissions: skipPermissions, Yolo: yolo, Model: strings.TrimSpace(model), ReasoningEffort: strings.TrimSpace(reasoningEffort)}
cfg.MaxParallelWorkers = resolveMaxParallelWorkers()
if args[0] == "resume" {
if len(args) < 3 {
return nil, fmt.Errorf("resume mode requires: resume <session_id> <task>")
}
cfg.Mode = "resume"
cfg.SessionID = strings.TrimSpace(args[1])
if cfg.SessionID == "" {
return nil, fmt.Errorf("resume mode requires non-empty session_id")
}
cfg.Task = args[2]
cfg.ExplicitStdin = (args[2] == "-")
if len(args) > 3 {
// Validate workdir: "-" is not a valid directory
if args[3] == "-" {
return nil, fmt.Errorf("invalid workdir: '-' is not a valid directory path")
}
cfg.WorkDir = args[3]
}
} else {
cfg.Mode = "new"
cfg.Task = args[0]
cfg.ExplicitStdin = (args[0] == "-")
if len(args) > 1 {
// Validate workdir: "-" is not a valid directory
if args[1] == "-" {
return nil, fmt.Errorf("invalid workdir: '-' is not a valid directory path")
}
cfg.WorkDir = args[1]
}
}
return cfg, nil
}
const maxParallelWorkersLimit = 100
func resolveMaxParallelWorkers() int {
raw := strings.TrimSpace(os.Getenv("CODEAGENT_MAX_PARALLEL_WORKERS"))
if raw == "" {
return 0
}
value, err := strconv.Atoi(raw)
if err != nil || value < 0 {
logWarn(fmt.Sprintf("Invalid CODEAGENT_MAX_PARALLEL_WORKERS=%q, falling back to unlimited", raw))
return 0
}
if value > maxParallelWorkersLimit {
logWarn(fmt.Sprintf("CODEAGENT_MAX_PARALLEL_WORKERS=%d exceeds limit, capping at %d", value, maxParallelWorkersLimit))
return maxParallelWorkersLimit
}
return value
}


@@ -1,3 +1,43 @@
module codeagent-wrapper
go 1.21
require (
github.com/goccy/go-json v0.10.5
github.com/rs/zerolog v1.34.0
github.com/shirou/gopsutil/v3 v3.24.5
github.com/spf13/cobra v1.8.1
github.com/spf13/pflag v1.0.5
github.com/spf13/viper v1.19.0
)
require (
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/magiconair/properties v1.8.7 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.19 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/pelletier/go-toml/v2 v2.2.2 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/sagikazarmark/locafero v0.4.0 // indirect
github.com/sagikazarmark/slog-shim v0.1.0 // indirect
github.com/shoenig/go-m1cpu v0.1.6 // indirect
github.com/sourcegraph/conc v0.3.0 // indirect
github.com/spf13/afero v1.11.0 // indirect
github.com/spf13/cast v1.6.0 // indirect
github.com/subosito/gotenv v1.6.0 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
go.uber.org/atomic v1.9.0 // indirect
go.uber.org/multierr v1.9.0 // indirect
golang.org/x/exp v0.0.0-20230905200255-921286631fa9 // indirect
golang.org/x/sys v0.20.0 // indirect
golang.org/x/text v0.14.0 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

codeagent-wrapper/go.sum Normal file

@@ -0,0 +1,117 @@
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
github.com/magiconair/properties v1.8.7 h1:IeQXZAiQcpL9mgcAe1Nu6cX9LLw6ExEHKjN0VQdvPDY=
github.com/magiconair/properties v1.8.7/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6Wq+LM=
github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c h1:ncq/mPwQF4JjgDlrVEn3C11VoGHZN7m8qihwgMEtzYw=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
github.com/rs/zerolog v1.34.0 h1:k43nTLIwcTVQAncfCw4KZ2VY6ukYoZaBPNOE8txlOeY=
github.com/rs/zerolog v1.34.0/go.mod h1:bJsvje4Z08ROH4Nhs5iH600c3IkWhwp44iRc54W6wYQ=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sagikazarmark/locafero v0.4.0 h1:HApY1R9zGo4DBgr7dqsTH/JJxLTTsOt7u6keLGt6kNQ=
github.com/sagikazarmark/locafero v0.4.0/go.mod h1:Pe1W6UlPYUk/+wc/6KFhbORCfqzgYEpgQ3O5fPuL3H4=
github.com/sagikazarmark/slog-shim v0.1.0 h1:diDBnUNK9N/354PgrxMywXnAwEr1QZcOr6gto+ugjYE=
github.com/sagikazarmark/slog-shim v0.1.0/go.mod h1:SrcSrq8aKtyuqEI1uvTDTK1arOWRIczQRv+GVI1AkeQ=
github.com/shirou/gopsutil/v3 v3.24.5 h1:i0t8kL+kQTvpAYToeuiVk3TgDeKOFioZO3Ztz/iZ9pI=
github.com/shirou/gopsutil/v3 v3.24.5/go.mod h1:bsoOS1aStSs9ErQ1WWfxllSeS1K5D+U30r2NfcubMVk=
github.com/shoenig/go-m1cpu v0.1.6 h1:nxdKQNcEB6vzgA2E2bvzKIYRuNj7XNJ4S/aRSwKzFtM=
github.com/shoenig/go-m1cpu v0.1.6/go.mod h1:1JJMcUBvfNwpq05QDQVAnx3gUHr9IYF7GNg9SUEw2VQ=
github.com/shoenig/test v0.6.4 h1:kVTaSd7WLz5WZ2IaoM0RSzRsUD+m8wRR+5qvntpn4LU=
github.com/shoenig/test v0.6.4/go.mod h1:byHiCGXqrVaflBLAMq/srcZIHynQPQgeyvkvXnjqq0k=
github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo=
github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0=
github.com/spf13/afero v1.11.0 h1:WJQKhtpdm3v2IzqG8VMqrr6Rf3UYpEF239Jy9wNepM8=
github.com/spf13/afero v1.11.0/go.mod h1:GH9Y3pIexgf1MTIWtNGyogA5MwRIDXGUr+hbWNoBjkY=
github.com/spf13/cast v1.6.0 h1:GEiTHELF+vaR5dhz3VqZfFSzZjYbgeKDpBxQVS4GYJ0=
github.com/spf13/cast v1.6.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=
github.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.19.0 h1:RWq5SEjt8o25SROyN3z2OrDB9l7RPd3lwTWU8EcEdcI=
github.com/spf13/viper v1.19.0/go.mod h1:GQUN9bilAbhU/jgc1bKs99f/suXKeUMct8Adx5+Ntkg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/tklauser/go-sysconf v0.3.12 h1:0QaGUFOdQaIVdPgfITYzaTegZvdCjmYO52cSFAEVmqU=
github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.uber.org/atomic v1.9.0 h1:ECmE8Bn/WFTYwEW/bpKD3M8VtR/zQVbavAoalC1PYyE=
go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/multierr v1.9.0 h1:7fIwc/ZtS0q++VgcfqFDxSBZVv/Xo49/SYnDFupUwlI=
go.uber.org/multierr v1.9.0/go.mod h1:X2jQV1h+kxSjClGpnseKVIxpmcjrj7MNnI0bnlfKTVQ=
golang.org/x/exp v0.0.0-20230905200255-921286631fa9 h1:GoHiUyI/Tp2nVkLI2mCxVkOjsbSXD66ic0XW0js0R9g=
golang.org/x/exp v0.0.0-20230905200255-921286631fa9/go.mod h1:S2oDrQGGwySpoQPVqRShND87VCbxmc6bL1Yd2oYrm6k=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.20.0 h1:Od9JTbYCk261bKm4M/mw7AklTlFYIa0bIp9BgSm1S8Y=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=
gopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

View File

@@ -1,4 +1,4 @@
package main
package wrapper
import (
"context"
@@ -6,6 +6,9 @@ import (
"path/filepath"
"testing"
"time"
config "codeagent-wrapper/internal/config"
executor "codeagent-wrapper/internal/executor"
)
func TestValidateAgentName(t *testing.T) {
@@ -28,7 +31,7 @@ func TestValidateAgentName(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := validateAgentName(tt.input)
err := config.ValidateAgentName(tt.input)
if (err != nil) != tt.wantErr {
t.Fatalf("validateAgentName(%q) err=%v, wantErr=%v", tt.input, err, tt.wantErr)
}
@@ -59,6 +62,8 @@ func TestParseParallelConfig_ResolvesAgentPromptFile(t *testing.T) {
home := t.TempDir()
t.Setenv("HOME", home)
t.Setenv("USERPROFILE", home)
t.Cleanup(config.ResetModelsConfigCacheForTest)
config.ResetModelsConfigCacheForTest()
configDir := filepath.Join(home, ".codeagent")
if err := os.MkdirAll(configDir, 0o755); err != nil {
@@ -117,10 +122,8 @@ func TestDefaultRunCodexTaskFn_AppliesAgentPromptFile(t *testing.T) {
WaitDelay: 2 * time.Millisecond,
})
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
return fake
}
selectBackendFn = func(name string) (Backend, error) {
_ = executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner { return fake })
_ = executor.SetSelectBackendFn(func(name string) (Backend, error) {
return testBackend{
name: name,
command: "fake-cmd",
@@ -128,7 +131,7 @@ func TestDefaultRunCodexTaskFn_AppliesAgentPromptFile(t *testing.T) {
return []string{targetArg}
},
}, nil
}
})
res := defaultRunCodexTaskFn(TaskSpec{
ID: "t",

View File

@@ -0,0 +1,278 @@
package wrapper
import (
"fmt"
"io"
"os"
"path/filepath"
"strings"
"time"
)
const (
version = "6.1.2"
defaultWorkdir = "."
defaultTimeout = 7200 // seconds (2 hours)
defaultCoverageTarget = 90.0
codexLogLineLimit = 1000
stdinSpecialChars = "\n\\\"'`$"
stderrCaptureLimit = 4 * 1024
defaultBackendName = "codex"
defaultCodexCommand = "codex"
// stdout close reasons
stdoutCloseReasonWait = "wait-done"
stdoutCloseReasonDrain = "drain-timeout"
stdoutCloseReasonCtx = "context-cancel"
stdoutDrainTimeout = 500 * time.Millisecond
)
// Test hooks for dependency injection
var (
stdinReader io.Reader = os.Stdin
isTerminalFn = defaultIsTerminal
codexCommand = defaultCodexCommand
cleanupHook func()
startupCleanupAsync = true
buildCodexArgsFn = buildCodexArgs
selectBackendFn = selectBackend
cleanupLogsFn = cleanupOldLogs
defaultBuildArgsFn = buildCodexArgs
runTaskFn = runCodexTask
exitFn = os.Exit
)
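// runStartupCleanup invokes the configured log-cleanup function once, downgrading any panic or error to a warning.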
func runStartupCleanup() {
if cleanupLogsFn == nil {
return
}
defer func() {
if r := recover(); r != nil {
logWarn(fmt.Sprintf("cleanupOldLogs panic: %v", r))
}
}()
if _, err := cleanupLogsFn(); err != nil {
logWarn(fmt.Sprintf("cleanupOldLogs error: %v", err))
}
}
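// scheduleStartupCleanup runs log cleanup in a background goroutine; when startupCleanupAsync is false it runs synchronously instead.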
func scheduleStartupCleanup() {
if !startupCleanupAsync {
runStartupCleanup()
return
}
if cleanupLogsFn == nil {
return
}
fn := cleanupLogsFn
go func() {
defer func() {
if r := recover(); r != nil {
logWarn(fmt.Sprintf("cleanupOldLogs panic: %v", r))
}
}()
if _, err := fn(); err != nil {
logWarn(fmt.Sprintf("cleanupOldLogs error: %v", err))
}
}()
}
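// runCleanupMode implements the cleanup command: it deletes old logs, prints scan/delete/keep statistics, and returns a process exit code.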
func runCleanupMode() int {
if cleanupLogsFn == nil {
fmt.Fprintln(os.Stderr, "Cleanup failed: log cleanup function not configured")
return 1
}
stats, err := cleanupLogsFn()
if err != nil {
fmt.Fprintf(os.Stderr, "Cleanup failed: %v\n", err)
return 1
}
fmt.Println("Cleanup completed")
fmt.Printf("Files scanned: %d\n", stats.Scanned)
fmt.Printf("Files deleted: %d\n", stats.Deleted)
if len(stats.DeletedFiles) > 0 {
for _, f := range stats.DeletedFiles {
fmt.Printf(" - %s\n", f)
}
}
fmt.Printf("Files kept: %d\n", stats.Kept)
if len(stats.KeptFiles) > 0 {
for _, f := range stats.KeptFiles {
fmt.Printf(" - %s\n", f)
}
}
if stats.Errors > 0 {
fmt.Printf("Deletion errors: %d\n", stats.Errors)
}
return 0
}
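// readAgentPromptFile expands a leading ~, normalizes the path, and reads the prompt file.
// Unless allowOutsideClaudeDir is true, both the literal path and its symlink-resolved form must stay under ~/.claude or ~/.codeagent/agents.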
func readAgentPromptFile(path string, allowOutsideClaudeDir bool) (string, error) {
raw := strings.TrimSpace(path)
if raw == "" {
return "", nil
}
expanded := raw
if raw == "~" || strings.HasPrefix(raw, "~/") || strings.HasPrefix(raw, "~\\") {
home, err := os.UserHomeDir()
if err != nil {
return "", err
}
if raw == "~" {
expanded = home
} else {
expanded = home + raw[1:]
}
}
absPath, err := filepath.Abs(expanded)
if err != nil {
return "", err
}
absPath = filepath.Clean(absPath)
home, err := os.UserHomeDir()
if err != nil {
if !allowOutsideClaudeDir {
return "", err
}
logWarn(fmt.Sprintf("Failed to resolve home directory for prompt file validation: %v; proceeding without restriction", err))
} else {
allowedDirs := []string{
filepath.Clean(filepath.Join(home, ".claude")),
filepath.Clean(filepath.Join(home, ".codeagent", "agents")),
}
for i := range allowedDirs {
allowedAbs, err := filepath.Abs(allowedDirs[i])
if err == nil {
allowedDirs[i] = filepath.Clean(allowedAbs)
}
}
isWithinDir := func(path, dir string) bool {
rel, err := filepath.Rel(dir, path)
if err != nil {
return false
}
rel = filepath.Clean(rel)
if rel == "." {
return true
}
if rel == ".." {
return false
}
prefix := ".." + string(os.PathSeparator)
return !strings.HasPrefix(rel, prefix)
}
if !allowOutsideClaudeDir {
withinAllowed := false
for _, dir := range allowedDirs {
if isWithinDir(absPath, dir) {
withinAllowed = true
break
}
}
if !withinAllowed {
logWarn(fmt.Sprintf("Refusing to read prompt file outside allowed dirs (%s): %s", strings.Join(allowedDirs, ", "), absPath))
return "", fmt.Errorf("prompt file must be under ~/.claude or ~/.codeagent/agents")
}
resolvedPath, errPath := filepath.EvalSymlinks(absPath)
if errPath == nil {
resolvedPath = filepath.Clean(resolvedPath)
resolvedAllowed := make([]string, 0, len(allowedDirs))
for _, dir := range allowedDirs {
resolvedBase, errBase := filepath.EvalSymlinks(dir)
if errBase != nil {
continue
}
resolvedAllowed = append(resolvedAllowed, filepath.Clean(resolvedBase))
}
if len(resolvedAllowed) > 0 {
withinResolved := false
for _, dir := range resolvedAllowed {
if isWithinDir(resolvedPath, dir) {
withinResolved = true
break
}
}
if !withinResolved {
logWarn(fmt.Sprintf("Refusing to read prompt file outside allowed dirs (%s) (resolved): %s", strings.Join(resolvedAllowed, ", "), resolvedPath))
return "", fmt.Errorf("prompt file must be under ~/.claude or ~/.codeagent/agents")
}
}
}
} else {
withinAllowed := false
for _, dir := range allowedDirs {
if isWithinDir(absPath, dir) {
withinAllowed = true
break
}
}
if !withinAllowed {
logWarn(fmt.Sprintf("Reading prompt file outside allowed dirs (%s): %s", strings.Join(allowedDirs, ", "), absPath))
}
}
}
data, err := os.ReadFile(absPath)
if err != nil {
return "", err
}
return strings.TrimRight(string(data), "\r\n"), nil
}
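// wrapTaskWithAgentPrompt prefixes the task with the agent prompt wrapped in an <agent-prompt> block.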
func wrapTaskWithAgentPrompt(prompt string, task string) string {
return "<agent-prompt>\n" + prompt + "\n</agent-prompt>\n\n" + task
}
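// runCleanupHook flushes the active logger and then calls the optional cleanupHook, if one is set.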
func runCleanupHook() {
if logger := activeLogger(); logger != nil {
logger.Flush()
}
if cleanupHook != nil {
cleanupHook()
}
}
func printHelp() {
name := currentWrapperName()
help := fmt.Sprintf(`%[1]s - Go wrapper for AI CLI backends
Usage:
%[1]s "task" [workdir]
%[1]s --backend claude "task" [workdir]
%[1]s --prompt-file /path/to/prompt.md "task" [workdir]
%[1]s - [workdir] Read task from stdin
%[1]s resume <session_id> "task" [workdir]
%[1]s resume <session_id> - [workdir]
%[1]s --parallel Run tasks in parallel (config from stdin)
%[1]s --parallel --full-output Run tasks in parallel with full output (legacy)
%[1]s --version
%[1]s --help
Parallel mode examples:
%[1]s --parallel < tasks.txt
echo '...' | %[1]s --parallel
%[1]s --parallel --full-output < tasks.txt
%[1]s --parallel <<'EOF'
Environment Variables:
CODEX_TIMEOUT Timeout in milliseconds (default: 7200000)
CODEAGENT_ASCII_MODE Use ASCII symbols instead of Unicode (PASS/WARN/FAIL)
Exit Codes:
0 Success
1 General error (missing args, no output)
124 Timeout
127 backend command not found
130 Interrupted (Ctrl+C)
* Passthrough from backend process`, name)
fmt.Println(help)
}

View File

@@ -0,0 +1,9 @@
package wrapper
import backend "codeagent-wrapper/internal/backend"
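// Type aliases keep the wrapper package's existing backend names while the implementations live in internal/backend.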
type Backend = backend.Backend
type CodexBackend = backend.CodexBackend
type ClaudeBackend = backend.ClaudeBackend
type GeminiBackend = backend.GeminiBackend
type OpencodeBackend = backend.OpencodeBackend

View File

@@ -0,0 +1,7 @@
package wrapper
import backend "codeagent-wrapper/internal/backend"
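// init wires the wrapper's logging helpers into internal/backend.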
func init() {
backend.SetLogFuncs(logWarn, logError)
}

View File

@@ -0,0 +1,5 @@
package wrapper
import backend "codeagent-wrapper/internal/backend"
func selectBackend(name string) (Backend, error) { return backend.Select(name) }

View File

@@ -0,0 +1,103 @@
package wrapper
import (
"bytes"
"os"
"testing"
config "codeagent-wrapper/internal/config"
)
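// Package-level sinks keep benchmark results alive so the compiler cannot optimize the measured calls away.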
var (
benchCmdSink any
benchConfigSink *Config
benchMessageSink string
benchThreadIDSink string
)
// BenchmarkStartup_NewRootCommand measures CLI startup overhead (command+flags construction).
func BenchmarkStartup_NewRootCommand(b *testing.B) {
b.ReportAllocs()
for i := 0; i < b.N; i++ {
benchCmdSink = newRootCommand()
}
}
// BenchmarkConfigParse_ParseArgs measures config parsing from argv/env (steady-state).
func BenchmarkConfigParse_ParseArgs(b *testing.B) {
home := b.TempDir()
b.Setenv("HOME", home)
b.Setenv("USERPROFILE", home)
config.ResetModelsConfigCacheForTest()
b.Cleanup(config.ResetModelsConfigCacheForTest)
origArgs := os.Args
os.Args = []string{"codeagent-wrapper", "--agent", "develop", "task"}
b.Cleanup(func() { os.Args = origArgs })
if _, err := parseArgs(); err != nil {
b.Fatalf("warmup parseArgs() error: %v", err)
}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
cfg, err := parseArgs()
if err != nil {
b.Fatalf("parseArgs() error: %v", err)
}
benchConfigSink = cfg
}
}
// BenchmarkJSONParse_ParseJSONStreamInternal measures line-delimited JSON stream parsing.
func BenchmarkJSONParse_ParseJSONStreamInternal(b *testing.B) {
stream := []byte(
`{"type":"thread.started","thread_id":"t"}` + "\n" +
`{"type":"item.completed","item":{"type":"agent_message","text":"hello"}}` + "\n" +
`{"type":"thread.completed","thread_id":"t"}` + "\n",
)
b.SetBytes(int64(len(stream)))
b.ReportAllocs()
for i := 0; i < b.N; i++ {
message, threadID := parseJSONStreamInternal(bytes.NewReader(stream), nil, nil, nil, nil)
benchMessageSink = message
benchThreadIDSink = threadID
}
}
// BenchmarkLoggerWrite measures log write performance.
func BenchmarkLoggerWrite(b *testing.B) {
logger, err := NewLogger()
if err != nil {
b.Fatal(err)
}
defer logger.Close()
b.ResetTimer()
for i := 0; i < b.N; i++ {
logger.Info("benchmark log message")
}
b.StopTimer()
logger.Flush()
}
// BenchmarkLoggerConcurrentWrite measures concurrent log write performance.
func BenchmarkLoggerConcurrentWrite(b *testing.B) {
logger, err := NewLogger()
if err != nil {
b.Fatal(err)
}
defer logger.Close()
b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
logger.Info("concurrent benchmark log message")
}
})
b.StopTimer()
logger.Flush()
}

View File

@@ -0,0 +1,664 @@
package wrapper
import (
"errors"
"fmt"
"io"
"os"
"reflect"
"strings"
config "codeagent-wrapper/internal/config"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"github.com/spf13/viper"
)
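// exitError lets RunE handlers signal a specific process exit code through cobra's error return path.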
type exitError struct {
code int
}
func (e exitError) Error() string {
return fmt.Sprintf("exit %d", e.code)
}
type cliOptions struct {
Backend string
Model string
ReasoningEffort string
Agent string
PromptFile string
SkipPermissions bool
Parallel bool
FullOutput bool
Cleanup bool
Version bool
ConfigFile string
}
func Main() {
Run()
}
// Run is the program entrypoint invoked from cmd/codeagent-wrapper/main.go.
func Run() {
exitFn(run())
}
func run() int {
cmd := newRootCommand()
cmd.SetArgs(os.Args[1:])
if err := cmd.Execute(); err != nil {
var ee exitError
if errors.As(err, &ee) {
return ee.code
}
return 1
}
return 0
}
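// newRootCommand constructs the cobra root command: it registers flags and subcommands, handles the --version/--cleanup shortcuts, and dispatches to parallel or single-task execution.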
func newRootCommand() *cobra.Command {
name := currentWrapperName()
opts := &cliOptions{}
cmd := &cobra.Command{
Use: fmt.Sprintf("%s [flags] <task>|resume <session_id> <task> [workdir]", name),
Short: "Go wrapper for AI CLI backends",
SilenceErrors: true,
SilenceUsage: true,
Args: cobra.ArbitraryArgs,
RunE: func(cmd *cobra.Command, args []string) error {
if opts.Version {
fmt.Printf("%s version %s\n", name, version)
return nil
}
if opts.Cleanup {
code := runCleanupMode()
if code == 0 {
return nil
}
return exitError{code: code}
}
exitCode := runWithLoggerAndCleanup(func() int {
v, err := config.NewViper(opts.ConfigFile)
if err != nil {
logError(err.Error())
return 1
}
if opts.Parallel {
return runParallelMode(cmd, args, opts, v, name)
}
logInfo("Script started")
cfg, err := buildSingleConfig(cmd, args, os.Args[1:], opts, v)
if err != nil {
logError(err.Error())
return 1
}
logInfo(fmt.Sprintf("Parsed args: mode=%s, task_len=%d, backend=%s", cfg.Mode, len(cfg.Task), cfg.Backend))
return runSingleMode(cfg, name)
})
if exitCode == 0 {
return nil
}
return exitError{code: exitCode}
},
}
cmd.CompletionOptions.DisableDefaultCmd = true
addRootFlags(cmd.Flags(), opts)
cmd.AddCommand(newVersionCommand(name), newCleanupCommand())
return cmd
}
func addRootFlags(fs *pflag.FlagSet, opts *cliOptions) {
fs.StringVar(&opts.ConfigFile, "config", "", "Config file path (default: $HOME/.codeagent/config.*)")
fs.BoolVarP(&opts.Version, "version", "v", false, "Print version and exit")
fs.BoolVar(&opts.Cleanup, "cleanup", false, "Clean up old logs and exit")
fs.BoolVar(&opts.Parallel, "parallel", false, "Run tasks in parallel (config from stdin)")
fs.BoolVar(&opts.FullOutput, "full-output", false, "Parallel mode: include full task output (legacy)")
fs.StringVar(&opts.Backend, "backend", defaultBackendName, "Backend to use (codex, claude, gemini, opencode)")
fs.StringVar(&opts.Model, "model", "", "Model override")
fs.StringVar(&opts.ReasoningEffort, "reasoning-effort", "", "Reasoning effort (backend-specific)")
fs.StringVar(&opts.Agent, "agent", "", "Agent preset name (from ~/.codeagent/models.json)")
fs.StringVar(&opts.PromptFile, "prompt-file", "", "Prompt file path")
fs.BoolVar(&opts.SkipPermissions, "skip-permissions", false, "Skip permissions prompts (also via CODEAGENT_SKIP_PERMISSIONS)")
fs.BoolVar(&opts.SkipPermissions, "dangerously-skip-permissions", false, "Alias for --skip-permissions")
}
func newVersionCommand(name string) *cobra.Command {
return &cobra.Command{
Use: "version",
Short: "Print version and exit",
SilenceErrors: true,
SilenceUsage: true,
RunE: func(cmd *cobra.Command, args []string) error {
fmt.Printf("%s version %s\n", name, version)
return nil
},
}
}
func newCleanupCommand() *cobra.Command {
return &cobra.Command{
Use: "cleanup",
Short: "Clean up old logs and exit",
SilenceErrors: true,
SilenceUsage: true,
RunE: func(cmd *cobra.Command, args []string) error {
code := runCleanupMode()
if code == 0 {
return nil
}
return exitError{code: code}
},
}
}
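// runWithLoggerAndCleanup initializes the logger, runs fn, and on a non-zero exit code echoes recent log errors to stderr; the per-run log file is removed on exit either way.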
func runWithLoggerAndCleanup(fn func() int) (exitCode int) {
logger, err := NewLogger()
if err != nil {
fmt.Fprintf(os.Stderr, "ERROR: failed to initialize logger: %v\n", err)
return 1
}
setLogger(logger)
defer func() {
logger := activeLogger()
if logger != nil {
logger.Flush()
}
if err := closeLogger(); err != nil {
fmt.Fprintf(os.Stderr, "ERROR: failed to close logger: %v\n", err)
}
if logger == nil {
return
}
if exitCode != 0 {
if entries := logger.ExtractRecentErrors(10); len(entries) > 0 {
fmt.Fprintln(os.Stderr, "\n=== Recent Errors ===")
for _, entry := range entries {
fmt.Fprintln(os.Stderr, entry)
}
fmt.Fprintf(os.Stderr, "Log file: %s (deleted)\n", logger.Path())
}
}
_ = logger.RemoveLogFile()
}()
defer runCleanupHook()
// Clean up stale logs from previous runs.
scheduleStartupCleanup()
return fn()
}
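// parseArgs parses the process arguments into a single-task Config without executing a command; BenchmarkConfigParse_ParseArgs exercises it.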
func parseArgs() (*Config, error) {
opts := &cliOptions{}
cmd := &cobra.Command{SilenceErrors: true, SilenceUsage: true, Args: cobra.ArbitraryArgs}
addRootFlags(cmd.Flags(), opts)
rawArgv := os.Args[1:]
if err := cmd.ParseFlags(rawArgv); err != nil {
return nil, err
}
args := cmd.Flags().Args()
v, err := config.NewViper(opts.ConfigFile)
if err != nil {
return nil, err
}
return buildSingleConfig(cmd, args, rawArgv, opts, v)
}
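// buildSingleConfig merges flag values, config-file settings, and --agent preset defaults (flag order in rawArgv breaks ties between --agent and --backend/--model), then parses the positional task or resume arguments.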
func buildSingleConfig(cmd *cobra.Command, args []string, rawArgv []string, opts *cliOptions, v *viper.Viper) (*Config, error) {
backendName := defaultBackendName
model := ""
reasoningEffort := ""
agentName := ""
promptFile := ""
promptFileExplicit := false
yolo := false
if cmd.Flags().Changed("agent") {
agentName = strings.TrimSpace(opts.Agent)
if agentName == "" {
return nil, fmt.Errorf("--agent flag requires a value")
}
if err := config.ValidateAgentName(agentName); err != nil {
return nil, fmt.Errorf("--agent flag invalid value: %w", err)
}
} else {
agentName = strings.TrimSpace(v.GetString("agent"))
if agentName != "" {
if err := config.ValidateAgentName(agentName); err != nil {
return nil, fmt.Errorf("--agent flag invalid value: %w", err)
}
}
}
var resolvedBackend, resolvedModel, resolvedPromptFile, resolvedReasoning string
if agentName != "" {
var resolvedYolo bool
resolvedBackend, resolvedModel, resolvedPromptFile, resolvedReasoning, _, _, resolvedYolo = config.ResolveAgentConfig(agentName)
yolo = resolvedYolo
}
if cmd.Flags().Changed("prompt-file") {
promptFile = strings.TrimSpace(opts.PromptFile)
if promptFile == "" {
return nil, fmt.Errorf("--prompt-file flag requires a value")
}
promptFileExplicit = true
} else if val := strings.TrimSpace(v.GetString("prompt-file")); val != "" {
promptFile = val
promptFileExplicit = true
} else {
promptFile = resolvedPromptFile
}
agentFlagChanged := cmd.Flags().Changed("agent")
backendFlagChanged := cmd.Flags().Changed("backend")
if backendFlagChanged {
backendName = strings.TrimSpace(opts.Backend)
if backendName == "" {
return nil, fmt.Errorf("--backend flag requires a value")
}
}
switch {
case agentFlagChanged && backendFlagChanged && lastFlagIndex(rawArgv, "agent") > lastFlagIndex(rawArgv, "backend"):
backendName = resolvedBackend
case !backendFlagChanged && agentName != "":
backendName = resolvedBackend
case !backendFlagChanged:
if val := strings.TrimSpace(v.GetString("backend")); val != "" {
backendName = val
}
}
modelFlagChanged := cmd.Flags().Changed("model")
if modelFlagChanged {
model = strings.TrimSpace(opts.Model)
if model == "" {
return nil, fmt.Errorf("--model flag requires a value")
}
}
switch {
case agentFlagChanged && modelFlagChanged && lastFlagIndex(rawArgv, "agent") > lastFlagIndex(rawArgv, "model"):
model = strings.TrimSpace(resolvedModel)
case !modelFlagChanged && agentName != "":
model = strings.TrimSpace(resolvedModel)
case !modelFlagChanged:
model = strings.TrimSpace(v.GetString("model"))
}
if cmd.Flags().Changed("reasoning-effort") {
reasoningEffort = strings.TrimSpace(opts.ReasoningEffort)
if reasoningEffort == "" {
return nil, fmt.Errorf("--reasoning-effort flag requires a value")
}
} else if val := strings.TrimSpace(v.GetString("reasoning-effort")); val != "" {
reasoningEffort = val
} else if agentName != "" {
reasoningEffort = strings.TrimSpace(resolvedReasoning)
}
skipChanged := cmd.Flags().Changed("skip-permissions") || cmd.Flags().Changed("dangerously-skip-permissions")
skipPermissions := false
if skipChanged {
skipPermissions = opts.SkipPermissions
} else {
skipPermissions = v.GetBool("skip-permissions")
}
if len(args) == 0 {
return nil, fmt.Errorf("task required")
}
cfg := &Config{
WorkDir: defaultWorkdir,
Backend: backendName,
Agent: agentName,
PromptFile: promptFile,
PromptFileExplicit: promptFileExplicit,
SkipPermissions: skipPermissions,
Yolo: yolo,
Model: model,
ReasoningEffort: reasoningEffort,
MaxParallelWorkers: config.ResolveMaxParallelWorkers(),
}
if args[0] == "resume" {
if len(args) < 3 {
return nil, fmt.Errorf("resume mode requires: resume <session_id> <task>")
}
cfg.Mode = "resume"
cfg.SessionID = strings.TrimSpace(args[1])
if cfg.SessionID == "" {
return nil, fmt.Errorf("resume mode requires non-empty session_id")
}
cfg.Task = args[2]
cfg.ExplicitStdin = (args[2] == "-")
if len(args) > 3 {
if args[3] == "-" {
return nil, fmt.Errorf("invalid workdir: '-' is not a valid directory path")
}
cfg.WorkDir = args[3]
}
} else {
cfg.Mode = "new"
cfg.Task = args[0]
cfg.ExplicitStdin = (args[0] == "-")
if len(args) > 1 {
if args[1] == "-" {
return nil, fmt.Errorf("invalid workdir: '-' is not a valid directory path")
}
cfg.WorkDir = args[1]
}
}
return cfg, nil
}
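// lastFlagIndex returns the index of the last "--name" or "--name=value" occurrence in argv, or -1 if the flag is absent.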
func lastFlagIndex(argv []string, name string) int {
if len(argv) == 0 {
return -1
}
name = strings.TrimSpace(name)
if name == "" {
return -1
}
needle := "--" + name
prefix := needle + "="
last := -1
for i, arg := range argv {
if arg == needle || strings.HasPrefix(arg, prefix) {
last = i
}
}
return last
}
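// runParallelMode reads the task configuration from stdin, applies backend/model/skip-permissions defaults to each task, executes the dependency layers concurrently, and prints the aggregated report; any failing task makes the process exit non-zero.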
func runParallelMode(cmd *cobra.Command, args []string, opts *cliOptions, v *viper.Viper, name string) int {
if len(args) > 0 {
fmt.Fprintln(os.Stderr, "ERROR: --parallel reads its task configuration from stdin; no positional arguments are allowed.")
fmt.Fprintln(os.Stderr, "Usage examples:")
fmt.Fprintf(os.Stderr, " %s --parallel < tasks.txt\n", name)
fmt.Fprintf(os.Stderr, " echo '...' | %s --parallel\n", name)
fmt.Fprintf(os.Stderr, " %s --parallel <<'EOF'\n", name)
fmt.Fprintf(os.Stderr, " %s --parallel --full-output <<'EOF' # include full task output\n", name)
return 1
}
if cmd.Flags().Changed("agent") || cmd.Flags().Changed("prompt-file") || cmd.Flags().Changed("reasoning-effort") {
fmt.Fprintln(os.Stderr, "ERROR: --parallel reads its task configuration from stdin; only --backend, --model, --full-output and --skip-permissions are allowed.")
return 1
}
backendName := defaultBackendName
if cmd.Flags().Changed("backend") {
backendName = strings.TrimSpace(opts.Backend)
if backendName == "" {
fmt.Fprintln(os.Stderr, "ERROR: --backend flag requires a value")
return 1
}
} else if val := strings.TrimSpace(v.GetString("backend")); val != "" {
backendName = val
}
model := ""
if cmd.Flags().Changed("model") {
model = strings.TrimSpace(opts.Model)
if model == "" {
fmt.Fprintln(os.Stderr, "ERROR: --model flag requires a value")
return 1
}
} else {
model = strings.TrimSpace(v.GetString("model"))
}
fullOutput := opts.FullOutput
if !cmd.Flags().Changed("full-output") && v.IsSet("full-output") {
fullOutput = v.GetBool("full-output")
}
skipChanged := cmd.Flags().Changed("skip-permissions") || cmd.Flags().Changed("dangerously-skip-permissions")
skipPermissions := false
if skipChanged {
skipPermissions = opts.SkipPermissions
} else {
skipPermissions = v.GetBool("skip-permissions")
}
backend, err := selectBackendFn(backendName)
if err != nil {
fmt.Fprintf(os.Stderr, "ERROR: %v\n", err)
return 1
}
backendName = backend.Name()
data, err := io.ReadAll(stdinReader)
if err != nil {
fmt.Fprintf(os.Stderr, "ERROR: failed to read stdin: %v\n", err)
return 1
}
cfg, err := parseParallelConfig(data)
if err != nil {
fmt.Fprintf(os.Stderr, "ERROR: %v\n", err)
return 1
}
cfg.GlobalBackend = backendName
model = strings.TrimSpace(model)
for i := range cfg.Tasks {
if strings.TrimSpace(cfg.Tasks[i].Backend) == "" {
cfg.Tasks[i].Backend = backendName
}
if strings.TrimSpace(cfg.Tasks[i].Model) == "" && model != "" {
cfg.Tasks[i].Model = model
}
cfg.Tasks[i].SkipPermissions = cfg.Tasks[i].SkipPermissions || skipPermissions
}
timeoutSec := resolveTimeout()
layers, err := topologicalSort(cfg.Tasks)
if err != nil {
fmt.Fprintf(os.Stderr, "ERROR: %v\n", err)
return 1
}
results := executeConcurrent(layers, timeoutSec)
for i := range results {
results[i].CoverageTarget = defaultCoverageTarget
if results[i].Message == "" {
continue
}
lines := strings.Split(results[i].Message, "\n")
results[i].Coverage = extractCoverageFromLines(lines)
results[i].CoverageNum = extractCoverageNum(results[i].Coverage)
results[i].FilesChanged = extractFilesChangedFromLines(lines)
results[i].TestsPassed, results[i].TestsFailed = extractTestResultsFromLines(lines)
results[i].KeyOutput = extractKeyOutputFromLines(lines, 150)
}
fmt.Println(generateFinalOutputWithMode(results, !fullOutput))
exitCode := 0
for _, res := range results {
if res.ExitCode != 0 {
exitCode = res.ExitCode
}
}
return exitCode
}
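// runSingleMode resolves the backend, assembles the task text (piped stdin, explicit "-", optional agent prompt), decides between argv and stdin delivery, runs the task, and prints the result message plus any session ID.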
func runSingleMode(cfg *Config, name string) int {
backend, err := selectBackendFn(cfg.Backend)
if err != nil {
logError(err.Error())
return 1
}
cfg.Backend = backend.Name()
cmdInjected := codexCommand != defaultCodexCommand
argsInjected := buildCodexArgsFn != nil && reflect.ValueOf(buildCodexArgsFn).Pointer() != reflect.ValueOf(defaultBuildArgsFn).Pointer()
if backend.Name() != defaultBackendName || !cmdInjected {
codexCommand = backend.Command()
}
if backend.Name() != defaultBackendName || !argsInjected {
buildCodexArgsFn = backend.BuildArgs
}
logInfo(fmt.Sprintf("Selected backend: %s", backend.Name()))
timeoutSec := resolveTimeout()
logInfo(fmt.Sprintf("Timeout: %ds", timeoutSec))
cfg.Timeout = timeoutSec
var taskText string
var piped bool
if cfg.ExplicitStdin {
logInfo("Explicit stdin mode: reading task from stdin")
data, err := io.ReadAll(stdinReader)
if err != nil {
logError("Failed to read stdin: " + err.Error())
return 1
}
taskText = string(data)
if taskText == "" {
logError("Explicit stdin mode requires task input from stdin")
return 1
}
piped = !isTerminal()
} else {
pipedTask, err := readPipedTask()
if err != nil {
logError("Failed to read piped stdin: " + err.Error())
return 1
}
piped = pipedTask != ""
if piped {
taskText = pipedTask
} else {
taskText = cfg.Task
}
}
if strings.TrimSpace(cfg.PromptFile) != "" {
prompt, err := readAgentPromptFile(cfg.PromptFile, cfg.PromptFileExplicit)
if err != nil {
logError("Failed to read prompt file: " + err.Error())
return 1
}
taskText = wrapTaskWithAgentPrompt(prompt, taskText)
}
useStdin := cfg.ExplicitStdin || shouldUseStdin(taskText, piped)
targetArg := taskText
if useStdin {
targetArg = "-"
}
codexArgs := buildCodexArgsFn(cfg, targetArg)
logger := activeLogger()
if logger == nil {
fmt.Fprintln(os.Stderr, "ERROR: logger is not initialized")
return 1
}
fmt.Fprintf(os.Stderr, "[%s]\n", name)
fmt.Fprintf(os.Stderr, " Backend: %s\n", cfg.Backend)
fmt.Fprintf(os.Stderr, " Command: %s %s\n", codexCommand, strings.Join(codexArgs, " "))
fmt.Fprintf(os.Stderr, " PID: %d\n", os.Getpid())
fmt.Fprintf(os.Stderr, " Log: %s\n", logger.Path())
if useStdin {
var reasons []string
if piped {
reasons = append(reasons, "piped input")
}
if cfg.ExplicitStdin {
reasons = append(reasons, "explicit \"-\"")
}
if strings.Contains(taskText, "\n") {
reasons = append(reasons, "newline")
}
if strings.Contains(taskText, "\\") {
reasons = append(reasons, "backslash")
}
if strings.Contains(taskText, "\"") {
reasons = append(reasons, "double-quote")
}
if strings.Contains(taskText, "'") {
reasons = append(reasons, "single-quote")
}
if strings.Contains(taskText, "`") {
reasons = append(reasons, "backtick")
}
if strings.Contains(taskText, "$") {
reasons = append(reasons, "dollar")
}
if len(taskText) > 800 {
reasons = append(reasons, "length>800")
}
if len(reasons) > 0 {
logWarn(fmt.Sprintf("Using stdin mode for task due to: %s", strings.Join(reasons, ", ")))
}
}
logInfo(fmt.Sprintf("%s running...", cfg.Backend))
taskSpec := TaskSpec{
Task: taskText,
WorkDir: cfg.WorkDir,
Mode: cfg.Mode,
SessionID: cfg.SessionID,
Backend: cfg.Backend,
Model: cfg.Model,
ReasoningEffort: cfg.ReasoningEffort,
Agent: cfg.Agent,
SkipPermissions: cfg.SkipPermissions,
UseStdin: useStdin,
}
result := runTaskFn(taskSpec, false, cfg.Timeout)
if result.ExitCode != 0 {
return result.ExitCode
}
// Validate that we got a meaningful output message
if strings.TrimSpace(result.Message) == "" {
logError(fmt.Sprintf("no output message: backend=%s returned empty result.Message with exit_code=0", cfg.Backend))
return 1
}
fmt.Println(result.Message)
if result.SessionID != "" {
fmt.Printf("\n---\nSESSION_ID: %s\n", result.SessionID)
}
return 0
}

View File

@@ -1,4 +1,4 @@
package main
package wrapper
import (
"bufio"
@@ -11,9 +11,20 @@ import (
"sync/atomic"
"testing"
"time"
"github.com/goccy/go-json"
)
func stripTimestampPrefix(line string) string {
line = strings.TrimSpace(line)
if strings.HasPrefix(line, "{") {
var evt struct {
Message string `json:"message"`
}
if err := json.Unmarshal([]byte(line), &evt); err == nil && evt.Message != "" {
return evt.Message
}
}
if !strings.HasPrefix(line, "[") {
return line
}

View File

@@ -0,0 +1,7 @@
package wrapper
import config "codeagent-wrapper/internal/config"
// Keep the existing Config name throughout the codebase, but source the
// implementation from internal/config.
type Config = config.Config

View File

@@ -0,0 +1,54 @@
package wrapper
import (
"context"
backend "codeagent-wrapper/internal/backend"
config "codeagent-wrapper/internal/config"
executor "codeagent-wrapper/internal/executor"
)
// defaultRunCodexTaskFn is the default implementation of runCodexTaskFn (exposed for test reset).
func defaultRunCodexTaskFn(task TaskSpec, timeout int) TaskResult {
return executor.DefaultRunCodexTaskFn(task, timeout)
}
var runCodexTaskFn = defaultRunCodexTaskFn
func topologicalSort(tasks []TaskSpec) ([][]TaskSpec, error) {
return executor.TopologicalSort(tasks)
}
func executeConcurrent(layers [][]TaskSpec, timeout int) []TaskResult {
maxWorkers := config.ResolveMaxParallelWorkers()
return executeConcurrentWithContext(context.Background(), layers, timeout, maxWorkers)
}
func executeConcurrentWithContext(parentCtx context.Context, layers [][]TaskSpec, timeout int, maxWorkers int) []TaskResult {
return executor.ExecuteConcurrentWithContext(parentCtx, layers, timeout, maxWorkers, runCodexTaskFn)
}
func generateFinalOutput(results []TaskResult) string {
return executor.GenerateFinalOutput(results)
}
func generateFinalOutputWithMode(results []TaskResult, summaryOnly bool) string {
return executor.GenerateFinalOutputWithMode(results, summaryOnly)
}
func buildCodexArgs(cfg *Config, targetArg string) []string {
return backend.BuildCodexArgs(cfg, targetArg)
}
func runCodexTask(taskSpec TaskSpec, silent bool, timeoutSec int) TaskResult {
return runCodexTaskWithContext(context.Background(), taskSpec, nil, nil, false, silent, timeoutSec)
}
func runCodexProcess(parentCtx context.Context, codexArgs []string, taskText string, useStdin bool, timeoutSec int) (message, threadID string, exitCode int) {
res := runCodexTaskWithContext(parentCtx, TaskSpec{Task: taskText, WorkDir: defaultWorkdir, Mode: "new", UseStdin: useStdin}, nil, codexArgs, true, false, timeoutSec)
return res.Message, res.SessionID, res.ExitCode
}
func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backend Backend, customArgs []string, useCustomArgs bool, silent bool, timeoutSec int) TaskResult {
return executor.RunCodexTaskWithContext(parentCtx, taskSpec, backend, codexCommand, buildCodexArgsFn, customArgs, useCustomArgs, silent, timeoutSec)
}

View File

@@ -1,4 +1,4 @@
package main
package wrapper
import (
"bufio"
@@ -15,9 +15,10 @@ import (
"strings"
"sync"
"sync/atomic"
"syscall"
"testing"
"time"
executor "codeagent-wrapper/internal/executor"
)
var executorTestTaskCounter atomic.Int64
@@ -91,7 +92,7 @@ func (rc *reasonReadCloser) record(reason string) {
type execFakeRunner struct {
stdout io.ReadCloser
stderr io.ReadCloser
process processHandle
process executor.ProcessHandle
stdin io.WriteCloser
dir string
env map[string]string
@@ -158,7 +159,7 @@ func (f *execFakeRunner) SetEnv(env map[string]string) {
f.env[k] = v
}
}
func (f *execFakeRunner) Process() processHandle {
func (f *execFakeRunner) Process() executor.ProcessHandle {
if f.process != nil {
return f.process
}
@@ -168,225 +169,15 @@ func (f *execFakeRunner) Process() processHandle {
return &execFakeProcess{pid: 1}
}
func TestExecutorHelperCoverage(t *testing.T) {
t.Run("realCmdAndProcess", func(t *testing.T) {
rc := &realCmd{}
if err := rc.Start(); err == nil {
t.Fatalf("expected error for nil command")
}
if err := rc.Wait(); err == nil {
t.Fatalf("expected error for nil command")
}
if _, err := rc.StdoutPipe(); err == nil {
t.Fatalf("expected error for nil command")
}
if _, err := rc.StderrPipe(); err == nil {
t.Fatalf("expected error for nil command")
}
if _, err := rc.StdinPipe(); err == nil {
t.Fatalf("expected error for nil command")
}
rc.SetStderr(io.Discard)
if rc.Process() != nil {
t.Fatalf("expected nil process")
}
rcWithCmd := &realCmd{cmd: &exec.Cmd{}}
rcWithCmd.SetStderr(io.Discard)
rcWithCmd.SetDir("/tmp")
if rcWithCmd.cmd.Dir != "/tmp" {
t.Fatalf("expected SetDir to set cmd.Dir, got %q", rcWithCmd.cmd.Dir)
}
echoCmd := exec.Command("echo", "ok")
rcProc := &realCmd{cmd: echoCmd}
stdoutPipe, err := rcProc.StdoutPipe()
if err != nil {
t.Fatalf("StdoutPipe error: %v", err)
}
stderrPipe, err := rcProc.StderrPipe()
if err != nil {
t.Fatalf("StderrPipe error: %v", err)
}
stdinPipe, err := rcProc.StdinPipe()
if err != nil {
t.Fatalf("StdinPipe error: %v", err)
}
if err := rcProc.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
_, _ = stdinPipe.Write([]byte{})
_ = stdinPipe.Close()
procHandle := rcProc.Process()
if procHandle == nil {
t.Fatalf("expected process handle")
}
_ = procHandle.Signal(syscall.SIGTERM)
_ = procHandle.Kill()
_ = rcProc.Wait()
_, _ = io.ReadAll(stdoutPipe)
_, _ = io.ReadAll(stderrPipe)
rp := &realProcess{}
if rp.Pid() != 0 {
t.Fatalf("nil process should have pid 0")
}
if rp.Kill() != nil {
t.Fatalf("nil process Kill should be nil")
}
if rp.Signal(syscall.SIGTERM) != nil {
t.Fatalf("nil process Signal should be nil")
}
rpLive := &realProcess{proc: &os.Process{Pid: 99}}
if rpLive.Pid() != 99 {
t.Fatalf("expected pid 99, got %d", rpLive.Pid())
}
_ = rpLive.Kill()
_ = rpLive.Signal(syscall.SIGTERM)
})
t.Run("topologicalSortAndSkip", func(t *testing.T) {
layers, err := topologicalSort([]TaskSpec{{ID: "root"}, {ID: "child", Dependencies: []string{"root"}}})
if err != nil || len(layers) != 2 {
t.Fatalf("unexpected topological sort result: layers=%d err=%v", len(layers), err)
}
if _, err := topologicalSort([]TaskSpec{{ID: "cycle", Dependencies: []string{"cycle"}}}); err == nil {
t.Fatalf("expected cycle detection error")
}
failed := map[string]TaskResult{"root": {ExitCode: 1}}
if skip, _ := shouldSkipTask(TaskSpec{ID: "child", Dependencies: []string{"root"}}, failed); !skip {
t.Fatalf("should skip when dependency failed")
}
if skip, _ := shouldSkipTask(TaskSpec{ID: "leaf"}, failed); skip {
t.Fatalf("should not skip task without dependencies")
}
if skip, _ := shouldSkipTask(TaskSpec{ID: "child-ok", Dependencies: []string{"root"}}, map[string]TaskResult{}); skip {
t.Fatalf("should not skip when dependencies succeeded")
}
})
t.Run("cancelledTaskResult", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
cancel()
res := cancelledTaskResult("t1", ctx)
if res.ExitCode != 130 {
t.Fatalf("expected cancel exit code, got %d", res.ExitCode)
}
timeoutCtx, timeoutCancel := context.WithTimeout(context.Background(), 0)
defer timeoutCancel()
res = cancelledTaskResult("t2", timeoutCtx)
if res.ExitCode != 124 {
t.Fatalf("expected timeout exit code, got %d", res.ExitCode)
}
})
t.Run("generateFinalOutputAndArgs", func(t *testing.T) {
const key = "CODEX_BYPASS_SANDBOX"
t.Setenv(key, "false")
out := generateFinalOutput([]TaskResult{
{TaskID: "ok", ExitCode: 0},
{TaskID: "fail", ExitCode: 1, Error: "boom"},
})
if !strings.Contains(out, "ok") || !strings.Contains(out, "fail") {
t.Fatalf("unexpected summary output: %s", out)
}
// Test summary mode (default) - should have new format with ### headers
out = generateFinalOutput([]TaskResult{{TaskID: "rich", ExitCode: 0, SessionID: "sess", LogPath: "/tmp/log", Message: "hello"}})
if !strings.Contains(out, "### rich") {
t.Fatalf("summary output missing task header: %s", out)
}
// Test full output mode - should have Session and Message
out = generateFinalOutputWithMode([]TaskResult{{TaskID: "rich", ExitCode: 0, SessionID: "sess", LogPath: "/tmp/log", Message: "hello"}}, false)
if !strings.Contains(out, "Session: sess") || !strings.Contains(out, "Log: /tmp/log") || !strings.Contains(out, "hello") {
t.Fatalf("full output missing fields: %s", out)
}
args := buildCodexArgs(&Config{Mode: "new", WorkDir: "/tmp"}, "task")
if !slices.Equal(args, []string{"e", "--skip-git-repo-check", "-C", "/tmp", "--json", "task"}) {
t.Fatalf("unexpected codex args: %+v", args)
}
args = buildCodexArgs(&Config{Mode: "resume", SessionID: "sess"}, "target")
if !slices.Equal(args, []string{"e", "--skip-git-repo-check", "--json", "resume", "sess", "target"}) {
t.Fatalf("unexpected resume args: %+v", args)
}
})
t.Run("generateFinalOutputASCIIMode", func(t *testing.T) {
t.Setenv("CODEAGENT_ASCII_MODE", "true")
results := []TaskResult{
{TaskID: "ok", ExitCode: 0, Coverage: "92%", CoverageNum: 92, CoverageTarget: 90, KeyOutput: "done"},
{TaskID: "warn", ExitCode: 0, Coverage: "80%", CoverageNum: 80, CoverageTarget: 90, KeyOutput: "did"},
{TaskID: "bad", ExitCode: 2, Error: "boom"},
}
out := generateFinalOutput(results)
for _, sym := range []string{"PASS", "WARN", "FAIL"} {
if !strings.Contains(out, sym) {
t.Fatalf("ASCII mode should include %q, got: %s", sym, out)
}
}
for _, sym := range []string{"✓", "⚠️", "✗"} {
if strings.Contains(out, sym) {
t.Fatalf("ASCII mode should not include %q, got: %s", sym, out)
}
}
})
t.Run("generateFinalOutputUnicodeMode", func(t *testing.T) {
t.Setenv("CODEAGENT_ASCII_MODE", "false")
results := []TaskResult{
{TaskID: "ok", ExitCode: 0, Coverage: "92%", CoverageNum: 92, CoverageTarget: 90, KeyOutput: "done"},
{TaskID: "warn", ExitCode: 0, Coverage: "80%", CoverageNum: 80, CoverageTarget: 90, KeyOutput: "did"},
{TaskID: "bad", ExitCode: 2, Error: "boom"},
}
out := generateFinalOutput(results)
for _, sym := range []string{"✓", "⚠️", "✗"} {
if !strings.Contains(out, sym) {
t.Fatalf("Unicode mode should include %q, got: %s", sym, out)
}
}
})
t.Run("executeConcurrentWrapper", func(t *testing.T) {
orig := runCodexTaskFn
defer func() { runCodexTaskFn = orig }()
runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
return TaskResult{TaskID: task.ID, ExitCode: 0, Message: "done"}
}
t.Setenv("CODEAGENT_MAX_PARALLEL_WORKERS", "1")
results := executeConcurrent([][]TaskSpec{{{ID: "wrap"}}}, 1)
if len(results) != 1 || results[0].TaskID != "wrap" {
t.Fatalf("unexpected wrapper results: %+v", results)
}
unbounded := executeConcurrentWithContext(context.Background(), [][]TaskSpec{{{ID: "unbounded"}}}, 1, 0)
if len(unbounded) != 1 || unbounded[0].ExitCode != 0 {
t.Fatalf("unexpected unbounded result: %+v", unbounded)
}
ctx, cancel := context.WithCancel(context.Background())
cancel()
cancelled := executeConcurrentWithContext(ctx, [][]TaskSpec{{{ID: "cancel"}}}, 1, 1)
if cancelled[0].ExitCode == 0 {
t.Fatalf("expected cancelled result, got %+v", cancelled[0])
}
})
}
func TestExecutorRunCodexTaskWithContext(t *testing.T) {
origRunner := newCommandRunner
defer func() { newCommandRunner = origRunner }()
defer resetTestHooks()
t.Run("resumeMissingSessionID", func(t *testing.T) {
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
t.Fatalf("unexpected command execution for invalid resume config")
return nil
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "payload", WorkDir: ".", Mode: "resume"}, nil, nil, false, false, 1)
if res.ExitCode == 0 || !strings.Contains(res.Error, "session_id") {
@@ -396,13 +187,14 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
t.Run("success", func(t *testing.T) {
var firstStdout *reasonReadCloser
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
rc := newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"hello"}}`)
if firstStdout == nil {
firstStdout = rc
}
return &execFakeRunner{stdout: rc, process: &execFakeProcess{pid: 1234}}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
res := runCodexTaskWithContext(context.Background(), TaskSpec{ID: "task-1", Task: "payload", WorkDir: "."}, nil, nil, false, false, 1)
if res.Error != "" || res.Message != "hello" || res.ExitCode != 0 {
@@ -432,17 +224,18 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
})
t.Run("startErrors", func(t *testing.T) {
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{startErr: errors.New("executable file not found"), process: &execFakeProcess{pid: 1}}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "payload", WorkDir: "."}, nil, nil, false, false, 1)
if res.ExitCode != 127 {
t.Fatalf("expected missing executable exit code, got %d", res.ExitCode)
}
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{startErr: errors.New("start failed"), process: &execFakeProcess{pid: 2}}
}
})
res = runCodexTaskWithContext(context.Background(), TaskSpec{Task: "payload", WorkDir: "."}, nil, nil, false, false, 1)
if res.ExitCode == 0 {
t.Fatalf("expected non-zero exit on start failure")
@@ -450,13 +243,14 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
})
t.Run("timeoutAndPipes", func(t *testing.T) {
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"slow"}}`),
process: &execFakeProcess{pid: 5},
waitDelay: 20 * time.Millisecond,
}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "payload", WorkDir: ".", UseStdin: true}, nil, nil, false, false, 0)
if res.ExitCode == 0 {
t.Fatalf("expected timeout result, got %+v", res)
@@ -464,17 +258,18 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
})
t.Run("pipeErrors", func(t *testing.T) {
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{stdoutErr: errors.New("stdout fail"), process: &execFakeProcess{pid: 6}}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "payload", WorkDir: "."}, nil, nil, false, false, 1)
if res.ExitCode == 0 {
t.Fatalf("expected failure on stdout pipe error")
}
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{stdinErr: errors.New("stdin fail"), process: &execFakeProcess{pid: 7}}
}
})
res = runCodexTaskWithContext(context.Background(), TaskSpec{Task: "payload", WorkDir: ".", UseStdin: true}, nil, nil, false, false, 1)
if res.ExitCode == 0 {
t.Fatalf("expected failure on stdin pipe error")
@@ -487,13 +282,14 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
if exitErr == nil {
t.Fatalf("expected exec.ExitError")
}
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"ignored"}}`),
process: &execFakeProcess{pid: 8},
waitErr: exitErr,
}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "payload", WorkDir: "."}, nil, nil, false, false, 1)
if res.ExitCode == 0 {
t.Fatalf("expected non-zero exit on wait error")
@@ -501,13 +297,14 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
})
t.Run("contextCancelled", func(t *testing.T) {
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"cancel"}}`),
process: &execFakeProcess{pid: 9},
waitDelay: 10 * time.Millisecond,
}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
ctx, cancel := context.WithCancel(context.Background())
cancel()
res := runCodexTaskWithContext(ctx, TaskSpec{Task: "payload", WorkDir: "."}, nil, nil, false, false, 1)
@@ -517,12 +314,13 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
})
t.Run("silentLogger", func(t *testing.T) {
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"quiet"}}`),
process: &execFakeProcess{pid: 10},
}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
_ = closeLogger()
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "payload", WorkDir: "."}, nil, nil, false, true, 1)
if res.ExitCode != 0 || res.LogPath == "" {
@@ -532,12 +330,13 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
})
t.Run("injectedLogger", func(t *testing.T) {
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"injected"}}`),
process: &execFakeProcess{pid: 12},
}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
_ = closeLogger()
injected, err := NewLoggerWithSuffix("executor-injected")
@@ -549,7 +348,7 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
_ = os.Remove(injected.Path())
}()
ctx := withTaskLogger(context.Background(), injected)
ctx := executor.WithTaskLogger(context.Background(), injected)
res := runCodexTaskWithContext(ctx, TaskSpec{ID: "task-injected", Task: "payload", WorkDir: "."}, nil, nil, false, true, 1)
if res.ExitCode != 0 || res.LogPath != injected.Path() {
t.Fatalf("expected injected logger path, got %+v", res)
@@ -569,12 +368,13 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
})
t.Run("contextLoggerWithoutParent", func(t *testing.T) {
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"ctx"}}`),
process: &execFakeProcess{pid: 14},
}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
_ = closeLogger()
taskLogger, err := NewLoggerWithSuffix("executor-taskctx")
@@ -586,8 +386,8 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
_ = os.Remove(taskLogger.Path())
})
ctx := withTaskLogger(context.Background(), taskLogger)
res := runCodexTaskWithContext(nil, TaskSpec{ID: "task-context", Task: "payload", WorkDir: ".", Context: ctx}, nil, nil, false, true, 1)
ctx := executor.WithTaskLogger(context.Background(), taskLogger)
res := runCodexTaskWithContext(context.TODO(), TaskSpec{ID: "task-context", Task: "payload", WorkDir: ".", Context: ctx}, nil, nil, false, true, 1)
if res.ExitCode != 0 || res.LogPath != taskLogger.Path() {
t.Fatalf("expected task logger to be reused from spec context, got %+v", res)
}
@@ -607,16 +407,17 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
t.Run("backendSetsDirAndNilContext", func(t *testing.T) {
var rc *execFakeRunner
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
rc = &execFakeRunner{
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"backend"}}`),
process: &execFakeProcess{pid: 13},
}
return rc
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
_ = closeLogger()
res := runCodexTaskWithContext(nil, TaskSpec{ID: "task-backend", Task: "payload", WorkDir: "/tmp"}, ClaudeBackend{}, nil, false, false, 1)
res := runCodexTaskWithContext(context.TODO(), TaskSpec{ID: "task-backend", Task: "payload", WorkDir: "/tmp"}, ClaudeBackend{}, nil, false, false, 1)
if res.ExitCode != 0 || res.Message != "backend" {
t.Fatalf("unexpected result: %+v", res)
}
@@ -628,13 +429,14 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
t.Run("claudeSkipPermissionsPropagatesFromTaskSpec", func(t *testing.T) {
t.Setenv("CODEAGENT_SKIP_PERMISSIONS", "false")
var gotArgs []string
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
gotArgs = append([]string(nil), args...)
return &execFakeRunner{
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"ok"}}`),
process: &execFakeProcess{pid: 15},
}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
_ = closeLogger()
res := runCodexTaskWithContext(context.Background(), TaskSpec{ID: "task-skip", Task: "payload", WorkDir: ".", SkipPermissions: true}, ClaudeBackend{}, nil, false, false, 1)
@@ -647,12 +449,13 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
})
t.Run("missingMessage", func(t *testing.T) {
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"task","text":"noop"}}`),
process: &execFakeProcess{pid: 11},
}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "payload", WorkDir: "."}, nil, nil, false, false, 1)
if res.ExitCode == 0 {
t.Fatalf("expected failure when no agent_message returned")
@@ -678,7 +481,7 @@ func TestExecutorParallelLogIsolation(t *testing.T) {
origRun := runCodexTaskFn
runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
logger := taskLoggerFromContext(task.Context)
logger := executor.TaskLoggerFromContext(task.Context)
if logger == nil {
return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "missing task logger"}
}
@@ -702,7 +505,7 @@ func TestExecutorParallelLogIsolation(t *testing.T) {
os.Stderr = stderrW
defer func() { os.Stderr = oldStderr }()
results := executeConcurrentWithContext(nil, [][]TaskSpec{{{ID: taskA}, {ID: taskB}}}, 1, -1)
results := executeConcurrentWithContext(context.TODO(), [][]TaskSpec{{{ID: taskA}, {ID: taskB}}}, 1, -1)
_ = stderrW.Close()
os.Stderr = oldStderr
@@ -768,7 +571,7 @@ func TestConcurrentExecutorParallelLogIsolationAndClosure(t *testing.T) {
t.Setenv("TMPDIR", tempDir)
oldArgs := os.Args
os.Args = []string{defaultWrapperName}
os.Args = []string{wrapperName}
t.Cleanup(func() { os.Args = oldArgs })
mainLogger, err := NewLoggerWithSuffix("concurrent-main")
@@ -814,7 +617,7 @@ func TestConcurrentExecutorParallelLogIsolationAndClosure(t *testing.T) {
runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
readyCh <- struct{}{}
logger := taskLoggerFromContext(task.Context)
logger := executor.TaskLoggerFromContext(task.Context)
loggerCh <- taskLoggerInfo{taskID: task.ID, logger: logger}
if logger == nil {
return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "missing task logger"}
@@ -901,15 +704,9 @@ func TestConcurrentExecutorParallelLogIsolationAndClosure(t *testing.T) {
}
for taskID, logger := range loggers {
if !logger.closed.Load() {
if !logger.IsClosed() {
t.Fatalf("expected task logger to be closed for %q", taskID)
}
if logger.file == nil {
t.Fatalf("expected task logger file to be non-nil for %q", taskID)
}
if _, err := logger.file.Write([]byte("x")); err == nil {
t.Fatalf("expected task logger file to be closed for %q", taskID)
}
}
mainLogger.Flush()
@@ -979,10 +776,10 @@ func parseTaskIDFromLogLine(line string) (string, bool) {
}
func TestExecutorTaskLoggerContext(t *testing.T) {
if taskLoggerFromContext(nil) != nil {
t.Fatalf("expected nil logger from nil context")
if executor.TaskLoggerFromContext(context.TODO()) != nil {
t.Fatalf("expected nil logger from TODO context")
}
if taskLoggerFromContext(context.Background()) != nil {
if executor.TaskLoggerFromContext(context.Background()) != nil {
t.Fatalf("expected nil logger when context has no logger")
}
@@ -995,12 +792,12 @@ func TestExecutorTaskLoggerContext(t *testing.T) {
_ = os.Remove(logger.Path())
}()
ctx := withTaskLogger(context.Background(), logger)
if got := taskLoggerFromContext(ctx); got != logger {
ctx := executor.WithTaskLogger(context.Background(), logger)
if got := executor.TaskLoggerFromContext(ctx); got != logger {
t.Fatalf("expected logger roundtrip, got %v", got)
}
if taskLoggerFromContext(withTaskLogger(context.Background(), nil)) != nil {
if executor.TaskLoggerFromContext(executor.WithTaskLogger(context.Background(), nil)) != nil {
t.Fatalf("expected nil logger when injected logger is nil")
}
}
@@ -1157,7 +954,7 @@ func TestExecutorExecuteConcurrentWithContextBranches(t *testing.T) {
orig := runCodexTaskFn
runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
logger := taskLoggerFromContext(task.Context)
logger := executor.TaskLoggerFromContext(task.Context)
if logger != mainLogger {
return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "unexpected logger"}
}
@@ -1191,9 +988,6 @@ func TestExecutorExecuteConcurrentWithContextBranches(t *testing.T) {
if res.LogPath != mainLogger.Path() {
t.Fatalf("shared log path mismatch: got %q want %q", res.LogPath, mainLogger.Path())
}
if !res.sharedLog {
t.Fatalf("expected sharedLog flag for %+v", res)
}
if !strings.Contains(stderrOut, "Log (shared)") {
t.Fatalf("stderr missing shared marker: %s", stderrOut)
}
@@ -1222,7 +1016,7 @@ func TestExecutorExecuteConcurrentWithContextBranches(t *testing.T) {
orig := runCodexTaskFn
runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
logger := taskLoggerFromContext(task.Context)
logger := executor.TaskLoggerFromContext(task.Context)
if logger == nil {
return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "missing logger"}
}
@@ -1260,7 +1054,14 @@ func TestExecutorExecuteConcurrentWithContextBranches(t *testing.T) {
if err != nil {
t.Fatalf("failed to read log %q: %v", res.LogPath, err)
}
if !strings.Contains(string(data), "TASK="+res.TaskID) {
found := false
for _, line := range strings.Split(string(data), "\n") {
if strings.Contains(stripTimestampPrefix(line), "TASK="+res.TaskID) {
found = true
break
}
}
if !found {
t.Fatalf("log for %q missing task marker, content: %s", res.TaskID, string(data))
}
_ = os.Remove(res.LogPath)
@@ -1268,147 +1069,6 @@ func TestExecutorExecuteConcurrentWithContextBranches(t *testing.T) {
})
}
func TestExecutorSignalAndTermination(t *testing.T) {
forceKillDelay.Store(0)
defer forceKillDelay.Store(5)
proc := &execFakeProcess{pid: 42}
cmd := &execFakeRunner{process: proc}
origNotify := signalNotifyFn
origStop := signalStopFn
defer func() {
signalNotifyFn = origNotify
signalStopFn = origStop
}()
signalNotifyFn = func(c chan<- os.Signal, sigs ...os.Signal) {
go func() { c <- syscall.SIGINT }()
}
signalStopFn = func(c chan<- os.Signal) {}
forwardSignals(context.Background(), cmd, func(string) {})
time.Sleep(20 * time.Millisecond)
proc.mu.Lock()
signalled := len(proc.signals)
proc.mu.Unlock()
if runtime.GOOS != "windows" && signalled == 0 {
t.Fatalf("process did not receive signal")
}
if proc.killed.Load() == 0 {
t.Fatalf("process was not killed after signal")
}
timer := terminateProcess(cmd)
if timer == nil {
t.Fatalf("terminateProcess returned nil timer")
}
timer.Stop()
ft := terminateCommand(cmd)
if ft == nil {
t.Fatalf("terminateCommand returned nil")
}
ft.Stop()
cmdKill := &execFakeRunner{process: &execFakeProcess{pid: 50}}
ftKill := terminateCommand(cmdKill)
time.Sleep(10 * time.Millisecond)
if p, ok := cmdKill.process.(*execFakeProcess); ok && p.killed.Load() == 0 {
t.Fatalf("terminateCommand did not kill process")
}
ftKill.Stop()
cmdKill2 := &execFakeRunner{process: &execFakeProcess{pid: 51}}
timer2 := terminateProcess(cmdKill2)
time.Sleep(10 * time.Millisecond)
if p, ok := cmdKill2.process.(*execFakeProcess); ok && p.killed.Load() == 0 {
t.Fatalf("terminateProcess did not kill process")
}
timer2.Stop()
if terminateCommand(nil) != nil {
t.Fatalf("terminateCommand should return nil for nil cmd")
}
if terminateCommand(&execFakeRunner{allowNilProcess: true}) != nil {
t.Fatalf("terminateCommand should return nil when process is nil")
}
if terminateProcess(nil) != nil {
t.Fatalf("terminateProcess should return nil for nil cmd")
}
if terminateProcess(&execFakeRunner{allowNilProcess: true}) != nil {
t.Fatalf("terminateProcess should return nil when process is nil")
}
signalNotifyFn = func(c chan<- os.Signal, sigs ...os.Signal) {}
ctxDone, cancelDone := context.WithCancel(context.Background())
cancelDone()
forwardSignals(ctxDone, &execFakeRunner{process: &execFakeProcess{pid: 70}}, func(string) {})
}
func TestExecutorCancelReasonAndCloseWithReason(t *testing.T) {
if reason := cancelReason("", nil); !strings.Contains(reason, "Context") {
t.Fatalf("unexpected cancelReason for nil ctx: %s", reason)
}
ctx, cancel := context.WithTimeout(context.Background(), 0)
defer cancel()
if !strings.Contains(cancelReason("cmd", ctx), "timeout") {
t.Fatalf("expected timeout reason")
}
cancelCtx, cancelFn := context.WithCancel(context.Background())
cancelFn()
if !strings.Contains(cancelReason("cmd", cancelCtx), "Execution cancelled") {
t.Fatalf("expected cancellation reason")
}
if !strings.Contains(cancelReason("", cancelCtx), "codex") {
t.Fatalf("expected default command name in cancel reason")
}
rc := &reasonReadCloser{r: strings.NewReader("data"), closedC: make(chan struct{}, 1)}
closeWithReason(rc, "why")
select {
case <-rc.closedC:
default:
t.Fatalf("CloseWithReason was not called")
}
plain := io.NopCloser(strings.NewReader("x"))
closeWithReason(plain, "noop")
closeWithReason(nil, "noop")
}
func TestExecutorForceKillTimerStop(t *testing.T) {
done := make(chan struct{}, 1)
ft := &forceKillTimer{timer: time.AfterFunc(50*time.Millisecond, func() { done <- struct{}{} }), done: done}
ft.Stop()
done2 := make(chan struct{}, 1)
ft2 := &forceKillTimer{timer: time.AfterFunc(0, func() { done2 <- struct{}{} }), done: done2}
time.Sleep(10 * time.Millisecond)
ft2.Stop()
var nilTimer *forceKillTimer
nilTimer.Stop()
(&forceKillTimer{}).Stop()
}
func TestExecutorForwardSignalsDefaults(t *testing.T) {
origNotify := signalNotifyFn
origStop := signalStopFn
signalNotifyFn = nil
signalStopFn = nil
defer func() {
signalNotifyFn = origNotify
signalStopFn = origStop
}()
ctx, cancel := context.WithCancel(context.Background())
cancel()
forwardSignals(ctx, &execFakeRunner{process: &execFakeProcess{pid: 80}}, func(string) {})
time.Sleep(10 * time.Millisecond)
}
func TestExecutorSharedLogFalseWhenCustomLogPath(t *testing.T) {
devNull, err := os.OpenFile(os.DevNull, os.O_WRONLY, 0)
if err != nil {
@@ -1464,10 +1124,9 @@ func TestExecutorSharedLogFalseWhenCustomLogPath(t *testing.T) {
}
res := results[0]
// Key assertion: even though handle.shared=true (because task logger creation failed),
// sharedLog should be false since LogPath does not match the main logger's path
if res.sharedLog {
t.Fatalf("expected sharedLog=false when LogPath differs from shared logger, got true")
out := generateFinalOutputWithMode(results, false)
if strings.Contains(out, "(shared)") {
t.Fatalf("did not expect shared marker when LogPath differs from shared logger, got: %s", out)
}
// Verify that LogPath is indeed the custom path

View File

@@ -0,0 +1,26 @@
package wrapper
import ilogger "codeagent-wrapper/internal/logger"
type Logger = ilogger.Logger
type CleanupStats = ilogger.CleanupStats
func NewLogger() (*Logger, error) { return ilogger.NewLogger() }
func NewLoggerWithSuffix(suffix string) (*Logger, error) { return ilogger.NewLoggerWithSuffix(suffix) }
func setLogger(l *Logger) { ilogger.SetLogger(l) }
func closeLogger() error { return ilogger.CloseLogger() }
func activeLogger() *Logger { return ilogger.ActiveLogger() }
func logInfo(msg string) { ilogger.LogInfo(msg) }
func logWarn(msg string) { ilogger.LogWarn(msg) }
func logError(msg string) { ilogger.LogError(msg) }
func cleanupOldLogs() (CleanupStats, error) { return ilogger.CleanupOldLogs() }
func sanitizeLogSuffix(raw string) string { return ilogger.SanitizeLogSuffix(raw) }
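These forwarders only delegate to internal/logger, so existing wrapper code keeps its old helper names. A minimal sketch of how code inside the wrapper package might still use them (illustrative only, not part of this change):
// Sketch: the unexported helpers above forward to the internal logger package.
if logger, err := NewLoggerWithSuffix("example"); err == nil {
    setLogger(logger)   // delegates to ilogger.SetLogger
    logInfo("old helper names keep working")
    _ = closeLogger()   // delegates to ilogger.CloseLogger
}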

View File

@@ -1,7 +1,8 @@
package main
package wrapper
import (
"bytes"
"codeagent-wrapper/internal/logger"
"fmt"
"io"
"os"
@@ -36,7 +37,9 @@ func captureStdout(t *testing.T, fn func()) string {
os.Stdout = old
var buf bytes.Buffer
io.Copy(&buf, r)
if _, err := io.Copy(&buf, r); err != nil {
t.Fatalf("io.Copy() error = %v", err)
}
return buf.String()
}
@@ -57,11 +60,17 @@ func parseIntegrationOutput(t *testing.T, out string) integrationOutput {
for _, p := range parts {
p = strings.TrimSpace(p)
if strings.HasSuffix(p, "tasks") {
fmt.Sscanf(p, "%d tasks", &payload.Summary.Total)
if _, err := fmt.Sscanf(p, "%d tasks", &payload.Summary.Total); err != nil {
t.Fatalf("failed to parse total tasks from %q: %v", p, err)
}
} else if strings.HasSuffix(p, "passed") {
fmt.Sscanf(p, "%d passed", &payload.Summary.Success)
if _, err := fmt.Sscanf(p, "%d passed", &payload.Summary.Success); err != nil {
t.Fatalf("failed to parse passed tasks from %q: %v", p, err)
}
} else if strings.HasSuffix(p, "failed") {
fmt.Sscanf(p, "%d failed", &payload.Summary.Failed)
if _, err := fmt.Sscanf(p, "%d failed", &payload.Summary.Failed); err != nil {
t.Fatalf("failed to parse failed tasks from %q: %v", p, err)
}
}
}
} else if strings.HasPrefix(line, "Total:") {
@@ -70,11 +79,17 @@ func parseIntegrationOutput(t *testing.T, out string) integrationOutput {
for _, p := range parts {
p = strings.TrimSpace(p)
if strings.HasPrefix(p, "Total:") {
fmt.Sscanf(p, "Total: %d", &payload.Summary.Total)
if _, err := fmt.Sscanf(p, "Total: %d", &payload.Summary.Total); err != nil {
t.Fatalf("failed to parse total tasks from %q: %v", p, err)
}
} else if strings.HasPrefix(p, "Success:") {
fmt.Sscanf(p, "Success: %d", &payload.Summary.Success)
if _, err := fmt.Sscanf(p, "Success: %d", &payload.Summary.Success); err != nil {
t.Fatalf("failed to parse passed tasks from %q: %v", p, err)
}
} else if strings.HasPrefix(p, "Failed:") {
fmt.Sscanf(p, "Failed: %d", &payload.Summary.Failed)
if _, err := fmt.Sscanf(p, "Failed: %d", &payload.Summary.Failed); err != nil {
t.Fatalf("failed to parse failed tasks from %q: %v", p, err)
}
}
}
} else if line == "## Task Results" {
@@ -94,34 +109,39 @@ func parseIntegrationOutput(t *testing.T, out string) integrationOutput {
currentTask = &TaskResult{}
taskLine := strings.TrimPrefix(line, "### ")
success, warning, failed := getStatusSymbols()
// Parse different formats
if strings.Contains(taskLine, " "+success) {
parts := strings.Split(taskLine, " "+success)
parseMarker := func(marker string, exitCode int) bool {
needle := " " + marker
if !strings.Contains(taskLine, needle) {
return false
}
parts := strings.Split(taskLine, needle)
currentTask.TaskID = strings.TrimSpace(parts[0])
currentTask.ExitCode = 0
// Extract coverage if present
if len(parts) > 1 {
currentTask.ExitCode = exitCode
if exitCode == 0 && len(parts) > 1 {
coveragePart := strings.TrimSpace(parts[1])
if strings.HasSuffix(coveragePart, "%") {
currentTask.Coverage = coveragePart
}
}
} else if strings.Contains(taskLine, " "+warning) {
parts := strings.Split(taskLine, " "+warning)
currentTask.TaskID = strings.TrimSpace(parts[0])
currentTask.ExitCode = 0
} else if strings.Contains(taskLine, " "+failed) {
parts := strings.Split(taskLine, " "+failed)
currentTask.TaskID = strings.TrimSpace(parts[0])
currentTask.ExitCode = 1
} else {
return true
}
switch {
case parseMarker("✓", 0), parseMarker("PASS", 0):
// ok
case parseMarker("⚠️", 0), parseMarker("WARN", 0):
// warning
case parseMarker("✗", 1), parseMarker("FAIL", 1):
// fail
default:
currentTask.TaskID = taskLine
}
} else if currentTask != nil && inTaskResults {
// Parse task details
if strings.HasPrefix(line, "Exit code:") {
fmt.Sscanf(line, "Exit code: %d", &currentTask.ExitCode)
if _, err := fmt.Sscanf(line, "Exit code: %d", &currentTask.ExitCode); err != nil {
t.Fatalf("failed to parse exit code from %q: %v", line, err)
}
} else if strings.HasPrefix(line, "Error:") {
currentTask.Error = strings.TrimPrefix(line, "Error: ")
} else if strings.HasPrefix(line, "Log:") {
@@ -147,7 +167,9 @@ func parseIntegrationOutput(t *testing.T, out string) integrationOutput {
currentTask.ExitCode = 0
} else if strings.HasPrefix(line, "Status: FAILED") {
if strings.Contains(line, "exit code") {
fmt.Sscanf(line, "Status: FAILED (exit code %d)", &currentTask.ExitCode)
if _, err := fmt.Sscanf(line, "Status: FAILED (exit code %d)", &currentTask.ExitCode); err != nil {
t.Fatalf("failed to parse exit code from %q: %v", line, err)
}
} else {
currentTask.ExitCode = 1
}
@@ -180,6 +202,37 @@ func findResultByID(t *testing.T, payload integrationOutput, id string) TaskResu
return TaskResult{}
}
func setTempDirEnv(t *testing.T, dir string) string {
t.Helper()
resolved := dir
if eval, err := filepath.EvalSymlinks(dir); err == nil {
resolved = eval
}
t.Setenv("TMPDIR", resolved)
t.Setenv("TEMP", resolved)
t.Setenv("TMP", resolved)
return resolved
}
func createTempLog(t *testing.T, dir, name string) string {
t.Helper()
path := filepath.Join(dir, name)
if err := os.WriteFile(path, []byte("test"), 0o644); err != nil {
t.Fatalf("failed to create temp log %s: %v", path, err)
}
return path
}
func stubProcessRunning(t *testing.T, fn func(int) bool) {
t.Helper()
t.Cleanup(logger.SetProcessRunningCheck(fn))
}
func stubProcessStartTime(t *testing.T, fn func(int) time.Time) {
t.Helper()
t.Cleanup(logger.SetProcessStartTimeFn(fn))
}
func TestRunParallelEndToEnd_OrderAndConcurrency(t *testing.T) {
defer resetTestHooks()
origRun := runCodexTaskFn
@@ -365,7 +418,7 @@ id: beta
---CONTENT---
task-beta`
stdinReader = bytes.NewReader([]byte(input))
os.Args = []string{"codex-wrapper", "--parallel"}
os.Args = []string{"codeagent-wrapper", "--parallel"}
var exitCode int
output := captureStdout(t, func() {
@@ -418,9 +471,9 @@ id: d
---CONTENT---
ok-d`
stdinReader = bytes.NewReader([]byte(input))
os.Args = []string{"codex-wrapper", "--parallel"}
os.Args = []string{"codeagent-wrapper", "--parallel"}
expectedLog := filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d.log", os.Getpid()))
expectedLog := filepath.Join(tempDir, fmt.Sprintf("codeagent-wrapper-%d.log", os.Getpid()))
origRun := runCodexTaskFn
runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
@@ -474,9 +527,9 @@ ok-d`
// After parallel log isolation fix, each task has its own log file
expectedLines := map[string]struct{}{
fmt.Sprintf("Task a: Log: %s", filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d-a.log", os.Getpid()))): {},
fmt.Sprintf("Task b: Log: %s", filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d-b.log", os.Getpid()))): {},
fmt.Sprintf("Task d: Log: %s", filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d-d.log", os.Getpid()))): {},
fmt.Sprintf("Task a: Log: %s", filepath.Join(tempDir, fmt.Sprintf("codeagent-wrapper-%d-a.log", os.Getpid()))): {},
fmt.Sprintf("Task b: Log: %s", filepath.Join(tempDir, fmt.Sprintf("codeagent-wrapper-%d-b.log", os.Getpid()))): {},
fmt.Sprintf("Task d: Log: %s", filepath.Join(tempDir, fmt.Sprintf("codeagent-wrapper-%d-d.log", os.Getpid()))): {},
}
if len(taskLines) != len(expectedLines) {
@@ -494,7 +547,7 @@ func TestRunNonParallelOutputsIncludeLogPathsIntegration(t *testing.T) {
defer resetTestHooks()
tempDir := setTempDirEnv(t, t.TempDir())
os.Args = []string{"codex-wrapper", "integration-log-check"}
os.Args = []string{"codeagent-wrapper", "integration-log-check"}
stdinReader = strings.NewReader("")
isTerminalFn = func() bool { return true }
codexCommand = "echo"
@@ -512,7 +565,7 @@ func TestRunNonParallelOutputsIncludeLogPathsIntegration(t *testing.T) {
if exitCode != 0 {
t.Fatalf("run() exit=%d, want 0", exitCode)
}
expectedLog := filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d.log", os.Getpid()))
expectedLog := filepath.Join(tempDir, fmt.Sprintf("codeagent-wrapper-%d.log", os.Getpid()))
wantLine := fmt.Sprintf("Log: %s", expectedLog)
if !strings.Contains(stderr, wantLine) {
t.Fatalf("stderr missing %q, got: %q", wantLine, stderr)
@@ -693,11 +746,11 @@ func TestRunStartupCleanupRemovesOrphansEndToEnd(t *testing.T) {
tempDir := setTempDirEnv(t, t.TempDir())
orphanA := createTempLog(t, tempDir, "codex-wrapper-5001.log")
orphanB := createTempLog(t, tempDir, "codex-wrapper-5002-extra.log")
orphanC := createTempLog(t, tempDir, "codex-wrapper-5003-suffix.log")
orphanA := createTempLog(t, tempDir, "codeagent-wrapper-5001.log")
orphanB := createTempLog(t, tempDir, "codeagent-wrapper-5002-extra.log")
orphanC := createTempLog(t, tempDir, "codeagent-wrapper-5003-suffix.log")
runningPID := 81234
runningLog := createTempLog(t, tempDir, fmt.Sprintf("codex-wrapper-%d.log", runningPID))
runningLog := createTempLog(t, tempDir, fmt.Sprintf("codeagent-wrapper-%d.log", runningPID))
unrelated := createTempLog(t, tempDir, "wrapper.log")
stubProcessRunning(t, func(pid int) bool {
@@ -713,7 +766,7 @@ func TestRunStartupCleanupRemovesOrphansEndToEnd(t *testing.T) {
codexCommand = createFakeCodexScript(t, "tid-startup", "ok")
stdinReader = strings.NewReader("")
isTerminalFn = func() bool { return true }
os.Args = []string{"codex-wrapper", "task"}
os.Args = []string{"codeagent-wrapper", "task"}
if exit := run(); exit != 0 {
t.Fatalf("run() exit=%d, want 0", exit)
@@ -739,7 +792,7 @@ func TestRunStartupCleanupConcurrentWrappers(t *testing.T) {
const totalLogs = 40
for i := 0; i < totalLogs; i++ {
createTempLog(t, tempDir, fmt.Sprintf("codex-wrapper-%d.log", 9000+i))
createTempLog(t, tempDir, fmt.Sprintf("codeagent-wrapper-%d.log", 9000+i))
}
stubProcessRunning(t, func(pid int) bool {
@@ -763,7 +816,7 @@ func TestRunStartupCleanupConcurrentWrappers(t *testing.T) {
close(start)
wg.Wait()
matches, err := filepath.Glob(filepath.Join(tempDir, "codex-wrapper-*.log"))
matches, err := filepath.Glob(filepath.Join(tempDir, "codeagent-wrapper-*.log"))
if err != nil {
t.Fatalf("glob error: %v", err)
}
@@ -777,9 +830,9 @@ func TestRunCleanupFlagEndToEnd_Success(t *testing.T) {
tempDir := setTempDirEnv(t, t.TempDir())
staleA := createTempLog(t, tempDir, "codex-wrapper-2100.log")
staleB := createTempLog(t, tempDir, "codex-wrapper-2200-extra.log")
keeper := createTempLog(t, tempDir, "codex-wrapper-2300.log")
staleA := createTempLog(t, tempDir, "codeagent-wrapper-2100.log")
staleB := createTempLog(t, tempDir, "codeagent-wrapper-2200-extra.log")
keeper := createTempLog(t, tempDir, "codeagent-wrapper-2300.log")
stubProcessRunning(t, func(pid int) bool {
return pid == 2300 || pid == os.Getpid()
@@ -791,7 +844,7 @@ func TestRunCleanupFlagEndToEnd_Success(t *testing.T) {
return time.Time{}
})
os.Args = []string{"codex-wrapper", "--cleanup"}
os.Args = []string{"codeagent-wrapper", "--cleanup"}
var exitCode int
output := captureStdout(t, func() {
@@ -815,10 +868,10 @@ func TestRunCleanupFlagEndToEnd_Success(t *testing.T) {
if !strings.Contains(output, "Files kept: 1") {
t.Fatalf("missing 'Files kept: 1' in output: %q", output)
}
if !strings.Contains(output, "codex-wrapper-2100.log") || !strings.Contains(output, "codex-wrapper-2200-extra.log") {
if !strings.Contains(output, "codeagent-wrapper-2100.log") || !strings.Contains(output, "codeagent-wrapper-2200-extra.log") {
t.Fatalf("missing deleted file names in output: %q", output)
}
if !strings.Contains(output, "codex-wrapper-2300.log") {
if !strings.Contains(output, "codeagent-wrapper-2300.log") {
t.Fatalf("missing kept file names in output: %q", output)
}
@@ -831,7 +884,7 @@ func TestRunCleanupFlagEndToEnd_Success(t *testing.T) {
t.Fatalf("expected kept log to remain, err=%v", err)
}
currentLog := filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d.log", os.Getpid()))
currentLog := filepath.Join(tempDir, fmt.Sprintf("codeagent-wrapper-%d.log", os.Getpid()))
if _, err := os.Stat(currentLog); err == nil {
t.Fatalf("cleanup mode should not create new log file %s", currentLog)
} else if !os.IsNotExist(err) {
@@ -850,7 +903,7 @@ func TestRunCleanupFlagEndToEnd_FailureDoesNotAffectStartup(t *testing.T) {
return CleanupStats{Scanned: 1}, fmt.Errorf("permission denied")
}
os.Args = []string{"codex-wrapper", "--cleanup"}
os.Args = []string{"codeagent-wrapper", "--cleanup"}
var exitCode int
errOutput := captureStderr(t, func() {
@@ -867,7 +920,7 @@ func TestRunCleanupFlagEndToEnd_FailureDoesNotAffectStartup(t *testing.T) {
t.Fatalf("cleanup called %d times, want 1", calls)
}
currentLog := filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d.log", os.Getpid()))
currentLog := filepath.Join(tempDir, fmt.Sprintf("codeagent-wrapper-%d.log", os.Getpid()))
if _, err := os.Stat(currentLog); err == nil {
t.Fatalf("cleanup failure should not create new log file %s", currentLog)
} else if !os.IsNotExist(err) {
@@ -880,7 +933,7 @@ func TestRunCleanupFlagEndToEnd_FailureDoesNotAffectStartup(t *testing.T) {
codexCommand = createFakeCodexScript(t, "tid-cleanup-e2e", "ok")
stdinReader = strings.NewReader("")
isTerminalFn = func() bool { return true }
os.Args = []string{"codex-wrapper", "post-cleanup task"}
os.Args = []string{"codeagent-wrapper", "post-cleanup task"}
var normalExit int
normalOutput := captureStdout(t, func() {

View File

@@ -1,10 +1,9 @@
package main
package wrapper
import (
"bufio"
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"io"
@@ -19,6 +18,11 @@ import (
"syscall"
"testing"
"time"
config "codeagent-wrapper/internal/config"
executor "codeagent-wrapper/internal/executor"
"github.com/goccy/go-json"
)
// Helper to reset test hooks
@@ -28,17 +32,15 @@ func resetTestHooks() {
codexCommand = "codex"
cleanupHook = nil
cleanupLogsFn = cleanupOldLogs
signalNotifyFn = signal.Notify
signalStopFn = signal.Stop
startupCleanupAsync = false
config.ResetModelsConfigCacheForTest()
_ = executor.SetSelectBackendFn(nil)
buildCodexArgsFn = buildCodexArgs
selectBackendFn = selectBackend
commandContext = exec.CommandContext
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
return &realCmd{cmd: commandContext(ctx, name, args...)}
}
forceKillDelay.Store(5)
closeLogger()
executablePathFn = os.Executable
_ = executor.SetCommandContextFn(nil)
_ = executor.SetNewCommandRunner(nil)
_ = executor.SetForceKillDelay(5)
_ = closeLogger()
runTaskFn = runCodexTask
runCodexTaskFn = defaultRunCodexTaskFn
exitFn = os.Exit
@@ -86,6 +88,8 @@ func (t testBackend) Command() string {
return "echo"
}
func (t testBackend) Env(baseURL, apiKey string) map[string]string { return nil }
func withBackend(command string, argsFn func(*Config, string) []string) func() {
prev := selectBackendFn
selectBackendFn = func(name string) (Backend, error) {
@@ -107,7 +111,7 @@ func restoreStdoutPipe(c *capturedStdout) {
}
c.writer.Close()
os.Stdout = c.old
io.Copy(&c.buf, c.reader)
_, _ = io.Copy(&c.buf, c.reader)
}
func (c *capturedStdout) String() string {
@@ -127,7 +131,9 @@ func captureOutput(t *testing.T, fn func()) string {
os.Stdout = old
var buf bytes.Buffer
io.Copy(&buf, r)
if _, err := io.Copy(&buf, r); err != nil {
t.Fatalf("io.Copy() error = %v", err)
}
return buf.String()
}
@@ -141,7 +147,9 @@ func captureStderr(t *testing.T, fn func()) string {
os.Stderr = old
var buf bytes.Buffer
io.Copy(&buf, r)
if _, err := io.Copy(&buf, r); err != nil {
t.Fatalf("io.Copy() error = %v", err)
}
return buf.String()
}
@@ -262,7 +270,7 @@ func (d *drainBlockingCmd) SetEnv(env map[string]string) {
d.inner.SetEnv(env)
}
func (d *drainBlockingCmd) Process() processHandle {
func (d *drainBlockingCmd) Process() executor.ProcessHandle {
return d.inner.Process()
}
@@ -553,7 +561,7 @@ func (f *fakeCmd) SetEnv(env map[string]string) {
}
}
func (f *fakeCmd) Process() processHandle {
func (f *fakeCmd) Process() executor.ProcessHandle {
if f == nil {
return nil
}
@@ -728,9 +736,7 @@ func TestFakeCmdInfra(t *testing.T) {
WaitDelay: 5 * time.Millisecond,
})
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
return fake
}
_ = executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner { return fake })
buildCodexArgsFn = func(cfg *Config, targetArg string) []string {
return []string{targetArg}
}
@@ -774,9 +780,7 @@ func TestRunCodexTask_WaitBeforeParse(t *testing.T) {
WaitDelay: waitDelay,
})
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
return fake
}
_ = executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner { return fake })
buildCodexArgsFn = func(cfg *Config, targetArg string) []string {
return []string{targetArg}
}
@@ -821,9 +825,7 @@ func TestRunCodexTask_ParseStall(t *testing.T) {
})
blockingCmd := newDrainBlockingCmd(fake)
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
return blockingCmd
}
_ = executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner { return blockingCmd })
buildCodexArgsFn = func(cfg *Config, targetArg string) []string {
return []string{targetArg}
}
@@ -878,7 +880,7 @@ func TestRunCodexTask_ParseStall(t *testing.T) {
func TestRunCodexTask_ContextTimeout(t *testing.T) {
defer resetTestHooks()
forceKillDelay.Store(0)
_ = executor.SetForceKillDelay(0)
fake := newFakeCmd(fakeCmdConfig{
KeepStdoutOpen: true,
@@ -887,9 +889,7 @@ func TestRunCodexTask_ContextTimeout(t *testing.T) {
ReleaseWaitOnSignal: false,
})
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
return fake
}
_ = executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner { return fake })
buildCodexArgsFn = func(cfg *Config, targetArg string) []string {
return []string{targetArg}
}
@@ -898,14 +898,6 @@ func TestRunCodexTask_ContextTimeout(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), 200*time.Millisecond)
defer cancel()
var capturedTimer *forceKillTimer
terminateCommandFn = func(cmd commandRunner) *forceKillTimer {
timer := terminateCommand(cmd)
capturedTimer = timer
return timer
}
defer func() { terminateCommandFn = terminateCommand }()
result := runCodexTaskWithContext(ctx, TaskSpec{Task: "ctx-timeout", WorkDir: defaultWorkdir}, nil, nil, false, false, 60)
if result.ExitCode != 124 {
@@ -929,15 +921,6 @@ func TestRunCodexTask_ContextTimeout(t *testing.T) {
t.Fatalf("expected Kill to eventually run, got 0")
}
}
if capturedTimer == nil {
t.Fatalf("forceKillTimer not captured")
}
if !capturedTimer.stopped.Load() {
t.Fatalf("forceKillTimer.Stop was not called")
}
if !capturedTimer.drained.Load() {
t.Fatalf("forceKillTimer drain logic did not run")
}
if fake.stdout == nil {
t.Fatalf("stdout reader not initialized")
}
@@ -948,7 +931,7 @@ func TestRunCodexTask_ContextTimeout(t *testing.T) {
func TestRunCodexTask_ForcesStopAfterCompletion(t *testing.T) {
defer resetTestHooks()
forceKillDelay.Store(0)
_ = executor.SetForceKillDelay(0)
fake := newFakeCmd(fakeCmdConfig{
StdoutPlan: []fakeStdoutEvent{
@@ -961,9 +944,7 @@ func TestRunCodexTask_ForcesStopAfterCompletion(t *testing.T) {
ReleaseWaitOnKill: true,
})
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
return fake
}
_ = executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner { return fake })
buildCodexArgsFn = func(cfg *Config, targetArg string) []string { return []string{targetArg} }
codexCommand = "fake-cmd"
@@ -988,7 +969,7 @@ func TestRunCodexTask_ForcesStopAfterCompletion(t *testing.T) {
func TestRunCodexTask_ForcesStopAfterTurnCompleted(t *testing.T) {
defer resetTestHooks()
forceKillDelay.Store(0)
_ = executor.SetForceKillDelay(0)
fake := newFakeCmd(fakeCmdConfig{
StdoutPlan: []fakeStdoutEvent{
@@ -1001,9 +982,7 @@ func TestRunCodexTask_ForcesStopAfterTurnCompleted(t *testing.T) {
ReleaseWaitOnKill: true,
})
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
return fake
}
_ = executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner { return fake })
buildCodexArgsFn = func(cfg *Config, targetArg string) []string { return []string{targetArg} }
codexCommand = "fake-cmd"
@@ -1028,7 +1007,7 @@ func TestRunCodexTask_ForcesStopAfterTurnCompleted(t *testing.T) {
func TestRunCodexTask_DoesNotTerminateBeforeThreadCompleted(t *testing.T) {
defer resetTestHooks()
forceKillDelay.Store(0)
_ = executor.SetForceKillDelay(0)
fake := newFakeCmd(fakeCmdConfig{
StdoutPlan: []fakeStdoutEvent{
@@ -1042,9 +1021,7 @@ func TestRunCodexTask_DoesNotTerminateBeforeThreadCompleted(t *testing.T) {
ReleaseWaitOnKill: true,
})
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
return fake
}
_ = executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner { return fake })
buildCodexArgsFn = func(cfg *Config, targetArg string) []string { return []string{targetArg} }
codexCommand = "fake-cmd"
@@ -1498,7 +1475,7 @@ func TestBackendParseBoolFlag(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := parseBoolFlag(tt.val, tt.def); got != tt.want {
if got := config.ParseBoolFlag(tt.val, tt.def); got != tt.want {
t.Fatalf("parseBoolFlag(%q,%v) = %v, want %v", tt.val, tt.def, got, tt.want)
}
})
@@ -1508,17 +1485,17 @@ func TestBackendParseBoolFlag(t *testing.T) {
func TestBackendEnvFlagEnabled(t *testing.T) {
const key = "TEST_FLAG_ENABLED"
t.Setenv(key, "")
if envFlagEnabled(key) {
if config.EnvFlagEnabled(key) {
t.Fatalf("envFlagEnabled should be false when unset")
}
t.Setenv(key, "true")
if !envFlagEnabled(key) {
if !config.EnvFlagEnabled(key) {
t.Fatalf("envFlagEnabled should be true for 'true'")
}
t.Setenv(key, "no")
if envFlagEnabled(key) {
if config.EnvFlagEnabled(key) {
t.Fatalf("envFlagEnabled should be false for 'no'")
}
}
@@ -1708,8 +1685,8 @@ func TestClaudeModel_DefaultsFromSettings(t *testing.T) {
t.Fatalf("WriteFile: %v", err)
}
makeRunner := func(gotName *string, gotArgs *[]string, fake **fakeCmd) func(context.Context, string, ...string) commandRunner {
return func(ctx context.Context, name string, args ...string) commandRunner {
makeRunner := func(gotName *string, gotArgs *[]string, fake **fakeCmd) func(context.Context, string, ...string) executor.CommandRunner {
return func(ctx context.Context, name string, args ...string) executor.CommandRunner {
*gotName = name
*gotArgs = append([]string(nil), args...)
cmd := newFakeCmd(fakeCmdConfig{
@@ -1729,9 +1706,8 @@ func TestClaudeModel_DefaultsFromSettings(t *testing.T) {
gotArgs []string
fake *fakeCmd
)
origRunner := newCommandRunner
newCommandRunner = makeRunner(&gotName, &gotArgs, &fake)
t.Cleanup(func() { newCommandRunner = origRunner })
restore := executor.SetNewCommandRunner(makeRunner(&gotName, &gotArgs, &fake))
t.Cleanup(restore)
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "hi", Mode: "new", WorkDir: defaultWorkdir}, ClaudeBackend{}, nil, false, true, 5)
if res.ExitCode != 0 || res.Message != "ok" {
@@ -1761,9 +1737,8 @@ func TestClaudeModel_DefaultsFromSettings(t *testing.T) {
gotArgs []string
fake *fakeCmd
)
origRunner := newCommandRunner
newCommandRunner = makeRunner(&gotName, &gotArgs, &fake)
t.Cleanup(func() { newCommandRunner = origRunner })
restore := executor.SetNewCommandRunner(makeRunner(&gotName, &gotArgs, &fake))
t.Cleanup(restore)
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "hi", Mode: "new", WorkDir: defaultWorkdir, Model: "sonnet"}, ClaudeBackend{}, nil, false, true, 5)
if res.ExitCode != 0 || res.Message != "ok" {
@@ -1787,9 +1762,8 @@ func TestClaudeModel_DefaultsFromSettings(t *testing.T) {
gotArgs []string
fake *fakeCmd
)
origRunner := newCommandRunner
newCommandRunner = makeRunner(&gotName, &gotArgs, &fake)
t.Cleanup(func() { newCommandRunner = origRunner })
restore := executor.SetNewCommandRunner(makeRunner(&gotName, &gotArgs, &fake))
t.Cleanup(restore)
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "hi", Mode: "resume", SessionID: "sid-123", WorkDir: defaultWorkdir}, ClaudeBackend{}, nil, false, true, 5)
if res.ExitCode != 0 || res.Message != "ok" {
@@ -1939,6 +1913,37 @@ func TestRun_PassesReasoningEffortToTaskSpec(t *testing.T) {
}
}
func TestRun_NoOutputMessage_ReturnsExitCode1AndWritesStderr(t *testing.T) {
defer resetTestHooks()
cleanupLogsFn = func() (CleanupStats, error) { return CleanupStats{}, nil }
t.Setenv("TMPDIR", t.TempDir())
selectBackendFn = func(name string) (Backend, error) {
return testBackend{name: name, command: "echo"}, nil
}
runTaskFn = func(task TaskSpec, silent bool, timeout int) TaskResult {
return TaskResult{ExitCode: 0, Message: ""}
}
isTerminalFn = func() bool { return true }
stdinReader = strings.NewReader("")
os.Args = []string{"codeagent-wrapper", "task"}
var code int
errOutput := captureStderr(t, func() {
code = run()
})
if code != 1 {
t.Fatalf("run() exit=%d, want 1", code)
}
if !strings.Contains(errOutput, "no output message") {
t.Fatalf("stderr missing sentinel error text; got:\n%s", errOutput)
}
}
func TestRunBuildCodexArgs_NewMode(t *testing.T) {
const key = "CODEX_BYPASS_SANDBOX"
t.Setenv(key, "false")
@@ -1991,8 +1996,7 @@ func TestRunCodexTaskWithContext_CodexReasoningEffort(t *testing.T) {
t.Setenv("CODEX_BYPASS_SANDBOX", "false")
var gotArgs []string
origRunner := newCommandRunner
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
restore := executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
gotArgs = append([]string(nil), args...)
return newFakeCmd(fakeCmdConfig{
PID: 123,
@@ -2000,8 +2004,8 @@ func TestRunCodexTaskWithContext_CodexReasoningEffort(t *testing.T) {
{Data: "{\"type\":\"result\",\"session_id\":\"sid\",\"result\":\"ok\"}\n"},
},
})
}
t.Cleanup(func() { newCommandRunner = origRunner })
})
t.Cleanup(restore)
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "hi", Mode: "new", WorkDir: defaultWorkdir, ReasoningEffort: "high"}, nil, nil, false, true, 5)
if res.ExitCode != 0 || res.Message != "ok" {
@@ -2071,7 +2075,7 @@ func TestRunBuildCodexArgs_BypassSandboxEnvTrue(t *testing.T) {
t.Fatalf("NewLogger() error = %v", err)
}
setLogger(logger)
defer closeLogger()
defer func() { _ = closeLogger() }()
t.Setenv("CODEX_BYPASS_SANDBOX", "true")
@@ -2716,7 +2720,7 @@ func TestRunLogFunctions(t *testing.T) {
t.Fatalf("NewLogger() error = %v", err)
}
setLogger(logger)
defer closeLogger()
defer func() { _ = closeLogger() }()
logInfo("info message")
logWarn("warn message")
@@ -2751,25 +2755,38 @@ func TestLoggerPathAndRemoveNil(t *testing.T) {
}
func TestLoggerLogDropOnDone(t *testing.T) {
logger := &Logger{
ch: make(chan logEntry),
done: make(chan struct{}),
}
close(logger.done)
logger.log("INFO", "dropped")
logger.pendingWG.Wait()
t.Skip("internal logger behavior moved to internal/logger; exercise via public methods instead")
}
func TestLoggerLogAfterClose(t *testing.T) {
defer resetTestHooks()
tempDir := t.TempDir()
t.Setenv("TMPDIR", tempDir)
logger, err := NewLogger()
if err != nil {
t.Fatalf("NewLogger error: %v", err)
}
logPath := logger.Path()
t.Cleanup(func() { _ = os.Remove(logPath) })
logger.Info("before close")
logger.Flush()
if err := logger.Close(); err != nil {
t.Fatalf("Close error: %v", err)
}
logger.log("INFO", "should be ignored")
logger.Info("should be ignored")
logger.Flush()
data, err := os.ReadFile(logPath)
if err != nil {
t.Fatalf("failed to read log file: %v", err)
}
if strings.Contains(string(data), "should be ignored") {
t.Fatalf("expected log message to be dropped after Close, got: %s", string(data))
}
}
func TestLogWriterLogLine(t *testing.T) {
@@ -2788,7 +2805,7 @@ func TestLogWriterLogLine(t *testing.T) {
if !strings.Contains(string(data), "P:abc") {
t.Fatalf("log output missing truncated entry, got %q", string(data))
}
closeLogger()
_ = closeLogger()
}
func TestNewLogWriterDefaultMaxLen(t *testing.T) {
@@ -2807,7 +2824,9 @@ func TestBackendPrintHelp(t *testing.T) {
os.Stdout = oldStdout
var buf bytes.Buffer
io.Copy(&buf, r)
if _, err := io.Copy(&buf, r); err != nil {
t.Fatalf("io.Copy() error = %v", err)
}
output := buf.String()
expected := []string{"codeagent-wrapper", "Usage:", "resume", "CODEX_TIMEOUT", "Exit Codes:"}
@@ -2929,12 +2948,12 @@ func TestRunCodexTaskFn_UsesTaskBackend(t *testing.T) {
var seenName string
var seenArgs []string
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
_ = executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
seenName = name
seenArgs = append([]string(nil), args...)
return fake
}
selectBackendFn = func(name string) (Backend, error) {
})
_ = executor.SetSelectBackendFn(func(name string) (Backend, error) {
return testBackend{
name: strings.ToLower(name),
command: "custom-cli",
@@ -2942,7 +2961,7 @@ func TestRunCodexTaskFn_UsesTaskBackend(t *testing.T) {
return []string{"do", targetArg}
},
}, nil
}
})
res := runCodexTaskFn(TaskSpec{ID: "task-1", Task: "payload", Backend: "Custom"}, 5)
@@ -2966,9 +2985,9 @@ func TestRunCodexTaskFn_UsesTaskBackend(t *testing.T) {
func TestRunCodexTaskFn_InvalidBackend(t *testing.T) {
defer resetTestHooks()
selectBackendFn = func(name string) (Backend, error) {
_ = executor.SetSelectBackendFn(func(name string) (Backend, error) {
return nil, fmt.Errorf("invalid backend: %s", name)
}
})
res := runCodexTaskFn(TaskSpec{ID: "bad-task", Task: "noop", Backend: "unknown"}, 5)
if res.ExitCode == 0 {
@@ -3109,11 +3128,11 @@ func TestRunCodexTask_ExitError(t *testing.T) {
func TestRunCodexTask_StdinPipeError(t *testing.T) {
defer resetTestHooks()
commandContext = func(ctx context.Context, name string, args ...string) *exec.Cmd {
_ = executor.SetCommandContextFn(func(ctx context.Context, name string, args ...string) *exec.Cmd {
cmd := exec.CommandContext(ctx, "cat")
cmd.Stdin = os.Stdin
return cmd
}
})
buildCodexArgsFn = func(cfg *Config, targetArg string) []string { return []string{} }
res := runCodexTask(TaskSpec{Task: "data", UseStdin: true}, false, 1)
if res.ExitCode != 1 || !strings.Contains(res.Error, "stdin pipe") {
@@ -3123,11 +3142,11 @@ func TestRunCodexTask_StdinPipeError(t *testing.T) {
func TestRunCodexTask_StdoutPipeError(t *testing.T) {
defer resetTestHooks()
commandContext = func(ctx context.Context, name string, args ...string) *exec.Cmd {
_ = executor.SetCommandContextFn(func(ctx context.Context, name string, args ...string) *exec.Cmd {
cmd := exec.CommandContext(ctx, "echo", "noop")
cmd.Stdout = os.Stdout
return cmd
}
})
buildCodexArgsFn = func(cfg *Config, targetArg string) []string { return []string{} }
res := runCodexTask(TaskSpec{Task: "noop"}, false, 1)
if res.ExitCode != 1 || !strings.Contains(res.Error, "stdout pipe") {
@@ -3170,36 +3189,6 @@ func TestRunCodexTask_SignalHandling(t *testing.T) {
}
}
func TestForwardSignals_ContextCancel(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
forwardSignals(ctx, &realCmd{cmd: &exec.Cmd{}}, func(string) {})
cancel()
time.Sleep(10 * time.Millisecond)
}
func TestCancelReason(t *testing.T) {
const cmdName = "codex"
if got := cancelReason(cmdName, nil); got != "Context cancelled" {
t.Fatalf("cancelReason(nil) = %q, want %q", got, "Context cancelled")
}
ctxTimeout, cancelTimeout := context.WithTimeout(context.Background(), 1*time.Nanosecond)
defer cancelTimeout()
<-ctxTimeout.Done()
wantTimeout := fmt.Sprintf("%s execution timeout", cmdName)
if got := cancelReason(cmdName, ctxTimeout); got != wantTimeout {
t.Fatalf("cancelReason(deadline) = %q, want %q", got, wantTimeout)
}
ctxCancelled, cancel := context.WithCancel(context.Background())
cancel()
if got := cancelReason(cmdName, ctxCancelled); got != "Execution cancelled, terminating codex process" {
t.Fatalf("cancelReason(cancelled) = %q, want %q", got, "Execution cancelled, terminating codex process")
}
}
func TestRunCodexProcess(t *testing.T) {
defer resetTestHooks()
script := createFakeCodexScript(t, "proc-thread", "proc-msg")
@@ -3235,7 +3224,9 @@ func TestRunSilentMode(t *testing.T) {
w.Close()
os.Stderr = oldStderr
var buf bytes.Buffer
io.Copy(&buf, r)
if _, err := io.Copy(&buf, r); err != nil {
t.Fatalf("io.Copy() error = %v", err)
}
return buf.String()
}
@@ -3336,35 +3327,6 @@ func TestParallelTopologicalSortTasks(t *testing.T) {
}
}
func TestRunShouldSkipTask(t *testing.T) {
failed := map[string]TaskResult{"a": {TaskID: "a", ExitCode: 1}, "b": {TaskID: "b", ExitCode: 2}}
tests := []struct {
name string
task TaskSpec
skip bool
reasonContains []string
}{
{"no deps", TaskSpec{ID: "c"}, false, nil},
{"missing deps not failed", TaskSpec{ID: "d", Dependencies: []string{"x"}}, false, nil},
{"single failed dep", TaskSpec{ID: "e", Dependencies: []string{"a"}}, true, []string{"a"}},
{"multiple failed deps", TaskSpec{ID: "f", Dependencies: []string{"a", "b"}}, true, []string{"a", "b"}},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
skip, reason := shouldSkipTask(tt.task, failed)
if skip != tt.skip {
t.Fatalf("skip=%v, want %v", skip, tt.skip)
}
for _, expect := range tt.reasonContains {
if !strings.Contains(reason, expect) {
t.Fatalf("reason %q missing %q", reason, expect)
}
}
})
}
}
func TestRunTopologicalSort_CycleDetection(t *testing.T) {
tasks := []TaskSpec{{ID: "a", Dependencies: []string{"b"}}, {ID: "b", Dependencies: []string{"a"}}}
if _, err := topologicalSort(tasks); err == nil || !strings.Contains(err.Error(), "cycle detected") {
@@ -3701,7 +3663,7 @@ func TestParallelTriggersCleanup(t *testing.T) {
oldArgs := os.Args
defer func() { os.Args = oldArgs }()
os.Args = []string{"codex-wrapper", "--parallel"}
os.Args = []string{"codeagent-wrapper", "--parallel"}
stdinReader = strings.NewReader(`---TASK---
id: only
---CONTENT---
@@ -3736,10 +3698,8 @@ func TestVersionFlag(t *testing.T) {
}
})
want := "codeagent-wrapper version 5.6.4\n"
if output != want {
t.Fatalf("output = %q, want %q", output, want)
if !strings.HasPrefix(output, "codeagent-wrapper version ") {
t.Fatalf("output = %q, want prefix %q", output, "codeagent-wrapper version ")
}
}
@@ -3752,26 +3712,22 @@ func TestVersionShortFlag(t *testing.T) {
}
})
want := "codeagent-wrapper version 5.6.4\n"
if output != want {
t.Fatalf("output = %q, want %q", output, want)
if !strings.HasPrefix(output, "codeagent-wrapper version ") {
t.Fatalf("output = %q, want prefix %q", output, "codeagent-wrapper version ")
}
}
func TestVersionLegacyAlias(t *testing.T) {
defer resetTestHooks()
os.Args = []string{"codex-wrapper", "--version"}
os.Args = []string{"codeagent-wrapper", "--version"}
output := captureOutput(t, func() {
if code := run(); code != 0 {
t.Errorf("exit = %d, want 0", code)
}
})
want := "codex-wrapper version 5.6.4\n"
if output != want {
t.Fatalf("output = %q, want %q", output, want)
if !strings.HasPrefix(output, "codeagent-wrapper version ") {
t.Fatalf("output = %q, want prefix %q", output, "codeagent-wrapper version ")
}
}
@@ -3793,7 +3749,7 @@ func TestRun_HelpShort(t *testing.T) {
func TestRun_HelpDoesNotTriggerCleanup(t *testing.T) {
defer resetTestHooks()
os.Args = []string{"codex-wrapper", "--help"}
os.Args = []string{"codeagent-wrapper", "--help"}
cleanupLogsFn = func() (CleanupStats, error) {
t.Fatalf("cleanup should not run for --help")
return CleanupStats{}, nil
@@ -3806,7 +3762,7 @@ func TestRun_HelpDoesNotTriggerCleanup(t *testing.T) {
func TestVersionDoesNotTriggerCleanup(t *testing.T) {
defer resetTestHooks()
os.Args = []string{"codex-wrapper", "--version"}
os.Args = []string{"codeagent-wrapper", "--version"}
cleanupLogsFn = func() (CleanupStats, error) {
t.Fatalf("cleanup should not run for --version")
return CleanupStats{}, nil
@@ -3874,7 +3830,7 @@ func TestVersionCoverageFullRun(t *testing.T) {
_ = closeLogger()
_ = logger.RemoveLogFile()
loggerPtr.Store(nil)
setLogger(nil)
})
t.Run("parseArgsError", func(t *testing.T) {
@@ -4089,7 +4045,7 @@ func TestVersionMainWrapper(t *testing.T) {
exitCalled := -1
exitFn = func(code int) { exitCalled = code }
os.Args = []string{"codeagent-wrapper", "--version"}
main()
Main()
if exitCalled != 0 {
t.Fatalf("main exit = %d, want 0", exitCalled)
}
@@ -4102,8 +4058,8 @@ func TestBackendCleanupMode_Success(t *testing.T) {
Scanned: 5,
Deleted: 3,
Kept: 2,
DeletedFiles: []string{"codex-wrapper-111.log", "codex-wrapper-222.log", "codex-wrapper-333.log"},
KeptFiles: []string{"codex-wrapper-444.log", "codex-wrapper-555.log"},
DeletedFiles: []string{"codeagent-wrapper-111.log", "codeagent-wrapper-222.log", "codeagent-wrapper-333.log"},
KeptFiles: []string{"codeagent-wrapper-444.log", "codeagent-wrapper-555.log"},
}, nil
}
@@ -4114,7 +4070,7 @@ func TestBackendCleanupMode_Success(t *testing.T) {
if exitCode != 0 {
t.Fatalf("exit = %d, want 0", exitCode)
}
want := "Cleanup completed\nFiles scanned: 5\nFiles deleted: 3\n - codex-wrapper-111.log\n - codex-wrapper-222.log\n - codex-wrapper-333.log\nFiles kept: 2\n - codex-wrapper-444.log\n - codex-wrapper-555.log\n"
want := "Cleanup completed\nFiles scanned: 5\nFiles deleted: 3\n - codeagent-wrapper-111.log\n - codeagent-wrapper-222.log\n - codeagent-wrapper-333.log\nFiles kept: 2\n - codeagent-wrapper-444.log\n - codeagent-wrapper-555.log\n"
if output != want {
t.Fatalf("output = %q, want %q", output, want)
}
@@ -4128,7 +4084,7 @@ func TestBackendCleanupMode_SuccessWithErrorsLine(t *testing.T) {
Deleted: 1,
Kept: 0,
Errors: 1,
DeletedFiles: []string{"codex-wrapper-123.log"},
DeletedFiles: []string{"codeagent-wrapper-123.log"},
}, nil
}
@@ -4139,7 +4095,7 @@ func TestBackendCleanupMode_SuccessWithErrorsLine(t *testing.T) {
if exitCode != 0 {
t.Fatalf("exit = %d, want 0", exitCode)
}
want := "Cleanup completed\nFiles scanned: 2\nFiles deleted: 1\n - codex-wrapper-123.log\nFiles kept: 0\nDeletion errors: 1\n"
want := "Cleanup completed\nFiles scanned: 2\nFiles deleted: 1\n - codeagent-wrapper-123.log\nFiles kept: 0\nDeletion errors: 1\n"
if output != want {
t.Fatalf("output = %q, want %q", output, want)
}
@@ -4208,7 +4164,7 @@ func TestRun_CleanupFlag(t *testing.T) {
oldArgs := os.Args
defer func() { os.Args = oldArgs }()
os.Args = []string{"codex-wrapper", "--cleanup"}
os.Args = []string{"codeagent-wrapper", "--cleanup"}
calls := 0
cleanupLogsFn = func() (CleanupStats, error) {
@@ -4453,7 +4409,7 @@ func TestRun_LoggerRemovedOnSignal(t *testing.T) {
defer signal.Reset(syscall.SIGINT, syscall.SIGTERM)
// Set shorter delays for faster test
forceKillDelay.Store(1)
_ = executor.SetForceKillDelay(1)
tempDir := t.TempDir()
t.Setenv("TMPDIR", tempDir)
@@ -4565,7 +4521,7 @@ func TestRun_CleanupFailureDoesNotBlock(t *testing.T) {
codexCommand = createFakeCodexScript(t, "tid-cleanup", "ok")
stdinReader = strings.NewReader("")
isTerminalFn = func() bool { return true }
os.Args = []string{"codex-wrapper", "task"}
os.Args = []string{"codeagent-wrapper", "task"}
if exit := run(); exit != 0 {
t.Fatalf("exit = %d, want 0", exit)
@@ -4704,73 +4660,6 @@ func TestBackendDiscardInvalidJSONBuffer(t *testing.T) {
})
}
func TestRunForwardSignals(t *testing.T) {
defer resetTestHooks()
if runtime.GOOS == "windows" {
t.Skip("sleep command not available on Windows")
}
execCmd := exec.Command("sleep", "5")
if err := execCmd.Start(); err != nil {
t.Skipf("unable to start sleep command: %v", err)
}
defer func() {
_ = execCmd.Process.Kill()
execCmd.Wait()
}()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
forceKillDelay.Store(0)
defer forceKillDelay.Store(5)
ready := make(chan struct{})
var captured chan<- os.Signal
signalNotifyFn = func(ch chan<- os.Signal, sig ...os.Signal) {
captured = ch
close(ready)
}
signalStopFn = func(ch chan<- os.Signal) {}
defer func() {
signalNotifyFn = signal.Notify
signalStopFn = signal.Stop
}()
var mu sync.Mutex
var logs []string
cmd := &realCmd{cmd: execCmd}
forwardSignals(ctx, cmd, func(msg string) {
mu.Lock()
defer mu.Unlock()
logs = append(logs, msg)
})
select {
case <-ready:
case <-time.After(500 * time.Millisecond):
t.Fatalf("signalNotifyFn not invoked")
}
captured <- syscall.SIGINT
done := make(chan error, 1)
go func() { done <- cmd.Wait() }()
select {
case <-done:
case <-time.After(2 * time.Second):
t.Fatalf("process did not exit after forwarded signal")
}
mu.Lock()
defer mu.Unlock()
if len(logs) == 0 {
t.Fatalf("expected log entry for forwarded signal")
}
}
// Backend-focused coverage suite to ensure run() paths stay exercised under the focused pattern.
func TestBackendRunCoverage(t *testing.T) {
suite := []struct {
@@ -4810,7 +4699,7 @@ func TestParallelLogPathInSerialMode(t *testing.T) {
tempDir := t.TempDir()
t.Setenv("TMPDIR", tempDir)
os.Args = []string{"codex-wrapper", "do-stuff"}
os.Args = []string{"codeagent-wrapper", "do-stuff"}
stdinReader = strings.NewReader("")
isTerminalFn = func() bool { return true }
codexCommand = "echo"
@@ -4827,126 +4716,13 @@ func TestParallelLogPathInSerialMode(t *testing.T) {
if exitCode != 0 {
t.Fatalf("run() exit = %d, want 0", exitCode)
}
expectedLog := filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d.log", os.Getpid()))
expectedLog := filepath.Join(tempDir, fmt.Sprintf("codeagent-wrapper-%d.log", os.Getpid()))
wantLine := fmt.Sprintf("Log: %s", expectedLog)
if !strings.Contains(stderr, wantLine) {
t.Fatalf("stderr missing %q, got: %q", wantLine, stderr)
}
}
func TestRealProcessNilSafety(t *testing.T) {
var proc *realProcess
if pid := proc.Pid(); pid != 0 {
t.Fatalf("Pid() = %d, want 0", pid)
}
if err := proc.Kill(); err != nil {
t.Fatalf("Kill() error = %v", err)
}
if err := proc.Signal(syscall.SIGTERM); err != nil {
t.Fatalf("Signal() error = %v", err)
}
}
func TestRealProcessKill(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip("sleep command not available on Windows")
}
cmd := exec.Command("sleep", "5")
if err := cmd.Start(); err != nil {
t.Skipf("unable to start sleep command: %v", err)
}
waited := false
defer func() {
if waited {
return
}
if cmd.Process != nil {
_ = cmd.Process.Kill()
cmd.Wait()
}
}()
proc := &realProcess{proc: cmd.Process}
if proc.Pid() == 0 {
t.Fatalf("Pid() returned 0 for active process")
}
if err := proc.Kill(); err != nil {
t.Fatalf("Kill() error = %v", err)
}
waitErr := cmd.Wait()
waited = true
if waitErr == nil {
t.Fatalf("Kill() should lead to non-nil wait error")
}
}
func TestRealProcessSignal(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip("sleep command not available on Windows")
}
cmd := exec.Command("sleep", "5")
if err := cmd.Start(); err != nil {
t.Skipf("unable to start sleep command: %v", err)
}
waited := false
defer func() {
if waited {
return
}
if cmd.Process != nil {
_ = cmd.Process.Kill()
cmd.Wait()
}
}()
proc := &realProcess{proc: cmd.Process}
if err := proc.Signal(syscall.SIGTERM); err != nil {
t.Fatalf("Signal() error = %v", err)
}
waitErr := cmd.Wait()
waited = true
if waitErr == nil {
t.Fatalf("Signal() should lead to non-nil wait error")
}
}
func TestRealCmdProcess(t *testing.T) {
rc := &realCmd{}
if rc.Process() != nil {
t.Fatalf("Process() should return nil when realCmd has no command")
}
rc = &realCmd{cmd: &exec.Cmd{}}
if rc.Process() != nil {
t.Fatalf("Process() should return nil when exec.Cmd has no process")
}
if runtime.GOOS == "windows" {
return
}
cmd := exec.Command("sleep", "5")
if err := cmd.Start(); err != nil {
t.Skipf("unable to start sleep command: %v", err)
}
defer func() {
if cmd.Process != nil {
_ = cmd.Process.Kill()
cmd.Wait()
}
}()
rc = &realCmd{cmd: cmd}
handle := rc.Process()
if handle == nil {
t.Fatalf("expected non-nil process handle")
}
if pid := handle.Pid(); pid == 0 {
t.Fatalf("process handle returned pid=0")
}
}
func TestRun_CLI_Success(t *testing.T) {
defer resetTestHooks()
os.Args = []string{"codeagent-wrapper", "do-things"}
@@ -4988,7 +4764,7 @@ func TestResolveMaxParallelWorkers(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
t.Setenv("CODEAGENT_MAX_PARALLEL_WORKERS", tt.envValue)
got := resolveMaxParallelWorkers()
got := config.ResolveMaxParallelWorkers()
if got != tt.want {
t.Errorf("resolveMaxParallelWorkers() = %d, want %d", got, tt.want)
}

View File

@@ -0,0 +1,9 @@
package wrapper
import (
executor "codeagent-wrapper/internal/executor"
)
func parseParallelConfig(data []byte) (*ParallelConfig, error) {
return executor.ParseParallelConfig(data)
}
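A minimal sketch of the delegation, assuming the ---TASK---/---CONTENT--- payload used by the parallel tests elsewhere in this diff is the format executor.ParseParallelConfig accepts (illustrative only):
// Sketch under the assumption stated above; not part of the diff.
data := []byte("---TASK---\nid: beta\n---CONTENT---\ntask-beta")
cfg, err := parseParallelConfig(data) // same as executor.ParseParallelConfig(data)
if err != nil {
    // malformed payload
}
_ = cfg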

View File

@@ -0,0 +1,34 @@
package wrapper
import (
"bufio"
"io"
parser "codeagent-wrapper/internal/parser"
"github.com/goccy/go-json"
)
func parseJSONStream(r io.Reader) (message, threadID string) {
return parseJSONStreamWithLog(r, logWarn, logInfo)
}
func parseJSONStreamWithWarn(r io.Reader, warnFn func(string)) (message, threadID string) {
return parseJSONStreamWithLog(r, warnFn, logInfo)
}
func parseJSONStreamWithLog(r io.Reader, warnFn func(string), infoFn func(string)) (message, threadID string) {
return parseJSONStreamInternal(r, warnFn, infoFn, nil, nil)
}
func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(string), onMessage func(), onComplete func()) (message, threadID string) {
return parser.ParseJSONStreamInternal(r, warnFn, infoFn, onMessage, onComplete)
}
func hasKey(m map[string]json.RawMessage, key string) bool { return parser.HasKey(m, key) }
func discardInvalidJSON(decoder *json.Decoder, reader *bufio.Reader) (*bufio.Reader, error) {
return parser.DiscardInvalidJSON(decoder, reader)
}
func normalizeText(text interface{}) string { return parser.NormalizeText(text) }
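As a sketch (assuming the item.completed/agent_message events emitted by the fake runners above are among the shapes the internal parser recognizes), code inside the wrapper package could still call the old entry point:
// Sketch only; requires the "strings" import and runs inside package wrapper.
r := strings.NewReader(`{"type":"item.completed","item":{"type":"agent_message","text":"ok"}}`)
msg, threadID := parseJSONStream(r) // forwards through parseJSONStreamWithLog to internal/parser
_, _ = msg, threadID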

View File

@@ -0,0 +1,8 @@
package wrapper
import executor "codeagent-wrapper/internal/executor"
// Type aliases to keep existing names in the wrapper package.
type ParallelConfig = executor.ParallelConfig
type TaskSpec = executor.TaskSpec
type TaskResult = executor.TaskResult

View File

@@ -0,0 +1,30 @@
package wrapper
import (
"os"
"testing"
)
func TestDefaultIsTerminalCoverage(t *testing.T) {
oldStdin := os.Stdin
t.Cleanup(func() { os.Stdin = oldStdin })
f, err := os.CreateTemp(t.TempDir(), "stdin-*")
if err != nil {
t.Fatalf("os.CreateTemp() error = %v", err)
}
defer os.Remove(f.Name())
os.Stdin = f
if got := defaultIsTerminal(); got {
t.Fatalf("defaultIsTerminal() = %v, want false for regular file", got)
}
if err := f.Close(); err != nil {
t.Fatalf("Close() error = %v", err)
}
os.Stdin = f
if got := defaultIsTerminal(); !got {
t.Fatalf("defaultIsTerminal() = %v, want true when Stat fails", got)
}
}

View File

@@ -1,4 +1,4 @@
package main
package wrapper
import (
"bytes"
@@ -7,6 +7,8 @@ import (
"os"
"strconv"
"strings"
utils "codeagent-wrapper/internal/utils"
)
func resolveTimeout() int {
@@ -52,7 +54,7 @@ func shouldUseStdin(taskText string, piped bool) bool {
if len(taskText) > 800 {
return true
}
return strings.IndexAny(taskText, stdinSpecialChars) >= 0
return strings.ContainsAny(taskText, stdinSpecialChars)
}
func defaultIsTerminal() bool {
@@ -196,69 +198,21 @@ func (b *tailBuffer) String() string {
}
func truncate(s string, maxLen int) string {
if len(s) <= maxLen {
return s
}
if maxLen < 0 {
return ""
}
return s[:maxLen] + "..."
return utils.Truncate(s, maxLen)
}
// safeTruncate safely truncates string to maxLen, avoiding panic and UTF-8 corruption.
func safeTruncate(s string, maxLen int) string {
if maxLen <= 0 || s == "" {
return ""
}
runes := []rune(s)
if len(runes) <= maxLen {
return s
}
if maxLen < 4 {
return string(runes[:1])
}
cutoff := maxLen - 3
if cutoff <= 0 {
return string(runes[:1])
}
if len(runes) <= cutoff {
return s
}
return string(runes[:cutoff]) + "..."
return utils.SafeTruncate(s, maxLen)
}
// sanitizeOutput removes ANSI escape sequences and control characters.
func sanitizeOutput(s string) string {
var result strings.Builder
inEscape := false
for i := 0; i < len(s); i++ {
if s[i] == '\x1b' && i+1 < len(s) && s[i+1] == '[' {
inEscape = true
i++ // skip '['
continue
}
if inEscape {
if (s[i] >= 'A' && s[i] <= 'Z') || (s[i] >= 'a' && s[i] <= 'z') {
inEscape = false
}
continue
}
// Keep printable chars and common whitespace.
if s[i] >= 32 || s[i] == '\n' || s[i] == '\t' {
result.WriteByte(s[i])
}
}
return result.String()
return utils.SanitizeOutput(s)
}
func min(a, b int) int {
if a < b {
return a
}
return b
return utils.Min(a, b)
}
func hello() string {
@@ -381,7 +335,7 @@ func extractFilesChangedFromLines(lines []string) []string {
for _, prefix := range []string{"Modified:", "Created:", "Updated:", "Edited:", "Wrote:", "Changed:"} {
if strings.HasPrefix(line, prefix) {
file := strings.TrimSpace(strings.TrimPrefix(line, prefix))
file = strings.Trim(file, "`,\"'()[],:")
file = strings.Trim(file, "`\"'()[],:")
file = strings.TrimPrefix(file, "@")
if file != "" && !seen[file] {
files = append(files, file)
@@ -398,7 +352,7 @@ func extractFilesChangedFromLines(lines []string) []string {
// Pattern 2: Tokens that look like file paths (allow root files, strip @ prefix).
parts := strings.Fields(line)
for _, part := range parts {
part = strings.Trim(part, "`,\"'()[],:")
part = strings.Trim(part, "`\"'()[],:")
part = strings.TrimPrefix(part, "@")
for _, ext := range exts {
if strings.HasSuffix(part, ext) && !seen[part] {
@@ -567,116 +521,3 @@ func extractKeyOutputFromLines(lines []string, maxLen int) string {
clean := strings.TrimSpace(strings.Join(lines, "\n"))
return safeTruncate(clean, maxLen)
}
// extractCoverageGap extracts what's missing from coverage reports
// Looks for uncovered lines, branches, or functions
func extractCoverageGap(message string) string {
if message == "" {
return ""
}
lower := strings.ToLower(message)
lines := strings.Split(message, "\n")
// Look for uncovered/missing patterns
for _, line := range lines {
lineLower := strings.ToLower(line)
line = strings.TrimSpace(line)
// Common patterns for uncovered code
if strings.Contains(lineLower, "uncovered") ||
strings.Contains(lineLower, "not covered") ||
strings.Contains(lineLower, "missing coverage") ||
strings.Contains(lineLower, "lines not covered") {
if len(line) > 100 {
return line[:97] + "..."
}
return line
}
// Look for specific file:line patterns in coverage reports
if strings.Contains(lineLower, "branch") && strings.Contains(lineLower, "not taken") {
if len(line) > 100 {
return line[:97] + "..."
}
return line
}
}
// Look for function names that aren't covered
if strings.Contains(lower, "function") && strings.Contains(lower, "0%") {
for _, line := range lines {
if strings.Contains(strings.ToLower(line), "0%") && strings.Contains(line, "function") {
line = strings.TrimSpace(line)
if len(line) > 100 {
return line[:97] + "..."
}
return line
}
}
}
return ""
}
// extractErrorDetail extracts meaningful error context from task output
// Returns the most relevant error information up to maxLen characters
func extractErrorDetail(message string, maxLen int) string {
if message == "" || maxLen <= 0 {
return ""
}
lines := strings.Split(message, "\n")
var errorLines []string
// Look for error-related lines
for _, line := range lines {
line = strings.TrimSpace(line)
if line == "" {
continue
}
lower := strings.ToLower(line)
// Skip noise lines
if strings.HasPrefix(line, "at ") && strings.Contains(line, "(") {
// Stack trace line - only keep first one
if len(errorLines) > 0 && strings.HasPrefix(strings.ToLower(errorLines[len(errorLines)-1]), "at ") {
continue
}
}
// Prioritize error/fail lines
if strings.Contains(lower, "error") ||
strings.Contains(lower, "fail") ||
strings.Contains(lower, "exception") ||
strings.Contains(lower, "assert") ||
strings.Contains(lower, "expected") ||
strings.Contains(lower, "timeout") ||
strings.Contains(lower, "not found") ||
strings.Contains(lower, "cannot") ||
strings.Contains(lower, "undefined") ||
strings.HasPrefix(line, "FAIL") ||
strings.HasPrefix(line, "●") {
errorLines = append(errorLines, line)
}
}
if len(errorLines) == 0 {
// No specific error lines found, take last few lines
start := len(lines) - 5
if start < 0 {
start = 0
}
for _, line := range lines[start:] {
line = strings.TrimSpace(line)
if line != "" {
errorLines = append(errorLines, line)
}
}
}
// Join and truncate
result := strings.Join(errorLines, " | ")
return safeTruncate(result, maxLen)
}

View File

@@ -1,4 +1,4 @@
package main
package wrapper
import (
"fmt"

View File

@@ -0,0 +1,9 @@
package wrapper
import ilogger "codeagent-wrapper/internal/logger"
const wrapperName = ilogger.WrapperName
func currentWrapperName() string { return ilogger.CurrentWrapperName() }
func primaryLogPrefix() string { return ilogger.PrimaryLogPrefix() }

View File

@@ -0,0 +1,33 @@
package backend
import config "codeagent-wrapper/internal/config"
// Backend defines the contract for invoking different AI CLI backends.
// Each backend is responsible for supplying the executable command and
// building the argument list based on the wrapper config.
type Backend interface {
Name() string
BuildArgs(cfg *config.Config, targetArg string) []string
Command() string
Env(baseURL, apiKey string) map[string]string
}
var (
logWarnFn = func(string) {}
logErrorFn = func(string) {}
)
// SetLogFuncs configures optional logging hooks used by some backends.
// Callers can safely pass nil to disable the hook.
func SetLogFuncs(warnFn, errorFn func(string)) {
if warnFn != nil {
logWarnFn = warnFn
} else {
logWarnFn = func(string) {}
}
if errorFn != nil {
logErrorFn = errorFn
} else {
logErrorFn = func(string) {}
}
}
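
Note (editor's sketch, not part of the committed diff): a minimal illustration of the contract above. echoBackend is hypothetical and exists only to show the four methods an implementation must provide; registering it is out of scope here.

package backend

import config "codeagent-wrapper/internal/config"

// echoBackend is a hypothetical Backend used purely to illustrate the interface.
type echoBackend struct{}

// Compile-time check that the contract is satisfied.
var _ Backend = echoBackend{}

func (echoBackend) Name() string    { return "echo" }
func (echoBackend) Command() string { return "echo" }

// Env returns nil because this toy backend needs no credentials.
func (echoBackend) Env(baseURL, apiKey string) map[string]string { return nil }

// BuildArgs ignores cfg; real backends inspect mode, model and session first.
func (echoBackend) BuildArgs(cfg *config.Config, targetArg string) []string {
	return []string{targetArg}
}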

View File

@@ -1,4 +1,4 @@
package main
package backend
import (
"bytes"
@@ -6,6 +6,8 @@ import (
"path/filepath"
"reflect"
"testing"
config "codeagent-wrapper/internal/config"
)
func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
@@ -13,7 +15,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
t.Run("new mode omits skip-permissions when env disabled", func(t *testing.T) {
t.Setenv("CODEAGENT_SKIP_PERMISSIONS", "false")
cfg := &Config{Mode: "new", WorkDir: "/repo"}
cfg := &config.Config{Mode: "new", WorkDir: "/repo"}
got := backend.BuildArgs(cfg, "todo")
want := []string{"-p", "--setting-sources", "", "--output-format", "stream-json", "--verbose", "todo"}
if !reflect.DeepEqual(got, want) {
@@ -22,7 +24,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
})
t.Run("new mode includes skip-permissions by default", func(t *testing.T) {
cfg := &Config{Mode: "new", SkipPermissions: false}
cfg := &config.Config{Mode: "new", SkipPermissions: false}
got := backend.BuildArgs(cfg, "-")
want := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "--output-format", "stream-json", "--verbose", "-"}
if !reflect.DeepEqual(got, want) {
@@ -32,7 +34,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
t.Run("resume mode includes session id", func(t *testing.T) {
t.Setenv("CODEAGENT_SKIP_PERMISSIONS", "false")
cfg := &Config{Mode: "resume", SessionID: "sid-123", WorkDir: "/ignored"}
cfg := &config.Config{Mode: "resume", SessionID: "sid-123", WorkDir: "/ignored"}
got := backend.BuildArgs(cfg, "resume-task")
want := []string{"-p", "--setting-sources", "", "-r", "sid-123", "--output-format", "stream-json", "--verbose", "resume-task"}
if !reflect.DeepEqual(got, want) {
@@ -42,7 +44,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
t.Run("resume mode without session still returns base flags", func(t *testing.T) {
t.Setenv("CODEAGENT_SKIP_PERMISSIONS", "false")
cfg := &Config{Mode: "resume", WorkDir: "/ignored"}
cfg := &config.Config{Mode: "resume", WorkDir: "/ignored"}
got := backend.BuildArgs(cfg, "follow-up")
want := []string{"-p", "--setting-sources", "", "--output-format", "stream-json", "--verbose", "follow-up"}
if !reflect.DeepEqual(got, want) {
@@ -51,7 +53,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
})
t.Run("resume mode can opt-in skip permissions", func(t *testing.T) {
cfg := &Config{Mode: "resume", SessionID: "sid-123", SkipPermissions: true}
cfg := &config.Config{Mode: "resume", SessionID: "sid-123", SkipPermissions: true}
got := backend.BuildArgs(cfg, "resume-task")
want := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "-r", "sid-123", "--output-format", "stream-json", "--verbose", "resume-task"}
if !reflect.DeepEqual(got, want) {
@@ -70,7 +72,7 @@ func TestBackendBuildArgs_Model(t *testing.T) {
t.Run("claude includes --model when set", func(t *testing.T) {
t.Setenv("CODEAGENT_SKIP_PERMISSIONS", "false")
backend := ClaudeBackend{}
cfg := &Config{Mode: "new", Model: "opus"}
cfg := &config.Config{Mode: "new", Model: "opus"}
got := backend.BuildArgs(cfg, "todo")
want := []string{"-p", "--setting-sources", "", "--model", "opus", "--output-format", "stream-json", "--verbose", "todo"}
if !reflect.DeepEqual(got, want) {
@@ -80,7 +82,7 @@ func TestBackendBuildArgs_Model(t *testing.T) {
t.Run("gemini includes -m when set", func(t *testing.T) {
backend := GeminiBackend{}
cfg := &Config{Mode: "new", Model: "gemini-3-pro-preview"}
cfg := &config.Config{Mode: "new", Model: "gemini-3-pro-preview"}
got := backend.BuildArgs(cfg, "task")
want := []string{"-o", "stream-json", "-y", "-m", "gemini-3-pro-preview", "task"}
if !reflect.DeepEqual(got, want) {
@@ -93,7 +95,7 @@ func TestBackendBuildArgs_Model(t *testing.T) {
t.Setenv(key, "false")
backend := CodexBackend{}
cfg := &Config{Mode: "new", WorkDir: "/tmp", Model: "o3"}
cfg := &config.Config{Mode: "new", WorkDir: "/tmp", Model: "o3"}
got := backend.BuildArgs(cfg, "task")
want := []string{"e", "--model", "o3", "--skip-git-repo-check", "-C", "/tmp", "--json", "task"}
if !reflect.DeepEqual(got, want) {
@@ -105,7 +107,7 @@ func TestBackendBuildArgs_Model(t *testing.T) {
func TestClaudeBuildArgs_GeminiAndCodexModes(t *testing.T) {
t.Run("gemini new mode defaults workdir", func(t *testing.T) {
backend := GeminiBackend{}
cfg := &Config{Mode: "new", WorkDir: "/workspace"}
cfg := &config.Config{Mode: "new", WorkDir: "/workspace"}
got := backend.BuildArgs(cfg, "task")
want := []string{"-o", "stream-json", "-y", "task"}
if !reflect.DeepEqual(got, want) {
@@ -115,7 +117,7 @@ func TestClaudeBuildArgs_GeminiAndCodexModes(t *testing.T) {
t.Run("gemini resume mode uses session id", func(t *testing.T) {
backend := GeminiBackend{}
cfg := &Config{Mode: "resume", SessionID: "sid-999"}
cfg := &config.Config{Mode: "resume", SessionID: "sid-999"}
got := backend.BuildArgs(cfg, "resume")
want := []string{"-o", "stream-json", "-y", "-r", "sid-999", "resume"}
if !reflect.DeepEqual(got, want) {
@@ -125,7 +127,7 @@ func TestClaudeBuildArgs_GeminiAndCodexModes(t *testing.T) {
t.Run("gemini resume mode without session omits identifier", func(t *testing.T) {
backend := GeminiBackend{}
cfg := &Config{Mode: "resume"}
cfg := &config.Config{Mode: "resume"}
got := backend.BuildArgs(cfg, "resume")
want := []string{"-o", "stream-json", "-y", "resume"}
if !reflect.DeepEqual(got, want) {
@@ -142,7 +144,7 @@ func TestClaudeBuildArgs_GeminiAndCodexModes(t *testing.T) {
t.Run("gemini stdin mode uses -p flag", func(t *testing.T) {
backend := GeminiBackend{}
cfg := &Config{Mode: "new"}
cfg := &config.Config{Mode: "new"}
got := backend.BuildArgs(cfg, "-")
want := []string{"-o", "stream-json", "-y", "-p", "-"}
if !reflect.DeepEqual(got, want) {
@@ -155,7 +157,7 @@ func TestClaudeBuildArgs_GeminiAndCodexModes(t *testing.T) {
t.Setenv(key, "false")
backend := CodexBackend{}
cfg := &Config{Mode: "new", WorkDir: "/tmp"}
cfg := &config.Config{Mode: "new", WorkDir: "/tmp"}
got := backend.BuildArgs(cfg, "task")
want := []string{"e", "--skip-git-repo-check", "-C", "/tmp", "--json", "task"}
if !reflect.DeepEqual(got, want) {
@@ -168,7 +170,7 @@ func TestClaudeBuildArgs_GeminiAndCodexModes(t *testing.T) {
t.Setenv(key, "true")
backend := CodexBackend{}
cfg := &Config{Mode: "new", WorkDir: "/tmp"}
cfg := &config.Config{Mode: "new", WorkDir: "/tmp"}
got := backend.BuildArgs(cfg, "task")
want := []string{"e", "--dangerously-bypass-approvals-and-sandbox", "--skip-git-repo-check", "-C", "/tmp", "--json", "task"}
if !reflect.DeepEqual(got, want) {
@@ -204,7 +206,7 @@ func TestLoadMinimalEnvSettings(t *testing.T) {
t.Setenv("USERPROFILE", home)
t.Run("missing file returns empty", func(t *testing.T) {
if got := loadMinimalEnvSettings(); len(got) != 0 {
if got := LoadMinimalEnvSettings(); len(got) != 0 {
t.Fatalf("got %v, want empty", got)
}
})
@@ -220,7 +222,7 @@ func TestLoadMinimalEnvSettings(t *testing.T) {
t.Fatalf("WriteFile: %v", err)
}
got := loadMinimalEnvSettings()
got := LoadMinimalEnvSettings()
if got["ANTHROPIC_API_KEY"] != "secret" || got["FOO"] != "bar" {
t.Fatalf("got %v, want keys present", got)
}
@@ -234,7 +236,7 @@ func TestLoadMinimalEnvSettings(t *testing.T) {
t.Fatalf("WriteFile: %v", err)
}
got := loadMinimalEnvSettings()
got := LoadMinimalEnvSettings()
if got["GOOD"] != "ok" {
t.Fatalf("got %v, want GOOD=ok", got)
}
@@ -249,12 +251,72 @@ func TestLoadMinimalEnvSettings(t *testing.T) {
t.Run("oversized file returns empty", func(t *testing.T) {
dir := filepath.Join(home, ".claude")
path := filepath.Join(dir, "settings.json")
data := bytes.Repeat([]byte("a"), maxClaudeSettingsBytes+1)
data := bytes.Repeat([]byte("a"), MaxClaudeSettingsBytes+1)
if err := os.WriteFile(path, data, 0o600); err != nil {
t.Fatalf("WriteFile: %v", err)
}
if got := loadMinimalEnvSettings(); len(got) != 0 {
if got := LoadMinimalEnvSettings(); len(got) != 0 {
t.Fatalf("got %v, want empty", got)
}
})
}
func TestOpencodeBackend_BuildArgs(t *testing.T) {
backend := OpencodeBackend{}
t.Run("basic", func(t *testing.T) {
cfg := &config.Config{Mode: "new"}
got := backend.BuildArgs(cfg, "hello")
want := []string{"run", "--format", "json", "hello"}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
t.Run("with model", func(t *testing.T) {
cfg := &config.Config{Mode: "new", Model: "opencode/grok-code"}
got := backend.BuildArgs(cfg, "task")
want := []string{"run", "-m", "opencode/grok-code", "--format", "json", "task"}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
t.Run("resume mode", func(t *testing.T) {
cfg := &config.Config{Mode: "resume", SessionID: "ses_123", Model: "opencode/grok-code"}
got := backend.BuildArgs(cfg, "follow-up")
want := []string{"run", "-m", "opencode/grok-code", "-s", "ses_123", "--format", "json", "follow-up"}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
t.Run("resume without session", func(t *testing.T) {
cfg := &config.Config{Mode: "resume"}
got := backend.BuildArgs(cfg, "task")
want := []string{"run", "--format", "json", "task"}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
t.Run("stdin mode omits dash", func(t *testing.T) {
cfg := &config.Config{Mode: "new"}
got := backend.BuildArgs(cfg, "-")
want := []string{"run", "--format", "json"}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
}
func TestOpencodeBackend_Interface(t *testing.T) {
backend := OpencodeBackend{}
if backend.Name() != "opencode" {
t.Errorf("Name() = %q, want %q", backend.Name(), "opencode")
}
if backend.Command() != "opencode" {
t.Errorf("Command() = %q, want %q", backend.Command(), "opencode")
}
}

View File

@@ -0,0 +1,139 @@
package backend
import (
"os"
"path/filepath"
"strings"
config "codeagent-wrapper/internal/config"
"github.com/goccy/go-json"
)
type ClaudeBackend struct{}
func (ClaudeBackend) Name() string { return "claude" }
func (ClaudeBackend) Command() string { return "claude" }
func (ClaudeBackend) Env(baseURL, apiKey string) map[string]string {
baseURL = strings.TrimSpace(baseURL)
apiKey = strings.TrimSpace(apiKey)
if baseURL == "" && apiKey == "" {
return nil
}
env := make(map[string]string, 2)
if baseURL != "" {
env["ANTHROPIC_BASE_URL"] = baseURL
}
if apiKey != "" {
env["ANTHROPIC_AUTH_TOKEN"] = apiKey
}
return env
}
func (ClaudeBackend) BuildArgs(cfg *config.Config, targetArg string) []string {
return buildClaudeArgs(cfg, targetArg)
}
const MaxClaudeSettingsBytes = 1 << 20 // 1MB
type MinimalClaudeSettings struct {
Env map[string]string
Model string
}
// LoadMinimalClaudeSettings extracts only a safe, minimal subset from ~/.claude/settings.json:
// - env: only string-typed values are accepted
// - model: only a string-typed value is accepted
// A missing file, a parse failure, or an oversized file all yield an empty result.
func LoadMinimalClaudeSettings() MinimalClaudeSettings {
home, err := os.UserHomeDir()
if err != nil || home == "" {
return MinimalClaudeSettings{}
}
claudeDir := filepath.Clean(filepath.Join(home, ".claude"))
settingPath := filepath.Clean(filepath.Join(claudeDir, "settings.json"))
rel, err := filepath.Rel(claudeDir, settingPath)
if err != nil || rel == ".." || strings.HasPrefix(rel, ".."+string(os.PathSeparator)) {
return MinimalClaudeSettings{}
}
info, err := os.Stat(settingPath)
if err != nil || info.Size() > MaxClaudeSettingsBytes {
return MinimalClaudeSettings{}
}
data, err := os.ReadFile(settingPath) // #nosec G304 -- path is fixed under user home and validated to stay within claudeDir
if err != nil {
return MinimalClaudeSettings{}
}
var cfg struct {
Env map[string]any `json:"env"`
Model any `json:"model"`
}
if err := json.Unmarshal(data, &cfg); err != nil {
return MinimalClaudeSettings{}
}
out := MinimalClaudeSettings{}
if model, ok := cfg.Model.(string); ok {
out.Model = strings.TrimSpace(model)
}
if len(cfg.Env) == 0 {
return out
}
env := make(map[string]string, len(cfg.Env))
for k, v := range cfg.Env {
s, ok := v.(string)
if !ok {
continue
}
env[k] = s
}
if len(env) == 0 {
return out
}
out.Env = env
return out
}
func LoadMinimalEnvSettings() map[string]string {
settings := LoadMinimalClaudeSettings()
if len(settings.Env) == 0 {
return nil
}
return settings.Env
}
func buildClaudeArgs(cfg *config.Config, targetArg string) []string {
if cfg == nil {
return nil
}
args := []string{"-p"}
// Default to skip permissions unless CODEAGENT_SKIP_PERMISSIONS=false
if cfg.SkipPermissions || cfg.Yolo || config.EnvFlagDefaultTrue("CODEAGENT_SKIP_PERMISSIONS") {
args = append(args, "--dangerously-skip-permissions")
}
// Prevent infinite recursion: disable all setting sources (user, project, local).
// This keeps the execution environment clean of CLAUDE.md files or skills that would re-trigger codeagent.
args = append(args, "--setting-sources", "")
if model := strings.TrimSpace(cfg.Model); model != "" {
args = append(args, "--model", model)
}
if cfg.Mode == "resume" {
if cfg.SessionID != "" {
// Claude CLI uses -r <session_id> for resume.
args = append(args, "-r", cfg.SessionID)
}
}
args = append(args, "--output-format", "stream-json", "--verbose", targetArg)
return args
}
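
Note (editor's sketch, not part of the committed diff): a worked example of the argument construction above via the public interface; the expected output mirrors the resume-mode test earlier in this diff.

package main

import (
	"fmt"

	backend "codeagent-wrapper/internal/backend"
	config "codeagent-wrapper/internal/config"
)

func main() {
	b := backend.ClaudeBackend{}
	cfg := &config.Config{Mode: "resume", SessionID: "sid-123", SkipPermissions: true}
	fmt.Println(b.BuildArgs(cfg, "resume-task"))
	// Per the tests above:
	// [-p --dangerously-skip-permissions --setting-sources  -r sid-123 --output-format stream-json --verbose resume-task]
}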

View File

@@ -0,0 +1,79 @@
package backend
import (
"strings"
config "codeagent-wrapper/internal/config"
)
type CodexBackend struct{}
func (CodexBackend) Name() string { return "codex" }
func (CodexBackend) Command() string { return "codex" }
func (CodexBackend) Env(baseURL, apiKey string) map[string]string {
baseURL = strings.TrimSpace(baseURL)
apiKey = strings.TrimSpace(apiKey)
if baseURL == "" && apiKey == "" {
return nil
}
env := make(map[string]string, 2)
if baseURL != "" {
env["OPENAI_BASE_URL"] = baseURL
}
if apiKey != "" {
env["OPENAI_API_KEY"] = apiKey
}
return env
}
func (CodexBackend) BuildArgs(cfg *config.Config, targetArg string) []string {
return BuildCodexArgs(cfg, targetArg)
}
func BuildCodexArgs(cfg *config.Config, targetArg string) []string {
if cfg == nil {
panic("buildCodexArgs: nil config")
}
var resumeSessionID string
isResume := cfg.Mode == "resume"
if isResume {
resumeSessionID = strings.TrimSpace(cfg.SessionID)
if resumeSessionID == "" {
logErrorFn("invalid config: resume mode requires non-empty session_id")
isResume = false
}
}
args := []string{"e"}
// Default to bypass sandbox unless CODEX_BYPASS_SANDBOX=false
if cfg.Yolo || config.EnvFlagDefaultTrue("CODEX_BYPASS_SANDBOX") {
logWarnFn("YOLO mode or CODEX_BYPASS_SANDBOX enabled: running without approval/sandbox protection")
args = append(args, "--dangerously-bypass-approvals-and-sandbox")
}
if model := strings.TrimSpace(cfg.Model); model != "" {
args = append(args, "--model", model)
}
if reasoningEffort := strings.TrimSpace(cfg.ReasoningEffort); reasoningEffort != "" {
args = append(args, "-c", "model_reasoning_effort="+reasoningEffort)
}
args = append(args, "--skip-git-repo-check")
if isResume {
return append(args,
"--json",
"resume",
resumeSessionID,
targetArg,
)
}
return append(args,
"-C", cfg.WorkDir,
"--json",
targetArg,
)
}
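
Note (editor's sketch, not part of the committed diff): BuildCodexArgs with the sandbox bypass explicitly disabled, matching the codex test case earlier in this diff.

package main

import (
	"fmt"
	"os"

	backend "codeagent-wrapper/internal/backend"
	config "codeagent-wrapper/internal/config"
)

func main() {
	// The bypass is on by default; turn it off the same way the tests do.
	os.Setenv("CODEX_BYPASS_SANDBOX", "false")

	cfg := &config.Config{Mode: "new", WorkDir: "/tmp", Model: "o3"}
	fmt.Println(backend.BuildCodexArgs(cfg, "task"))
	// [e --model o3 --skip-git-repo-check -C /tmp --json task]
}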

View File

@@ -0,0 +1,110 @@
package backend
import (
"os"
"path/filepath"
"strings"
config "codeagent-wrapper/internal/config"
)
type GeminiBackend struct{}
func (GeminiBackend) Name() string { return "gemini" }
func (GeminiBackend) Command() string { return "gemini" }
func (GeminiBackend) Env(baseURL, apiKey string) map[string]string {
baseURL = strings.TrimSpace(baseURL)
apiKey = strings.TrimSpace(apiKey)
if baseURL == "" && apiKey == "" {
return nil
}
env := make(map[string]string, 2)
if baseURL != "" {
env["GOOGLE_GEMINI_BASE_URL"] = baseURL
}
if apiKey != "" {
env["GEMINI_API_KEY"] = apiKey
}
return env
}
func (GeminiBackend) BuildArgs(cfg *config.Config, targetArg string) []string {
return buildGeminiArgs(cfg, targetArg)
}
// LoadGeminiEnv loads environment variables from ~/.gemini/.env.
// Supports GEMINI_API_KEY, GEMINI_MODEL, and GOOGLE_GEMINI_BASE_URL.
// When GEMINI_API_KEY is present and no mechanism is configured, it also sets
// GEMINI_API_KEY_AUTH_MECHANISM=bearer for third-party API compatibility.
func LoadGeminiEnv() map[string]string {
home, err := os.UserHomeDir()
if err != nil || home == "" {
return nil
}
envDir := filepath.Clean(filepath.Join(home, ".gemini"))
envPath := filepath.Clean(filepath.Join(envDir, ".env"))
rel, err := filepath.Rel(envDir, envPath)
if err != nil || rel == ".." || strings.HasPrefix(rel, ".."+string(os.PathSeparator)) {
return nil
}
data, err := os.ReadFile(envPath) // #nosec G304 -- path is fixed under user home and validated to stay within envDir
if err != nil {
return nil
}
env := make(map[string]string)
for _, line := range strings.Split(string(data), "\n") {
line = strings.TrimSpace(line)
if line == "" || strings.HasPrefix(line, "#") {
continue
}
idx := strings.IndexByte(line, '=')
if idx <= 0 {
continue
}
key := strings.TrimSpace(line[:idx])
value := strings.TrimSpace(line[idx+1:])
if key != "" && value != "" {
env[key] = value
}
}
// Set bearer auth mechanism for third-party API compatibility
if _, ok := env["GEMINI_API_KEY"]; ok {
if _, hasAuth := env["GEMINI_API_KEY_AUTH_MECHANISM"]; !hasAuth {
env["GEMINI_API_KEY_AUTH_MECHANISM"] = "bearer"
}
}
if len(env) == 0 {
return nil
}
return env
}
func buildGeminiArgs(cfg *config.Config, targetArg string) []string {
if cfg == nil {
return nil
}
args := []string{"-o", "stream-json", "-y"}
if model := strings.TrimSpace(cfg.Model); model != "" {
args = append(args, "-m", model)
}
if cfg.Mode == "resume" {
if cfg.SessionID != "" {
args = append(args, "-r", cfg.SessionID)
}
}
// Pass the task as a positional argument rather than via the deprecated -p prompt flag.
// Stdin mode ("-") is the exception: -p is still required so the CLI reads the prompt from stdin.
if targetArg == "-" {
args = append(args, "-p", targetArg)
} else {
args = append(args, targetArg)
}
return args
}
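
Note (editor's sketch, not part of the committed diff): GeminiBackend argument construction plus the optional ~/.gemini/.env overrides; the printed args match the gemini model test earlier in this diff.

package main

import (
	"fmt"

	backend "codeagent-wrapper/internal/backend"
	config "codeagent-wrapper/internal/config"
)

func main() {
	b := backend.GeminiBackend{}
	cfg := &config.Config{Mode: "new", Model: "gemini-3-pro-preview"}
	fmt.Println(b.BuildArgs(cfg, "task"))
	// [-o stream-json -y -m gemini-3-pro-preview task]

	// Per-user overrides; nil when ~/.gemini/.env does not exist.
	for k, v := range backend.LoadGeminiEnv() {
		fmt.Printf("%s=%s\n", k, v)
	}
}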

View File

@@ -0,0 +1,29 @@
package backend
import (
"strings"
config "codeagent-wrapper/internal/config"
)
type OpencodeBackend struct{}
func (OpencodeBackend) Name() string { return "opencode" }
func (OpencodeBackend) Command() string { return "opencode" }
func (OpencodeBackend) Env(baseURL, apiKey string) map[string]string { return nil }
func (OpencodeBackend) BuildArgs(cfg *config.Config, targetArg string) []string {
args := []string{"run"}
if cfg != nil {
if model := strings.TrimSpace(cfg.Model); model != "" {
args = append(args, "-m", model)
}
if cfg.Mode == "resume" && cfg.SessionID != "" {
args = append(args, "-s", cfg.SessionID)
}
}
args = append(args, "--format", "json")
if targetArg != "-" {
args = append(args, targetArg)
}
return args
}

View File

@@ -0,0 +1,29 @@
package backend
import (
"fmt"
"strings"
)
var registry = map[string]Backend{
"codex": CodexBackend{},
"claude": ClaudeBackend{},
"gemini": GeminiBackend{},
"opencode": OpencodeBackend{},
}
// Registry exposes the available backends. Intended for internal inspection/tests.
func Registry() map[string]Backend {
return registry
}
func Select(name string) (Backend, error) {
key := strings.ToLower(strings.TrimSpace(name))
if key == "" {
key = "codex"
}
if backend, ok := registry[key]; ok {
return backend, nil
}
return nil, fmt.Errorf("unsupported backend %q", name)
}
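
Note (editor's sketch, not part of the committed diff): selecting a backend by name; an empty name falls back to codex and unknown names return an error. The printed args match the opencode test earlier in this diff.

package main

import (
	"fmt"

	backend "codeagent-wrapper/internal/backend"
	config "codeagent-wrapper/internal/config"
)

func main() {
	b, err := backend.Select("opencode")
	if err != nil {
		fmt.Println("select:", err)
		return
	}
	fmt.Println(b.Command(), b.BuildArgs(&config.Config{Mode: "new"}, "hello"))
	// opencode [run --format json hello]
}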

View File

@@ -0,0 +1,220 @@
package config
import (
"fmt"
"os"
"path/filepath"
"strings"
"sync"
ilogger "codeagent-wrapper/internal/logger"
"github.com/goccy/go-json"
)
type BackendConfig struct {
BaseURL string `json:"base_url,omitempty"`
APIKey string `json:"api_key,omitempty"`
}
type AgentModelConfig struct {
Backend string `json:"backend"`
Model string `json:"model"`
PromptFile string `json:"prompt_file,omitempty"`
Description string `json:"description,omitempty"`
Yolo bool `json:"yolo,omitempty"`
Reasoning string `json:"reasoning,omitempty"`
BaseURL string `json:"base_url,omitempty"`
APIKey string `json:"api_key,omitempty"`
}
type ModelsConfig struct {
DefaultBackend string `json:"default_backend"`
DefaultModel string `json:"default_model"`
Agents map[string]AgentModelConfig `json:"agents"`
Backends map[string]BackendConfig `json:"backends,omitempty"`
}
var defaultModelsConfig = ModelsConfig{
DefaultBackend: "opencode",
DefaultModel: "opencode/grok-code",
Agents: map[string]AgentModelConfig{
"oracle": {Backend: "claude", Model: "claude-opus-4-5-20251101", PromptFile: "~/.claude/skills/omo/references/oracle.md", Description: "Technical advisor"},
"librarian": {Backend: "claude", Model: "claude-sonnet-4-5-20250929", PromptFile: "~/.claude/skills/omo/references/librarian.md", Description: "Researcher"},
"explore": {Backend: "opencode", Model: "opencode/grok-code", PromptFile: "~/.claude/skills/omo/references/explore.md", Description: "Code search"},
"develop": {Backend: "codex", Model: "", PromptFile: "~/.claude/skills/omo/references/develop.md", Description: "Code development"},
"frontend-ui-ux-engineer": {Backend: "gemini", Model: "", PromptFile: "~/.claude/skills/omo/references/frontend-ui-ux-engineer.md", Description: "Frontend engineer"},
"document-writer": {Backend: "gemini", Model: "", PromptFile: "~/.claude/skills/omo/references/document-writer.md", Description: "Documentation"},
},
}
var (
modelsConfigOnce sync.Once
modelsConfigCached *ModelsConfig
)
func modelsConfig() *ModelsConfig {
modelsConfigOnce.Do(func() {
modelsConfigCached = loadModelsConfig()
})
if modelsConfigCached == nil {
return &defaultModelsConfig
}
return modelsConfigCached
}
func loadModelsConfig() *ModelsConfig {
home, err := os.UserHomeDir()
if err != nil {
ilogger.LogWarn(fmt.Sprintf("Failed to resolve home directory for models config: %v; using defaults", err))
return &defaultModelsConfig
}
configDir := filepath.Clean(filepath.Join(home, ".codeagent"))
configPath := filepath.Clean(filepath.Join(configDir, "models.json"))
rel, err := filepath.Rel(configDir, configPath)
if err != nil || rel == ".." || strings.HasPrefix(rel, ".."+string(os.PathSeparator)) {
return &defaultModelsConfig
}
data, err := os.ReadFile(configPath) // #nosec G304 -- path is fixed under user home and validated to stay within configDir
if err != nil {
if !os.IsNotExist(err) {
ilogger.LogWarn(fmt.Sprintf("Failed to read models config %s: %v; using defaults", configPath, err))
}
return &defaultModelsConfig
}
var cfg ModelsConfig
if err := json.Unmarshal(data, &cfg); err != nil {
ilogger.LogWarn(fmt.Sprintf("Failed to parse models config %s: %v; using defaults", configPath, err))
return &defaultModelsConfig
}
cfg.DefaultBackend = strings.TrimSpace(cfg.DefaultBackend)
if cfg.DefaultBackend == "" {
cfg.DefaultBackend = defaultModelsConfig.DefaultBackend
}
cfg.DefaultModel = strings.TrimSpace(cfg.DefaultModel)
if cfg.DefaultModel == "" {
cfg.DefaultModel = defaultModelsConfig.DefaultModel
}
// Merge with defaults
for name, agent := range defaultModelsConfig.Agents {
if _, exists := cfg.Agents[name]; !exists {
if cfg.Agents == nil {
cfg.Agents = make(map[string]AgentModelConfig)
}
cfg.Agents[name] = agent
}
}
// Normalize backend keys so lookups can be case-insensitive.
if len(cfg.Backends) > 0 {
normalized := make(map[string]BackendConfig, len(cfg.Backends))
for k, v := range cfg.Backends {
key := strings.ToLower(strings.TrimSpace(k))
if key == "" {
continue
}
normalized[key] = v
}
if len(normalized) > 0 {
cfg.Backends = normalized
} else {
cfg.Backends = nil
}
}
return &cfg
}
func LoadDynamicAgent(name string) (AgentModelConfig, bool) {
if err := ValidateAgentName(name); err != nil {
return AgentModelConfig{}, false
}
home, err := os.UserHomeDir()
if err != nil || strings.TrimSpace(home) == "" {
return AgentModelConfig{}, false
}
absPath := filepath.Join(home, ".codeagent", "agents", name+".md")
info, err := os.Stat(absPath)
if err != nil || info.IsDir() {
return AgentModelConfig{}, false
}
return AgentModelConfig{PromptFile: "~/.codeagent/agents/" + name + ".md"}, true
}
func ResolveBackendConfig(backendName string) (baseURL, apiKey string) {
cfg := modelsConfig()
resolved := resolveBackendConfig(cfg, backendName)
return strings.TrimSpace(resolved.BaseURL), strings.TrimSpace(resolved.APIKey)
}
func resolveBackendConfig(cfg *ModelsConfig, backendName string) BackendConfig {
if cfg == nil || len(cfg.Backends) == 0 {
return BackendConfig{}
}
key := strings.ToLower(strings.TrimSpace(backendName))
if key == "" {
key = strings.ToLower(strings.TrimSpace(cfg.DefaultBackend))
}
if key == "" {
return BackendConfig{}
}
if backend, ok := cfg.Backends[key]; ok {
return backend
}
return BackendConfig{}
}
func resolveAgentConfig(agentName string) (backend, model, promptFile, reasoning, baseURL, apiKey string, yolo bool) {
cfg := modelsConfig()
if agent, ok := cfg.Agents[agentName]; ok {
backend = strings.TrimSpace(agent.Backend)
if backend == "" {
backend = cfg.DefaultBackend
}
backendCfg := resolveBackendConfig(cfg, backend)
baseURL = strings.TrimSpace(agent.BaseURL)
if baseURL == "" {
baseURL = strings.TrimSpace(backendCfg.BaseURL)
}
apiKey = strings.TrimSpace(agent.APIKey)
if apiKey == "" {
apiKey = strings.TrimSpace(backendCfg.APIKey)
}
return backend, strings.TrimSpace(agent.Model), agent.PromptFile, agent.Reasoning, baseURL, apiKey, agent.Yolo
}
if dynamic, ok := LoadDynamicAgent(agentName); ok {
backend = cfg.DefaultBackend
model = cfg.DefaultModel
backendCfg := resolveBackendConfig(cfg, backend)
baseURL = strings.TrimSpace(backendCfg.BaseURL)
apiKey = strings.TrimSpace(backendCfg.APIKey)
return backend, model, dynamic.PromptFile, "", baseURL, apiKey, false
}
backend = cfg.DefaultBackend
model = cfg.DefaultModel
backendCfg := resolveBackendConfig(cfg, backend)
baseURL = strings.TrimSpace(backendCfg.BaseURL)
apiKey = strings.TrimSpace(backendCfg.APIKey)
return backend, model, "", "", baseURL, apiKey, false
}
func ResolveAgentConfig(agentName string) (backend, model, promptFile, reasoning, baseURL, apiKey string, yolo bool) {
return resolveAgentConfig(agentName)
}
func ResetModelsConfigCacheForTest() {
modelsConfigCached = nil
modelsConfigOnce = sync.Once{}
}
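
Note (editor's sketch, not part of the committed diff): resolving an agent with no ~/.codeagent/models.json present, so the built-in defaults above apply.

package main

import (
	"fmt"

	config "codeagent-wrapper/internal/config"
)

func main() {
	backendName, model, promptFile, _, _, _, _ := config.ResolveAgentConfig("explore")
	fmt.Println(backendName, model, promptFile)
	// opencode opencode/grok-code ~/.claude/skills/omo/references/explore.md

	// Optional per-backend overrides from the "backends" section (empty here).
	baseURL, apiKey := config.ResolveBackendConfig("claude")
	fmt.Println(baseURL, apiKey)
}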

View File

@@ -1,9 +1,8 @@
package main
package config
import (
"os"
"path/filepath"
"reflect"
"testing"
)
@@ -11,6 +10,8 @@ func TestResolveAgentConfig_Defaults(t *testing.T) {
home := t.TempDir()
t.Setenv("HOME", home)
t.Setenv("USERPROFILE", home)
t.Cleanup(ResetModelsConfigCacheForTest)
ResetModelsConfigCacheForTest()
// Test that default agents resolve correctly without config file
tests := []struct {
@@ -19,16 +20,16 @@ func TestResolveAgentConfig_Defaults(t *testing.T) {
wantModel string
wantPromptFile string
}{
{"oracle", "claude", "claude-opus-4-5-20251101", "~/.claude/skills/omo/references/oracle.md"},
{"librarian", "claude", "claude-sonnet-4-5-20250929", "~/.claude/skills/omo/references/librarian.md"},
{"explore", "opencode", "opencode/grok-code", "~/.claude/skills/omo/references/explore.md"},
{"frontend-ui-ux-engineer", "gemini", "", "~/.claude/skills/omo/references/frontend-ui-ux-engineer.md"},
{"document-writer", "gemini", "", "~/.claude/skills/omo/references/document-writer.md"},
}
{"oracle", "claude", "claude-opus-4-5-20251101", "~/.claude/skills/omo/references/oracle.md"},
{"librarian", "claude", "claude-sonnet-4-5-20250929", "~/.claude/skills/omo/references/librarian.md"},
{"explore", "opencode", "opencode/grok-code", "~/.claude/skills/omo/references/explore.md"},
{"frontend-ui-ux-engineer", "gemini", "", "~/.claude/skills/omo/references/frontend-ui-ux-engineer.md"},
{"document-writer", "gemini", "", "~/.claude/skills/omo/references/document-writer.md"},
}
for _, tt := range tests {
t.Run(tt.agent, func(t *testing.T) {
backend, model, promptFile, _, _ := resolveAgentConfig(tt.agent)
backend, model, promptFile, _, _, _, _ := resolveAgentConfig(tt.agent)
if backend != tt.wantBackend {
t.Errorf("backend = %q, want %q", backend, tt.wantBackend)
}
@@ -46,8 +47,10 @@ func TestResolveAgentConfig_UnknownAgent(t *testing.T) {
home := t.TempDir()
t.Setenv("HOME", home)
t.Setenv("USERPROFILE", home)
t.Cleanup(ResetModelsConfigCacheForTest)
ResetModelsConfigCacheForTest()
backend, model, promptFile, _, _ := resolveAgentConfig("unknown-agent")
backend, model, promptFile, _, _, _, _ := resolveAgentConfig("unknown-agent")
if backend != "opencode" {
t.Errorf("unknown agent backend = %q, want %q", backend, "opencode")
}
@@ -63,6 +66,8 @@ func TestLoadModelsConfig_NoFile(t *testing.T) {
home := "/nonexistent/path/that/does/not/exist"
t.Setenv("HOME", home)
t.Setenv("USERPROFILE", home)
t.Cleanup(ResetModelsConfigCacheForTest)
ResetModelsConfigCacheForTest()
cfg := loadModelsConfig()
if cfg.DefaultBackend != "opencode" {
@@ -84,11 +89,23 @@ func TestLoadModelsConfig_WithFile(t *testing.T) {
configContent := `{
"default_backend": "claude",
"default_model": "claude-opus-4",
"backends": {
"Claude": {
"base_url": "https://backend.example",
"api_key": "backend-key"
},
"codex": {
"base_url": "https://openai.example",
"api_key": "openai-key"
}
},
"agents": {
"custom-agent": {
"backend": "codex",
"model": "gpt-4o",
"description": "Custom agent"
"description": "Custom agent",
"base_url": "https://agent.example",
"api_key": "agent-key"
}
}
}`
@@ -99,6 +116,8 @@ func TestLoadModelsConfig_WithFile(t *testing.T) {
t.Setenv("HOME", tmpDir)
t.Setenv("USERPROFILE", tmpDir)
t.Cleanup(ResetModelsConfigCacheForTest)
ResetModelsConfigCacheForTest()
cfg := loadModelsConfig()
@@ -125,6 +144,55 @@ func TestLoadModelsConfig_WithFile(t *testing.T) {
if _, ok := cfg.Agents["oracle"]; !ok {
t.Error("default agent oracle should be merged")
}
baseURL, apiKey := ResolveBackendConfig("claude")
if baseURL != "https://backend.example" {
t.Errorf("ResolveBackendConfig(baseURL) = %q, want %q", baseURL, "https://backend.example")
}
if apiKey != "backend-key" {
t.Errorf("ResolveBackendConfig(apiKey) = %q, want %q", apiKey, "backend-key")
}
backend, model, _, _, agentBaseURL, agentAPIKey, _ := ResolveAgentConfig("custom-agent")
if backend != "codex" {
t.Errorf("ResolveAgentConfig(backend) = %q, want %q", backend, "codex")
}
if model != "gpt-4o" {
t.Errorf("ResolveAgentConfig(model) = %q, want %q", model, "gpt-4o")
}
if agentBaseURL != "https://agent.example" {
t.Errorf("ResolveAgentConfig(baseURL) = %q, want %q", agentBaseURL, "https://agent.example")
}
if agentAPIKey != "agent-key" {
t.Errorf("ResolveAgentConfig(apiKey) = %q, want %q", agentAPIKey, "agent-key")
}
}
func TestResolveAgentConfig_DynamicAgent(t *testing.T) {
home := t.TempDir()
t.Setenv("HOME", home)
t.Setenv("USERPROFILE", home)
t.Cleanup(ResetModelsConfigCacheForTest)
ResetModelsConfigCacheForTest()
agentDir := filepath.Join(home, ".codeagent", "agents")
if err := os.MkdirAll(agentDir, 0o755); err != nil {
t.Fatalf("MkdirAll: %v", err)
}
if err := os.WriteFile(filepath.Join(agentDir, "sarsh.md"), []byte("prompt\n"), 0o644); err != nil {
t.Fatalf("WriteFile: %v", err)
}
backend, model, promptFile, _, _, _, _ := resolveAgentConfig("sarsh")
if backend != "opencode" {
t.Errorf("backend = %q, want %q", backend, "opencode")
}
if model != "opencode/grok-code" {
t.Errorf("model = %q, want %q", model, "opencode/grok-code")
}
if promptFile != "~/.codeagent/agents/sarsh.md" {
t.Errorf("promptFile = %q, want %q", promptFile, "~/.codeagent/agents/sarsh.md")
}
}
func TestLoadModelsConfig_InvalidJSON(t *testing.T) {
@@ -142,6 +210,8 @@ func TestLoadModelsConfig_InvalidJSON(t *testing.T) {
t.Setenv("HOME", tmpDir)
t.Setenv("USERPROFILE", tmpDir)
t.Cleanup(ResetModelsConfigCacheForTest)
ResetModelsConfigCacheForTest()
cfg := loadModelsConfig()
// Should fall back to defaults
@@ -149,69 +219,3 @@ func TestLoadModelsConfig_InvalidJSON(t *testing.T) {
t.Errorf("invalid JSON should fallback, got DefaultBackend = %q", cfg.DefaultBackend)
}
}
func TestOpencodeBackend_BuildArgs(t *testing.T) {
backend := OpencodeBackend{}
t.Run("basic", func(t *testing.T) {
cfg := &Config{Mode: "new"}
got := backend.BuildArgs(cfg, "hello")
want := []string{"run", "--format", "json", "hello"}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
t.Run("with model", func(t *testing.T) {
cfg := &Config{Mode: "new", Model: "opencode/grok-code"}
got := backend.BuildArgs(cfg, "task")
want := []string{"run", "-m", "opencode/grok-code", "--format", "json", "task"}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
t.Run("resume mode", func(t *testing.T) {
cfg := &Config{Mode: "resume", SessionID: "ses_123", Model: "opencode/grok-code"}
got := backend.BuildArgs(cfg, "follow-up")
want := []string{"run", "-m", "opencode/grok-code", "-s", "ses_123", "--format", "json", "follow-up"}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
t.Run("resume without session", func(t *testing.T) {
cfg := &Config{Mode: "resume"}
got := backend.BuildArgs(cfg, "task")
want := []string{"run", "--format", "json", "task"}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
t.Run("stdin mode omits dash", func(t *testing.T) {
cfg := &Config{Mode: "new"}
got := backend.BuildArgs(cfg, "-")
want := []string{"run", "--format", "json"}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
}
func TestOpencodeBackend_Interface(t *testing.T) {
backend := OpencodeBackend{}
if backend.Name() != "opencode" {
t.Errorf("Name() = %q, want %q", backend.Name(), "opencode")
}
if backend.Command() != "opencode" {
t.Errorf("Command() = %q, want %q", backend.Command(), "opencode")
}
}
func TestBackendRegistry_IncludesOpencode(t *testing.T) {
if _, ok := backendRegistry["opencode"]; !ok {
t.Error("backendRegistry should include opencode")
}
}

View File

@@ -0,0 +1,102 @@
package config
import (
"fmt"
"os"
"strconv"
"strings"
)
// Config holds CLI configuration.
type Config struct {
Mode string // "new" or "resume"
Task string
SessionID string
WorkDir string
Model string
ReasoningEffort string
ExplicitStdin bool
Timeout int
Backend string
Agent string
PromptFile string
PromptFileExplicit bool
SkipPermissions bool
Yolo bool
MaxParallelWorkers int
}
// EnvFlagEnabled returns true when the environment variable exists and is not
// empty or explicitly set to a falsey value ("", "0", "false", "no", "off").
func EnvFlagEnabled(key string) bool {
val, ok := os.LookupEnv(key)
if !ok {
return false
}
val = strings.TrimSpace(strings.ToLower(val))
switch val {
case "", "0", "false", "no", "off":
return false
default:
return true
}
}
func ParseBoolFlag(val string, defaultValue bool) bool {
val = strings.TrimSpace(strings.ToLower(val))
switch val {
case "1", "true", "yes", "on":
return true
case "0", "false", "no", "off":
return false
default:
return defaultValue
}
}
// EnvFlagDefaultTrue returns true unless the env var is explicitly set to
// false/0/no/off.
func EnvFlagDefaultTrue(key string) bool {
val, ok := os.LookupEnv(key)
if !ok {
return true
}
return ParseBoolFlag(val, true)
}
func ValidateAgentName(name string) error {
if strings.TrimSpace(name) == "" {
return fmt.Errorf("agent name is empty")
}
for _, r := range name {
switch {
case r >= 'a' && r <= 'z':
case r >= 'A' && r <= 'Z':
case r >= '0' && r <= '9':
case r == '-', r == '_':
default:
return fmt.Errorf("agent name %q contains invalid character %q", name, r)
}
}
return nil
}
const maxParallelWorkersLimit = 100
// ResolveMaxParallelWorkers reads CODEAGENT_MAX_PARALLEL_WORKERS. It returns 0
// for "unlimited" (including unset, invalid, or negative values) and clamps
// anything above maxParallelWorkersLimit to that limit.
func ResolveMaxParallelWorkers() int {
raw := strings.TrimSpace(os.Getenv("CODEAGENT_MAX_PARALLEL_WORKERS"))
if raw == "" {
return 0
}
value, err := strconv.Atoi(raw)
if err != nil || value < 0 {
return 0
}
if value > maxParallelWorkersLimit {
return maxParallelWorkersLimit
}
return value
}
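
Note (editor's sketch, not part of the committed diff): the env-flag and worker-limit helpers above in use; the clamp to 100 comes from maxParallelWorkersLimit.

package main

import (
	"fmt"
	"os"

	config "codeagent-wrapper/internal/config"
)

func main() {
	// Unset variables default to true for "default true" flags.
	fmt.Println(config.EnvFlagDefaultTrue("CODEAGENT_SKIP_PERMISSIONS"))

	os.Setenv("CODEAGENT_MAX_PARALLEL_WORKERS", "250")
	fmt.Println(config.ResolveMaxParallelWorkers()) // 100 (clamped)

	fmt.Println(config.ValidateAgentName("../etc")) // rejected: '.' and '/' are not allowed
}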

View File

@@ -0,0 +1,47 @@
package config
import (
"errors"
"os"
"path/filepath"
"strings"
"github.com/spf13/viper"
)
// NewViper returns a viper instance configured for CODEAGENT_* environment
// variables and an optional config file.
//
// Search order when configFile is empty:
// - $HOME/.codeagent/config.(yaml|yml|json|toml|...)
func NewViper(configFile string) (*viper.Viper, error) {
v := viper.New()
v.SetEnvPrefix("CODEAGENT")
v.SetEnvKeyReplacer(strings.NewReplacer("-", "_"))
v.AutomaticEnv()
if strings.TrimSpace(configFile) != "" {
v.SetConfigFile(configFile)
if err := v.ReadInConfig(); err != nil {
return nil, err
}
return v, nil
}
home, err := os.UserHomeDir()
if err != nil || strings.TrimSpace(home) == "" {
return v, nil
}
v.SetConfigName("config")
v.AddConfigPath(filepath.Join(home, ".codeagent"))
if err := v.ReadInConfig(); err != nil {
var notFound viper.ConfigFileNotFoundError
if errors.As(err, &notFound) {
return v, nil
}
return nil, err
}
return v, nil
}
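
Note (editor's sketch, not part of the committed diff): using NewViper without an explicit config file; a missing ~/.codeagent/config.* is tolerated, and CODEAGENT_* variables are picked up via AutomaticEnv.

package main

import (
	"fmt"

	config "codeagent-wrapper/internal/config"
)

func main() {
	v, err := config.NewViper("")
	if err != nil {
		fmt.Println("config:", err)
		return
	}
	// With the CODEAGENT prefix, this reads CODEAGENT_BACKEND when it is set.
	fmt.Println(v.GetString("backend"))
}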

View File

@@ -0,0 +1,193 @@
package executor
import (
"os"
"path/filepath"
"strings"
"testing"
backend "codeagent-wrapper/internal/backend"
config "codeagent-wrapper/internal/config"
)
// TestEnvInjectionWithAgent tests the full flow of env injection with agent config
func TestEnvInjectionWithAgent(t *testing.T) {
// Setup temp config
tmpDir := t.TempDir()
configDir := filepath.Join(tmpDir, ".codeagent")
if err := os.MkdirAll(configDir, 0755); err != nil {
t.Fatal(err)
}
// Write test config with agent that has base_url and api_key
configContent := `{
"default_backend": "codex",
"agents": {
"test-agent": {
"backend": "claude",
"model": "test-model",
"base_url": "https://test.api.com",
"api_key": "test-api-key-12345678"
}
}
}`
configPath := filepath.Join(configDir, "models.json")
if err := os.WriteFile(configPath, []byte(configContent), 0644); err != nil {
t.Fatal(err)
}
// Override HOME to use temp dir
oldHome := os.Getenv("HOME")
os.Setenv("HOME", tmpDir)
defer os.Setenv("HOME", oldHome)
// Reset config cache
config.ResetModelsConfigCacheForTest()
defer config.ResetModelsConfigCacheForTest()
// Test ResolveAgentConfig
agentBackend, model, _, _, baseURL, apiKey, _ := config.ResolveAgentConfig("test-agent")
t.Logf("ResolveAgentConfig: backend=%q, model=%q, baseURL=%q, apiKey=%q",
agentBackend, model, baseURL, apiKey)
if agentBackend != "claude" {
t.Errorf("expected backend 'claude', got %q", agentBackend)
}
if baseURL != "https://test.api.com" {
t.Errorf("expected baseURL 'https://test.api.com', got %q", baseURL)
}
if apiKey != "test-api-key-12345678" {
t.Errorf("expected apiKey 'test-api-key-12345678', got %q", apiKey)
}
// Test Backend.Env
b := backend.ClaudeBackend{}
env := b.Env(baseURL, apiKey)
t.Logf("Backend.Env: %v", env)
if env == nil {
t.Fatal("expected non-nil env from Backend.Env")
}
if env["ANTHROPIC_BASE_URL"] != baseURL {
t.Errorf("expected ANTHROPIC_BASE_URL=%q, got %q", baseURL, env["ANTHROPIC_BASE_URL"])
}
if env["ANTHROPIC_AUTH_TOKEN"] != apiKey {
t.Errorf("expected ANTHROPIC_AUTH_TOKEN=%q, got %q", apiKey, env["ANTHROPIC_AUTH_TOKEN"])
}
}
// TestEnvInjectionLogic tests the exact logic used in executor
func TestEnvInjectionLogic(t *testing.T) {
// Setup temp config
tmpDir := t.TempDir()
configDir := filepath.Join(tmpDir, ".codeagent")
if err := os.MkdirAll(configDir, 0755); err != nil {
t.Fatal(err)
}
configContent := `{
"default_backend": "codex",
"agents": {
"explore": {
"backend": "claude",
"model": "MiniMax-M2.1",
"base_url": "https://api.minimaxi.com/anthropic",
"api_key": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.test"
}
}
}`
configPath := filepath.Join(configDir, "models.json")
if err := os.WriteFile(configPath, []byte(configContent), 0644); err != nil {
t.Fatal(err)
}
oldHome := os.Getenv("HOME")
os.Setenv("HOME", tmpDir)
defer os.Setenv("HOME", oldHome)
config.ResetModelsConfigCacheForTest()
defer config.ResetModelsConfigCacheForTest()
// Simulate the executor logic
cfgBackend := "claude" // This should come from taskSpec.Backend
agentName := "explore"
// Step 1: Get backend config (usually empty for claude without global config)
baseURL, apiKey := config.ResolveBackendConfig(cfgBackend)
t.Logf("Step 1 - ResolveBackendConfig(%q): baseURL=%q, apiKey=%q", cfgBackend, baseURL, apiKey)
// Step 2: If agent specified, get agent config
if agentName != "" {
agentBackend, _, _, _, agentBaseURL, agentAPIKey, _ := config.ResolveAgentConfig(agentName)
t.Logf("Step 2 - ResolveAgentConfig(%q): backend=%q, baseURL=%q, apiKey=%q",
agentName, agentBackend, agentBaseURL, agentAPIKey)
// Step 3: Check if agent backend matches cfg backend
if strings.EqualFold(strings.TrimSpace(agentBackend), strings.TrimSpace(cfgBackend)) {
baseURL, apiKey = agentBaseURL, agentAPIKey
t.Logf("Step 3 - Backend match! Using agent config: baseURL=%q, apiKey=%q", baseURL, apiKey)
} else {
t.Logf("Step 3 - Backend mismatch: agent=%q, cfg=%q", agentBackend, cfgBackend)
}
}
// Step 4: Get env vars from backend
b := backend.ClaudeBackend{}
injected := b.Env(baseURL, apiKey)
t.Logf("Step 4 - Backend.Env: %v", injected)
// Verify
if len(injected) == 0 {
t.Fatal("Expected env vars to be injected, got none")
}
expectedURL := "https://api.minimaxi.com/anthropic"
if injected["ANTHROPIC_BASE_URL"] != expectedURL {
t.Errorf("ANTHROPIC_BASE_URL: expected %q, got %q", expectedURL, injected["ANTHROPIC_BASE_URL"])
}
if _, ok := injected["ANTHROPIC_AUTH_TOKEN"]; !ok {
t.Error("ANTHROPIC_AUTH_TOKEN not set")
}
// Step 5: Test masking
for k, v := range injected {
masked := maskSensitiveValue(k, v)
t.Logf("Step 5 - Env log: %s=%s", k, masked)
}
}
// TestTaskSpecBackendPropagation tests that taskSpec.Backend is properly used
func TestTaskSpecBackendPropagation(t *testing.T) {
// Simulate what happens in RunCodexTaskWithContext
taskSpec := TaskSpec{
ID: "test",
Task: "hello",
Backend: "claude",
Agent: "explore",
}
// This is the logic from executor.go lines 889-916
cfg := &config.Config{
Mode: "new",
Task: taskSpec.Task,
Backend: "codex", // default
}
var backend Backend = nil // nil in single mode
commandName := "codex" // default
if backend != nil {
cfg.Backend = backend.Name()
} else if taskSpec.Backend != "" {
cfg.Backend = taskSpec.Backend
} else if commandName != "" {
cfg.Backend = commandName
}
t.Logf("taskSpec.Backend=%q, cfg.Backend=%q", taskSpec.Backend, cfg.Backend)
if cfg.Backend != "claude" {
t.Errorf("expected cfg.Backend='claude', got %q", cfg.Backend)
}
}

View File

@@ -0,0 +1,333 @@
package executor
import (
"strings"
"testing"
backend "codeagent-wrapper/internal/backend"
)
func TestMaskSensitiveValue(t *testing.T) {
tests := []struct {
name string
key string
value string
expected string
}{
{
name: "API_KEY with long value",
key: "ANTHROPIC_AUTH_TOKEN",
value: "sk-ant-api03-xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
expected: "sk-a****xxxx",
},
{
name: "api_key lowercase",
key: "api_key",
value: "abcdefghijklmnop",
expected: "abcd****mnop",
},
{
name: "AUTH_TOKEN",
key: "AUTH_TOKEN",
value: "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9",
expected: "eyJh****VCJ9",
},
{
name: "SECRET",
key: "MY_SECRET",
value: "super-secret-value-12345",
expected: "supe****2345",
},
{
name: "short key value (8 chars)",
key: "API_KEY",
value: "12345678",
expected: "****",
},
{
name: "very short key value",
key: "API_KEY",
value: "abc",
expected: "****",
},
{
name: "empty key value",
key: "API_KEY",
value: "",
expected: "",
},
{
name: "non-sensitive BASE_URL",
key: "ANTHROPIC_BASE_URL",
value: "https://api.anthropic.com",
expected: "https://api.anthropic.com",
},
{
name: "non-sensitive MODEL",
key: "MODEL",
value: "claude-3-opus",
expected: "claude-3-opus",
},
{
name: "case insensitive - Key",
key: "My_Key",
value: "1234567890abcdef",
expected: "1234****cdef",
},
{
name: "case insensitive - TOKEN",
key: "ACCESS_TOKEN",
value: "access123456789",
expected: "acce****6789",
},
{
name: "partial match - apikey",
key: "MYAPIKEY",
value: "1234567890",
expected: "1234****7890",
},
{
name: "partial match - secretvalue",
key: "SECRETVALUE",
value: "abcdefghij",
expected: "abcd****ghij",
},
{
name: "9 char value (just above threshold)",
key: "API_KEY",
value: "123456789",
expected: "1234****6789",
},
{
name: "exactly 8 char value (at threshold)",
key: "API_KEY",
value: "12345678",
expected: "****",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := maskSensitiveValue(tt.key, tt.value)
if result != tt.expected {
t.Errorf("maskSensitiveValue(%q, %q) = %q, want %q", tt.key, tt.value, result, tt.expected)
}
})
}
}
func TestMaskSensitiveValue_NoLeakage(t *testing.T) {
// Ensure sensitive values are never fully exposed
sensitiveKeys := []string{"API_KEY", "api_key", "AUTH_TOKEN", "SECRET", "access_token", "MYAPIKEY"}
longValue := "this-is-a-very-long-secret-value-that-should-be-masked"
for _, key := range sensitiveKeys {
t.Run(key, func(t *testing.T) {
masked := maskSensitiveValue(key, longValue)
// Should not contain the full value
if masked == longValue {
t.Errorf("key %q: value was not masked", key)
}
// Should contain mask marker
if !strings.Contains(masked, "****") {
t.Errorf("key %q: masked value %q does not contain ****", key, masked)
}
// First 4 chars should be visible
if !strings.HasPrefix(masked, longValue[:4]) {
t.Errorf("key %q: masked value should start with first 4 chars", key)
}
// Last 4 chars should be visible
if !strings.HasSuffix(masked, longValue[len(longValue)-4:]) {
t.Errorf("key %q: masked value should end with last 4 chars", key)
}
})
}
}
func TestMaskSensitiveValue_NonSensitivePassthrough(t *testing.T) {
// Non-sensitive keys should pass through unchanged
nonSensitiveKeys := []string{
"ANTHROPIC_BASE_URL",
"BASE_URL",
"MODEL",
"BACKEND",
"WORKDIR",
"HOME",
"PATH",
}
value := "any-value-here-12345"
for _, key := range nonSensitiveKeys {
t.Run(key, func(t *testing.T) {
result := maskSensitiveValue(key, value)
if result != value {
t.Errorf("key %q: expected passthrough but got %q", key, result)
}
})
}
}
// TestClaudeBackendEnv tests that ClaudeBackend.Env returns correct env vars
func TestClaudeBackendEnv(t *testing.T) {
tests := []struct {
name string
baseURL string
apiKey string
expectKeys []string
expectNil bool
}{
{
name: "both base_url and api_key",
baseURL: "https://api.custom.com",
apiKey: "sk-test-key-12345",
expectKeys: []string{"ANTHROPIC_BASE_URL", "ANTHROPIC_AUTH_TOKEN"},
},
{
name: "only base_url",
baseURL: "https://api.custom.com",
apiKey: "",
expectKeys: []string{"ANTHROPIC_BASE_URL"},
},
{
name: "only api_key",
baseURL: "",
apiKey: "sk-test-key-12345",
expectKeys: []string{"ANTHROPIC_AUTH_TOKEN"},
},
{
name: "both empty",
baseURL: "",
apiKey: "",
expectNil: true,
},
{
name: "whitespace only",
baseURL: " ",
apiKey: " ",
expectNil: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
b := backend.ClaudeBackend{}
env := b.Env(tt.baseURL, tt.apiKey)
if tt.expectNil {
if env != nil {
t.Errorf("expected nil env, got %v", env)
}
return
}
if env == nil {
t.Fatal("expected non-nil env")
}
for _, key := range tt.expectKeys {
if _, ok := env[key]; !ok {
t.Errorf("expected key %q in env", key)
}
}
// Verify values are correct
if tt.baseURL != "" && strings.TrimSpace(tt.baseURL) != "" {
if env["ANTHROPIC_BASE_URL"] != strings.TrimSpace(tt.baseURL) {
t.Errorf("ANTHROPIC_BASE_URL = %q, want %q", env["ANTHROPIC_BASE_URL"], strings.TrimSpace(tt.baseURL))
}
}
if tt.apiKey != "" && strings.TrimSpace(tt.apiKey) != "" {
if env["ANTHROPIC_AUTH_TOKEN"] != strings.TrimSpace(tt.apiKey) {
t.Errorf("ANTHROPIC_AUTH_TOKEN = %q, want %q", env["ANTHROPIC_AUTH_TOKEN"], strings.TrimSpace(tt.apiKey))
}
}
})
}
}
// TestEnvLoggingIntegration tests that env vars are properly masked in logs
func TestEnvLoggingIntegration(t *testing.T) {
b := backend.ClaudeBackend{}
baseURL := "https://api.minimaxi.com/anthropic"
apiKey := "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.longjwttoken"
env := b.Env(baseURL, apiKey)
if env == nil {
t.Fatal("expected non-nil env")
}
// Verify that when we log these values, sensitive ones are masked
for k, v := range env {
masked := maskSensitiveValue(k, v)
if k == "ANTHROPIC_BASE_URL" {
// URL should not be masked
if masked != v {
t.Errorf("BASE_URL should not be masked: got %q, want %q", masked, v)
}
}
if k == "ANTHROPIC_AUTH_TOKEN" {
// API key should be masked
if masked == v {
t.Errorf("API_KEY should be masked, but got original value")
}
if !strings.Contains(masked, "****") {
t.Errorf("masked API_KEY should contain ****: got %q", masked)
}
// Should still show first 4 and last 4 chars
if !strings.HasPrefix(masked, v[:4]) {
t.Errorf("masked value should start with first 4 chars of original")
}
if !strings.HasSuffix(masked, v[len(v)-4:]) {
t.Errorf("masked value should end with last 4 chars of original")
}
}
}
}
// TestGeminiBackendEnv tests GeminiBackend.Env for comparison
func TestGeminiBackendEnv(t *testing.T) {
b := backend.GeminiBackend{}
env := b.Env("https://custom.api", "gemini-api-key-12345")
if env == nil {
t.Fatal("expected non-nil env")
}
// Check that GEMINI env vars are set
if _, ok := env["GOOGLE_GEMINI_BASE_URL"]; !ok {
t.Error("expected GOOGLE_GEMINI_BASE_URL in env")
}
if _, ok := env["GEMINI_API_KEY"]; !ok {
t.Error("expected GEMINI_API_KEY in env")
}
// Verify masking works for Gemini keys too
for k, v := range env {
masked := maskSensitiveValue(k, v)
if strings.Contains(strings.ToLower(k), "key") {
if masked == v && len(v) > 0 {
t.Errorf("key %q should be masked", k)
}
}
}
}
// TestCodexBackendEnv tests CodexBackend.Env
func TestCodexBackendEnv(t *testing.T) {
b := backend.CodexBackend{}
env := b.Env("https://custom.api", "codex-api-key-12345")
if env == nil {
t.Fatal("expected non-nil env for codex")
}
// Check for OPENAI env vars
if _, ok := env["OPENAI_BASE_URL"]; !ok {
t.Error("expected OPENAI_BASE_URL in env")
}
if _, ok := env["OPENAI_API_KEY"]; !ok {
t.Error("expected OPENAI_API_KEY in env")
}
}
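
Note (editor's sketch, not part of the committed diff): the committed maskSensitiveValue is not shown in this compare view; the sketch below merely reproduces the behaviour these tests pin down (substring match on key/token/secret, "****" for values of 8 chars or fewer, first and last 4 chars kept otherwise) and may differ from the real implementation.

package executor

import "strings"

// maskSensitiveValue (sketch): behaviour reconstructed from the tests above.
func maskSensitiveValue(key, value string) string {
	if value == "" {
		return ""
	}
	k := strings.ToLower(key)
	if !strings.Contains(k, "key") && !strings.Contains(k, "token") && !strings.Contains(k, "secret") {
		return value // non-sensitive keys pass through unchanged
	}
	if len(value) <= 8 {
		return "****"
	}
	return value[:4] + "****" + value[len(value)-4:]
}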

View File

@@ -0,0 +1,133 @@
package executor
import (
"context"
"errors"
"io"
"os"
"path/filepath"
"strings"
"testing"
config "codeagent-wrapper/internal/config"
)
type fakeCmd struct {
env map[string]string
}
func (f *fakeCmd) Start() error { return nil }
func (f *fakeCmd) Wait() error { return nil }
func (f *fakeCmd) StdoutPipe() (io.ReadCloser, error) {
return io.NopCloser(strings.NewReader("")), nil
}
func (f *fakeCmd) StderrPipe() (io.ReadCloser, error) {
return nil, errors.New("fake stderr pipe error")
}
func (f *fakeCmd) StdinPipe() (io.WriteCloser, error) {
return nil, errors.New("fake stdin pipe error")
}
func (f *fakeCmd) SetStderr(io.Writer) {}
func (f *fakeCmd) SetDir(string) {}
func (f *fakeCmd) SetEnv(env map[string]string) {
if len(env) == 0 {
return
}
if f.env == nil {
f.env = make(map[string]string, len(env))
}
for k, v := range env {
f.env[k] = v
}
}
func (f *fakeCmd) Process() processHandle { return nil }
func TestEnvInjection_LogsToStderrAndMasksKey(t *testing.T) {
// Arrange ~/.codeagent/models.json via HOME override.
tmpDir := t.TempDir()
configDir := filepath.Join(tmpDir, ".codeagent")
if err := os.MkdirAll(configDir, 0o755); err != nil {
t.Fatal(err)
}
const baseURL = "https://api.minimaxi.com/anthropic"
const apiKey = "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.test"
models := `{
"agents": {
"explore": {
"backend": "claude",
"model": "MiniMax-M2.1",
"base_url": "` + baseURL + `",
"api_key": "` + apiKey + `"
}
}
}`
if err := os.WriteFile(filepath.Join(configDir, "models.json"), []byte(models), 0o644); err != nil {
t.Fatal(err)
}
oldHome := os.Getenv("HOME")
if err := os.Setenv("HOME", tmpDir); err != nil {
t.Fatal(err)
}
defer func() { _ = os.Setenv("HOME", oldHome) }()
config.ResetModelsConfigCacheForTest()
defer config.ResetModelsConfigCacheForTest()
// Capture stderr (RunCodexTaskWithContext prints env injection lines there).
r, w, err := os.Pipe()
if err != nil {
t.Fatal(err)
}
oldStderr := os.Stderr
os.Stderr = w
defer func() { os.Stderr = oldStderr }()
readDone := make(chan string, 1)
go func() {
defer r.Close()
b, _ := io.ReadAll(r)
readDone <- string(b)
}()
var cmd *fakeCmd
restoreRunner := SetNewCommandRunner(func(ctx context.Context, name string, args ...string) CommandRunner {
cmd = &fakeCmd{}
return cmd
})
defer restoreRunner()
// Act: force an early return right after env injection by making StderrPipe fail.
_ = RunCodexTaskWithContext(
context.Background(),
TaskSpec{Task: "hi", WorkDir: ".", Backend: "claude", Agent: "explore"},
nil,
"claude",
nil,
nil,
false,
false,
1,
)
_ = w.Close()
got := <-readDone
// Assert: env was injected into the command and logging is present with masking.
if cmd == nil || cmd.env == nil {
t.Fatalf("expected cmd env to be set, got cmd=%v env=%v", cmd, nil)
}
if cmd.env["ANTHROPIC_BASE_URL"] != baseURL {
t.Fatalf("ANTHROPIC_BASE_URL=%q, want %q", cmd.env["ANTHROPIC_BASE_URL"], baseURL)
}
if cmd.env["ANTHROPIC_AUTH_TOKEN"] != apiKey {
t.Fatalf("ANTHROPIC_AUTH_TOKEN=%q, want %q", cmd.env["ANTHROPIC_AUTH_TOKEN"], apiKey)
}
if !strings.Contains(got, "Env: ANTHROPIC_BASE_URL="+baseURL) {
t.Fatalf("stderr missing base URL env log; stderr=%q", got)
}
if !strings.Contains(got, "Env: ANTHROPIC_AUTH_TOKEN=eyJh****test") {
t.Fatalf("stderr missing masked API key log; stderr=%q", got)
}
}
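
The test above uses a common Go pattern: swap os.Stderr for the write end of an os.Pipe, read the pipe in a goroutine, and restore the original stream afterwards. A self-contained sketch of that pattern follows (captureStderr is an illustrative name, not a helper from this repository).

package main

import (
	"fmt"
	"io"
	"os"
)

// captureStderr runs fn with os.Stderr redirected to an in-memory pipe
// and returns everything fn wrote to stderr.
func captureStderr(fn func()) string {
	r, w, _ := os.Pipe()
	old := os.Stderr
	os.Stderr = w
	defer func() { os.Stderr = old }()

	done := make(chan string, 1)
	go func() {
		defer r.Close()
		b, _ := io.ReadAll(r)
		done <- string(b)
	}()

	fn()
	_ = w.Close() // unblocks io.ReadAll in the goroutine
	return <-done
}

func main() {
	got := captureStderr(func() {
		fmt.Fprintln(os.Stderr, "Env: ANTHROPIC_AUTH_TOKEN=eyJh****test")
	})
	fmt.Printf("captured: %q\n", got)
}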


@@ -1,4 +1,4 @@
package main
package executor
import (
"context"
@@ -14,11 +14,92 @@ import (
"sync/atomic"
"syscall"
"time"
backend "codeagent-wrapper/internal/backend"
config "codeagent-wrapper/internal/config"
ilogger "codeagent-wrapper/internal/logger"
parser "codeagent-wrapper/internal/parser"
utils "codeagent-wrapper/internal/utils"
)
const postMessageTerminateDelay = 1 * time.Second
const forceKillWaitTimeout = 5 * time.Second
// Defaults duplicated from wrapper for module decoupling.
const (
defaultWorkdir = "."
defaultCoverageTarget = 90.0
defaultBackendName = "codex"
codexLogLineLimit = 1000
stderrCaptureLimit = 4 * 1024
)
const (
// stdout close reasons
stdoutCloseReasonWait = "wait-done"
stdoutCloseReasonDrain = "drain-timeout"
stdoutCloseReasonCtx = "context-cancel"
stdoutDrainTimeout = 500 * time.Millisecond
)
// Hook points (tests can override inside this package).
var (
selectBackendFn = backend.Select
commandContext = exec.CommandContext
terminateCommandFn = terminateCommand
)
var forceKillDelay atomic.Int32
func init() {
forceKillDelay.Store(5) // seconds - default value
}
type (
Backend = backend.Backend
Config = config.Config
Logger = ilogger.Logger
)
type minimalClaudeSettings = backend.MinimalClaudeSettings
func loadMinimalClaudeSettings() minimalClaudeSettings { return backend.LoadMinimalClaudeSettings() }
func loadGeminiEnv() map[string]string { return backend.LoadGeminiEnv() }
func NewLogger() (*Logger, error) { return ilogger.NewLogger() }
func NewLoggerWithSuffix(suffix string) (*Logger, error) { return ilogger.NewLoggerWithSuffix(suffix) }
func setLogger(l *Logger) { ilogger.SetLogger(l) }
func closeLogger() error { return ilogger.CloseLogger() }
func activeLogger() *Logger { return ilogger.ActiveLogger() }
func logInfo(msg string) { ilogger.LogInfo(msg) }
func logWarn(msg string) { ilogger.LogWarn(msg) }
func logError(msg string) { ilogger.LogError(msg) }
func logConcurrencyPlanning(limit, total int) { ilogger.LogConcurrencyPlanning(limit, total) }
func logConcurrencyState(event, taskID string, active, limit int) {
ilogger.LogConcurrencyState(event, taskID, active, limit)
}
func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(string), onMessage func(), onComplete func()) (message, threadID string) {
return parser.ParseJSONStreamInternal(r, warnFn, infoFn, onMessage, onComplete)
}
func sanitizeOutput(s string) string { return utils.SanitizeOutput(s) }
func safeTruncate(s string, maxLen int) string { return utils.SafeTruncate(s, maxLen) }
func min(a, b int) int { return utils.Min(a, b) }
// commandRunner abstracts exec.Cmd for testability
type commandRunner interface {
Start() error
@@ -230,7 +311,7 @@ func newTaskLoggerHandle(taskID string) taskLoggerHandle {
}
// defaultRunCodexTaskFn is the default implementation of runCodexTaskFn (exposed for test reset)
func defaultRunCodexTaskFn(task TaskSpec, timeout int) TaskResult {
func DefaultRunCodexTaskFn(task TaskSpec, timeout int) TaskResult {
if task.WorkDir == "" {
task.WorkDir = defaultWorkdir
}
@@ -238,13 +319,13 @@ func defaultRunCodexTaskFn(task TaskSpec, timeout int) TaskResult {
task.Mode = "new"
}
if strings.TrimSpace(task.PromptFile) != "" {
prompt, err := readAgentPromptFile(task.PromptFile, false)
prompt, err := ReadAgentPromptFile(task.PromptFile, false)
if err != nil {
return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "failed to read prompt file: " + err.Error()}
}
task.Task = wrapTaskWithAgentPrompt(prompt, task.Task)
task.Task = WrapTaskWithAgentPrompt(prompt, task.Task)
}
if task.UseStdin || shouldUseStdin(task.Task, false) {
if task.UseStdin || ShouldUseStdin(task.Task, false) {
task.UseStdin = true
}
@@ -263,12 +344,10 @@ func defaultRunCodexTaskFn(task TaskSpec, timeout int) TaskResult {
if parentCtx == nil {
parentCtx = context.Background()
}
return runCodexTaskWithContext(parentCtx, task, backend, nil, false, true, timeout)
return RunCodexTaskWithContext(parentCtx, task, backend, "", nil, nil, false, true, timeout)
}
var runCodexTaskFn = defaultRunCodexTaskFn
func topologicalSort(tasks []TaskSpec) ([][]TaskSpec, error) {
func TopologicalSort(tasks []TaskSpec) ([][]TaskSpec, error) {
idToTask := make(map[string]TaskSpec, len(tasks))
indegree := make(map[string]int, len(tasks))
adj := make(map[string][]string, len(tasks))
@@ -334,12 +413,16 @@ func topologicalSort(tasks []TaskSpec) ([][]TaskSpec, error) {
return layers, nil
}
func executeConcurrent(layers [][]TaskSpec, timeout int) []TaskResult {
maxWorkers := resolveMaxParallelWorkers()
return executeConcurrentWithContext(context.Background(), layers, timeout, maxWorkers)
func ExecuteConcurrent(layers [][]TaskSpec, timeout int, runTask func(TaskSpec, int) TaskResult) []TaskResult {
maxWorkers := config.ResolveMaxParallelWorkers()
return ExecuteConcurrentWithContext(context.Background(), layers, timeout, maxWorkers, runTask)
}
func executeConcurrentWithContext(parentCtx context.Context, layers [][]TaskSpec, timeout int, maxWorkers int) []TaskResult {
func ExecuteConcurrentWithContext(parentCtx context.Context, layers [][]TaskSpec, timeout int, maxWorkers int, runTask func(TaskSpec, int) TaskResult) []TaskResult {
if runTask == nil {
runTask = DefaultRunCodexTaskFn
}
totalTasks := 0
for _, layer := range layers {
totalTasks += len(layer)
@@ -470,7 +553,7 @@ func executeConcurrentWithContext(parentCtx context.Context, layers [][]TaskSpec
printTaskStart(ts.ID, taskLogPath, handle.shared)
res := runCodexTaskFn(ts, timeout)
res := runTask(ts, timeout)
if taskLogPath != "" {
if res.LogPath == "" || (handle.shared && handle.logger != nil && res.LogPath == handle.logger.Path()) {
res.LogPath = taskLogPath
@@ -535,14 +618,14 @@ func getStatusSymbols() (success, warning, failed string) {
return "✓", "⚠️", "✗"
}
func generateFinalOutput(results []TaskResult) string {
return generateFinalOutputWithMode(results, true) // default to summary mode
func GenerateFinalOutput(results []TaskResult) string {
return GenerateFinalOutputWithMode(results, true) // default to summary mode
}
// generateFinalOutputWithMode generates output based on mode
// summaryOnly=true: structured report - every token has value
// summaryOnly=false: full output with complete messages (legacy behavior)
func generateFinalOutputWithMode(results []TaskResult, summaryOnly bool) string {
func GenerateFinalOutputWithMode(results []TaskResult, summaryOnly bool) string {
var sb strings.Builder
successSymbol, warningSymbol, failedSymbol := getStatusSymbols()
@@ -756,7 +839,7 @@ func buildCodexArgs(cfg *Config, targetArg string) []string {
args := []string{"e"}
// Default to bypass sandbox unless CODEX_BYPASS_SANDBOX=false
if cfg.Yolo || envFlagDefaultTrue("CODEX_BYPASS_SANDBOX") {
if cfg.Yolo || config.EnvFlagDefaultTrue("CODEX_BYPASS_SANDBOX") {
logWarn("YOLO mode or CODEX_BYPASS_SANDBOX enabled: running without approval/sandbox protection")
args = append(args, "--dangerously-bypass-approvals-and-sandbox")
}
@@ -787,25 +870,20 @@ func buildCodexArgs(cfg *Config, targetArg string) []string {
)
}
func runCodexTask(taskSpec TaskSpec, silent bool, timeoutSec int) TaskResult {
return runCodexTaskWithContext(context.Background(), taskSpec, nil, nil, false, silent, timeoutSec)
}
func runCodexProcess(parentCtx context.Context, codexArgs []string, taskText string, useStdin bool, timeoutSec int) (message, threadID string, exitCode int) {
res := runCodexTaskWithContext(parentCtx, TaskSpec{Task: taskText, WorkDir: defaultWorkdir, Mode: "new", UseStdin: useStdin}, nil, codexArgs, true, false, timeoutSec)
return res.Message, res.SessionID, res.ExitCode
}
func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backend Backend, customArgs []string, useCustomArgs bool, silent bool, timeoutSec int) TaskResult {
func RunCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backend Backend, defaultCommandName string, defaultArgsBuilder func(*Config, string) []string, customArgs []string, useCustomArgs bool, silent bool, timeoutSec int) TaskResult {
taskCtx := taskSpec.Context
if parentCtx == nil {
parentCtx = taskSpec.Context
parentCtx = taskCtx
}
if parentCtx == nil {
parentCtx = context.Background()
}
result := TaskResult{TaskID: taskSpec.ID}
injectedLogger := taskLoggerFromContext(parentCtx)
injectedLogger := taskLoggerFromContext(taskCtx)
if injectedLogger == nil {
injectedLogger = taskLoggerFromContext(parentCtx)
}
logger := injectedLogger
cfg := &Config{
@@ -819,8 +897,14 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
Backend: defaultBackendName,
}
commandName := codexCommand
argsBuilder := buildCodexArgsFn
commandName := strings.TrimSpace(defaultCommandName)
if commandName == "" {
commandName = defaultBackendName
}
argsBuilder := defaultArgsBuilder
if argsBuilder == nil {
argsBuilder = buildCodexArgs
}
if backend != nil {
commandName = backend.Command()
argsBuilder = backend.BuildArgs
@@ -844,19 +928,23 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
return result
}
var claudeEnv map[string]string
var fileEnv map[string]string
if cfg.Backend == "claude" {
settings := loadMinimalClaudeSettings()
claudeEnv = settings.Env
fileEnv = settings.Env
if cfg.Mode != "resume" && strings.TrimSpace(cfg.Model) == "" && settings.Model != "" {
cfg.Model = settings.Model
}
}
// Load gemini env from ~/.gemini/.env if exists
var geminiEnv map[string]string
if cfg.Backend == "gemini" {
geminiEnv = loadGeminiEnv()
fileEnv = loadGeminiEnv()
if cfg.Mode != "resume" && strings.TrimSpace(cfg.Model) == "" {
if model := fileEnv["GEMINI_MODEL"]; model != "" {
cfg.Model = model
}
}
}
useStdin := taskSpec.UseStdin
@@ -958,11 +1046,34 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
cmd := newCommandRunner(ctx, commandName, codexArgs...)
if cfg.Backend == "claude" && len(claudeEnv) > 0 {
cmd.SetEnv(claudeEnv)
if len(fileEnv) > 0 {
cmd.SetEnv(fileEnv)
}
if cfg.Backend == "gemini" && len(geminiEnv) > 0 {
cmd.SetEnv(geminiEnv)
envBackend := backend
if envBackend == nil && cfg.Backend != "" {
if b, err := selectBackendFn(cfg.Backend); err == nil {
envBackend = b
}
}
if envBackend != nil {
baseURL, apiKey := config.ResolveBackendConfig(cfg.Backend)
if agentName := strings.TrimSpace(taskSpec.Agent); agentName != "" {
agentBackend, _, _, _, agentBaseURL, agentAPIKey, _ := config.ResolveAgentConfig(agentName)
if strings.EqualFold(strings.TrimSpace(agentBackend), strings.TrimSpace(cfg.Backend)) {
baseURL, apiKey = agentBaseURL, agentAPIKey
}
}
if injected := envBackend.Env(baseURL, apiKey); len(injected) > 0 {
cmd.SetEnv(injected)
// Log injected env vars with masked API keys (to file and stderr)
for k, v := range injected {
msg := fmt.Sprintf("Env: %s=%s", k, maskSensitiveValue(k, v))
logInfoFn(msg)
fmt.Fprintln(os.Stderr, " "+msg)
}
}
}
// For backends that don't support -C flag (claude, gemini), set working directory via cmd.Dir
@@ -1202,11 +1313,9 @@ waitLoop:
case parsed = <-parseCh:
closeWithReason(stdout, stdoutCloseReasonWait)
case <-messageSeen:
messageSeenObserved = true
closeWithReason(stdout, stdoutCloseReasonWait)
parsed = <-parseCh
case <-completeSeen:
completeSeenObserved = true
closeWithReason(stdout, stdoutCloseReasonWait)
parsed = <-parseCh
case <-drainTimer.C:
@@ -1276,44 +1385,13 @@ waitLoop:
return result
}
func forwardSignals(ctx context.Context, cmd commandRunner, logErrorFn func(string)) {
notify := signalNotifyFn
stop := signalStopFn
if notify == nil {
notify = signal.Notify
}
if stop == nil {
stop = signal.Stop
}
sigCh := make(chan os.Signal, 1)
notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
go func() {
defer stop(sigCh)
select {
case sig := <-sigCh:
logErrorFn(fmt.Sprintf("Received signal: %v", sig))
if proc := cmd.Process(); proc != nil {
_ = sendTermSignal(proc)
time.AfterFunc(time.Duration(forceKillDelay.Load())*time.Second, func() {
if p := cmd.Process(); p != nil {
_ = p.Kill()
}
})
}
case <-ctx.Done():
}
}()
}
func cancelReason(commandName string, ctx context.Context) string {
if ctx == nil {
return "Context cancelled"
}
if commandName == "" {
commandName = codexCommand
commandName = defaultBackendName
}
if errors.Is(ctx.Err(), context.DeadlineExceeded) {
@@ -1378,20 +1456,18 @@ func terminateCommand(cmd commandRunner) *forceKillTimer {
return &forceKillTimer{timer: timer, done: done}
}
func terminateProcess(cmd commandRunner) *time.Timer {
if cmd == nil {
return nil
}
proc := cmd.Process()
if proc == nil {
return nil
}
_ = sendTermSignal(proc)
return time.AfterFunc(time.Duration(forceKillDelay.Load())*time.Second, func() {
if p := cmd.Process(); p != nil {
_ = p.Kill()
// maskSensitiveValue masks sensitive values like API keys for logging.
// Values containing "key", "token", or "secret" (case-insensitive) are masked.
// For values longer than 8 chars: shows first 4 + **** + last 4.
// For shorter values: shows only ****.
func maskSensitiveValue(key, value string) string {
keyLower := strings.ToLower(key)
if strings.Contains(keyLower, "key") || strings.Contains(keyLower, "token") || strings.Contains(keyLower, "secret") {
if len(value) > 8 {
return value[:4] + "****" + value[len(value)-4:]
} else if len(value) > 0 {
return "****"
}
})
}
return value
}
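
Concretely, the masking rule above produces output like the following. The mask function in this snippet simply restates maskSensitiveValue so the example is self-contained; it is an illustration, not additional repository code.

package main

import (
	"fmt"
	"strings"
)

// mask restates the maskSensitiveValue rule: keys containing "key",
// "token", or "secret" get first-4 + "****" + last-4 (or just "****"
// for short values); everything else passes through unchanged.
func mask(key, value string) string {
	k := strings.ToLower(key)
	if strings.Contains(k, "key") || strings.Contains(k, "token") || strings.Contains(k, "secret") {
		if len(value) > 8 {
			return value[:4] + "****" + value[len(value)-4:]
		} else if len(value) > 0 {
			return "****"
		}
	}
	return value
}

func main() {
	fmt.Println(mask("ANTHROPIC_AUTH_TOKEN", "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.test")) // eyJh****test
	fmt.Println(mask("OPENAI_API_KEY", "short"))                                            // ****
	fmt.Println(mask("ANTHROPIC_BASE_URL", "https://api.minimaxi.com/anthropic"))           // unchanged
}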


@@ -1,4 +1,4 @@
package main
package executor
import (
"bytes"
@@ -45,7 +45,7 @@ func (f *filteringWriter) Write(p []byte) (n int, err error) {
break
}
if !f.shouldFilter(line) {
f.w.Write([]byte(line))
_, _ = f.w.Write([]byte(line))
}
}
return len(p), nil
@@ -65,7 +65,7 @@ func (f *filteringWriter) Flush() {
if f.buf.Len() > 0 {
remaining := f.buf.String()
if !f.shouldFilter(remaining) {
f.w.Write([]byte(remaining))
_, _ = f.w.Write([]byte(remaining))
}
f.buf.Reset()
}


@@ -1,4 +1,4 @@
package main
package executor
import (
"bytes"
@@ -48,7 +48,7 @@ func TestFilteringWriter(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
var buf bytes.Buffer
fw := newFilteringWriter(&buf, tt.patterns)
fw.Write([]byte(tt.input))
_, _ = fw.Write([]byte(tt.input))
fw.Flush()
if got := buf.String(); got != tt.want {
@@ -63,8 +63,8 @@ func TestFilteringWriterPartialLines(t *testing.T) {
fw := newFilteringWriter(&buf, geminiNoisePatterns)
// Write partial line
fw.Write([]byte("Hello "))
fw.Write([]byte("World\n"))
_, _ = fw.Write([]byte("Hello "))
_, _ = fw.Write([]byte("World\n"))
fw.Flush()
if got := buf.String(); got != "Hello World\n" {


@@ -0,0 +1,124 @@
package executor
import "bytes"
type logWriter struct {
prefix string
maxLen int
buf bytes.Buffer
dropped bool
}
func newLogWriter(prefix string, maxLen int) *logWriter {
if maxLen <= 0 {
maxLen = codexLogLineLimit
}
return &logWriter{prefix: prefix, maxLen: maxLen}
}
func (lw *logWriter) Write(p []byte) (int, error) {
if lw == nil {
return len(p), nil
}
total := len(p)
for len(p) > 0 {
if idx := bytes.IndexByte(p, '\n'); idx >= 0 {
lw.writeLimited(p[:idx])
lw.logLine(true)
p = p[idx+1:]
continue
}
lw.writeLimited(p)
break
}
return total, nil
}
func (lw *logWriter) Flush() {
if lw == nil || lw.buf.Len() == 0 {
return
}
lw.logLine(false)
}
func (lw *logWriter) logLine(force bool) {
if lw == nil {
return
}
line := lw.buf.String()
dropped := lw.dropped
lw.dropped = false
lw.buf.Reset()
if line == "" && !force {
return
}
if lw.maxLen > 0 {
if dropped {
if lw.maxLen > 3 {
line = line[:min(len(line), lw.maxLen-3)] + "..."
} else {
line = line[:min(len(line), lw.maxLen)]
}
} else if len(line) > lw.maxLen {
cutoff := lw.maxLen
if cutoff > 3 {
line = line[:cutoff-3] + "..."
} else {
line = line[:cutoff]
}
}
}
logInfo(lw.prefix + line)
}
func (lw *logWriter) writeLimited(p []byte) {
if lw == nil || len(p) == 0 {
return
}
if lw.maxLen <= 0 {
lw.buf.Write(p)
return
}
remaining := lw.maxLen - lw.buf.Len()
if remaining <= 0 {
lw.dropped = true
return
}
if len(p) <= remaining {
lw.buf.Write(p)
return
}
lw.buf.Write(p[:remaining])
lw.dropped = true
}
type tailBuffer struct {
limit int
data []byte
}
func (b *tailBuffer) Write(p []byte) (int, error) {
if b.limit <= 0 {
return len(p), nil
}
if len(p) >= b.limit {
b.data = append(b.data[:0], p[len(p)-b.limit:]...)
return len(p), nil
}
total := len(b.data) + len(p)
if total <= b.limit {
b.data = append(b.data, p...)
return len(p), nil
}
overflow := total - b.limit
b.data = append(b.data[overflow:], p...)
return len(p), nil
}
func (b *tailBuffer) String() string {
return string(b.data)
}
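
tailBuffer above retains only the most recent limit bytes of what is written to it (the stderrCaptureLimit constant earlier in this diff suggests it bounds stderr capture). A standalone restatement of that idea, to make the behavior concrete:

package main

import "fmt"

// lastN restates the tailBuffer idea: keep only the final limit bytes
// of everything written, discarding older data.
type lastN struct {
	limit int
	data  []byte
}

func (t *lastN) Write(p []byte) (int, error) {
	t.data = append(t.data, p...)
	if len(t.data) > t.limit {
		t.data = t.data[len(t.data)-t.limit:]
	}
	return len(p), nil
}

func main() {
	t := &lastN{limit: 8}
	_, _ = t.Write([]byte("first chunk\n"))
	_, _ = t.Write([]byte("tail data"))
	fmt.Printf("%q\n", string(t.data)) // only the last 8 bytes survive: "ail data"
}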


@@ -1,4 +1,4 @@
package main
package executor
import (
"os"
@@ -7,14 +7,12 @@ import (
)
func TestLogWriterWriteLimitsBuffer(t *testing.T) {
defer resetTestHooks()
logger, err := NewLogger()
if err != nil {
t.Fatalf("NewLogger error: %v", err)
}
setLogger(logger)
defer closeLogger()
t.Cleanup(func() { _ = closeLogger() })
lw := newLogWriter("P:", 10)
_, _ = lw.Write([]byte(strings.Repeat("a", 100)))


@@ -0,0 +1,135 @@
package executor
import (
"bytes"
"fmt"
"strings"
config "codeagent-wrapper/internal/config"
)
func ParseParallelConfig(data []byte) (*ParallelConfig, error) {
trimmed := bytes.TrimSpace(data)
if len(trimmed) == 0 {
return nil, fmt.Errorf("parallel config is empty")
}
tasks := strings.Split(string(trimmed), "---TASK---")
var cfg ParallelConfig
seen := make(map[string]struct{})
taskIndex := 0
for _, taskBlock := range tasks {
taskBlock = strings.TrimSpace(taskBlock)
if taskBlock == "" {
continue
}
taskIndex++
parts := strings.SplitN(taskBlock, "---CONTENT---", 2)
if len(parts) != 2 {
return nil, fmt.Errorf("task block #%d missing ---CONTENT--- separator", taskIndex)
}
meta := strings.TrimSpace(parts[0])
content := strings.TrimSpace(parts[1])
task := TaskSpec{WorkDir: defaultWorkdir}
agentSpecified := false
for _, line := range strings.Split(meta, "\n") {
line = strings.TrimSpace(line)
if line == "" {
continue
}
kv := strings.SplitN(line, ":", 2)
if len(kv) != 2 {
continue
}
key := strings.TrimSpace(kv[0])
value := strings.TrimSpace(kv[1])
switch key {
case "id":
task.ID = value
case "workdir":
// Validate workdir: "-" is not a valid directory
if value == "-" {
return nil, fmt.Errorf("task block #%d has invalid workdir: '-' is not a valid directory path", taskIndex)
}
task.WorkDir = value
case "session_id":
task.SessionID = value
task.Mode = "resume"
case "backend":
task.Backend = value
case "model":
task.Model = value
case "reasoning_effort":
task.ReasoningEffort = value
case "agent":
agentSpecified = true
task.Agent = value
case "skip_permissions", "skip-permissions":
if value == "" {
task.SkipPermissions = true
continue
}
task.SkipPermissions = config.ParseBoolFlag(value, false)
case "dependencies":
for _, dep := range strings.Split(value, ",") {
dep = strings.TrimSpace(dep)
if dep != "" {
task.Dependencies = append(task.Dependencies, dep)
}
}
}
}
if task.Mode == "" {
task.Mode = "new"
}
if agentSpecified {
if strings.TrimSpace(task.Agent) == "" {
return nil, fmt.Errorf("task block #%d has empty agent field", taskIndex)
}
if err := config.ValidateAgentName(task.Agent); err != nil {
return nil, fmt.Errorf("task block #%d invalid agent name: %w", taskIndex, err)
}
backend, model, promptFile, reasoning, _, _, _ := config.ResolveAgentConfig(task.Agent)
if task.Backend == "" {
task.Backend = backend
}
if task.Model == "" {
task.Model = model
}
if task.ReasoningEffort == "" {
task.ReasoningEffort = reasoning
}
task.PromptFile = promptFile
}
if task.ID == "" {
return nil, fmt.Errorf("task block #%d missing id field", taskIndex)
}
if content == "" {
return nil, fmt.Errorf("task block #%d (%q) missing content", taskIndex, task.ID)
}
if task.Mode == "resume" && strings.TrimSpace(task.SessionID) == "" {
return nil, fmt.Errorf("task block #%d (%q) has empty session_id", taskIndex, task.ID)
}
if _, exists := seen[task.ID]; exists {
return nil, fmt.Errorf("task block #%d has duplicate id: %s", taskIndex, task.ID)
}
task.Task = content
cfg.Tasks = append(cfg.Tasks, task)
seen[task.ID] = struct{}{}
}
if len(cfg.Tasks) == 0 {
return nil, fmt.Errorf("no tasks found")
}
return &cfg, nil
}
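
For reference, an input that this parser accepts looks like the following: task blocks separated by ---TASK---, metadata lines above ---CONTENT---, and the task prompt below it. The ids, paths, and prompt text here are illustrative.

id: build
workdir: ./server
backend: claude
model: MiniMax-M2.1
---CONTENT---
Implement the /health endpoint and add unit tests.
---TASK---
id: review
agent: explore
dependencies: build
skip_permissions: true
---CONTENT---
Review the change produced by task "build" and report any issues.

Per the validation above, a missing id, empty content, a duplicate id, resume mode without a session_id, or a workdir of "-" is rejected with the corresponding error.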

Some files were not shown because too many files have changed in this diff.