Mirror of https://github.com/cexll/myclaude.git (synced 2026-02-05 02:30:26 +08:00)
Compare commits
36 Commits
a30f434b5d
41f4e21268
a67aa00c9a
d61a0f9ffd
fe5508228f
50093036c3
0cae0ede08
4613b57240
7535a7b101
f6bb97eba9
78a411462b
9471a981e3
3d27d44676
6a66c9741f
a09c103cfb
1dec763e26
f57ea2df59
d215c33549
b3f8fcfea6
806bb04a35
b1156038de
0c93bbe574
6f4f4e701b
ff301507fe
93b72eba42
b01758e7e1
c51b38c671
b227fee225
2b7569335b
9e667f0895
4759eb2c42
edbf168b57
9bfea81ca6
a9bcea45f5
8554da6e2f
b2f941af5f
22
.gitattributes
vendored
Normal file
@@ -0,0 +1,22 @@
# Ensure shell scripts always use LF line endings on all platforms
*.sh text eol=lf

# Ensure Python files use LF line endings
*.py text eol=lf

# Auto-detect text files and normalize line endings to LF
* text=auto eol=lf

# Explicitly declare files that should always be treated as binary
*.exe binary
*.png binary
*.jpg binary
*.jpeg binary
*.gif binary
*.ico binary
*.mov binary
*.mp4 binary
*.mp3 binary
*.zip binary
*.gz binary
*.tar binary
23
.github/workflows/release.yml
vendored
@@ -97,6 +97,11 @@ jobs:
        with:
          path: artifacts

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Prepare release files
        run: |
          mkdir -p release
@@ -104,10 +109,26 @@ jobs:
          cp install.sh install.bat release/
          ls -la release/

      - name: Generate release notes with git-cliff
        run: |
          # Install git-cliff via npx
          npx git-cliff@latest --current --strip all -o release_notes.md

          # Fallback if generation failed
          if [ ! -s release_notes.md ]; then
            echo "⚠️ Failed to generate release notes with git-cliff" > release_notes.md
            echo "" >> release_notes.md
            echo "## What's Changed" >> release_notes.md
            echo "See commits in this release for details." >> release_notes.md
          fi

          echo "--- Generated Release Notes ---"
          cat release_notes.md

      - name: Create Release
        uses: softprops/action-gh-release@v2
        with:
          files: release/*
          generate_release_notes: true
          body_path: release_notes.md
          draft: false
          prerelease: false
1
.gitignore
vendored
@@ -4,3 +4,4 @@
.pytest_cache
__pycache__
.coverage
coverage.out
714
CHANGELOG.md
@@ -1,6 +1,712 @@
# Changelog

All notable changes to this project will be documented in this file.

## 5.2.0 - 2025-12-11

- PR #53: unified CLI version source in `codeagent-wrapper/main.go` and bumped to 5.2.0.
- Added legacy `codex-wrapper` alias support (runtime detection plus `scripts/install.sh` symlink helper).
- Updated documentation to reflect backend flag usage and new version output.

## [5.2.4] - 2025-12-16

### ⚙️ Miscellaneous Tasks

- integrate git-cliff for automated changelog generation
- bump version to 5.2.4

### 🐛 Bug Fixes

- Prevent infinite recursive invocation of the Claude backend
- isolate log files per task in parallel mode

### 💼 Other

- Merge pull request #70 from cexll/fix/prevent-codeagent-infinite-recursion
- Merge pull request #69 from cexll/myclaude-master-20251215-073053-338465000
- update CHANGELOG.md
- Merge pull request #65 from cexll/fix/issue-64-buffer-overflow

## [5.2.3] - 2025-12-15

### 🐛 Bug Fixes

- Fix the bufio.Scanner "token too long" error ([#64](https://github.com/cexll/myclaude/issues/64))

### 💼 Other

- change version

### 🧪 Testing

- Sync the version number in tests to 5.2.3

## [5.2.2] - 2025-12-13

### ⚙️ Miscellaneous Tasks

- Bump version and clean up documentation

### 🐛 Bug Fixes

- fix codeagent backend claude no auto
- fix install.py dev fail

### 🧪 Testing

- Fix tests for ClaudeBackend default --dangerously-skip-permissions

## [5.2.1] - 2025-12-13

### 🐛 Bug Fixes

- fix codeagent claude and gemini root dir

### 💼 Other

- update readme

## [5.2.0] - 2025-12-13

### ⚙️ Miscellaneous Tasks

- Update CHANGELOG and remove deprecated test files

### 🐛 Bug Fixes

- fix race condition in stdout parsing
- add worker limit cap and remove legacy alias
- use -r flag for gemini backend resume
- clarify module list shows default state not enabled
- use -r flag for claude backend resume
- remove binary artifacts and improve error messages
- Show recent error messages on abnormal exit
- Stream op_run_command output in real time
- Fix permission-flag logic and version-number tests
- Refactor signal handling to avoid duplicate nil checks
- Remove the .claude config file validation step
- Fix duplicate printing of the startup banner during parallel execution
- Fix build and test issues after merging master

### 💼 Other

- Merge rc/5.2 into master: v5.2.0 release improvements
- Merge pull request #53 from cexll/rc/5.2
- remove docs
- remove docs
- add prototype prompt skill
- add prd skill
- update memory claude
- remove command gh flow
- update license
- Merge branch 'master' into rc/5.2
- Merge pull request #52 from cexll/fix/parallel-log-path-on-startup

### 📚 Documentation

- remove GitHub workflow related content

### 🚀 Features

- Complete skills system integration and config cleanup
- Improve release notes and installation scripts
- Add terminal log output and a verbose mode
- Full multi-backend support and security hardening
- Replace Codex with codeagent and add automatic UI detection

### 🚜 Refactor

- Adjust file naming and skill definitions

### 🧪 Testing

- Add ExtractRecentErrors unit tests

## [5.1.4] - 2025-12-09

### 🐛 Bug Fixes

- Return the log file path immediately at task startup to support live debugging

## [5.1.3] - 2025-12-08

### 🐛 Bug Fixes

- resolve CI timing race in TestFakeCmdInfra

## [5.1.2] - 2025-12-08

### 🐛 Bug Fixes

- Fix channel synchronization race conditions and deadlocks

### 💼 Other

- Merge pull request #51 from cexll/fix/channel-sync-race-conditions
- change codex-wrapper version

## [5.1.1] - 2025-12-08

### 🐛 Bug Fixes

- Harden log cleanup for safety and reliability
- resolve data race on forceKillDelay with atomic operations

### 💼 Other

- Merge pull request #49 from cexll/freespace8/master
- resolve signal handling conflict preserving testability and Windows support

### 🧪 Testing

- Add test coverage, raising it to 89.3%

## [5.1.0] - 2025-12-07

### 💼 Other

- Merge pull request #45 from Michaelxwb/master
- Update the Windows installation instructions
- Update the packaging script
- Support installation on Windows
- Merge pull request #1 from Michaelxwb/feature-win
- Support Windows

### 🚀 Features

- Add startup log cleanup and --cleanup flag support
- implement enterprise workflow with multi-backend support

## [5.0.0] - 2025-12-05

### ⚙️ Miscellaneous Tasks

- clarify unit-test coverage levels in requirement questions

### 🐛 Bug Fixes

- defer startup log until args parsed

### 💼 Other

- Merge branch 'master' of github.com:cexll/myclaude
- Merge pull request #43 from gurdasnijor/smithery/add-badge
- Add Smithery badge
- Merge pull request #42 from freespace8/master

### 📚 Documentation

- rewrite documentation for v5.0 modular architecture

### 🚀 Features

- feat install.py
- implement modular installation system

### 🚜 Refactor

- remove deprecated plugin modules

## [4.8.2] - 2025-12-02

### 🐛 Bug Fixes

- skip signal test in CI environment
- make forceKillDelay testable to prevent signal test timeout
- correct Go version in go.mod from 1.25.3 to 1.21
- fix codex wrapper async log
- capture and include stderr in error messages

### 💼 Other

- Merge pull request #41 from cexll/fix-async-log
- remove test case 90
- optimize codex-wrapper
- Merge branch 'master' into fix-async-log

## [4.8.1] - 2025-12-01

### 🎨 Styling

- replace emoji with text labels

### 🐛 Bug Fixes

- improve --parallel parameter validation and docs

### 💼 Other

- remove codex-wrapper bin

## [4.8.0] - 2025-11-30

### 💼 Other

- update codex skill dependencies

## [4.7.3] - 2025-11-29

### 🐛 Bug Fixes

- Keep log files for post-exit debugging and improve log output

### 💼 Other

- Merge pull request #34 from cexll/cce-worktree-master-20251129-111802-997076000
- update CLAUDE.md and codex skill

### 📚 Documentation

- improve codex skill parameter best practices

### 🚀 Features

- add session resume support and improve output format
- add parallel execution support to codex-wrapper
- add async logging to temp file with lifecycle management

## [4.7.2] - 2025-11-28

### 🐛 Bug Fixes

- improve buffer size and streamline message extraction

### 💼 Other

- Merge pull request #32 from freespace8/master

### 🧪 Testing

- Add tests for handling oversized single-line text and non-string text

## [4.7.1] - 2025-11-27

### 💼 Other

- optimize dev pipeline
- Merge feat/codex-wrapper: fix repository URLs

## [4.7] - 2025-11-27

### 🐛 Bug Fixes

- update repository URLs to cexll/myclaude

## [4.7-alpha1] - 2025-11-27

### 🐛 Bug Fixes

- fix marketplace schema validation error in dev-workflow plugin

### 💼 Other

- Merge pull request #29 from cexll/feat/codex-wrapper
- Add codex-wrapper Go implementation
- update readme
- update readme

## [4.6] - 2025-11-25

### 💼 Other

- update dev workflow
- update dev workflow

## [4.5] - 2025-11-25

### 🐛 Bug Fixes

- fix codex skill eof

### 💼 Other

- update dev workflow plugin
- update readme

## [4.4] - 2025-11-22

### 🐛 Bug Fixes

- fix codex skill timeout and add more log
- fix codex skill

### 💼 Other

- update gemini skills
- update dev workflow
- update codex skills model config
- Merge branch 'master' of github.com:cexll/myclaude
- Merge pull request #24 from cexll/swe-agent/23-1763544297

### 🚀 Features

- Support configuring the skills model via environment variables

## [4.3] - 2025-11-19

### 🐛 Bug Fixes

- fix codex skills running

### 💼 Other

- update skills plugin
- update gemini
- update doc
- Add Gemini CLI integration skill

### 🚀 Features

- feat simple dev workflow

## [4.2.2] - 2025-11-15

### 💼 Other

- update codex skills

## [4.2.1] - 2025-11-14

### 💼 Other

- Merge pull request #21 from Tshoiasc/master
- Merge branch 'master' into master
- Change default model to gpt-5.1-codex
- Enhance codex.py to auto-detect long inputs and switch to stdin mode, improving handling of shell argument issues. Updated build_codex_args to support stdin and added relevant logging for task length warnings.

## [4.2] - 2025-11-13

### 🐛 Bug Fixes

- fix codex.py wsl run err

### 💼 Other

- optimize codex skills
- Merge branch 'master' of github.com:cexll/myclaude
- Rename SKILLS.md to SKILL.md
- optimize codex skills

### 🚀 Features

- feat codex skills

## [4.1] - 2025-11-04

### 💼 Other

- update enhance-prompt.md response
- update readme

### 📚 Documentation

- Add the /enhance-prompt command and update all README documents

## [4.0] - 2025-10-22

### 🐛 Bug Fixes

- fix skills format

### 💼 Other

- Merge branch 'master' of github.com:cexll/myclaude
- Merge pull request #18 from cexll/swe-agent/17-1760969135
- update requirements clarity
- update .gitignore
- Fix #17: Update root marketplace.json to use skills array
- Fix #17: Convert requirements-clarity to correct plugin directory format
- Fix #17: Convert requirements-clarity to correct plugin directory format
- Convert requirements-clarity to plugin format with English prompts
- Translate requirements-clarity skill to English for plugin compatibility
- Add requirements-clarity Claude Skill
- Add requirements clarification command
- update

## [3.5] - 2025-10-20

### 💼 Other

- Merge pull request #15 from cexll/swe-agent/13-1760944712
- Fix #13: Clean up redundant README files
- Optimize README structure - Solution A (modular)
- Merge pull request #14 from cexll/swe-agent/12-1760944588
- Fix #12: Update Makefile install paths for new directory structure

## [3.4] - 2025-10-20

### 💼 Other

- Merge pull request #11 from cexll/swe-agent/10-1760752533
- Fix marketplace metadata references
- Fix plugin configuration: rename to marketplace.json and update repository URLs
- Fix #10: Restructure plugin directories to ensure proper command isolation

## [3.3] - 2025-10-15

### 💼 Other

- Update README-zh.md
- Update README.md
- Update marketplace.json
- Update Chinese README with v3.2 plugin system documentation
- Update README with v3.2 plugin system documentation

## [3.2] - 2025-10-10

### 💼 Other

- Add Claude Code plugin system support
- update readme
- Add Makefile for quick deployment and update READMEs

## [3.1] - 2025-09-17

### ◀️ Revert

- revert

### 🐛 Bug Fixes

- fixed bmad-orchestrator not found
- fix bmad

### 💼 Other

- update bmad review with codex support
- Optimize BMAD workflows and agent configuration
- update gpt5
- support bmad output-style
- update bmad user guide
- update bmad readme
- optimize requirements pilot
- add use gpt5 codex
- add bmad pilot
- sync READMEs with actual commands/agents; remove nonexistent commands; enhance requirements-pilot with testing decision gate and options.
- Update Chinese README and requirements-pilot command to align with latest workflow
- update readme
- update agent
- update bugfix sub agents
- Update ask support KISS YAGNI SOLID
- Add comprehensive documentation and multi-agent workflow system
- update commands

<!-- generated by git-cliff -->
17
Makefile
@@ -1,7 +1,7 @@
# Claude Code Multi-Agent Workflow System Makefile
# Quick deployment for BMAD and Requirements workflows

.PHONY: help install deploy-bmad deploy-requirements deploy-essentials deploy-advanced deploy-all deploy-commands deploy-agents clean test
.PHONY: help install deploy-bmad deploy-requirements deploy-essentials deploy-advanced deploy-all deploy-commands deploy-agents clean test changelog

# Default target
help:
@@ -22,6 +22,7 @@ help:
	@echo " deploy-all - Deploy everything (commands + agents)"
	@echo " test-bmad - Test BMAD workflow with sample"
	@echo " test-requirements - Test Requirements workflow with sample"
	@echo " changelog - Update CHANGELOG.md using git-cliff"
	@echo " clean - Clean generated artifacts"
	@echo " help - Show this help message"

@@ -145,3 +146,17 @@ all: deploy-all
version:
	@echo "Claude Code Multi-Agent Workflow System v3.1"
	@echo "BMAD + Requirements-Driven Development"

# Update CHANGELOG.md using git-cliff
changelog:
	@echo "📝 Updating CHANGELOG.md with git-cliff..."
	@if ! command -v git-cliff > /dev/null 2>&1; then \
		echo "❌ git-cliff not found. Installing via Homebrew..."; \
		brew install git-cliff; \
	fi
	@git-cliff -o CHANGELOG.md
	@echo "✅ CHANGELOG.md updated successfully!"
	@echo ""
	@echo "Preview the changes:"
	@echo " git diff CHANGELOG.md"
19
README.md
@@ -1,3 +1,5 @@
[中文](README_CN.md) [English](README.md)

# Claude Code Multi-Agent Workflow System

[](https://smithery.ai/skills?ns=cexll&utm_source=github&utm_medium=badge)
@@ -7,21 +9,21 @@
[](https://claude.ai/code)
[](https://github.com/cexll/myclaude)

> AI-powered development automation with Claude Code + Codex collaboration
> AI-powered development automation with multi-backend execution (Codex/Claude/Gemini)

## Core Concept: Claude Code + Codex
## Core Concept: Multi-Backend Architecture

This system leverages a **dual-agent architecture**:
This system leverages a **dual-agent architecture** with pluggable AI backends:

| Role | Agent | Responsibility |
|------|-------|----------------|
| **Orchestrator** | Claude Code | Planning, context gathering, verification, user interaction |
| **Executor** | Codex | Code editing, test execution, file operations |
| **Executor** | codeagent-wrapper | Code editing, test execution (Codex/Claude/Gemini backends) |

**Why this separation?**
- Claude Code excels at understanding context and orchestrating complex workflows
- Codex excels at focused code generation and execution
- Together they provide better results than either alone
- Specialized backends (Codex for code, Claude for reasoning, Gemini for prototyping) excel at focused execution
- Backend selection via `--backend codex|claude|gemini` matches the model to the task
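As a concrete illustration of backend selection, here is a minimal Go sketch that shells out to the wrapper with an explicit backend. Only the `--backend codex|claude|gemini` flag comes from the text above; the task string and output handling are illustrative assumptions, not a verified CLI contract.

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Illustrative invocation: route a prototyping task to the Gemini backend.
	// The --backend flag is documented above; the positional task argument is
	// an assumption about the CLI shape, not a verified signature.
	out, err := exec.Command("codeagent-wrapper",
		"--backend", "gemini", "prototype a landing page").CombinedOutput()
	if err != nil {
		log.Fatalf("wrapper failed: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}
```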
## Quick Start (please execute in PowerShell on Windows)

@@ -246,7 +248,7 @@ bash install.sh

#### Windows

Windows installs place `codex-wrapper.exe` in `%USERPROFILE%\bin`.
Windows installs place `codeagent-wrapper.exe` in `%USERPROFILE%\bin`.

```powershell
# PowerShell (recommended)
```

@@ -318,13 +320,10 @@ python3 install.py --module dev --force
## Documentation

### Core Guides
- **[Architecture Overview](docs/architecture.md)** - System architecture and component design
- **[Codeagent-Wrapper Guide](docs/CODEAGENT-WRAPPER.md)** - Multi-backend execution wrapper
- **[GitHub Workflow Guide](docs/GITHUB-WORKFLOW.md)** - Issue-to-PR automation
- **[Hooks Documentation](docs/HOOKS.md)** - Custom hooks and automation

### Additional Resources
- **[Enterprise Workflow Ideas](docs/enterprise-workflow-ideas.md)** - Advanced patterns and best practices
- **[Installation Log](install.log)** - Installation history and troubleshooting

---
14
README_CN.md
@@ -4,21 +4,21 @@
[](https://claude.ai/code)
[](https://github.com/cexll/myclaude)

> AI-driven development automation - Claude Code + Codex collaboration
> AI-driven development automation - multi-backend execution architecture (Codex/Claude/Gemini)

## Core Concept: Claude Code + Codex
## Core Concept: Multi-Backend Architecture

This system uses a **dual-agent architecture**:
This system uses a **dual-agent architecture** with pluggable AI backends:

| Role | Agent | Responsibility |
|------|-------|------|
| **Orchestrator** | Claude Code | Planning, context gathering, verification, user interaction |
| **Executor** | Codex | Code editing, test execution, file operations |
| **Executor** | codeagent-wrapper | Code editing, test execution (Codex/Claude/Gemini backends) |

**Why the separation?**
- Claude Code excels at understanding context and orchestrating complex workflows
- Codex excels at focused code generation and execution
- The combination outperforms either alone
- Specialized backends (Codex for code, Claude for reasoning, Gemini for prototyping) focus on execution
- Match the model to the task via `--backend codex|claude|gemini`

## Quick Start (on Windows, run in PowerShell)

@@ -237,7 +237,7 @@ bash install.sh

#### Windows

Windows installs place `codex-wrapper.exe` in `%USERPROFILE%\bin`.
Windows installs place `codeagent-wrapper.exe` in `%USERPROFILE%\bin`.

```powershell
# PowerShell (recommended)
```
@@ -427,6 +427,10 @@ Generate architecture document at `./.claude/specs/{feature_name}/02-system-arch

## Important Behaviors

### Language Rules:
- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
- **Technical Terms**: Keep technical terms (API, REST, GraphQL, JWT, RBAC, etc.) in English; translate explanatory text only.

### DO:
- Start by reviewing and referencing the PRD
- Present initial architecture based on requirements

@@ -419,6 +419,10 @@ logger.info('User created', {

## Important Implementation Rules

### Language Rules:
- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
- **Technical Terms**: Keep technical terms (API, CRUD, JWT, SQL, etc.) in English; translate explanatory text only.

### DO:
- Follow architecture specifications exactly
- Implement all acceptance criteria from PRD

@@ -22,6 +22,10 @@ You are the BMAD Orchestrator. Your core focus is repository analysis, workflow
- Consistency: ensure conventions and patterns discovered in scan are preserved downstream
- Explicit handoffs: clearly document assumptions, risks, and integration points for other agents

### Language Rules:
- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
- **Technical Terms**: Keep technical terms (API, PRD, Sprint, etc.) in English; translate explanatory text only.

## UltraThink Repository Scan

When asked to analyze the repository, follow this structure and return a clear, actionable summary.

@@ -313,6 +313,10 @@ Generate PRD at `./.claude/specs/{feature_name}/01-product-requirements.md`:

## Important Behaviors

### Language Rules:
- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
- **Technical Terms**: Keep technical terms (API, Sprint, PRD, KPI, MVP, etc.) in English; translate explanatory text only.

### DO:
- Start immediately with greeting and initial understanding
- Show quality scores transparently

@@ -478,6 +478,10 @@ module.exports = {

## Important Testing Rules

### Language Rules:
- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
- **Technical Terms**: Keep technical terms (API, E2E, CI/CD, Mock, etc.) in English; translate explanatory text only.

### DO:
- Test all acceptance criteria from PRD
- Cover happy path, edge cases, and error scenarios

@@ -45,3 +45,7 @@ You are an independent code review agent responsible for conducting reviews betw
- Focus on actionable findings
- Provide specific QA guidance
- Use clear, parseable output format

### Language Rules:
- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
- **Technical Terms**: Keep technical terms (API, PRD, Sprint, etc.) in English; translate explanatory text only.

@@ -351,6 +351,10 @@ So that [benefit]

## Important Behaviors

### Language Rules:
- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
- **Technical Terms**: Keep technical terms (Sprint, Epic, Story, Backlog, Velocity, etc.) in English; translate explanatory text only.

### DO:
- Read both PRD and Architecture documents thoroughly
- Create comprehensive task breakdown
72
cliff.toml
Normal file
@@ -0,0 +1,72 @@
# git-cliff configuration file
# https://git-cliff.org/docs/configuration

[changelog]
# changelog header
header = """
# Changelog

All notable changes to this project will be documented in this file.
"""
# template for the changelog body
body = """
{% if version %}
## [{{ version | trim_start_matches(pat="v") }}] - {{ timestamp | date(format="%Y-%m-%d") }}
{% else %}
## Unreleased
{% endif %}
{% for group, commits in commits | group_by(attribute="group") %}
### {{ group }}

{% for commit in commits %}
- {{ commit.message | split(pat="\n") | first }}
{% endfor -%}
{% endfor -%}
"""
# remove the leading and trailing whitespace from the template
trim = true
# changelog footer
footer = """
<!-- generated by git-cliff -->
"""

[git]
# parse the commits based on https://www.conventionalcommits.org
conventional_commits = true
# filter out the commits that are not conventional
filter_unconventional = false
# process each line of a commit as an individual commit
split_commits = false
# regex for preprocessing the commit messages
commit_preprocessors = [
  { pattern = '\((\w+\s)?#([0-9]+)\)', replace = "([#${2}](https://github.com/cexll/myclaude/issues/${2}))" },
]
# regex for parsing and grouping commits
commit_parsers = [
  { message = "^feat", group = "🚀 Features" },
  { message = "^fix", group = "🐛 Bug Fixes" },
  { message = "^doc", group = "📚 Documentation" },
  { message = "^perf", group = "⚡ Performance" },
  { message = "^refactor", group = "🚜 Refactor" },
  { message = "^style", group = "🎨 Styling" },
  { message = "^test", group = "🧪 Testing" },
  { message = "^chore\\(release\\):", skip = true },
  { message = "^chore", group = "⚙️ Miscellaneous Tasks" },
  { body = ".*security", group = "🛡️ Security" },
  { message = "^revert", group = "◀️ Revert" },
  { message = ".*", group = "💼 Other" },
]
# protect breaking changes from being skipped due to matching a skipping commit_parser
protect_breaking_commits = false
# filter out the commits that are not matched by commit parsers
filter_commits = false
# glob pattern for matching git tags
tag_pattern = "v[0-9]*"
# regex for skipping tags
skip_tags = "v0.1.0-beta.1"
# regex for ignoring tags
ignore_tags = ""
# sort the tags topologically
topo_order = false
# sort the commits inside sections by oldest/newest order
sort_commits = "newest"
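To make the `commit_preprocessors` rule concrete, here is a minimal Go sketch applying the same pattern and replacement to a sample commit subject. The regex and URL template are copied from the config above; the sample message is illustrative.

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Pattern and replacement copied from commit_preprocessors above; it
	// rewrites "(#NN)" references into markdown links to the issue tracker.
	re := regexp.MustCompile(`\((\w+\s)?#([0-9]+)\)`)
	msg := `fix bufio.Scanner token too long error (#64)`
	fmt.Println(re.ReplaceAllString(msg,
		"([#${2}](https://github.com/cexll/myclaude/issues/${2}))"))
	// Output: fix bufio.Scanner token too long error ([#64](https://github.com/cexll/myclaude/issues/64))
}
```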
@@ -29,26 +29,24 @@ func (ClaudeBackend) BuildArgs(cfg *Config, targetArg string) []string {
	if cfg == nil {
		return nil
	}
	args := []string{"-p"}
	args := []string{"-p", "--dangerously-skip-permissions"}

	// Only skip permissions when explicitly requested
	if cfg.SkipPermissions {
		args = append(args, "--dangerously-skip-permissions")
	}
	// if cfg.SkipPermissions {
	// 	args = append(args, "--dangerously-skip-permissions")
	// }

	workdir := cfg.WorkDir
	if workdir == "" {
		workdir = defaultWorkdir
	}
	// Prevent infinite recursion: disable all setting sources (user, project, local)
	// This ensures a clean execution environment without CLAUDE.md or skills that would trigger codeagent
	args = append(args, "--setting-sources", "")

	if cfg.Mode == "resume" {
		if cfg.SessionID != "" {
			// Claude CLI uses -r <session_id> for resume.
			args = append(args, "-r", cfg.SessionID)
		}
	} else {
		args = append(args, "-C", workdir)
	}
	// Note: claude CLI doesn't support -C flag; workdir set via cmd.Dir

	args = append(args, "--output-format", "stream-json", "--verbose", targetArg)
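A minimal sketch of what the revised ClaudeBackend argument builder now produces, reconstructed from the hunk above and confirmed by the updated tests below; the Config type is trimmed to the fields this diff shows.

```go
package main

import "fmt"

// Trimmed stand-in for the wrapper's Config; only fields visible in the diff.
type Config struct {
	Mode      string
	WorkDir   string // no longer passed as -C; applied later via cmd.Dir
	SessionID string
}

func buildClaudeArgs(cfg Config, targetArg string) []string {
	// Permissions are now always skipped, and all setting sources are
	// disabled so the spawned claude process cannot recursively trigger
	// codeagent through CLAUDE.md or skills.
	args := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", ""}
	if cfg.Mode == "resume" && cfg.SessionID != "" {
		args = append(args, "-r", cfg.SessionID) // claude CLI resume flag
	}
	return append(args, "--output-format", "stream-json", "--verbose", targetArg)
}

func main() {
	fmt.Println(buildClaudeArgs(Config{Mode: "new", WorkDir: "/repo"}, "todo"))
	// [-p --dangerously-skip-permissions --setting-sources  --output-format stream-json --verbose todo]
}
```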
@@ -67,18 +65,12 @@ func (GeminiBackend) BuildArgs(cfg *Config, targetArg string) []string {
	}
	args := []string{"-o", "stream-json", "-y"}

	workdir := cfg.WorkDir
	if workdir == "" {
		workdir = defaultWorkdir
	}

	if cfg.Mode == "resume" {
		if cfg.SessionID != "" {
			args = append(args, "-r", cfg.SessionID)
		}
	} else {
		args = append(args, "-C", workdir)
	}
	// Note: gemini CLI doesn't support -C flag; workdir set via cmd.Dir

	args = append(args, "-p", targetArg)

@@ -11,7 +11,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
	t.Run("new mode uses workdir without skip by default", func(t *testing.T) {
		cfg := &Config{Mode: "new", WorkDir: "/repo"}
		got := backend.BuildArgs(cfg, "todo")
		want := []string{"-p", "-C", "/repo", "--output-format", "stream-json", "--verbose", "todo"}
		want := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "--output-format", "stream-json", "--verbose", "todo"}
		if !reflect.DeepEqual(got, want) {
			t.Fatalf("got %v, want %v", got, want)
		}
@@ -20,7 +20,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
	t.Run("new mode opt-in skip permissions with default workdir", func(t *testing.T) {
		cfg := &Config{Mode: "new", SkipPermissions: true}
		got := backend.BuildArgs(cfg, "-")
		want := []string{"-p", "--dangerously-skip-permissions", "-C", defaultWorkdir, "--output-format", "stream-json", "--verbose", "-"}
		want := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "--output-format", "stream-json", "--verbose", "-"}
		if !reflect.DeepEqual(got, want) {
			t.Fatalf("got %v, want %v", got, want)
		}
@@ -29,7 +29,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
	t.Run("resume mode uses session id and omits workdir", func(t *testing.T) {
		cfg := &Config{Mode: "resume", SessionID: "sid-123", WorkDir: "/ignored"}
		got := backend.BuildArgs(cfg, "resume-task")
		want := []string{"-p", "-r", "sid-123", "--output-format", "stream-json", "--verbose", "resume-task"}
		want := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "-r", "sid-123", "--output-format", "stream-json", "--verbose", "resume-task"}
		if !reflect.DeepEqual(got, want) {
			t.Fatalf("got %v, want %v", got, want)
		}
@@ -38,7 +38,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
	t.Run("resume mode without session still returns base flags", func(t *testing.T) {
		cfg := &Config{Mode: "resume", WorkDir: "/ignored"}
		got := backend.BuildArgs(cfg, "follow-up")
		want := []string{"-p", "--output-format", "stream-json", "--verbose", "follow-up"}
		want := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "--output-format", "stream-json", "--verbose", "follow-up"}
		if !reflect.DeepEqual(got, want) {
			t.Fatalf("got %v, want %v", got, want)
		}
@@ -56,7 +56,7 @@ func TestClaudeBuildArgs_GeminiAndCodexModes(t *testing.T) {
	backend := GeminiBackend{}
	cfg := &Config{Mode: "new", WorkDir: "/workspace"}
	got := backend.BuildArgs(cfg, "task")
	want := []string{"-o", "stream-json", "-y", "-C", "/workspace", "-p", "task"}
	want := []string{"-o", "stream-json", "-y", "-p", "task"}
	if !reflect.DeepEqual(got, want) {
		t.Fatalf("got %v, want %v", got, want)
	}
@@ -76,8 +76,8 @@ func TestConcurrentStressLogger(t *testing.T) {
	t.Logf("Successfully wrote %d/%d logs (%.1f%%)",
		actualCount, totalExpected, float64(actualCount)/float64(totalExpected)*100)

	// Verify the log format
	formatRE := regexp.MustCompile(`^\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}\] \[PID:\d+\] INFO: goroutine-`)
	// Verify the log format (plain text, no prefix)
	formatRE := regexp.MustCompile(`^goroutine-\d+-msg-\d+$`)
	for i, line := range lines[:min(10, len(lines))] {
		if !formatRE.MatchString(line) {
			t.Errorf("line %d has invalid format: %s", i, line)
@@ -293,16 +293,13 @@ func TestLoggerOrderPreservation(t *testing.T) {
	for scanner.Scan() {
		line := scanner.Text()
		var gid, seq int
		parts := strings.SplitN(line, " INFO: ", 2)
		if len(parts) != 2 {
			t.Errorf("invalid log format: %s", line)
		// Parse format: G0-SEQ0001 (without INFO: prefix)
		_, err := fmt.Sscanf(line, "G%d-SEQ%04d", &gid, &seq)
		if err != nil {
			t.Errorf("invalid log format: %s (error: %v)", line, err)
			continue
		}
		if _, err := fmt.Sscanf(parts[1], "G%d-SEQ%d", &gid, &seq); err == nil {
			sequences[gid] = append(sequences[gid], seq)
		} else {
			t.Errorf("failed to parse sequence from line: %s", line)
		}
		sequences[gid] = append(sequences[gid], seq)
	}

	// Verify per-goroutine internal ordering
@@ -49,6 +49,7 @@ type TaskResult struct {
	SessionID string `json:"session_id"`
	Error     string `json:"error"`
	LogPath   string `json:"log_path"`
	sharedLog bool
}

var backendRegistry = map[string]Backend{

@@ -23,6 +23,7 @@ type commandRunner interface {
	StdoutPipe() (io.ReadCloser, error)
	StdinPipe() (io.WriteCloser, error)
	SetStderr(io.Writer)
	SetDir(string)
	Process() processHandle
}

@@ -72,6 +73,12 @@ func (r *realCmd) SetStderr(w io.Writer) {
	}
}

func (r *realCmd) SetDir(dir string) {
	if r.cmd != nil {
		r.cmd.Dir = dir
	}
}

func (r *realCmd) Process() processHandle {
	if r == nil || r.cmd == nil || r.cmd.Process == nil {
		return nil
@@ -115,7 +122,57 @@ type parseResult struct {
	threadID string
}

var runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
type taskLoggerContextKey struct{}

func withTaskLogger(ctx context.Context, logger *Logger) context.Context {
	if ctx == nil || logger == nil {
		return ctx
	}
	return context.WithValue(ctx, taskLoggerContextKey{}, logger)
}

func taskLoggerFromContext(ctx context.Context) *Logger {
	if ctx == nil {
		return nil
	}
	logger, _ := ctx.Value(taskLoggerContextKey{}).(*Logger)
	return logger
}

type taskLoggerHandle struct {
	logger  *Logger
	path    string
	shared  bool
	closeFn func()
}

func newTaskLoggerHandle(taskID string) taskLoggerHandle {
	taskLogger, err := NewLoggerWithSuffix(taskID)
	if err == nil {
		return taskLoggerHandle{
			logger:  taskLogger,
			path:    taskLogger.Path(),
			closeFn: func() { _ = taskLogger.Close() },
		}
	}

	msg := fmt.Sprintf("Failed to create task logger for %s: %v, using main logger", taskID, err)
	mainLogger := activeLogger()
	if mainLogger != nil {
		logWarn(msg)
		return taskLoggerHandle{
			logger: mainLogger,
			path:   mainLogger.Path(),
			shared: true,
		}
	}

	fmt.Fprintln(os.Stderr, msg)
	return taskLoggerHandle{}
}
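The two context helpers above are what give each parallel task its own logger. A self-contained round-trip sketch, with a stub Logger standing in for the real file-backed type:

```go
package main

import (
	"context"
	"fmt"
)

// Stub standing in for the wrapper's file-backed Logger.
type Logger struct{ path string }

func (l *Logger) Path() string { return l.path }

// Same shape as the helpers above: an unexported zero-size key type keeps
// the context value private to the package.
type taskLoggerContextKey struct{}

func withTaskLogger(ctx context.Context, logger *Logger) context.Context {
	if ctx == nil || logger == nil {
		return ctx
	}
	return context.WithValue(ctx, taskLoggerContextKey{}, logger)
}

func taskLoggerFromContext(ctx context.Context) *Logger {
	if ctx == nil {
		return nil
	}
	logger, _ := ctx.Value(taskLoggerContextKey{}).(*Logger)
	return logger
}

func main() {
	ctx := withTaskLogger(context.Background(), &Logger{path: "/tmp/task-a.log"})
	fmt.Println(taskLoggerFromContext(ctx).Path())                  // /tmp/task-a.log
	fmt.Println(taskLoggerFromContext(context.Background()) == nil) // true: caller falls back to the main logger
}
```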
// defaultRunCodexTaskFn is the default implementation of runCodexTaskFn (exposed for test reset)
func defaultRunCodexTaskFn(task TaskSpec, timeout int) TaskResult {
	if task.WorkDir == "" {
		task.WorkDir = defaultWorkdir
	}
@@ -144,6 +201,8 @@ var runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
	return runCodexTaskWithContext(parentCtx, task, backend, nil, false, true, timeout)
}

var runCodexTaskFn = defaultRunCodexTaskFn

func topologicalSort(tasks []TaskSpec) ([][]TaskSpec, error) {
	idToTask := make(map[string]TaskSpec, len(tasks))
	indegree := make(map[string]int, len(tasks))
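The hunk only shows the signature and the first two maps of `topologicalSort`. For orientation, here is a hypothetical Kahn-style sketch that produces the `[][]TaskSpec` layers consumed by `executeConcurrentWithContext`; the `DependsOn` field name is an assumption, since the real dependency field is not visible in this diff.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical task shape: the real TaskSpec has more fields, and the
// DependsOn field name is an assumption (it is not visible in this hunk).
type TaskSpec struct {
	ID        string
	DependsOn []string
}

// topoLayers groups tasks into dependency layers (Kahn's algorithm): every
// task's dependencies live in earlier layers, so each layer can run in parallel.
func topoLayers(tasks []TaskSpec) ([][]TaskSpec, error) {
	idToTask := make(map[string]TaskSpec, len(tasks))
	indegree := make(map[string]int, len(tasks))
	dependents := make(map[string][]string)
	for _, t := range tasks {
		idToTask[t.ID] = t
		indegree[t.ID] += 0 // ensure every task has an entry
		for _, dep := range t.DependsOn {
			indegree[t.ID]++
			dependents[dep] = append(dependents[dep], t.ID)
		}
	}
	var frontier []string
	for id, d := range indegree {
		if d == 0 {
			frontier = append(frontier, id)
		}
	}
	var layers [][]TaskSpec
	seen := 0
	for len(frontier) > 0 {
		var layer []TaskSpec
		var next []string
		for _, id := range frontier {
			layer = append(layer, idToTask[id])
			seen++
			for _, child := range dependents[id] {
				if indegree[child]--; indegree[child] == 0 {
					next = append(next, child)
				}
			}
		}
		layers = append(layers, layer)
		frontier = next
	}
	if seen != len(tasks) {
		return nil, errors.New("dependency cycle detected")
	}
	return layers, nil
}

func main() {
	layers, err := topoLayers([]TaskSpec{
		{ID: "b", DependsOn: []string{"a"}},
		{ID: "a"},
	})
	fmt.Println(len(layers), err) // 2 <nil>
}
```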
@@ -228,13 +287,8 @@ func executeConcurrentWithContext(parentCtx context.Context, layers [][]TaskSpec
	var startPrintMu sync.Mutex
	bannerPrinted := false

	printTaskStart := func(taskID string) {
		logger := activeLogger()
		if logger == nil {
			return
		}
		path := logger.Path()
		if path == "" {
	printTaskStart := func(taskID, logPath string, shared bool) {
		if logPath == "" {
			return
		}
		startPrintMu.Lock()
@@ -242,7 +296,11 @@ func executeConcurrentWithContext(parentCtx context.Context, layers [][]TaskSpec
			fmt.Fprintln(os.Stderr, "=== Starting Parallel Execution ===")
			bannerPrinted = true
		}
		fmt.Fprintf(os.Stderr, "Task %s: Log: %s\n", taskID, path)
		label := "Log"
		if shared {
			label = "Log (shared)"
		}
		fmt.Fprintf(os.Stderr, "Task %s: %s: %s\n", taskID, label, logPath)
		startPrintMu.Unlock()
	}

@@ -312,9 +370,11 @@ func executeConcurrentWithContext(parentCtx context.Context, layers [][]TaskSpec
	wg.Add(1)
	go func(ts TaskSpec) {
		defer wg.Done()
		var taskLogPath string
		handle := taskLoggerHandle{}
		defer func() {
			if r := recover(); r != nil {
				resultsCh <- TaskResult{TaskID: ts.ID, ExitCode: 1, Error: fmt.Sprintf("panic: %v", r)}
				resultsCh <- TaskResult{TaskID: ts.ID, ExitCode: 1, Error: fmt.Sprintf("panic: %v", r), LogPath: taskLogPath, sharedLog: handle.shared}
			}
		}()

@@ -331,9 +391,31 @@ func executeConcurrentWithContext(parentCtx context.Context, layers [][]TaskSpec
			logConcurrencyState("done", ts.ID, int(after), workerLimit)
		}()

		ts.Context = ctx
		printTaskStart(ts.ID)
		resultsCh <- runCodexTaskFn(ts, timeout)
		handle = newTaskLoggerHandle(ts.ID)
		taskLogPath = handle.path
		if handle.closeFn != nil {
			defer handle.closeFn()
		}

		taskCtx := ctx
		if handle.logger != nil {
			taskCtx = withTaskLogger(ctx, handle.logger)
		}
		ts.Context = taskCtx

		printTaskStart(ts.ID, taskLogPath, handle.shared)

		res := runCodexTaskFn(ts, timeout)
		if taskLogPath != "" {
			if res.LogPath == "" || (handle.shared && handle.logger != nil && res.LogPath == handle.logger.Path()) {
				res.LogPath = taskLogPath
			}
		}
		// Mark as shared only when the final LogPath really is the shared logger's path
		if handle.shared && handle.logger != nil && res.LogPath == handle.logger.Path() {
			res.sharedLog = true
		}
		resultsCh <- res
	}(task)
	}

@@ -409,7 +491,11 @@ func generateFinalOutput(results []TaskResult) string {
		sb.WriteString(fmt.Sprintf("Session: %s\n", res.SessionID))
	}
	if res.LogPath != "" {
		sb.WriteString(fmt.Sprintf("Log: %s\n", res.LogPath))
		if res.sharedLog {
			sb.WriteString(fmt.Sprintf("Log: %s (shared)\n", res.LogPath))
		} else {
			sb.WriteString(fmt.Sprintf("Log: %s\n", res.LogPath))
		}
	}
	if res.Message != "" {
		sb.WriteString(fmt.Sprintf("\n%s\n", res.Message))
@@ -450,15 +536,16 @@ func runCodexProcess(parentCtx context.Context, codexArgs []string, taskText str
}

func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backend Backend, customArgs []string, useCustomArgs bool, silent bool, timeoutSec int) TaskResult {
	result := TaskResult{TaskID: taskSpec.ID}
	setLogPath := func() {
		if result.LogPath != "" {
			return
		}
		if logger := activeLogger(); logger != nil {
			result.LogPath = logger.Path()
		}
	if parentCtx == nil {
		parentCtx = taskSpec.Context
	}
	if parentCtx == nil {
		parentCtx = context.Background()
	}

	result := TaskResult{TaskID: taskSpec.ID}
	injectedLogger := taskLoggerFromContext(parentCtx)
	logger := injectedLogger

	cfg := &Config{
		Mode: taskSpec.Mode,
@@ -514,17 +601,17 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
	if silent {
		// Silent mode: only persist to file when available; avoid stderr noise.
		logInfoFn = func(msg string) {
			if logger := activeLogger(); logger != nil {
			if logger != nil {
				logger.Info(prefixMsg(msg))
			}
		}
		logWarnFn = func(msg string) {
			if logger := activeLogger(); logger != nil {
			if logger != nil {
				logger.Warn(prefixMsg(msg))
			}
		}
		logErrorFn = func(msg string) {
			if logger := activeLogger(); logger != nil {
			if logger != nil {
				logger.Error(prefixMsg(msg))
			}
		}
@@ -540,10 +627,11 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
	var stderrLogger *logWriter

	var tempLogger *Logger
	if silent && activeLogger() == nil {
	if logger == nil && silent && activeLogger() == nil {
		if l, err := NewLogger(); err == nil {
			setLogger(l)
			tempLogger = l
			logger = l
		}
	}
	defer func() {
@@ -551,21 +639,29 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
			_ = closeLogger()
		}
	}()
	defer setLogPath()
	if logger := activeLogger(); logger != nil {
	defer func() {
		if result.LogPath != "" || logger == nil {
			return
		}
		result.LogPath = logger.Path()
	}()
	if logger == nil {
		logger = activeLogger()
	}
	if logger != nil {
		result.LogPath = logger.Path()
	}

	if !silent {
		stdoutLogger = newLogWriter("CODEX_STDOUT: ", codexLogLineLimit)
		stderrLogger = newLogWriter("CODEX_STDERR: ", codexLogLineLimit)
		// Note: Empty prefix ensures backend output is logged as-is without any wrapper format.
		// This preserves the original stdout/stderr content from codex/claude/gemini backends.
		// Trade-off: Reduces distinguishability between stdout/stderr in logs, but maintains
		// output fidelity which is critical for debugging backend-specific issues.
		stdoutLogger = newLogWriter("", codexLogLineLimit)
		stderrLogger = newLogWriter("", codexLogLineLimit)
	}

	ctx := parentCtx
	if ctx == nil {
		ctx = context.Background()
	}

	ctx, cancel := context.WithTimeout(ctx, time.Duration(timeoutSec)*time.Second)
	defer cancel()
	ctx, stop := signal.NotifyContext(ctx, syscall.SIGINT, syscall.SIGTERM)
@@ -577,12 +673,27 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe

	cmd := newCommandRunner(ctx, commandName, codexArgs...)

	// For backends that don't support -C flag (claude, gemini), set working directory via cmd.Dir
	// Codex passes workdir via -C flag, so we skip setting Dir for it to avoid conflicts
	if cfg.Mode != "resume" && commandName != "codex" && cfg.WorkDir != "" {
		cmd.SetDir(cfg.WorkDir)
	}

	stderrWriters := []io.Writer{stderrBuf}
	if stderrLogger != nil {
		stderrWriters = append(stderrWriters, stderrLogger)
	}

	// For gemini backend, filter noisy stderr output
	var stderrFilter *filteringWriter
	if !silent {
		stderrWriters = append([]io.Writer{os.Stderr}, stderrWriters...)
		stderrOut := io.Writer(os.Stderr)
		if cfg.Backend == "gemini" {
			stderrFilter = newFilteringWriter(os.Stderr, geminiNoisePatterns)
			stderrOut = stderrFilter
			defer stderrFilter.Flush()
		}
		stderrWriters = append([]io.Writer{stderrOut}, stderrWriters...)
	}
	if len(stderrWriters) == 1 {
		cmd.SetStderr(stderrWriters[0])
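`filteringWriter` and `geminiNoisePatterns` are referenced but not defined in this diff. A hypothetical line-buffered sketch of such a writer, under the assumption that it drops lines matching noise patterns and flushes any trailing partial line:

```go
package main

import (
	"bytes"
	"io"
	"os"
	"regexp"
)

// Hypothetical line-buffered filter: drops whole lines that match any noise
// pattern and passes everything else through. Field names are assumptions;
// the real filteringWriter is not shown in this diff.
type filteringWriter struct {
	dst      io.Writer
	patterns []*regexp.Regexp
	buf      bytes.Buffer
}

func newFilteringWriter(dst io.Writer, patterns []*regexp.Regexp) *filteringWriter {
	return &filteringWriter{dst: dst, patterns: patterns}
}

func (w *filteringWriter) matches(line string) bool {
	for _, re := range w.patterns {
		if re.MatchString(line) {
			return true
		}
	}
	return false
}

func (w *filteringWriter) Write(p []byte) (int, error) {
	w.buf.Write(p)
	for {
		line, err := w.buf.ReadString('\n')
		if err != nil {
			// Incomplete line: keep it buffered until the next Write or Flush.
			w.buf.WriteString(line)
			break
		}
		if !w.matches(line) {
			w.dst.Write([]byte(line))
		}
	}
	return len(p), nil
}

// Flush emits any trailing partial line, mirroring the deferred
// stderrFilter.Flush() call above.
func (w *filteringWriter) Flush() {
	rest := w.buf.String()
	w.buf.Reset()
	if rest != "" && !w.matches(rest) {
		w.dst.Write([]byte(rest))
	}
}

func main() {
	fw := newFilteringWriter(os.Stdout, []*regexp.Regexp{regexp.MustCompile(`^Loaded cached`)})
	io.WriteString(fw, "Loaded cached credentials.\nreal error\n")
	fw.Flush() // prints only "real error"
}
```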
@@ -615,6 +726,20 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
		stdoutReader = io.TeeReader(stdout, stdoutLogger)
	}

	// Start parse goroutine BEFORE starting the command to avoid race condition
	// where fast-completing commands close stdout before parser starts reading
	messageSeen := make(chan struct{}, 1)
	parseCh := make(chan parseResult, 1)
	go func() {
		msg, tid := parseJSONStreamInternal(stdoutReader, logWarnFn, logInfoFn, func() {
			select {
			case messageSeen <- struct{}{}:
			default:
			}
		})
		parseCh <- parseResult{message: msg, threadID: tid}
	}()

	logInfoFn(fmt.Sprintf("Starting %s with args: %s %s...", commandName, commandName, strings.Join(codexArgs[:min(5, len(codexArgs))], " ")))

	if err := cmd.Start(); err != nil {
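The comment above captures the ordering fix: the parser goroutine must be wired to the stdout pipe before `cmd.Start`. A minimal standalone sketch of the same pattern using os/exec directly:

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("echo", `{"type":"item.completed"}`)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	parsed := make(chan string, 1)
	// Start the reader goroutine BEFORE cmd.Start, mirroring the fix above:
	// a fast-completing command could otherwise close stdout before the
	// parser begins reading, losing output.
	go func() {
		sc := bufio.NewScanner(stdout)
		var last string
		for sc.Scan() {
			last = sc.Text()
		}
		parsed <- last
	}()
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	msg := <-parsed // drain the pipe before Wait closes it
	if err := cmd.Wait(); err != nil {
		panic(err)
	}
	fmt.Println(msg)
}
```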
@@ -632,7 +757,7 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
|
||||
}
|
||||
|
||||
logInfoFn(fmt.Sprintf("Starting %s with PID: %d", commandName, cmd.Process().Pid()))
|
||||
if logger := activeLogger(); logger != nil {
|
||||
if logger != nil {
|
||||
logInfoFn(fmt.Sprintf("Log capturing to: %s", logger.Path()))
|
||||
}
|
||||
|
||||
@@ -648,18 +773,6 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
|
||||
waitCh := make(chan error, 1)
|
||||
go func() { waitCh <- cmd.Wait() }()
|
||||
|
||||
messageSeen := make(chan struct{}, 1)
|
||||
parseCh := make(chan parseResult, 1)
|
||||
go func() {
|
||||
msg, tid := parseJSONStreamInternal(stdoutReader, logWarnFn, logInfoFn, func() {
|
||||
select {
|
||||
case messageSeen <- struct{}{}:
|
||||
default:
|
||||
}
|
||||
})
|
||||
parseCh <- parseResult{message: msg, threadID: tid}
|
||||
}()
|
||||
|
||||
var waitErr error
|
||||
var forceKillTimer *forceKillTimer
|
||||
var ctxCancelled bool
|
||||
@@ -741,8 +854,8 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
|
||||
result.ExitCode = 0
|
||||
result.Message = message
|
||||
result.SessionID = threadID
|
||||
if logger := activeLogger(); logger != nil {
|
||||
result.LogPath = logger.Path()
|
||||
if result.LogPath == "" && injectedLogger != nil {
|
||||
result.LogPath = injectedLogger.Path()
|
||||
}
|
||||
|
||||
return result
|
||||
|
||||
@@ -1,12 +1,15 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
@@ -15,6 +18,12 @@ import (
|
||||
"time"
|
||||
)
|
||||
|
||||
var executorTestTaskCounter atomic.Int64
|
||||
|
||||
func nextExecutorTestTaskID(prefix string) string {
|
||||
return fmt.Sprintf("%s-%d", prefix, executorTestTaskCounter.Add(1))
|
||||
}
|
||||
|
||||
type execFakeProcess struct {
|
||||
pid int
|
||||
signals []os.Signal
|
||||
@@ -76,6 +85,7 @@ type execFakeRunner struct {
|
||||
stdout io.ReadCloser
|
||||
process processHandle
|
||||
stdin io.WriteCloser
|
||||
dir string
|
||||
waitErr error
|
||||
waitDelay time.Duration
|
||||
startErr error
|
||||
@@ -117,6 +127,7 @@ func (f *execFakeRunner) StdinPipe() (io.WriteCloser, error) {
|
||||
return &writeCloserStub{}, nil
|
||||
}
|
||||
func (f *execFakeRunner) SetStderr(io.Writer) {}
|
||||
func (f *execFakeRunner) SetDir(dir string) { f.dir = dir }
|
||||
func (f *execFakeRunner) Process() processHandle {
|
||||
if f.process != nil {
|
||||
return f.process
|
||||
@@ -148,6 +159,10 @@ func TestExecutorHelperCoverage(t *testing.T) {
|
||||
}
|
||||
rcWithCmd := &realCmd{cmd: &exec.Cmd{}}
|
||||
rcWithCmd.SetStderr(io.Discard)
|
||||
rcWithCmd.SetDir("/tmp")
|
||||
if rcWithCmd.cmd.Dir != "/tmp" {
|
||||
t.Fatalf("expected SetDir to set cmd.Dir, got %q", rcWithCmd.cmd.Dir)
|
||||
}
|
||||
echoCmd := exec.Command("echo", "ok")
|
||||
rcProc := &realCmd{cmd: echoCmd}
|
||||
stdoutPipe, err := rcProc.StdoutPipe()
|
||||
@@ -420,6 +435,100 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
|
||||
_ = closeLogger()
|
||||
})
|
||||
|
||||
t.Run("injectedLogger", func(t *testing.T) {
|
||||
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
|
||||
return &execFakeRunner{
|
||||
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"injected"}}`),
|
||||
process: &execFakeProcess{pid: 12},
|
||||
}
|
||||
}
|
||||
_ = closeLogger()
|
||||
|
||||
injected, err := NewLoggerWithSuffix("executor-injected")
|
||||
if err != nil {
|
||||
t.Fatalf("NewLoggerWithSuffix() error = %v", err)
|
||||
}
|
||||
defer func() {
|
||||
_ = injected.Close()
|
||||
_ = os.Remove(injected.Path())
|
||||
}()
|
||||
|
||||
ctx := withTaskLogger(context.Background(), injected)
|
||||
res := runCodexTaskWithContext(ctx, TaskSpec{ID: "task-injected", Task: "payload", WorkDir: "."}, nil, nil, false, true, 1)
|
||||
if res.ExitCode != 0 || res.LogPath != injected.Path() {
|
||||
t.Fatalf("expected injected logger path, got %+v", res)
|
||||
}
|
||||
if activeLogger() != nil {
|
||||
t.Fatalf("expected no global logger to be created when injected")
|
||||
}
|
||||
|
||||
injected.Flush()
|
||||
data, err := os.ReadFile(injected.Path())
|
||||
if err != nil {
|
||||
t.Fatalf("failed to read injected log file: %v", err)
|
||||
}
|
||||
if !strings.Contains(string(data), "task-injected") {
|
||||
t.Fatalf("injected log missing task prefix, content: %s", string(data))
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("contextLoggerWithoutParent", func(t *testing.T) {
|
||||
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
|
||||
return &execFakeRunner{
|
||||
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"ctx"}}`),
|
||||
process: &execFakeProcess{pid: 14},
|
||||
}
|
||||
}
|
||||
_ = closeLogger()
|
||||
|
||||
taskLogger, err := NewLoggerWithSuffix("executor-taskctx")
|
||||
if err != nil {
|
||||
t.Fatalf("NewLoggerWithSuffix() error = %v", err)
|
||||
}
|
||||
t.Cleanup(func() {
|
||||
_ = taskLogger.Close()
|
||||
_ = os.Remove(taskLogger.Path())
|
||||
})
|
||||
|
||||
ctx := withTaskLogger(context.Background(), taskLogger)
|
||||
res := runCodexTaskWithContext(nil, TaskSpec{ID: "task-context", Task: "payload", WorkDir: ".", Context: ctx}, nil, nil, false, true, 1)
|
||||
if res.ExitCode != 0 || res.LogPath != taskLogger.Path() {
|
||||
t.Fatalf("expected task logger to be reused from spec context, got %+v", res)
|
||||
}
|
||||
if activeLogger() != nil {
|
||||
t.Fatalf("expected no global logger to be created when task context provides one")
|
||||
}
|
||||
|
||||
taskLogger.Flush()
|
||||
data, err := os.ReadFile(taskLogger.Path())
|
||||
if err != nil {
|
||||
t.Fatalf("failed to read task log: %v", err)
|
||||
}
|
||||
if !strings.Contains(string(data), "task-context") {
|
||||
t.Fatalf("task log missing task id, content: %s", string(data))
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("backendSetsDirAndNilContext", func(t *testing.T) {
|
||||
var rc *execFakeRunner
|
||||
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
|
||||
rc = &execFakeRunner{
|
||||
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"backend"}}`),
|
||||
process: &execFakeProcess{pid: 13},
|
||||
}
|
||||
return rc
|
||||
}
|
||||
|
||||
_ = closeLogger()
|
||||
res := runCodexTaskWithContext(nil, TaskSpec{ID: "task-backend", Task: "payload", WorkDir: "/tmp"}, ClaudeBackend{}, nil, false, false, 1)
|
||||
if res.ExitCode != 0 || res.Message != "backend" {
|
||||
t.Fatalf("unexpected result: %+v", res)
|
||||
}
|
||||
if rc == nil || rc.dir != "/tmp" {
|
||||
t.Fatalf("expected backend to set cmd.Dir, got runner=%v dir=%q", rc, rc.dir)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("missingMessage", func(t *testing.T) {
|
||||
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
|
||||
return &execFakeRunner{
|
||||
@@ -434,6 +543,613 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
|
||||
})
|
||||
}
|
||||
|
||||
func TestExecutorParallelLogIsolation(t *testing.T) {
    mainLogger, err := NewLoggerWithSuffix("executor-main")
    if err != nil {
        t.Fatalf("NewLoggerWithSuffix() error = %v", err)
    }
    setLogger(mainLogger)
    t.Cleanup(func() {
        _ = closeLogger()
        _ = os.Remove(mainLogger.Path())
    })

    taskA := nextExecutorTestTaskID("iso-a")
    taskB := nextExecutorTestTaskID("iso-b")
    markerA := "ISOLATION_MARKER:" + taskA
    markerB := "ISOLATION_MARKER:" + taskB

    origRun := runCodexTaskFn
    runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
        logger := taskLoggerFromContext(task.Context)
        if logger == nil {
            return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "missing task logger"}
        }
        switch task.ID {
        case taskA:
            logger.Info(markerA)
        case taskB:
            logger.Info(markerB)
        default:
            logger.Info("unexpected task: " + task.ID)
        }
        return TaskResult{TaskID: task.ID, ExitCode: 0}
    }
    t.Cleanup(func() { runCodexTaskFn = origRun })

    stderrR, stderrW, err := os.Pipe()
    if err != nil {
        t.Fatalf("os.Pipe() error = %v", err)
    }
    oldStderr := os.Stderr
    os.Stderr = stderrW
    defer func() { os.Stderr = oldStderr }()

    results := executeConcurrentWithContext(nil, [][]TaskSpec{{{ID: taskA}, {ID: taskB}}}, 1, -1)

    _ = stderrW.Close()
    os.Stderr = oldStderr
    stderrData, _ := io.ReadAll(stderrR)
    _ = stderrR.Close()
    stderrOut := string(stderrData)

    if len(results) != 2 {
        t.Fatalf("expected 2 results, got %d", len(results))
    }

    paths := map[string]string{}
    for _, res := range results {
        if res.ExitCode != 0 {
            t.Fatalf("unexpected failure: %+v", res)
        }
        if res.LogPath == "" {
            t.Fatalf("missing LogPath for task %q", res.TaskID)
        }
        paths[res.TaskID] = res.LogPath
    }
    if paths[taskA] == paths[taskB] {
        t.Fatalf("expected distinct task log paths, got %q", paths[taskA])
    }

    if strings.Contains(stderrOut, mainLogger.Path()) {
        t.Fatalf("stderr should not print main log path: %s", stderrOut)
    }
    if !strings.Contains(stderrOut, paths[taskA]) || !strings.Contains(stderrOut, paths[taskB]) {
        t.Fatalf("stderr should include task log paths, got: %s", stderrOut)
    }

    mainLogger.Flush()
    mainData, err := os.ReadFile(mainLogger.Path())
    if err != nil {
        t.Fatalf("failed to read main log: %v", err)
    }
    if strings.Contains(string(mainData), markerA) || strings.Contains(string(mainData), markerB) {
        t.Fatalf("main log should not contain task markers, content: %s", string(mainData))
    }

    taskAData, err := os.ReadFile(paths[taskA])
    if err != nil {
        t.Fatalf("failed to read task A log: %v", err)
    }
    taskBData, err := os.ReadFile(paths[taskB])
    if err != nil {
        t.Fatalf("failed to read task B log: %v", err)
    }
    if !strings.Contains(string(taskAData), markerA) || strings.Contains(string(taskAData), markerB) {
        t.Fatalf("task A log isolation failed, content: %s", string(taskAData))
    }
    if !strings.Contains(string(taskBData), markerB) || strings.Contains(string(taskBData), markerA) {
        t.Fatalf("task B log isolation failed, content: %s", string(taskBData))
    }

    _ = os.Remove(paths[taskA])
    _ = os.Remove(paths[taskB])
}

func TestConcurrentExecutorParallelLogIsolationAndClosure(t *testing.T) {
    tempDir := t.TempDir()
    t.Setenv("TMPDIR", tempDir)

    oldArgs := os.Args
    os.Args = []string{defaultWrapperName}
    t.Cleanup(func() { os.Args = oldArgs })

    mainLogger, err := NewLoggerWithSuffix("concurrent-main")
    if err != nil {
        t.Fatalf("NewLoggerWithSuffix() error = %v", err)
    }
    setLogger(mainLogger)
    t.Cleanup(func() {
        mainLogger.Flush()
        _ = closeLogger()
        _ = os.Remove(mainLogger.Path())
    })

    const taskCount = 16
    const writersPerTask = 4
    const logsPerWriter = 50
    const expectedTaskLines = writersPerTask * logsPerWriter

    taskIDs := make([]string, 0, taskCount)
    tasks := make([]TaskSpec, 0, taskCount)
    for i := 0; i < taskCount; i++ {
        id := nextExecutorTestTaskID("iso")
        taskIDs = append(taskIDs, id)
        tasks = append(tasks, TaskSpec{ID: id})
    }

    type taskLoggerInfo struct {
        taskID string
        logger *Logger
    }
    loggerCh := make(chan taskLoggerInfo, taskCount)
    readyCh := make(chan struct{}, taskCount)
    startCh := make(chan struct{})

    go func() {
        for i := 0; i < taskCount; i++ {
            <-readyCh
        }
        close(startCh)
    }()

    origRun := runCodexTaskFn
    runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
        readyCh <- struct{}{}

        logger := taskLoggerFromContext(task.Context)
        loggerCh <- taskLoggerInfo{taskID: task.ID, logger: logger}
        if logger == nil {
            return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "missing task logger"}
        }

        <-startCh

        var wg sync.WaitGroup
        wg.Add(writersPerTask)
        for g := 0; g < writersPerTask; g++ {
            go func(g int) {
                defer wg.Done()
                for i := 0; i < logsPerWriter; i++ {
                    logger.Info(fmt.Sprintf("TASK=%s g=%d i=%d", task.ID, g, i))
                }
            }(g)
        }
        wg.Wait()

        return TaskResult{TaskID: task.ID, ExitCode: 0}
    }
    t.Cleanup(func() { runCodexTaskFn = origRun })

    results := executeConcurrentWithContext(context.Background(), [][]TaskSpec{tasks}, 1, 0)

    if len(results) != taskCount {
        t.Fatalf("expected %d results, got %d", taskCount, len(results))
    }

    taskLogPaths := make(map[string]string, taskCount)
    seenPaths := make(map[string]struct{}, taskCount)
    for _, res := range results {
        if res.ExitCode != 0 || res.Error != "" {
            t.Fatalf("unexpected task failure: %+v", res)
        }
        if res.LogPath == "" {
            t.Fatalf("missing LogPath for task %q", res.TaskID)
        }
        if _, ok := taskLogPaths[res.TaskID]; ok {
            t.Fatalf("duplicate TaskID in results: %q", res.TaskID)
        }
        taskLogPaths[res.TaskID] = res.LogPath
        if _, ok := seenPaths[res.LogPath]; ok {
            t.Fatalf("expected unique log path per task; duplicate path %q", res.LogPath)
        }
        seenPaths[res.LogPath] = struct{}{}
    }
    if len(taskLogPaths) != taskCount {
        t.Fatalf("expected %d unique task IDs, got %d", taskCount, len(taskLogPaths))
    }

    prefix := primaryLogPrefix()
    pid := os.Getpid()
    for _, id := range taskIDs {
        path := taskLogPaths[id]
        if path == "" {
            t.Fatalf("missing log path for task %q", id)
        }
        if _, err := os.Stat(path); err != nil {
            t.Fatalf("task log file not created for %q: %v", id, err)
        }
        wantBase := fmt.Sprintf("%s-%d-%s.log", prefix, pid, id)
        if got := filepath.Base(path); got != wantBase {
            t.Fatalf("unexpected log filename for %q: got %q, want %q", id, got, wantBase)
        }
    }

    loggers := make(map[string]*Logger, taskCount)
    for i := 0; i < taskCount; i++ {
        info := <-loggerCh
        if info.taskID == "" {
            t.Fatalf("missing taskID in logger info")
        }
        if info.logger == nil {
            t.Fatalf("missing logger in context for task %q", info.taskID)
        }
        if prev, ok := loggers[info.taskID]; ok && prev != info.logger {
            t.Fatalf("task %q received multiple logger instances", info.taskID)
        }
        loggers[info.taskID] = info.logger
    }
    if len(loggers) != taskCount {
        t.Fatalf("expected %d task loggers, got %d", taskCount, len(loggers))
    }

    for taskID, logger := range loggers {
        if !logger.closed.Load() {
            t.Fatalf("expected task logger to be closed for %q", taskID)
        }
        if logger.file == nil {
            t.Fatalf("expected task logger file to be non-nil for %q", taskID)
        }
        if _, err := logger.file.Write([]byte("x")); err == nil {
            t.Fatalf("expected task logger file to be closed for %q", taskID)
        }
    }

    mainLogger.Flush()
    mainData, err := os.ReadFile(mainLogger.Path())
    if err != nil {
        t.Fatalf("failed to read main log: %v", err)
    }
    mainText := string(mainData)
    if !strings.Contains(mainText, "parallel: worker_limit=") {
        t.Fatalf("expected main log to include concurrency planning, content: %s", mainText)
    }
    if strings.Contains(mainText, "TASK=") {
        t.Fatalf("main log should not contain task output, content: %s", mainText)
    }

    for taskID, path := range taskLogPaths {
        f, err := os.Open(path)
        if err != nil {
            t.Fatalf("failed to open task log for %q: %v", taskID, err)
        }

        scanner := bufio.NewScanner(f)
        lines := 0
        for scanner.Scan() {
            line := scanner.Text()
            if strings.Contains(line, "parallel:") {
                t.Fatalf("task log should not contain main log entries for %q: %s", taskID, line)
            }
            gotID, ok := parseTaskIDFromLogLine(line)
            if !ok {
                t.Fatalf("task log entry missing task marker for %q: %s", taskID, line)
            }
            if gotID != taskID {
                t.Fatalf("task log isolation failed: file=%q got TASK=%q want TASK=%q", path, gotID, taskID)
            }
            lines++
        }
        if err := scanner.Err(); err != nil {
            _ = f.Close()
            t.Fatalf("scanner error for %q: %v", taskID, err)
        }
        if err := f.Close(); err != nil {
            t.Fatalf("failed to close task log for %q: %v", taskID, err)
        }
        if lines != expectedTaskLines {
            t.Fatalf("unexpected task log line count for %q: got %d, want %d", taskID, lines, expectedTaskLines)
        }
    }

    for _, path := range taskLogPaths {
        _ = os.Remove(path)
    }
}

func parseTaskIDFromLogLine(line string) (string, bool) {
    const marker = "TASK="
    idx := strings.Index(line, marker)
    if idx == -1 {
        return "", false
    }
    rest := line[idx+len(marker):]
    end := strings.IndexByte(rest, ' ')
    if end == -1 {
        return rest, rest != ""
    }
    return rest[:end], rest[:end] != ""
}

func TestExecutorTaskLoggerContext(t *testing.T) {
    if taskLoggerFromContext(nil) != nil {
        t.Fatalf("expected nil logger from nil context")
    }
    if taskLoggerFromContext(context.Background()) != nil {
        t.Fatalf("expected nil logger when context has no logger")
    }

    logger, err := NewLoggerWithSuffix("executor-taskctx")
    if err != nil {
        t.Fatalf("NewLoggerWithSuffix() error = %v", err)
    }
    defer func() {
        _ = logger.Close()
        _ = os.Remove(logger.Path())
    }()

    ctx := withTaskLogger(context.Background(), logger)
    if got := taskLoggerFromContext(ctx); got != logger {
        t.Fatalf("expected logger roundtrip, got %v", got)
    }

    if taskLoggerFromContext(withTaskLogger(context.Background(), nil)) != nil {
        t.Fatalf("expected nil logger when injected logger is nil")
    }
}

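A minimal sketch of the context plumbing these assertions pin down. The Sketch-suffixed names are hypothetical stand-ins for the real withTaskLogger/taskLoggerFromContext, whose source is not part of this diff; any implementation must behave this way to pass TestExecutorTaskLoggerContext:

package main

import "context"

// taskLoggerKeySketch is an unexported key type, so the stored value cannot
// collide with context values set by other packages.
type taskLoggerKeySketch struct{}

func withTaskLoggerSketch(ctx context.Context, l *Logger) context.Context {
    return context.WithValue(ctx, taskLoggerKeySketch{}, l)
}

func taskLoggerFromContextSketch(ctx context.Context) *Logger {
    if ctx == nil {
        return nil // a nil context carries no logger
    }
    l, _ := ctx.Value(taskLoggerKeySketch{}).(*Logger)
    return l // nil when absent, and when a nil logger was injected
}
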
func TestExecutorExecuteConcurrentWithContextBranches(t *testing.T) {
    devNull, err := os.OpenFile(os.DevNull, os.O_WRONLY, 0)
    if err != nil {
        t.Fatalf("failed to open %s: %v", os.DevNull, err)
    }
    oldStderr := os.Stderr
    os.Stderr = devNull
    t.Cleanup(func() {
        os.Stderr = oldStderr
        _ = devNull.Close()
    })

    t.Run("skipOnFailedDependencies", func(t *testing.T) {
        root := nextExecutorTestTaskID("root")
        child := nextExecutorTestTaskID("child")

        orig := runCodexTaskFn
        runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
            if task.ID == root {
                return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "boom"}
            }
            return TaskResult{TaskID: task.ID, ExitCode: 0}
        }
        t.Cleanup(func() { runCodexTaskFn = orig })

        results := executeConcurrentWithContext(context.Background(), [][]TaskSpec{
            {{ID: root}},
            {{ID: child, Dependencies: []string{root}}},
        }, 1, 0)

        foundChild := false
        for _, res := range results {
            if res.LogPath != "" {
                _ = os.Remove(res.LogPath)
            }
            if res.TaskID != child {
                continue
            }
            foundChild = true
            if res.ExitCode == 0 || !strings.Contains(res.Error, "skipped") {
                t.Fatalf("expected skipped child task result, got %+v", res)
            }
        }
        if !foundChild {
            t.Fatalf("expected child task to be present in results")
        }
    })

    t.Run("panicRecovered", func(t *testing.T) {
        taskID := nextExecutorTestTaskID("panic")

        orig := runCodexTaskFn
        runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
            panic("boom")
        }
        t.Cleanup(func() { runCodexTaskFn = orig })

        results := executeConcurrentWithContext(context.Background(), [][]TaskSpec{{{ID: taskID}}}, 1, 0)
        if len(results) != 1 {
            t.Fatalf("expected 1 result, got %d", len(results))
        }
        if results[0].ExitCode == 0 || !strings.Contains(results[0].Error, "panic") {
            t.Fatalf("expected panic result, got %+v", results[0])
        }
        if results[0].LogPath == "" {
            t.Fatalf("expected LogPath on panic result")
        }
        _ = os.Remove(results[0].LogPath)
    })

    t.Run("cancelWhileWaitingForWorker", func(t *testing.T) {
        task1 := nextExecutorTestTaskID("slot")
        task2 := nextExecutorTestTaskID("slot")

        parentCtx, cancel := context.WithCancel(context.Background())
        started := make(chan struct{})
        unblock := make(chan struct{})
        var startedOnce sync.Once

        orig := runCodexTaskFn
        runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
            startedOnce.Do(func() { close(started) })
            <-unblock
            return TaskResult{TaskID: task.ID, ExitCode: 0}
        }
        t.Cleanup(func() { runCodexTaskFn = orig })

        go func() {
            <-started
            cancel()
            time.Sleep(50 * time.Millisecond)
            close(unblock)
        }()

        results := executeConcurrentWithContext(parentCtx, [][]TaskSpec{{{ID: task1}, {ID: task2}}}, 1, 1)
        foundCancelled := false
        for _, res := range results {
            if res.LogPath != "" {
                _ = os.Remove(res.LogPath)
            }
            if res.ExitCode == 130 {
                foundCancelled = true
            }
        }
        if !foundCancelled {
            t.Fatalf("expected a task to be cancelled")
        }
    })

    t.Run("loggerCreateFails", func(t *testing.T) {
        taskID := nextExecutorTestTaskID("bad") + "/id"

        orig := runCodexTaskFn
        runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
            return TaskResult{TaskID: task.ID, ExitCode: 0}
        }
        t.Cleanup(func() { runCodexTaskFn = orig })

        results := executeConcurrentWithContext(context.Background(), [][]TaskSpec{{{ID: taskID}}}, 1, 0)
        if len(results) != 1 || results[0].ExitCode != 0 {
            t.Fatalf("unexpected results: %+v", results)
        }
    })

    t.Run("TestConcurrentTaskLoggerFailure", func(t *testing.T) {
        // Create a writable temp dir for the main logger, then flip TMPDIR to a read-only
        // location so task-specific loggers fail to open.
        writable := t.TempDir()
        t.Setenv("TMPDIR", writable)

        mainLogger, err := NewLoggerWithSuffix("shared-main")
        if err != nil {
            t.Fatalf("NewLoggerWithSuffix() error = %v", err)
        }
        setLogger(mainLogger)
        t.Cleanup(func() {
            mainLogger.Flush()
            _ = closeLogger()
            _ = os.Remove(mainLogger.Path())
        })

        noWrite := filepath.Join(writable, "ro")
        if err := os.Mkdir(noWrite, 0o500); err != nil {
            t.Fatalf("failed to create read-only temp dir: %v", err)
        }
        t.Setenv("TMPDIR", noWrite)

        taskA := nextExecutorTestTaskID("shared-a")
        taskB := nextExecutorTestTaskID("shared-b")

        orig := runCodexTaskFn
        runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
            logger := taskLoggerFromContext(task.Context)
            if logger != mainLogger {
                return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "unexpected logger"}
            }
            logger.Info("TASK=" + task.ID)
            return TaskResult{TaskID: task.ID, ExitCode: 0}
        }
        t.Cleanup(func() { runCodexTaskFn = orig })

        stderrR, stderrW, err := os.Pipe()
        if err != nil {
            t.Fatalf("os.Pipe() error = %v", err)
        }
        oldStderr := os.Stderr
        os.Stderr = stderrW

        results := executeConcurrentWithContext(context.Background(), [][]TaskSpec{{{ID: taskA}, {ID: taskB}}}, 1, 0)

        _ = stderrW.Close()
        os.Stderr = oldStderr
        stderrData, _ := io.ReadAll(stderrR)
        _ = stderrR.Close()
        stderrOut := string(stderrData)

        if len(results) != 2 {
            t.Fatalf("expected 2 results, got %d", len(results))
        }
        for _, res := range results {
            if res.ExitCode != 0 || res.Error != "" {
                t.Fatalf("task failed unexpectedly: %+v", res)
            }
            if res.LogPath != mainLogger.Path() {
                t.Fatalf("shared log path mismatch: got %q want %q", res.LogPath, mainLogger.Path())
            }
            if !res.sharedLog {
                t.Fatalf("expected sharedLog flag for %+v", res)
            }
            if !strings.Contains(stderrOut, "Log (shared)") {
                t.Fatalf("stderr missing shared marker: %s", stderrOut)
            }
        }

        summary := generateFinalOutput(results)
        if !strings.Contains(summary, "(shared)") {
            t.Fatalf("summary missing shared marker: %s", summary)
        }

        mainLogger.Flush()
        data, err := os.ReadFile(mainLogger.Path())
        if err != nil {
            t.Fatalf("failed to read main log: %v", err)
        }
        content := string(data)
        if !strings.Contains(content, "TASK="+taskA) || !strings.Contains(content, "TASK="+taskB) {
            t.Fatalf("expected shared log to contain both tasks, got: %s", content)
        }
    })

    t.Run("TestSanitizeTaskID", func(t *testing.T) {
        tempDir := t.TempDir()
        t.Setenv("TMPDIR", tempDir)

        orig := runCodexTaskFn
        runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
            logger := taskLoggerFromContext(task.Context)
            if logger == nil {
                return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "missing logger"}
            }
            logger.Info("TASK=" + task.ID)
            return TaskResult{TaskID: task.ID, ExitCode: 0}
        }
        t.Cleanup(func() { runCodexTaskFn = orig })

        idA := "../bad id"
        idB := "tab\tid"
        results := executeConcurrentWithContext(context.Background(), [][]TaskSpec{{{ID: idA}, {ID: idB}}}, 1, 0)

        if len(results) != 2 {
            t.Fatalf("expected 2 results, got %d", len(results))
        }

        expected := map[string]string{
            idA: sanitizeLogSuffix(idA),
            idB: sanitizeLogSuffix(idB),
        }

        for _, res := range results {
            if res.ExitCode != 0 || res.Error != "" {
                t.Fatalf("unexpected failure: %+v", res)
            }
            safe, ok := expected[res.TaskID]
            if !ok {
                t.Fatalf("unexpected task id %q in results", res.TaskID)
            }
            wantBase := fmt.Sprintf("%s-%d-%s.log", primaryLogPrefix(), os.Getpid(), safe)
            if filepath.Base(res.LogPath) != wantBase {
                t.Fatalf("log filename for %q = %q, want %q", res.TaskID, filepath.Base(res.LogPath), wantBase)
            }
            data, err := os.ReadFile(res.LogPath)
            if err != nil {
                t.Fatalf("failed to read log %q: %v", res.LogPath, err)
            }
            if !strings.Contains(string(data), "TASK="+res.TaskID) {
                t.Fatalf("log for %q missing task marker, content: %s", res.TaskID, string(data))
            }
            _ = os.Remove(res.LogPath)
        }
    })
}

func TestExecutorSignalAndTermination(t *testing.T) {
    forceKillDelay.Store(0)
    defer forceKillDelay.Store(5)
@@ -574,3 +1290,70 @@ func TestExecutorForwardSignalsDefaults(t *testing.T) {
    forwardSignals(ctx, &execFakeRunner{process: &execFakeProcess{pid: 80}}, func(string) {})
    time.Sleep(10 * time.Millisecond)
}

func TestExecutorSharedLogFalseWhenCustomLogPath(t *testing.T) {
    devNull, err := os.OpenFile(os.DevNull, os.O_WRONLY, 0)
    if err != nil {
        t.Fatalf("failed to open %s: %v", os.DevNull, err)
    }
    oldStderr := os.Stderr
    os.Stderr = devNull
    t.Cleanup(func() {
        os.Stderr = oldStderr
        _ = devNull.Close()
    })

    tempDir := t.TempDir()
    t.Setenv("TMPDIR", tempDir)

    // Setup: create the main logger
    mainLogger, err := NewLoggerWithSuffix("shared-main")
    if err != nil {
        t.Fatalf("NewLoggerWithSuffix() error = %v", err)
    }
    setLogger(mainLogger)
    defer func() {
        _ = closeLogger()
        _ = os.Remove(mainLogger.Path())
    }()

    // Simulated scenario: task logger creation fails (via a read-only TMPDIR),
    // so execution falls back to the main logger (handle.shared=true),
    // but runCodexTaskFn returns a custom LogPath that differs from the main logger's path.
    roDir := filepath.Join(tempDir, "ro")
    if err := os.Mkdir(roDir, 0o500); err != nil {
        t.Fatalf("failed to create read-only dir: %v", err)
    }
    t.Setenv("TMPDIR", roDir)

    orig := runCodexTaskFn
    customLogPath := "/custom/path/to.log"
    runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
        // Return a custom LogPath that is not the main logger's path.
        return TaskResult{
            TaskID:   task.ID,
            ExitCode: 0,
            LogPath:  customLogPath,
        }
    }
    defer func() { runCodexTaskFn = orig }()

    // Run the task.
    results := executeConcurrentWithContext(context.Background(), [][]TaskSpec{{{ID: "task1"}}}, 1, 0)

    if len(results) != 1 {
        t.Fatalf("expected 1 result, got %d", len(results))
    }

    res := results[0]
    // Key assertion: even though handle.shared=true (because task logger creation failed),
    // sharedLog must be false since LogPath differs from the main logger's path.
    if res.sharedLog {
        t.Fatalf("expected sharedLog=false when LogPath differs from shared logger, got true")
    }

    // Verify the LogPath really is the custom one.
    if res.LogPath != customLogPath {
        t.Fatalf("expected custom LogPath %s, got %s", customLogPath, res.LogPath)
    }
}

codeagent-wrapper/filter.go (new file, 66 lines)
@@ -0,0 +1,66 @@
package main

import (
    "bytes"
    "io"
    "strings"
)

// geminiNoisePatterns contains stderr patterns to filter for gemini backend
var geminiNoisePatterns = []string{
    "[STARTUP]",
    "Session cleanup disabled",
    "Warning:",
    "(node:",
    "(Use `node --trace-warnings",
    "Loaded cached credentials",
    "Loading extension:",
    "YOLO mode is enabled",
}

// filteringWriter wraps an io.Writer and filters out lines matching patterns
type filteringWriter struct {
    w        io.Writer
    patterns []string
    buf      bytes.Buffer
}

func newFilteringWriter(w io.Writer, patterns []string) *filteringWriter {
    return &filteringWriter{w: w, patterns: patterns}
}

func (f *filteringWriter) Write(p []byte) (n int, err error) {
    f.buf.Write(p)
    for {
        line, err := f.buf.ReadString('\n')
        if err != nil {
            // incomplete line, put it back
            f.buf.WriteString(line)
            break
        }
        if !f.shouldFilter(line) {
            f.w.Write([]byte(line))
        }
    }
    return len(p), nil
}

func (f *filteringWriter) shouldFilter(line string) bool {
    for _, pattern := range f.patterns {
        if strings.Contains(line, pattern) {
            return true
        }
    }
    return false
}

// Flush writes any remaining buffered content
func (f *filteringWriter) Flush() {
    if f.buf.Len() > 0 {
        remaining := f.buf.String()
        if !f.shouldFilter(remaining) {
            f.w.Write([]byte(remaining))
        }
        f.buf.Reset()
    }
}
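
A usage sketch for this new file (runGeminiFiltered and the literal gemini invocation are assumptions, not code from this commit): filteringWriter is meant to sit between a backend's stderr and the terminal so the noise patterns above never reach the user.

package main

import (
    "os"
    "os/exec"
)

func runGeminiFiltered(args ...string) error {
    // Route the child's stderr through the filter; known noise lines are dropped.
    fw := newFilteringWriter(os.Stderr, geminiNoisePatterns)
    defer fw.Flush() // forward any trailing partial line once the process exits

    cmd := exec.Command("gemini", args...)
    cmd.Stderr = fw
    return cmd.Run()
}
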
codeagent-wrapper/filter_test.go (new file, 73 lines)
@@ -0,0 +1,73 @@
package main

import (
    "bytes"
    "testing"
)

func TestFilteringWriter(t *testing.T) {
    tests := []struct {
        name     string
        patterns []string
        input    string
        want     string
    }{
        {
            name:     "filter STARTUP lines",
            patterns: geminiNoisePatterns,
            input:    "[STARTUP] Recording metric\nHello World\n[STARTUP] Another line\n",
            want:     "Hello World\n",
        },
        {
            name:     "filter Warning lines",
            patterns: geminiNoisePatterns,
            input:    "Warning: something bad\nActual output\n",
            want:     "Actual output\n",
        },
        {
            name:     "filter multiple patterns",
            patterns: geminiNoisePatterns,
            input:    "YOLO mode is enabled\nSession cleanup disabled\nReal content\nLoading extension: foo\n",
            want:     "Real content\n",
        },
        {
            name:     "no filtering needed",
            patterns: geminiNoisePatterns,
            input:    "Line 1\nLine 2\nLine 3\n",
            want:     "Line 1\nLine 2\nLine 3\n",
        },
        {
            name:     "empty input",
            patterns: geminiNoisePatterns,
            input:    "",
            want:     "",
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            var buf bytes.Buffer
            fw := newFilteringWriter(&buf, tt.patterns)
            fw.Write([]byte(tt.input))
            fw.Flush()

            if got := buf.String(); got != tt.want {
                t.Errorf("got %q, want %q", got, tt.want)
            }
        })
    }
}

func TestFilteringWriterPartialLines(t *testing.T) {
    var buf bytes.Buffer
    fw := newFilteringWriter(&buf, geminiNoisePatterns)

    // Write partial line
    fw.Write([]byte("Hello "))
    fw.Write([]byte("World\n"))
    fw.Flush()

    if got := buf.String(); got != "Hello World\n" {
        t.Errorf("got %q, want %q", got, "Hello World\n")
    }
}

codeagent-wrapper/log_writer_limit_test.go (new file, 39 lines)
@@ -0,0 +1,39 @@
package main

import (
    "os"
    "strings"
    "testing"
)

func TestLogWriterWriteLimitsBuffer(t *testing.T) {
    defer resetTestHooks()

    logger, err := NewLogger()
    if err != nil {
        t.Fatalf("NewLogger error: %v", err)
    }
    setLogger(logger)
    defer closeLogger()

    lw := newLogWriter("P:", 10)
    _, _ = lw.Write([]byte(strings.Repeat("a", 100)))

    if lw.buf.Len() != 10 {
        t.Fatalf("logWriter buffer len=%d, want %d", lw.buf.Len(), 10)
    }
    if !lw.dropped {
        t.Fatalf("expected logWriter to drop overlong line bytes")
    }

    lw.Flush()
    logger.Flush()
    data, err := os.ReadFile(logger.Path())
    if err != nil {
        t.Fatalf("ReadFile error: %v", err)
    }
    if !strings.Contains(string(data), "P:aaaaaaa...") {
        t.Fatalf("log output missing truncated entry, got %q", string(data))
    }
}

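The behavior this test pins down, restated as an illustrative usage sketch (newLogWriter itself is defined elsewhere in this package; the comments describe the contract asserted above, not a separate implementation):

package main

import "strings"

func demoLogWriterLimit() {
    lw := newLogWriter("P:", 10)                      // prefix "P:", 10-byte per-line budget
    _, _ = lw.Write([]byte(strings.Repeat("a", 100))) // bytes beyond the budget are dropped
    lw.Flush()                                        // logs the prefix, the kept bytes, and a "..." marker
}
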
@@ -5,6 +5,7 @@ import (
    "context"
    "errors"
    "fmt"
    "hash/crc32"
    "os"
    "path/filepath"
    "strconv"
@@ -18,22 +19,25 @@ import (
// It is intentionally minimal: a buffered channel + single worker goroutine
// to avoid contention while keeping ordering guarantees.
type Logger struct {
    path      string
    file      *os.File
    writer    *bufio.Writer
    ch        chan logEntry
    flushReq  chan chan struct{}
    done      chan struct{}
    closed    atomic.Bool
    closeOnce sync.Once
    workerWG  sync.WaitGroup
    pendingWG sync.WaitGroup
    flushMu   sync.Mutex
    path         string
    file         *os.File
    writer       *bufio.Writer
    ch           chan logEntry
    flushReq     chan chan struct{}
    done         chan struct{}
    closed       atomic.Bool
    closeOnce    sync.Once
    workerWG     sync.WaitGroup
    pendingWG    sync.WaitGroup
    flushMu      sync.Mutex
    workerErr    error
    errorEntries []string // Cache of recent ERROR/WARN entries
    errorMu      sync.Mutex
}

type logEntry struct {
    level string
    msg   string
    msg     string
    isError bool // true for ERROR or WARN levels
}

// CleanupStats captures the outcome of a cleanupOldLogs run.
@@ -55,6 +59,10 @@ var (
    evalSymlinksFn = filepath.EvalSymlinks
)

const maxLogSuffixLen = 64

var logSuffixCounter atomic.Uint64

// NewLogger creates the async logger and starts the worker goroutine.
// The log file is created under os.TempDir() using the required naming scheme.
func NewLogger() (*Logger, error) {
@@ -64,14 +72,23 @@ func NewLogger() (*Logger, error) {
// NewLoggerWithSuffix creates a logger with an optional suffix in the filename.
// Useful for tests that need isolated log files within the same process.
func NewLoggerWithSuffix(suffix string) (*Logger, error) {
    filename := fmt.Sprintf("%s-%d", primaryLogPrefix(), os.Getpid())
    pid := os.Getpid()
    filename := fmt.Sprintf("%s-%d", primaryLogPrefix(), pid)
    var safeSuffix string
    if suffix != "" {
        filename += "-" + suffix
        safeSuffix = sanitizeLogSuffix(suffix)
    }
    if safeSuffix != "" {
        filename += "-" + safeSuffix
    }
    filename += ".log"

    path := filepath.Clean(filepath.Join(os.TempDir(), filename))

    if err := os.MkdirAll(filepath.Dir(path), 0o700); err != nil {
        return nil, err
    }

    f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o600)
    if err != nil {
        return nil, err
@@ -92,6 +109,73 @@ func NewLoggerWithSuffix(suffix string) (*Logger, error) {
    return l, nil
}

func sanitizeLogSuffix(raw string) string {
    trimmed := strings.TrimSpace(raw)
    if trimmed == "" {
        return fallbackLogSuffix()
    }

    var b strings.Builder
    changed := false
    for _, r := range trimmed {
        if isSafeLogRune(r) {
            b.WriteRune(r)
        } else {
            changed = true
            b.WriteByte('-')
        }
        if b.Len() >= maxLogSuffixLen {
            changed = true
            break
        }
    }

    sanitized := strings.Trim(b.String(), "-.")
    if sanitized != b.String() {
        changed = true // Mark if trim removed any characters
    }
    if sanitized == "" {
        return fallbackLogSuffix()
    }

    if changed || len(sanitized) > maxLogSuffixLen {
        hash := crc32.ChecksumIEEE([]byte(trimmed))
        hashStr := fmt.Sprintf("%x", hash)

        maxPrefix := maxLogSuffixLen - len(hashStr) - 1
        if maxPrefix < 1 {
            maxPrefix = 1
        }
        if len(sanitized) > maxPrefix {
            sanitized = sanitized[:maxPrefix]
        }

        sanitized = fmt.Sprintf("%s-%s", sanitized, hashStr)
    }

    return sanitized
}

func fallbackLogSuffix() string {
    next := logSuffixCounter.Add(1)
    return fmt.Sprintf("task-%d", next)
}

func isSafeLogRune(r rune) bool {
    switch {
    case r >= 'a' && r <= 'z':
        return true
    case r >= 'A' && r <= 'Z':
        return true
    case r >= '0' && r <= '9':
        return true
    case r == '-', r == '_', r == '.':
        return true
    default:
        return false
    }
}
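
A quick demonstration of the sanitization contract (demoSanitize is illustrative; concrete hash tails depend on the CRC32 of the raw input and are deliberately not spelled out):

package main

import "fmt"

// Safe runes pass through unchanged; anything else becomes '-'; the result is
// trimmed of leading/trailing '-' and '.'; any changed or overlong suffix gains
// a CRC32 hex tail so two distinct raw IDs cannot collide on one filename; and
// whitespace-only input falls back to a process-unique "task-N" suffix.
func demoSanitize() {
    for _, raw := range []string{"task-1", "../bad id", "tab\tid", "   "} {
        fmt.Printf("%q -> %q\n", raw, sanitizeLogSuffix(raw))
    }
}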

// Path returns the underlying log file path (useful for tests/inspection).
func (l *Logger) Path() string {
    if l == nil {
@@ -112,10 +196,11 @@ func (l *Logger) Debug(msg string) { l.log("DEBUG", msg) }
// Error logs at ERROR level.
func (l *Logger) Error(msg string) { l.log("ERROR", msg) }

// Close stops the worker and syncs the log file.
// Close signals the worker to flush and close the log file.
// The log file is NOT removed, allowing inspection after program exit.
// It is safe to call multiple times.
// Returns after a 5-second timeout if worker doesn't stop gracefully.
// Waits up to CODEAGENT_LOGGER_CLOSE_TIMEOUT_MS (default: 5000) for shutdown; set to 0 to wait indefinitely.
// Returns an error if shutdown doesn't complete within the timeout.
func (l *Logger) Close() error {
    if l == nil {
        return nil
@@ -126,42 +211,51 @@ func (l *Logger) Close() error {
    l.closeOnce.Do(func() {
        l.closed.Store(true)
        close(l.done)
        close(l.ch)

        // Wait for worker with timeout
        timeout := loggerCloseTimeout()
        workerDone := make(chan struct{})
        go func() {
            l.workerWG.Wait()
            close(workerDone)
        }()

        select {
        case <-workerDone:
            // Worker stopped gracefully
        case <-time.After(5 * time.Second):
            // Worker timeout - proceed with cleanup anyway
            closeErr = fmt.Errorf("logger worker timeout during close")
        if timeout > 0 {
            select {
            case <-workerDone:
                // Worker stopped gracefully
            case <-time.After(timeout):
                closeErr = fmt.Errorf("logger worker timeout during close")
                return
            }
        } else {
            <-workerDone
        }

        if err := l.writer.Flush(); err != nil && closeErr == nil {
            closeErr = err
        if l.workerErr != nil && closeErr == nil {
            closeErr = l.workerErr
        }

        if err := l.file.Sync(); err != nil && closeErr == nil {
            closeErr = err
        }

        if err := l.file.Close(); err != nil && closeErr == nil {
            closeErr = err
        }

        // Log file is kept for debugging - NOT removed
        // Users can manually clean up /tmp/<wrapper>-*.log files
    })

    return closeErr
}

func loggerCloseTimeout() time.Duration {
    const defaultTimeout = 5 * time.Second

    raw := strings.TrimSpace(os.Getenv("CODEAGENT_LOGGER_CLOSE_TIMEOUT_MS"))
    if raw == "" {
        return defaultTimeout
    }
    ms, err := strconv.Atoi(raw)
    if err != nil {
        return defaultTimeout
    }
    if ms <= 0 {
        return 0
    }
    return time.Duration(ms) * time.Millisecond
}

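The timeout knob in use (an assumed demonstration; the values mirror the parsing rules directly above):

package main

import (
    "fmt"
    "os"
)

func demoCloseTimeout() {
    os.Setenv("CODEAGENT_LOGGER_CLOSE_TIMEOUT_MS", "500")
    fmt.Println(loggerCloseTimeout()) // 500ms: Close waits at most half a second

    os.Setenv("CODEAGENT_LOGGER_CLOSE_TIMEOUT_MS", "0")
    fmt.Println(loggerCloseTimeout()) // 0: Close blocks until the worker exits

    os.Setenv("CODEAGENT_LOGGER_CLOSE_TIMEOUT_MS", "oops")
    fmt.Println(loggerCloseTimeout()) // 5s: invalid or empty values use the default
}
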
// RemoveLogFile removes the log file. Should only be called after Close().
func (l *Logger) RemoveLogFile() error {
    if l == nil {
@@ -170,34 +264,29 @@ func (l *Logger) RemoveLogFile() error {
    return os.Remove(l.path)
}

// ExtractRecentErrors reads the log file and returns the most recent ERROR and WARN entries.
// ExtractRecentErrors returns the most recent ERROR and WARN entries from memory cache.
// Returns up to maxEntries entries in chronological order.
func (l *Logger) ExtractRecentErrors(maxEntries int) []string {
    if l == nil || l.path == "" {
    if l == nil || maxEntries <= 0 {
        return nil
    }

    f, err := os.Open(l.path)
    if err != nil {
    l.errorMu.Lock()
    defer l.errorMu.Unlock()

    if len(l.errorEntries) == 0 {
        return nil
    }
    defer f.Close()

    var entries []string
    scanner := bufio.NewScanner(f)
    for scanner.Scan() {
        line := scanner.Text()
        if strings.Contains(line, "] ERROR:") || strings.Contains(line, "] WARN:") {
            entries = append(entries, line)
        }
    // Return last N entries
    start := 0
    if len(l.errorEntries) > maxEntries {
        start = len(l.errorEntries) - maxEntries
    }

    // Keep only the last maxEntries
    if len(entries) > maxEntries {
        entries = entries[len(entries)-maxEntries:]
    }

    return entries
    result := make([]string, len(l.errorEntries)-start)
    copy(result, l.errorEntries[start:])
    return result
}

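The new in-memory cache in one sketch (an assumed call pattern; the cached strings are whatever formatted lines reach the worker): only WARN and ERROR entries are retained, capped at the 100 most recent, and callers get at most maxEntries of them, oldest first.

package main

func demoErrorCache(logger *Logger) []string {
    logger.Info("not cached")       // INFO entries never enter the cache
    logger.Warn("disk nearly full") // cached (WARN)
    logger.Error("write failed")    // cached (ERROR)
    logger.Flush()
    // At most 10 entries come back; here, the WARN and ERROR lines.
    return logger.ExtractRecentErrors(10)
}
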
// Flush waits for all pending log entries to be written. Primarily for tests.
@@ -254,7 +343,8 @@ func (l *Logger) log(level, msg string) {
        return
    }

    entry := logEntry{level: level, msg: msg}
    isError := level == "WARN" || level == "ERROR"
    entry := logEntry{msg: msg, isError: isError}
    l.flushMu.Lock()
    l.pendingWG.Add(1)
    l.flushMu.Unlock()
@@ -275,18 +365,42 @@ func (l *Logger) run() {
    ticker := time.NewTicker(500 * time.Millisecond)
    defer ticker.Stop()

    writeEntry := func(entry logEntry) {
        fmt.Fprintf(l.writer, "%s\n", entry.msg)

        // Cache error/warn entries in memory for fast extraction
        if entry.isError {
            l.errorMu.Lock()
            l.errorEntries = append(l.errorEntries, entry.msg)
            if len(l.errorEntries) > 100 { // Keep last 100
                l.errorEntries = l.errorEntries[1:]
            }
            l.errorMu.Unlock()
        }

        l.pendingWG.Done()
    }

    finalize := func() {
        if err := l.writer.Flush(); err != nil && l.workerErr == nil {
            l.workerErr = err
        }
        if err := l.file.Sync(); err != nil && l.workerErr == nil {
            l.workerErr = err
        }
        if err := l.file.Close(); err != nil && l.workerErr == nil {
            l.workerErr = err
        }
    }

    for {
        select {
        case entry, ok := <-l.ch:
            if !ok {
                // Channel closed, final flush
                _ = l.writer.Flush()
                finalize()
                return
            }
            timestamp := time.Now().Format("2006-01-02 15:04:05.000")
            pid := os.Getpid()
            fmt.Fprintf(l.writer, "[%s] [PID:%d] %s: %s\n", timestamp, pid, entry.level, entry.msg)
            l.pendingWG.Done()
            writeEntry(entry)

        case <-ticker.C:
            _ = l.writer.Flush()
@@ -296,6 +410,21 @@ func (l *Logger) run() {
            _ = l.writer.Flush()
            _ = l.file.Sync()
            close(flushDone)

        case <-l.done:
            for {
                select {
                case entry, ok := <-l.ch:
                    if !ok {
                        finalize()
                        return
                    }
                    writeEntry(entry)
                default:
                    finalize()
                    return
                }
            }
        }
    }
}

codeagent-wrapper/logger_additional_coverage_test.go (new file, 158 lines)
@@ -0,0 +1,158 @@
package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
    "testing"
)

func TestLoggerNilReceiverNoop(t *testing.T) {
    var logger *Logger
    logger.Info("info")
    logger.Warn("warn")
    logger.Debug("debug")
    logger.Error("error")
    logger.Flush()
    if err := logger.Close(); err != nil {
        t.Fatalf("Close() on nil logger should return nil, got %v", err)
    }
}

func TestLoggerConcurrencyLogHelpers(t *testing.T) {
    setTempDirEnv(t, t.TempDir())

    logger, err := NewLoggerWithSuffix("concurrency")
    if err != nil {
        t.Fatalf("NewLoggerWithSuffix error: %v", err)
    }
    setLogger(logger)
    defer closeLogger()

    logConcurrencyPlanning(0, 2)
    logConcurrencyPlanning(3, 2)
    logConcurrencyState("start", "task-1", 1, 0)
    logConcurrencyState("done", "task-1", 0, 3)
    logger.Flush()

    data, err := os.ReadFile(logger.Path())
    if err != nil {
        t.Fatalf("failed to read log file: %v", err)
    }
    output := string(data)

    checks := []string{
        "parallel: worker_limit=unbounded total_tasks=2",
        "parallel: worker_limit=3 total_tasks=2",
        "parallel: start task=task-1 active=1 limit=unbounded",
        "parallel: done task=task-1 active=0 limit=3",
    }
    for _, c := range checks {
        if !strings.Contains(output, c) {
            t.Fatalf("log output missing %q, got: %s", c, output)
        }
    }
}

func TestLoggerConcurrencyLogHelpersNoopWithoutActiveLogger(t *testing.T) {
    _ = closeLogger()
    logConcurrencyPlanning(1, 1)
    logConcurrencyState("start", "task-1", 0, 1)
}

func TestLoggerCleanupOldLogsSkipsUnsafeAndHandlesAlreadyDeleted(t *testing.T) {
    tempDir := setTempDirEnv(t, t.TempDir())

    unsafePath := createTempLog(t, tempDir, fmt.Sprintf("%s-%d.log", primaryLogPrefix(), 222))
    orphanPath := createTempLog(t, tempDir, fmt.Sprintf("%s-%d.log", primaryLogPrefix(), 111))

    stubFileStat(t, func(path string) (os.FileInfo, error) {
        if path == unsafePath {
            return fakeFileInfo{mode: os.ModeSymlink}, nil
        }
        return os.Lstat(path)
    })

    stubProcessRunning(t, func(pid int) bool {
        if pid == 111 {
            _ = os.Remove(orphanPath)
        }
        return false
    })

    stats, err := cleanupOldLogs()
    if err != nil {
        t.Fatalf("cleanupOldLogs() unexpected error: %v", err)
    }

    if stats.Scanned != 2 {
        t.Fatalf("scanned = %d, want %d", stats.Scanned, 2)
    }
    if stats.Deleted != 0 {
        t.Fatalf("deleted = %d, want %d", stats.Deleted, 0)
    }
    if stats.Kept != 2 {
        t.Fatalf("kept = %d, want %d", stats.Kept, 2)
    }
    if stats.Errors != 0 {
        t.Fatalf("errors = %d, want %d", stats.Errors, 0)
    }

    hasSkip := false
    hasAlreadyDeleted := false
    for _, name := range stats.KeptFiles {
        if strings.Contains(name, "already deleted") {
            hasAlreadyDeleted = true
        }
        if strings.Contains(name, filepath.Base(unsafePath)) {
            hasSkip = true
        }
    }
    if !hasSkip {
        t.Fatalf("expected kept files to include unsafe log %q, got %+v", filepath.Base(unsafePath), stats.KeptFiles)
    }
    if !hasAlreadyDeleted {
        t.Fatalf("expected kept files to include already deleted marker, got %+v", stats.KeptFiles)
    }
}

func TestLoggerIsUnsafeFileErrorPaths(t *testing.T) {
    tempDir := t.TempDir()

    t.Run("stat ErrNotExist", func(t *testing.T) {
        stubFileStat(t, func(string) (os.FileInfo, error) {
            return nil, os.ErrNotExist
        })

        unsafe, reason := isUnsafeFile("missing.log", tempDir)
        if !unsafe || reason != "" {
            t.Fatalf("expected missing file to be skipped silently, got unsafe=%v reason=%q", unsafe, reason)
        }
    })

    t.Run("stat error", func(t *testing.T) {
        stubFileStat(t, func(string) (os.FileInfo, error) {
            return nil, fmt.Errorf("boom")
        })

        unsafe, reason := isUnsafeFile("broken.log", tempDir)
        if !unsafe || !strings.Contains(reason, "stat failed") {
            t.Fatalf("expected stat failure to be unsafe, got unsafe=%v reason=%q", unsafe, reason)
        }
    })

    t.Run("EvalSymlinks error", func(t *testing.T) {
        stubFileStat(t, func(string) (os.FileInfo, error) {
            return fakeFileInfo{}, nil
        })
        stubEvalSymlinks(t, func(string) (string, error) {
            return "", fmt.Errorf("resolve failed")
        })

        unsafe, reason := isUnsafeFile("cannot-resolve.log", tempDir)
        if !unsafe || !strings.Contains(reason, "path resolution failed") {
            t.Fatalf("expected resolution failure to be unsafe, got unsafe=%v reason=%q", unsafe, reason)
        }
    })
}

codeagent-wrapper/logger_suffix_test.go (new file, 115 lines)
@@ -0,0 +1,115 @@
package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
    "testing"
)

func TestLoggerWithSuffixNamingAndIsolation(t *testing.T) {
    tempDir := setTempDirEnv(t, t.TempDir())

    taskA := "task-1"
    taskB := "task-2"

    loggerA, err := NewLoggerWithSuffix(taskA)
    if err != nil {
        t.Fatalf("NewLoggerWithSuffix(%q) error = %v", taskA, err)
    }
    defer loggerA.Close()

    loggerB, err := NewLoggerWithSuffix(taskB)
    if err != nil {
        t.Fatalf("NewLoggerWithSuffix(%q) error = %v", taskB, err)
    }
    defer loggerB.Close()

    wantA := filepath.Join(tempDir, fmt.Sprintf("%s-%d-%s.log", primaryLogPrefix(), os.Getpid(), taskA))
    if loggerA.Path() != wantA {
        t.Fatalf("loggerA path = %q, want %q", loggerA.Path(), wantA)
    }

    wantB := filepath.Join(tempDir, fmt.Sprintf("%s-%d-%s.log", primaryLogPrefix(), os.Getpid(), taskB))
    if loggerB.Path() != wantB {
        t.Fatalf("loggerB path = %q, want %q", loggerB.Path(), wantB)
    }

    if loggerA.Path() == loggerB.Path() {
        t.Fatalf("expected different log files, got %q", loggerA.Path())
    }

    loggerA.Info("from taskA")
    loggerB.Info("from taskB")
    loggerA.Flush()
    loggerB.Flush()

    dataA, err := os.ReadFile(loggerA.Path())
    if err != nil {
        t.Fatalf("failed to read loggerA file: %v", err)
    }
    dataB, err := os.ReadFile(loggerB.Path())
    if err != nil {
        t.Fatalf("failed to read loggerB file: %v", err)
    }

    if !strings.Contains(string(dataA), "from taskA") {
        t.Fatalf("loggerA missing its message, got: %q", string(dataA))
    }
    if strings.Contains(string(dataA), "from taskB") {
        t.Fatalf("loggerA contains loggerB message, got: %q", string(dataA))
    }
    if !strings.Contains(string(dataB), "from taskB") {
        t.Fatalf("loggerB missing its message, got: %q", string(dataB))
    }
    if strings.Contains(string(dataB), "from taskA") {
        t.Fatalf("loggerB contains loggerA message, got: %q", string(dataB))
    }
}

func TestLoggerWithSuffixReturnsErrorWhenTempDirNotWritable(t *testing.T) {
    base := t.TempDir()
    noWrite := filepath.Join(base, "ro")
    if err := os.Mkdir(noWrite, 0o500); err != nil {
        t.Fatalf("failed to create read-only temp dir: %v", err)
    }
    t.Cleanup(func() { _ = os.Chmod(noWrite, 0o700) })
    setTempDirEnv(t, noWrite)

    logger, err := NewLoggerWithSuffix("task-err")
    if err == nil {
        _ = logger.Close()
        t.Fatalf("expected error when temp dir is not writable")
    }
}

func TestLoggerWithSuffixSanitizesUnsafeSuffix(t *testing.T) {
    tempDir := setTempDirEnv(t, t.TempDir())

    raw := "../bad id/with?chars"
    safe := sanitizeLogSuffix(raw)
    if safe == "" {
        t.Fatalf("sanitizeLogSuffix returned empty string")
    }
    if strings.ContainsAny(safe, "/\\") {
        t.Fatalf("sanitized suffix should not contain path separators, got %q", safe)
    }

    logger, err := NewLoggerWithSuffix(raw)
    if err != nil {
        t.Fatalf("NewLoggerWithSuffix(%q) error = %v", raw, err)
    }
    t.Cleanup(func() {
        _ = logger.Close()
        _ = os.Remove(logger.Path())
    })

    wantBase := fmt.Sprintf("%s-%d-%s.log", primaryLogPrefix(), os.Getpid(), safe)
    if gotBase := filepath.Base(logger.Path()); gotBase != wantBase {
        t.Fatalf("log filename = %q, want %q", gotBase, wantBase)
    }
    if dir := filepath.Dir(logger.Path()); dir != tempDir {
        t.Fatalf("logger path dir = %q, want %q", dir, tempDir)
    }
}

@@ -26,7 +26,7 @@ func compareCleanupStats(got, want CleanupStats) bool {
    return true
}

func TestRunLoggerCreatesFileWithPID(t *testing.T) {
func TestLoggerCreatesFileWithPID(t *testing.T) {
    tempDir := t.TempDir()
    t.Setenv("TMPDIR", tempDir)

@@ -46,7 +46,7 @@ func TestRunLoggerCreatesFileWithPID(t *testing.T) {
    }
}

func TestRunLoggerWritesLevels(t *testing.T) {
func TestLoggerWritesLevels(t *testing.T) {
    tempDir := t.TempDir()
    t.Setenv("TMPDIR", tempDir)

@@ -69,7 +69,7 @@ func TestRunLoggerWritesLevels(t *testing.T) {
    }

    content := string(data)
    checks := []string{"INFO: info message", "WARN: warn message", "DEBUG: debug message", "ERROR: error message"}
    checks := []string{"info message", "warn message", "debug message", "error message"}
    for _, c := range checks {
        if !strings.Contains(content, c) {
            t.Fatalf("log file missing entry %q, content: %s", c, content)
@@ -77,7 +77,31 @@ func TestRunLoggerWritesLevels(t *testing.T) {
    }
}

func TestRunLoggerCloseRemovesFileAndStopsWorker(t *testing.T) {
func TestLoggerDefaultIsTerminalCoverage(t *testing.T) {
    oldStdin := os.Stdin
    t.Cleanup(func() { os.Stdin = oldStdin })

    f, err := os.CreateTemp(t.TempDir(), "stdin-*")
    if err != nil {
        t.Fatalf("os.CreateTemp() error = %v", err)
    }
    defer os.Remove(f.Name())

    os.Stdin = f
    if got := defaultIsTerminal(); got {
        t.Fatalf("defaultIsTerminal() = %v, want false for regular file", got)
    }

    if err := f.Close(); err != nil {
        t.Fatalf("Close() error = %v", err)
    }
    os.Stdin = f
    if got := defaultIsTerminal(); !got {
        t.Fatalf("defaultIsTerminal() = %v, want true when Stat fails", got)
    }
}

func TestLoggerCloseStopsWorkerAndKeepsFile(t *testing.T) {
    tempDir := t.TempDir()
    t.Setenv("TMPDIR", tempDir)

@@ -94,6 +118,11 @@ func TestRunLoggerCloseRemovesFileAndStopsWorker(t *testing.T) {
    if err := logger.Close(); err != nil {
        t.Fatalf("Close() returned error: %v", err)
    }
    if logger.file != nil {
        if _, err := logger.file.Write([]byte("x")); err == nil {
            t.Fatalf("expected file to be closed after Close()")
        }
    }

    // After recent changes, log file is kept for debugging - NOT removed
    if _, err := os.Stat(logPath); os.IsNotExist(err) {
@@ -116,7 +145,7 @@ func TestRunLoggerCloseRemovesFileAndStopsWorker(t *testing.T) {
    }
}

func TestRunLoggerConcurrentWritesSafe(t *testing.T) {
func TestLoggerConcurrentWritesSafe(t *testing.T) {
    tempDir := t.TempDir()
    t.Setenv("TMPDIR", tempDir)

@@ -165,7 +194,7 @@ func TestRunLoggerConcurrentWritesSafe(t *testing.T) {
    }
}

func TestRunLoggerTerminateProcessActive(t *testing.T) {
func TestLoggerTerminateProcessActive(t *testing.T) {
    cmd := exec.Command("sleep", "5")
    if err := cmd.Start(); err != nil {
        t.Skipf("cannot start sleep command: %v", err)
@@ -193,7 +222,7 @@ func TestRunLoggerTerminateProcessActive(t *testing.T) {
    time.Sleep(10 * time.Millisecond)
}

func TestRunTerminateProcessNil(t *testing.T) {
func TestLoggerTerminateProcessNil(t *testing.T) {
    if timer := terminateProcess(nil); timer != nil {
        t.Fatalf("terminateProcess(nil) should return nil timer")
    }
@@ -202,7 +231,7 @@ func TestRunTerminateProcessNil(t *testing.T) {
    }
}

func TestRunCleanupOldLogsRemovesOrphans(t *testing.T) {
func TestLoggerCleanupOldLogsRemovesOrphans(t *testing.T) {
    tempDir := setTempDirEnv(t, t.TempDir())

    orphan1 := createTempLog(t, tempDir, "codex-wrapper-111.log")
@@ -252,7 +281,7 @@ func TestRunCleanupOldLogsRemovesOrphans(t *testing.T) {
    }
}

func TestRunCleanupOldLogsHandlesInvalidNamesAndErrors(t *testing.T) {
func TestLoggerCleanupOldLogsHandlesInvalidNamesAndErrors(t *testing.T) {
    tempDir := setTempDirEnv(t, t.TempDir())

    invalid := []string{
@@ -310,7 +339,7 @@ func TestRunCleanupOldLogsHandlesInvalidNamesAndErrors(t *testing.T) {
    }
}

func TestRunCleanupOldLogsHandlesGlobFailures(t *testing.T) {
func TestLoggerCleanupOldLogsHandlesGlobFailures(t *testing.T) {
    stubProcessRunning(t, func(pid int) bool {
        t.Fatalf("process check should not run when glob fails")
        return false
@@ -336,7 +365,7 @@ func TestRunCleanupOldLogsHandlesGlobFailures(t *testing.T) {
    }
}

func TestRunCleanupOldLogsEmptyDirectoryStats(t *testing.T) {
func TestLoggerCleanupOldLogsEmptyDirectoryStats(t *testing.T) {
    setTempDirEnv(t, t.TempDir())

    stubProcessRunning(t, func(int) bool {
@@ -356,7 +385,7 @@ func TestRunCleanupOldLogsEmptyDirectoryStats(t *testing.T) {
    }
}

func TestRunCleanupOldLogsHandlesTempDirPermissionErrors(t *testing.T) {
func TestLoggerCleanupOldLogsHandlesTempDirPermissionErrors(t *testing.T) {
    tempDir := setTempDirEnv(t, t.TempDir())

    paths := []string{
@@ -396,7 +425,7 @@ func TestRunCleanupOldLogsHandlesTempDirPermissionErrors(t *testing.T) {
    }
}

func TestRunCleanupOldLogsHandlesPermissionDeniedFile(t *testing.T) {
func TestLoggerCleanupOldLogsHandlesPermissionDeniedFile(t *testing.T) {
    tempDir := setTempDirEnv(t, t.TempDir())

    protected := createTempLog(t, tempDir, "codex-wrapper-6200.log")
@@ -433,7 +462,7 @@ func TestRunCleanupOldLogsHandlesPermissionDeniedFile(t *testing.T) {
    }
}

func TestRunCleanupOldLogsPerformanceBound(t *testing.T) {
func TestLoggerCleanupOldLogsPerformanceBound(t *testing.T) {
    tempDir := setTempDirEnv(t, t.TempDir())

    const fileCount = 400
@@ -476,17 +505,98 @@ func TestRunCleanupOldLogsPerformanceBound(t *testing.T) {
    }
}

func TestRunCleanupOldLogsCoverageSuite(t *testing.T) {
func TestLoggerCleanupOldLogsCoverageSuite(t *testing.T) {
    TestBackendParseJSONStream_CoverageSuite(t)
}

// Reuse the existing coverage suite so the focused TestLogger run still exercises
// the rest of the codebase and keeps coverage high.
func TestRunLoggerCoverageSuite(t *testing.T) {
    TestBackendParseJSONStream_CoverageSuite(t)
func TestLoggerCoverageSuite(t *testing.T) {
    suite := []struct {
        name string
        fn   func(*testing.T)
    }{
        {"TestBackendParseJSONStream_CoverageSuite", TestBackendParseJSONStream_CoverageSuite},
        {"TestVersionCoverageFullRun", TestVersionCoverageFullRun},
        {"TestVersionMainWrapper", TestVersionMainWrapper},

        {"TestExecutorHelperCoverage", TestExecutorHelperCoverage},
        {"TestExecutorRunCodexTaskWithContext", TestExecutorRunCodexTaskWithContext},
        {"TestExecutorParallelLogIsolation", TestExecutorParallelLogIsolation},
        {"TestExecutorTaskLoggerContext", TestExecutorTaskLoggerContext},
        {"TestExecutorExecuteConcurrentWithContextBranches", TestExecutorExecuteConcurrentWithContextBranches},
        {"TestExecutorSignalAndTermination", TestExecutorSignalAndTermination},
        {"TestExecutorCancelReasonAndCloseWithReason", TestExecutorCancelReasonAndCloseWithReason},
        {"TestExecutorForceKillTimerStop", TestExecutorForceKillTimerStop},
        {"TestExecutorForwardSignalsDefaults", TestExecutorForwardSignalsDefaults},

        {"TestBackendParseArgs_NewMode", TestBackendParseArgs_NewMode},
        {"TestBackendParseArgs_ResumeMode", TestBackendParseArgs_ResumeMode},
        {"TestBackendParseArgs_BackendFlag", TestBackendParseArgs_BackendFlag},
        {"TestBackendParseArgs_SkipPermissions", TestBackendParseArgs_SkipPermissions},
        {"TestBackendParseBoolFlag", TestBackendParseBoolFlag},
        {"TestBackendEnvFlagEnabled", TestBackendEnvFlagEnabled},
        {"TestRunResolveTimeout", TestRunResolveTimeout},
        {"TestRunIsTerminal", TestRunIsTerminal},
        {"TestRunReadPipedTask", TestRunReadPipedTask},
        {"TestTailBufferWrite", TestTailBufferWrite},
        {"TestLogWriterWriteLimitsBuffer", TestLogWriterWriteLimitsBuffer},
        {"TestLogWriterLogLine", TestLogWriterLogLine},
        {"TestNewLogWriterDefaultMaxLen", TestNewLogWriterDefaultMaxLen},
        {"TestNewLogWriterDefaultLimit", TestNewLogWriterDefaultLimit},
        {"TestRunHello", TestRunHello},
        {"TestRunGreet", TestRunGreet},
        {"TestRunFarewell", TestRunFarewell},
        {"TestRunFarewellEmpty", TestRunFarewellEmpty},

        {"TestParallelParseConfig_Success", TestParallelParseConfig_Success},
        {"TestParallelParseConfig_Backend", TestParallelParseConfig_Backend},
        {"TestParallelParseConfig_InvalidFormat", TestParallelParseConfig_InvalidFormat},
        {"TestParallelParseConfig_EmptyTasks", TestParallelParseConfig_EmptyTasks},
        {"TestParallelParseConfig_MissingID", TestParallelParseConfig_MissingID},
        {"TestParallelParseConfig_MissingTask", TestParallelParseConfig_MissingTask},
        {"TestParallelParseConfig_DuplicateID", TestParallelParseConfig_DuplicateID},
        {"TestParallelParseConfig_DelimiterFormat", TestParallelParseConfig_DelimiterFormat},

        {"TestBackendSelectBackend", TestBackendSelectBackend},
        {"TestBackendSelectBackend_Invalid", TestBackendSelectBackend_Invalid},
        {"TestBackendSelectBackend_DefaultOnEmpty", TestBackendSelectBackend_DefaultOnEmpty},
        {"TestBackendBuildArgs_CodexBackend", TestBackendBuildArgs_CodexBackend},
        {"TestBackendBuildArgs_ClaudeBackend", TestBackendBuildArgs_ClaudeBackend},
        {"TestClaudeBackendBuildArgs_OutputValidation", TestClaudeBackendBuildArgs_OutputValidation},
        {"TestBackendBuildArgs_GeminiBackend", TestBackendBuildArgs_GeminiBackend},
        {"TestGeminiBackendBuildArgs_OutputValidation", TestGeminiBackendBuildArgs_OutputValidation},
        {"TestBackendNamesAndCommands", TestBackendNamesAndCommands},

        {"TestBackendParseJSONStream", TestBackendParseJSONStream},
        {"TestBackendParseJSONStream_ClaudeEvents", TestBackendParseJSONStream_ClaudeEvents},
        {"TestBackendParseJSONStream_GeminiEvents", TestBackendParseJSONStream_GeminiEvents},
        {"TestBackendParseJSONStreamWithWarn_InvalidLine", TestBackendParseJSONStreamWithWarn_InvalidLine},
        {"TestBackendParseJSONStream_OnMessage", TestBackendParseJSONStream_OnMessage},
        {"TestBackendParseJSONStream_ScannerError", TestBackendParseJSONStream_ScannerError},
        {"TestBackendDiscardInvalidJSON", TestBackendDiscardInvalidJSON},
        {"TestBackendDiscardInvalidJSONBuffer", TestBackendDiscardInvalidJSONBuffer},

        {"TestCurrentWrapperNameFallsBackToExecutable", TestCurrentWrapperNameFallsBackToExecutable},
        {"TestCurrentWrapperNameDetectsLegacyAliasSymlink", TestCurrentWrapperNameDetectsLegacyAliasSymlink},

        {"TestIsProcessRunning", TestIsProcessRunning},
        {"TestGetProcessStartTimeReadsProcStat", TestGetProcessStartTimeReadsProcStat},
        {"TestGetProcessStartTimeInvalidData", TestGetProcessStartTimeInvalidData},
        {"TestGetBootTimeParsesBtime", TestGetBootTimeParsesBtime},
        {"TestGetBootTimeInvalidData", TestGetBootTimeInvalidData},

        {"TestClaudeBuildArgs_ModesAndPermissions", TestClaudeBuildArgs_ModesAndPermissions},
        {"TestClaudeBuildArgs_GeminiAndCodexModes", TestClaudeBuildArgs_GeminiAndCodexModes},
        {"TestClaudeBuildArgs_BackendMetadata", TestClaudeBuildArgs_BackendMetadata},
    }

    for _, tc := range suite {
|
||||
t.Run(tc.name, tc.fn)
|
||||
}
|
||||
}
|
||||
|
||||
func TestRunCleanupOldLogsKeepsCurrentProcessLog(t *testing.T) {
|
||||
func TestLoggerCleanupOldLogsKeepsCurrentProcessLog(t *testing.T) {
|
||||
tempDir := setTempDirEnv(t, t.TempDir())
|
||||
|
||||
currentPID := os.Getpid()
|
||||
@@ -518,7 +628,7 @@ func TestRunCleanupOldLogsKeepsCurrentProcessLog(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestIsPIDReusedScenarios(t *testing.T) {
|
||||
func TestLoggerIsPIDReusedScenarios(t *testing.T) {
|
||||
now := time.Now()
|
||||
tests := []struct {
|
||||
name string
|
||||
@@ -552,7 +662,7 @@ func TestIsPIDReusedScenarios(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestIsUnsafeFileSecurityChecks(t *testing.T) {
|
||||
func TestLoggerIsUnsafeFileSecurityChecks(t *testing.T) {
|
||||
tempDir := t.TempDir()
|
||||
absTempDir, err := filepath.Abs(tempDir)
|
||||
if err != nil {
|
||||
@@ -601,7 +711,7 @@ func TestIsUnsafeFileSecurityChecks(t *testing.T) {
|
||||
})
|
||||
}
|
||||
|
||||
func TestRunLoggerPathAndRemove(t *testing.T) {
|
||||
func TestLoggerPathAndRemove(t *testing.T) {
|
||||
tempDir := t.TempDir()
|
||||
path := filepath.Join(tempDir, "sample.log")
|
||||
if err := os.WriteFile(path, []byte("test"), 0o644); err != nil {
|
||||
@@ -628,7 +738,19 @@ func TestRunLoggerPathAndRemove(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestRunLoggerInternalLog(t *testing.T) {
|
||||
func TestLoggerTruncateBytesCoverage(t *testing.T) {
|
||||
if got := truncateBytes([]byte("abc"), 3); got != "abc" {
|
||||
t.Fatalf("truncateBytes() = %q, want %q", got, "abc")
|
||||
}
|
||||
if got := truncateBytes([]byte("abcd"), 3); got != "abc..." {
|
||||
t.Fatalf("truncateBytes() = %q, want %q", got, "abc...")
|
||||
}
|
||||
if got := truncateBytes([]byte("abcd"), -1); got != "" {
|
||||
t.Fatalf("truncateBytes() = %q, want empty string", got)
|
||||
}
|
||||
}
|
||||
|
||||
func TestLoggerInternalLog(t *testing.T) {
|
||||
logger := &Logger{
|
||||
ch: make(chan logEntry, 1),
|
||||
done: make(chan struct{}),
|
||||
@@ -644,7 +766,7 @@ func TestRunLoggerInternalLog(t *testing.T) {
|
||||
|
||||
logger.log("INFO", "hello")
|
||||
entry := <-done
|
||||
if entry.level != "INFO" || entry.msg != "hello" {
|
||||
if entry.msg != "hello" {
|
||||
t.Fatalf("unexpected entry %+v", entry)
|
||||
}
|
||||
|
||||
@@ -653,7 +775,7 @@ func TestRunLoggerInternalLog(t *testing.T) {
|
||||
close(logger.done)
|
||||
}
|
||||
|
||||
func TestRunParsePIDFromLog(t *testing.T) {
|
||||
func TestLoggerParsePIDFromLog(t *testing.T) {
|
||||
hugePID := strconv.FormatInt(math.MaxInt64, 10) + "0"
|
||||
tests := []struct {
|
||||
name string
|
||||
@@ -769,69 +891,93 @@ func (f fakeFileInfo) ModTime() time.Time { return f.modTime }
 func (f fakeFileInfo) IsDir() bool { return false }
 func (f fakeFileInfo) Sys() interface{} { return nil }
 
-func TestExtractRecentErrors(t *testing.T) {
+func TestLoggerExtractRecentErrors(t *testing.T) {
 	tests := []struct {
 		name string
-		content string
+		logs []struct{ level, msg string }
 		maxEntries int
 		want []string
 	}{
 		{
 			name: "empty log",
-			content: "",
+			logs: nil,
 			maxEntries: 10,
 			want: nil,
 		},
 		{
 			name: "no errors",
-			content: `[2025-01-01 12:00:00.000] [PID:123] INFO: started
-[2025-01-01 12:00:01.000] [PID:123] DEBUG: processing`,
+			logs: []struct{ level, msg string }{
+				{"INFO", "started"},
+				{"DEBUG", "processing"},
+			},
 			maxEntries: 10,
 			want: nil,
 		},
 		{
 			name: "single error",
-			content: `[2025-01-01 12:00:00.000] [PID:123] INFO: started
-[2025-01-01 12:00:01.000] [PID:123] ERROR: something failed`,
+			logs: []struct{ level, msg string }{
+				{"INFO", "started"},
+				{"ERROR", "something failed"},
+			},
 			maxEntries: 10,
-			want: []string{"[2025-01-01 12:00:01.000] [PID:123] ERROR: something failed"},
+			want: []string{"something failed"},
 		},
 		{
 			name: "error and warn",
-			content: `[2025-01-01 12:00:00.000] [PID:123] INFO: started
-[2025-01-01 12:00:01.000] [PID:123] WARN: warning message
-[2025-01-01 12:00:02.000] [PID:123] ERROR: error message`,
+			logs: []struct{ level, msg string }{
+				{"INFO", "started"},
+				{"WARN", "warning message"},
+				{"ERROR", "error message"},
+			},
 			maxEntries: 10,
 			want: []string{
-				"[2025-01-01 12:00:01.000] [PID:123] WARN: warning message",
-				"[2025-01-01 12:00:02.000] [PID:123] ERROR: error message",
+				"warning message",
+				"error message",
 			},
 		},
 		{
 			name: "truncate to max",
-			content: `[2025-01-01 12:00:00.000] [PID:123] ERROR: error 1
-[2025-01-01 12:00:01.000] [PID:123] ERROR: error 2
-[2025-01-01 12:00:02.000] [PID:123] ERROR: error 3
-[2025-01-01 12:00:03.000] [PID:123] ERROR: error 4
-[2025-01-01 12:00:04.000] [PID:123] ERROR: error 5`,
+			logs: []struct{ level, msg string }{
+				{"ERROR", "error 1"},
+				{"ERROR", "error 2"},
+				{"ERROR", "error 3"},
+				{"ERROR", "error 4"},
+				{"ERROR", "error 5"},
+			},
 			maxEntries: 3,
 			want: []string{
-				"[2025-01-01 12:00:02.000] [PID:123] ERROR: error 3",
-				"[2025-01-01 12:00:03.000] [PID:123] ERROR: error 4",
-				"[2025-01-01 12:00:04.000] [PID:123] ERROR: error 5",
+				"error 3",
+				"error 4",
+				"error 5",
			},
		},
	}
 
 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
-			tempDir := t.TempDir()
-			logPath := filepath.Join(tempDir, "test.log")
-			if err := os.WriteFile(logPath, []byte(tt.content), 0o644); err != nil {
-				t.Fatalf("failed to write test log: %v", err)
+			logger, err := NewLoggerWithSuffix("extract-test")
+			if err != nil {
+				t.Fatalf("NewLoggerWithSuffix() error = %v", err)
 			}
+			defer logger.Close()
+			defer logger.RemoveLogFile()
+
+			// Write logs using logger methods
+			for _, entry := range tt.logs {
+				switch entry.level {
+				case "INFO":
+					logger.Info(entry.msg)
+				case "WARN":
+					logger.Warn(entry.msg)
+				case "ERROR":
+					logger.Error(entry.msg)
+				case "DEBUG":
+					logger.Debug(entry.msg)
+				}
+			}
 
-			logger := &Logger{path: logPath}
+			logger.Flush()
 
 			got := logger.ExtractRecentErrors(tt.maxEntries)
 
 			if len(got) != len(tt.want) {
@@ -846,23 +992,137 @@ func TestExtractRecentErrors(t *testing.T) {
 	}
 }
 
-func TestExtractRecentErrorsNilLogger(t *testing.T) {
+func TestLoggerExtractRecentErrorsNilLogger(t *testing.T) {
 	var logger *Logger
 	if got := logger.ExtractRecentErrors(10); got != nil {
 		t.Fatalf("nil logger ExtractRecentErrors() should return nil, got %v", got)
 	}
 }
 
-func TestExtractRecentErrorsEmptyPath(t *testing.T) {
+func TestLoggerExtractRecentErrorsEmptyPath(t *testing.T) {
 	logger := &Logger{path: ""}
 	if got := logger.ExtractRecentErrors(10); got != nil {
 		t.Fatalf("empty path ExtractRecentErrors() should return nil, got %v", got)
 	}
 }
 
-func TestExtractRecentErrorsFileNotExist(t *testing.T) {
+func TestLoggerExtractRecentErrorsFileNotExist(t *testing.T) {
 	logger := &Logger{path: "/nonexistent/path/to/log.log"}
 	if got := logger.ExtractRecentErrors(10); got != nil {
 		t.Fatalf("nonexistent file ExtractRecentErrors() should return nil, got %v", got)
 	}
 }
 
+func TestSanitizeLogSuffixNoDuplicates(t *testing.T) {
+	testCases := []string{
+		"task",
+		"task.",
+		".task",
+		"-task",
+		"task-",
+		"--task--",
+		"..task..",
+	}
+
+	seen := make(map[string]string)
+	for _, input := range testCases {
+		result := sanitizeLogSuffix(input)
+		if result == "" {
+			t.Fatalf("sanitizeLogSuffix(%q) returned empty string", input)
+		}
+
+		if prev, exists := seen[result]; exists {
+			t.Fatalf("collision detected: %q and %q both produce %q", input, prev, result)
+		}
+		seen[result] = input
+
+		// Verify result is safe for file names
+		if strings.ContainsAny(result, "/\\:*?\"<>|") {
+			t.Fatalf("sanitizeLogSuffix(%q) = %q contains unsafe characters", input, result)
+		}
+	}
+}
+
+func TestExtractRecentErrorsBoundaryCheck(t *testing.T) {
+	logger, err := NewLoggerWithSuffix("boundary-test")
+	if err != nil {
+		t.Fatalf("NewLoggerWithSuffix() error = %v", err)
+	}
+	defer logger.Close()
+	defer logger.RemoveLogFile()
+
+	// Write some errors
+	logger.Error("error 1")
+	logger.Warn("warn 1")
+	logger.Error("error 2")
+	logger.Flush()
+
+	// Test zero
+	result := logger.ExtractRecentErrors(0)
+	if result != nil {
+		t.Fatalf("ExtractRecentErrors(0) should return nil, got %v", result)
+	}
+
+	// Test negative
+	result = logger.ExtractRecentErrors(-5)
+	if result != nil {
+		t.Fatalf("ExtractRecentErrors(-5) should return nil, got %v", result)
+	}
+
+	// Test positive still works
+	result = logger.ExtractRecentErrors(10)
+	if len(result) != 3 {
+		t.Fatalf("ExtractRecentErrors(10) expected 3 entries, got %d", len(result))
+	}
+}
+
+func TestErrorEntriesMaxLimit(t *testing.T) {
+	logger, err := NewLoggerWithSuffix("max-limit-test")
+	if err != nil {
+		t.Fatalf("NewLoggerWithSuffix() error = %v", err)
+	}
+	defer logger.Close()
+	defer logger.RemoveLogFile()
+
+	// Write 150 error/warn entries
+	for i := 1; i <= 150; i++ {
+		if i%2 == 0 {
+			logger.Error(fmt.Sprintf("error-%03d", i))
+		} else {
+			logger.Warn(fmt.Sprintf("warn-%03d", i))
+		}
+	}
+	logger.Flush()
+
+	// Extract all cached errors
+	result := logger.ExtractRecentErrors(200) // Request more than cache size
+
+	// Should only have last 100 entries (entries 51-150 in sequence)
+	if len(result) != 100 {
+		t.Fatalf("expected 100 cached entries, got %d", len(result))
+	}
+
+	// Verify entries are the last 100 (entries 51-150)
+	if !strings.Contains(result[0], "051") {
+		t.Fatalf("first cached entry should be entry 51, got: %s", result[0])
+	}
+	if !strings.Contains(result[99], "150") {
+		t.Fatalf("last cached entry should be entry 150, got: %s", result[99])
+	}
+
+	// Verify order is preserved - simplified logic
+	for i := 0; i < len(result)-1; i++ {
+		expectedNum := 51 + i
+		nextNum := 51 + i + 1
+
+		expectedEntry := fmt.Sprintf("%03d", expectedNum)
+		nextEntry := fmt.Sprintf("%03d", nextNum)
+
+		if !strings.Contains(result[i], expectedEntry) {
+			t.Fatalf("entry at index %d should contain %s, got: %s", i, expectedEntry, result[i])
+		}
+		if !strings.Contains(result[i+1], nextEntry) {
+			t.Fatalf("entry at index %d should contain %s, got: %s", i+1, nextEntry, result[i+1])
+		}
+	}
+}
@@ -14,7 +14,7 @@ import (
 )
 
 const (
-	version = "5.2.0"
+	version = "5.2.5"
 	defaultWorkdir = "."
 	defaultTimeout = 7200 // seconds
 	codexLogLineLimit = 1000
 
@@ -426,10 +426,11 @@ ok-d`
 		t.Fatalf("expected startup banner in stderr, got:\n%s", stderrOut)
 	}
 
+	// After parallel log isolation fix, each task has its own log file
 	expectedLines := map[string]struct{}{
-		fmt.Sprintf("Task a: Log: %s", expectedLog): {},
-		fmt.Sprintf("Task b: Log: %s", expectedLog): {},
-		fmt.Sprintf("Task d: Log: %s", expectedLog): {},
+		fmt.Sprintf("Task a: Log: %s", filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d-a.log", os.Getpid()))): {},
+		fmt.Sprintf("Task b: Log: %s", filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d-b.log", os.Getpid()))): {},
+		fmt.Sprintf("Task d: Log: %s", filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d-d.log", os.Getpid()))): {},
 	}
 
 	if len(taskLines) != len(expectedLines) {
 
@@ -41,6 +41,7 @@ func resetTestHooks() {
 	closeLogger()
 	executablePathFn = os.Executable
 	runTaskFn = runCodexTask
+	runCodexTaskFn = defaultRunCodexTaskFn
 	exitFn = os.Exit
 }
 
@@ -250,6 +251,10 @@ func (d *drainBlockingCmd) SetStderr(w io.Writer) {
 	d.inner.SetStderr(w)
 }
 
+func (d *drainBlockingCmd) SetDir(dir string) {
+	d.inner.SetDir(dir)
+}
+
 func (d *drainBlockingCmd) Process() processHandle {
 	return d.inner.Process()
 }
@@ -504,6 +509,8 @@ func (f *fakeCmd) SetStderr(w io.Writer) {
 	f.stderr = w
 }
 
+func (f *fakeCmd) SetDir(string) {}
+
 func (f *fakeCmd) Process() processHandle {
 	if f == nil {
 		return nil
@@ -1371,7 +1378,7 @@ func TestBackendBuildArgs_ClaudeBackend(t *testing.T) {
 	backend := ClaudeBackend{}
 	cfg := &Config{Mode: "new", WorkDir: defaultWorkdir}
 	got := backend.BuildArgs(cfg, "todo")
-	want := []string{"-p", "-C", defaultWorkdir, "--output-format", "stream-json", "--verbose", "todo"}
+	want := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "--output-format", "stream-json", "--verbose", "todo"}
 	if len(got) != len(want) {
 		t.Fatalf("length mismatch")
 	}
@@ -1392,7 +1399,7 @@ func TestClaudeBackendBuildArgs_OutputValidation(t *testing.T) {
 	target := "ensure-flags"
 
 	args := backend.BuildArgs(cfg, target)
-	expectedPrefix := []string{"-p", "--output-format", "stream-json", "--verbose"}
+	expectedPrefix := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "--output-format", "stream-json", "--verbose"}
 
 	if len(args) != len(expectedPrefix)+1 {
 		t.Fatalf("args length=%d, want %d", len(args), len(expectedPrefix)+1)
@@ -1411,7 +1418,7 @@ func TestBackendBuildArgs_GeminiBackend(t *testing.T) {
 	backend := GeminiBackend{}
 	cfg := &Config{Mode: "new"}
 	got := backend.BuildArgs(cfg, "task")
-	want := []string{"-o", "stream-json", "-y", "-C", defaultWorkdir, "-p", "task"}
+	want := []string{"-o", "stream-json", "-y", "-p", "task"}
 	if len(got) != len(want) {
 		t.Fatalf("length mismatch")
 	}
@@ -1777,13 +1784,13 @@ func TestRunLogFunctions(t *testing.T) {
 	}
 
 	output := string(data)
-	if !strings.Contains(output, "INFO: info message") {
+	if !strings.Contains(output, "info message") {
 		t.Errorf("logInfo output missing, got: %s", output)
 	}
-	if !strings.Contains(output, "WARN: warn message") {
+	if !strings.Contains(output, "warn message") {
 		t.Errorf("logWarn output missing, got: %s", output)
 	}
-	if !strings.Contains(output, "ERROR: error message") {
+	if !strings.Contains(output, "error message") {
 		t.Errorf("logError output missing, got: %s", output)
 	}
 }
@@ -2684,7 +2691,7 @@ func TestVersionFlag(t *testing.T) {
 			t.Errorf("exit = %d, want 0", code)
 		}
 	})
-	want := "codeagent-wrapper version 5.2.0\n"
+	want := "codeagent-wrapper version 5.2.5\n"
 	if output != want {
 		t.Fatalf("output = %q, want %q", output, want)
 	}
@@ -2698,7 +2705,7 @@ func TestVersionShortFlag(t *testing.T) {
 			t.Errorf("exit = %d, want 0", code)
 		}
 	})
-	want := "codeagent-wrapper version 5.2.0\n"
+	want := "codeagent-wrapper version 5.2.5\n"
 	if output != want {
 		t.Fatalf("output = %q, want %q", output, want)
 	}
@@ -2712,7 +2719,7 @@ func TestVersionLegacyAlias(t *testing.T) {
 			t.Errorf("exit = %d, want 0", code)
 		}
 	})
-	want := "codex-wrapper version 5.2.0\n"
+	want := "codex-wrapper version 5.2.5\n"
 	if output != want {
 		t.Fatalf("output = %q, want %q", output, want)
 	}
@@ -3293,7 +3300,7 @@ func TestRun_PipedTaskReadError(t *testing.T) {
 	if exitCode != 1 {
 		t.Fatalf("exit=%d, want 1", exitCode)
 	}
-	if !strings.Contains(logOutput, "ERROR: Failed to read piped stdin: read stdin: pipe failure") {
+	if !strings.Contains(logOutput, "Failed to read piped stdin: read stdin: pipe failure") {
 		t.Fatalf("log missing piped read error, got %q", logOutput)
 	}
 	// Log file is always removed after completion (new behavior)
@@ -53,9 +53,22 @@ func parseJSONStreamWithLog(r io.Reader, warnFn func(string), infoFn func(string)) (message, threadID string) {
 	return parseJSONStreamInternal(r, warnFn, infoFn, nil)
 }
 
+const (
+	jsonLineReaderSize   = 64 * 1024
+	jsonLineMaxBytes     = 10 * 1024 * 1024
+	jsonLinePreviewBytes = 256
+)
+
+type codexHeader struct {
+	Type     string `json:"type"`
+	ThreadID string `json:"thread_id,omitempty"`
+	Item     *struct {
+		Type string `json:"type"`
+	} `json:"item,omitempty"`
+}
+
 func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(string), onMessage func()) (message, threadID string) {
-	scanner := bufio.NewScanner(r)
-	scanner.Buffer(make([]byte, 64*1024), 10*1024*1024)
+	reader := bufio.NewReaderSize(r, jsonLineReaderSize)
 
 	if warnFn == nil {
 		warnFn = func(string) {}
@@ -78,79 +91,89 @@ func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(string), onMessage func()) (message, threadID string) {
 		geminiBuffer strings.Builder
 	)
 
-	for scanner.Scan() {
-		line := strings.TrimSpace(scanner.Text())
-		if line == "" {
+	for {
+		line, tooLong, err := readLineWithLimit(reader, jsonLineMaxBytes, jsonLinePreviewBytes)
+		if err != nil {
+			if errors.Is(err, io.EOF) {
+				break
+			}
+			warnFn("Read stdout error: " + err.Error())
+			break
+		}
+
+		line = bytes.TrimSpace(line)
+		if len(line) == 0 {
 			continue
 		}
 		totalEvents++
 
-		var raw map[string]json.RawMessage
-		if err := json.Unmarshal([]byte(line), &raw); err != nil {
-			warnFn(fmt.Sprintf("Failed to parse line: %s", truncate(line, 100)))
+		if tooLong {
+			warnFn(fmt.Sprintf("Skipped overlong JSON line (> %d bytes): %s", jsonLineMaxBytes, truncateBytes(line, 100)))
 			continue
 		}
 
-		hasItemType := false
-		if rawItem, ok := raw["item"]; ok {
-			var itemMap map[string]json.RawMessage
-			if err := json.Unmarshal(rawItem, &itemMap); err == nil {
-				if _, ok := itemMap["type"]; ok {
-					hasItemType = true
+		var codex codexHeader
+		if err := json.Unmarshal(line, &codex); err == nil {
+			isCodex := codex.ThreadID != "" || (codex.Item != nil && codex.Item.Type != "")
+			if isCodex {
+				var details []string
+				if codex.ThreadID != "" {
+					details = append(details, fmt.Sprintf("thread_id=%s", codex.ThreadID))
+				}
+				if codex.Item != nil && codex.Item.Type != "" {
+					details = append(details, fmt.Sprintf("item_type=%s", codex.Item.Type))
+				}
+				if len(details) > 0 {
+					infoFn(fmt.Sprintf("Parsed event #%d type=%s (%s)", totalEvents, codex.Type, strings.Join(details, ", ")))
+				} else {
+					infoFn(fmt.Sprintf("Parsed event #%d type=%s", totalEvents, codex.Type))
+				}
+
+				switch codex.Type {
+				case "thread.started":
+					threadID = codex.ThreadID
+					infoFn(fmt.Sprintf("thread.started event thread_id=%s", threadID))
+				case "item.completed":
+					itemType := ""
+					if codex.Item != nil {
+						itemType = codex.Item.Type
+					}
+
+					if itemType == "agent_message" {
+						var event JSONEvent
+						if err := json.Unmarshal(line, &event); err != nil {
+							warnFn(fmt.Sprintf("Failed to parse Codex event: %s", truncateBytes(line, 100)))
+							continue
+						}
+
+						normalized := ""
+						if event.Item != nil {
+							normalized = normalizeText(event.Item.Text)
+						}
+						infoFn(fmt.Sprintf("item.completed event item_type=%s message_len=%d", itemType, len(normalized)))
+						if normalized != "" {
+							codexMessage = normalized
+							notifyMessage()
+						}
+					} else {
+						infoFn(fmt.Sprintf("item.completed event item_type=%s", itemType))
+					}
+				}
+				continue
+			}
+		}
 
-		isCodex := hasItemType
-		if !isCodex {
-			if _, ok := raw["thread_id"]; ok {
-				isCodex = true
-			}
+		var raw map[string]json.RawMessage
+		if err := json.Unmarshal(line, &raw); err != nil {
+			warnFn(fmt.Sprintf("Failed to parse line: %s", truncateBytes(line, 100)))
 			continue
 		}
 
 		switch {
-		case isCodex:
-			var event JSONEvent
-			if err := json.Unmarshal([]byte(line), &event); err != nil {
-				warnFn(fmt.Sprintf("Failed to parse Codex event: %s", truncate(line, 100)))
-				continue
-			}
-
-			var details []string
-			if event.ThreadID != "" {
-				details = append(details, fmt.Sprintf("thread_id=%s", event.ThreadID))
-			}
-			if event.Item != nil && event.Item.Type != "" {
-				details = append(details, fmt.Sprintf("item_type=%s", event.Item.Type))
-			}
-			if len(details) > 0 {
-				infoFn(fmt.Sprintf("Parsed event #%d type=%s (%s)", totalEvents, event.Type, strings.Join(details, ", ")))
-			} else {
-				infoFn(fmt.Sprintf("Parsed event #%d type=%s", totalEvents, event.Type))
-			}
-
-			switch event.Type {
-			case "thread.started":
-				threadID = event.ThreadID
-				infoFn(fmt.Sprintf("thread.started event thread_id=%s", threadID))
-			case "item.completed":
-				var itemType string
-				var normalized string
-				if event.Item != nil {
-					itemType = event.Item.Type
-					normalized = normalizeText(event.Item.Text)
-				}
-				infoFn(fmt.Sprintf("item.completed event item_type=%s message_len=%d", itemType, len(normalized)))
-				if event.Item != nil && event.Item.Type == "agent_message" && normalized != "" {
-					codexMessage = normalized
-					notifyMessage()
-				}
-			}
-
 		case hasKey(raw, "subtype") || hasKey(raw, "result"):
 			var event ClaudeEvent
-			if err := json.Unmarshal([]byte(line), &event); err != nil {
-				warnFn(fmt.Sprintf("Failed to parse Claude event: %s", truncate(line, 100)))
+			if err := json.Unmarshal(line, &event); err != nil {
+				warnFn(fmt.Sprintf("Failed to parse Claude event: %s", truncateBytes(line, 100)))
 				continue
 			}
 
@@ -167,8 +190,8 @@ func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(string), onMessage func()) (message, threadID string) {
 
 		case hasKey(raw, "role") || hasKey(raw, "delta"):
 			var event GeminiEvent
-			if err := json.Unmarshal([]byte(line), &event); err != nil {
-				warnFn(fmt.Sprintf("Failed to parse Gemini event: %s", truncate(line, 100)))
+			if err := json.Unmarshal(line, &event); err != nil {
+				warnFn(fmt.Sprintf("Failed to parse Gemini event: %s", truncateBytes(line, 100)))
 				continue
 			}
 
@@ -184,14 +207,10 @@ func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(string), onMessage func()) (message, threadID string) {
 			infoFn(fmt.Sprintf("Parsed Gemini event #%d type=%s role=%s delta=%t status=%s content_len=%d", totalEvents, event.Type, event.Role, event.Delta, event.Status, len(event.Content)))
 
 		default:
-			warnFn(fmt.Sprintf("Unknown event format: %s", truncate(line, 100)))
+			warnFn(fmt.Sprintf("Unknown event format: %s", truncateBytes(line, 100)))
 		}
 	}
 
-	if err := scanner.Err(); err != nil && !errors.Is(err, io.EOF) {
-		warnFn("Read stdout error: " + err.Error())
-	}
-
 	switch {
 	case geminiBuffer.Len() > 0:
 		message = geminiBuffer.String()
@@ -236,6 +255,79 @@ func discardInvalidJSON(decoder *json.Decoder, reader *bufio.Reader) (*bufio.Reader, error) {
 	return bufio.NewReader(io.MultiReader(bytes.NewReader(remaining), reader)), err
 }
 
+func readLineWithLimit(r *bufio.Reader, maxBytes int, previewBytes int) (line []byte, tooLong bool, err error) {
+	if r == nil {
+		return nil, false, errors.New("reader is nil")
+	}
+	if maxBytes <= 0 {
+		return nil, false, errors.New("maxBytes must be > 0")
+	}
+	if previewBytes < 0 {
+		previewBytes = 0
+	}
+
+	part, isPrefix, err := r.ReadLine()
+	if err != nil {
+		return nil, false, err
+	}
+
+	if !isPrefix {
+		if len(part) > maxBytes {
+			return part[:min(len(part), previewBytes)], true, nil
+		}
+		return part, false, nil
+	}
+
+	preview := make([]byte, 0, min(previewBytes, len(part)))
+	if previewBytes > 0 {
+		preview = append(preview, part[:min(previewBytes, len(part))]...)
+	}
+
+	buf := make([]byte, 0, min(maxBytes, len(part)*2))
+	total := 0
+	if len(part) > maxBytes {
+		tooLong = true
+	} else {
+		buf = append(buf, part...)
+		total = len(part)
+	}
+
+	for isPrefix {
+		part, isPrefix, err = r.ReadLine()
+		if err != nil {
+			return nil, tooLong, err
+		}
+
+		if previewBytes > 0 && len(preview) < previewBytes {
+			preview = append(preview, part[:min(previewBytes-len(preview), len(part))]...)
+		}
+
+		if !tooLong {
+			if total+len(part) > maxBytes {
+				tooLong = true
+				continue
+			}
+			buf = append(buf, part...)
+			total += len(part)
+		}
+	}
+
+	if tooLong {
+		return preview, true, nil
+	}
+	return buf, false, nil
+}
+
+func truncateBytes(b []byte, maxLen int) string {
+	if len(b) <= maxLen {
+		return string(b)
+	}
+	if maxLen < 0 {
+		return ""
+	}
+	return string(b[:maxLen]) + "..."
+}
+
 func normalizeText(text interface{}) string {
 	switch v := text.(type) {
 	case string:
codeagent-wrapper/parser_token_too_long_test.go (new file, 31 lines)
@@ -0,0 +1,31 @@
+package main
+
+import (
+	"strings"
+	"testing"
+)
+
+func TestParseJSONStream_SkipsOverlongLineAndContinues(t *testing.T) {
+	// Exceed the 10MB per-line limit enforced by parseJSONStreamInternal.
+	tooLong := strings.Repeat("a", 11*1024*1024)
+
+	input := strings.Join([]string{
+		`{"type":"item.completed","item":{"type":"other_type","text":"` + tooLong + `"}}`,
+		`{"type":"thread.started","thread_id":"t-1"}`,
+		`{"type":"item.completed","item":{"type":"agent_message","text":"ok"}}`,
+	}, "\n")
+
+	var warns []string
+	warnFn := func(msg string) { warns = append(warns, msg) }
+
+	gotMessage, gotThreadID := parseJSONStreamInternal(strings.NewReader(input), warnFn, nil, nil)
+	if gotMessage != "ok" {
+		t.Fatalf("message=%q, want %q (warns=%v)", gotMessage, "ok", warns)
+	}
+	if gotThreadID != "t-1" {
+		t.Fatalf("threadID=%q, want %q (warns=%v)", gotThreadID, "t-1", warns)
+	}
+	if len(warns) == 0 || !strings.Contains(warns[0], "Skipped overlong JSON line") {
+		t.Fatalf("expected warning about overlong JSON line, got %v", warns)
+	}
+}
@@ -78,6 +78,7 @@ type logWriter struct {
 	prefix string
 	maxLen int
 	buf    bytes.Buffer
+	dropped bool
 }
 
 func newLogWriter(prefix string, maxLen int) *logWriter {
@@ -94,12 +95,12 @@ func (lw *logWriter) Write(p []byte) (int, error) {
 	total := len(p)
 	for len(p) > 0 {
 		if idx := bytes.IndexByte(p, '\n'); idx >= 0 {
-			lw.buf.Write(p[:idx])
+			lw.writeLimited(p[:idx])
 			lw.logLine(true)
 			p = p[idx+1:]
 			continue
 		}
-		lw.buf.Write(p)
+		lw.writeLimited(p)
 		break
 	}
 	return total, nil
@@ -117,21 +118,53 @@ func (lw *logWriter) logLine(force bool) {
 		return
 	}
 	line := lw.buf.String()
+	dropped := lw.dropped
+	lw.dropped = false
 	lw.buf.Reset()
 	if line == "" && !force {
 		return
 	}
-	if lw.maxLen > 0 && len(line) > lw.maxLen {
-		cutoff := lw.maxLen
-		if cutoff > 3 {
-			line = line[:cutoff-3] + "..."
-		} else {
-			line = line[:cutoff]
+	if lw.maxLen > 0 {
+		if dropped {
+			if lw.maxLen > 3 {
+				line = line[:min(len(line), lw.maxLen-3)] + "..."
+			} else {
+				line = line[:min(len(line), lw.maxLen)]
+			}
+		} else if len(line) > lw.maxLen {
+			cutoff := lw.maxLen
+			if cutoff > 3 {
+				line = line[:cutoff-3] + "..."
+			} else {
+				line = line[:cutoff]
+			}
 		}
 	}
 	logInfo(lw.prefix + line)
 }
 
+func (lw *logWriter) writeLimited(p []byte) {
+	if lw == nil || len(p) == 0 {
+		return
+	}
+	if lw.maxLen <= 0 {
+		lw.buf.Write(p)
+		return
+	}
+
+	remaining := lw.maxLen - lw.buf.Len()
+	if remaining <= 0 {
+		lw.dropped = true
+		return
+	}
+	if len(p) <= remaining {
+		lw.buf.Write(p)
+		return
+	}
+	lw.buf.Write(p[:remaining])
+	lw.dropped = true
+}
+
 type tailBuffer struct {
 	limit int
 	data  []byte
config.json (60 lines changed)
@@ -20,9 +20,33 @@
       },
       {
         "type": "copy_file",
-        "source": "skills/codex/SKILL.md",
-        "target": "skills/codex/SKILL.md",
-        "description": "Install codex skill"
+        "source": "skills/codeagent/SKILL.md",
+        "target": "skills/codeagent/SKILL.md",
+        "description": "Install codeagent skill"
       },
+      {
+        "type": "copy_file",
+        "source": "skills/product-requirements/SKILL.md",
+        "target": "skills/product-requirements/SKILL.md",
+        "description": "Install product-requirements skill"
+      },
+      {
+        "type": "copy_file",
+        "source": "skills/prototype-prompt-generator/SKILL.md",
+        "target": "skills/prototype-prompt-generator/SKILL.md",
+        "description": "Install prototype-prompt-generator skill"
+      },
+      {
+        "type": "copy_file",
+        "source": "skills/prototype-prompt-generator/references/prompt-structure.md",
+        "target": "skills/prototype-prompt-generator/references/prompt-structure.md",
+        "description": "Install prototype-prompt-generator prompt structure reference"
+      },
+      {
+        "type": "copy_file",
+        "source": "skills/prototype-prompt-generator/references/design-systems.md",
+        "target": "skills/prototype-prompt-generator/references/design-systems.md",
+        "description": "Install prototype-prompt-generator design systems reference"
+      },
       {
         "type": "run_command",
@@ -84,36 +108,6 @@
         "description": "Copy development commands documentation"
       }
     ]
   },
-  "gh": {
-    "enabled": true,
-    "description": "GitHub issue-to-PR workflow with codeagent integration",
-    "operations": [
-      {
-        "type": "merge_dir",
-        "source": "github-workflow",
-        "description": "Merge GitHub workflow commands"
-      },
-      {
-        "type": "copy_file",
-        "source": "skills/codeagent/SKILL.md",
-        "target": "skills/codeagent/SKILL.md",
-        "description": "Install codeagent skill"
-      },
-      {
-        "type": "copy_dir",
-        "source": "hooks",
-        "target": "hooks",
-        "description": "Copy hooks scripts"
-      },
-      {
-        "type": "merge_json",
-        "source": "hooks/hooks-config.json",
-        "target": "settings.json",
-        "merge_key": "hooks",
-        "description": "Merge hooks configuration into settings.json"
-      }
-    ]
-  }
 }
 }
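The `modules` map above is consumed by `install.py`. A minimal sketch of running it; the command itself is the one install.sh's migration notice already prints, so only the choice of target directory is an assumption:

```bash
# Install the enabled modules (invocation taken from install.sh's notice;
# ~/.claude is the suggested default target)
python3 install.py --install-dir ~/.claude
```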
@@ -1,6 +1,6 @@
 ---
 name: dev-plan-generator
-description: Use this agent when you need to generate a structured development plan document (`dev-plan.md`) that breaks down a feature into concrete implementation tasks with testing requirements and acceptance criteria. This agent should be called after requirements analysis and before actual implementation begins.\n\n<example>\nContext: User is orchestrating a feature development workflow and needs to create a development plan after Codex analysis is complete.\nuser: "Create a development plan for the user authentication feature based on the requirements and analysis"\nassistant: "I'll use the Task tool to launch the dev-plan-generator agent to create the structured development plan document."\n<commentary>\nThe user needs a dev-plan.md document generated from requirements and analysis. Use the dev-plan-generator agent to create the structured task breakdown.\n</commentary>\n</example>\n\n<example>\nContext: Orchestrator has completed requirements gathering and Codex analysis for a new feature and needs to generate the development plan before moving to implementation.\nuser: "We've completed the analysis for the payment integration feature. Generate the development plan."\nassistant: "I'm going to use the Task tool to launch the dev-plan-generator agent to create the dev-plan.md document with task breakdown and testing requirements."\n<commentary>\nThis is the step in the workflow where the development plan document needs to be generated. Use the dev-plan-generator agent to create the structured plan.\n</commentary>\n</example>\n\n<example>\nContext: User is working through a requirements-driven workflow and has just approved the technical specifications.\nuser: "The specs look good. Let's move forward with creating the implementation plan."\nassistant: "I'll use the Task tool to launch the dev-plan-generator agent to generate the dev-plan.md document with the task breakdown."\n<commentary>\nAfter spec approval, the next step is generating the development plan. Use the dev-plan-generator agent to create the structured document.\n</commentary>\n</example>
+description: Use this agent when you need to generate a structured development plan document (`dev-plan.md`) that breaks down a feature into concrete implementation tasks with testing requirements and acceptance criteria. This agent should be called after requirements analysis and before actual implementation begins.\n\n<example>\nContext: User is orchestrating a feature development workflow and needs to create a development plan after codeagent analysis is complete.\nuser: "Create a development plan for the user authentication feature based on the requirements and analysis"\nassistant: "I'll use the Task tool to launch the dev-plan-generator agent to create the structured development plan document."\n<commentary>\nThe user needs a dev-plan.md document generated from requirements and analysis. Use the dev-plan-generator agent to create the structured task breakdown.\n</commentary>\n</example>\n\n<example>\nContext: Orchestrator has completed requirements gathering and codeagent analysis for a new feature and needs to generate the development plan before moving to implementation.\nuser: "We've completed the analysis for the payment integration feature. Generate the development plan."\nassistant: "I'm going to use the Task tool to launch the dev-plan-generator agent to create the dev-plan.md document with task breakdown and testing requirements."\n<commentary>\nThis is the step in the workflow where the development plan document needs to be generated. Use the dev-plan-generator agent to create the structured plan.\n</commentary>\n</example>\n\n<example>\nContext: User is working through a requirements-driven workflow and has just approved the technical specifications.\nuser: "The specs look good. Let's move forward with creating the implementation plan."\nassistant: "I'll use the Task tool to launch the dev-plan-generator agent to generate the dev-plan.md document with the task breakdown."\n<commentary>\nAfter spec approval, the next step is generating the development plan. Use the dev-plan-generator agent to create the structured document.\n</commentary>\n</example>
 tools: Glob, Grep, Read, Edit, Write, TodoWrite
 model: sonnet
 color: green
@@ -5,7 +5,7 @@
 set -e
 
 # Get staged files
-STAGED_FILES=$(git diff --cached --name-only --diff-filter=ACM)
+STAGED_FILES="$(git diff --cached --name-only --diff-filter=ACM)"
 
 if [ -z "$STAGED_FILES" ]; then
   echo "No files to validate"
@@ -15,17 +15,32 @@ fi
 echo "Running pre-commit checks..."
 
 # Check Go files
-GO_FILES=$(echo "$STAGED_FILES" | grep '\.go$' || true)
+GO_FILES="$(printf '%s\n' "$STAGED_FILES" | grep '\.go$' || true)"
 if [ -n "$GO_FILES" ]; then
   echo "Checking Go files..."
 
+  if ! command -v gofmt &> /dev/null; then
+    echo "❌ gofmt not found. Please install Go (gofmt is included with the Go toolchain)."
+    exit 1
+  fi
+
   # Format check
-  gofmt -l $GO_FILES | while read -r file; do
-    echo "❌ $file needs formatting (run: gofmt -w $file)"
-  done
+  GO_FILE_ARGS=()
+  while IFS= read -r file; do
+    if [ -n "$file" ]; then
+      GO_FILE_ARGS+=("$file")
+    fi
+  done <<< "$GO_FILES"
+
+  if [ "${#GO_FILE_ARGS[@]}" -gt 0 ]; then
+    UNFORMATTED="$(gofmt -l "${GO_FILE_ARGS[@]}")"
+    if [ -n "$UNFORMATTED" ]; then
+      echo "❌ The following files need formatting:"
+      echo "$UNFORMATTED"
+      echo "Run: gofmt -w <file>"
+      exit 1
+    fi
+  fi
 fi
 
 # Run tests
 if command -v go &> /dev/null; then
@@ -38,19 +53,26 @@ if [ -n "$GO_FILES" ]; then
 fi
 
 # Check JSON files
-JSON_FILES=$(echo "$STAGED_FILES" | grep '\.json$' || true)
+JSON_FILES="$(printf '%s\n' "$STAGED_FILES" | grep '\.json$' || true)"
 if [ -n "$JSON_FILES" ]; then
   echo "Validating JSON files..."
-  for file in $JSON_FILES; do
+  if ! command -v jq &> /dev/null; then
+    echo "❌ jq not found. Please install jq to validate JSON files."
+    exit 1
+  fi
+  while IFS= read -r file; do
+    if [ -z "$file" ]; then
+      continue
+    fi
     if ! jq empty "$file" 2>/dev/null; then
       echo "❌ Invalid JSON: $file"
       exit 1
     fi
-  done
+  done <<< "$JSON_FILES"
 fi
 
 # Check Markdown files
-MD_FILES=$(echo "$STAGED_FILES" | grep '\.md$' || true)
+MD_FILES="$(printf '%s\n' "$STAGED_FILES" | grep '\.md$' || true)"
 if [ -n "$MD_FILES" ]; then
   echo "Checking markdown files..."
   # Add markdown linting if needed
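For the script above to run on every commit it still has to be linked into the local Git hooks directory; a minimal sketch, assuming the script is stored at `hooks/pre-commit` in the repo (that path is an assumption):

```bash
# Sketch: wire the hook up locally (hooks/pre-commit is an assumed location)
chmod +x hooks/pre-commit
ln -sf ../../hooks/pre-commit .git/hooks/pre-commit
```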
install.bat (10 lines changed)
@@ -9,13 +9,13 @@ set "OS=windows"
 call :detect_arch
 if errorlevel 1 goto :fail
 
-set "BINARY_NAME=codex-wrapper-%OS%-%ARCH%.exe"
+set "BINARY_NAME=codeagent-wrapper-%OS%-%ARCH%.exe"
 set "URL=https://github.com/%REPO%/releases/%VERSION%/download/%BINARY_NAME%"
-set "TEMP_FILE=%TEMP%\codex-wrapper-%ARCH%-%RANDOM%.exe"
+set "TEMP_FILE=%TEMP%\codeagent-wrapper-%ARCH%-%RANDOM%.exe"
 set "DEST_DIR=%USERPROFILE%\bin"
-set "DEST=%DEST_DIR%\codex-wrapper.exe"
+set "DEST=%DEST_DIR%\codeagent-wrapper.exe"
 
-echo Downloading codex-wrapper for %ARCH% ...
+echo Downloading codeagent-wrapper for %ARCH% ...
 echo %URL%
 call :download
 if errorlevel 1 goto :fail
@@ -43,7 +43,7 @@ if errorlevel 1 (
 )
 
 echo.
-echo codex-wrapper installed successfully at:
+echo codeagent-wrapper installed successfully at:
 echo %DEST%
 
 rem Automatically ensure %USERPROFILE%\bin is in the USER (HKCU) PATH
install.py (57 lines changed)
@@ -357,26 +357,47 @@ def op_run_command(op: Dict[str, Any], ctx: Dict[str, Any]) -> None:
     stderr_lines: List[str] = []
 
     # Read stdout and stderr in real-time
-    import selectors
-    sel = selectors.DefaultSelector()
-    sel.register(process.stdout, selectors.EVENT_READ)  # type: ignore[arg-type]
-    sel.register(process.stderr, selectors.EVENT_READ)  # type: ignore[arg-type]
+    if sys.platform == "win32":
+        # On Windows, use threads instead of selectors (pipes aren't selectable)
+        import threading
 
-    while process.poll() is None or sel.get_map():
-        for key, _ in sel.select(timeout=0.1):
-            line = key.fileobj.readline()  # type: ignore[union-attr]
-            if not line:
-                sel.unregister(key.fileobj)
-                continue
-            if key.fileobj == process.stdout:
-                stdout_lines.append(line)
-                print(line, end="", flush=True)
-            else:
-                stderr_lines.append(line)
-                print(line, end="", file=sys.stderr, flush=True)
+        def read_output(pipe, lines, file=None):
+            for line in iter(pipe.readline, ''):
+                lines.append(line)
+                print(line, end="", flush=True, file=file)
+            pipe.close()
 
-    sel.close()
-    process.wait()
+        stdout_thread = threading.Thread(target=read_output, args=(process.stdout, stdout_lines))
+        stderr_thread = threading.Thread(target=read_output, args=(process.stderr, stderr_lines, sys.stderr))
+
+        stdout_thread.start()
+        stderr_thread.start()
+
+        stdout_thread.join()
+        stderr_thread.join()
+        process.wait()
+    else:
+        # On Unix, use selectors for more efficient I/O
+        import selectors
+        sel = selectors.DefaultSelector()
+        sel.register(process.stdout, selectors.EVENT_READ)  # type: ignore[arg-type]
+        sel.register(process.stderr, selectors.EVENT_READ)  # type: ignore[arg-type]
+
+        while process.poll() is None or sel.get_map():
+            for key, _ in sel.select(timeout=0.1):
+                line = key.fileobj.readline()  # type: ignore[union-attr]
+                if not line:
+                    sel.unregister(key.fileobj)
+                    continue
+                if key.fileobj == process.stdout:
+                    stdout_lines.append(line)
+                    print(line, end="", flush=True)
+                else:
+                    stderr_lines.append(line)
+                    print(line, end="", file=sys.stderr, flush=True)
+
+        sel.close()
+        process.wait()
 
     write_log(
         {
install.sh (15 lines changed)
@@ -1,12 +1,15 @@
 #!/bin/bash
 set -e
 
-echo "⚠️ WARNING: install.sh is LEGACY and will be removed in future versions."
-echo "Please use the new installation method:"
-echo "  python3 install.py --install-dir ~/.claude"
-echo ""
-echo "Continuing with legacy installation in 5 seconds..."
-sleep 5
+if [ -z "${SKIP_WARNING:-}" ]; then
+  echo "⚠️ WARNING: install.sh is LEGACY and will be removed in future versions."
+  echo "Please use the new installation method:"
+  echo "  python3 install.py --install-dir ~/.claude"
+  echo ""
+  echo "Set SKIP_WARNING=1 to bypass this message"
+  echo "Continuing with legacy installation in 5 seconds..."
+  sleep 5
+fi
 
 # Detect platform
 OS=$(uname -s | tr '[:upper:]' '[:lower:]')
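For scripted or repeated installs the new guard can be bypassed; this one-liner uses only the `SKIP_WARNING` variable introduced in the hunk above:

```bash
# Skip the 5-second legacy warning (SKIP_WARNING comes from the diff above)
SKIP_WARNING=1 ./install.sh
```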
@@ -1,6 +1,6 @@
 You are Linus Torvalds. Obey the following priority stack (highest first) and refuse conflicts by citing the higher rule:
 1. Role + Safety: stay in character, enforce KISS/YAGNI/never break userspace, think in English, respond to the user in Chinese, stay technical.
-2. Workflow Contract: Claude Code performs intake, context gathering, planning, and verification only; every edit or test must be executed via Codex skill (`codex`).
+2. Workflow Contract: Claude Code performs intake, context gathering, planning, and verification only; every edit or test must be executed via Codeagent skill (`codeagent`).
 3. Tooling & Safety Rules:
    - Capture errors, retry once if transient, document fallbacks.
 4. Context Blocks & Persistence: honor `<context_gathering>`, `<exploration>`, `<persistence>`, `<tool_preambles>`, `<self_reflection>`, and `<testing>` exactly as written below.
@@ -21,8 +21,8 @@ Trigger conditions:
 - User explicitly requests deep analysis
 Process:
 - Requirements: Break the ask into explicit requirements, unclear areas, and hidden assumptions.
-- Scope mapping: Identify codebase regions, files, functions, or libraries likely involved. If unknown, perform targeted parallel searches NOW before planning. For complex codebases or deep call chains, delegate scope analysis to Codex skill.
-- Dependencies: Identify relevant frameworks, APIs, config files, data formats, and versioning concerns. When dependencies involve complex framework internals or multi-layer interactions, delegate to Codex skill for analysis.
+- Scope mapping: Identify codebase regions, files, functions, or libraries likely involved. If unknown, perform targeted parallel searches NOW before planning. For complex codebases or deep call chains, delegate scope analysis to Codeagent skill.
+- Dependencies: Identify relevant frameworks, APIs, config files, data formats, and versioning concerns. When dependencies involve complex framework internals or multi-layer interactions, delegate to Codeagent skill for analysis.
 - Ambiguity resolution: Choose the most probable interpretation based on repo context, conventions, and dependency docs. Document assumptions explicitly.
 - Output contract: Define exact deliverables (files changed, expected outputs, API responses, CLI behavior, tests passing, etc.).
 In plan mode: Invest extra effort here—this phase determines plan quality and depth.
@@ -104,6 +104,10 @@ You adhere to core software engineering principles like KISS (Keep It Simple, St
 
 ## Implementation Constraints
 
+### Language Rules
+- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
+- **Technical Terms**: Keep technical terms (API, SQL, CRUD, etc.) in English; translate explanatory text only.
+
 ### MUST Requirements
 - **Working Solution**: Code must fully implement the specified functionality
 - **Integration Compatibility**: Must work seamlessly with existing codebase
 
@@ -88,6 +88,10 @@ Each phase should be independently deployable and testable.
 
 ## Key Constraints
 
+### Language Rules
+- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
+- **Technical Terms**: Keep technical terms (API, SQL, CRUD, etc.) in English; translate explanatory text only.
+
 ### MUST Requirements
 - **Direct Implementability**: Every item must be directly translatable to code
 - **Specific Technical Details**: Include exact file paths, function names, table schemas
 
@@ -176,6 +176,10 @@ You adhere to core software engineering principles like KISS (Keep It Simple, St
 
 ## Key Constraints
 
+### Language Rules
+- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
+- **Technical Terms**: Keep technical terms (API, E2E, CI/CD, etc.) in English; translate explanatory text only.
+
 ### MUST Requirements
 - **Functional Verification**: Verify all specified functionality works
 - **Integration Testing**: Ensure seamless integration with existing code
 
@@ -199,6 +199,10 @@ func TestAPIEndpoint(t *testing.T) {
 
 ## Key Constraints
 
+### Language Rules
+- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
+- **Technical Terms**: Keep technical terms (API, E2E, CI/CD, Mock, etc.) in English; translate explanatory text only.
+
 ### MUST Requirements
 - **Specification Coverage**: Must test all requirements from `./.claude/specs/{feature_name}/requirements-spec.md`
 - **Critical Path Testing**: Must test all critical business functionality
@@ -39,11 +39,35 @@ codeagent-wrapper --backend gemini "simple task"
 
 ## Backends
 
-| Backend | Command | Description |
-|---------|---------|-------------|
-| codex | `--backend codex` | OpenAI Codex (default) |
-| claude | `--backend claude` | Anthropic Claude |
-| gemini | `--backend gemini` | Google Gemini |
+| Backend | Command | Description | Best For |
+|---------|---------|-------------|----------|
+| codex | `--backend codex` | OpenAI Codex (default) | Code analysis, complex development |
+| claude | `--backend claude` | Anthropic Claude | Simple tasks, documentation, prompts |
+| gemini | `--backend gemini` | Google Gemini | UI/UX prototyping |
+
+### Backend Selection Guide
+
+**Codex** (default):
+- Deep code understanding and complex logic implementation
+- Large-scale refactoring with precise dependency tracking
+- Algorithm optimization and performance tuning
+- Example: "Analyze the call graph of @src/core and refactor the module dependency structure"
+
+**Claude**:
+- Quick feature implementation with clear requirements
+- Technical documentation, API specs, README generation
+- Professional prompt engineering (e.g., product requirements, design specs)
+- Example: "Generate a comprehensive README for @package.json with installation, usage, and API docs"
+
+**Gemini**:
+- UI component scaffolding and layout prototyping
+- Design system implementation with style consistency
+- Interactive element generation with accessibility support
+- Example: "Create a responsive dashboard layout with sidebar navigation and data visualization cards"
+
+**Backend Switching**:
+- Start with Codex for analysis, switch to Claude for documentation, then Gemini for UI implementation
+- Use per-task backend selection in parallel mode to optimize for each task's strengths
 
 ## Parameters
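A minimal sketch of the analysis, documentation, and UI flow the guide describes; the `--backend` flag and the three backend names come from the table above, while the prompts are illustrative only:

```bash
# Illustrative backend switching (flags from the table above; prompts are examples)
codeagent-wrapper --backend codex "Analyze the call graph of @src/core and refactor the module dependency structure"
codeagent-wrapper --backend claude "Generate a comprehensive README for @package.json"
codeagent-wrapper --backend gemini "Create a responsive dashboard layout with sidebar navigation"
```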
@@ -21,6 +21,29 @@
       ]
     }
   },
+  "codeagent": {
+    "type": "execution",
+    "enforcement": "suggest",
+    "priority": "high",
+    "promptTriggers": {
+      "keywords": [
+        "codeagent",
+        "multi-backend",
+        "backend selection",
+        "parallel task",
+        "多后端",
+        "并行任务",
+        "gemini",
+        "claude backend"
+      ],
+      "intentPatterns": [
+        "\\bcodeagent\\b",
+        "backend\\s+(codex|claude|gemini)",
+        "parallel.*task",
+        "multi.*backend"
+      ]
+    }
+  },
   "gh-workflow": {
     "type": "domain",
     "enforcement": "suggest",
@@ -39,6 +62,55 @@
         "\\bgithub\\b|\\bgh\\b"
       ]
     }
   },
+  "product-requirements": {
+    "type": "domain",
+    "enforcement": "suggest",
+    "priority": "high",
+    "promptTriggers": {
+      "keywords": [
+        "requirements",
+        "prd",
+        "product requirements",
+        "feature specification",
+        "product owner",
+        "需求",
+        "产品需求",
+        "需求文档",
+        "功能规格"
+      ],
+      "intentPatterns": [
+        "(product|feature).*requirement",
+        "\\bPRD\\b",
+        "requirements?.*document",
+        "(gather|collect|define|write).*requirement"
+      ]
+    }
+  },
+  "prototype-prompt-generator": {
+    "type": "domain",
+    "enforcement": "suggest",
+    "priority": "medium",
+    "promptTriggers": {
+      "keywords": [
+        "prototype",
+        "ui design",
+        "ux design",
+        "mobile app design",
+        "web app design",
+        "原型",
+        "界面设计",
+        "UI设计",
+        "UX设计",
+        "移动应用设计"
+      ],
+      "intentPatterns": [
+        "(create|generate|design).*(prototype|UI|UX|interface)",
+        "(mobile|web).*app.*(design|prototype)",
+        "prototype.*prompt",
+        "design.*system"
+      ]
+    }
+  }
 }
 }
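The `intentPatterns` above are plain regexes, so they can be sanity-checked from a shell; a quick hedged sketch (the sample prompt is made up, and `\s` from the JSON pattern is rewritten as `[[:space:]]` because grep -E speaks POSIX ERE):

```bash
# Would this prompt trip the new codeagent trigger? Prints the line on a match.
echo "run this with backend gemini as a parallel task" \
  | grep -E 'backend[[:space:]]+(codex|claude|gemini)'
```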
@@ -1,76 +0,0 @@
-import copy
-import json
-import unittest
-from pathlib import Path
-
-import jsonschema
-
-
-CONFIG_PATH = Path(__file__).resolve().parents[1] / "config.json"
-SCHEMA_PATH = Path(__file__).resolve().parents[1] / "config.schema.json"
-ROOT = CONFIG_PATH.parent
-
-
-def load_config():
-    with CONFIG_PATH.open(encoding="utf-8") as f:
-        return json.load(f)
-
-
-def load_schema():
-    with SCHEMA_PATH.open(encoding="utf-8") as f:
-        return json.load(f)
-
-
-class ConfigSchemaTest(unittest.TestCase):
-    def test_config_matches_schema(self):
-        config = load_config()
-        schema = load_schema()
-        jsonschema.validate(config, schema)
-
-    def test_required_modules_present(self):
-        modules = load_config()["modules"]
-        self.assertEqual(set(modules.keys()), {"dev", "bmad", "requirements", "essentials", "advanced"})
-
-    def test_enabled_defaults_and_flags(self):
-        modules = load_config()["modules"]
-        self.assertTrue(modules["dev"]["enabled"])
-        self.assertTrue(modules["essentials"]["enabled"])
-        self.assertFalse(modules["bmad"]["enabled"])
-        self.assertFalse(modules["requirements"]["enabled"])
-        self.assertFalse(modules["advanced"]["enabled"])
-
-    def test_operations_have_expected_shape(self):
-        config = load_config()
-        for name, module in config["modules"].items():
-            self.assertTrue(module["operations"], f"{name} should declare at least one operation")
-            for op in module["operations"]:
-                self.assertIn("type", op)
-                if op["type"] in {"copy_dir", "copy_file"}:
-                    self.assertTrue(op.get("source"), f"{name} operation missing source")
-                    self.assertTrue(op.get("target"), f"{name} operation missing target")
-                elif op["type"] == "run_command":
-                    self.assertTrue(op.get("command"), f"{name} run_command missing command")
-                    if "env" in op:
-                        self.assertIsInstance(op["env"], dict)
-                else:
-                    self.fail(f"Unsupported operation type: {op['type']}")
-
-    def test_operation_sources_exist_on_disk(self):
-        config = load_config()
-        for module in config["modules"].values():
-            for op in module["operations"]:
-                if op["type"] in {"copy_dir", "copy_file"}:
-                    path = (ROOT / op["source"]).expanduser()
-                    self.assertTrue(path.exists(), f"Source path not found: {path}")
-
-    def test_schema_rejects_invalid_operation_type(self):
-        config = load_config()
-        invalid = copy.deepcopy(config)
-        invalid["modules"]["dev"]["operations"][0]["type"] = "unknown_op"
-        schema = load_schema()
-        with self.assertRaises(jsonschema.exceptions.ValidationError):
-            jsonschema.validate(invalid, schema)
-
-
-if __name__ == "__main__":
-    unittest.main()

```diff
@@ -1,76 +0,0 @@
-import copy
-import json
-import unittest
-from pathlib import Path
-
-import jsonschema
-
-
-CONFIG_PATH = Path(__file__).resolve().parents[1] / "config.json"
-SCHEMA_PATH = Path(__file__).resolve().parents[1] / "config.schema.json"
-ROOT = CONFIG_PATH.parent
-
-
-def load_config():
-    with CONFIG_PATH.open(encoding="utf-8") as f:
-        return json.load(f)
-
-
-def load_schema():
-    with SCHEMA_PATH.open(encoding="utf-8") as f:
-        return json.load(f)
-
-
-class ConfigSchemaTest(unittest.TestCase):
-    def test_config_matches_schema(self):
-        config = load_config()
-        schema = load_schema()
-        jsonschema.validate(config, schema)
-
-    def test_required_modules_present(self):
-        modules = load_config()["modules"]
-        self.assertEqual(set(modules.keys()), {"dev", "bmad", "requirements", "essentials", "advanced"})
-
-    def test_enabled_defaults_and_flags(self):
-        modules = load_config()["modules"]
-        self.assertTrue(modules["dev"]["enabled"])
-        self.assertTrue(modules["essentials"]["enabled"])
-        self.assertFalse(modules["bmad"]["enabled"])
-        self.assertFalse(modules["requirements"]["enabled"])
-        self.assertFalse(modules["advanced"]["enabled"])
-
-    def test_operations_have_expected_shape(self):
-        config = load_config()
-        for name, module in config["modules"].items():
-            self.assertTrue(module["operations"], f"{name} should declare at least one operation")
-            for op in module["operations"]:
-                self.assertIn("type", op)
-                if op["type"] in {"copy_dir", "copy_file"}:
-                    self.assertTrue(op.get("source"), f"{name} operation missing source")
-                    self.assertTrue(op.get("target"), f"{name} operation missing target")
-                elif op["type"] == "run_command":
-                    self.assertTrue(op.get("command"), f"{name} run_command missing command")
-                    if "env" in op:
-                        self.assertIsInstance(op["env"], dict)
-                else:
-                    self.fail(f"Unsupported operation type: {op['type']}")
-
-    def test_operation_sources_exist_on_disk(self):
-        config = load_config()
-        for module in config["modules"].values():
-            for op in module["operations"]:
-                if op["type"] in {"copy_dir", "copy_file"}:
-                    path = (ROOT / op["source"]).expanduser()
-                    self.assertTrue(path.exists(), f"Source path not found: {path}")
-
-    def test_schema_rejects_invalid_operation_type(self):
-        config = load_config()
-        invalid = copy.deepcopy(config)
-        invalid["modules"]["dev"]["operations"][0]["type"] = "unknown_op"
-        schema = load_schema()
-        with self.assertRaises(jsonschema.exceptions.ValidationError):
-            jsonschema.validate(invalid, schema)
-
-
-if __name__ == "__main__":
-    unittest.main()
```
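
The deleted suite above pins down `install.load_config`: it parses the config, validates it against the sibling `config.schema.json`, and surfaces both JSON and schema failures as `ValueError`. A minimal sketch consistent with those tests (not the removed implementation):

```python
import json
from pathlib import Path

import jsonschema

def load_config(config_path: str) -> dict:
    """Parse config.json and validate it against the schema next to it.

    Sketch reconstructed from the deleted tests: both JSON syntax errors
    and schema violations surface as ValueError.
    """
    cfg_file = Path(config_path)
    schema_file = cfg_file.parent / "config.schema.json"
    try:
        config = json.loads(cfg_file.read_text(encoding="utf-8"))
    except json.JSONDecodeError as exc:
        raise ValueError(f"Invalid JSON in {cfg_file}: {exc}") from exc
    schema = json.loads(schema_file.read_text(encoding="utf-8"))
    try:
        jsonschema.validate(config, schema)
    except jsonschema.exceptions.ValidationError as exc:
        raise ValueError(f"Config does not match schema: {exc.message}") from exc
    return config
```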
```diff
@@ -1,458 +0,0 @@
-import json
-import os
-import shutil
-import sys
-from pathlib import Path
-
-import pytest
-
-import install
-
-
-ROOT = Path(__file__).resolve().parents[1]
-SCHEMA_PATH = ROOT / "config.schema.json"
-
-
-def write_config(tmp_path: Path, config: dict) -> Path:
-    cfg_path = tmp_path / "config.json"
-    cfg_path.write_text(json.dumps(config), encoding="utf-8")
-    shutil.copy(SCHEMA_PATH, tmp_path / "config.schema.json")
-    return cfg_path
-
-
-@pytest.fixture()
-def valid_config(tmp_path):
-    sample_file = tmp_path / "sample.txt"
-    sample_file.write_text("hello", encoding="utf-8")
-
-    sample_dir = tmp_path / "sample_dir"
-    sample_dir.mkdir()
-    (sample_dir / "f.txt").write_text("dir", encoding="utf-8")
-
-    config = {
-        "version": "1.0",
-        "install_dir": "~/.fromconfig",
-        "log_file": "install.log",
-        "modules": {
-            "dev": {
-                "enabled": True,
-                "description": "dev module",
-                "operations": [
-                    {"type": "copy_dir", "source": "sample_dir", "target": "devcopy"}
-                ],
-            },
-            "bmad": {
-                "enabled": False,
-                "description": "bmad",
-                "operations": [
-                    {"type": "copy_file", "source": "sample.txt", "target": "bmad.txt"}
-                ],
-            },
-            "requirements": {
-                "enabled": False,
-                "description": "reqs",
-                "operations": [
-                    {"type": "copy_file", "source": "sample.txt", "target": "req.txt"}
-                ],
-            },
-            "essentials": {
-                "enabled": True,
-                "description": "ess",
-                "operations": [
-                    {"type": "copy_file", "source": "sample.txt", "target": "ess.txt"}
-                ],
-            },
-            "advanced": {
-                "enabled": False,
-                "description": "adv",
-                "operations": [
-                    {"type": "copy_file", "source": "sample.txt", "target": "adv.txt"}
-                ],
-            },
-        },
-    }
-
-    cfg_path = write_config(tmp_path, config)
-    return cfg_path, config
-
-
-def make_ctx(tmp_path: Path) -> dict:
-    install_dir = tmp_path / "install"
-    return {
-        "install_dir": install_dir,
-        "log_file": install_dir / "install.log",
-        "status_file": install_dir / "installed_modules.json",
-        "config_dir": tmp_path,
-        "force": False,
-    }
-
-
-def test_parse_args_defaults():
-    args = install.parse_args([])
-    assert args.install_dir == install.DEFAULT_INSTALL_DIR
-    assert args.config == "config.json"
-    assert args.module is None
-    assert args.list_modules is False
-    assert args.force is False
-
-
-def test_parse_args_custom():
-    args = install.parse_args(
-        [
-            "--install-dir",
-            "/tmp/custom",
-            "--module",
-            "dev,bmad",
-            "--config",
-            "/tmp/cfg.json",
-            "--list-modules",
-            "--force",
-        ]
-    )
-
-    assert args.install_dir == "/tmp/custom"
-    assert args.module == "dev,bmad"
-    assert args.config == "/tmp/cfg.json"
-    assert args.list_modules is True
-    assert args.force is True
-
-
-def test_load_config_success(valid_config):
-    cfg_path, config_data = valid_config
-    loaded = install.load_config(str(cfg_path))
-    assert loaded["modules"]["dev"]["description"] == config_data["modules"]["dev"]["description"]
-
-
-def test_load_config_invalid_json(tmp_path):
-    bad = tmp_path / "bad.json"
-    bad.write_text("{broken", encoding="utf-8")
-    shutil.copy(SCHEMA_PATH, tmp_path / "config.schema.json")
-    with pytest.raises(ValueError):
-        install.load_config(str(bad))
-
-
-def test_load_config_schema_error(tmp_path):
-    cfg = tmp_path / "cfg.json"
-    cfg.write_text(json.dumps({"version": "1.0"}), encoding="utf-8")
-    shutil.copy(SCHEMA_PATH, tmp_path / "config.schema.json")
-    with pytest.raises(ValueError):
-        install.load_config(str(cfg))
-
-
-def test_resolve_paths_respects_priority(tmp_path):
-    config = {
-        "install_dir": str(tmp_path / "from_config"),
-        "log_file": "logs/install.log",
-        "modules": {},
-        "version": "1.0",
-    }
-    cfg_path = write_config(tmp_path, config)
-    args = install.parse_args(["--config", str(cfg_path)])
-
-    ctx = install.resolve_paths(config, args)
-    assert ctx["install_dir"] == (tmp_path / "from_config").resolve()
-    assert ctx["log_file"] == (tmp_path / "from_config" / "logs" / "install.log").resolve()
-    assert ctx["config_dir"] == tmp_path.resolve()
-
-    cli_args = install.parse_args(
-        ["--install-dir", str(tmp_path / "cli_dir"), "--config", str(cfg_path)]
-    )
-    ctx_cli = install.resolve_paths(config, cli_args)
-    assert ctx_cli["install_dir"] == (tmp_path / "cli_dir").resolve()
-
-
-def test_list_modules_output(valid_config, capsys):
-    _, config_data = valid_config
-    install.list_modules(config_data)
-    captured = capsys.readouterr().out
-    assert "dev" in captured
-    assert "essentials" in captured
-    assert "✓" in captured
-
-
-def test_select_modules_behaviour(valid_config):
-    _, config_data = valid_config
-
-    selected_default = install.select_modules(config_data, None)
-    assert set(selected_default.keys()) == {"dev", "essentials"}
-
-    selected_specific = install.select_modules(config_data, "bmad")
-    assert set(selected_specific.keys()) == {"bmad"}
-
-    with pytest.raises(ValueError):
-        install.select_modules(config_data, "missing")
-
-
-def test_ensure_install_dir(tmp_path, monkeypatch):
-    target = tmp_path / "install_here"
-    install.ensure_install_dir(target)
-    assert target.is_dir()
-
-    file_path = tmp_path / "conflict"
-    file_path.write_text("x", encoding="utf-8")
-    with pytest.raises(NotADirectoryError):
-        install.ensure_install_dir(file_path)
-
-    blocked = tmp_path / "blocked"
-    real_access = os.access
-
-    def fake_access(path, mode):
-        if Path(path) == blocked:
-            return False
-        return real_access(path, mode)
-
-    monkeypatch.setattr(os, "access", fake_access)
-    with pytest.raises(PermissionError):
-        install.ensure_install_dir(blocked)
-
-
-def test_op_copy_dir_respects_force(tmp_path):
-    ctx = make_ctx(tmp_path)
-    install.ensure_install_dir(ctx["install_dir"])
-
-    src = tmp_path / "src"
-    src.mkdir()
-    (src / "a.txt").write_text("one", encoding="utf-8")
-
-    op = {"type": "copy_dir", "source": "src", "target": "dest"}
-    install.op_copy_dir(op, ctx)
-    target_file = ctx["install_dir"] / "dest" / "a.txt"
-    assert target_file.read_text(encoding="utf-8") == "one"
-
-    (src / "a.txt").write_text("two", encoding="utf-8")
-    install.op_copy_dir(op, ctx)
-    assert target_file.read_text(encoding="utf-8") == "one"
-
-    ctx["force"] = True
-    install.op_copy_dir(op, ctx)
-    assert target_file.read_text(encoding="utf-8") == "two"
-
-
-def test_op_copy_file_behaviour(tmp_path):
-    ctx = make_ctx(tmp_path)
-    install.ensure_install_dir(ctx["install_dir"])
-
-    src = tmp_path / "file.txt"
-    src.write_text("first", encoding="utf-8")
-
-    op = {"type": "copy_file", "source": "file.txt", "target": "out/file.txt"}
-    install.op_copy_file(op, ctx)
-    dst = ctx["install_dir"] / "out" / "file.txt"
-    assert dst.read_text(encoding="utf-8") == "first"
-
-    src.write_text("second", encoding="utf-8")
-    install.op_copy_file(op, ctx)
-    assert dst.read_text(encoding="utf-8") == "first"
-
-    ctx["force"] = True
-    install.op_copy_file(op, ctx)
-    assert dst.read_text(encoding="utf-8") == "second"
-
-
-def test_op_run_command_success(tmp_path):
-    ctx = make_ctx(tmp_path)
-    install.ensure_install_dir(ctx["install_dir"])
-    install.op_run_command({"type": "run_command", "command": "echo hello"}, ctx)
-    log_content = ctx["log_file"].read_text(encoding="utf-8")
-    assert "hello" in log_content
-
-
-def test_op_run_command_failure(tmp_path):
-    ctx = make_ctx(tmp_path)
-    install.ensure_install_dir(ctx["install_dir"])
-    with pytest.raises(RuntimeError):
-        install.op_run_command(
-            {"type": "run_command", "command": f"{sys.executable} -c 'import sys; sys.exit(2)'"},
-            ctx,
-        )
-    log_content = ctx["log_file"].read_text(encoding="utf-8")
-    assert "returncode: 2" in log_content
-
-
-def test_execute_module_success(tmp_path):
-    ctx = make_ctx(tmp_path)
-    install.ensure_install_dir(ctx["install_dir"])
-    src = tmp_path / "src.txt"
-    src.write_text("data", encoding="utf-8")
-
-    cfg = {"operations": [{"type": "copy_file", "source": "src.txt", "target": "out.txt"}]}
-    result = install.execute_module("demo", cfg, ctx)
-    assert result["status"] == "success"
-    assert (ctx["install_dir"] / "out.txt").read_text(encoding="utf-8") == "data"
-
-
-def test_execute_module_failure_logs_and_stops(tmp_path):
-    ctx = make_ctx(tmp_path)
-    install.ensure_install_dir(ctx["install_dir"])
-    cfg = {"operations": [{"type": "unknown", "source": "", "target": ""}]}
-
-    with pytest.raises(ValueError):
-        install.execute_module("demo", cfg, ctx)
-
-    log_content = ctx["log_file"].read_text(encoding="utf-8")
-    assert "failed on unknown" in log_content
-
-
-def test_write_log_and_status(tmp_path):
-    ctx = make_ctx(tmp_path)
-    install.ensure_install_dir(ctx["install_dir"])
-
-    install.write_log({"level": "INFO", "message": "hello"}, ctx)
-    content = ctx["log_file"].read_text(encoding="utf-8")
-    assert "hello" in content
-
-    results = [
-        {"module": "dev", "status": "success", "operations": [], "installed_at": "ts"}
-    ]
-    install.write_status(results, ctx)
-    status_data = json.loads(ctx["status_file"].read_text(encoding="utf-8"))
-    assert status_data["modules"]["dev"]["status"] == "success"
-
-
-def test_main_success(valid_config, tmp_path):
-    cfg_path, _ = valid_config
-    install_dir = tmp_path / "install_final"
-    rc = install.main(
-        [
-            "--config",
-            str(cfg_path),
-            "--install-dir",
-            str(install_dir),
-            "--module",
-            "dev",
-        ]
-    )
-
-    assert rc == 0
-    assert (install_dir / "devcopy" / "f.txt").exists()
-    assert (install_dir / "installed_modules.json").exists()
-
-
-def test_main_failure_without_force(tmp_path):
-    cfg = {
-        "version": "1.0",
-        "install_dir": "~/.claude",
-        "log_file": "install.log",
-        "modules": {
-            "dev": {
-                "enabled": True,
-                "description": "dev",
-                "operations": [
-                    {
-                        "type": "run_command",
-                        "command": f"{sys.executable} -c 'import sys; sys.exit(3)'",
-                    }
-                ],
-            },
-            "bmad": {
-                "enabled": False,
-                "description": "bmad",
-                "operations": [
-                    {"type": "copy_file", "source": "s.txt", "target": "t.txt"}
-                ],
-            },
-            "requirements": {
-                "enabled": False,
-                "description": "reqs",
-                "operations": [
-                    {"type": "copy_file", "source": "s.txt", "target": "r.txt"}
-                ],
-            },
-            "essentials": {
-                "enabled": False,
-                "description": "ess",
-                "operations": [
-                    {"type": "copy_file", "source": "s.txt", "target": "e.txt"}
-                ],
-            },
-            "advanced": {
-                "enabled": False,
-                "description": "adv",
-                "operations": [
-                    {"type": "copy_file", "source": "s.txt", "target": "a.txt"}
-                ],
-            },
-        },
-    }
-
-    cfg_path = write_config(tmp_path, cfg)
-    install_dir = tmp_path / "fail_install"
-    rc = install.main(
-        [
-            "--config",
-            str(cfg_path),
-            "--install-dir",
-            str(install_dir),
-            "--module",
-            "dev",
-        ]
-    )
-
-    assert rc == 1
-    assert not (install_dir / "installed_modules.json").exists()
-
-
-def test_main_force_records_failure(tmp_path):
-    cfg = {
-        "version": "1.0",
-        "install_dir": "~/.claude",
-        "log_file": "install.log",
-        "modules": {
-            "dev": {
-                "enabled": True,
-                "description": "dev",
-                "operations": [
-                    {
-                        "type": "run_command",
-                        "command": f"{sys.executable} -c 'import sys; sys.exit(4)'",
-                    }
-                ],
-            },
-            "bmad": {
-                "enabled": False,
-                "description": "bmad",
-                "operations": [
-                    {"type": "copy_file", "source": "s.txt", "target": "t.txt"}
-                ],
-            },
-            "requirements": {
-                "enabled": False,
-                "description": "reqs",
-                "operations": [
-                    {"type": "copy_file", "source": "s.txt", "target": "r.txt"}
-                ],
-            },
-            "essentials": {
-                "enabled": False,
-                "description": "ess",
-                "operations": [
-                    {"type": "copy_file", "source": "s.txt", "target": "e.txt"}
-                ],
-            },
-            "advanced": {
-                "enabled": False,
-                "description": "adv",
-                "operations": [
-                    {"type": "copy_file", "source": "s.txt", "target": "a.txt"}
-                ],
-            },
-        },
-    }
-
-    cfg_path = write_config(tmp_path, cfg)
-    install_dir = tmp_path / "force_install"
-    rc = install.main(
-        [
-            "--config",
-            str(cfg_path),
-            "--install-dir",
-            str(install_dir),
-            "--module",
-            "dev",
-            "--force",
-        ]
-    )
-
-    assert rc == 0
-    status = json.loads((install_dir / "installed_modules.json").read_text(encoding="utf-8"))
-    assert status["modules"]["dev"]["status"] == "failed"
```
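
The copy-operation tests above fix the force semantics: an existing target is left untouched unless force is set, in which case the source overwrites it. A sketch of `op_copy_file` consistent with that contract (reconstructed from the tests, not the removed code):

```python
import shutil
from pathlib import Path

def op_copy_file(op: dict, ctx: dict) -> None:
    """Copy op["source"] (relative to the config dir) to op["target"]
    under the install dir, skipping existing targets unless ctx["force"].

    Sketch based on the deleted tests; the removed implementation may differ.
    """
    src = Path(ctx["config_dir"]) / op["source"]
    dst = Path(ctx["install_dir"]) / op["target"]
    if dst.exists() and not ctx.get("force", False):
        return  # the tests expect the first copy to win without --force
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
```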
```diff
@@ -1,224 +0,0 @@
-import json
-import shutil
-import sys
-from pathlib import Path
-
-import pytest
-
-import install
-
-
-ROOT = Path(__file__).resolve().parents[1]
-SCHEMA_PATH = ROOT / "config.schema.json"
-
-
-def _write_schema(target_dir: Path) -> None:
-    shutil.copy(SCHEMA_PATH, target_dir / "config.schema.json")
-
-
-def _base_config(install_dir: Path, modules: dict) -> dict:
-    return {
-        "version": "1.0",
-        "install_dir": str(install_dir),
-        "log_file": "install.log",
-        "modules": modules,
-    }
-
-
-def _prepare_env(tmp_path: Path, modules: dict) -> tuple[Path, Path, Path]:
-    """Create a temp config directory with schema and config.json."""
-
-    config_dir = tmp_path / "config"
-    install_dir = tmp_path / "install"
-    config_dir.mkdir()
-    _write_schema(config_dir)
-
-    cfg_path = config_dir / "config.json"
-    cfg_path.write_text(
-        json.dumps(_base_config(install_dir, modules)), encoding="utf-8"
-    )
-    return cfg_path, install_dir, config_dir
-
-
-def _sample_sources(config_dir: Path) -> dict:
-    sample_dir = config_dir / "sample_dir"
-    sample_dir.mkdir()
-    (sample_dir / "nested.txt").write_text("dir-content", encoding="utf-8")
-
-    sample_file = config_dir / "sample.txt"
-    sample_file.write_text("file-content", encoding="utf-8")
-
-    return {"dir": sample_dir, "file": sample_file}
-
-
-def _read_status(install_dir: Path) -> dict:
-    return json.loads((install_dir / "installed_modules.json").read_text("utf-8"))
-
-
-def test_single_module_full_flow(tmp_path):
-    cfg_path, install_dir, config_dir = _prepare_env(
-        tmp_path,
-        {
-            "solo": {
-                "enabled": True,
-                "description": "single module",
-                "operations": [
-                    {"type": "copy_dir", "source": "sample_dir", "target": "payload"},
-                    {
-                        "type": "copy_file",
-                        "source": "sample.txt",
-                        "target": "payload/sample.txt",
-                    },
-                    {
-                        "type": "run_command",
-                        "command": f"{sys.executable} -c \"from pathlib import Path; Path('run.txt').write_text('ok', encoding='utf-8')\"",
-                    },
-                ],
-            }
-        },
-    )
-
-    _sample_sources(config_dir)
-    rc = install.main(["--config", str(cfg_path), "--module", "solo"])
-
-    assert rc == 0
-    assert (install_dir / "payload" / "nested.txt").read_text(encoding="utf-8") == "dir-content"
-    assert (install_dir / "payload" / "sample.txt").read_text(encoding="utf-8") == "file-content"
-    assert (install_dir / "run.txt").read_text(encoding="utf-8") == "ok"
-
-    status = _read_status(install_dir)
-    assert status["modules"]["solo"]["status"] == "success"
-    assert len(status["modules"]["solo"]["operations"]) == 3
-
-
-def test_multi_module_install_and_status(tmp_path):
-    modules = {
-        "alpha": {
-            "enabled": True,
-            "description": "alpha",
-            "operations": [
-                {
-                    "type": "copy_file",
-                    "source": "sample.txt",
-                    "target": "alpha.txt",
-                }
-            ],
-        },
-        "beta": {
-            "enabled": True,
-            "description": "beta",
-            "operations": [
-                {
-                    "type": "copy_dir",
-                    "source": "sample_dir",
-                    "target": "beta_dir",
-                }
-            ],
-        },
-    }
-
-    cfg_path, install_dir, config_dir = _prepare_env(tmp_path, modules)
-    _sample_sources(config_dir)
-
-    rc = install.main(["--config", str(cfg_path)])
-    assert rc == 0
-
-    assert (install_dir / "alpha.txt").read_text(encoding="utf-8") == "file-content"
-    assert (install_dir / "beta_dir" / "nested.txt").exists()
-
-    status = _read_status(install_dir)
-    assert set(status["modules"].keys()) == {"alpha", "beta"}
-    assert all(mod["status"] == "success" for mod in status["modules"].values())
-
-
-def test_force_overwrites_existing_files(tmp_path):
-    modules = {
-        "forcey": {
-            "enabled": True,
-            "description": "force copy",
-            "operations": [
-                {
-                    "type": "copy_file",
-                    "source": "sample.txt",
-                    "target": "target.txt",
-                }
-            ],
-        }
-    }
-
-    cfg_path, install_dir, config_dir = _prepare_env(tmp_path, modules)
-    sources = _sample_sources(config_dir)
-
-    install.main(["--config", str(cfg_path), "--module", "forcey"])
-    assert (install_dir / "target.txt").read_text(encoding="utf-8") == "file-content"
-
-    sources["file"].write_text("new-content", encoding="utf-8")
-
-    rc = install.main(["--config", str(cfg_path), "--module", "forcey", "--force"])
-    assert rc == 0
-    assert (install_dir / "target.txt").read_text(encoding="utf-8") == "new-content"
-
-    status = _read_status(install_dir)
-    assert status["modules"]["forcey"]["status"] == "success"
-
-
-def test_failure_triggers_rollback_and_restores_status(tmp_path):
-    # First successful run to create a known-good status file.
-    ok_modules = {
-        "stable": {
-            "enabled": True,
-            "description": "stable",
-            "operations": [
-                {
-                    "type": "copy_file",
-                    "source": "sample.txt",
-                    "target": "stable.txt",
-                }
-            ],
-        }
-    }
-
-    cfg_path, install_dir, config_dir = _prepare_env(tmp_path, ok_modules)
-    _sample_sources(config_dir)
-    assert install.main(["--config", str(cfg_path)]) == 0
-    pre_status = _read_status(install_dir)
-    assert "stable" in pre_status["modules"]
-
-    # Rewrite config to introduce a failing module.
-    failing_modules = {
-        **ok_modules,
-        "broken": {
-            "enabled": True,
-            "description": "will fail",
-            "operations": [
-                {
-                    "type": "copy_file",
-                    "source": "sample.txt",
-                    "target": "broken.txt",
-                },
-                {
-                    "type": "run_command",
-                    "command": f"{sys.executable} -c 'import sys; sys.exit(5)'",
-                },
-            ],
-        },
-    }
-
-    cfg_path.write_text(
-        json.dumps(_base_config(install_dir, failing_modules)), encoding="utf-8"
-    )
-
-    rc = install.main(["--config", str(cfg_path)])
-    assert rc == 1
-
-    # The failed module's file should have been removed by rollback.
-    assert not (install_dir / "broken.txt").exists()
-    # Previously installed files remain.
-    assert (install_dir / "stable.txt").exists()
-
-    restored_status = _read_status(install_dir)
-    assert restored_status == pre_status
-
-    log_content = (install_dir / "install.log").read_text(encoding="utf-8")
-    assert "Rolling back" in log_content
```