Mirror of https://github.com/cexll/myclaude.git (synced 2026-02-05 02:30:26 +08:00)

# Compare commits (25 commits)
| Author | SHA1 | Date |
|---|---|---|
| | fe5508228f | |
| | 50093036c3 | |
| | 0cae0ede08 | |
| | 4613b57240 | |
| | 7535a7b101 | |
| | f6bb97eba9 | |
| | 78a411462b | |
| | 9471a981e3 | |
| | 3d27d44676 | |
| | 6a66c9741f | |
| | a09c103cfb | |
| | 1dec763e26 | |
| | f57ea2df59 | |
| | d215c33549 | |
| | b3f8fcfea6 | |
| | 806bb04a35 | |
| | b1156038de | |
| | 0c93bbe574 | |
| | 6f4f4e701b | |
| | ff301507fe | |
| | 93b72eba42 | |
| | b01758e7e1 | |
| | c51b38c671 | |
| | b227fee225 | |
| | 2b7569335b | |
### .github/workflows/release.yml (29 changes, vendored)

```diff
@@ -97,6 +97,11 @@ jobs:
         with:
           path: artifacts

+      - name: Setup Node.js
+        uses: actions/setup-node@v4
+        with:
+          node-version: '20'
+
       - name: Prepare release files
         run: |
           mkdir -p release
@@ -104,32 +109,20 @@ jobs:
           cp install.sh install.bat release/
           ls -la release/

-      - name: Extract release notes from CHANGELOG
-        id: extract_notes
+      - name: Generate release notes with git-cliff
        run: |
-          VERSION=${GITHUB_REF#refs/tags/v}
-
-          # Extract version section from CHANGELOG.md
-          awk -v ver="$VERSION" '
-            /^## [0-9]+\.[0-9]+\.[0-9]+ - / {
-              if (found) exit
-              if ($2 == ver) {
-                found = 1
-                next
-              }
-            }
-            found && /^## / { exit }
-            found { print }
-          ' CHANGELOG.md > release_notes.md
+          # Install git-cliff via npx
+          npx git-cliff@latest --current --strip all -o release_notes.md

-          # Fallback to auto-generated if extraction failed
+          # Fallback if generation failed
          if [ ! -s release_notes.md ]; then
-            echo "⚠️ No release notes found in CHANGELOG.md for version $VERSION" > release_notes.md
+            echo "⚠️ Failed to generate release notes with git-cliff" > release_notes.md
            echo "" >> release_notes.md
            echo "## What's Changed" >> release_notes.md
            echo "See commits in this release for details." >> release_notes.md
          fi

          echo "--- Generated Release Notes ---"
          cat release_notes.md

      - name: Create Release
```
### .gitignore (1 change, vendored)

```diff
@@ -4,3 +4,4 @@
 .pytest_cache
 __pycache__
 .coverage
+coverage.out
```
### CHANGELOG.md (771 changes)

````diff
@@ -1,129 +1,712 @@
 # Changelog

-## 5.2.0 - 2025-12-13
+All notable changes to this project will be documented in this file.

-### 🚀 Core Features
+## [5.2.4] - 2025-12-16

-#### Skills System Enhancements
-- **New Skills**: Added `codeagent`, `product-requirements`, `prototype-prompt-generator` to `skill-rules.json`
-- **Auto-Activation**: Skills automatically trigger based on keyword/pattern matching via hooks
-- **Backward Compatibility**: Retained `skills/codex/SKILL.md` for existing workflows
-
-#### Multi-Backend Support (codeagent-wrapper)
-- **Renamed**: `codex-wrapper` → `codeagent-wrapper` with pluggable backend architecture
-- **Three Backends**: Codex (default), Claude, Gemini via `--backend` flag
-- **Smart Parser**: Auto-detects backend JSON stream formats
-- **Session Resume**: All backends support `-r <session_id>` cross-session resume
-- **Parallel Execution**: DAG task scheduling with global and per-task backend configuration
-- **Concurrency Control**: `CODEAGENT_MAX_PARALLEL_WORKERS` env var limits concurrent tasks (max 100)
-- **Test Coverage**: 93.4% (backend.go 100%, config.go 97.8%, executor.go 96.4%)
+### ⚙️ Miscellaneous Tasks

-#### Dev Workflow
-- **`/dev`**: 6-step minimal dev workflow with mandatory 90% test coverage
-
-#### Hooks System
-- **UserPromptSubmit**: Auto-activate skills based on context
-- **PostToolUse**: Auto-validation/formatting after tool execution
-- **Stop**: Cleanup and reporting on session end
-- **Examples**: Skill auto-activation, pre-commit checks
+- integrate git-cliff for automated changelog generation
+- bump version to 5.2.4

-#### Skills System
-- **Auto-Activation**: `skill-rules.json` regex trigger rules
-- **codeagent skill**: Multi-backend wrapper integration
-- **Modular Design**: Easy to extend with custom skills
-
-#### Installation System Enhancements
-- **`merge_json` operation**: Auto-merge `settings.json` configuration
-- **Modular Installation**: `python3 install.py --module dev`
-- **Verbose Logging**: `--verbose/-v` enables terminal real-time output
-- **Streaming Output**: `op_run_command` streams bash script execution
-- **Configuration Cleanup**: Removed deprecated `gh` module from `config.json`
+### 🐛 Bug Fixes
+
+- Prevent infinite recursion when invoking the Claude backend
+- isolate log files per task in parallel mode
+
+### 💼 Other
+
+- Merge pull request #70 from cexll/fix/prevent-codeagent-infinite-recursion
+- Merge pull request #69 from cexll/myclaude-master-20251215-073053-338465000
+- update CHANGELOG.md
+- Merge pull request #65 from cexll/fix/issue-64-buffer-overflow
+
+## [5.2.3] - 2025-12-15
+
+### 🐛 Bug Fixes
+
+- Fix bufio.Scanner "token too long" error ([#64](https://github.com/cexll/myclaude/issues/64))
+
+### 💼 Other
+
+- change version
+
+### 🧪 Testing
+
+- Sync the version number in tests to 5.2.3
+
+## [5.2.2] - 2025-12-13
+
+### ⚙️ Miscellaneous Tasks
+
+- Bump version and clean up documentation
+
+### 🐛 Bug Fixes
+
+- fix codeagent backend claude no auto
+- fix install.py dev fail
+
+### 🧪 Testing
+
+- Fix tests for ClaudeBackend default --dangerously-skip-permissions
+
+## [5.2.1] - 2025-12-13
+
+### 🐛 Bug Fixes
+
+- fix codeagent claude and gemini root dir
+
+### 💼 Other
+
+- update readme
+
+## [5.2.0] - 2025-12-13
+
+### ⚙️ Miscellaneous Tasks
+
+- Update CHANGELOG and remove deprecated test files
+
+### 🐛 Bug Fixes
+
+- fix race condition in stdout parsing
+- add worker limit cap and remove legacy alias
+- use -r flag for gemini backend resume
+- clarify module list shows default state not enabled
+- use -r flag for claude backend resume
+- remove binary artifacts and improve error messages
+- Show recent error messages on abnormal exit
+- Real-time streaming output for op_run_command
+- Fix permission flag logic and version number tests
+- Refactor signal handling to avoid duplicate nil checks
+- Remove the .claude config file validation step
+- Fix duplicate startup banner printing in parallel execution
+- Fix build and test breakage after merging master
+
+### 💼 Other
+
+- Merge rc/5.2 into master: v5.2.0 release improvements
+- Merge pull request #53 from cexll/rc/5.2
+- remove docs
+- remove docs
+- add prototype prompt skill
+- add prd skill
+- update memory claude
+- remove command gh flow
+- update license
+- Merge branch 'master' into rc/5.2
+- Merge pull request #52 from cexll/fix/parallel-log-path-on-startup

 ### 📚 Documentation

-- `docs/architecture.md` (21KB): Architecture overview with ASCII diagrams
-- `docs/CODEAGENT-WRAPPER.md` (9KB): Complete usage guide
-- `docs/HOOKS.md` (4KB): Customization guide
-- `README.md`: Added documentation index, corrected default backend description
+- remove GitHub workflow related content

-### 🔧 Important Fixes
-
-#### codeagent-wrapper
-- Fixed Claude/Gemini backend `-C` (workdir) and `-r` (resume) parameter support (codeagent-wrapper/backend.go:80-120)
-- Corrected Claude backend permission flag logic `if cfg.SkipPermissions` (codeagent-wrapper/backend.go:95)
-- Fixed parallel mode startup banner duplication (codeagent-wrapper/main.go:184-194 removed)
-- Extract and display recent errors on abnormal exit `Logger.ExtractRecentErrors()` (codeagent-wrapper/logger.go:156)
-- Added task block index to parallel config error messages (codeagent-wrapper/config.go:245)
-- Refactored signal handling logic to avoid duplicate nil checks (codeagent-wrapper/main.go:290-305)
-- Removed binary artifacts from tracking (codeagent-wrapper, *.test, coverage.out)
+### 🚀 Features

-#### Installation Scripts
-- Fixed issue #55: `op_run_command` uses Popen + selectors for real-time streaming output
-- Fixed issue #56: Display recent errors instead of entire log
-- Changed module list header from "Enabled" to "Default" to avoid ambiguity
-
-#### CI/CD
-- Removed `.claude/` config file validation step (.github/workflows/ci.yml:45)
-- Updated version test case from 5.1.0 → 5.2.0 (codeagent-wrapper/main_test.go:23)
+- Complete skills system integration and config cleanup

-#### Commands & Documentation
-- Reverted `skills/codex/SKILL.md` to `codex-wrapper` for backward compatibility
+- Improve release notes and installation scripts

-#### dev-workflow
-- Replaced Codex skill → codeagent skill throughout
-- Added UI auto-detection: backend tasks use codex, UI tasks use gemini
-- Corrected agent name: `develop-doc-generator` → `dev-plan-generator`
+- Add terminal log output and verbose mode

-### ⚙️ Configuration & Environment Variables
+- Full multi-backend support with security hardening

-#### New Environment Variables
-- `CODEAGENT_SKIP_PERMISSIONS`: Control permission check behavior
-  - Claude backend defaults to `--dangerously-skip-permissions` enabled, set to `true` to disable
-  - Codex/Gemini backends default to permission checks enabled, set to `true` to skip
-- `CODEAGENT_MAX_PARALLEL_WORKERS`: Parallel task concurrency limit (default: unlimited, recommended: 8, max: 100)
+- Replace Codex with codeagent and add UI auto-detection

-#### Configuration Files
-- `config.schema.json`: Added `op_merge_json` schema validation
+### 🚜 Refactor

-### ⚠️ Breaking Changes
-
-**codex-wrapper → codeagent-wrapper rename**
+- Adjust file naming and skill definitions

-**Migration**:
-```bash
-python3 install.py --module dev --force
-```
+### 🧪 Testing

-**Backward Compatibility**: `codex-wrapper/main.go` provides compatibility entry point
-
-### 📦 Installation
+- Add unit tests for ExtractRecentErrors

-```bash
-# Install dev module
-python3 install.py --module dev
+## [5.1.4] - 2025-12-09

-# List all modules
-python3 install.py --list-modules
-
-# Verbose logging mode
-python3 install.py --module dev --verbose
-```
+### 🐛 Bug Fixes

-### 🧪 Test Results
-
-✅ **All tests passing**
-- Overall coverage: 93.4%
-- Security scan: 0 issues (gosec)
-- Linting: Pass
+- Return the log file path immediately when a task starts, to support real-time debugging

-### 📄 Related PRs & Issues
+## [5.1.3] - 2025-12-08

-- PR #53: Enterprise Workflow with Multi-Backend Support
-- Issue #55: Installation script execution not visible
-- Issue #56: Unfriendly error logging on abnormal exit
-
-### 👥 Contributors
+### 🐛 Bug Fixes

-- Claude Sonnet 4.5
-- Claude Opus 4.5
-- SWE-Agent-Bot
+- resolve CI timing race in TestFakeCmdInfra
+
+## [5.1.2] - 2025-12-08
+
+### 🐛 Bug Fixes
+
+- Fix channel synchronization race conditions and deadlocks
+
+### 💼 Other
+
+- Merge pull request #51 from cexll/fix/channel-sync-race-conditions
+- change codex-wrapper version
+
+## [5.1.1] - 2025-12-08
+
+### 🐛 Bug Fixes
+
+- Harden log cleanup for safety and reliability
+- resolve data race on forceKillDelay with atomic operations
+
+### 💼 Other
+
+- Merge pull request #49 from cexll/freespace8/master
+- resolve signal handling conflict preserving testability and Windows support
+
+### 🧪 Testing
+
+- Add tests, raising coverage to 89.3%
+
+## [5.1.0] - 2025-12-07
+
+### 💼 Other
+
+- Merge pull request #45 from Michaelxwb/master
+- Update Windows installation instructions
+- Update the packaging script
+- Support installation on Windows
+- Merge pull request #1 from Michaelxwb/feature-win
+- Support Windows
+
+### 🚀 Features
+
+- Add startup log cleanup and --cleanup flag support
+- implement enterprise workflow with multi-backend support
+
+## [5.0.0] - 2025-12-05
+
+### ⚙️ Miscellaneous Tasks
+
+- clarify unit-test coverage levels in requirement questions
+
+### 🐛 Bug Fixes
+
+- defer startup log until args parsed
+
+### 💼 Other
+
+- Merge branch 'master' of github.com:cexll/myclaude
+- Merge pull request #43 from gurdasnijor/smithery/add-badge
+- Add Smithery badge
+- Merge pull request #42 from freespace8/master
+
+### 📚 Documentation
+
+- rewrite documentation for v5.0 modular architecture
+
+### 🚀 Features
+
+- feat install.py
+- implement modular installation system
+
+### 🚜 Refactor
+
+- remove deprecated plugin modules
+
+## [4.8.2] - 2025-12-02
+
+### 🐛 Bug Fixes
+
+- skip signal test in CI environment
+- make forceKillDelay testable to prevent signal test timeout
+- correct Go version in go.mod from 1.25.3 to 1.21
+- fix codex wrapper async log
+- capture and include stderr in error messages
+
+### 💼 Other
+
+- Merge pull request #41 from cexll/fix-async-log
+- remove test case 90
+- optimize codex-wrapper
+- Merge branch 'master' into fix-async-log
+
+## [4.8.1] - 2025-12-01
+
+### 🎨 Styling
+
+- replace emoji with text labels
+
+### 🐛 Bug Fixes
+
+- improve --parallel parameter validation and docs
+
+### 💼 Other
+
+- remove codex-wrapper bin
+
+## [4.8.0] - 2025-11-30
+
+### 💼 Other
+
+- update codex skill dependencies
+
+## [4.7.3] - 2025-11-29
+
+### 🐛 Bug Fixes
+
+- Keep log files for post-exit debugging and improve log output
+
+### 💼 Other
+
+- Merge pull request #34 from cexll/cce-worktree-master-20251129-111802-997076000
+- update CLAUDE.md and codex skill
+
+### 📚 Documentation
+
+- improve codex skill parameter best practices
+
+### 🚀 Features
+
+- add session resume support and improve output format
+- add parallel execution support to codex-wrapper
+- add async logging to temp file with lifecycle management
+
+## [4.7.2] - 2025-11-28
+
+### 🐛 Bug Fixes
+
+- improve buffer size and streamline message extraction
+
+### 💼 Other
+
+- Merge pull request #32 from freespace8/master
+
+### 🧪 Testing
+
+- Add tests for handling oversized single-line text and non-string text
+
+## [4.7.1] - 2025-11-27
+
+### 💼 Other
+
+- optimize dev pipeline
+- Merge feat/codex-wrapper: fix repository URLs
+
+## [4.7] - 2025-11-27
+
+### 🐛 Bug Fixes
+
+- update repository URLs to cexll/myclaude
+
+## [4.7-alpha1] - 2025-11-27
+
+### 🐛 Bug Fixes
+
+- fix marketplace schema validation error in dev-workflow plugin
+
+### 💼 Other
+
+- Merge pull request #29 from cexll/feat/codex-wrapper
+- Add codex-wrapper Go implementation
+- update readme
+- update readme
+
+## [4.6] - 2025-11-25
+
+### 💼 Other
+
+- update dev workflow
+- update dev workflow
+
+## [4.5] - 2025-11-25
+
+### 🐛 Bug Fixes
+
+- fix codex skill eof
+
+### 💼 Other
+
+- update dev workflow plugin
+- update readme
+
+## [4.4] - 2025-11-22
+
+### 🐛 Bug Fixes
+
+- fix codex skill timeout and add more log
+- fix codex skill
+
+### 💼 Other
+
+- update gemini skills
+- update dev workflow
+- update codex skills model config
+- Merge branch 'master' of github.com:cexll/myclaude
+- Merge pull request #24 from cexll/swe-agent/23-1763544297
+
+### 🚀 Features
+
+- Support configuring the skills model via environment variables
+
+## [4.3] - 2025-11-19
+
+### 🐛 Bug Fixes
+
+- fix codex skills running
+
+### 💼 Other
+
+- update skills plugin
+- update gemini
+- update doc
+- Add Gemini CLI integration skill
+
+### 🚀 Features
+
+- feat simple dev workflow
+
+## [4.2.2] - 2025-11-15
+
+### 💼 Other
+
+- update codex skills
+
+## [4.2.1] - 2025-11-14
+
+### 💼 Other
+
+- Merge pull request #21 from Tshoiasc/master
+- Merge branch 'master' into master
+- Change default model to gpt-5.1-codex
+- Enhance codex.py to auto-detect long inputs and switch to stdin mode, improving handling of shell argument issues. Updated build_codex_args to support stdin and added relevant logging for task length warnings.
+
+## [4.2] - 2025-11-13
+
+### 🐛 Bug Fixes
+
+- fix codex.py wsl run err
+
+### 💼 Other
+
+- optimize codex skills
+- Merge branch 'master' of github.com:cexll/myclaude
+- Rename SKILLS.md to SKILL.md
+- optimize codex skills
+
+### 🚀 Features
+
+- feat codex skills
+
+## [4.1] - 2025-11-04
+
+### 💼 Other
+
+- update enhance-prompt.md response
+- update readme
+
+### 📚 Documentation
+
+- Add /enhance-prompt command and update all README docs
+
+## [4.0] - 2025-10-22
+
+### 🐛 Bug Fixes
+
+- fix skills format
+
+### 💼 Other
+
+- Merge branch 'master' of github.com:cexll/myclaude
+- Merge pull request #18 from cexll/swe-agent/17-1760969135
+- update requirements clarity
+- update .gitignore
+- Fix #17: Update root marketplace.json to use skills array
+- Fix #17: Convert requirements-clarity to correct plugin directory format
+- Fix #17: Convert requirements-clarity to correct plugin directory format
+- Convert requirements-clarity to plugin format with English prompts
+- Translate requirements-clarity skill to English for plugin compatibility
+- Add requirements-clarity Claude Skill
+- Add requirements clarification command
+- update
+
+## [3.5] - 2025-10-20
+
+### 💼 Other
+
+- Merge pull request #15 from cexll/swe-agent/13-1760944712
+- Fix #13: Clean up redundant README files
+- Optimize README structure - Solution A (modular)
+- Merge pull request #14 from cexll/swe-agent/12-1760944588
+- Fix #12: Update Makefile install paths for new directory structure
+
+## [3.4] - 2025-10-20
+
+### 💼 Other
+
+- Merge pull request #11 from cexll/swe-agent/10-1760752533
+- Fix marketplace metadata references
+- Fix plugin configuration: rename to marketplace.json and update repository URLs
+- Fix #10: Restructure plugin directories to ensure proper command isolation
+
+## [3.3] - 2025-10-15
+
+### 💼 Other
+
+- Update README-zh.md
+- Update README.md
+- Update marketplace.json
+- Update Chinese README with v3.2 plugin system documentation
+- Update README with v3.2 plugin system documentation
+
+## [3.2] - 2025-10-10
+
+### 💼 Other
+
+- Add Claude Code plugin system support
+- update readme
+- Add Makefile for quick deployment and update READMEs
+
+## [3.1] - 2025-09-17
+
+### ◀️ Revert
+
+- revert
+
+### 🐛 Bug Fixes
+
+- fixed bmad-orchestrator not found
+- fix bmad
+
+### 💼 Other
+
+- update bmad review with codex support
+- Optimize BMAD workflow and agent configuration
+- update gpt5
+- support bmad output-style
+- update bmad user guide
+- update bmad readme
+- optimize requirements pilot
+- add use gpt5 codex
+- add bmad pilot
+- sync READMEs with actual commands/agents; remove nonexistent commands; enhance requirements-pilot with testing decision gate and options.
+- Update Chinese README and requirements-pilot command to align with latest workflow
+- update readme
+- update agent
+- update bugfix sub agents
+- Update ask support KISS YAGNI SOLID
+- Add comprehensive documentation and multi-agent workflow system
+- update commands
+
+<!-- generated by git-cliff -->
````
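The `CODEAGENT_MAX_PARALLEL_WORKERS` cap described in the removed 5.2.0 notes is, in Go terms, a counting semaphore. Below is a minimal self-contained sketch of that pattern; the env var name and the hard cap of 100 come from the changelog, while the task names and the goroutine body are illustrative stand-ins, not the wrapper's real API:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"sync"
)

const hardCap = 100 // changelog: worker limit is capped at 100

// workerLimit reads CODEAGENT_MAX_PARALLEL_WORKERS; 0 means unlimited.
func workerLimit() int {
	n, err := strconv.Atoi(os.Getenv("CODEAGENT_MAX_PARALLEL_WORKERS"))
	if err != nil || n <= 0 {
		return 0
	}
	if n > hardCap {
		return hardCap
	}
	return n
}

func main() {
	tasks := []string{"build", "test", "lint", "docs"} // stand-in task IDs

	var sem chan struct{}
	if limit := workerLimit(); limit > 0 {
		sem = make(chan struct{}, limit) // buffered channel as counting semaphore
	}

	var wg sync.WaitGroup
	for _, t := range tasks {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			if sem != nil {
				sem <- struct{}{}        // acquire a slot
				defer func() { <-sem }() // release when done
			}
			fmt.Println("running", name) // stand-in for real task execution
		}(t)
	}
	wg.Wait()
}
```

An unset or invalid value leaves `sem` nil, and the nil guard then imposes no limit, which matches the documented default of unlimited concurrency.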
### Makefile (17 changes)

```diff
@@ -1,7 +1,7 @@
 # Claude Code Multi-Agent Workflow System Makefile
 # Quick deployment for BMAD and Requirements workflows

-.PHONY: help install deploy-bmad deploy-requirements deploy-essentials deploy-advanced deploy-all deploy-commands deploy-agents clean test
+.PHONY: help install deploy-bmad deploy-requirements deploy-essentials deploy-advanced deploy-all deploy-commands deploy-agents clean test changelog

 # Default target
 help:
@@ -22,6 +22,7 @@ help:
	@echo "  deploy-all        - Deploy everything (commands + agents)"
	@echo "  test-bmad         - Test BMAD workflow with sample"
	@echo "  test-requirements - Test Requirements workflow with sample"
+	@echo "  changelog         - Update CHANGELOG.md using git-cliff"
	@echo "  clean             - Clean generated artifacts"
	@echo "  help              - Show this help message"

@@ -145,3 +146,17 @@ all: deploy-all
 version:
	@echo "Claude Code Multi-Agent Workflow System v3.1"
	@echo "BMAD + Requirements-Driven Development"
+
+# Update CHANGELOG.md using git-cliff
+changelog:
+	@echo "📝 Updating CHANGELOG.md with git-cliff..."
+	@if ! command -v git-cliff > /dev/null 2>&1; then \
+		echo "❌ git-cliff not found. Installing via Homebrew..."; \
+		brew install git-cliff; \
+	fi
+	@git-cliff -o CHANGELOG.md
+	@echo "✅ CHANGELOG.md updated successfully!"
+	@echo ""
+	@echo "Preview the changes:"
+	@echo "  git diff CHANGELOG.md"
```
### README.md

```diff
@@ -1,3 +1,5 @@
+[中文](README_CN.md) [English](README.md)
+
 # Claude Code Multi-Agent Workflow System

 [](https://smithery.ai/skills?ns=cexll&utm_source=github&utm_medium=badge)
@@ -5,7 +7,7 @@

 [](https://www.gnu.org/licenses/agpl-3.0)
 [](https://claude.ai/code)
-[](https://github.com/cexll/myclaude)
+[](https://github.com/cexll/myclaude)

 > AI-powered development automation with multi-backend execution (Codex/Claude/Gemini)

@@ -318,13 +320,10 @@ python3 install.py --module dev --force
 ## Documentation

 ### Core Guides
 - **[Architecture Overview](docs/architecture.md)** - System architecture and component design
 - **[Codeagent-Wrapper Guide](docs/CODEAGENT-WRAPPER.md)** - Multi-backend execution wrapper
-- **[GitHub Workflow Guide](docs/GITHUB-WORKFLOW.md)** - Issue-to-PR automation
 - **[Hooks Documentation](docs/HOOKS.md)** - Custom hooks and automation

 ### Additional Resources
 - **[Enterprise Workflow Ideas](docs/enterprise-workflow-ideas.md)** - Advanced patterns and best practices
 - **[Installation Log](install.log)** - Installation history and troubleshooting

 ---
```
### README_CN.md

```diff
@@ -2,7 +2,7 @@

 [](https://www.gnu.org/licenses/agpl-3.0)
 [](https://claude.ai/code)
-[](https://github.com/cexll/myclaude)
+[](https://github.com/cexll/myclaude)

 > AI 驱动的开发自动化 - 多后端执行架构 (Codex/Claude/Gemini)
```
### cliff.toml (new file, 72 lines)

```toml
# git-cliff configuration file
# https://git-cliff.org/docs/configuration

[changelog]
# changelog header
header = """
# Changelog

All notable changes to this project will be documented in this file.
"""
# template for the changelog body
body = """
{% if version %}
## [{{ version | trim_start_matches(pat="v") }}] - {{ timestamp | date(format="%Y-%m-%d") }}
{% else %}
## Unreleased
{% endif %}
{% for group, commits in commits | group_by(attribute="group") %}
### {{ group }}

{% for commit in commits %}
- {{ commit.message | split(pat="\n") | first }}
{% endfor -%}
{% endfor -%}
"""
# remove the leading and trailing whitespace from the template
trim = true
# changelog footer
footer = """
<!-- generated by git-cliff -->
"""

[git]
# parse the commits based on https://www.conventionalcommits.org
conventional_commits = true
# filter out the commits that are not conventional
filter_unconventional = false
# process each line of a commit as an individual commit
split_commits = false
# regex for preprocessing the commit messages
commit_preprocessors = [
  { pattern = '\((\w+\s)?#([0-9]+)\)', replace = "([#${2}](https://github.com/cexll/myclaude/issues/${2}))" },
]
# regex for parsing and grouping commits
commit_parsers = [
  { message = "^feat", group = "🚀 Features" },
  { message = "^fix", group = "🐛 Bug Fixes" },
  { message = "^doc", group = "📚 Documentation" },
  { message = "^perf", group = "⚡ Performance" },
  { message = "^refactor", group = "🚜 Refactor" },
  { message = "^style", group = "🎨 Styling" },
  { message = "^test", group = "🧪 Testing" },
  { message = "^chore\\(release\\):", skip = true },
  { message = "^chore", group = "⚙️ Miscellaneous Tasks" },
  { body = ".*security", group = "🛡️ Security" },
  { message = "^revert", group = "◀️ Revert" },
  { message = ".*", group = "💼 Other" },
]
# protect breaking changes from being skipped due to matching a skipping commit_parser
protect_breaking_commits = false
# filter out the commits that are not matched by commit parsers
filter_commits = false
# glob pattern for matching git tags
tag_pattern = "v[0-9]*"
# regex for skipping tags
skip_tags = "v0.1.0-beta.1"
# regex for ignoring tags
ignore_tags = ""
# sort the tags topologically
topo_order = false
# sort the commits inside sections by oldest/newest order
sort_commits = "newest"
```
### codeagent-wrapper/backend.go

```diff
@@ -29,26 +29,24 @@ func (ClaudeBackend) BuildArgs(cfg *Config, targetArg string) []string {
	if cfg == nil {
		return nil
	}
-	args := []string{"-p"}
+	args := []string{"-p", "--dangerously-skip-permissions"}

-	// Only skip permissions when explicitly requested
-	if cfg.SkipPermissions {
-		args = append(args, "--dangerously-skip-permissions")
-	}
+	// if cfg.SkipPermissions {
+	// 	args = append(args, "--dangerously-skip-permissions")
+	// }

-	workdir := cfg.WorkDir
-	if workdir == "" {
-		workdir = defaultWorkdir
-	}
+	// Prevent infinite recursion: disable all setting sources (user, project, local)
+	// This ensures a clean execution environment without CLAUDE.md or skills that would trigger codeagent
+	args = append(args, "--setting-sources", "")

	if cfg.Mode == "resume" {
		if cfg.SessionID != "" {
			// Claude CLI uses -r <session_id> for resume.
			args = append(args, "-r", cfg.SessionID)
		}
-	} else {
-		args = append(args, "-C", workdir)
	}
+	// Note: claude CLI doesn't support -C flag; workdir set via cmd.Dir

	args = append(args, "--output-format", "stream-json", "--verbose", targetArg)

@@ -67,18 +65,12 @@ func (GeminiBackend) BuildArgs(cfg *Config, targetArg string) []string {
	}
	args := []string{"-o", "stream-json", "-y"}

-	workdir := cfg.WorkDir
-	if workdir == "" {
-		workdir = defaultWorkdir
-	}
-
	if cfg.Mode == "resume" {
		if cfg.SessionID != "" {
			args = append(args, "-r", cfg.SessionID)
		}
-	} else {
-		args = append(args, "-C", workdir)
	}
+	// Note: gemini CLI doesn't support -C flag; workdir set via cmd.Dir

	args = append(args, "-p", targetArg)
```
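Both new notes say the claude/gemini CLIs take their working directory from the process itself rather than a `-C` argument. A minimal `os/exec` sketch of that mechanism follows; the `claude` binary and its flags simply mirror the diff above and are illustrative, not a definitive invocation:

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Equivalent of the wrapper's cmd.SetDir(cfg.WorkDir): instead of passing
	// "-C /repo" (unsupported by the claude/gemini CLIs), start the child
	// process with its working directory already set.
	cmd := exec.Command("claude", "-p", "--output-format", "stream-json", "--verbose", "todo")
	cmd.Dir = "/repo" // what realCmd.SetDir assigns under the hood

	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("run failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}
```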
### codeagent-wrapper: backend argument tests

```diff
@@ -11,7 +11,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
	t.Run("new mode uses workdir without skip by default", func(t *testing.T) {
		cfg := &Config{Mode: "new", WorkDir: "/repo"}
		got := backend.BuildArgs(cfg, "todo")
-		want := []string{"-p", "-C", "/repo", "--output-format", "stream-json", "--verbose", "todo"}
+		want := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "--output-format", "stream-json", "--verbose", "todo"}
		if !reflect.DeepEqual(got, want) {
			t.Fatalf("got %v, want %v", got, want)
		}
@@ -20,7 +20,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
	t.Run("new mode opt-in skip permissions with default workdir", func(t *testing.T) {
		cfg := &Config{Mode: "new", SkipPermissions: true}
		got := backend.BuildArgs(cfg, "-")
-		want := []string{"-p", "--dangerously-skip-permissions", "-C", defaultWorkdir, "--output-format", "stream-json", "--verbose", "-"}
+		want := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "--output-format", "stream-json", "--verbose", "-"}
		if !reflect.DeepEqual(got, want) {
			t.Fatalf("got %v, want %v", got, want)
		}
@@ -29,7 +29,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
	t.Run("resume mode uses session id and omits workdir", func(t *testing.T) {
		cfg := &Config{Mode: "resume", SessionID: "sid-123", WorkDir: "/ignored"}
		got := backend.BuildArgs(cfg, "resume-task")
-		want := []string{"-p", "-r", "sid-123", "--output-format", "stream-json", "--verbose", "resume-task"}
+		want := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "-r", "sid-123", "--output-format", "stream-json", "--verbose", "resume-task"}
		if !reflect.DeepEqual(got, want) {
			t.Fatalf("got %v, want %v", got, want)
		}
@@ -38,7 +38,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
	t.Run("resume mode without session still returns base flags", func(t *testing.T) {
		cfg := &Config{Mode: "resume", WorkDir: "/ignored"}
		got := backend.BuildArgs(cfg, "follow-up")
-		want := []string{"-p", "--output-format", "stream-json", "--verbose", "follow-up"}
+		want := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "--output-format", "stream-json", "--verbose", "follow-up"}
		if !reflect.DeepEqual(got, want) {
			t.Fatalf("got %v, want %v", got, want)
		}
@@ -56,7 +56,7 @@ func TestClaudeBuildArgs_GeminiAndCodexModes(t *testing.T) {
	backend := GeminiBackend{}
	cfg := &Config{Mode: "new", WorkDir: "/workspace"}
	got := backend.BuildArgs(cfg, "task")
-	want := []string{"-o", "stream-json", "-y", "-C", "/workspace", "-p", "task"}
+	want := []string{"-o", "stream-json", "-y", "-p", "task"}
	if !reflect.DeepEqual(got, want) {
		t.Fatalf("got %v, want %v", got, want)
	}
```
### codeagent-wrapper: logger tests

```diff
@@ -76,8 +76,8 @@ func TestConcurrentStressLogger(t *testing.T) {
	t.Logf("Successfully wrote %d/%d logs (%.1f%%)",
		actualCount, totalExpected, float64(actualCount)/float64(totalExpected)*100)

-	// Verify log format
-	formatRE := regexp.MustCompile(`^\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}\] \[PID:\d+\] INFO: goroutine-`)
+	// Verify log format (plain text, no prefix)
+	formatRE := regexp.MustCompile(`^goroutine-\d+-msg-\d+$`)
	for i, line := range lines[:min(10, len(lines))] {
		if !formatRE.MatchString(line) {
			t.Errorf("line %d has invalid format: %s", i, line)
@@ -293,16 +293,13 @@ func TestLoggerOrderPreservation(t *testing.T) {
	for scanner.Scan() {
		line := scanner.Text()
		var gid, seq int
-		parts := strings.SplitN(line, " INFO: ", 2)
-		if len(parts) != 2 {
-			t.Errorf("invalid log format: %s", line)
+		// Parse format: G0-SEQ0001 (without INFO: prefix)
+		_, err := fmt.Sscanf(line, "G%d-SEQ%04d", &gid, &seq)
+		if err != nil {
+			t.Errorf("invalid log format: %s (error: %v)", line, err)
			continue
		}
-		if _, err := fmt.Sscanf(parts[1], "G%d-SEQ%d", &gid, &seq); err == nil {
-			sequences[gid] = append(sequences[gid], seq)
-		} else {
-			t.Errorf("failed to parse sequence from line: %s", line)
-		}
+		sequences[gid] = append(sequences[gid], seq)
	}

	// Verify per-goroutine internal ordering
```
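The rewritten assertion parses the new plain-text log lines directly with `fmt.Sscanf`. For reference, here is a tiny standalone check of that exact format string; the sample line is made up for illustration. In fmt's scanning syntax the `04` in `%04d` is read as a field width of four, so at most four digits are consumed for the sequence number:

```go
package main

import "fmt"

func main() {
	// New log lines carry no timestamp/PID/level prefix, so the test can
	// parse them directly: "G3-SEQ0042" -> goroutine 3, sequence 42.
	var gid, seq int
	if _, err := fmt.Sscanf("G3-SEQ0042", "G%d-SEQ%04d", &gid, &seq); err != nil {
		fmt.Println("parse error:", err)
		return
	}
	fmt.Println(gid, seq) // prints: 3 42
}
```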
### codeagent-wrapper: TaskResult and backend registry

```diff
@@ -49,6 +49,7 @@ type TaskResult struct {
	SessionID string `json:"session_id"`
	Error     string `json:"error"`
	LogPath   string `json:"log_path"`
+	sharedLog bool
 }

 var backendRegistry = map[string]Backend{
```
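The `backendRegistry` map above is the seam for the pluggable backend architecture described in the changelog. Below is a reduced stand-alone sketch of the same pattern; the `Backend` interface here is trimmed to just `BuildArgs`, and the real wrapper's `Config` and interface carry more fields, so treat the names as stand-ins:

```go
package main

import "fmt"

// Config is a trimmed stand-in for the wrapper's Config.
type Config struct {
	WorkDir string
}

// Backend is the pluggable seam: each CLI knows how to build its own argv.
type Backend interface {
	BuildArgs(cfg *Config, target string) []string
}

type codexBackend struct{}
type claudeBackend struct{}

func (codexBackend) BuildArgs(cfg *Config, target string) []string {
	return []string{"-C", cfg.WorkDir, target} // codex takes workdir via -C
}

func (claudeBackend) BuildArgs(cfg *Config, target string) []string {
	return []string{"-p", target} // claude gets workdir via cmd.Dir instead
}

var backendRegistry = map[string]Backend{
	"codex":  codexBackend{},
	"claude": claudeBackend{},
}

func main() {
	cfg := &Config{WorkDir: "/repo"}
	// In the real wrapper, selection is driven by the --backend flag.
	for _, name := range []string{"codex", "claude"} {
		b := backendRegistry[name]
		fmt.Println(name, b.BuildArgs(cfg, "task"))
	}
}
```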
### codeagent-wrapper: executor (parallel task execution)

```diff
@@ -23,6 +23,7 @@ type commandRunner interface {
	StdoutPipe() (io.ReadCloser, error)
	StdinPipe() (io.WriteCloser, error)
	SetStderr(io.Writer)
+	SetDir(string)
	Process() processHandle
 }

@@ -72,6 +73,12 @@ func (r *realCmd) SetStderr(w io.Writer) {
	}
 }

+func (r *realCmd) SetDir(dir string) {
+	if r.cmd != nil {
+		r.cmd.Dir = dir
+	}
+}
+
 func (r *realCmd) Process() processHandle {
	if r == nil || r.cmd == nil || r.cmd.Process == nil {
		return nil

@@ -115,7 +122,57 @@ type parseResult struct {
	threadID string
 }

-var runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
+type taskLoggerContextKey struct{}
+
+func withTaskLogger(ctx context.Context, logger *Logger) context.Context {
+	if ctx == nil || logger == nil {
+		return ctx
+	}
+	return context.WithValue(ctx, taskLoggerContextKey{}, logger)
+}
+
+func taskLoggerFromContext(ctx context.Context) *Logger {
+	if ctx == nil {
+		return nil
+	}
+	logger, _ := ctx.Value(taskLoggerContextKey{}).(*Logger)
+	return logger
+}
+
+type taskLoggerHandle struct {
+	logger  *Logger
+	path    string
+	shared  bool
+	closeFn func()
+}
+
+func newTaskLoggerHandle(taskID string) taskLoggerHandle {
+	taskLogger, err := NewLoggerWithSuffix(taskID)
+	if err == nil {
+		return taskLoggerHandle{
+			logger:  taskLogger,
+			path:    taskLogger.Path(),
+			closeFn: func() { _ = taskLogger.Close() },
+		}
+	}
+
+	msg := fmt.Sprintf("Failed to create task logger for %s: %v, using main logger", taskID, err)
+	mainLogger := activeLogger()
+	if mainLogger != nil {
+		logWarn(msg)
+		return taskLoggerHandle{
+			logger: mainLogger,
+			path:   mainLogger.Path(),
+			shared: true,
+		}
+	}
+
+	fmt.Fprintln(os.Stderr, msg)
+	return taskLoggerHandle{}
+}
+
+// defaultRunCodexTaskFn is the default implementation of runCodexTaskFn (exposed for test reset)
+func defaultRunCodexTaskFn(task TaskSpec, timeout int) TaskResult {
	if task.WorkDir == "" {
		task.WorkDir = defaultWorkdir
	}

@@ -144,6 +201,8 @@ var runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
	return runCodexTaskWithContext(parentCtx, task, backend, nil, false, true, timeout)
 }

+var runCodexTaskFn = defaultRunCodexTaskFn
+
 func topologicalSort(tasks []TaskSpec) ([][]TaskSpec, error) {
	idToTask := make(map[string]TaskSpec, len(tasks))
	indegree := make(map[string]int, len(tasks))

@@ -228,13 +287,8 @@ func executeConcurrentWithContext(parentCtx context.Context, layers [][]TaskSpec
	var startPrintMu sync.Mutex
	bannerPrinted := false

-	printTaskStart := func(taskID string) {
-		logger := activeLogger()
-		if logger == nil {
-			return
-		}
-		path := logger.Path()
-		if path == "" {
+	printTaskStart := func(taskID, logPath string, shared bool) {
+		if logPath == "" {
			return
		}
		startPrintMu.Lock()
@@ -242,7 +296,11 @@
			fmt.Fprintln(os.Stderr, "=== Starting Parallel Execution ===")
			bannerPrinted = true
		}
-		fmt.Fprintf(os.Stderr, "Task %s: Log: %s\n", taskID, path)
+		label := "Log"
+		if shared {
+			label = "Log (shared)"
+		}
+		fmt.Fprintf(os.Stderr, "Task %s: %s: %s\n", taskID, label, logPath)
		startPrintMu.Unlock()
	}

@@ -312,9 +370,11 @@
			wg.Add(1)
			go func(ts TaskSpec) {
				defer wg.Done()
+				var taskLogPath string
+				handle := taskLoggerHandle{}
				defer func() {
					if r := recover(); r != nil {
-						resultsCh <- TaskResult{TaskID: ts.ID, ExitCode: 1, Error: fmt.Sprintf("panic: %v", r)}
+						resultsCh <- TaskResult{TaskID: ts.ID, ExitCode: 1, Error: fmt.Sprintf("panic: %v", r), LogPath: taskLogPath, sharedLog: handle.shared}
					}
				}()

@@ -331,9 +391,31 @@
					logConcurrencyState("done", ts.ID, int(after), workerLimit)
				}()

-				ts.Context = ctx
-				printTaskStart(ts.ID)
-				resultsCh <- runCodexTaskFn(ts, timeout)
+				handle = newTaskLoggerHandle(ts.ID)
+				taskLogPath = handle.path
+				if handle.closeFn != nil {
+					defer handle.closeFn()
+				}
+
+				taskCtx := ctx
+				if handle.logger != nil {
+					taskCtx = withTaskLogger(ctx, handle.logger)
+				}
+				ts.Context = taskCtx
+
+				printTaskStart(ts.ID, taskLogPath, handle.shared)
+
+				res := runCodexTaskFn(ts, timeout)
+				if taskLogPath != "" {
+					if res.LogPath == "" || (handle.shared && handle.logger != nil && res.LogPath == handle.logger.Path()) {
+						res.LogPath = taskLogPath
+					}
+				}
+				// Mark as shared only when the final LogPath really is the shared logger's path
+				if handle.shared && handle.logger != nil && res.LogPath == handle.logger.Path() {
+					res.sharedLog = true
+				}
+				resultsCh <- res
			}(task)
		}

@@ -409,7 +491,11 @@
			sb.WriteString(fmt.Sprintf("Session: %s\n", res.SessionID))
		}
		if res.LogPath != "" {
-			sb.WriteString(fmt.Sprintf("Log: %s\n", res.LogPath))
+			if res.sharedLog {
+				sb.WriteString(fmt.Sprintf("Log: %s (shared)\n", res.LogPath))
+			} else {
+				sb.WriteString(fmt.Sprintf("Log: %s\n", res.LogPath))
+			}
		}
		if res.Message != "" {
			sb.WriteString(fmt.Sprintf("\n%s\n", res.Message))

@@ -450,15 +536,16 @@ func runCodexProcess(parentCtx context.Context, codexArgs []string, taskText str
 }

 func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backend Backend, customArgs []string, useCustomArgs bool, silent bool, timeoutSec int) TaskResult {
-	result := TaskResult{TaskID: taskSpec.ID}
-	setLogPath := func() {
-		if result.LogPath != "" {
-			return
-		}
-		if logger := activeLogger(); logger != nil {
-			result.LogPath = logger.Path()
-		}
+	if parentCtx == nil {
+		parentCtx = taskSpec.Context
	}
+	if parentCtx == nil {
+		parentCtx = context.Background()
+	}
+
+	result := TaskResult{TaskID: taskSpec.ID}
+	injectedLogger := taskLoggerFromContext(parentCtx)
+	logger := injectedLogger

	cfg := &Config{
		Mode: taskSpec.Mode,

@@ -514,17 +601,17 @@
	if silent {
		// Silent mode: only persist to file when available; avoid stderr noise.
		logInfoFn = func(msg string) {
-			if logger := activeLogger(); logger != nil {
+			if logger != nil {
				logger.Info(prefixMsg(msg))
			}
		}
		logWarnFn = func(msg string) {
-			if logger := activeLogger(); logger != nil {
+			if logger != nil {
				logger.Warn(prefixMsg(msg))
			}
		}
		logErrorFn = func(msg string) {
-			if logger := activeLogger(); logger != nil {
+			if logger != nil {
				logger.Error(prefixMsg(msg))
			}
		}

@@ -540,10 +627,11 @@
	var stderrLogger *logWriter

	var tempLogger *Logger
-	if silent && activeLogger() == nil {
+	if logger == nil && silent && activeLogger() == nil {
		if l, err := NewLogger(); err == nil {
			setLogger(l)
			tempLogger = l
+			logger = l
		}
	}
	defer func() {
@@ -551,21 +639,29 @@
			_ = closeLogger()
		}
	}()
-	defer setLogPath()
-	if logger := activeLogger(); logger != nil {
-		result.LogPath = logger.Path()
-	}
+	defer func() {
+		if result.LogPath != "" || logger == nil {
+			return
+		}
+		result.LogPath = logger.Path()
+	}()
+	if logger == nil {
+		logger = activeLogger()
+	}
+	if logger != nil {
+		result.LogPath = logger.Path()
+	}

	if !silent {
-		stdoutLogger = newLogWriter("CODEX_STDOUT: ", codexLogLineLimit)
-		stderrLogger = newLogWriter("CODEX_STDERR: ", codexLogLineLimit)
+		// Note: Empty prefix ensures backend output is logged as-is without any wrapper format.
+		// This preserves the original stdout/stderr content from codex/claude/gemini backends.
+		// Trade-off: Reduces distinguishability between stdout/stderr in logs, but maintains
+		// output fidelity which is critical for debugging backend-specific issues.
+		stdoutLogger = newLogWriter("", codexLogLineLimit)
+		stderrLogger = newLogWriter("", codexLogLineLimit)
	}

	ctx := parentCtx
	if ctx == nil {
		ctx = context.Background()
	}

	ctx, cancel := context.WithTimeout(ctx, time.Duration(timeoutSec)*time.Second)
	defer cancel()
	ctx, stop := signal.NotifyContext(ctx, syscall.SIGINT, syscall.SIGTERM)

@@ -577,6 +673,12 @@

	cmd := newCommandRunner(ctx, commandName, codexArgs...)

+	// For backends that don't support -C flag (claude, gemini), set working directory via cmd.Dir
+	// Codex passes workdir via -C flag, so we skip setting Dir for it to avoid conflicts
+	if cfg.Mode != "resume" && commandName != "codex" && cfg.WorkDir != "" {
+		cmd.SetDir(cfg.WorkDir)
+	}
+
	stderrWriters := []io.Writer{stderrBuf}
	if stderrLogger != nil {
		stderrWriters = append(stderrWriters, stderrLogger)

@@ -646,7 +748,7 @@
	}

	logInfoFn(fmt.Sprintf("Starting %s with PID: %d", commandName, cmd.Process().Pid()))
-	if logger := activeLogger(); logger != nil {
+	if logger != nil {
		logInfoFn(fmt.Sprintf("Log capturing to: %s", logger.Path()))
	}

@@ -743,8 +845,8 @@
	result.ExitCode = 0
	result.Message = message
	result.SessionID = threadID
-	if logger := activeLogger(); logger != nil {
-		result.LogPath = logger.Path()
+	if result.LogPath == "" && injectedLogger != nil {
+		result.LogPath = injectedLogger.Path()
	}

	return result
```
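The `withTaskLogger`/`taskLoggerFromContext` pair above is the standard context-value pattern with an unexported struct key, which is what keeps each parallel task's log lines out of the shared main log. A self-contained sketch of just that mechanism, using a stand-in `Logger` type that keeps only the `Path()` shape (the wrapper's real logger does buffered file I/O):

```go
package main

import (
	"context"
	"fmt"
)

// Logger is a stand-in for the wrapper's *Logger; only the shape matters here.
type Logger struct{ path string }

func (l *Logger) Path() string { return l.path }

// Unexported struct key, exactly as in the diff: collision-proof across packages.
type taskLoggerContextKey struct{}

func withTaskLogger(ctx context.Context, l *Logger) context.Context {
	if ctx == nil || l == nil {
		return ctx
	}
	return context.WithValue(ctx, taskLoggerContextKey{}, l)
}

func taskLoggerFromContext(ctx context.Context) *Logger {
	if ctx == nil {
		return nil
	}
	l, _ := ctx.Value(taskLoggerContextKey{}).(*Logger)
	return l
}

func main() {
	// Each task gets its own logger via its context, so concurrent tasks
	// never interleave their output in one shared file.
	for _, id := range []string{"task-a", "task-b"} {
		ctx := withTaskLogger(context.Background(), &Logger{path: "/tmp/" + id + ".log"})
		fmt.Println(id, "logs to", taskLoggerFromContext(ctx).Path())
	}
}
```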
@@ -1,12 +1,15 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
@@ -15,6 +18,12 @@ import (
|
||||
"time"
|
||||
)
|
||||
|
||||
var executorTestTaskCounter atomic.Int64
|
||||
|
||||
func nextExecutorTestTaskID(prefix string) string {
|
||||
return fmt.Sprintf("%s-%d", prefix, executorTestTaskCounter.Add(1))
|
||||
}
|
||||
|
||||
type execFakeProcess struct {
|
||||
pid int
|
||||
signals []os.Signal
|
||||
@@ -76,6 +85,7 @@ type execFakeRunner struct {
|
||||
stdout io.ReadCloser
|
||||
process processHandle
|
||||
stdin io.WriteCloser
|
||||
dir string
|
||||
waitErr error
|
||||
waitDelay time.Duration
|
||||
startErr error
|
||||
@@ -117,6 +127,7 @@ func (f *execFakeRunner) StdinPipe() (io.WriteCloser, error) {
|
||||
return &writeCloserStub{}, nil
|
||||
}
|
||||
func (f *execFakeRunner) SetStderr(io.Writer) {}
|
||||
func (f *execFakeRunner) SetDir(dir string) { f.dir = dir }
|
||||
func (f *execFakeRunner) Process() processHandle {
|
||||
if f.process != nil {
|
||||
return f.process
|
||||
@@ -148,6 +159,10 @@ func TestExecutorHelperCoverage(t *testing.T) {
|
||||
}
|
||||
rcWithCmd := &realCmd{cmd: &exec.Cmd{}}
|
||||
rcWithCmd.SetStderr(io.Discard)
|
||||
rcWithCmd.SetDir("/tmp")
|
||||
if rcWithCmd.cmd.Dir != "/tmp" {
|
||||
t.Fatalf("expected SetDir to set cmd.Dir, got %q", rcWithCmd.cmd.Dir)
|
||||
}
|
||||
echoCmd := exec.Command("echo", "ok")
|
||||
rcProc := &realCmd{cmd: echoCmd}
|
||||
stdoutPipe, err := rcProc.StdoutPipe()
|
||||
@@ -420,6 +435,100 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
|
||||
_ = closeLogger()
|
||||
})
|
||||
|
||||
t.Run("injectedLogger", func(t *testing.T) {
|
||||
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
|
||||
return &execFakeRunner{
|
||||
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"injected"}}`),
|
||||
process: &execFakeProcess{pid: 12},
|
||||
}
|
||||
}
|
||||
_ = closeLogger()
|
||||
|
||||
injected, err := NewLoggerWithSuffix("executor-injected")
|
||||
if err != nil {
|
||||
t.Fatalf("NewLoggerWithSuffix() error = %v", err)
|
||||
}
|
||||
defer func() {
|
||||
_ = injected.Close()
|
||||
_ = os.Remove(injected.Path())
|
||||
}()
|
||||
|
||||
ctx := withTaskLogger(context.Background(), injected)
|
||||
res := runCodexTaskWithContext(ctx, TaskSpec{ID: "task-injected", Task: "payload", WorkDir: "."}, nil, nil, false, true, 1)
|
||||
if res.ExitCode != 0 || res.LogPath != injected.Path() {
|
||||
t.Fatalf("expected injected logger path, got %+v", res)
|
||||
}
|
||||
if activeLogger() != nil {
|
||||
t.Fatalf("expected no global logger to be created when injected")
|
||||
}
|
||||
|
||||
injected.Flush()
|
||||
data, err := os.ReadFile(injected.Path())
|
||||
if err != nil {
|
||||
t.Fatalf("failed to read injected log file: %v", err)
|
||||
}
|
||||
if !strings.Contains(string(data), "task-injected") {
|
||||
t.Fatalf("injected log missing task prefix, content: %s", string(data))
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("contextLoggerWithoutParent", func(t *testing.T) {
|
||||
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
|
||||
return &execFakeRunner{
|
||||
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"ctx"}}`),
|
||||
process: &execFakeProcess{pid: 14},
|
||||
}
|
||||
}
|
||||
_ = closeLogger()
|
||||
|
||||
taskLogger, err := NewLoggerWithSuffix("executor-taskctx")
|
||||
if err != nil {
|
||||
t.Fatalf("NewLoggerWithSuffix() error = %v", err)
|
||||
}
|
||||
t.Cleanup(func() {
|
||||
_ = taskLogger.Close()
|
||||
_ = os.Remove(taskLogger.Path())
|
||||
})
|
||||
|
||||
ctx := withTaskLogger(context.Background(), taskLogger)
|
||||
res := runCodexTaskWithContext(nil, TaskSpec{ID: "task-context", Task: "payload", WorkDir: ".", Context: ctx}, nil, nil, false, true, 1)
|
||||
if res.ExitCode != 0 || res.LogPath != taskLogger.Path() {
|
||||
t.Fatalf("expected task logger to be reused from spec context, got %+v", res)
|
||||
}
|
||||
if activeLogger() != nil {
|
||||
t.Fatalf("expected no global logger to be created when task context provides one")
|
||||
}
|
||||
|
||||
taskLogger.Flush()
|
||||
data, err := os.ReadFile(taskLogger.Path())
|
||||
if err != nil {
|
||||
t.Fatalf("failed to read task log: %v", err)
|
||||
}
|
||||
if !strings.Contains(string(data), "task-context") {
|
||||
t.Fatalf("task log missing task id, content: %s", string(data))
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("backendSetsDirAndNilContext", func(t *testing.T) {
|
||||
var rc *execFakeRunner
|
||||
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
|
||||
rc = &execFakeRunner{
|
||||
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"backend"}}`),
|
||||
process: &execFakeProcess{pid: 13},
|
||||
}
|
||||
return rc
|
||||
}
|
||||
|
||||
_ = closeLogger()
|
||||
res := runCodexTaskWithContext(nil, TaskSpec{ID: "task-backend", Task: "payload", WorkDir: "/tmp"}, ClaudeBackend{}, nil, false, false, 1)
|
||||
if res.ExitCode != 0 || res.Message != "backend" {
|
||||
t.Fatalf("unexpected result: %+v", res)
|
||||
}
|
||||
if rc == nil || rc.dir != "/tmp" {
|
||||
t.Fatalf("expected backend to set cmd.Dir, got runner=%v dir=%q", rc, rc.dir)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("missingMessage", func(t *testing.T) {
|
||||
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
|
||||
return &execFakeRunner{
|
||||
@@ -434,6 +543,613 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
|
||||
})
|
||||
}
|
||||
|
||||
func TestExecutorParallelLogIsolation(t *testing.T) {
|
||||
mainLogger, err := NewLoggerWithSuffix("executor-main")
|
||||
if err != nil {
|
||||
t.Fatalf("NewLoggerWithSuffix() error = %v", err)
|
||||
}
|
||||
setLogger(mainLogger)
|
||||
t.Cleanup(func() {
|
||||
_ = closeLogger()
|
||||
_ = os.Remove(mainLogger.Path())
|
||||
})
|
||||
|
||||
taskA := nextExecutorTestTaskID("iso-a")
|
||||
taskB := nextExecutorTestTaskID("iso-b")
|
||||
markerA := "ISOLATION_MARKER:" + taskA
|
||||
markerB := "ISOLATION_MARKER:" + taskB
|
||||
|
||||
origRun := runCodexTaskFn
|
||||
runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
|
||||
logger := taskLoggerFromContext(task.Context)
|
||||
if logger == nil {
|
||||
return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "missing task logger"}
|
||||
}
|
||||
switch task.ID {
|
||||
case taskA:
|
||||
logger.Info(markerA)
|
||||
case taskB:
|
||||
logger.Info(markerB)
|
||||
default:
|
||||
logger.Info("unexpected task: " + task.ID)
|
||||
}
|
||||
return TaskResult{TaskID: task.ID, ExitCode: 0}
|
||||
}
|
||||
t.Cleanup(func() { runCodexTaskFn = origRun })
|
||||
|
||||
stderrR, stderrW, err := os.Pipe()
|
||||
if err != nil {
|
||||
t.Fatalf("os.Pipe() error = %v", err)
|
||||
}
|
||||
oldStderr := os.Stderr
|
||||
os.Stderr = stderrW
|
||||
defer func() { os.Stderr = oldStderr }()
|
||||
|
||||
results := executeConcurrentWithContext(nil, [][]TaskSpec{{{ID: taskA}, {ID: taskB}}}, 1, -1)
|
||||
|
||||
_ = stderrW.Close()
|
||||
os.Stderr = oldStderr
|
||||
stderrData, _ := io.ReadAll(stderrR)
|
||||
_ = stderrR.Close()
|
||||
stderrOut := string(stderrData)
|
||||
|
||||
if len(results) != 2 {
|
||||
t.Fatalf("expected 2 results, got %d", len(results))
|
||||
}
|
||||
|
||||
paths := map[string]string{}
|
||||
for _, res := range results {
|
||||
if res.ExitCode != 0 {
|
||||
t.Fatalf("unexpected failure: %+v", res)
|
||||
}
|
||||
if res.LogPath == "" {
|
||||
t.Fatalf("missing LogPath for task %q", res.TaskID)
|
||||
}
|
||||
paths[res.TaskID] = res.LogPath
|
||||
}
|
||||
if paths[taskA] == paths[taskB] {
|
||||
t.Fatalf("expected distinct task log paths, got %q", paths[taskA])
|
||||
}
|
||||
|
||||
if strings.Contains(stderrOut, mainLogger.Path()) {
|
||||
t.Fatalf("stderr should not print main log path: %s", stderrOut)
|
||||
}
|
||||
if !strings.Contains(stderrOut, paths[taskA]) || !strings.Contains(stderrOut, paths[taskB]) {
|
||||
t.Fatalf("stderr should include task log paths, got: %s", stderrOut)
|
||||
}
|
||||
|
||||
mainLogger.Flush()
|
||||
mainData, err := os.ReadFile(mainLogger.Path())
|
||||
if err != nil {
|
||||
t.Fatalf("failed to read main log: %v", err)
|
||||
}
|
||||
if strings.Contains(string(mainData), markerA) || strings.Contains(string(mainData), markerB) {
|
||||
t.Fatalf("main log should not contain task markers, content: %s", string(mainData))
|
||||
}
|
||||
|
||||
taskAData, err := os.ReadFile(paths[taskA])
|
||||
if err != nil {
|
||||
t.Fatalf("failed to read task A log: %v", err)
|
||||
}
|
||||
taskBData, err := os.ReadFile(paths[taskB])
|
||||
if err != nil {
|
||||
t.Fatalf("failed to read task B log: %v", err)
|
||||
}
|
||||
if !strings.Contains(string(taskAData), markerA) || strings.Contains(string(taskAData), markerB) {
|
||||
t.Fatalf("task A log isolation failed, content: %s", string(taskAData))
|
||||
}
|
||||
if !strings.Contains(string(taskBData), markerB) || strings.Contains(string(taskBData), markerA) {
|
||||
t.Fatalf("task B log isolation failed, content: %s", string(taskBData))
|
||||
}
|
||||
|
||||
_ = os.Remove(paths[taskA])
|
||||
_ = os.Remove(paths[taskB])
|
||||
}
|

func TestConcurrentExecutorParallelLogIsolationAndClosure(t *testing.T) {
    tempDir := t.TempDir()
    t.Setenv("TMPDIR", tempDir)

    oldArgs := os.Args
    os.Args = []string{defaultWrapperName}
    t.Cleanup(func() { os.Args = oldArgs })

    mainLogger, err := NewLoggerWithSuffix("concurrent-main")
    if err != nil {
        t.Fatalf("NewLoggerWithSuffix() error = %v", err)
    }
    setLogger(mainLogger)
    t.Cleanup(func() {
        mainLogger.Flush()
        _ = closeLogger()
        _ = os.Remove(mainLogger.Path())
    })

    const taskCount = 16
    const writersPerTask = 4
    const logsPerWriter = 50
    const expectedTaskLines = writersPerTask * logsPerWriter

    taskIDs := make([]string, 0, taskCount)
    tasks := make([]TaskSpec, 0, taskCount)
    for i := 0; i < taskCount; i++ {
        id := nextExecutorTestTaskID("iso")
        taskIDs = append(taskIDs, id)
        tasks = append(tasks, TaskSpec{ID: id})
    }

    type taskLoggerInfo struct {
        taskID string
        logger *Logger
    }
    loggerCh := make(chan taskLoggerInfo, taskCount)
    readyCh := make(chan struct{}, taskCount)
    startCh := make(chan struct{})

    go func() {
        for i := 0; i < taskCount; i++ {
            <-readyCh
        }
        close(startCh)
    }()

    origRun := runCodexTaskFn
    runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
        readyCh <- struct{}{}

        logger := taskLoggerFromContext(task.Context)
        loggerCh <- taskLoggerInfo{taskID: task.ID, logger: logger}
        if logger == nil {
            return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "missing task logger"}
        }

        <-startCh

        var wg sync.WaitGroup
        wg.Add(writersPerTask)
        for g := 0; g < writersPerTask; g++ {
            go func(g int) {
                defer wg.Done()
                for i := 0; i < logsPerWriter; i++ {
                    logger.Info(fmt.Sprintf("TASK=%s g=%d i=%d", task.ID, g, i))
                }
            }(g)
        }
        wg.Wait()

        return TaskResult{TaskID: task.ID, ExitCode: 0}
    }
    t.Cleanup(func() { runCodexTaskFn = origRun })

    results := executeConcurrentWithContext(context.Background(), [][]TaskSpec{tasks}, 1, 0)

    if len(results) != taskCount {
        t.Fatalf("expected %d results, got %d", taskCount, len(results))
    }

    taskLogPaths := make(map[string]string, taskCount)
    seenPaths := make(map[string]struct{}, taskCount)
    for _, res := range results {
        if res.ExitCode != 0 || res.Error != "" {
            t.Fatalf("unexpected task failure: %+v", res)
        }
        if res.LogPath == "" {
            t.Fatalf("missing LogPath for task %q", res.TaskID)
        }
        if _, ok := taskLogPaths[res.TaskID]; ok {
            t.Fatalf("duplicate TaskID in results: %q", res.TaskID)
        }
        taskLogPaths[res.TaskID] = res.LogPath
        if _, ok := seenPaths[res.LogPath]; ok {
            t.Fatalf("expected unique log path per task; duplicate path %q", res.LogPath)
        }
        seenPaths[res.LogPath] = struct{}{}
    }
    if len(taskLogPaths) != taskCount {
        t.Fatalf("expected %d unique task IDs, got %d", taskCount, len(taskLogPaths))
    }

    prefix := primaryLogPrefix()
    pid := os.Getpid()
    for _, id := range taskIDs {
        path := taskLogPaths[id]
        if path == "" {
            t.Fatalf("missing log path for task %q", id)
        }
        if _, err := os.Stat(path); err != nil {
            t.Fatalf("task log file not created for %q: %v", id, err)
        }
        wantBase := fmt.Sprintf("%s-%d-%s.log", prefix, pid, id)
        if got := filepath.Base(path); got != wantBase {
            t.Fatalf("unexpected log filename for %q: got %q, want %q", id, got, wantBase)
        }
    }

    loggers := make(map[string]*Logger, taskCount)
    for i := 0; i < taskCount; i++ {
        info := <-loggerCh
        if info.taskID == "" {
            t.Fatalf("missing taskID in logger info")
        }
        if info.logger == nil {
            t.Fatalf("missing logger in context for task %q", info.taskID)
        }
        if prev, ok := loggers[info.taskID]; ok && prev != info.logger {
            t.Fatalf("task %q received multiple logger instances", info.taskID)
        }
        loggers[info.taskID] = info.logger
    }
    if len(loggers) != taskCount {
        t.Fatalf("expected %d task loggers, got %d", taskCount, len(loggers))
    }

    for taskID, logger := range loggers {
        if !logger.closed.Load() {
            t.Fatalf("expected task logger to be closed for %q", taskID)
        }
        if logger.file == nil {
            t.Fatalf("expected task logger file to be non-nil for %q", taskID)
        }
        if _, err := logger.file.Write([]byte("x")); err == nil {
            t.Fatalf("expected task logger file to be closed for %q", taskID)
        }
    }

    mainLogger.Flush()
    mainData, err := os.ReadFile(mainLogger.Path())
    if err != nil {
        t.Fatalf("failed to read main log: %v", err)
    }
    mainText := string(mainData)
    if !strings.Contains(mainText, "parallel: worker_limit=") {
        t.Fatalf("expected main log to include concurrency planning, content: %s", mainText)
    }
    if strings.Contains(mainText, "TASK=") {
        t.Fatalf("main log should not contain task output, content: %s", mainText)
    }

    for taskID, path := range taskLogPaths {
        f, err := os.Open(path)
        if err != nil {
            t.Fatalf("failed to open task log for %q: %v", taskID, err)
        }

        scanner := bufio.NewScanner(f)
        lines := 0
        for scanner.Scan() {
            line := scanner.Text()
            if strings.Contains(line, "parallel:") {
                t.Fatalf("task log should not contain main log entries for %q: %s", taskID, line)
            }
            gotID, ok := parseTaskIDFromLogLine(line)
            if !ok {
                t.Fatalf("task log entry missing task marker for %q: %s", taskID, line)
            }
            if gotID != taskID {
                t.Fatalf("task log isolation failed: file=%q got TASK=%q want TASK=%q", path, gotID, taskID)
            }
            lines++
        }
        if err := scanner.Err(); err != nil {
            _ = f.Close()
            t.Fatalf("scanner error for %q: %v", taskID, err)
        }
        if err := f.Close(); err != nil {
            t.Fatalf("failed to close task log for %q: %v", taskID, err)
        }
        if lines != expectedTaskLines {
            t.Fatalf("unexpected task log line count for %q: got %d, want %d", taskID, lines, expectedTaskLines)
        }
    }

    for _, path := range taskLogPaths {
        _ = os.Remove(path)
    }
}
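
The stub in the test above uses a two-channel start barrier so that all 16 task goroutines begin writing at the same moment: every task reports in on readyCh, a coordinator closes startCh once all reports have arrived, and each task blocks on startCh until then. A minimal sketch of the pattern in isolation (illustrative only; n and the worker body are placeholders, not code from this diff):

    // Start barrier: release n workers simultaneously.
    readyCh := make(chan struct{}, n)
    startCh := make(chan struct{})
    go func() {
        for i := 0; i < n; i++ {
            <-readyCh // wait for every worker to check in
        }
        close(startCh) // closing a channel releases all waiters at once
    }()
    // In each worker goroutine:
    //   readyCh <- struct{}{} // announce readiness
    //   <-startCh             // block until all n workers are ready
    //   ...do the contended work...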

func parseTaskIDFromLogLine(line string) (string, bool) {
    const marker = "TASK="
    idx := strings.Index(line, marker)
    if idx == -1 {
        return "", false
    }
    rest := line[idx+len(marker):]
    end := strings.IndexByte(rest, ' ')
    if end == -1 {
        return rest, rest != ""
    }
    return rest[:end], rest[:end] != ""
}

func TestExecutorTaskLoggerContext(t *testing.T) {
    if taskLoggerFromContext(nil) != nil {
        t.Fatalf("expected nil logger from nil context")
    }
    if taskLoggerFromContext(context.Background()) != nil {
        t.Fatalf("expected nil logger when context has no logger")
    }

    logger, err := NewLoggerWithSuffix("executor-taskctx")
    if err != nil {
        t.Fatalf("NewLoggerWithSuffix() error = %v", err)
    }
    defer func() {
        _ = logger.Close()
        _ = os.Remove(logger.Path())
    }()

    ctx := withTaskLogger(context.Background(), logger)
    if got := taskLoggerFromContext(ctx); got != logger {
        t.Fatalf("expected logger roundtrip, got %v", got)
    }

    if taskLoggerFromContext(withTaskLogger(context.Background(), nil)) != nil {
        t.Fatalf("expected nil logger when injected logger is nil")
    }
}
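
withTaskLogger and taskLoggerFromContext themselves are not shown in this diff. The behavior the test pins down (nil context, absent key, and an injected nil logger all yield nil) matches the standard unexported-context-key idiom; a hypothetical implementation consistent with the test, not necessarily the repository's actual code:

    type taskLoggerKey struct{}

    // withTaskLogger returns a child context carrying the task-scoped logger.
    func withTaskLogger(ctx context.Context, l *Logger) context.Context {
        return context.WithValue(ctx, taskLoggerKey{}, l)
    }

    // taskLoggerFromContext extracts the logger, or nil when absent or mistyped.
    func taskLoggerFromContext(ctx context.Context) *Logger {
        if ctx == nil {
            return nil
        }
        l, _ := ctx.Value(taskLoggerKey{}).(*Logger)
        return l
    }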

func TestExecutorExecuteConcurrentWithContextBranches(t *testing.T) {
    devNull, err := os.OpenFile(os.DevNull, os.O_WRONLY, 0)
    if err != nil {
        t.Fatalf("failed to open %s: %v", os.DevNull, err)
    }
    oldStderr := os.Stderr
    os.Stderr = devNull
    t.Cleanup(func() {
        os.Stderr = oldStderr
        _ = devNull.Close()
    })

    t.Run("skipOnFailedDependencies", func(t *testing.T) {
        root := nextExecutorTestTaskID("root")
        child := nextExecutorTestTaskID("child")

        orig := runCodexTaskFn
        runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
            if task.ID == root {
                return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "boom"}
            }
            return TaskResult{TaskID: task.ID, ExitCode: 0}
        }
        t.Cleanup(func() { runCodexTaskFn = orig })

        results := executeConcurrentWithContext(context.Background(), [][]TaskSpec{
            {{ID: root}},
            {{ID: child, Dependencies: []string{root}}},
        }, 1, 0)

        foundChild := false
        for _, res := range results {
            if res.LogPath != "" {
                _ = os.Remove(res.LogPath)
            }
            if res.TaskID != child {
                continue
            }
            foundChild = true
            if res.ExitCode == 0 || !strings.Contains(res.Error, "skipped") {
                t.Fatalf("expected skipped child task result, got %+v", res)
            }
        }
        if !foundChild {
            t.Fatalf("expected child task to be present in results")
        }
    })

    t.Run("panicRecovered", func(t *testing.T) {
        taskID := nextExecutorTestTaskID("panic")

        orig := runCodexTaskFn
        runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
            panic("boom")
        }
        t.Cleanup(func() { runCodexTaskFn = orig })

        results := executeConcurrentWithContext(context.Background(), [][]TaskSpec{{{ID: taskID}}}, 1, 0)
        if len(results) != 1 {
            t.Fatalf("expected 1 result, got %d", len(results))
        }
        if results[0].ExitCode == 0 || !strings.Contains(results[0].Error, "panic") {
            t.Fatalf("expected panic result, got %+v", results[0])
        }
        if results[0].LogPath == "" {
            t.Fatalf("expected LogPath on panic result")
        }
        _ = os.Remove(results[0].LogPath)
    })

    t.Run("cancelWhileWaitingForWorker", func(t *testing.T) {
        task1 := nextExecutorTestTaskID("slot")
        task2 := nextExecutorTestTaskID("slot")

        parentCtx, cancel := context.WithCancel(context.Background())
        started := make(chan struct{})
        unblock := make(chan struct{})
        var startedOnce sync.Once

        orig := runCodexTaskFn
        runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
            startedOnce.Do(func() { close(started) })
            <-unblock
            return TaskResult{TaskID: task.ID, ExitCode: 0}
        }
        t.Cleanup(func() { runCodexTaskFn = orig })

        go func() {
            <-started
            cancel()
            time.Sleep(50 * time.Millisecond)
            close(unblock)
        }()

        results := executeConcurrentWithContext(parentCtx, [][]TaskSpec{{{ID: task1}, {ID: task2}}}, 1, 1)
        foundCancelled := false
        for _, res := range results {
            if res.LogPath != "" {
                _ = os.Remove(res.LogPath)
            }
            if res.ExitCode == 130 {
                foundCancelled = true
            }
        }
        if !foundCancelled {
            t.Fatalf("expected a task to be cancelled")
        }
    })

    t.Run("loggerCreateFails", func(t *testing.T) {
        taskID := nextExecutorTestTaskID("bad") + "/id"

        orig := runCodexTaskFn
        runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
            return TaskResult{TaskID: task.ID, ExitCode: 0}
        }
        t.Cleanup(func() { runCodexTaskFn = orig })

        results := executeConcurrentWithContext(context.Background(), [][]TaskSpec{{{ID: taskID}}}, 1, 0)
        if len(results) != 1 || results[0].ExitCode != 0 {
            t.Fatalf("unexpected results: %+v", results)
        }
    })

    t.Run("TestConcurrentTaskLoggerFailure", func(t *testing.T) {
        // Create a writable temp dir for the main logger, then flip TMPDIR to a read-only
        // location so task-specific loggers fail to open.
        writable := t.TempDir()
        t.Setenv("TMPDIR", writable)

        mainLogger, err := NewLoggerWithSuffix("shared-main")
        if err != nil {
            t.Fatalf("NewLoggerWithSuffix() error = %v", err)
        }
        setLogger(mainLogger)
        t.Cleanup(func() {
            mainLogger.Flush()
            _ = closeLogger()
            _ = os.Remove(mainLogger.Path())
        })

        noWrite := filepath.Join(writable, "ro")
        if err := os.Mkdir(noWrite, 0o500); err != nil {
            t.Fatalf("failed to create read-only temp dir: %v", err)
        }
        t.Setenv("TMPDIR", noWrite)

        taskA := nextExecutorTestTaskID("shared-a")
        taskB := nextExecutorTestTaskID("shared-b")

        orig := runCodexTaskFn
        runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
            logger := taskLoggerFromContext(task.Context)
            if logger != mainLogger {
                return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "unexpected logger"}
            }
            logger.Info("TASK=" + task.ID)
            return TaskResult{TaskID: task.ID, ExitCode: 0}
        }
        t.Cleanup(func() { runCodexTaskFn = orig })

        stderrR, stderrW, err := os.Pipe()
        if err != nil {
            t.Fatalf("os.Pipe() error = %v", err)
        }
        oldStderr := os.Stderr
        os.Stderr = stderrW

        results := executeConcurrentWithContext(context.Background(), [][]TaskSpec{{{ID: taskA}, {ID: taskB}}}, 1, 0)

        _ = stderrW.Close()
        os.Stderr = oldStderr
        stderrData, _ := io.ReadAll(stderrR)
        _ = stderrR.Close()
        stderrOut := string(stderrData)

        if len(results) != 2 {
            t.Fatalf("expected 2 results, got %d", len(results))
        }
        for _, res := range results {
            if res.ExitCode != 0 || res.Error != "" {
                t.Fatalf("task failed unexpectedly: %+v", res)
            }
            if res.LogPath != mainLogger.Path() {
                t.Fatalf("shared log path mismatch: got %q want %q", res.LogPath, mainLogger.Path())
            }
            if !res.sharedLog {
                t.Fatalf("expected sharedLog flag for %+v", res)
            }
            if !strings.Contains(stderrOut, "Log (shared)") {
                t.Fatalf("stderr missing shared marker: %s", stderrOut)
            }
        }

        summary := generateFinalOutput(results)
        if !strings.Contains(summary, "(shared)") {
            t.Fatalf("summary missing shared marker: %s", summary)
        }

        mainLogger.Flush()
        data, err := os.ReadFile(mainLogger.Path())
        if err != nil {
            t.Fatalf("failed to read main log: %v", err)
        }
        content := string(data)
        if !strings.Contains(content, "TASK="+taskA) || !strings.Contains(content, "TASK="+taskB) {
            t.Fatalf("expected shared log to contain both tasks, got: %s", content)
        }
    })

    t.Run("TestSanitizeTaskID", func(t *testing.T) {
        tempDir := t.TempDir()
        t.Setenv("TMPDIR", tempDir)

        orig := runCodexTaskFn
        runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
            logger := taskLoggerFromContext(task.Context)
            if logger == nil {
                return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "missing logger"}
            }
            logger.Info("TASK=" + task.ID)
            return TaskResult{TaskID: task.ID, ExitCode: 0}
        }
        t.Cleanup(func() { runCodexTaskFn = orig })

        idA := "../bad id"
        idB := "tab\tid"
        results := executeConcurrentWithContext(context.Background(), [][]TaskSpec{{{ID: idA}, {ID: idB}}}, 1, 0)

        if len(results) != 2 {
            t.Fatalf("expected 2 results, got %d", len(results))
        }

        expected := map[string]string{
            idA: sanitizeLogSuffix(idA),
            idB: sanitizeLogSuffix(idB),
        }

        for _, res := range results {
            if res.ExitCode != 0 || res.Error != "" {
                t.Fatalf("unexpected failure: %+v", res)
            }
            safe, ok := expected[res.TaskID]
            if !ok {
                t.Fatalf("unexpected task id %q in results", res.TaskID)
            }
            wantBase := fmt.Sprintf("%s-%d-%s.log", primaryLogPrefix(), os.Getpid(), safe)
            if filepath.Base(res.LogPath) != wantBase {
                t.Fatalf("log filename for %q = %q, want %q", res.TaskID, filepath.Base(res.LogPath), wantBase)
            }
            data, err := os.ReadFile(res.LogPath)
            if err != nil {
                t.Fatalf("failed to read log %q: %v", res.LogPath, err)
            }
            if !strings.Contains(string(data), "TASK="+res.TaskID) {
                t.Fatalf("log for %q missing task marker, content: %s", res.TaskID, string(data))
            }
            _ = os.Remove(res.LogPath)
        }
    })
}

func TestExecutorSignalAndTermination(t *testing.T) {
    forceKillDelay.Store(0)
    defer forceKillDelay.Store(5)
@@ -574,3 +1290,70 @@ func TestExecutorForwardSignalsDefaults(t *testing.T) {
    forwardSignals(ctx, &execFakeRunner{process: &execFakeProcess{pid: 80}}, func(string) {})
    time.Sleep(10 * time.Millisecond)
}

func TestExecutorSharedLogFalseWhenCustomLogPath(t *testing.T) {
    devNull, err := os.OpenFile(os.DevNull, os.O_WRONLY, 0)
    if err != nil {
        t.Fatalf("failed to open %s: %v", os.DevNull, err)
    }
    oldStderr := os.Stderr
    os.Stderr = devNull
    t.Cleanup(func() {
        os.Stderr = oldStderr
        _ = devNull.Close()
    })

    tempDir := t.TempDir()
    t.Setenv("TMPDIR", tempDir)

    // Setup: create the main logger.
    mainLogger, err := NewLoggerWithSuffix("shared-main")
    if err != nil {
        t.Fatalf("NewLoggerWithSuffix() error = %v", err)
    }
    setLogger(mainLogger)
    defer func() {
        _ = closeLogger()
        _ = os.Remove(mainLogger.Path())
    }()

    // Simulated scenario: task logger creation fails (via a read-only TMPDIR),
    // so execution falls back to the main logger (handle.shared=true),
    // but runCodexTaskFn returns a custom LogPath that differs from the main logger's path.
    roDir := filepath.Join(tempDir, "ro")
    if err := os.Mkdir(roDir, 0o500); err != nil {
        t.Fatalf("failed to create read-only dir: %v", err)
    }
    t.Setenv("TMPDIR", roDir)

    orig := runCodexTaskFn
    customLogPath := "/custom/path/to.log"
    runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
        // Return a custom LogPath that is not the main logger's path.
        return TaskResult{
            TaskID:   task.ID,
            ExitCode: 0,
            LogPath:  customLogPath,
        }
    }
    defer func() { runCodexTaskFn = orig }()

    // Run the task.
    results := executeConcurrentWithContext(context.Background(), [][]TaskSpec{{{ID: "task1"}}}, 1, 0)

    if len(results) != 1 {
        t.Fatalf("expected 1 result, got %d", len(results))
    }

    res := results[0]
    // Key assertion: even though handle.shared=true (the task logger failed to open),
    // sharedLog must be false because LogPath differs from the main logger's path.
    if res.sharedLog {
        t.Fatalf("expected sharedLog=false when LogPath differs from shared logger, got true")
    }

    // Verify the LogPath really is the custom one.
    if res.LogPath != customLogPath {
        t.Fatalf("expected custom LogPath %s, got %s", customLogPath, res.LogPath)
    }
}
39
codeagent-wrapper/log_writer_limit_test.go
Normal file
@@ -0,0 +1,39 @@
package main

import (
    "os"
    "strings"
    "testing"
)

func TestLogWriterWriteLimitsBuffer(t *testing.T) {
    defer resetTestHooks()

    logger, err := NewLogger()
    if err != nil {
        t.Fatalf("NewLogger error: %v", err)
    }
    setLogger(logger)
    defer closeLogger()

    lw := newLogWriter("P:", 10)
    _, _ = lw.Write([]byte(strings.Repeat("a", 100)))

    if lw.buf.Len() != 10 {
        t.Fatalf("logWriter buffer len=%d, want %d", lw.buf.Len(), 10)
    }
    if !lw.dropped {
        t.Fatalf("expected logWriter to drop overlong line bytes")
    }

    lw.Flush()
    logger.Flush()
    data, err := os.ReadFile(logger.Path())
    if err != nil {
        t.Fatalf("ReadFile error: %v", err)
    }
    if !strings.Contains(string(data), "P:aaaaaaa...") {
        t.Fatalf("log output missing truncated entry, got %q", string(data))
    }
}

@@ -5,6 +5,7 @@ import (
    "context"
    "errors"
    "fmt"
    "hash/crc32"
    "os"
    "path/filepath"
    "strconv"
@@ -18,22 +19,25 @@ import (
// It is intentionally minimal: a buffered channel + single worker goroutine
// to avoid contention while keeping ordering guarantees.
type Logger struct {
    path      string
    file      *os.File
    writer    *bufio.Writer
    ch        chan logEntry
    flushReq  chan chan struct{}
    done      chan struct{}
    closed    atomic.Bool
    closeOnce sync.Once
    workerWG  sync.WaitGroup
    pendingWG sync.WaitGroup
    flushMu   sync.Mutex
    path         string
    file         *os.File
    writer       *bufio.Writer
    ch           chan logEntry
    flushReq     chan chan struct{}
    done         chan struct{}
    closed       atomic.Bool
    closeOnce    sync.Once
    workerWG     sync.WaitGroup
    pendingWG    sync.WaitGroup
    flushMu      sync.Mutex
    workerErr    error
    errorEntries []string // Cache of recent ERROR/WARN entries
    errorMu      sync.Mutex
}

type logEntry struct {
    level string
    msg string
    msg     string
    isError bool // true for ERROR or WARN levels
}

// CleanupStats captures the outcome of a cleanupOldLogs run.
@@ -55,6 +59,10 @@ var (
    evalSymlinksFn = filepath.EvalSymlinks
)

const maxLogSuffixLen = 64

var logSuffixCounter atomic.Uint64

// NewLogger creates the async logger and starts the worker goroutine.
// The log file is created under os.TempDir() using the required naming scheme.
func NewLogger() (*Logger, error) {
@@ -64,14 +72,23 @@ func NewLogger() (*Logger, error) {
// NewLoggerWithSuffix creates a logger with an optional suffix in the filename.
// Useful for tests that need isolated log files within the same process.
func NewLoggerWithSuffix(suffix string) (*Logger, error) {
    filename := fmt.Sprintf("%s-%d", primaryLogPrefix(), os.Getpid())
    pid := os.Getpid()
    filename := fmt.Sprintf("%s-%d", primaryLogPrefix(), pid)
    var safeSuffix string
    if suffix != "" {
        filename += "-" + suffix
        safeSuffix = sanitizeLogSuffix(suffix)
    }
    if safeSuffix != "" {
        filename += "-" + safeSuffix
    }
    filename += ".log"

    path := filepath.Clean(filepath.Join(os.TempDir(), filename))

    if err := os.MkdirAll(filepath.Dir(path), 0o700); err != nil {
        return nil, err
    }

    f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o600)
    if err != nil {
        return nil, err
@@ -92,6 +109,73 @@ func NewLoggerWithSuffix(suffix string) (*Logger, error) {
    return l, nil
}

func sanitizeLogSuffix(raw string) string {
    trimmed := strings.TrimSpace(raw)
    if trimmed == "" {
        return fallbackLogSuffix()
    }

    var b strings.Builder
    changed := false
    for _, r := range trimmed {
        if isSafeLogRune(r) {
            b.WriteRune(r)
        } else {
            changed = true
            b.WriteByte('-')
        }
        if b.Len() >= maxLogSuffixLen {
            changed = true
            break
        }
    }

    sanitized := strings.Trim(b.String(), "-.")
    if sanitized != b.String() {
        changed = true // Mark if trim removed any characters
    }
    if sanitized == "" {
        return fallbackLogSuffix()
    }

    if changed || len(sanitized) > maxLogSuffixLen {
        hash := crc32.ChecksumIEEE([]byte(trimmed))
        hashStr := fmt.Sprintf("%x", hash)

        maxPrefix := maxLogSuffixLen - len(hashStr) - 1
        if maxPrefix < 1 {
            maxPrefix = 1
        }
        if len(sanitized) > maxPrefix {
            sanitized = sanitized[:maxPrefix]
        }

        sanitized = fmt.Sprintf("%s-%s", sanitized, hashStr)
    }

    return sanitized
}

func fallbackLogSuffix() string {
    next := logSuffixCounter.Add(1)
    return fmt.Sprintf("task-%d", next)
}

func isSafeLogRune(r rune) bool {
    switch {
    case r >= 'a' && r <= 'z':
        return true
    case r >= 'A' && r <= 'Z':
        return true
    case r >= '0' && r <= '9':
        return true
    case r == '-', r == '_', r == '.':
        return true
    default:
        return false
    }
}
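
Taken together, the three helpers above produce filesystem-safe, collision-resistant suffixes: untouched inputs pass through verbatim, while any input that was replaced, trimmed, or truncated gets a CRC32 of the raw string appended so two different unsafe IDs cannot sanitize to the same name. Expected behavior per the code above (the hash digits are whatever crc32.ChecksumIEEE yields for the raw input):

    sanitizeLogSuffix("task-1")    // "task-1": every rune is safe, so it is returned unchanged
    sanitizeLogSuffix("../bad id") // "bad-id-<crc32>": '/' and ' ' become '-', the leading "..-" is trimmed, so a hash is appended
    sanitizeLogSuffix("   ")       // "task-<n>": whitespace-only input falls back to the process-local counter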

// Path returns the underlying log file path (useful for tests/inspection).
func (l *Logger) Path() string {
    if l == nil {
@@ -112,10 +196,11 @@ func (l *Logger) Debug(msg string) { l.log("DEBUG", msg) }
// Error logs at ERROR level.
func (l *Logger) Error(msg string) { l.log("ERROR", msg) }

// Close stops the worker and syncs the log file.
// Close signals the worker to flush and close the log file.
// The log file is NOT removed, allowing inspection after program exit.
// It is safe to call multiple times.
// Returns after a 5-second timeout if worker doesn't stop gracefully.
// Waits up to CODEAGENT_LOGGER_CLOSE_TIMEOUT_MS (default: 5000) for shutdown; set to 0 to wait indefinitely.
// Returns an error if shutdown doesn't complete within the timeout.
func (l *Logger) Close() error {
    if l == nil {
        return nil
@@ -126,42 +211,51 @@ func (l *Logger) Close() error {
    l.closeOnce.Do(func() {
        l.closed.Store(true)
        close(l.done)
        close(l.ch)

        // Wait for worker with timeout
        timeout := loggerCloseTimeout()
        workerDone := make(chan struct{})
        go func() {
            l.workerWG.Wait()
            close(workerDone)
        }()

        select {
        case <-workerDone:
            // Worker stopped gracefully
        case <-time.After(5 * time.Second):
            // Worker timeout - proceed with cleanup anyway
            closeErr = fmt.Errorf("logger worker timeout during close")
        if timeout > 0 {
            select {
            case <-workerDone:
                // Worker stopped gracefully
            case <-time.After(timeout):
                closeErr = fmt.Errorf("logger worker timeout during close")
                return
            }
        } else {
            <-workerDone
        }

        if err := l.writer.Flush(); err != nil && closeErr == nil {
            closeErr = err
        if l.workerErr != nil && closeErr == nil {
            closeErr = l.workerErr
        }

        if err := l.file.Sync(); err != nil && closeErr == nil {
            closeErr = err
        }

        if err := l.file.Close(); err != nil && closeErr == nil {
            closeErr = err
        }

        // Log file is kept for debugging - NOT removed
        // Users can manually clean up /tmp/<wrapper>-*.log files
    })

    return closeErr
}

func loggerCloseTimeout() time.Duration {
    const defaultTimeout = 5 * time.Second

    raw := strings.TrimSpace(os.Getenv("CODEAGENT_LOGGER_CLOSE_TIMEOUT_MS"))
    if raw == "" {
        return defaultTimeout
    }
    ms, err := strconv.Atoi(raw)
    if err != nil {
        return defaultTimeout
    }
    if ms <= 0 {
        return 0
    }
    return time.Duration(ms) * time.Millisecond
}
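
loggerCloseTimeout makes the Close budget configurable per process. A summary of the parsing above:

    // Effect of CODEAGENT_LOGGER_CLOSE_TIMEOUT_MS on loggerCloseTimeout():
    //   unset or ""   -> 5s default
    //   "250"         -> 250 * time.Millisecond
    //   "0" or "-10"  -> 0, so Close waits indefinitely for the worker
    //   "abc"         -> 5s default (parse failure)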

// RemoveLogFile removes the log file. Should only be called after Close().
func (l *Logger) RemoveLogFile() error {
    if l == nil {
@@ -170,34 +264,29 @@ func (l *Logger) RemoveLogFile() error {
    return os.Remove(l.path)
}

// ExtractRecentErrors reads the log file and returns the most recent ERROR and WARN entries.
// ExtractRecentErrors returns the most recent ERROR and WARN entries from memory cache.
// Returns up to maxEntries entries in chronological order.
func (l *Logger) ExtractRecentErrors(maxEntries int) []string {
    if l == nil || l.path == "" {
    if l == nil || maxEntries <= 0 {
        return nil
    }

    f, err := os.Open(l.path)
    if err != nil {
    l.errorMu.Lock()
    defer l.errorMu.Unlock()

    if len(l.errorEntries) == 0 {
        return nil
    }
    defer f.Close()

    var entries []string
    scanner := bufio.NewScanner(f)
    for scanner.Scan() {
        line := scanner.Text()
        if strings.Contains(line, "] ERROR:") || strings.Contains(line, "] WARN:") {
            entries = append(entries, line)
        }
    // Return last N entries
    start := 0
    if len(l.errorEntries) > maxEntries {
        start = len(l.errorEntries) - maxEntries
    }

    // Keep only the last maxEntries
    if len(entries) > maxEntries {
        entries = entries[len(entries)-maxEntries:]
    }

    return entries
    result := make([]string, len(l.errorEntries)-start)
    copy(result, l.errorEntries[start:])
    return result
}
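
With the rework above, ExtractRecentErrors no longer re-reads the log file; it copies from the in-memory errorEntries cache that the worker maintains (capped at the last 100 WARN/ERROR messages). A minimal usage sketch against a live logger:

    logger.Warn("disk nearly full")
    logger.Error("task failed")
    logger.Flush() // ensure the worker has consumed both entries into the cache
    recent := logger.ExtractRecentErrors(10)
    // recent == []string{"disk nearly full", "task failed"}, oldest first, at most 10 entries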

// Flush waits for all pending log entries to be written. Primarily for tests.
@@ -254,7 +343,8 @@ func (l *Logger) log(level, msg string) {
        return
    }

    entry := logEntry{level: level, msg: msg}
    isError := level == "WARN" || level == "ERROR"
    entry := logEntry{msg: msg, isError: isError}
    l.flushMu.Lock()
    l.pendingWG.Add(1)
    l.flushMu.Unlock()
@@ -275,18 +365,42 @@ func (l *Logger) run() {
    ticker := time.NewTicker(500 * time.Millisecond)
    defer ticker.Stop()

    writeEntry := func(entry logEntry) {
        fmt.Fprintf(l.writer, "%s\n", entry.msg)

        // Cache error/warn entries in memory for fast extraction
        if entry.isError {
            l.errorMu.Lock()
            l.errorEntries = append(l.errorEntries, entry.msg)
            if len(l.errorEntries) > 100 { // Keep last 100
                l.errorEntries = l.errorEntries[1:]
            }
            l.errorMu.Unlock()
        }

        l.pendingWG.Done()
    }

    finalize := func() {
        if err := l.writer.Flush(); err != nil && l.workerErr == nil {
            l.workerErr = err
        }
        if err := l.file.Sync(); err != nil && l.workerErr == nil {
            l.workerErr = err
        }
        if err := l.file.Close(); err != nil && l.workerErr == nil {
            l.workerErr = err
        }
    }

    for {
        select {
        case entry, ok := <-l.ch:
            if !ok {
                // Channel closed, final flush
                _ = l.writer.Flush()
                finalize()
                return
            }
            timestamp := time.Now().Format("2006-01-02 15:04:05.000")
            pid := os.Getpid()
            fmt.Fprintf(l.writer, "[%s] [PID:%d] %s: %s\n", timestamp, pid, entry.level, entry.msg)
            l.pendingWG.Done()
            writeEntry(entry)

        case <-ticker.C:
            _ = l.writer.Flush()
@@ -296,6 +410,21 @@ func (l *Logger) run() {
            _ = l.writer.Flush()
            _ = l.file.Sync()
            close(flushDone)

        case <-l.done:
            for {
                select {
                case entry, ok := <-l.ch:
                    if !ok {
                        finalize()
                        return
                    }
                    writeEntry(entry)
                default:
                    finalize()
                    return
                }
            }
        }
    }
}
158
codeagent-wrapper/logger_additional_coverage_test.go
Normal file
@@ -0,0 +1,158 @@
package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
    "testing"
)

func TestLoggerNilReceiverNoop(t *testing.T) {
    var logger *Logger
    logger.Info("info")
    logger.Warn("warn")
    logger.Debug("debug")
    logger.Error("error")
    logger.Flush()
    if err := logger.Close(); err != nil {
        t.Fatalf("Close() on nil logger should return nil, got %v", err)
    }
}

func TestLoggerConcurrencyLogHelpers(t *testing.T) {
    setTempDirEnv(t, t.TempDir())

    logger, err := NewLoggerWithSuffix("concurrency")
    if err != nil {
        t.Fatalf("NewLoggerWithSuffix error: %v", err)
    }
    setLogger(logger)
    defer closeLogger()

    logConcurrencyPlanning(0, 2)
    logConcurrencyPlanning(3, 2)
    logConcurrencyState("start", "task-1", 1, 0)
    logConcurrencyState("done", "task-1", 0, 3)
    logger.Flush()

    data, err := os.ReadFile(logger.Path())
    if err != nil {
        t.Fatalf("failed to read log file: %v", err)
    }
    output := string(data)

    checks := []string{
        "parallel: worker_limit=unbounded total_tasks=2",
        "parallel: worker_limit=3 total_tasks=2",
        "parallel: start task=task-1 active=1 limit=unbounded",
        "parallel: done task=task-1 active=0 limit=3",
    }
    for _, c := range checks {
        if !strings.Contains(output, c) {
            t.Fatalf("log output missing %q, got: %s", c, output)
        }
    }
}

func TestLoggerConcurrencyLogHelpersNoopWithoutActiveLogger(t *testing.T) {
    _ = closeLogger()
    logConcurrencyPlanning(1, 1)
    logConcurrencyState("start", "task-1", 0, 1)
}

func TestLoggerCleanupOldLogsSkipsUnsafeAndHandlesAlreadyDeleted(t *testing.T) {
    tempDir := setTempDirEnv(t, t.TempDir())

    unsafePath := createTempLog(t, tempDir, fmt.Sprintf("%s-%d.log", primaryLogPrefix(), 222))
    orphanPath := createTempLog(t, tempDir, fmt.Sprintf("%s-%d.log", primaryLogPrefix(), 111))

    stubFileStat(t, func(path string) (os.FileInfo, error) {
        if path == unsafePath {
            return fakeFileInfo{mode: os.ModeSymlink}, nil
        }
        return os.Lstat(path)
    })

    stubProcessRunning(t, func(pid int) bool {
        if pid == 111 {
            _ = os.Remove(orphanPath)
        }
        return false
    })

    stats, err := cleanupOldLogs()
    if err != nil {
        t.Fatalf("cleanupOldLogs() unexpected error: %v", err)
    }

    if stats.Scanned != 2 {
        t.Fatalf("scanned = %d, want %d", stats.Scanned, 2)
    }
    if stats.Deleted != 0 {
        t.Fatalf("deleted = %d, want %d", stats.Deleted, 0)
    }
    if stats.Kept != 2 {
        t.Fatalf("kept = %d, want %d", stats.Kept, 2)
    }
    if stats.Errors != 0 {
        t.Fatalf("errors = %d, want %d", stats.Errors, 0)
    }

    hasSkip := false
    hasAlreadyDeleted := false
    for _, name := range stats.KeptFiles {
        if strings.Contains(name, "already deleted") {
            hasAlreadyDeleted = true
        }
        if strings.Contains(name, filepath.Base(unsafePath)) {
            hasSkip = true
        }
    }
    if !hasSkip {
        t.Fatalf("expected kept files to include unsafe log %q, got %+v", filepath.Base(unsafePath), stats.KeptFiles)
    }
    if !hasAlreadyDeleted {
        t.Fatalf("expected kept files to include already deleted marker, got %+v", stats.KeptFiles)
    }
}

func TestLoggerIsUnsafeFileErrorPaths(t *testing.T) {
    tempDir := t.TempDir()

    t.Run("stat ErrNotExist", func(t *testing.T) {
        stubFileStat(t, func(string) (os.FileInfo, error) {
            return nil, os.ErrNotExist
        })

        unsafe, reason := isUnsafeFile("missing.log", tempDir)
        if !unsafe || reason != "" {
            t.Fatalf("expected missing file to be skipped silently, got unsafe=%v reason=%q", unsafe, reason)
        }
    })

    t.Run("stat error", func(t *testing.T) {
        stubFileStat(t, func(string) (os.FileInfo, error) {
            return nil, fmt.Errorf("boom")
        })

        unsafe, reason := isUnsafeFile("broken.log", tempDir)
        if !unsafe || !strings.Contains(reason, "stat failed") {
            t.Fatalf("expected stat failure to be unsafe, got unsafe=%v reason=%q", unsafe, reason)
        }
    })

    t.Run("EvalSymlinks error", func(t *testing.T) {
        stubFileStat(t, func(string) (os.FileInfo, error) {
            return fakeFileInfo{}, nil
        })
        stubEvalSymlinks(t, func(string) (string, error) {
            return "", fmt.Errorf("resolve failed")
        })

        unsafe, reason := isUnsafeFile("cannot-resolve.log", tempDir)
        if !unsafe || !strings.Contains(reason, "path resolution failed") {
            t.Fatalf("expected resolution failure to be unsafe, got unsafe=%v reason=%q", unsafe, reason)
        }
    })
}
115
codeagent-wrapper/logger_suffix_test.go
Normal file
@@ -0,0 +1,115 @@
package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
    "testing"
)

func TestLoggerWithSuffixNamingAndIsolation(t *testing.T) {
    tempDir := setTempDirEnv(t, t.TempDir())

    taskA := "task-1"
    taskB := "task-2"

    loggerA, err := NewLoggerWithSuffix(taskA)
    if err != nil {
        t.Fatalf("NewLoggerWithSuffix(%q) error = %v", taskA, err)
    }
    defer loggerA.Close()

    loggerB, err := NewLoggerWithSuffix(taskB)
    if err != nil {
        t.Fatalf("NewLoggerWithSuffix(%q) error = %v", taskB, err)
    }
    defer loggerB.Close()

    wantA := filepath.Join(tempDir, fmt.Sprintf("%s-%d-%s.log", primaryLogPrefix(), os.Getpid(), taskA))
    if loggerA.Path() != wantA {
        t.Fatalf("loggerA path = %q, want %q", loggerA.Path(), wantA)
    }

    wantB := filepath.Join(tempDir, fmt.Sprintf("%s-%d-%s.log", primaryLogPrefix(), os.Getpid(), taskB))
    if loggerB.Path() != wantB {
        t.Fatalf("loggerB path = %q, want %q", loggerB.Path(), wantB)
    }

    if loggerA.Path() == loggerB.Path() {
        t.Fatalf("expected different log files, got %q", loggerA.Path())
    }

    loggerA.Info("from taskA")
    loggerB.Info("from taskB")
    loggerA.Flush()
    loggerB.Flush()

    dataA, err := os.ReadFile(loggerA.Path())
    if err != nil {
        t.Fatalf("failed to read loggerA file: %v", err)
    }
    dataB, err := os.ReadFile(loggerB.Path())
    if err != nil {
        t.Fatalf("failed to read loggerB file: %v", err)
    }

    if !strings.Contains(string(dataA), "from taskA") {
        t.Fatalf("loggerA missing its message, got: %q", string(dataA))
    }
    if strings.Contains(string(dataA), "from taskB") {
        t.Fatalf("loggerA contains loggerB message, got: %q", string(dataA))
    }
    if !strings.Contains(string(dataB), "from taskB") {
        t.Fatalf("loggerB missing its message, got: %q", string(dataB))
    }
    if strings.Contains(string(dataB), "from taskA") {
        t.Fatalf("loggerB contains loggerA message, got: %q", string(dataB))
    }
}

func TestLoggerWithSuffixReturnsErrorWhenTempDirNotWritable(t *testing.T) {
    base := t.TempDir()
    noWrite := filepath.Join(base, "ro")
    if err := os.Mkdir(noWrite, 0o500); err != nil {
        t.Fatalf("failed to create read-only temp dir: %v", err)
    }
    t.Cleanup(func() { _ = os.Chmod(noWrite, 0o700) })
    setTempDirEnv(t, noWrite)

    logger, err := NewLoggerWithSuffix("task-err")
    if err == nil {
        _ = logger.Close()
        t.Fatalf("expected error when temp dir is not writable")
    }
}

func TestLoggerWithSuffixSanitizesUnsafeSuffix(t *testing.T) {
    tempDir := setTempDirEnv(t, t.TempDir())

    raw := "../bad id/with?chars"
    safe := sanitizeLogSuffix(raw)
    if safe == "" {
        t.Fatalf("sanitizeLogSuffix returned empty string")
    }
    if strings.ContainsAny(safe, "/\\") {
        t.Fatalf("sanitized suffix should not contain path separators, got %q", safe)
    }

    logger, err := NewLoggerWithSuffix(raw)
    if err != nil {
        t.Fatalf("NewLoggerWithSuffix(%q) error = %v", raw, err)
    }
    t.Cleanup(func() {
        _ = logger.Close()
        _ = os.Remove(logger.Path())
    })

    wantBase := fmt.Sprintf("%s-%d-%s.log", primaryLogPrefix(), os.Getpid(), safe)
    if gotBase := filepath.Base(logger.Path()); gotBase != wantBase {
        t.Fatalf("log filename = %q, want %q", gotBase, wantBase)
    }
    if dir := filepath.Dir(logger.Path()); dir != tempDir {
        t.Fatalf("logger path dir = %q, want %q", dir, tempDir)
    }
}

@@ -26,7 +26,7 @@ func compareCleanupStats(got, want CleanupStats) bool {
    return true
}

func TestRunLoggerCreatesFileWithPID(t *testing.T) {
func TestLoggerCreatesFileWithPID(t *testing.T) {
    tempDir := t.TempDir()
    t.Setenv("TMPDIR", tempDir)

@@ -46,7 +46,7 @@ func TestRunLoggerCreatesFileWithPID(t *testing.T) {
    }
}

func TestRunLoggerWritesLevels(t *testing.T) {
func TestLoggerWritesLevels(t *testing.T) {
    tempDir := t.TempDir()
    t.Setenv("TMPDIR", tempDir)

@@ -69,7 +69,7 @@ func TestRunLoggerWritesLevels(t *testing.T) {
    }

    content := string(data)
    checks := []string{"INFO: info message", "WARN: warn message", "DEBUG: debug message", "ERROR: error message"}
    checks := []string{"info message", "warn message", "debug message", "error message"}
    for _, c := range checks {
        if !strings.Contains(content, c) {
            t.Fatalf("log file missing entry %q, content: %s", c, content)
@@ -77,7 +77,31 @@ func TestRunLoggerWritesLevels(t *testing.T) {
    }
}

func TestRunLoggerCloseRemovesFileAndStopsWorker(t *testing.T) {
func TestLoggerDefaultIsTerminalCoverage(t *testing.T) {
    oldStdin := os.Stdin
    t.Cleanup(func() { os.Stdin = oldStdin })

    f, err := os.CreateTemp(t.TempDir(), "stdin-*")
    if err != nil {
        t.Fatalf("os.CreateTemp() error = %v", err)
    }
    defer os.Remove(f.Name())

    os.Stdin = f
    if got := defaultIsTerminal(); got {
        t.Fatalf("defaultIsTerminal() = %v, want false for regular file", got)
    }

    if err := f.Close(); err != nil {
        t.Fatalf("Close() error = %v", err)
    }
    os.Stdin = f
    if got := defaultIsTerminal(); !got {
        t.Fatalf("defaultIsTerminal() = %v, want true when Stat fails", got)
    }
}

func TestLoggerCloseStopsWorkerAndKeepsFile(t *testing.T) {
    tempDir := t.TempDir()
    t.Setenv("TMPDIR", tempDir)

@@ -94,6 +118,11 @@ func TestRunLoggerCloseRemovesFileAndStopsWorker(t *testing.T) {
    if err := logger.Close(); err != nil {
        t.Fatalf("Close() returned error: %v", err)
    }
    if logger.file != nil {
        if _, err := logger.file.Write([]byte("x")); err == nil {
            t.Fatalf("expected file to be closed after Close()")
        }
    }

    // After recent changes, log file is kept for debugging - NOT removed
    if _, err := os.Stat(logPath); os.IsNotExist(err) {
@@ -116,7 +145,7 @@ func TestRunLoggerCloseRemovesFileAndStopsWorker(t *testing.T) {
    }
}

func TestRunLoggerConcurrentWritesSafe(t *testing.T) {
func TestLoggerConcurrentWritesSafe(t *testing.T) {
    tempDir := t.TempDir()
    t.Setenv("TMPDIR", tempDir)

@@ -165,7 +194,7 @@ func TestRunLoggerConcurrentWritesSafe(t *testing.T) {
    }
}

func TestRunLoggerTerminateProcessActive(t *testing.T) {
func TestLoggerTerminateProcessActive(t *testing.T) {
    cmd := exec.Command("sleep", "5")
    if err := cmd.Start(); err != nil {
        t.Skipf("cannot start sleep command: %v", err)
@@ -193,7 +222,7 @@ func TestRunLoggerTerminateProcessActive(t *testing.T) {
    time.Sleep(10 * time.Millisecond)
}

func TestRunTerminateProcessNil(t *testing.T) {
func TestLoggerTerminateProcessNil(t *testing.T) {
    if timer := terminateProcess(nil); timer != nil {
        t.Fatalf("terminateProcess(nil) should return nil timer")
    }
@@ -202,7 +231,7 @@ func TestRunTerminateProcessNil(t *testing.T) {
    }
}

func TestRunCleanupOldLogsRemovesOrphans(t *testing.T) {
func TestLoggerCleanupOldLogsRemovesOrphans(t *testing.T) {
    tempDir := setTempDirEnv(t, t.TempDir())

    orphan1 := createTempLog(t, tempDir, "codex-wrapper-111.log")
@@ -252,7 +281,7 @@ func TestRunCleanupOldLogsRemovesOrphans(t *testing.T) {
    }
}

func TestRunCleanupOldLogsHandlesInvalidNamesAndErrors(t *testing.T) {
func TestLoggerCleanupOldLogsHandlesInvalidNamesAndErrors(t *testing.T) {
    tempDir := setTempDirEnv(t, t.TempDir())

    invalid := []string{
@@ -310,7 +339,7 @@ func TestRunCleanupOldLogsHandlesInvalidNamesAndErrors(t *testing.T) {
    }
}

func TestRunCleanupOldLogsHandlesGlobFailures(t *testing.T) {
func TestLoggerCleanupOldLogsHandlesGlobFailures(t *testing.T) {
    stubProcessRunning(t, func(pid int) bool {
        t.Fatalf("process check should not run when glob fails")
        return false
@@ -336,7 +365,7 @@ func TestRunCleanupOldLogsHandlesGlobFailures(t *testing.T) {
    }
}

func TestRunCleanupOldLogsEmptyDirectoryStats(t *testing.T) {
func TestLoggerCleanupOldLogsEmptyDirectoryStats(t *testing.T) {
    setTempDirEnv(t, t.TempDir())

    stubProcessRunning(t, func(int) bool {
@@ -356,7 +385,7 @@ func TestRunCleanupOldLogsEmptyDirectoryStats(t *testing.T) {
    }
}

func TestRunCleanupOldLogsHandlesTempDirPermissionErrors(t *testing.T) {
func TestLoggerCleanupOldLogsHandlesTempDirPermissionErrors(t *testing.T) {
    tempDir := setTempDirEnv(t, t.TempDir())

    paths := []string{
@@ -396,7 +425,7 @@ func TestRunCleanupOldLogsHandlesTempDirPermissionErrors(t *testing.T) {
    }
}

func TestRunCleanupOldLogsHandlesPermissionDeniedFile(t *testing.T) {
func TestLoggerCleanupOldLogsHandlesPermissionDeniedFile(t *testing.T) {
    tempDir := setTempDirEnv(t, t.TempDir())

    protected := createTempLog(t, tempDir, "codex-wrapper-6200.log")
@@ -433,7 +462,7 @@ func TestRunCleanupOldLogsHandlesPermissionDeniedFile(t *testing.T) {
    }
}

func TestRunCleanupOldLogsPerformanceBound(t *testing.T) {
func TestLoggerCleanupOldLogsPerformanceBound(t *testing.T) {
    tempDir := setTempDirEnv(t, t.TempDir())

    const fileCount = 400
@@ -476,17 +505,98 @@ func TestRunCleanupOldLogsPerformanceBound(t *testing.T) {
    }
}

func TestRunCleanupOldLogsCoverageSuite(t *testing.T) {
func TestLoggerCleanupOldLogsCoverageSuite(t *testing.T) {
    TestBackendParseJSONStream_CoverageSuite(t)
}

// Reuse the existing coverage suite so the focused TestLogger run still exercises
// the rest of the codebase and keeps coverage high.
func TestRunLoggerCoverageSuite(t *testing.T) {
    TestBackendParseJSONStream_CoverageSuite(t)
func TestLoggerCoverageSuite(t *testing.T) {
    suite := []struct {
        name string
        fn   func(*testing.T)
    }{
        {"TestBackendParseJSONStream_CoverageSuite", TestBackendParseJSONStream_CoverageSuite},
        {"TestVersionCoverageFullRun", TestVersionCoverageFullRun},
        {"TestVersionMainWrapper", TestVersionMainWrapper},

        {"TestExecutorHelperCoverage", TestExecutorHelperCoverage},
        {"TestExecutorRunCodexTaskWithContext", TestExecutorRunCodexTaskWithContext},
        {"TestExecutorParallelLogIsolation", TestExecutorParallelLogIsolation},
        {"TestExecutorTaskLoggerContext", TestExecutorTaskLoggerContext},
        {"TestExecutorExecuteConcurrentWithContextBranches", TestExecutorExecuteConcurrentWithContextBranches},
        {"TestExecutorSignalAndTermination", TestExecutorSignalAndTermination},
        {"TestExecutorCancelReasonAndCloseWithReason", TestExecutorCancelReasonAndCloseWithReason},
        {"TestExecutorForceKillTimerStop", TestExecutorForceKillTimerStop},
        {"TestExecutorForwardSignalsDefaults", TestExecutorForwardSignalsDefaults},

        {"TestBackendParseArgs_NewMode", TestBackendParseArgs_NewMode},
        {"TestBackendParseArgs_ResumeMode", TestBackendParseArgs_ResumeMode},
        {"TestBackendParseArgs_BackendFlag", TestBackendParseArgs_BackendFlag},
        {"TestBackendParseArgs_SkipPermissions", TestBackendParseArgs_SkipPermissions},
        {"TestBackendParseBoolFlag", TestBackendParseBoolFlag},
        {"TestBackendEnvFlagEnabled", TestBackendEnvFlagEnabled},
        {"TestRunResolveTimeout", TestRunResolveTimeout},
        {"TestRunIsTerminal", TestRunIsTerminal},
        {"TestRunReadPipedTask", TestRunReadPipedTask},
        {"TestTailBufferWrite", TestTailBufferWrite},
        {"TestLogWriterWriteLimitsBuffer", TestLogWriterWriteLimitsBuffer},
        {"TestLogWriterLogLine", TestLogWriterLogLine},
        {"TestNewLogWriterDefaultMaxLen", TestNewLogWriterDefaultMaxLen},
        {"TestNewLogWriterDefaultLimit", TestNewLogWriterDefaultLimit},
        {"TestRunHello", TestRunHello},
        {"TestRunGreet", TestRunGreet},
        {"TestRunFarewell", TestRunFarewell},
        {"TestRunFarewellEmpty", TestRunFarewellEmpty},

        {"TestParallelParseConfig_Success", TestParallelParseConfig_Success},
        {"TestParallelParseConfig_Backend", TestParallelParseConfig_Backend},
        {"TestParallelParseConfig_InvalidFormat", TestParallelParseConfig_InvalidFormat},
        {"TestParallelParseConfig_EmptyTasks", TestParallelParseConfig_EmptyTasks},
        {"TestParallelParseConfig_MissingID", TestParallelParseConfig_MissingID},
        {"TestParallelParseConfig_MissingTask", TestParallelParseConfig_MissingTask},
        {"TestParallelParseConfig_DuplicateID", TestParallelParseConfig_DuplicateID},
        {"TestParallelParseConfig_DelimiterFormat", TestParallelParseConfig_DelimiterFormat},

        {"TestBackendSelectBackend", TestBackendSelectBackend},
        {"TestBackendSelectBackend_Invalid", TestBackendSelectBackend_Invalid},
        {"TestBackendSelectBackend_DefaultOnEmpty", TestBackendSelectBackend_DefaultOnEmpty},
        {"TestBackendBuildArgs_CodexBackend", TestBackendBuildArgs_CodexBackend},
        {"TestBackendBuildArgs_ClaudeBackend", TestBackendBuildArgs_ClaudeBackend},
        {"TestClaudeBackendBuildArgs_OutputValidation", TestClaudeBackendBuildArgs_OutputValidation},
        {"TestBackendBuildArgs_GeminiBackend", TestBackendBuildArgs_GeminiBackend},
        {"TestGeminiBackendBuildArgs_OutputValidation", TestGeminiBackendBuildArgs_OutputValidation},
        {"TestBackendNamesAndCommands", TestBackendNamesAndCommands},

        {"TestBackendParseJSONStream", TestBackendParseJSONStream},
        {"TestBackendParseJSONStream_ClaudeEvents", TestBackendParseJSONStream_ClaudeEvents},
        {"TestBackendParseJSONStream_GeminiEvents", TestBackendParseJSONStream_GeminiEvents},
        {"TestBackendParseJSONStreamWithWarn_InvalidLine", TestBackendParseJSONStreamWithWarn_InvalidLine},
        {"TestBackendParseJSONStream_OnMessage", TestBackendParseJSONStream_OnMessage},
        {"TestBackendParseJSONStream_ScannerError", TestBackendParseJSONStream_ScannerError},
        {"TestBackendDiscardInvalidJSON", TestBackendDiscardInvalidJSON},
        {"TestBackendDiscardInvalidJSONBuffer", TestBackendDiscardInvalidJSONBuffer},

        {"TestCurrentWrapperNameFallsBackToExecutable", TestCurrentWrapperNameFallsBackToExecutable},
        {"TestCurrentWrapperNameDetectsLegacyAliasSymlink", TestCurrentWrapperNameDetectsLegacyAliasSymlink},

        {"TestIsProcessRunning", TestIsProcessRunning},
        {"TestGetProcessStartTimeReadsProcStat", TestGetProcessStartTimeReadsProcStat},
        {"TestGetProcessStartTimeInvalidData", TestGetProcessStartTimeInvalidData},
        {"TestGetBootTimeParsesBtime", TestGetBootTimeParsesBtime},
        {"TestGetBootTimeInvalidData", TestGetBootTimeInvalidData},

        {"TestClaudeBuildArgs_ModesAndPermissions", TestClaudeBuildArgs_ModesAndPermissions},
        {"TestClaudeBuildArgs_GeminiAndCodexModes", TestClaudeBuildArgs_GeminiAndCodexModes},
        {"TestClaudeBuildArgs_BackendMetadata", TestClaudeBuildArgs_BackendMetadata},
    }

    for _, tc := range suite {
        t.Run(tc.name, tc.fn)
    }
}

func TestRunCleanupOldLogsKeepsCurrentProcessLog(t *testing.T) {
func TestLoggerCleanupOldLogsKeepsCurrentProcessLog(t *testing.T) {
    tempDir := setTempDirEnv(t, t.TempDir())

    currentPID := os.Getpid()
@@ -518,7 +628,7 @@ func TestRunCleanupOldLogsKeepsCurrentProcessLog(t *testing.T) {
    }
}

func TestIsPIDReusedScenarios(t *testing.T) {
func TestLoggerIsPIDReusedScenarios(t *testing.T) {
    now := time.Now()
    tests := []struct {
        name string
@@ -552,7 +662,7 @@ func TestIsPIDReusedScenarios(t *testing.T) {
    }
}

func TestIsUnsafeFileSecurityChecks(t *testing.T) {
func TestLoggerIsUnsafeFileSecurityChecks(t *testing.T) {
    tempDir := t.TempDir()
    absTempDir, err := filepath.Abs(tempDir)
    if err != nil {
@@ -601,7 +711,7 @@ func TestIsUnsafeFileSecurityChecks(t *testing.T) {
    })
}

func TestRunLoggerPathAndRemove(t *testing.T) {
func TestLoggerPathAndRemove(t *testing.T) {
    tempDir := t.TempDir()
    path := filepath.Join(tempDir, "sample.log")
    if err := os.WriteFile(path, []byte("test"), 0o644); err != nil {
@@ -628,7 +738,19 @@ func TestRunLoggerPathAndRemove(t *testing.T) {
    }
}

func TestRunLoggerInternalLog(t *testing.T) {
func TestLoggerTruncateBytesCoverage(t *testing.T) {
    if got := truncateBytes([]byte("abc"), 3); got != "abc" {
        t.Fatalf("truncateBytes() = %q, want %q", got, "abc")
    }
    if got := truncateBytes([]byte("abcd"), 3); got != "abc..." {
        t.Fatalf("truncateBytes() = %q, want %q", got, "abc...")
    }
    if got := truncateBytes([]byte("abcd"), -1); got != "" {
        t.Fatalf("truncateBytes() = %q, want empty string", got)
    }
}

func TestLoggerInternalLog(t *testing.T) {
    logger := &Logger{
        ch:   make(chan logEntry, 1),
        done: make(chan struct{}),
@@ -644,7 +766,7 @@ func TestRunLoggerInternalLog(t *testing.T) {

    logger.log("INFO", "hello")
    entry := <-done
    if entry.level != "INFO" || entry.msg != "hello" {
    if entry.msg != "hello" {
        t.Fatalf("unexpected entry %+v", entry)
    }

@@ -653,7 +775,7 @@ func TestRunLoggerInternalLog(t *testing.T) {
    close(logger.done)
}

func TestRunParsePIDFromLog(t *testing.T) {
func TestLoggerParsePIDFromLog(t *testing.T) {
    hugePID := strconv.FormatInt(math.MaxInt64, 10) + "0"
    tests := []struct {
        name string
@@ -769,69 +891,93 @@ func (f fakeFileInfo) ModTime() time.Time { return f.modTime }
func (f fakeFileInfo) IsDir() bool        { return false }
func (f fakeFileInfo) Sys() interface{}   { return nil }

func TestExtractRecentErrors(t *testing.T) {
func TestLoggerExtractRecentErrors(t *testing.T) {
    tests := []struct {
        name       string
        content    string
        logs       []struct{ level, msg string }
        maxEntries int
        want       []string
    }{
        {
            name:       "empty log",
            content:    "",
            logs:       nil,
            maxEntries: 10,
            want:       nil,
        },
        {
            name: "no errors",
            content: `[2025-01-01 12:00:00.000] [PID:123] INFO: started
[2025-01-01 12:00:01.000] [PID:123] DEBUG: processing`,
            logs: []struct{ level, msg string }{
                {"INFO", "started"},
                {"DEBUG", "processing"},
            },
            maxEntries: 10,
            want:       nil,
        },
        {
            name: "single error",
            content: `[2025-01-01 12:00:00.000] [PID:123] INFO: started
[2025-01-01 12:00:01.000] [PID:123] ERROR: something failed`,
            logs: []struct{ level, msg string }{
                {"INFO", "started"},
                {"ERROR", "something failed"},
            },
            maxEntries: 10,
            want:       []string{"[2025-01-01 12:00:01.000] [PID:123] ERROR: something failed"},
            want:       []string{"something failed"},
        },
        {
            name: "error and warn",
            content: `[2025-01-01 12:00:00.000] [PID:123] INFO: started
[2025-01-01 12:00:01.000] [PID:123] WARN: warning message
[2025-01-01 12:00:02.000] [PID:123] ERROR: error message`,
            logs: []struct{ level, msg string }{
                {"INFO", "started"},
                {"WARN", "warning message"},
                {"ERROR", "error message"},
            },
            maxEntries: 10,
            want: []string{
                "[2025-01-01 12:00:01.000] [PID:123] WARN: warning message",
                "[2025-01-01 12:00:02.000] [PID:123] ERROR: error message",
                "warning message",
                "error message",
            },
        },
        {
            name: "truncate to max",
            content: `[2025-01-01 12:00:00.000] [PID:123] ERROR: error 1
[2025-01-01 12:00:01.000] [PID:123] ERROR: error 2
[2025-01-01 12:00:02.000] [PID:123] ERROR: error 3
[2025-01-01 12:00:03.000] [PID:123] ERROR: error 4
[2025-01-01 12:00:04.000] [PID:123] ERROR: error 5`,
            logs: []struct{ level, msg string }{
                {"ERROR", "error 1"},
                {"ERROR", "error 2"},
                {"ERROR", "error 3"},
                {"ERROR", "error 4"},
                {"ERROR", "error 5"},
            },
            maxEntries: 3,
            want: []string{
                "[2025-01-01 12:00:02.000] [PID:123] ERROR: error 3",
                "[2025-01-01 12:00:03.000] [PID:123] ERROR: error 4",
                "[2025-01-01 12:00:04.000] [PID:123] ERROR: error 5",
                "error 3",
                "error 4",
                "error 5",
            },
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            tempDir := t.TempDir()
            logPath := filepath.Join(tempDir, "test.log")
            if err := os.WriteFile(logPath, []byte(tt.content), 0o644); err != nil {
                t.Fatalf("failed to write test log: %v", err)
            logger, err := NewLoggerWithSuffix("extract-test")
            if err != nil {
                t.Fatalf("NewLoggerWithSuffix() error = %v", err)
            }
            defer logger.Close()
            defer logger.RemoveLogFile()

            // Write logs using logger methods
            for _, entry := range tt.logs {
                switch entry.level {
                case "INFO":
                    logger.Info(entry.msg)
                case "WARN":
                    logger.Warn(entry.msg)
                case "ERROR":
                    logger.Error(entry.msg)
                case "DEBUG":
                    logger.Debug(entry.msg)
                }
            }

            logger := &Logger{path: logPath}
            logger.Flush()

            got := logger.ExtractRecentErrors(tt.maxEntries)

            if len(got) != len(tt.want) {
@@ -846,23 +992,137 @@ func TestExtractRecentErrors(t *testing.T) {
    }
}

func TestExtractRecentErrorsNilLogger(t *testing.T) {
func TestLoggerExtractRecentErrorsNilLogger(t *testing.T) {
    var logger *Logger
    if got := logger.ExtractRecentErrors(10); got != nil {
        t.Fatalf("nil logger ExtractRecentErrors() should return nil, got %v", got)
    }
}

func TestExtractRecentErrorsEmptyPath(t *testing.T) {
func TestLoggerExtractRecentErrorsEmptyPath(t *testing.T) {
    logger := &Logger{path: ""}
    if got := logger.ExtractRecentErrors(10); got != nil {
        t.Fatalf("empty path ExtractRecentErrors() should return nil, got %v", got)
    }
}

func TestExtractRecentErrorsFileNotExist(t *testing.T) {
func TestLoggerExtractRecentErrorsFileNotExist(t *testing.T) {
    logger := &Logger{path: "/nonexistent/path/to/log.log"}
    if got := logger.ExtractRecentErrors(10); got != nil {
        t.Fatalf("nonexistent file ExtractRecentErrors() should return nil, got %v", got)
    }
}

func TestSanitizeLogSuffixNoDuplicates(t *testing.T) {
    testCases := []string{
        "task",
        "task.",
        ".task",
"-task",
|
||||
"task-",
|
||||
"--task--",
|
||||
"..task..",
|
||||
}
|
||||
|
||||
seen := make(map[string]string)
|
||||
for _, input := range testCases {
|
||||
result := sanitizeLogSuffix(input)
|
||||
if result == "" {
|
||||
t.Fatalf("sanitizeLogSuffix(%q) returned empty string", input)
|
||||
}
|
||||
|
||||
if prev, exists := seen[result]; exists {
|
||||
t.Fatalf("collision detected: %q and %q both produce %q", input, prev, result)
|
||||
}
|
||||
seen[result] = input
|
||||
|
||||
// Verify result is safe for file names
|
||||
if strings.ContainsAny(result, "/\\:*?\"<>|") {
|
||||
t.Fatalf("sanitizeLogSuffix(%q) = %q contains unsafe characters", input, result)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestExtractRecentErrorsBoundaryCheck(t *testing.T) {
|
||||
logger, err := NewLoggerWithSuffix("boundary-test")
|
||||
if err != nil {
|
||||
t.Fatalf("NewLoggerWithSuffix() error = %v", err)
|
||||
}
|
||||
defer logger.Close()
|
||||
defer logger.RemoveLogFile()
|
||||
|
||||
// Write some errors
|
||||
logger.Error("error 1")
|
||||
logger.Warn("warn 1")
|
||||
logger.Error("error 2")
|
||||
logger.Flush()
|
||||
|
||||
// Test zero
|
||||
result := logger.ExtractRecentErrors(0)
|
||||
if result != nil {
|
||||
t.Fatalf("ExtractRecentErrors(0) should return nil, got %v", result)
|
||||
}
|
||||
|
||||
// Test negative
|
||||
result = logger.ExtractRecentErrors(-5)
|
||||
if result != nil {
|
||||
t.Fatalf("ExtractRecentErrors(-5) should return nil, got %v", result)
|
||||
}
|
||||
|
||||
// Test positive still works
|
||||
result = logger.ExtractRecentErrors(10)
|
||||
if len(result) != 3 {
|
||||
t.Fatalf("ExtractRecentErrors(10) expected 3 entries, got %d", len(result))
|
||||
}
|
||||
}
|
||||
|
||||
func TestErrorEntriesMaxLimit(t *testing.T) {
|
||||
logger, err := NewLoggerWithSuffix("max-limit-test")
|
||||
if err != nil {
|
||||
t.Fatalf("NewLoggerWithSuffix() error = %v", err)
|
||||
}
|
||||
defer logger.Close()
|
||||
defer logger.RemoveLogFile()
|
||||
|
||||
// Write 150 error/warn entries
|
||||
for i := 1; i <= 150; i++ {
|
||||
if i%2 == 0 {
|
||||
logger.Error(fmt.Sprintf("error-%03d", i))
|
||||
} else {
|
||||
logger.Warn(fmt.Sprintf("warn-%03d", i))
|
||||
}
|
||||
}
|
||||
logger.Flush()
|
||||
|
||||
// Extract all cached errors
|
||||
result := logger.ExtractRecentErrors(200) // Request more than cache size
|
||||
|
||||
// Should only have last 100 entries (entries 51-150 in sequence)
|
||||
if len(result) != 100 {
|
||||
t.Fatalf("expected 100 cached entries, got %d", len(result))
|
||||
}
|
||||
|
||||
// Verify entries are the last 100 (entries 51-150)
|
||||
if !strings.Contains(result[0], "051") {
|
||||
t.Fatalf("first cached entry should be entry 51, got: %s", result[0])
|
||||
}
|
||||
if !strings.Contains(result[99], "150") {
|
||||
t.Fatalf("last cached entry should be entry 150, got: %s", result[99])
|
||||
}
|
||||
|
||||
// Verify order is preserved - simplified logic
|
||||
for i := 0; i < len(result)-1; i++ {
|
||||
expectedNum := 51 + i
|
||||
nextNum := 51 + i + 1
|
||||
|
||||
expectedEntry := fmt.Sprintf("%03d", expectedNum)
|
||||
nextEntry := fmt.Sprintf("%03d", nextNum)
|
||||
|
||||
if !strings.Contains(result[i], expectedEntry) {
|
||||
t.Fatalf("entry at index %d should contain %s, got: %s", i, expectedEntry, result[i])
|
||||
}
|
||||
if !strings.Contains(result[i+1], nextEntry) {
|
||||
t.Fatalf("entry at index %d should contain %s, got: %s", i+1, nextEntry, result[i+1])
|
||||
}
|
||||
}
|
||||
}
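
The boundary tests above pin down the cache contract: a non-positive `maxEntries` yields nil, and at most the 100 most recent WARN/ERROR entries are retained regardless of how many are requested. The Logger's internals are not part of this diff; purely as an illustration, a minimal sketch of a bounded cache that would satisfy those assertions might look like this (the `errorCache` type and `maxErrorEntries` constant are hypothetical names, not the project's actual code):

```go
package logger

import "sync"

// Hypothetical bounded cache; mirrors the 100-entry limit the tests assert.
const maxErrorEntries = 100

type errorCache struct {
    mu      sync.Mutex
    entries []string
}

// add appends an entry and drops the oldest once the cap is exceeded.
func (c *errorCache) add(msg string) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.entries = append(c.entries, msg)
    if len(c.entries) > maxErrorEntries {
        c.entries = c.entries[len(c.entries)-maxErrorEntries:]
    }
}

// recent returns up to n of the newest entries, oldest first.
// n <= 0 returns nil, matching TestExtractRecentErrorsBoundaryCheck.
func (c *errorCache) recent(n int) []string {
    c.mu.Lock()
    defer c.mu.Unlock()
    if n <= 0 || len(c.entries) == 0 {
        return nil
    }
    if n > len(c.entries) {
        n = len(c.entries)
    }
    out := make([]string, n)
    copy(out, c.entries[len(c.entries)-n:])
    return out
}
```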

@@ -14,7 +14,7 @@ import (
)

const (
    version = "5.2.0"
    version = "5.2.5"
    defaultWorkdir = "."
    defaultTimeout = 7200 // seconds
    codexLogLineLimit = 1000

@@ -426,10 +426,11 @@ ok-d`
    t.Fatalf("expected startup banner in stderr, got:\n%s", stderrOut)
}

// After parallel log isolation fix, each task has its own log file
expectedLines := map[string]struct{}{
    fmt.Sprintf("Task a: Log: %s", expectedLog): {},
    fmt.Sprintf("Task b: Log: %s", expectedLog): {},
    fmt.Sprintf("Task d: Log: %s", expectedLog): {},
    fmt.Sprintf("Task a: Log: %s", filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d-a.log", os.Getpid()))): {},
    fmt.Sprintf("Task b: Log: %s", filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d-b.log", os.Getpid()))): {},
    fmt.Sprintf("Task d: Log: %s", filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d-d.log", os.Getpid()))): {},
}

if len(taskLines) != len(expectedLines) {

@@ -41,6 +41,7 @@ func resetTestHooks() {
    closeLogger()
    executablePathFn = os.Executable
    runTaskFn = runCodexTask
    runCodexTaskFn = defaultRunCodexTaskFn
    exitFn = os.Exit
}

@@ -250,6 +251,10 @@ func (d *drainBlockingCmd) SetStderr(w io.Writer) {
    d.inner.SetStderr(w)
}

func (d *drainBlockingCmd) SetDir(dir string) {
    d.inner.SetDir(dir)
}

func (d *drainBlockingCmd) Process() processHandle {
    return d.inner.Process()
}

@@ -504,6 +509,8 @@ func (f *fakeCmd) SetStderr(w io.Writer) {
    f.stderr = w
}

func (f *fakeCmd) SetDir(string) {}

func (f *fakeCmd) Process() processHandle {
    if f == nil {
        return nil

@@ -1371,7 +1378,7 @@ func TestBackendBuildArgs_ClaudeBackend(t *testing.T) {
    backend := ClaudeBackend{}
    cfg := &Config{Mode: "new", WorkDir: defaultWorkdir}
    got := backend.BuildArgs(cfg, "todo")
    want := []string{"-p", "-C", defaultWorkdir, "--output-format", "stream-json", "--verbose", "todo"}
    want := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "--output-format", "stream-json", "--verbose", "todo"}
    if len(got) != len(want) {
        t.Fatalf("length mismatch")
    }

@@ -1392,7 +1399,7 @@ func TestClaudeBackendBuildArgs_OutputValidation(t *testing.T) {
    target := "ensure-flags"

    args := backend.BuildArgs(cfg, target)
    expectedPrefix := []string{"-p", "--output-format", "stream-json", "--verbose"}
    expectedPrefix := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "--output-format", "stream-json", "--verbose"}

    if len(args) != len(expectedPrefix)+1 {
        t.Fatalf("args length=%d, want %d", len(args), len(expectedPrefix)+1)

@@ -1411,7 +1418,7 @@ func TestBackendBuildArgs_GeminiBackend(t *testing.T) {
    backend := GeminiBackend{}
    cfg := &Config{Mode: "new"}
    got := backend.BuildArgs(cfg, "task")
    want := []string{"-o", "stream-json", "-y", "-C", defaultWorkdir, "-p", "task"}
    want := []string{"-o", "stream-json", "-y", "-p", "task"}
    if len(got) != len(want) {
        t.Fatalf("length mismatch")
    }

@@ -1777,13 +1784,13 @@ func TestRunLogFunctions(t *testing.T) {
    }

    output := string(data)
    if !strings.Contains(output, "INFO: info message") {
    if !strings.Contains(output, "info message") {
        t.Errorf("logInfo output missing, got: %s", output)
    }
    if !strings.Contains(output, "WARN: warn message") {
    if !strings.Contains(output, "warn message") {
        t.Errorf("logWarn output missing, got: %s", output)
    }
    if !strings.Contains(output, "ERROR: error message") {
    if !strings.Contains(output, "error message") {
        t.Errorf("logError output missing, got: %s", output)
    }
}

@@ -2684,7 +2691,7 @@ func TestVersionFlag(t *testing.T) {
        t.Errorf("exit = %d, want 0", code)
    }
})
want := "codeagent-wrapper version 5.2.0\n"
want := "codeagent-wrapper version 5.2.5\n"
if output != want {
    t.Fatalf("output = %q, want %q", output, want)
}

@@ -2698,7 +2705,7 @@ func TestVersionShortFlag(t *testing.T) {
        t.Errorf("exit = %d, want 0", code)
    }
})
want := "codeagent-wrapper version 5.2.0\n"
want := "codeagent-wrapper version 5.2.5\n"
if output != want {
    t.Fatalf("output = %q, want %q", output, want)
}

@@ -2712,7 +2719,7 @@ func TestVersionLegacyAlias(t *testing.T) {
        t.Errorf("exit = %d, want 0", code)
    }
})
want := "codex-wrapper version 5.2.0\n"
want := "codex-wrapper version 5.2.5\n"
if output != want {
    t.Fatalf("output = %q, want %q", output, want)
}

@@ -3293,7 +3300,7 @@ func TestRun_PipedTaskReadError(t *testing.T) {
    if exitCode != 1 {
        t.Fatalf("exit=%d, want 1", exitCode)
    }
    if !strings.Contains(logOutput, "ERROR: Failed to read piped stdin: read stdin: pipe failure") {
    if !strings.Contains(logOutput, "Failed to read piped stdin: read stdin: pipe failure") {
        t.Fatalf("log missing piped read error, got %q", logOutput)
    }
    // Log file is always removed after completion (new behavior)
@@ -53,9 +53,22 @@ func parseJSONStreamWithLog(r io.Reader, warnFn func(string), infoFn func(string
    return parseJSONStreamInternal(r, warnFn, infoFn, nil)
}

const (
    jsonLineReaderSize   = 64 * 1024
    jsonLineMaxBytes     = 10 * 1024 * 1024
    jsonLinePreviewBytes = 256
)

type codexHeader struct {
    Type     string `json:"type"`
    ThreadID string `json:"thread_id,omitempty"`
    Item     *struct {
        Type string `json:"type"`
    } `json:"item,omitempty"`
}

func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(string), onMessage func()) (message, threadID string) {
    scanner := bufio.NewScanner(r)
    scanner.Buffer(make([]byte, 64*1024), 10*1024*1024)
    reader := bufio.NewReaderSize(r, jsonLineReaderSize)

    if warnFn == nil {
        warnFn = func(string) {}

@@ -78,79 +91,89 @@ func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(strin
        geminiBuffer strings.Builder
    )

    for scanner.Scan() {
        line := strings.TrimSpace(scanner.Text())
        if line == "" {
    for {
        line, tooLong, err := readLineWithLimit(reader, jsonLineMaxBytes, jsonLinePreviewBytes)
        if err != nil {
            if errors.Is(err, io.EOF) {
                break
            }
            warnFn("Read stdout error: " + err.Error())
            break
        }

        line = bytes.TrimSpace(line)
        if len(line) == 0 {
            continue
        }
        totalEvents++

        var raw map[string]json.RawMessage
        if err := json.Unmarshal([]byte(line), &raw); err != nil {
            warnFn(fmt.Sprintf("Failed to parse line: %s", truncate(line, 100)))
        if tooLong {
            warnFn(fmt.Sprintf("Skipped overlong JSON line (> %d bytes): %s", jsonLineMaxBytes, truncateBytes(line, 100)))
            continue
        }

        hasItemType := false
        if rawItem, ok := raw["item"]; ok {
            var itemMap map[string]json.RawMessage
            if err := json.Unmarshal(rawItem, &itemMap); err == nil {
                if _, ok := itemMap["type"]; ok {
                    hasItemType = true
        var codex codexHeader
        if err := json.Unmarshal(line, &codex); err == nil {
            isCodex := codex.ThreadID != "" || (codex.Item != nil && codex.Item.Type != "")
            if isCodex {
                var details []string
                if codex.ThreadID != "" {
                    details = append(details, fmt.Sprintf("thread_id=%s", codex.ThreadID))
                }
                if codex.Item != nil && codex.Item.Type != "" {
                    details = append(details, fmt.Sprintf("item_type=%s", codex.Item.Type))
                }
                if len(details) > 0 {
                    infoFn(fmt.Sprintf("Parsed event #%d type=%s (%s)", totalEvents, codex.Type, strings.Join(details, ", ")))
                } else {
                    infoFn(fmt.Sprintf("Parsed event #%d type=%s", totalEvents, codex.Type))
                }

                switch codex.Type {
                case "thread.started":
                    threadID = codex.ThreadID
                    infoFn(fmt.Sprintf("thread.started event thread_id=%s", threadID))
                case "item.completed":
                    itemType := ""
                    if codex.Item != nil {
                        itemType = codex.Item.Type
                    }

                    if itemType == "agent_message" {
                        var event JSONEvent
                        if err := json.Unmarshal(line, &event); err != nil {
                            warnFn(fmt.Sprintf("Failed to parse Codex event: %s", truncateBytes(line, 100)))
                            continue
                        }

                        normalized := ""
                        if event.Item != nil {
                            normalized = normalizeText(event.Item.Text)
                        }
                        infoFn(fmt.Sprintf("item.completed event item_type=%s message_len=%d", itemType, len(normalized)))
                        if normalized != "" {
                            codexMessage = normalized
                            notifyMessage()
                        }
                    } else {
                        infoFn(fmt.Sprintf("item.completed event item_type=%s", itemType))
                    }
                }
                continue
            }
        }

        isCodex := hasItemType
        if !isCodex {
            if _, ok := raw["thread_id"]; ok {
                isCodex = true
            }
        var raw map[string]json.RawMessage
        if err := json.Unmarshal(line, &raw); err != nil {
            warnFn(fmt.Sprintf("Failed to parse line: %s", truncateBytes(line, 100)))
            continue
        }

        switch {
        case isCodex:
            var event JSONEvent
            if err := json.Unmarshal([]byte(line), &event); err != nil {
                warnFn(fmt.Sprintf("Failed to parse Codex event: %s", truncate(line, 100)))
                continue
            }

            var details []string
            if event.ThreadID != "" {
                details = append(details, fmt.Sprintf("thread_id=%s", event.ThreadID))
            }
            if event.Item != nil && event.Item.Type != "" {
                details = append(details, fmt.Sprintf("item_type=%s", event.Item.Type))
            }
            if len(details) > 0 {
                infoFn(fmt.Sprintf("Parsed event #%d type=%s (%s)", totalEvents, event.Type, strings.Join(details, ", ")))
            } else {
                infoFn(fmt.Sprintf("Parsed event #%d type=%s", totalEvents, event.Type))
            }

            switch event.Type {
            case "thread.started":
                threadID = event.ThreadID
                infoFn(fmt.Sprintf("thread.started event thread_id=%s", threadID))
            case "item.completed":
                var itemType string
                var normalized string
                if event.Item != nil {
                    itemType = event.Item.Type
                    normalized = normalizeText(event.Item.Text)
                }
                infoFn(fmt.Sprintf("item.completed event item_type=%s message_len=%d", itemType, len(normalized)))
                if event.Item != nil && event.Item.Type == "agent_message" && normalized != "" {
                    codexMessage = normalized
                    notifyMessage()
                }
            }

        case hasKey(raw, "subtype") || hasKey(raw, "result"):
            var event ClaudeEvent
            if err := json.Unmarshal([]byte(line), &event); err != nil {
                warnFn(fmt.Sprintf("Failed to parse Claude event: %s", truncate(line, 100)))
            if err := json.Unmarshal(line, &event); err != nil {
                warnFn(fmt.Sprintf("Failed to parse Claude event: %s", truncateBytes(line, 100)))
                continue
            }

@@ -167,8 +190,8 @@ func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(strin

        case hasKey(raw, "role") || hasKey(raw, "delta"):
            var event GeminiEvent
            if err := json.Unmarshal([]byte(line), &event); err != nil {
                warnFn(fmt.Sprintf("Failed to parse Gemini event: %s", truncate(line, 100)))
            if err := json.Unmarshal(line, &event); err != nil {
                warnFn(fmt.Sprintf("Failed to parse Gemini event: %s", truncateBytes(line, 100)))
                continue
            }

@@ -184,14 +207,10 @@ func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(strin
            infoFn(fmt.Sprintf("Parsed Gemini event #%d type=%s role=%s delta=%t status=%s content_len=%d", totalEvents, event.Type, event.Role, event.Delta, event.Status, len(event.Content)))

        default:
            warnFn(fmt.Sprintf("Unknown event format: %s", truncate(line, 100)))
            warnFn(fmt.Sprintf("Unknown event format: %s", truncateBytes(line, 100)))
        }
    }

    if err := scanner.Err(); err != nil && !errors.Is(err, io.EOF) {
        warnFn("Read stdout error: " + err.Error())
    }

    switch {
    case geminiBuffer.Len() > 0:
        message = geminiBuffer.String()

@@ -236,6 +255,79 @@ func discardInvalidJSON(decoder *json.Decoder, reader *bufio.Reader) (*bufio.Rea
    return bufio.NewReader(io.MultiReader(bytes.NewReader(remaining), reader)), err
}

func readLineWithLimit(r *bufio.Reader, maxBytes int, previewBytes int) (line []byte, tooLong bool, err error) {
    if r == nil {
        return nil, false, errors.New("reader is nil")
    }
    if maxBytes <= 0 {
        return nil, false, errors.New("maxBytes must be > 0")
    }
    if previewBytes < 0 {
        previewBytes = 0
    }

    part, isPrefix, err := r.ReadLine()
    if err != nil {
        return nil, false, err
    }

    if !isPrefix {
        if len(part) > maxBytes {
            return part[:min(len(part), previewBytes)], true, nil
        }
        return part, false, nil
    }

    preview := make([]byte, 0, min(previewBytes, len(part)))
    if previewBytes > 0 {
        preview = append(preview, part[:min(previewBytes, len(part))]...)
    }

    buf := make([]byte, 0, min(maxBytes, len(part)*2))
    total := 0
    if len(part) > maxBytes {
        tooLong = true
    } else {
        buf = append(buf, part...)
        total = len(part)
    }

    for isPrefix {
        part, isPrefix, err = r.ReadLine()
        if err != nil {
            return nil, tooLong, err
        }

        if previewBytes > 0 && len(preview) < previewBytes {
            preview = append(preview, part[:min(previewBytes-len(preview), len(part))]...)
        }

        if !tooLong {
            if total+len(part) > maxBytes {
                tooLong = true
                continue
            }
            buf = append(buf, part...)
            total += len(part)
        }
    }

    if tooLong {
        return preview, true, nil
    }
    return buf, false, nil
}
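
To make the contract concrete: on overflow the caller gets back only a short preview of the line plus tooLong=true, and the reader is left positioned at the start of the next line. A usage sketch follows (assumed to compile in the same package as readLineWithLimit; the byte sizes are arbitrary, chosen only to force the overflow path):

```go
package main

import (
    "bufio"
    "fmt"
    "strings"
)

// Sketch only: assumes readLineWithLimit from this diff is in scope.
func main() {
    // A 32-byte line against an 8-byte limit comes back as a 4-byte
    // preview with tooLong=true; reading then resumes at the next line.
    r := bufio.NewReaderSize(strings.NewReader(strings.Repeat("x", 32)+"\nok\n"), 16)

    line, tooLong, err := readLineWithLimit(r, 8, 4)
    fmt.Printf("%q tooLong=%v err=%v\n", line, tooLong, err) // "xxxx" tooLong=true err=<nil>

    line, tooLong, _ = readLineWithLimit(r, 8, 4)
    fmt.Printf("%q tooLong=%v\n", line, tooLong) // "ok" tooLong=false
}
```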

func truncateBytes(b []byte, maxLen int) string {
    if len(b) <= maxLen {
        return string(b)
    }
    if maxLen < 0 {
        return ""
    }
    return string(b[:maxLen]) + "..."
}

func normalizeText(text interface{}) string {
    switch v := text.(type) {
    case string:
31 codeagent-wrapper/parser_token_too_long_test.go (new file)
@@ -0,0 +1,31 @@
package main

import (
    "strings"
    "testing"
)

func TestParseJSONStream_SkipsOverlongLineAndContinues(t *testing.T) {
    // Exceed the 10MB bufio.Scanner limit in parseJSONStreamInternal.
    tooLong := strings.Repeat("a", 11*1024*1024)

    input := strings.Join([]string{
        `{"type":"item.completed","item":{"type":"other_type","text":"` + tooLong + `"}}`,
        `{"type":"thread.started","thread_id":"t-1"}`,
        `{"type":"item.completed","item":{"type":"agent_message","text":"ok"}}`,
    }, "\n")

    var warns []string
    warnFn := func(msg string) { warns = append(warns, msg) }

    gotMessage, gotThreadID := parseJSONStreamInternal(strings.NewReader(input), warnFn, nil, nil)
    if gotMessage != "ok" {
        t.Fatalf("message=%q, want %q (warns=%v)", gotMessage, "ok", warns)
    }
    if gotThreadID != "t-1" {
        t.Fatalf("threadID=%q, want %q (warns=%v)", gotThreadID, "t-1", warns)
    }
    if len(warns) == 0 || !strings.Contains(warns[0], "Skipped overlong JSON line") {
        t.Fatalf("expected warning about overlong JSON line, got %v", warns)
    }
}
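
For orientation, the parser's backend auto-detection in parseJSONStreamInternal keys purely off which JSON fields are present on each line. The payloads below are illustrative shapes chosen to trip each detection branch; they are not official schemas of the three CLIs:

```go
package main

import "fmt"

func main() {
    // Illustrative JSONL lines, one per detection branch in
    // parseJSONStreamInternal (the field names match the dispatch
    // shown in this diff; the payload values are made up):
    lines := []struct{ backend, line string }{
        {"codex", `{"type":"thread.started","thread_id":"t-1"}`},        // thread_id or item.type present
        {"claude", `{"type":"result","subtype":"success"}`},             // subtype or result key present
        {"gemini", `{"role":"assistant","delta":false,"content":"hi"}`}, // role or delta key present
    }
    for _, l := range lines {
        fmt.Printf("%-6s <- %s\n", l.backend, l.line)
    }
}
```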

@@ -78,6 +78,7 @@ type logWriter struct {
    prefix  string
    maxLen  int
    buf     bytes.Buffer
    dropped bool
}

func newLogWriter(prefix string, maxLen int) *logWriter {

@@ -94,12 +95,12 @@ func (lw *logWriter) Write(p []byte) (int, error) {
    total := len(p)
    for len(p) > 0 {
        if idx := bytes.IndexByte(p, '\n'); idx >= 0 {
            lw.buf.Write(p[:idx])
            lw.writeLimited(p[:idx])
            lw.logLine(true)
            p = p[idx+1:]
            continue
        }
        lw.buf.Write(p)
        lw.writeLimited(p)
        break
    }
    return total, nil

@@ -117,21 +118,53 @@ func (lw *logWriter) logLine(force bool) {
        return
    }
    line := lw.buf.String()
    dropped := lw.dropped
    lw.dropped = false
    lw.buf.Reset()
    if line == "" && !force {
        return
    }
    if lw.maxLen > 0 && len(line) > lw.maxLen {
        cutoff := lw.maxLen
        if cutoff > 3 {
            line = line[:cutoff-3] + "..."
        } else {
            line = line[:cutoff]
    if lw.maxLen > 0 {
        if dropped {
            if lw.maxLen > 3 {
                line = line[:min(len(line), lw.maxLen-3)] + "..."
            } else {
                line = line[:min(len(line), lw.maxLen)]
            }
        } else if len(line) > lw.maxLen {
            cutoff := lw.maxLen
            if cutoff > 3 {
                line = line[:cutoff-3] + "..."
            } else {
                line = line[:cutoff]
            }
        }
    }
    logInfo(lw.prefix + line)
}

func (lw *logWriter) writeLimited(p []byte) {
    if lw == nil || len(p) == 0 {
        return
    }
    if lw.maxLen <= 0 {
        lw.buf.Write(p)
        return
    }

    remaining := lw.maxLen - lw.buf.Len()
    if remaining <= 0 {
        lw.dropped = true
        return
    }
    if len(p) <= remaining {
        lw.buf.Write(p)
        return
    }
    lw.buf.Write(p[:remaining])
    lw.dropped = true
}

type tailBuffer struct {
    limit int
    data  []byte
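
The point of writeLimited above is that the cap is now enforced while buffering rather than only at flush time, so a subprocess that emits a multi-megabyte line no longer bloats lw.buf before logLine truncates it. A test-style sketch of that property (same package as logWriter; the 16-byte cap is arbitrary):

```go
package main

import (
    "strings"
    "testing"
)

// Sketch only: assumes the logWriter type from this diff is in scope.
func TestLogWriterCapsBufferWhileStreaming(t *testing.T) {
    lw := newLogWriter("Task a: ", 16)

    // Stream 1 MiB without a newline; writeLimited keeps at most
    // maxLen bytes buffered and flags the overflow as dropped.
    if _, err := lw.Write([]byte(strings.Repeat("x", 1<<20))); err != nil {
        t.Fatalf("Write() error = %v", err)
    }
    if lw.buf.Len() > 16 {
        t.Fatalf("buffered %d bytes, want <= 16", lw.buf.Len())
    }
    if !lw.dropped {
        t.Fatal("expected overflow to be flagged as dropped")
    }
}
```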
30 config.json
@@ -20,9 +20,33 @@
    },
    {
        "type": "copy_file",
        "source": "skills/codex/SKILL.md",
        "target": "skills/codex/SKILL.md",
        "description": "Install codex skill"
        "source": "skills/codeagent/SKILL.md",
        "target": "skills/codeagent/SKILL.md",
        "description": "Install codeagent skill"
    },
    {
        "type": "copy_file",
        "source": "skills/product-requirements/SKILL.md",
        "target": "skills/product-requirements/SKILL.md",
        "description": "Install product-requirements skill"
    },
    {
        "type": "copy_file",
        "source": "skills/prototype-prompt-generator/SKILL.md",
        "target": "skills/prototype-prompt-generator/SKILL.md",
        "description": "Install prototype-prompt-generator skill"
    },
    {
        "type": "copy_file",
        "source": "skills/prototype-prompt-generator/references/prompt-structure.md",
        "target": "skills/prototype-prompt-generator/references/prompt-structure.md",
        "description": "Install prototype-prompt-generator prompt structure reference"
    },
    {
        "type": "copy_file",
        "source": "skills/prototype-prompt-generator/references/design-systems.md",
        "target": "skills/prototype-prompt-generator/references/design-systems.md",
        "description": "Install prototype-prompt-generator design systems reference"
    },
    {
        "type": "run_command",
@@ -1,6 +1,6 @@
---
name: dev-plan-generator
description: Use this agent when you need to generate a structured development plan document (`dev-plan.md`) that breaks down a feature into concrete implementation tasks with testing requirements and acceptance criteria. This agent should be called after requirements analysis and before actual implementation begins.\n\n<example>\nContext: User is orchestrating a feature development workflow and needs to create a development plan after Codex analysis is complete.\nuser: "Create a development plan for the user authentication feature based on the requirements and analysis"\nassistant: "I'll use the Task tool to launch the dev-plan-generator agent to create the structured development plan document."\n<commentary>\nThe user needs a dev-plan.md document generated from requirements and analysis. Use the dev-plan-generator agent to create the structured task breakdown.\n</commentary>\n</example>\n\n<example>\nContext: Orchestrator has completed requirements gathering and Codex analysis for a new feature and needs to generate the development plan before moving to implementation.\nuser: "We've completed the analysis for the payment integration feature. Generate the development plan."\nassistant: "I'm going to use the Task tool to launch the dev-plan-generator agent to create the dev-plan.md document with task breakdown and testing requirements."\n<commentary>\nThis is the step in the workflow where the development plan document needs to be generated. Use the dev-plan-generator agent to create the structured plan.\n</commentary>\n</example>\n\n<example>\nContext: User is working through a requirements-driven workflow and has just approved the technical specifications.\nuser: "The specs look good. Let's move forward with creating the implementation plan."\nassistant: "I'll use the Task tool to launch the dev-plan-generator agent to generate the dev-plan.md document with the task breakdown."\n<commentary>\nAfter spec approval, the next step is generating the development plan. Use the dev-plan-generator agent to create the structured document.\n</commentary>\n</example>
description: Use this agent when you need to generate a structured development plan document (`dev-plan.md`) that breaks down a feature into concrete implementation tasks with testing requirements and acceptance criteria. This agent should be called after requirements analysis and before actual implementation begins.\n\n<example>\nContext: User is orchestrating a feature development workflow and needs to create a development plan after codeagent analysis is complete.\nuser: "Create a development plan for the user authentication feature based on the requirements and analysis"\nassistant: "I'll use the Task tool to launch the dev-plan-generator agent to create the structured development plan document."\n<commentary>\nThe user needs a dev-plan.md document generated from requirements and analysis. Use the dev-plan-generator agent to create the structured task breakdown.\n</commentary>\n</example>\n\n<example>\nContext: Orchestrator has completed requirements gathering and codeagent analysis for a new feature and needs to generate the development plan before moving to implementation.\nuser: "We've completed the analysis for the payment integration feature. Generate the development plan."\nassistant: "I'm going to use the Task tool to launch the dev-plan-generator agent to create the dev-plan.md document with task breakdown and testing requirements."\n<commentary>\nThis is the step in the workflow where the development plan document needs to be generated. Use the dev-plan-generator agent to create the structured plan.\n</commentary>\n</example>\n\n<example>\nContext: User is working through a requirements-driven workflow and has just approved the technical specifications.\nuser: "The specs look good. Let's move forward with creating the implementation plan."\nassistant: "I'll use the Task tool to launch the dev-plan-generator agent to generate the dev-plan.md document with the task breakdown."\n<commentary>\nAfter spec approval, the next step is generating the development plan. Use the dev-plan-generator agent to create the structured document.\n</commentary>\n</example>
tools: Glob, Grep, Read, Edit, Write, TodoWrite
model: sonnet
color: green

57 install.py
@@ -357,26 +357,47 @@ def op_run_command(op: Dict[str, Any], ctx: Dict[str, Any]) -> None:
        stderr_lines: List[str] = []

        # Read stdout and stderr in real-time
        import selectors
        sel = selectors.DefaultSelector()
        sel.register(process.stdout, selectors.EVENT_READ)  # type: ignore[arg-type]
        sel.register(process.stderr, selectors.EVENT_READ)  # type: ignore[arg-type]
        if sys.platform == "win32":
            # On Windows, use threads instead of selectors (pipes aren't selectable)
            import threading

        while process.poll() is None or sel.get_map():
            for key, _ in sel.select(timeout=0.1):
                line = key.fileobj.readline()  # type: ignore[union-attr]
                if not line:
                    sel.unregister(key.fileobj)
                    continue
                if key.fileobj == process.stdout:
                    stdout_lines.append(line)
                    print(line, end="", flush=True)
                else:
                    stderr_lines.append(line)
                    print(line, end="", file=sys.stderr, flush=True)
            def read_output(pipe, lines, file=None):
                for line in iter(pipe.readline, ''):
                    lines.append(line)
                    print(line, end="", flush=True, file=file)
                pipe.close()

        sel.close()
        process.wait()
            stdout_thread = threading.Thread(target=read_output, args=(process.stdout, stdout_lines))
            stderr_thread = threading.Thread(target=read_output, args=(process.stderr, stderr_lines, sys.stderr))

            stdout_thread.start()
            stderr_thread.start()

            stdout_thread.join()
            stderr_thread.join()
            process.wait()
        else:
            # On Unix, use selectors for more efficient I/O
            import selectors
            sel = selectors.DefaultSelector()
            sel.register(process.stdout, selectors.EVENT_READ)  # type: ignore[arg-type]
            sel.register(process.stderr, selectors.EVENT_READ)  # type: ignore[arg-type]

            while process.poll() is None or sel.get_map():
                for key, _ in sel.select(timeout=0.1):
                    line = key.fileobj.readline()  # type: ignore[union-attr]
                    if not line:
                        sel.unregister(key.fileobj)
                        continue
                    if key.fileobj == process.stdout:
                        stdout_lines.append(line)
                        print(line, end="", flush=True)
                    else:
                        stderr_lines.append(line)
                        print(line, end="", file=sys.stderr, flush=True)

            sel.close()
            process.wait()

        write_log(
            {
@@ -39,11 +39,35 @@ codeagent-wrapper --backend gemini "simple task"

## Backends

| Backend | Command | Description |
|---------|---------|-------------|
| codex | `--backend codex` | OpenAI Codex (default) |
| claude | `--backend claude` | Anthropic Claude |
| gemini | `--backend gemini` | Google Gemini |
| Backend | Command | Description | Best For |
|---------|---------|-------------|----------|
| codex | `--backend codex` | OpenAI Codex (default) | Code analysis, complex development |
| claude | `--backend claude` | Anthropic Claude | Simple tasks, documentation, prompts |
| gemini | `--backend gemini` | Google Gemini | UI/UX prototyping |

### Backend Selection Guide

**Codex** (default):
- Deep code understanding and complex logic implementation
- Large-scale refactoring with precise dependency tracking
- Algorithm optimization and performance tuning
- Example: "Analyze the call graph of @src/core and refactor the module dependency structure"

**Claude**:
- Quick feature implementation with clear requirements
- Technical documentation, API specs, README generation
- Professional prompt engineering (e.g., product requirements, design specs)
- Example: "Generate a comprehensive README for @package.json with installation, usage, and API docs"

**Gemini**:
- UI component scaffolding and layout prototyping
- Design system implementation with style consistency
- Interactive element generation with accessibility support
- Example: "Create a responsive dashboard layout with sidebar navigation and data visualization cards"

**Backend Switching**:
- Start with Codex for analysis, switch to Claude for documentation, then Gemini for UI implementation
- Use per-task backend selection in parallel mode to optimize for each task's strengths (see the sketch below)
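
As a concrete sequence (the task strings are illustrative placeholders; only the `--backend` flag and binary name come from the docs above):

```bash
# Stage 1: deep analysis with the default backend
codeagent-wrapper --backend codex "Analyze the call graph of @src/core and propose a refactor plan"

# Stage 2: turn the approved plan into documentation
codeagent-wrapper --backend claude "Write the refactor plan up as docs/refactor-notes.md"

# Stage 3: prototype the UI described in the plan
codeagent-wrapper --backend gemini "Scaffold the settings page layout from docs/refactor-notes.md"
```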

## Parameters