Mirror of https://github.com/cexll/myclaude.git
Synced 2026-02-05 02:30:26 +08:00

Compare commits (12 commits)
| Author | SHA1 | Date |
|---|---|---|
| | a30f434b5d | |
| | 41f4e21268 | |
| | a67aa00c9a | |
| | d61a0f9ffd | |
| | fe5508228f | |
| | 50093036c3 | |
| | 0cae0ede08 | |
| | 4613b57240 | |
| | 7535a7b101 | |
| | f6bb97eba9 | |
| | 78a411462b | |
| | 9471a981e3 | |
.gitattributes (vendored, new file, 22 lines)
@@ -0,0 +1,22 @@
+# Ensure shell scripts always use LF line endings on all platforms
+*.sh text eol=lf
+
+# Ensure Python files use LF line endings
+*.py text eol=lf
+
+# Auto-detect text files and normalize line endings to LF
+* text=auto eol=lf
+
+# Explicitly declare files that should always be treated as binary
+*.exe binary
+*.png binary
+*.jpg binary
+*.jpeg binary
+*.gif binary
+*.ico binary
+*.mov binary
+*.mp4 binary
+*.mp3 binary
+*.zip binary
+*.gz binary
+*.tar binary
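The .gitattributes rules above are declarative; as a rough illustration of how they classify paths, the same logic can be sketched in Go (the `eolPolicy` helper is hypothetical, not part of this repo):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// binaryExts mirrors the extensions declared binary in the .gitattributes above.
var binaryExts = map[string]bool{
	".exe": true, ".png": true, ".jpg": true, ".jpeg": true, ".gif": true,
	".ico": true, ".mov": true, ".mp4": true, ".mp3": true, ".zip": true,
	".gz": true, ".tar": true,
}

// eolPolicy approximates how Git treats a path under the rules above:
// binary files are left untouched, everything else is normalized to LF
// (*.sh and *.py are also covered by the catch-all `* text=auto eol=lf`).
func eolPolicy(path string) string {
	if binaryExts[filepath.Ext(path)] {
		return "binary"
	}
	return "text eol=lf"
}

func main() {
	fmt.Println(eolPolicy("install.sh")) // text eol=lf
	fmt.Println(eolPolicy("logo.png"))   // binary
}
```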
CHANGELOG.md (833 changed lines)
@@ -4,206 +4,709 @@ All notable changes to this project will be documented in this file.

## [5.2.4] - 2025-12-16

### 🐛 Bug Fixes

- *(executor)* Isolate log files per task in parallel mode
- *(codeagent)* Prevent infinite recursive calls to the Claude backend

### ⚙️ Miscellaneous Tasks

- Bump version to 5.2.4

## [5.2.3] - 2025-12-15

- integrate git-cliff for automated changelog generation
- bump version to 5.2.4

### 🐛 Bug Fixes

- *(parser)* Fix bufio.Scanner "token too long" error (#64)
- Prevent infinite recursive calls to the Claude backend
- isolate log files per task in parallel mode

### 💼 Other

- Merge pull request #70 from cexll/fix/prevent-codeagent-infinite-recursion
- Merge pull request #69 from cexll/myclaude-master-20251215-073053-338465000
- update CHANGELOG.md
- Merge pull request #65 from cexll/fix/issue-64-buffer-overflow

## [5.2.3] - 2025-12-15

### 🐛 Bug Fixes

- Fix bufio.Scanner "token too long" error ([#64](https://github.com/cexll/myclaude/issues/64))

### 💼 Other

- change version

### 🧪 Testing

- Sync the version number in tests to 5.2.3

## [5.2.2] - 2025-12-13

### ⚙️ Miscellaneous Tasks

- Bump version and clean up documentation

### 🐛 Bug Fixes

- fix codeagent claude backend not running in auto mode
- fix install.py dev failure

### 🧪 Testing

- Fix tests for ClaudeBackend default --dangerously-skip-permissions

## [5.2.1] - 2025-12-13

### ⚙️ Miscellaneous Tasks

- *(v5.2.2)* Bump version and clean up documentation

## [5.2.0] - 2025-12-13

### 🚀 Features

- *(dev-workflow)* Replace Codex with codeagent and add UI auto-detection
- *(codeagent-wrapper)* Full multi-backend support with security hardening
- *(install)* Add terminal log output and verbose mode
- *(v5.2.0)* Improve release notes and installation scripts
- *(v5.2.0)* Complete skills system integration and config cleanup

### 🐛 Bug Fixes

- *(merge)* Fix build and test issues after the master merge
- *(parallel)* Fix duplicate printing of the startup banner in parallel execution
- *(ci)* Remove the .claude config file validation step
- *(codeagent-wrapper)* Refactor signal handling to avoid duplicate nil checks
- *(codeagent-wrapper)* Fix permission-flag logic and version-number tests
- *(install)* Stream op_run_command output in real time
- *(codeagent-wrapper)* Show recent error messages on abnormal exit
- *(codeagent-wrapper)* Remove binary artifacts and improve error messages
- *(codeagent-wrapper)* Use -r flag for claude backend resume
- *(install)* Clarify module list shows default state not enabled
- *(codeagent-wrapper)* Use -r flag for gemini backend resume
- *(codeagent-wrapper)* Add worker limit cap and remove legacy alias
- *(codeagent-wrapper)* Fix race condition in stdout parsing

### 🚜 Refactor

- *(pr-53)* Adjust file naming and skill definitions

### 📚 Documentation

- *(changelog)* Remove GitHub workflow related content

### 🧪 Testing

- *(codeagent-wrapper)* Add ExtractRecentErrors unit tests

### ⚙️ Miscellaneous Tasks

- *(v5.2.0)* Update CHANGELOG and remove deprecated test files

## [5.1.4] - 2025-12-09

### 🐛 Bug Fixes

- *(parallel)* Return the log file path as soon as a task starts to support real-time debugging

## [5.1.3] - 2025-12-08

### 🐛 Bug Fixes

- *(test)* Resolve CI timing race in TestFakeCmdInfra

## [5.1.2] - 2025-12-08

### 🐛 Bug Fixes

- Fix channel synchronization race conditions and deadlocks

## [5.1.1] - 2025-12-08

### 🐛 Bug Fixes

- *(test)* Resolve data race on forceKillDelay with atomic operations
- Harden log cleanup for safety and reliability
- fix codeagent claude and gemini root dir

### 💼 Other

- Resolve signal handling conflict preserving testability and Windows support
- update readme

## [5.2.0] - 2025-12-13

### ⚙️ Miscellaneous Tasks

- Update CHANGELOG and remove deprecated test files

### 🐛 Bug Fixes

- fix race condition in stdout parsing
- add worker limit cap and remove legacy alias
- use -r flag for gemini backend resume
- clarify module list shows default state not enabled
- use -r flag for claude backend resume
- remove binary artifacts and improve error messages
- show recent error messages on abnormal exit
- stream op_run_command output in real time
- fix permission-flag logic and version-number tests
- refactor signal handling to avoid duplicate nil checks
- remove the .claude config file validation step
- fix duplicate printing of the startup banner in parallel execution
- fix build and test issues after the master merge

### 💼 Other

- Merge rc/5.2 into master: v5.2.0 release improvements
- Merge pull request #53 from cexll/rc/5.2
- remove docs
- remove docs
- add prototype prompt skill
- add prd skill
- update memory claude
- remove command gh flow
- update license
- Merge branch 'master' into rc/5.2
- Merge pull request #52 from cexll/fix/parallel-log-path-on-startup

### 📚 Documentation

- remove GitHub workflow related content

### 🚀 Features

- Complete skills system integration and config cleanup
- Improve release notes and installation scripts
- add terminal log output and verbose mode
- full multi-backend support with security hardening
- replace Codex with codeagent and add UI auto-detection

### 🚜 Refactor

- Adjust file naming and skill definitions

### 🧪 Testing

- Add ExtractRecentErrors unit tests

## [5.1.4] - 2025-12-09

### 🐛 Bug Fixes

- Return the log file path as soon as a task starts to support real-time debugging

## [5.1.3] - 2025-12-08

### 🐛 Bug Fixes

- resolve CI timing race in TestFakeCmdInfra

## [5.1.2] - 2025-12-08

### 🐛 Bug Fixes

- Fix channel synchronization race conditions and deadlocks

### 💼 Other

- Merge pull request #51 from cexll/fix/channel-sync-race-conditions
- change codex-wrapper version

## [5.1.1] - 2025-12-08

### 🐛 Bug Fixes

- Harden log cleanup for safety and reliability
- resolve data race on forceKillDelay with atomic operations

### 💼 Other

- Merge pull request #49 from cexll/freespace8/master
- resolve signal handling conflict preserving testability and Windows support

### 🧪 Testing

- Raise test coverage to 89.3%

## [5.1.0] - 2025-12-07

### 🚀 Features

- Implement enterprise workflow with multi-backend support
- *(cleanup)* Add startup log cleanup and --cleanup flag support

## [5.0.0] - 2025-12-05

### 🚀 Features

- Implement modular installation system

### 🐛 Bug Fixes

- *(codex-wrapper)* Defer startup log until args parsed

### 🚜 Refactor

- Remove deprecated plugin modules

### 📚 Documentation

- Rewrite documentation for v5.0 modular architecture

### ⚙️ Miscellaneous Tasks

- Clarify unit-test coverage levels in requirement questions

## [4.8.2] - 2025-12-02

### 🐛 Bug Fixes

- *(codex-wrapper)* Capture and include stderr in error messages
- Correct Go version in go.mod from 1.25.3 to 1.21
- Make forceKillDelay testable to prevent signal test timeout
- Skip signal test in CI environment

## [4.8.1] - 2025-12-01

### 🐛 Bug Fixes

- *(codex-wrapper)* Improve --parallel parameter validation and docs

### 🎨 Styling

- *(codex-skill)* Replace emoji with text labels

## [4.7.3] - 2025-11-29

### 🚀 Features

- Add async logging to temp file with lifecycle management
- Add parallel execution support to codex-wrapper
- Add session resume support and improve output format

### 🐛 Bug Fixes

- *(logger)* Preserve log files for debugging after exit and improve log output

### 📚 Documentation

- Improve codex skill parameter best practices

## [4.7.2] - 2025-11-28

### 🐛 Bug Fixes

- *(main)* Improve buffer size and streamline message extraction

### 🧪 Testing

- *(ParseJSONStream)* Add tests for very long single-line text and non-string text

## [4.7] - 2025-11-27

### 🐛 Bug Fixes

- Update repository URLs to cexll/myclaude

## [4.4] - 2025-11-22

### 🚀 Features

- Support configuring skills models via environment variables

## [4.1] - 2025-11-04

### 📚 Documentation

- Add the /enhance-prompt command and update all README documents

## [3.1] - 2025-09-17

### 💼 Other

- Sync READMEs with actual commands/agents; remove nonexistent commands; enhance requirements-pilot with testing decision gate and options.
- Merge pull request #45 from Michaelxwb/master
- Update the Windows installation instructions
- Update the packaging script
- Support installation on Windows
- Merge pull request #1 from Michaelxwb/feature-win
- Support Windows

### 🚀 Features

- Add startup log cleanup and --cleanup flag support
- implement enterprise workflow with multi-backend support

## [5.0.0] - 2025-12-05

### ⚙️ Miscellaneous Tasks

- clarify unit-test coverage levels in requirement questions

### 🐛 Bug Fixes

- defer startup log until args parsed

### 💼 Other

- Merge branch 'master' of github.com:cexll/myclaude
- Merge pull request #43 from gurdasnijor/smithery/add-badge
- Add Smithery badge
- Merge pull request #42 from freespace8/master

### 📚 Documentation

- rewrite documentation for v5.0 modular architecture

### 🚀 Features

- feat install.py
- implement modular installation system

### 🚜 Refactor

- remove deprecated plugin modules

## [4.8.2] - 2025-12-02

### 🐛 Bug Fixes

- skip signal test in CI environment
- make forceKillDelay testable to prevent signal test timeout
- correct Go version in go.mod from 1.25.3 to 1.21
- fix codex wrapper async log
- capture and include stderr in error messages

### 💼 Other

- Merge pull request #41 from cexll/fix-async-log
- remove test case 90
- optimize codex-wrapper
- Merge branch 'master' into fix-async-log

## [4.8.1] - 2025-12-01

### 🎨 Styling

- replace emoji with text labels

### 🐛 Bug Fixes

- improve --parallel parameter validation and docs

### 💼 Other

- remove codex-wrapper bin

## [4.8.0] - 2025-11-30

### 💼 Other

- update codex skill dependencies

## [4.7.3] - 2025-11-29

### 🐛 Bug Fixes

- Preserve log files for debugging after exit and improve log output

### 💼 Other

- Merge pull request #34 from cexll/cce-worktree-master-20251129-111802-997076000
- update CLAUDE.md and codex skill

### 📚 Documentation

- improve codex skill parameter best practices

### 🚀 Features

- add session resume support and improve output format
- add parallel execution support to codex-wrapper
- add async logging to temp file with lifecycle management

## [4.7.2] - 2025-11-28

### 🐛 Bug Fixes

- improve buffer size and streamline message extraction

### 💼 Other

- Merge pull request #32 from freespace8/master

### 🧪 Testing

- Add tests for very long single-line text and non-string text

## [4.7.1] - 2025-11-27

### 💼 Other

- optimize dev pipeline
- Merge feat/codex-wrapper: fix repository URLs

## [4.7] - 2025-11-27

### 🐛 Bug Fixes

- update repository URLs to cexll/myclaude

## [4.7-alpha1] - 2025-11-27

### 🐛 Bug Fixes

- fix marketplace schema validation error in dev-workflow plugin

### 💼 Other

- Merge pull request #29 from cexll/feat/codex-wrapper
- Add codex-wrapper Go implementation
- update readme
- update readme

## [4.6] - 2025-11-25

### 💼 Other

- update dev workflow
- update dev workflow

## [4.5] - 2025-11-25

### 🐛 Bug Fixes

- fix codex skill EOF handling

### 💼 Other

- update dev workflow plugin
- update readme

## [4.4] - 2025-11-22

### 🐛 Bug Fixes

- fix codex skill timeout and add more logging
- fix codex skill

### 💼 Other

- update gemini skills
- update dev workflow
- update codex skills model config
- Merge branch 'master' of github.com:cexll/myclaude
- Merge pull request #24 from cexll/swe-agent/23-1763544297

### 🚀 Features

- Support configuring skills models via environment variables

## [4.3] - 2025-11-19

### 🐛 Bug Fixes

- fix codex skills running

### 💼 Other

- update skills plugin
- update gemini
- update doc
- Add Gemini CLI integration skill

### 🚀 Features

- feat simple dev workflow

## [4.2.2] - 2025-11-15

### 💼 Other

- update codex skills

## [4.2.1] - 2025-11-14

### 💼 Other

- Merge pull request #21 from Tshoiasc/master
- Merge branch 'master' into master
- Change default model to gpt-5.1-codex
- Enhance codex.py to auto-detect long inputs and switch to stdin mode, improving handling of shell argument issues. Updated build_codex_args to support stdin and added relevant logging for task length warnings.

## [4.2] - 2025-11-13

### 🐛 Bug Fixes

- fix codex.py WSL run error

### 💼 Other

- optimize codex skills
- Merge branch 'master' of github.com:cexll/myclaude
- Rename SKILLS.md to SKILL.md
- optimize codex skills

### 🚀 Features

- feat codex skills

## [4.1] - 2025-11-04

### 💼 Other

- update enhance-prompt.md response
- update readme

### 📚 Documentation

- Add the /enhance-prompt command and update all README documents

## [4.0] - 2025-10-22

### 🐛 Bug Fixes

- fix skills format

### 💼 Other

- Merge branch 'master' of github.com:cexll/myclaude
- Merge pull request #18 from cexll/swe-agent/17-1760969135
- update requirements clarity
- update .gitignore
- Fix #17: Update root marketplace.json to use skills array
- Fix #17: Convert requirements-clarity to correct plugin directory format
- Fix #17: Convert requirements-clarity to correct plugin directory format
- Convert requirements-clarity to plugin format with English prompts
- Translate requirements-clarity skill to English for plugin compatibility
- Add requirements-clarity Claude Skill
- Add requirements clarification command
- update

## [3.5] - 2025-10-20

### 💼 Other

- Merge pull request #15 from cexll/swe-agent/13-1760944712
- Fix #13: Clean up redundant README files
- Optimize README structure - Solution A (modular)
- Merge pull request #14 from cexll/swe-agent/12-1760944588
- Fix #12: Update Makefile install paths for new directory structure

## [3.4] - 2025-10-20

### 💼 Other

- Merge pull request #11 from cexll/swe-agent/10-1760752533
- Fix marketplace metadata references
- Fix plugin configuration: rename to marketplace.json and update repository URLs
- Fix #10: Restructure plugin directories to ensure proper command isolation

## [3.3] - 2025-10-15

### 💼 Other

- Update README-zh.md
- Update README.md
- Update marketplace.json
- Update Chinese README with v3.2 plugin system documentation
- Update README with v3.2 plugin system documentation

## [3.2] - 2025-10-10

### 💼 Other

- Add Claude Code plugin system support
- update readme
- Add Makefile for quick deployment and update READMEs

## [3.1] - 2025-09-17

### ◀️ Revert

- revert

### 🐛 Bug Fixes

- fixed bmad-orchestrator not found
- fix bmad

### 💼 Other

- update bmad review with codex support
- Optimize BMAD workflow and agent configuration
- update gpt5
- support bmad output-style
- update bmad user guide
- update bmad readme
- optimize requirements pilot
- add use gpt5 codex
- add bmad pilot
- sync READMEs with actual commands/agents; remove nonexistent commands; enhance requirements-pilot with testing decision gate and options.
- Update Chinese README and requirements-pilot command to align with latest workflow
- update readme
- update agent
- update bugfix sub agents
- Update ask support KISS YAGNI SOLID
- Add comprehensive documentation and multi-agent workflow system
- update commands

<!-- generated by git-cliff -->
@@ -7,7 +7,7 @@

[](https://www.gnu.org/licenses/agpl-3.0)
[](https://claude.ai/code)
[](https://github.com/cexll/myclaude)
[](https://github.com/cexll/myclaude)

> AI-powered development automation with multi-backend execution (Codex/Claude/Gemini)
@@ -2,7 +2,7 @@

[](https://www.gnu.org/licenses/agpl-3.0)
[](https://claude.ai/code)
[](https://github.com/cexll/myclaude)
[](https://github.com/cexll/myclaude)

> AI-powered development automation - multi-backend execution architecture (Codex/Claude/Gemini)
@@ -427,6 +427,10 @@ Generate architecture document at `./.claude/specs/{feature_name}/02-system-arch
 ## Important Behaviors

+### Language Rules:
+- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
+- **Technical Terms**: Keep technical terms (API, REST, GraphQL, JWT, RBAC, etc.) in English; translate explanatory text only.
+
 ### DO:
 - Start by reviewing and referencing the PRD
 - Present initial architecture based on requirements
@@ -419,6 +419,10 @@ logger.info('User created', {
 ## Important Implementation Rules

+### Language Rules:
+- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
+- **Technical Terms**: Keep technical terms (API, CRUD, JWT, SQL, etc.) in English; translate explanatory text only.
+
 ### DO:
 - Follow architecture specifications exactly
 - Implement all acceptance criteria from PRD
@@ -22,6 +22,10 @@ You are the BMAD Orchestrator. Your core focus is repository analysis, workflow
 - Consistency: ensure conventions and patterns discovered in scan are preserved downstream
 - Explicit handoffs: clearly document assumptions, risks, and integration points for other agents

+### Language Rules:
+- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
+- **Technical Terms**: Keep technical terms (API, PRD, Sprint, etc.) in English; translate explanatory text only.
+
 ## UltraThink Repository Scan

 When asked to analyze the repository, follow this structure and return a clear, actionable summary.
@@ -313,6 +313,10 @@ Generate PRD at `./.claude/specs/{feature_name}/01-product-requirements.md`:
 ## Important Behaviors

+### Language Rules:
+- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
+- **Technical Terms**: Keep technical terms (API, Sprint, PRD, KPI, MVP, etc.) in English; translate explanatory text only.
+
 ### DO:
 - Start immediately with greeting and initial understanding
 - Show quality scores transparently
@@ -478,6 +478,10 @@ module.exports = {
 ## Important Testing Rules

+### Language Rules:
+- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
+- **Technical Terms**: Keep technical terms (API, E2E, CI/CD, Mock, etc.) in English; translate explanatory text only.
+
 ### DO:
 - Test all acceptance criteria from PRD
 - Cover happy path, edge cases, and error scenarios
@@ -45,3 +45,7 @@ You are an independent code review agent responsible for conducting reviews betw
 - Focus on actionable findings
 - Provide specific QA guidance
 - Use clear, parseable output format
+
+### Language Rules:
+- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
+- **Technical Terms**: Keep technical terms (API, PRD, Sprint, etc.) in English; translate explanatory text only.
@@ -351,6 +351,10 @@ So that [benefit]
 ## Important Behaviors

+### Language Rules:
+- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
+- **Technical Terms**: Keep technical terms (Sprint, Epic, Story, Backlog, Velocity, etc.) in English; translate explanatory text only.
+
 ### DO:
 - Read both PRD and Architecture documents thoroughly
 - Create comprehensive task breakdown
@@ -76,8 +76,8 @@ func TestConcurrentStressLogger(t *testing.T) {
 	t.Logf("Successfully wrote %d/%d logs (%.1f%%)",
 		actualCount, totalExpected, float64(actualCount)/float64(totalExpected)*100)

-	// Verify the log format
-	formatRE := regexp.MustCompile(`^\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}\] \[PID:\d+\] INFO: goroutine-`)
+	// Verify the log format (plain text, no prefix)
+	formatRE := regexp.MustCompile(`^goroutine-\d+-msg-\d+$`)
 	for i, line := range lines[:min(10, len(lines))] {
 		if !formatRE.MatchString(line) {
 			t.Errorf("line %d has invalid format: %s", i, line)
@@ -293,16 +293,13 @@ func TestLoggerOrderPreservation(t *testing.T) {
 	for scanner.Scan() {
 		line := scanner.Text()
 		var gid, seq int
-		parts := strings.SplitN(line, " INFO: ", 2)
-		if len(parts) != 2 {
-			t.Errorf("invalid log format: %s", line)
+		// Parse format: G0-SEQ0001 (without INFO: prefix)
+		_, err := fmt.Sscanf(line, "G%d-SEQ%04d", &gid, &seq)
+		if err != nil {
+			t.Errorf("invalid log format: %s (error: %v)", line, err)
 			continue
 		}
-		if _, err := fmt.Sscanf(parts[1], "G%d-SEQ%d", &gid, &seq); err == nil {
-			sequences[gid] = append(sequences[gid], seq)
-		} else {
-			t.Errorf("failed to parse sequence from line: %s", line)
-		}
+		sequences[gid] = append(sequences[gid], seq)
 	}

 	// Verify ordering within each goroutine
@@ -49,6 +49,7 @@ type TaskResult struct {
 	SessionID string `json:"session_id"`
 	Error     string `json:"error"`
 	LogPath   string `json:"log_path"`
+	sharedLog bool
 }

 var backendRegistry = map[string]Backend{
@@ -139,6 +139,38 @@ func taskLoggerFromContext(ctx context.Context) *Logger {
 	return logger
 }

+type taskLoggerHandle struct {
+	logger  *Logger
+	path    string
+	shared  bool
+	closeFn func()
+}
+
+func newTaskLoggerHandle(taskID string) taskLoggerHandle {
+	taskLogger, err := NewLoggerWithSuffix(taskID)
+	if err == nil {
+		return taskLoggerHandle{
+			logger:  taskLogger,
+			path:    taskLogger.Path(),
+			closeFn: func() { _ = taskLogger.Close() },
+		}
+	}
+
+	msg := fmt.Sprintf("Failed to create task logger for %s: %v, using main logger", taskID, err)
+	mainLogger := activeLogger()
+	if mainLogger != nil {
+		logWarn(msg)
+		return taskLoggerHandle{
+			logger: mainLogger,
+			path:   mainLogger.Path(),
+			shared: true,
+		}
+	}
+
+	fmt.Fprintln(os.Stderr, msg)
+	return taskLoggerHandle{}
+}
+
 // defaultRunCodexTaskFn is the default implementation of runCodexTaskFn (exposed for test reset)
 func defaultRunCodexTaskFn(task TaskSpec, timeout int) TaskResult {
 	if task.WorkDir == "" {
@@ -255,7 +287,7 @@ func executeConcurrentWithContext(parentCtx context.Context, layers [][]TaskSpec
 	var startPrintMu sync.Mutex
 	bannerPrinted := false

-	printTaskStart := func(taskID, logPath string) {
+	printTaskStart := func(taskID, logPath string, shared bool) {
 		if logPath == "" {
 			return
 		}
@@ -264,7 +296,11 @@ func executeConcurrentWithContext(parentCtx context.Context, layers [][]TaskSpec
 			fmt.Fprintln(os.Stderr, "=== Starting Parallel Execution ===")
 			bannerPrinted = true
 		}
-		fmt.Fprintf(os.Stderr, "Task %s: Log: %s\n", taskID, logPath)
+		label := "Log"
+		if shared {
+			label = "Log (shared)"
+		}
+		fmt.Fprintf(os.Stderr, "Task %s: %s: %s\n", taskID, label, logPath)
 		startPrintMu.Unlock()
 	}
@@ -334,11 +370,11 @@
 			wg.Add(1)
 			go func(ts TaskSpec) {
 				defer wg.Done()
-				var taskLogger *Logger
 				var taskLogPath string
+				handle := taskLoggerHandle{}
 				defer func() {
 					if r := recover(); r != nil {
-						resultsCh <- TaskResult{TaskID: ts.ID, ExitCode: 1, Error: fmt.Sprintf("panic: %v", r), LogPath: taskLogPath}
+						resultsCh <- TaskResult{TaskID: ts.ID, ExitCode: 1, Error: fmt.Sprintf("panic: %v", r), LogPath: taskLogPath, sharedLog: handle.shared}
 					}
 				}()
@@ -355,18 +391,29 @@
 				logConcurrencyState("done", ts.ID, int(after), workerLimit)
 			}()

-			if l, err := NewLoggerWithSuffix(ts.ID); err == nil {
-				taskLogger = l
-				taskLogPath = l.Path()
-				defer func() { _ = taskLogger.Close() }()
+			handle = newTaskLoggerHandle(ts.ID)
+			taskLogPath = handle.path
+			if handle.closeFn != nil {
+				defer handle.closeFn()
 			}

-			ts.Context = withTaskLogger(ctx, taskLogger)
-			printTaskStart(ts.ID, taskLogPath)
+			taskCtx := ctx
+			if handle.logger != nil {
+				taskCtx = withTaskLogger(ctx, handle.logger)
+			}
+			ts.Context = taskCtx
+
+			printTaskStart(ts.ID, taskLogPath, handle.shared)

 			res := runCodexTaskFn(ts, timeout)
-			if res.LogPath == "" && taskLogPath != "" {
-				res.LogPath = taskLogPath
+			if taskLogPath != "" {
+				if res.LogPath == "" || (handle.shared && handle.logger != nil && res.LogPath == handle.logger.Path()) {
+					res.LogPath = taskLogPath
+				}
 			}
+			// Mark as shared only when the final LogPath is actually the shared logger's path
+			if handle.shared && handle.logger != nil && res.LogPath == handle.logger.Path() {
+				res.sharedLog = true
+			}
 			resultsCh <- res
 		}(task)
@@ -444,7 +491,11 @@ func generateFinalOutput(results []TaskResult) string {
 			sb.WriteString(fmt.Sprintf("Session: %s\n", res.SessionID))
 		}
 		if res.LogPath != "" {
-			sb.WriteString(fmt.Sprintf("Log: %s\n", res.LogPath))
+			if res.sharedLog {
+				sb.WriteString(fmt.Sprintf("Log: %s (shared)\n", res.LogPath))
+			} else {
+				sb.WriteString(fmt.Sprintf("Log: %s\n", res.LogPath))
+			}
 		}
 		if res.Message != "" {
 			sb.WriteString(fmt.Sprintf("\n%s\n", res.Message))
@@ -485,6 +536,13 @@ func runCodexProcess(parentCtx context.Context, codexArgs []string, taskText str
 }

 func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backend Backend, customArgs []string, useCustomArgs bool, silent bool, timeoutSec int) TaskResult {
+	if parentCtx == nil {
+		parentCtx = taskSpec.Context
+	}
+	if parentCtx == nil {
+		parentCtx = context.Background()
+	}
+
 	result := TaskResult{TaskID: taskSpec.ID}
 	injectedLogger := taskLoggerFromContext(parentCtx)
 	logger := injectedLogger
@@ -595,15 +653,15 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
|
||||
}
|
||||
|
||||
if !silent {
|
||||
stdoutLogger = newLogWriter("CODEX_STDOUT: ", codexLogLineLimit)
|
||||
stderrLogger = newLogWriter("CODEX_STDERR: ", codexLogLineLimit)
|
||||
// Note: Empty prefix ensures backend output is logged as-is without any wrapper format.
|
||||
// This preserves the original stdout/stderr content from codex/claude/gemini backends.
|
||||
// Trade-off: Reduces distinguishability between stdout/stderr in logs, but maintains
|
||||
// output fidelity which is critical for debugging backend-specific issues.
|
||||
stdoutLogger = newLogWriter("", codexLogLineLimit)
|
||||
stderrLogger = newLogWriter("", codexLogLineLimit)
|
||||
}
|
||||
|
||||
ctx := parentCtx
|
||||
if ctx == nil {
|
||||
ctx = context.Background()
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(ctx, time.Duration(timeoutSec)*time.Second)
|
||||
defer cancel()
|
||||
ctx, stop := signal.NotifyContext(ctx, syscall.SIGINT, syscall.SIGTERM)
|
||||
@@ -625,8 +683,17 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
|
||||
if stderrLogger != nil {
|
||||
stderrWriters = append(stderrWriters, stderrLogger)
|
||||
}
|
||||
|
||||
// For gemini backend, filter noisy stderr output
|
||||
var stderrFilter *filteringWriter
|
||||
if !silent {
|
||||
stderrWriters = append([]io.Writer{os.Stderr}, stderrWriters...)
|
||||
stderrOut := io.Writer(os.Stderr)
|
||||
if cfg.Backend == "gemini" {
|
||||
stderrFilter = newFilteringWriter(os.Stderr, geminiNoisePatterns)
|
||||
stderrOut = stderrFilter
|
||||
defer stderrFilter.Flush()
|
||||
}
|
||||
stderrWriters = append([]io.Writer{stderrOut}, stderrWriters...)
|
||||
}
|
||||
if len(stderrWriters) == 1 {
|
||||
cmd.SetStderr(stderrWriters[0])
|
||||
|
||||
```diff
@@ -472,6 +472,43 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
 		}
 	})

+	t.Run("contextLoggerWithoutParent", func(t *testing.T) {
+		newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
+			return &execFakeRunner{
+				stdout:  newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"ctx"}}`),
+				process: &execFakeProcess{pid: 14},
+			}
+		}
+		_ = closeLogger()
+
+		taskLogger, err := NewLoggerWithSuffix("executor-taskctx")
+		if err != nil {
+			t.Fatalf("NewLoggerWithSuffix() error = %v", err)
+		}
+		t.Cleanup(func() {
+			_ = taskLogger.Close()
+			_ = os.Remove(taskLogger.Path())
+		})
+
+		ctx := withTaskLogger(context.Background(), taskLogger)
+		res := runCodexTaskWithContext(nil, TaskSpec{ID: "task-context", Task: "payload", WorkDir: ".", Context: ctx}, nil, nil, false, true, 1)
+		if res.ExitCode != 0 || res.LogPath != taskLogger.Path() {
+			t.Fatalf("expected task logger to be reused from spec context, got %+v", res)
+		}
+		if activeLogger() != nil {
+			t.Fatalf("expected no global logger to be created when task context provides one")
+		}
+
+		taskLogger.Flush()
+		data, err := os.ReadFile(taskLogger.Path())
+		if err != nil {
+			t.Fatalf("failed to read task log: %v", err)
+		}
+		if !strings.Contains(string(data), "task-context") {
+			t.Fatalf("task log missing task id, content: %s", string(data))
+		}
+	})

 	t.Run("backendSetsDirAndNilContext", func(t *testing.T) {
 		var rc *execFakeRunner
 		newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
@@ -974,6 +1011,143 @@ func TestExecutorExecuteConcurrentWithContextBranches(t *testing.T) {
 			t.Fatalf("unexpected results: %+v", results)
 		}
 	})

+	t.Run("TestConcurrentTaskLoggerFailure", func(t *testing.T) {
+		// Create a writable temp dir for the main logger, then flip TMPDIR to a read-only
+		// location so task-specific loggers fail to open.
+		writable := t.TempDir()
+		t.Setenv("TMPDIR", writable)
+
+		mainLogger, err := NewLoggerWithSuffix("shared-main")
+		if err != nil {
+			t.Fatalf("NewLoggerWithSuffix() error = %v", err)
+		}
+		setLogger(mainLogger)
+		t.Cleanup(func() {
+			mainLogger.Flush()
+			_ = closeLogger()
+			_ = os.Remove(mainLogger.Path())
+		})
+
+		noWrite := filepath.Join(writable, "ro")
+		if err := os.Mkdir(noWrite, 0o500); err != nil {
+			t.Fatalf("failed to create read-only temp dir: %v", err)
+		}
+		t.Setenv("TMPDIR", noWrite)
+
+		taskA := nextExecutorTestTaskID("shared-a")
+		taskB := nextExecutorTestTaskID("shared-b")
+
+		orig := runCodexTaskFn
+		runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
+			logger := taskLoggerFromContext(task.Context)
+			if logger != mainLogger {
+				return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "unexpected logger"}
+			}
+			logger.Info("TASK=" + task.ID)
+			return TaskResult{TaskID: task.ID, ExitCode: 0}
+		}
+		t.Cleanup(func() { runCodexTaskFn = orig })
+
+		stderrR, stderrW, err := os.Pipe()
+		if err != nil {
+			t.Fatalf("os.Pipe() error = %v", err)
+		}
+		oldStderr := os.Stderr
+		os.Stderr = stderrW
+
+		results := executeConcurrentWithContext(context.Background(), [][]TaskSpec{{{ID: taskA}, {ID: taskB}}}, 1, 0)
+
+		_ = stderrW.Close()
+		os.Stderr = oldStderr
+		stderrData, _ := io.ReadAll(stderrR)
+		_ = stderrR.Close()
+		stderrOut := string(stderrData)
+
+		if len(results) != 2 {
+			t.Fatalf("expected 2 results, got %d", len(results))
+		}
+		for _, res := range results {
+			if res.ExitCode != 0 || res.Error != "" {
+				t.Fatalf("task failed unexpectedly: %+v", res)
+			}
+			if res.LogPath != mainLogger.Path() {
+				t.Fatalf("shared log path mismatch: got %q want %q", res.LogPath, mainLogger.Path())
+			}
+			if !res.sharedLog {
+				t.Fatalf("expected sharedLog flag for %+v", res)
+			}
+			if !strings.Contains(stderrOut, "Log (shared)") {
+				t.Fatalf("stderr missing shared marker: %s", stderrOut)
+			}
+		}
+
+		summary := generateFinalOutput(results)
+		if !strings.Contains(summary, "(shared)") {
+			t.Fatalf("summary missing shared marker: %s", summary)
+		}
+
+		mainLogger.Flush()
+		data, err := os.ReadFile(mainLogger.Path())
+		if err != nil {
+			t.Fatalf("failed to read main log: %v", err)
+		}
+		content := string(data)
+		if !strings.Contains(content, "TASK="+taskA) || !strings.Contains(content, "TASK="+taskB) {
+			t.Fatalf("expected shared log to contain both tasks, got: %s", content)
+		}
+	})
+
+	t.Run("TestSanitizeTaskID", func(t *testing.T) {
+		tempDir := t.TempDir()
+		t.Setenv("TMPDIR", tempDir)
+
+		orig := runCodexTaskFn
+		runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
+			logger := taskLoggerFromContext(task.Context)
+			if logger == nil {
+				return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "missing logger"}
+			}
+			logger.Info("TASK=" + task.ID)
+			return TaskResult{TaskID: task.ID, ExitCode: 0}
+		}
+		t.Cleanup(func() { runCodexTaskFn = orig })
+
+		idA := "../bad id"
+		idB := "tab\tid"
+		results := executeConcurrentWithContext(context.Background(), [][]TaskSpec{{{ID: idA}, {ID: idB}}}, 1, 0)
+
+		if len(results) != 2 {
+			t.Fatalf("expected 2 results, got %d", len(results))
+		}
+
+		expected := map[string]string{
+			idA: sanitizeLogSuffix(idA),
+			idB: sanitizeLogSuffix(idB),
+		}
+
+		for _, res := range results {
+			if res.ExitCode != 0 || res.Error != "" {
+				t.Fatalf("unexpected failure: %+v", res)
+			}
+			safe, ok := expected[res.TaskID]
+			if !ok {
+				t.Fatalf("unexpected task id %q in results", res.TaskID)
+			}
+			wantBase := fmt.Sprintf("%s-%d-%s.log", primaryLogPrefix(), os.Getpid(), safe)
+			if filepath.Base(res.LogPath) != wantBase {
+				t.Fatalf("log filename for %q = %q, want %q", res.TaskID, filepath.Base(res.LogPath), wantBase)
+			}
+			data, err := os.ReadFile(res.LogPath)
+			if err != nil {
+				t.Fatalf("failed to read log %q: %v", res.LogPath, err)
+			}
+			if !strings.Contains(string(data), "TASK="+res.TaskID) {
+				t.Fatalf("log for %q missing task marker, content: %s", res.TaskID, string(data))
+			}
+			_ = os.Remove(res.LogPath)
+		}
+	})
 }

 func TestExecutorSignalAndTermination(t *testing.T) {
@@ -1116,3 +1290,70 @@ func TestExecutorForwardSignalsDefaults(t *testing.T) {
 	forwardSignals(ctx, &execFakeRunner{process: &execFakeProcess{pid: 80}}, func(string) {})
 	time.Sleep(10 * time.Millisecond)
 }

+func TestExecutorSharedLogFalseWhenCustomLogPath(t *testing.T) {
+	devNull, err := os.OpenFile(os.DevNull, os.O_WRONLY, 0)
+	if err != nil {
+		t.Fatalf("failed to open %s: %v", os.DevNull, err)
+	}
+	oldStderr := os.Stderr
+	os.Stderr = devNull
+	t.Cleanup(func() {
+		os.Stderr = oldStderr
+		_ = devNull.Close()
+	})
+
+	tempDir := t.TempDir()
+	t.Setenv("TMPDIR", tempDir)
+
+	// Setup: create the main logger
+	mainLogger, err := NewLoggerWithSuffix("shared-main")
+	if err != nil {
+		t.Fatalf("NewLoggerWithSuffix() error = %v", err)
+	}
+	setLogger(mainLogger)
+	defer func() {
+		_ = closeLogger()
+		_ = os.Remove(mainLogger.Path())
+	}()
+
+	// Simulated scenario: task logger creation fails (via a read-only TMPDIR),
+	// falling back to the main logger (handle.shared=true),
+	// but runCodexTaskFn returns a custom LogPath (not the main logger's path)
+	roDir := filepath.Join(tempDir, "ro")
+	if err := os.Mkdir(roDir, 0o500); err != nil {
+		t.Fatalf("failed to create read-only dir: %v", err)
+	}
+	t.Setenv("TMPDIR", roDir)
+
+	orig := runCodexTaskFn
+	customLogPath := "/custom/path/to.log"
+	runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
+		// Return a custom LogPath that differs from the main logger's path
+		return TaskResult{
+			TaskID:   task.ID,
+			ExitCode: 0,
+			LogPath:  customLogPath,
+		}
+	}
+	defer func() { runCodexTaskFn = orig }()
+
+	// Run the task
+	results := executeConcurrentWithContext(context.Background(), [][]TaskSpec{{{ID: "task1"}}}, 1, 0)
+
+	if len(results) != 1 {
+		t.Fatalf("expected 1 result, got %d", len(results))
+	}
+
+	res := results[0]
+	// Key assertion: even though handle.shared=true (task logger creation failed),
+	// sharedLog must be false because LogPath differs from the main logger's path
+	if res.sharedLog {
+		t.Fatalf("expected sharedLog=false when LogPath differs from shared logger, got true")
+	}
+
+	// Verify the LogPath is indeed the custom one
+	if res.LogPath != customLogPath {
+		t.Fatalf("expected custom LogPath %s, got %s", customLogPath, res.LogPath)
+	}
+}
```
codeagent-wrapper/filter.go (new file, 66 lines)

```go
package main

import (
	"bytes"
	"io"
	"strings"
)

// geminiNoisePatterns contains stderr patterns to filter for gemini backend
var geminiNoisePatterns = []string{
	"[STARTUP]",
	"Session cleanup disabled",
	"Warning:",
	"(node:",
	"(Use `node --trace-warnings",
	"Loaded cached credentials",
	"Loading extension:",
	"YOLO mode is enabled",
}

// filteringWriter wraps an io.Writer and filters out lines matching patterns
type filteringWriter struct {
	w        io.Writer
	patterns []string
	buf      bytes.Buffer
}

func newFilteringWriter(w io.Writer, patterns []string) *filteringWriter {
	return &filteringWriter{w: w, patterns: patterns}
}

func (f *filteringWriter) Write(p []byte) (n int, err error) {
	f.buf.Write(p)
	for {
		line, err := f.buf.ReadString('\n')
		if err != nil {
			// incomplete line, put it back
			f.buf.WriteString(line)
			break
		}
		if !f.shouldFilter(line) {
			f.w.Write([]byte(line))
		}
	}
	return len(p), nil
}

func (f *filteringWriter) shouldFilter(line string) bool {
	for _, pattern := range f.patterns {
		if strings.Contains(line, pattern) {
			return true
		}
	}
	return false
}

// Flush writes any remaining buffered content
func (f *filteringWriter) Flush() {
	if f.buf.Len() > 0 {
		remaining := f.buf.String()
		if !f.shouldFilter(remaining) {
			f.w.Write([]byte(remaining))
		}
		f.buf.Reset()
	}
}
```
codeagent-wrapper/filter_test.go (new file, 73 lines)

```go
package main

import (
	"bytes"
	"testing"
)

func TestFilteringWriter(t *testing.T) {
	tests := []struct {
		name     string
		patterns []string
		input    string
		want     string
	}{
		{
			name:     "filter STARTUP lines",
			patterns: geminiNoisePatterns,
			input:    "[STARTUP] Recording metric\nHello World\n[STARTUP] Another line\n",
			want:     "Hello World\n",
		},
		{
			name:     "filter Warning lines",
			patterns: geminiNoisePatterns,
			input:    "Warning: something bad\nActual output\n",
			want:     "Actual output\n",
		},
		{
			name:     "filter multiple patterns",
			patterns: geminiNoisePatterns,
			input:    "YOLO mode is enabled\nSession cleanup disabled\nReal content\nLoading extension: foo\n",
			want:     "Real content\n",
		},
		{
			name:     "no filtering needed",
			patterns: geminiNoisePatterns,
			input:    "Line 1\nLine 2\nLine 3\n",
			want:     "Line 1\nLine 2\nLine 3\n",
		},
		{
			name:     "empty input",
			patterns: geminiNoisePatterns,
			input:    "",
			want:     "",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			var buf bytes.Buffer
			fw := newFilteringWriter(&buf, tt.patterns)
			fw.Write([]byte(tt.input))
			fw.Flush()

			if got := buf.String(); got != tt.want {
				t.Errorf("got %q, want %q", got, tt.want)
			}
		})
	}
}

func TestFilteringWriterPartialLines(t *testing.T) {
	var buf bytes.Buffer
	fw := newFilteringWriter(&buf, geminiNoisePatterns)

	// Write partial line
	fw.Write([]byte("Hello "))
	fw.Write([]byte("World\n"))
	fw.Flush()

	if got := buf.String(); got != "Hello World\n" {
		t.Errorf("got %q, want %q", got, "Hello World\n")
	}
}
```
```diff
@@ -5,6 +5,7 @@ import (
 	"context"
 	"errors"
 	"fmt"
+	"hash/crc32"
 	"os"
 	"path/filepath"
 	"strconv"
@@ -18,22 +19,25 @@ import (
 // It is intentionally minimal: a buffered channel + single worker goroutine
 // to avoid contention while keeping ordering guarantees.
 type Logger struct {
-	path      string
-	file      *os.File
-	writer    *bufio.Writer
-	ch        chan logEntry
-	flushReq  chan chan struct{}
-	done      chan struct{}
-	closed    atomic.Bool
-	closeOnce sync.Once
-	workerWG  sync.WaitGroup
-	pendingWG sync.WaitGroup
-	flushMu   sync.Mutex
+	path         string
+	file         *os.File
+	writer       *bufio.Writer
+	ch           chan logEntry
+	flushReq     chan chan struct{}
+	done         chan struct{}
+	closed       atomic.Bool
+	closeOnce    sync.Once
+	workerWG     sync.WaitGroup
+	pendingWG    sync.WaitGroup
+	flushMu      sync.Mutex
+	workerErr    error
+	errorEntries []string // Cache of recent ERROR/WARN entries
+	errorMu      sync.Mutex
 }

 type logEntry struct {
-	level string
-	msg   string
+	msg     string
+	isError bool // true for ERROR or WARN levels
 }

 // CleanupStats captures the outcome of a cleanupOldLogs run.
@@ -55,6 +59,10 @@
 	evalSymlinksFn = filepath.EvalSymlinks
 )

+const maxLogSuffixLen = 64
+
+var logSuffixCounter atomic.Uint64
+
 // NewLogger creates the async logger and starts the worker goroutine.
 // The log file is created under os.TempDir() using the required naming scheme.
 func NewLogger() (*Logger, error) {
@@ -64,14 +72,23 @@
 // NewLoggerWithSuffix creates a logger with an optional suffix in the filename.
 // Useful for tests that need isolated log files within the same process.
 func NewLoggerWithSuffix(suffix string) (*Logger, error) {
-	filename := fmt.Sprintf("%s-%d", primaryLogPrefix(), os.Getpid())
+	pid := os.Getpid()
+	filename := fmt.Sprintf("%s-%d", primaryLogPrefix(), pid)
+	var safeSuffix string
 	if suffix != "" {
-		filename += "-" + suffix
+		safeSuffix = sanitizeLogSuffix(suffix)
+	}
+	if safeSuffix != "" {
+		filename += "-" + safeSuffix
 	}
 	filename += ".log"

 	path := filepath.Clean(filepath.Join(os.TempDir(), filename))

+	if err := os.MkdirAll(filepath.Dir(path), 0o700); err != nil {
+		return nil, err
+	}
+
 	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o600)
 	if err != nil {
 		return nil, err
@@ -92,6 +109,73 @@ func NewLoggerWithSuffix(suffix string) (*Logger, error) {
 	return l, nil
 }

+func sanitizeLogSuffix(raw string) string {
+	trimmed := strings.TrimSpace(raw)
+	if trimmed == "" {
+		return fallbackLogSuffix()
+	}
+
+	var b strings.Builder
+	changed := false
+	for _, r := range trimmed {
+		if isSafeLogRune(r) {
+			b.WriteRune(r)
+		} else {
+			changed = true
+			b.WriteByte('-')
+		}
+		if b.Len() >= maxLogSuffixLen {
+			changed = true
+			break
+		}
+	}
+
+	sanitized := strings.Trim(b.String(), "-.")
+	if sanitized != b.String() {
+		changed = true // Mark if trim removed any characters
+	}
+	if sanitized == "" {
+		return fallbackLogSuffix()
+	}
+
+	if changed || len(sanitized) > maxLogSuffixLen {
+		hash := crc32.ChecksumIEEE([]byte(trimmed))
+		hashStr := fmt.Sprintf("%x", hash)
+
+		maxPrefix := maxLogSuffixLen - len(hashStr) - 1
+		if maxPrefix < 1 {
+			maxPrefix = 1
+		}
+		if len(sanitized) > maxPrefix {
+			sanitized = sanitized[:maxPrefix]
+		}
+
+		sanitized = fmt.Sprintf("%s-%s", sanitized, hashStr)
+	}
+
+	return sanitized
+}
+
+func fallbackLogSuffix() string {
+	next := logSuffixCounter.Add(1)
+	return fmt.Sprintf("task-%d", next)
+}
+
+func isSafeLogRune(r rune) bool {
+	switch {
+	case r >= 'a' && r <= 'z':
+		return true
+	case r >= 'A' && r <= 'Z':
+		return true
+	case r >= '0' && r <= '9':
+		return true
+	case r == '-', r == '_', r == '.':
+		return true
+	default:
+		return false
+	}
+}
+
 // Path returns the underlying log file path (useful for tests/inspection).
 func (l *Logger) Path() string {
 	if l == nil {
@@ -112,10 +196,11 @@ func (l *Logger) Debug(msg string) { l.log("DEBUG", msg) }
 // Error logs at ERROR level.
 func (l *Logger) Error(msg string) { l.log("ERROR", msg) }

-// Close stops the worker and syncs the log file.
+// Close signals the worker to flush and close the log file.
+// The log file is NOT removed, allowing inspection after program exit.
 // It is safe to call multiple times.
-// Returns after a 5-second timeout if worker doesn't stop gracefully.
+// Waits up to CODEAGENT_LOGGER_CLOSE_TIMEOUT_MS (default: 5000) for shutdown; set to 0 to wait indefinitely.
+// Returns an error if shutdown doesn't complete within the timeout.
 func (l *Logger) Close() error {
 	if l == nil {
 		return nil
@@ -126,42 +211,51 @@ func (l *Logger) Close() error {
 	l.closeOnce.Do(func() {
 		l.closed.Store(true)
 		close(l.done)
 		close(l.ch)

-		// Wait for worker with timeout
+		timeout := loggerCloseTimeout()
 		workerDone := make(chan struct{})
 		go func() {
 			l.workerWG.Wait()
 			close(workerDone)
 		}()

-		select {
-		case <-workerDone:
-			// Worker stopped gracefully
-		case <-time.After(5 * time.Second):
-			// Worker timeout - proceed with cleanup anyway
-			closeErr = fmt.Errorf("logger worker timeout during close")
+		if timeout > 0 {
+			select {
+			case <-workerDone:
+				// Worker stopped gracefully
+			case <-time.After(timeout):
+				closeErr = fmt.Errorf("logger worker timeout during close")
+				return
+			}
+		} else {
+			<-workerDone
 		}

-		if err := l.writer.Flush(); err != nil && closeErr == nil {
-			closeErr = err
+		if l.workerErr != nil && closeErr == nil {
+			closeErr = l.workerErr
 		}
-
-		if err := l.file.Sync(); err != nil && closeErr == nil {
-			closeErr = err
-		}
-
-		if err := l.file.Close(); err != nil && closeErr == nil {
-			closeErr = err
-		}
-
-		// Log file is kept for debugging - NOT removed
-		// Users can manually clean up /tmp/<wrapper>-*.log files
 	})

 	return closeErr
 }

+func loggerCloseTimeout() time.Duration {
+	const defaultTimeout = 5 * time.Second
+
+	raw := strings.TrimSpace(os.Getenv("CODEAGENT_LOGGER_CLOSE_TIMEOUT_MS"))
+	if raw == "" {
+		return defaultTimeout
+	}
+	ms, err := strconv.Atoi(raw)
+	if err != nil {
+		return defaultTimeout
+	}
+	if ms <= 0 {
+		return 0
+	}
+	return time.Duration(ms) * time.Millisecond
+}
+
 // RemoveLogFile removes the log file. Should only be called after Close().
 func (l *Logger) RemoveLogFile() error {
 	if l == nil {
@@ -170,34 +264,29 @@ func (l *Logger) RemoveLogFile() error {
 	return os.Remove(l.path)
 }

-// ExtractRecentErrors reads the log file and returns the most recent ERROR and WARN entries.
+// ExtractRecentErrors returns the most recent ERROR and WARN entries from memory cache.
 // Returns up to maxEntries entries in chronological order.
 func (l *Logger) ExtractRecentErrors(maxEntries int) []string {
-	if l == nil || l.path == "" {
+	if l == nil || maxEntries <= 0 {
 		return nil
 	}

-	f, err := os.Open(l.path)
-	if err != nil {
+	l.errorMu.Lock()
+	defer l.errorMu.Unlock()
+
+	if len(l.errorEntries) == 0 {
 		return nil
 	}
-	defer f.Close()
-
-	var entries []string
-	scanner := bufio.NewScanner(f)
-	for scanner.Scan() {
-		line := scanner.Text()
-		if strings.Contains(line, "] ERROR:") || strings.Contains(line, "] WARN:") {
-			entries = append(entries, line)
-		}
-	}

-	// Keep only the last maxEntries
-	if len(entries) > maxEntries {
-		entries = entries[len(entries)-maxEntries:]
-	}
-
-	return entries
+	// Return last N entries
+	start := 0
+	if len(l.errorEntries) > maxEntries {
+		start = len(l.errorEntries) - maxEntries
+	}
+
+	result := make([]string, len(l.errorEntries)-start)
+	copy(result, l.errorEntries[start:])
+	return result
 }

 // Flush waits for all pending log entries to be written. Primarily for tests.
@@ -254,7 +343,8 @@ func (l *Logger) log(level, msg string) {
 		return
 	}

-	entry := logEntry{level: level, msg: msg}
+	isError := level == "WARN" || level == "ERROR"
+	entry := logEntry{msg: msg, isError: isError}
 	l.flushMu.Lock()
 	l.pendingWG.Add(1)
 	l.flushMu.Unlock()
@@ -275,18 +365,42 @@ func (l *Logger) run() {
 	ticker := time.NewTicker(500 * time.Millisecond)
 	defer ticker.Stop()

+	writeEntry := func(entry logEntry) {
+		fmt.Fprintf(l.writer, "%s\n", entry.msg)
+
+		// Cache error/warn entries in memory for fast extraction
+		if entry.isError {
+			l.errorMu.Lock()
+			l.errorEntries = append(l.errorEntries, entry.msg)
+			if len(l.errorEntries) > 100 { // Keep last 100
+				l.errorEntries = l.errorEntries[1:]
+			}
+			l.errorMu.Unlock()
+		}
+
+		l.pendingWG.Done()
+	}
+
+	finalize := func() {
+		if err := l.writer.Flush(); err != nil && l.workerErr == nil {
+			l.workerErr = err
+		}
+		if err := l.file.Sync(); err != nil && l.workerErr == nil {
+			l.workerErr = err
+		}
+		if err := l.file.Close(); err != nil && l.workerErr == nil {
+			l.workerErr = err
+		}
+	}
+
 	for {
 		select {
 		case entry, ok := <-l.ch:
 			if !ok {
-				// Channel closed, final flush
-				_ = l.writer.Flush()
+				finalize()
 				return
 			}
-			timestamp := time.Now().Format("2006-01-02 15:04:05.000")
-			pid := os.Getpid()
-			fmt.Fprintf(l.writer, "[%s] [PID:%d] %s: %s\n", timestamp, pid, entry.level, entry.msg)
-			l.pendingWG.Done()
+			writeEntry(entry)

 		case <-ticker.C:
 			_ = l.writer.Flush()
@@ -296,6 +410,21 @@ func (l *Logger) run() {
 			_ = l.writer.Flush()
 			_ = l.file.Sync()
 			close(flushDone)

 		case <-l.done:
+			for {
+				select {
+				case entry, ok := <-l.ch:
+					if !ok {
+						finalize()
+						return
+					}
+					writeEntry(entry)
+				default:
+					finalize()
+					return
+				}
+			}
 		}
 	}
 }
```
@@ -68,13 +68,48 @@ func TestLoggerWithSuffixNamingAndIsolation(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestLoggerWithSuffixReturnsErrorWhenTempDirMissing(t *testing.T) {
|
||||
missingTempDir := filepath.Join(t.TempDir(), "does-not-exist")
|
||||
setTempDirEnv(t, missingTempDir)
|
||||
func TestLoggerWithSuffixReturnsErrorWhenTempDirNotWritable(t *testing.T) {
|
||||
base := t.TempDir()
|
||||
noWrite := filepath.Join(base, "ro")
|
||||
if err := os.Mkdir(noWrite, 0o500); err != nil {
|
||||
t.Fatalf("failed to create read-only temp dir: %v", err)
|
||||
}
|
||||
t.Cleanup(func() { _ = os.Chmod(noWrite, 0o700) })
|
||||
setTempDirEnv(t, noWrite)
|
||||
|
||||
logger, err := NewLoggerWithSuffix("task-err")
|
||||
if err == nil {
|
||||
_ = logger.Close()
|
||||
t.Fatalf("expected error, got nil")
|
||||
t.Fatalf("expected error when temp dir is not writable")
|
||||
}
|
||||
}
|
||||
|
||||
func TestLoggerWithSuffixSanitizesUnsafeSuffix(t *testing.T) {
|
||||
tempDir := setTempDirEnv(t, t.TempDir())
|
||||
|
||||
raw := "../bad id/with?chars"
|
||||
safe := sanitizeLogSuffix(raw)
|
||||
if safe == "" {
|
||||
t.Fatalf("sanitizeLogSuffix returned empty string")
|
||||
}
|
||||
if strings.ContainsAny(safe, "/\\") {
|
||||
t.Fatalf("sanitized suffix should not contain path separators, got %q", safe)
|
||||
}
|
||||
|
||||
logger, err := NewLoggerWithSuffix(raw)
|
||||
if err != nil {
|
||||
t.Fatalf("NewLoggerWithSuffix(%q) error = %v", raw, err)
|
||||
}
|
||||
t.Cleanup(func() {
|
||||
_ = logger.Close()
|
||||
_ = os.Remove(logger.Path())
|
||||
})
|
||||
|
||||
wantBase := fmt.Sprintf("%s-%d-%s.log", primaryLogPrefix(), os.Getpid(), safe)
|
||||
if gotBase := filepath.Base(logger.Path()); gotBase != wantBase {
|
||||
t.Fatalf("log filename = %q, want %q", gotBase, wantBase)
|
||||
}
|
||||
if dir := filepath.Dir(logger.Path()); dir != tempDir {
|
||||
t.Fatalf("logger path dir = %q, want %q", dir, tempDir)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -69,7 +69,7 @@ func TestLoggerWritesLevels(t *testing.T) {
|
||||
}
|
||||
|
||||
content := string(data)
|
||||
checks := []string{"INFO: info message", "WARN: warn message", "DEBUG: debug message", "ERROR: error message"}
|
||||
checks := []string{"info message", "warn message", "debug message", "error message"}
|
||||
for _, c := range checks {
|
||||
if !strings.Contains(content, c) {
|
||||
t.Fatalf("log file missing entry %q, content: %s", c, content)
|
||||
@@ -766,7 +766,7 @@ func TestLoggerInternalLog(t *testing.T) {
|
||||
|
||||
logger.log("INFO", "hello")
|
||||
entry := <-done
|
||||
if entry.level != "INFO" || entry.msg != "hello" {
|
||||
if entry.msg != "hello" {
|
||||
t.Fatalf("unexpected entry %+v", entry)
|
||||
}
|
||||
|
||||
@@ -894,66 +894,90 @@ func (f fakeFileInfo) Sys() interface{} { return nil }
|
||||
func TestLoggerExtractRecentErrors(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
content string
|
||||
logs []struct{ level, msg string }
|
||||
maxEntries int
|
||||
want []string
|
||||
}{
|
||||
{
|
||||
name: "empty log",
|
||||
content: "",
|
||||
logs: nil,
|
||||
maxEntries: 10,
|
||||
want: nil,
|
||||
},
|
||||
{
|
||||
name: "no errors",
|
||||
content: `[2025-01-01 12:00:00.000] [PID:123] INFO: started
|
||||
[2025-01-01 12:00:01.000] [PID:123] DEBUG: processing`,
|
||||
+			logs: []struct{ level, msg string }{
+				{"INFO", "started"},
+				{"DEBUG", "processing"},
+			},
 			maxEntries: 10,
 			want: nil,
 		},
 		{
 			name: "single error",
-			content: `[2025-01-01 12:00:00.000] [PID:123] INFO: started
-[2025-01-01 12:00:01.000] [PID:123] ERROR: something failed`,
+			logs: []struct{ level, msg string }{
+				{"INFO", "started"},
+				{"ERROR", "something failed"},
+			},
 			maxEntries: 10,
-			want: []string{"[2025-01-01 12:00:01.000] [PID:123] ERROR: something failed"},
+			want: []string{"something failed"},
 		},
 		{
 			name: "error and warn",
-			content: `[2025-01-01 12:00:00.000] [PID:123] INFO: started
-[2025-01-01 12:00:01.000] [PID:123] WARN: warning message
-[2025-01-01 12:00:02.000] [PID:123] ERROR: error message`,
+			logs: []struct{ level, msg string }{
+				{"INFO", "started"},
+				{"WARN", "warning message"},
+				{"ERROR", "error message"},
+			},
 			maxEntries: 10,
 			want: []string{
-				"[2025-01-01 12:00:01.000] [PID:123] WARN: warning message",
-				"[2025-01-01 12:00:02.000] [PID:123] ERROR: error message",
+				"warning message",
+				"error message",
 			},
 		},
 		{
 			name: "truncate to max",
-			content: `[2025-01-01 12:00:00.000] [PID:123] ERROR: error 1
-[2025-01-01 12:00:01.000] [PID:123] ERROR: error 2
-[2025-01-01 12:00:02.000] [PID:123] ERROR: error 3
-[2025-01-01 12:00:03.000] [PID:123] ERROR: error 4
-[2025-01-01 12:00:04.000] [PID:123] ERROR: error 5`,
+			logs: []struct{ level, msg string }{
+				{"ERROR", "error 1"},
+				{"ERROR", "error 2"},
+				{"ERROR", "error 3"},
+				{"ERROR", "error 4"},
+				{"ERROR", "error 5"},
+			},
 			maxEntries: 3,
 			want: []string{
-				"[2025-01-01 12:00:02.000] [PID:123] ERROR: error 3",
-				"[2025-01-01 12:00:03.000] [PID:123] ERROR: error 4",
-				"[2025-01-01 12:00:04.000] [PID:123] ERROR: error 5",
+				"error 3",
+				"error 4",
+				"error 5",
 			},
 		},
 	}

 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
-			tempDir := t.TempDir()
-			logPath := filepath.Join(tempDir, "test.log")
-			if err := os.WriteFile(logPath, []byte(tt.content), 0o644); err != nil {
-				t.Fatalf("failed to write test log: %v", err)
-			}
+			logger, err := NewLoggerWithSuffix("extract-test")
+			if err != nil {
+				t.Fatalf("NewLoggerWithSuffix() error = %v", err)
+			}
+			defer logger.Close()
+			defer logger.RemoveLogFile()
+
+			// Write logs using logger methods
+			for _, entry := range tt.logs {
+				switch entry.level {
+				case "INFO":
+					logger.Info(entry.msg)
+				case "WARN":
+					logger.Warn(entry.msg)
+				case "ERROR":
+					logger.Error(entry.msg)
+				case "DEBUG":
+					logger.Debug(entry.msg)
+				}
+			}

-			logger := &Logger{path: logPath}
+			logger.Flush()

 			got := logger.ExtractRecentErrors(tt.maxEntries)

 			if len(got) != len(tt.want) {
@@ -988,3 +1012,117 @@ func TestLoggerExtractRecentErrorsFileNotExist(t *testing.T) {
 		t.Fatalf("nonexistent file ExtractRecentErrors() should return nil, got %v", got)
 	}
 }
+
+func TestSanitizeLogSuffixNoDuplicates(t *testing.T) {
+	testCases := []string{
+		"task",
+		"task.",
+		".task",
+		"-task",
+		"task-",
+		"--task--",
+		"..task..",
+	}
+
+	seen := make(map[string]string)
+	for _, input := range testCases {
+		result := sanitizeLogSuffix(input)
+		if result == "" {
+			t.Fatalf("sanitizeLogSuffix(%q) returned empty string", input)
+		}
+
+		if prev, exists := seen[result]; exists {
+			t.Fatalf("collision detected: %q and %q both produce %q", input, prev, result)
+		}
+		seen[result] = input
+
+		// Verify result is safe for file names
+		if strings.ContainsAny(result, "/\\:*?\"<>|") {
+			t.Fatalf("sanitizeLogSuffix(%q) = %q contains unsafe characters", input, result)
+		}
+	}
+}
+
+func TestExtractRecentErrorsBoundaryCheck(t *testing.T) {
+	logger, err := NewLoggerWithSuffix("boundary-test")
+	if err != nil {
+		t.Fatalf("NewLoggerWithSuffix() error = %v", err)
+	}
+	defer logger.Close()
+	defer logger.RemoveLogFile()
+
+	// Write some errors
+	logger.Error("error 1")
+	logger.Warn("warn 1")
+	logger.Error("error 2")
+	logger.Flush()
+
+	// Test zero
+	result := logger.ExtractRecentErrors(0)
+	if result != nil {
+		t.Fatalf("ExtractRecentErrors(0) should return nil, got %v", result)
+	}
+
+	// Test negative
+	result = logger.ExtractRecentErrors(-5)
+	if result != nil {
+		t.Fatalf("ExtractRecentErrors(-5) should return nil, got %v", result)
+	}
+
+	// Test positive still works
+	result = logger.ExtractRecentErrors(10)
+	if len(result) != 3 {
+		t.Fatalf("ExtractRecentErrors(10) expected 3 entries, got %d", len(result))
+	}
+}
+
+func TestErrorEntriesMaxLimit(t *testing.T) {
+	logger, err := NewLoggerWithSuffix("max-limit-test")
+	if err != nil {
+		t.Fatalf("NewLoggerWithSuffix() error = %v", err)
+	}
+	defer logger.Close()
+	defer logger.RemoveLogFile()
+
+	// Write 150 error/warn entries
+	for i := 1; i <= 150; i++ {
+		if i%2 == 0 {
+			logger.Error(fmt.Sprintf("error-%03d", i))
+		} else {
+			logger.Warn(fmt.Sprintf("warn-%03d", i))
+		}
+	}
+	logger.Flush()
+
+	// Extract all cached errors
+	result := logger.ExtractRecentErrors(200) // Request more than cache size
+
+	// Should only have the last 100 entries (entries 51-150 in sequence)
+	if len(result) != 100 {
+		t.Fatalf("expected 100 cached entries, got %d", len(result))
+	}
+
+	// Verify entries are the last 100 (entries 51-150)
+	if !strings.Contains(result[0], "051") {
+		t.Fatalf("first cached entry should be entry 51, got: %s", result[0])
+	}
+	if !strings.Contains(result[99], "150") {
+		t.Fatalf("last cached entry should be entry 150, got: %s", result[99])
+	}
+
+	// Verify order is preserved - simplified logic
+	for i := 0; i < len(result)-1; i++ {
+		expectedNum := 51 + i
+		nextNum := 51 + i + 1
+
+		expectedEntry := fmt.Sprintf("%03d", expectedNum)
+		nextEntry := fmt.Sprintf("%03d", nextNum)
+
+		if !strings.Contains(result[i], expectedEntry) {
+			t.Fatalf("entry at index %d should contain %s, got: %s", i, expectedEntry, result[i])
+		}
+		if !strings.Contains(result[i+1], nextEntry) {
+			t.Fatalf("entry at index %d should contain %s, got: %s", i+1, nextEntry, result[i+1])
+		}
+	}
+}
@@ -14,7 +14,7 @@ import (
 )

 const (
-	version = "5.2.4"
+	version = "5.2.5"
 	defaultWorkdir = "."
 	defaultTimeout = 7200 // seconds
 	codexLogLineLimit = 1000

@@ -1784,13 +1784,13 @@ func TestRunLogFunctions(t *testing.T) {
 	}

 	output := string(data)
-	if !strings.Contains(output, "INFO: info message") {
+	if !strings.Contains(output, "info message") {
 		t.Errorf("logInfo output missing, got: %s", output)
 	}
-	if !strings.Contains(output, "WARN: warn message") {
+	if !strings.Contains(output, "warn message") {
 		t.Errorf("logWarn output missing, got: %s", output)
 	}
-	if !strings.Contains(output, "ERROR: error message") {
+	if !strings.Contains(output, "error message") {
 		t.Errorf("logError output missing, got: %s", output)
 	}
 }

@@ -2691,7 +2691,7 @@ func TestVersionFlag(t *testing.T) {
 			t.Errorf("exit = %d, want 0", code)
 		}
 	})
-	want := "codeagent-wrapper version 5.2.4\n"
+	want := "codeagent-wrapper version 5.2.5\n"
 	if output != want {
 		t.Fatalf("output = %q, want %q", output, want)
 	}

@@ -2705,7 +2705,7 @@ func TestVersionShortFlag(t *testing.T) {
 			t.Errorf("exit = %d, want 0", code)
 		}
 	})
-	want := "codeagent-wrapper version 5.2.4\n"
+	want := "codeagent-wrapper version 5.2.5\n"
 	if output != want {
 		t.Fatalf("output = %q, want %q", output, want)
 	}

@@ -2719,7 +2719,7 @@ func TestVersionLegacyAlias(t *testing.T) {
 			t.Errorf("exit = %d, want 0", code)
 		}
 	})
-	want := "codex-wrapper version 5.2.4\n"
+	want := "codex-wrapper version 5.2.5\n"
 	if output != want {
 		t.Fatalf("output = %q, want %q", output, want)
 	}

@@ -3300,7 +3300,7 @@ func TestRun_PipedTaskReadError(t *testing.T) {
 	if exitCode != 1 {
 		t.Fatalf("exit=%d, want 1", exitCode)
 	}
-	if !strings.Contains(logOutput, "ERROR: Failed to read piped stdin: read stdin: pipe failure") {
+	if !strings.Contains(logOutput, "Failed to read piped stdin: read stdin: pipe failure") {
 		t.Fatalf("log missing piped read error, got %q", logOutput)
 	}
 	// Log file is always removed after completion (new behavior)

@@ -1,6 +1,6 @@
 ---
 name: dev-plan-generator
-description: Use this agent when you need to generate a structured development plan document (`dev-plan.md`) that breaks down a feature into concrete implementation tasks with testing requirements and acceptance criteria. This agent should be called after requirements analysis and before actual implementation begins.\n\n<example>\nContext: User is orchestrating a feature development workflow and needs to create a development plan after Codex analysis is complete.\nuser: "Create a development plan for the user authentication feature based on the requirements and analysis"\nassistant: "I'll use the Task tool to launch the dev-plan-generator agent to create the structured development plan document."\n<commentary>\nThe user needs a dev-plan.md document generated from requirements and analysis. Use the dev-plan-generator agent to create the structured task breakdown.\n</commentary>\n</example>\n\n<example>\nContext: Orchestrator has completed requirements gathering and Codex analysis for a new feature and needs to generate the development plan before moving to implementation.\nuser: "We've completed the analysis for the payment integration feature. Generate the development plan."\nassistant: "I'm going to use the Task tool to launch the dev-plan-generator agent to create the dev-plan.md document with task breakdown and testing requirements."\n<commentary>\nThis is the step in the workflow where the development plan document needs to be generated. Use the dev-plan-generator agent to create the structured plan.\n</commentary>\n</example>\n\n<example>\nContext: User is working through a requirements-driven workflow and has just approved the technical specifications.\nuser: "The specs look good. Let's move forward with creating the implementation plan."\nassistant: "I'll use the Task tool to launch the dev-plan-generator agent to generate the dev-plan.md document with the task breakdown."\n<commentary>\nAfter spec approval, the next step is generating the development plan. Use the dev-plan-generator agent to create the structured document.\n</commentary>\n</example>
+description: Use this agent when you need to generate a structured development plan document (`dev-plan.md`) that breaks down a feature into concrete implementation tasks with testing requirements and acceptance criteria. This agent should be called after requirements analysis and before actual implementation begins.\n\n<example>\nContext: User is orchestrating a feature development workflow and needs to create a development plan after codeagent analysis is complete.\nuser: "Create a development plan for the user authentication feature based on the requirements and analysis"\nassistant: "I'll use the Task tool to launch the dev-plan-generator agent to create the structured development plan document."\n<commentary>\nThe user needs a dev-plan.md document generated from requirements and analysis. Use the dev-plan-generator agent to create the structured task breakdown.\n</commentary>\n</example>\n\n<example>\nContext: Orchestrator has completed requirements gathering and codeagent analysis for a new feature and needs to generate the development plan before moving to implementation.\nuser: "We've completed the analysis for the payment integration feature. Generate the development plan."\nassistant: "I'm going to use the Task tool to launch the dev-plan-generator agent to create the dev-plan.md document with task breakdown and testing requirements."\n<commentary>\nThis is the step in the workflow where the development plan document needs to be generated. Use the dev-plan-generator agent to create the structured plan.\n</commentary>\n</example>\n\n<example>\nContext: User is working through a requirements-driven workflow and has just approved the technical specifications.\nuser: "The specs look good. Let's move forward with creating the implementation plan."\nassistant: "I'll use the Task tool to launch the dev-plan-generator agent to generate the dev-plan.md document with the task breakdown."\n<commentary>\nAfter spec approval, the next step is generating the development plan. Use the dev-plan-generator agent to create the structured document.\n</commentary>\n</example>
 tools: Glob, Grep, Read, Edit, Write, TodoWrite
 model: sonnet
 color: green

@@ -5,7 +5,7 @@
 set -e

 # Get staged files
-STAGED_FILES=$(git diff --cached --name-only --diff-filter=ACM)
+STAGED_FILES="$(git diff --cached --name-only --diff-filter=ACM)"

 if [ -z "$STAGED_FILES" ]; then
   echo "No files to validate"
@@ -15,17 +15,32 @@ fi
 echo "Running pre-commit checks..."

 # Check Go files
-GO_FILES=$(echo "$STAGED_FILES" | grep '\.go$' || true)
+GO_FILES="$(printf '%s\n' "$STAGED_FILES" | grep '\.go$' || true)"
 if [ -n "$GO_FILES" ]; then
   echo "Checking Go files..."

+  if ! command -v gofmt &> /dev/null; then
+    echo "❌ gofmt not found. Please install Go (gofmt is included with the Go toolchain)."
+    exit 1
+  fi
+
   # Format check
-  gofmt -l $GO_FILES | while read -r file; do
-    echo "❌ $file needs formatting (run: gofmt -w $file)"
-  done
+  GO_FILE_ARGS=()
+  while IFS= read -r file; do
+    if [ -n "$file" ]; then
+      GO_FILE_ARGS+=("$file")
+    fi
+  done <<< "$GO_FILES"
+
+  if [ "${#GO_FILE_ARGS[@]}" -gt 0 ]; then
+    UNFORMATTED="$(gofmt -l "${GO_FILE_ARGS[@]}")"
+    if [ -n "$UNFORMATTED" ]; then
+      echo "❌ The following files need formatting:"
+      echo "$UNFORMATTED"
+      echo "Run: gofmt -w <file>"
+      exit 1
+    fi
+  fi
 fi

 # Run tests
 if command -v go &> /dev/null; then
@@ -38,19 +53,26 @@ if [ -n "$GO_FILES" ]; then
 fi

 # Check JSON files
-JSON_FILES=$(echo "$STAGED_FILES" | grep '\.json$' || true)
+JSON_FILES="$(printf '%s\n' "$STAGED_FILES" | grep '\.json$' || true)"
 if [ -n "$JSON_FILES" ]; then
   echo "Validating JSON files..."
-  for file in $JSON_FILES; do
+  if ! command -v jq &> /dev/null; then
+    echo "❌ jq not found. Please install jq to validate JSON files."
+    exit 1
+  fi
+  while IFS= read -r file; do
+    if [ -z "$file" ]; then
+      continue
+    fi
     if ! jq empty "$file" 2>/dev/null; then
       echo "❌ Invalid JSON: $file"
       exit 1
     fi
-  done
+  done <<< "$JSON_FILES"
 fi

 # Check Markdown files
-MD_FILES=$(echo "$STAGED_FILES" | grep '\.md$' || true)
+MD_FILES="$(printf '%s\n' "$STAGED_FILES" | grep '\.md$' || true)"
 if [ -n "$MD_FILES" ]; then
   echo "Checking markdown files..."
   # Add markdown linting if needed

install.py (57 changes)
@@ -357,26 +357,47 @@ def op_run_command(op: Dict[str, Any], ctx: Dict[str, Any]) -> None:
     stderr_lines: List[str] = []

     # Read stdout and stderr in real-time
-    import selectors
-    sel = selectors.DefaultSelector()
-    sel.register(process.stdout, selectors.EVENT_READ)  # type: ignore[arg-type]
-    sel.register(process.stderr, selectors.EVENT_READ)  # type: ignore[arg-type]
-
-    while process.poll() is None or sel.get_map():
-        for key, _ in sel.select(timeout=0.1):
-            line = key.fileobj.readline()  # type: ignore[union-attr]
-            if not line:
-                sel.unregister(key.fileobj)
-                continue
-            if key.fileobj == process.stdout:
-                stdout_lines.append(line)
-                print(line, end="", flush=True)
-            else:
-                stderr_lines.append(line)
-                print(line, end="", file=sys.stderr, flush=True)
-
-    sel.close()
-    process.wait()
+    if sys.platform == "win32":
+        # On Windows, use threads instead of selectors (pipes aren't selectable)
+        import threading
+
+        def read_output(pipe, lines, file=None):
+            for line in iter(pipe.readline, ''):
+                lines.append(line)
+                print(line, end="", flush=True, file=file)
+            pipe.close()
+
+        stdout_thread = threading.Thread(target=read_output, args=(process.stdout, stdout_lines))
+        stderr_thread = threading.Thread(target=read_output, args=(process.stderr, stderr_lines, sys.stderr))
+
+        stdout_thread.start()
+        stderr_thread.start()
+
+        stdout_thread.join()
+        stderr_thread.join()
+        process.wait()
+    else:
+        # On Unix, use selectors for more efficient I/O
+        import selectors
+        sel = selectors.DefaultSelector()
+        sel.register(process.stdout, selectors.EVENT_READ)  # type: ignore[arg-type]
+        sel.register(process.stderr, selectors.EVENT_READ)  # type: ignore[arg-type]
+
+        while process.poll() is None or sel.get_map():
+            for key, _ in sel.select(timeout=0.1):
+                line = key.fileobj.readline()  # type: ignore[union-attr]
+                if not line:
+                    sel.unregister(key.fileobj)
+                    continue
+                if key.fileobj == process.stdout:
+                    stdout_lines.append(line)
+                    print(line, end="", flush=True)
+                else:
+                    stderr_lines.append(line)
+                    print(line, end="", file=sys.stderr, flush=True)
+
+        sel.close()
+        process.wait()

     write_log(
         {

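The Windows branch in the diff above boils down to one idea: drain each child pipe on its own thread so neither stream can block the other. A minimal, runnable sketch of that thread-based reader (standalone, not tied to install.py; the child command here is purely illustrative):

```python
import subprocess
import sys
import threading

def read_output(pipe, lines):
    # Drain one pipe line-by-line until EOF so the child never
    # blocks on a full OS pipe buffer.
    for line in iter(pipe.readline, ''):
        lines.append(line)
    pipe.close()

# Child process that writes to both stdout and stderr.
proc = subprocess.Popen(
    [sys.executable, "-c", "import sys; print('out'); print('err', file=sys.stderr)"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
)
stdout_lines, stderr_lines = [], []
threads = [
    threading.Thread(target=read_output, args=(proc.stdout, stdout_lines)),
    threading.Thread(target=read_output, args=(proc.stderr, stderr_lines)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
proc.wait()
print(stdout_lines, stderr_lines)
```

Unlike the selectors path, this works on Windows, where `selectors` cannot wait on anonymous pipes.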
install.sh (15 changes)
@@ -1,12 +1,15 @@
 #!/bin/bash
 set -e

-echo "⚠️  WARNING: install.sh is LEGACY and will be removed in future versions."
-echo "Please use the new installation method:"
-echo "  python3 install.py --install-dir ~/.claude"
-echo ""
-echo "Continuing with legacy installation in 5 seconds..."
-sleep 5
+if [ -z "${SKIP_WARNING:-}" ]; then
+    echo "⚠️  WARNING: install.sh is LEGACY and will be removed in future versions."
+    echo "Please use the new installation method:"
+    echo "  python3 install.py --install-dir ~/.claude"
+    echo ""
+    echo "Set SKIP_WARNING=1 to bypass this message"
+    echo "Continuing with legacy installation in 5 seconds..."
+    sleep 5
+fi

 # Detect platform
 OS=$(uname -s | tr '[:upper:]' '[:lower:]')

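With the new guard, automation can skip the interactive pause. A hypothetical invocation, assuming the script is run from the repository root:

```shell
SKIP_WARNING=1 ./install.sh
```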
@@ -104,6 +104,10 @@ You adhere to core software engineering principles like KISS (Keep It Simple, St

 ## Implementation Constraints

+### Language Rules
+- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
+- **Technical Terms**: Keep technical terms (API, SQL, CRUD, etc.) in English; translate explanatory text only.
+
 ### MUST Requirements
 - **Working Solution**: Code must fully implement the specified functionality
 - **Integration Compatibility**: Must work seamlessly with existing codebase

@@ -88,6 +88,10 @@ Each phase should be independently deployable and testable.

 ## Key Constraints

+### Language Rules
+- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
+- **Technical Terms**: Keep technical terms (API, SQL, CRUD, etc.) in English; translate explanatory text only.
+
 ### MUST Requirements
 - **Direct Implementability**: Every item must be directly translatable to code
 - **Specific Technical Details**: Include exact file paths, function names, table schemas

@@ -176,6 +176,10 @@ You adhere to core software engineering principles like KISS (Keep It Simple, St

 ## Key Constraints

+### Language Rules
+- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
+- **Technical Terms**: Keep technical terms (API, E2E, CI/CD, etc.) in English; translate explanatory text only.
+
 ### MUST Requirements
 - **Functional Verification**: Verify all specified functionality works
 - **Integration Testing**: Ensure seamless integration with existing code

@@ -199,6 +199,10 @@ func TestAPIEndpoint(t *testing.T) {

 ## Key Constraints

+### Language Rules
+- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc). When language is ambiguous, default to Chinese.
+- **Technical Terms**: Keep technical terms (API, E2E, CI/CD, Mock, etc.) in English; translate explanatory text only.
+
 ### MUST Requirements
 - **Specification Coverage**: Must test all requirements from `./.claude/specs/{feature_name}/requirements-spec.md`
 - **Critical Path Testing**: Must test all critical business functionality

@@ -39,11 +39,35 @@ codeagent-wrapper --backend gemini "simple task"

 ## Backends

-| Backend | Command | Description |
-|---------|---------|-------------|
-| codex | `--backend codex` | OpenAI Codex (default) |
-| claude | `--backend claude` | Anthropic Claude |
-| gemini | `--backend gemini` | Google Gemini |
+| Backend | Command | Description | Best For |
+|---------|---------|-------------|----------|
+| codex | `--backend codex` | OpenAI Codex (default) | Code analysis, complex development |
+| claude | `--backend claude` | Anthropic Claude | Simple tasks, documentation, prompts |
+| gemini | `--backend gemini` | Google Gemini | UI/UX prototyping |
+
+### Backend Selection Guide
+
+**Codex** (default):
+- Deep code understanding and complex logic implementation
+- Large-scale refactoring with precise dependency tracking
+- Algorithm optimization and performance tuning
+- Example: "Analyze the call graph of @src/core and refactor the module dependency structure"
+
+**Claude**:
+- Quick feature implementation with clear requirements
+- Technical documentation, API specs, README generation
+- Professional prompt engineering (e.g., product requirements, design specs)
+- Example: "Generate a comprehensive README for @package.json with installation, usage, and API docs"
+
+**Gemini**:
+- UI component scaffolding and layout prototyping
+- Design system implementation with style consistency
+- Interactive element generation with accessibility support
+- Example: "Create a responsive dashboard layout with sidebar navigation and data visualization cards"
+
+**Backend Switching**:
+- Start with Codex for analysis, switch to Claude for documentation, then Gemini for UI implementation
+- Use per-task backend selection in parallel mode to optimize for each task's strengths
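The new selection guide in practice might look like the following session (hypothetical; only the `--backend` flag comes from the table above, the task strings are illustrative):

```shell
# Analysis with the default Codex backend
codeagent-wrapper "Analyze the call graph of @src/core"

# Documentation with Claude
codeagent-wrapper --backend claude "Generate a README for @package.json"

# UI prototyping with Gemini
codeagent-wrapper --backend gemini "Create a responsive dashboard layout"
```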

 ## Parameters