Mirror of https://github.com/cexll/myclaude.git (synced 2026-02-05 02:30:26 +08:00)
Compare commits: rc/5.2 ... fix/preven (19 commits)
| SHA1 |
|---|
| a09c103cfb |
| b3f8fcfea6 |
| 806bb04a35 |
| b1156038de |
| 0c93bbe574 |
| 6f4f4e701b |
| ff301507fe |
| 93b72eba42 |
| b01758e7e1 |
| c51b38c671 |
| b227fee225 |
| 2b7569335b |
| 9e667f0895 |
| 4759eb2c42 |
| edbf168b57 |
| 9bfea81ca6 |
| a9bcea45f5 |
| 8554da6e2f |
| b2f941af5f |
.github/workflows/release.yml (vendored): 30 lines changed
@@ -104,10 +104,38 @@ jobs:
          cp install.sh install.bat release/
          ls -la release/

+     - name: Extract release notes from CHANGELOG
+       id: extract_notes
+       run: |
+         VERSION=${GITHUB_REF#refs/tags/v}
+
+         # Extract version section from CHANGELOG.md
+         awk -v ver="$VERSION" '
+           /^## [0-9]+\.[0-9]+\.[0-9]+ - / {
+             if (found) exit
+             if ($2 == ver) {
+               found = 1
+               next
+             }
+           }
+           found && /^## / { exit }
+           found { print }
+         ' CHANGELOG.md > release_notes.md
+
+         # Fallback to auto-generated if extraction failed
+         if [ ! -s release_notes.md ]; then
+           echo "⚠️ No release notes found in CHANGELOG.md for version $VERSION" > release_notes.md
+           echo "" >> release_notes.md
+           echo "## What's Changed" >> release_notes.md
+           echo "See commits in this release for details." >> release_notes.md
+         fi
+
+         cat release_notes.md

      - name: Create Release
        uses: softprops/action-gh-release@v2
        with:
          files: release/*
-         generate_release_notes: true
+         body_path: release_notes.md
          draft: false
          prerelease: false
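For illustration only, here is a rough Go equivalent of the awk extraction above. The workflow itself runs awk; `extractSection` and the hard-coded default version are hypothetical.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// extractSection mimics the awk script: collect lines after the
// "## <ver> - <date>" heading whose second field equals ver, and stop
// at the next "## " heading.
func extractSection(lines []string, ver string) string {
	var out []string
	found := false
	for _, line := range lines {
		if strings.HasPrefix(line, "## ") {
			if found {
				break
			}
			fields := strings.Fields(line)
			if len(fields) >= 2 && fields[1] == ver {
				found = true
			}
			continue
		}
		if found {
			out = append(out, line)
		}
	}
	return strings.Join(out, "\n")
}

func main() {
	data, err := os.ReadFile("CHANGELOG.md")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ver := "5.2.0" // hypothetical default; pass a real tag instead
	if len(os.Args) > 1 {
		ver = os.Args[1]
	}
	fmt.Println(extractSection(strings.Split(string(data), "\n"), ver))
}
```

Note the same caveat applies to both versions: the extraction keys on the heading's second field, so it matches `## 5.2.3 - ...` style headings rather than the bracketed `## [5.2.3] - ...` style used below, which is what the fallback branch covers.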
.gitignore (vendored): 1 line changed
@@ -4,3 +4,4 @@
 .pytest_cache
 __pycache__
 .coverage
+coverage.out
CHANGELOG.md: 200 lines changed
@@ -1,6 +1,198 @@
 # Changelog

-## 5.2.0 - 2025-12-11
-- PR #53: unified CLI version source in `codeagent-wrapper/main.go` and bumped to 5.2.0.
-- Added legacy `codex-wrapper` alias support (runtime detection plus `scripts/install.sh` symlink helper).
-- Updated documentation to reflect backend flag usage and new version output.
+All notable changes to this project will be documented in this file.
+
+## [5.2.3] - 2025-12-15
+
+### 🐛 Bug Fixes
+
+- *(parser)* Fix bufio.Scanner "token too long" error (#64)
+
+### 🧪 Testing
+
+- Sync version numbers in tests to 5.2.3
+
+## [5.2.2] - 2025-12-13
+
+### 🧪 Testing
+
+- Fix tests for ClaudeBackend default --dangerously-skip-permissions
+
+### ⚙️ Miscellaneous Tasks
+
+- *(v5.2.2)* Bump version and clean up documentation
+
+## [5.2.0] - 2025-12-13
+
+### 🚀 Features
+
+- *(dev-workflow)* Replace Codex with codeagent and add automatic UI detection
+- *(codeagent-wrapper)* Full multi-backend support and security hardening
+- *(install)* Add terminal log output and verbose mode
+- *(v5.2.0)* Improve release notes and installation scripts
+- *(v5.2.0)* Complete skills system integration and config cleanup
+
+### 🐛 Bug Fixes
+
+- *(merge)* Fix build and test issues after the master merge
+- *(parallel)* Fix duplicate startup banner printing in parallel execution
+- *(ci)* Remove .claude config file validation step
+- *(codeagent-wrapper)* Refactor signal handling to avoid duplicate nil checks
+- *(codeagent-wrapper)* Fix permission flag logic and version number tests
+- *(install)* Op_run_command real-time streaming output
+- *(codeagent-wrapper)* Show recent error messages on abnormal exit
+- *(codeagent-wrapper)* Remove binary artifacts and improve error messages
+- *(codeagent-wrapper)* Use -r flag for claude backend resume
+- *(install)* Clarify module list shows default state not enabled
+- *(codeagent-wrapper)* Use -r flag for gemini backend resume
+- *(codeagent-wrapper)* Add worker limit cap and remove legacy alias
+- *(codeagent-wrapper)* Fix race condition in stdout parsing
+
+### 🚜 Refactor
+
+- *(pr-53)* Adjust file naming and skill definitions
+
+### 📚 Documentation
+
+- *(changelog)* Remove GitHub workflow related content
+
+### 🧪 Testing
+
+- *(codeagent-wrapper)* Add ExtractRecentErrors unit tests
+
+### ⚙️ Miscellaneous Tasks
+
+- *(v5.2.0)* Update CHANGELOG and remove deprecated test files
+
+## [5.1.4] - 2025-12-09
+
+### 🐛 Bug Fixes
+
+- *(parallel)* Return the log file path immediately at task start to support real-time debugging
+
+## [5.1.3] - 2025-12-08
+
+### 🐛 Bug Fixes
+
+- *(test)* Resolve CI timing race in TestFakeCmdInfra
+
+## [5.1.2] - 2025-12-08
+
+### 🐛 Bug Fixes
+
+- Fix channel synchronization race conditions and deadlocks
+
+## [5.1.1] - 2025-12-08
+
+### 🐛 Bug Fixes
+
+- *(test)* Resolve data race on forceKillDelay with atomic operations
+- Improve the safety and reliability of log cleanup
+
+### 💼 Other
+
+- Resolve signal handling conflict preserving testability and Windows support
+
+### 🧪 Testing
+
+- Add test coverage, raising it to 89.3%
+
+## [5.1.0] - 2025-12-07
+
+### 🚀 Features
+
+- Implement enterprise workflow with multi-backend support
+- *(cleanup)* Add startup log cleanup and --cleanup flag support
+
+## [5.0.0] - 2025-12-05
+
+### 🚀 Features
+
+- Implement modular installation system
+
+### 🐛 Bug Fixes
+
+- *(codex-wrapper)* Defer startup log until args parsed
+
+### 🚜 Refactor
+
+- Remove deprecated plugin modules
+
+### 📚 Documentation
+
+- Rewrite documentation for v5.0 modular architecture
+
+### ⚙️ Miscellaneous Tasks
+
+- Clarify unit-test coverage levels in requirement questions
+
+## [4.8.2] - 2025-12-02
+
+### 🐛 Bug Fixes
+
+- *(codex-wrapper)* Capture and include stderr in error messages
+- Correct Go version in go.mod from 1.25.3 to 1.21
+- Make forceKillDelay testable to prevent signal test timeout
+- Skip signal test in CI environment
+
+## [4.8.1] - 2025-12-01
+
+### 🐛 Bug Fixes
+
+- *(codex-wrapper)* Improve --parallel parameter validation and docs
+
+### 🎨 Styling
+
+- *(codex-skill)* Replace emoji with text labels
+
+## [4.7.3] - 2025-11-29
+
+### 🚀 Features
+
+- Add async logging to temp file with lifecycle management
+- Add parallel execution support to codex-wrapper
+- Add session resume support and improve output format
+
+### 🐛 Bug Fixes
+
+- *(logger)* Keep log files for post-exit debugging and improve log output
+
+### 📚 Documentation
+
+- Improve codex skill parameter best practices
+
+## [4.7.2] - 2025-11-28
+
+### 🐛 Bug Fixes
+
+- *(main)* Improve buffer size and streamline message extraction
+
+### 🧪 Testing
+
+- *(ParseJSONStream)* Add tests for oversized single-line and non-string text handling
+
+## [4.7] - 2025-11-27
+
+### 🐛 Bug Fixes
+
+- Update repository URLs to cexll/myclaude
+
+## [4.4] - 2025-11-22
+
+### 🚀 Features
+
+- Support configuring the skills model via environment variables
+
+## [4.1] - 2025-11-04
+
+### 📚 Documentation
+
+- Add the /enhance-prompt command and update all README docs
+
+## [3.1] - 2025-09-17
+
+### 💼 Other
+
+- Sync READMEs with actual commands/agents; remove nonexistent commands; enhance requirements-pilot with testing decision gate and options.
+
+<!-- generated by git-cliff -->
README.md: 21 lines changed
@@ -1,3 +1,5 @@
+[中文](README_CN.md) [English](README.md)
+
 # Claude Code Multi-Agent Workflow System

 [](https://smithery.ai/skills?ns=cexll&utm_source=github&utm_medium=badge)

@@ -5,23 +7,23 @@
 [](https://www.gnu.org/licenses/agpl-3.0)
 [](https://claude.ai/code)
 [](https://github.com/cexll/myclaude)
 [](https://github.com/cexll/myclaude)

-> AI-powered development automation with Claude Code + Codex collaboration
+> AI-powered development automation with multi-backend execution (Codex/Claude/Gemini)

-## Core Concept: Claude Code + Codex
+## Core Concept: Multi-Backend Architecture

-This system leverages a **dual-agent architecture**:
+This system leverages a **dual-agent architecture** with pluggable AI backends:

 | Role | Agent | Responsibility |
 |------|-------|----------------|
 | **Orchestrator** | Claude Code | Planning, context gathering, verification, user interaction |
-| **Executor** | Codex | Code editing, test execution, file operations |
+| **Executor** | codeagent-wrapper | Code editing, test execution (Codex/Claude/Gemini backends) |

 **Why this separation?**
 - Claude Code excels at understanding context and orchestrating complex workflows
-- Codex excels at focused code generation and execution
-- Together they provide better results than either alone
+- Specialized backends (Codex for code, Claude for reasoning, Gemini for prototyping) excel at focused execution
+- Backend selection via `--backend codex|claude|gemini` matches the model to the task

 ## Quick Start (Please execute in PowerShell on Windows)

@@ -246,7 +248,7 @@ bash install.sh

 #### Windows

-Windows installs place `codex-wrapper.exe` in `%USERPROFILE%\bin`.
+Windows installs place `codeagent-wrapper.exe` in `%USERPROFILE%\bin`.

 ```powershell
 # PowerShell (recommended)
 ```

@@ -318,13 +320,10 @@ python3 install.py --module dev --force
 ## Documentation

 ### Core Guides
 - **[Architecture Overview](docs/architecture.md)** - System architecture and component design
 - **[Codeagent-Wrapper Guide](docs/CODEAGENT-WRAPPER.md)** - Multi-backend execution wrapper
 - **[GitHub Workflow Guide](docs/GITHUB-WORKFLOW.md)** - Issue-to-PR automation
 - **[Hooks Documentation](docs/HOOKS.md)** - Custom hooks and automation

 ### Additional Resources
 - **[Enterprise Workflow Ideas](docs/enterprise-workflow-ideas.md)** - Advanced patterns and best practices
 - **[Installation Log](install.log)** - Installation history and troubleshooting

 ---
README_CN.md: 16 lines changed
@@ -2,23 +2,23 @@
 [](https://www.gnu.org/licenses/agpl-3.0)
 [](https://claude.ai/code)
 [](https://github.com/cexll/myclaude)
 [](https://github.com/cexll/myclaude)

-> AI-powered development automation - Claude Code + Codex collaboration
+> AI-powered development automation - multi-backend execution architecture (Codex/Claude/Gemini)

-## Core Concept: Claude Code + Codex
+## Core Concept: Multi-Backend Architecture

-This system uses a **dual-agent architecture**:
+This system uses a **dual-agent architecture** with pluggable AI backends:

 | Role | Agent | Responsibility |
 |------|-------|------|
 | **Orchestrator** | Claude Code | Planning, context gathering, verification, user interaction |
-| **Executor** | Codex | Code editing, test execution, file operations |
+| **Executor** | codeagent-wrapper | Code editing, test execution (Codex/Claude/Gemini backends) |

 **Why separate them?**
 - Claude Code excels at understanding context and orchestrating complex workflows
-- Codex excels at focused code generation and execution
-- Together they outperform either alone
+- Specialized backends (Codex for code, Claude for reasoning, Gemini for prototyping) focus on execution
+- `--backend codex|claude|gemini` matches the model to the task

 ## Quick Start (on Windows, run in PowerShell)

@@ -237,7 +237,7 @@ bash install.sh

 #### Windows

-Windows installs place `codex-wrapper.exe` in `%USERPROFILE%\bin`.
+Windows installs place `codeagent-wrapper.exe` in `%USERPROFILE%\bin`.

 ```powershell
 # PowerShell (recommended)
 ```
@@ -29,26 +29,24 @@ func (ClaudeBackend) BuildArgs(cfg *Config, targetArg string) []string {
 	if cfg == nil {
 		return nil
 	}
-	args := []string{"-p"}
+	args := []string{"-p", "--dangerously-skip-permissions"}

-	// Only skip permissions when explicitly requested
-	if cfg.SkipPermissions {
-		args = append(args, "--dangerously-skip-permissions")
-	}
+	// if cfg.SkipPermissions {
+	// 	args = append(args, "--dangerously-skip-permissions")
+	// }

-	workdir := cfg.WorkDir
-	if workdir == "" {
-		workdir = defaultWorkdir
-	}
+	// Prevent infinite recursion: disable all setting sources (user, project, local)
+	// This ensures a clean execution environment without CLAUDE.md or skills that would trigger codeagent
+	args = append(args, "--setting-sources", "")

 	if cfg.Mode == "resume" {
 		if cfg.SessionID != "" {
 			// Claude CLI uses -r <session_id> for resume.
 			args = append(args, "-r", cfg.SessionID)
 		}
-	} else {
-		args = append(args, "-C", workdir)
 	}
+	// Note: claude CLI doesn't support -C flag; workdir set via cmd.Dir

 	args = append(args, "--output-format", "stream-json", "--verbose", targetArg)

@@ -67,18 +65,12 @@ func (GeminiBackend) BuildArgs(cfg *Config, targetArg string) []string {
 	}
 	args := []string{"-o", "stream-json", "-y"}

-	workdir := cfg.WorkDir
-	if workdir == "" {
-		workdir = defaultWorkdir
-	}
-
 	if cfg.Mode == "resume" {
 		if cfg.SessionID != "" {
 			args = append(args, "-r", cfg.SessionID)
 		}
-	} else {
-		args = append(args, "-C", workdir)
 	}
+	// Note: gemini CLI doesn't support -C flag; workdir set via cmd.Dir

 	args = append(args, "-p", targetArg)
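Both hunks replace the `-C` flag with a working directory set on the spawned process itself. A minimal, self-contained sketch of that pattern (hypothetical example, not the wrapper's code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The claude and gemini CLIs accept no -C flag, so instead of
	// passing the workdir as an argument, set it on the child process.
	cmd := exec.Command("pwd")
	cmd.Dir = os.TempDir() // stand-in for cfg.WorkDir

	out, err := cmd.Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "run failed:", err)
		return
	}
	fmt.Printf("child ran in: %s", out)
}
```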
@@ -11,7 +11,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
 	t.Run("new mode uses workdir without skip by default", func(t *testing.T) {
 		cfg := &Config{Mode: "new", WorkDir: "/repo"}
 		got := backend.BuildArgs(cfg, "todo")
-		want := []string{"-p", "-C", "/repo", "--output-format", "stream-json", "--verbose", "todo"}
+		want := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "--output-format", "stream-json", "--verbose", "todo"}
 		if !reflect.DeepEqual(got, want) {
 			t.Fatalf("got %v, want %v", got, want)
 		}

@@ -20,7 +20,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
 	t.Run("new mode opt-in skip permissions with default workdir", func(t *testing.T) {
 		cfg := &Config{Mode: "new", SkipPermissions: true}
 		got := backend.BuildArgs(cfg, "-")
-		want := []string{"-p", "--dangerously-skip-permissions", "-C", defaultWorkdir, "--output-format", "stream-json", "--verbose", "-"}
+		want := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "--output-format", "stream-json", "--verbose", "-"}
 		if !reflect.DeepEqual(got, want) {
 			t.Fatalf("got %v, want %v", got, want)
 		}

@@ -29,7 +29,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
 	t.Run("resume mode uses session id and omits workdir", func(t *testing.T) {
 		cfg := &Config{Mode: "resume", SessionID: "sid-123", WorkDir: "/ignored"}
 		got := backend.BuildArgs(cfg, "resume-task")
-		want := []string{"-p", "-r", "sid-123", "--output-format", "stream-json", "--verbose", "resume-task"}
+		want := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "-r", "sid-123", "--output-format", "stream-json", "--verbose", "resume-task"}
 		if !reflect.DeepEqual(got, want) {
 			t.Fatalf("got %v, want %v", got, want)
 		}

@@ -38,7 +38,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
 	t.Run("resume mode without session still returns base flags", func(t *testing.T) {
 		cfg := &Config{Mode: "resume", WorkDir: "/ignored"}
 		got := backend.BuildArgs(cfg, "follow-up")
-		want := []string{"-p", "--output-format", "stream-json", "--verbose", "follow-up"}
+		want := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "--output-format", "stream-json", "--verbose", "follow-up"}
 		if !reflect.DeepEqual(got, want) {
 			t.Fatalf("got %v, want %v", got, want)
 		}

@@ -56,7 +56,7 @@ func TestClaudeBuildArgs_GeminiAndCodexModes(t *testing.T) {
 	backend := GeminiBackend{}
 	cfg := &Config{Mode: "new", WorkDir: "/workspace"}
 	got := backend.BuildArgs(cfg, "task")
-	want := []string{"-o", "stream-json", "-y", "-C", "/workspace", "-p", "task"}
+	want := []string{"-o", "stream-json", "-y", "-p", "task"}
 	if !reflect.DeepEqual(got, want) {
 		t.Fatalf("got %v, want %v", got, want)
 	}
@@ -23,6 +23,7 @@ type commandRunner interface {
 	StdoutPipe() (io.ReadCloser, error)
 	StdinPipe() (io.WriteCloser, error)
 	SetStderr(io.Writer)
+	SetDir(string)
 	Process() processHandle
 }

@@ -72,6 +73,12 @@ func (r *realCmd) SetStderr(w io.Writer) {
 	}
 }

+func (r *realCmd) SetDir(dir string) {
+	if r.cmd != nil {
+		r.cmd.Dir = dir
+	}
+}
+
 func (r *realCmd) Process() processHandle {
 	if r == nil || r.cmd == nil || r.cmd.Process == nil {
 		return nil

@@ -577,6 +584,12 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe

 	cmd := newCommandRunner(ctx, commandName, codexArgs...)

+	// For backends that don't support -C flag (claude, gemini), set working directory via cmd.Dir
+	// Codex passes workdir via -C flag, so we skip setting Dir for it to avoid conflicts
+	if cfg.Mode != "resume" && commandName != "codex" && cfg.WorkDir != "" {
+		cmd.SetDir(cfg.WorkDir)
+	}
+
 	stderrWriters := []io.Writer{stderrBuf}
 	if stderrLogger != nil {
 		stderrWriters = append(stderrWriters, stderrLogger)

@@ -615,6 +628,20 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
 		stdoutReader = io.TeeReader(stdout, stdoutLogger)
 	}

+	// Start parse goroutine BEFORE starting the command to avoid race condition
+	// where fast-completing commands close stdout before parser starts reading
+	messageSeen := make(chan struct{}, 1)
+	parseCh := make(chan parseResult, 1)
+	go func() {
+		msg, tid := parseJSONStreamInternal(stdoutReader, logWarnFn, logInfoFn, func() {
+			select {
+			case messageSeen <- struct{}{}:
+			default:
+			}
+		})
+		parseCh <- parseResult{message: msg, threadID: tid}
+	}()
+
 	logInfoFn(fmt.Sprintf("Starting %s with args: %s %s...", commandName, commandName, strings.Join(codexArgs[:min(5, len(codexArgs))], " ")))

 	if err := cmd.Start(); err != nil {

@@ -648,18 +675,6 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
 	waitCh := make(chan error, 1)
 	go func() { waitCh <- cmd.Wait() }()

-	messageSeen := make(chan struct{}, 1)
-	parseCh := make(chan parseResult, 1)
-	go func() {
-		msg, tid := parseJSONStreamInternal(stdoutReader, logWarnFn, logInfoFn, func() {
-			select {
-			case messageSeen <- struct{}{}:
-			default:
-			}
-		})
-		parseCh <- parseResult{message: msg, threadID: tid}
-	}()
-
 	var waitErr error
 	var forceKillTimer *forceKillTimer
 	var ctxCancelled bool
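The hunk above moves the parser goroutine ahead of `cmd.Start()`. The point of the ordering is that a fast-exiting child can close its stdout before a late-started reader ever attaches; starting the reader first removes that window. A standalone sketch of the pattern (illustrative names only, not the wrapper's code):

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("echo", `{"type":"thread.started","thread_id":"t-1"}`)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}

	// The reader goroutine is running BEFORE the process starts, so even
	// a command that exits immediately is fully drained.
	done := make(chan string, 1)
	go func() {
		var last string
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			last = sc.Text()
		}
		done <- last
	}()

	if err := cmd.Start(); err != nil { // launch only after the reader is up
		panic(err)
	}
	last := <-done // wait for EOF on stdout...
	_ = cmd.Wait() // ...then reap the child
	fmt.Println("last line:", last)
}
```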
@@ -117,6 +117,7 @@ func (f *execFakeRunner) StdinPipe() (io.WriteCloser, error) {
 	return &writeCloserStub{}, nil
 }
 func (f *execFakeRunner) SetStderr(io.Writer) {}
+func (f *execFakeRunner) SetDir(string)      {}
 func (f *execFakeRunner) Process() processHandle {
 	if f.process != nil {
 		return f.process
codeagent-wrapper/log_writer_limit_test.go (new file): 39 lines
@@ -0,0 +1,39 @@
package main

import (
	"os"
	"strings"
	"testing"
)

func TestLogWriterWriteLimitsBuffer(t *testing.T) {
	defer resetTestHooks()

	logger, err := NewLogger()
	if err != nil {
		t.Fatalf("NewLogger error: %v", err)
	}
	setLogger(logger)
	defer closeLogger()

	lw := newLogWriter("P:", 10)
	_, _ = lw.Write([]byte(strings.Repeat("a", 100)))

	if lw.buf.Len() != 10 {
		t.Fatalf("logWriter buffer len=%d, want %d", lw.buf.Len(), 10)
	}
	if !lw.dropped {
		t.Fatalf("expected logWriter to drop overlong line bytes")
	}

	lw.Flush()
	logger.Flush()
	data, err := os.ReadFile(logger.Path())
	if err != nil {
		t.Fatalf("ReadFile error: %v", err)
	}
	if !strings.Contains(string(data), "P:aaaaaaa...") {
		t.Fatalf("log output missing truncated entry, got %q", string(data))
	}
}
@@ -14,7 +14,7 @@ import (
 )

 const (
-	version           = "5.2.0"
+	version           = "5.2.3"
 	defaultWorkdir    = "."
 	defaultTimeout    = 7200 // seconds
 	codexLogLineLimit = 1000
@@ -250,6 +250,10 @@ func (d *drainBlockingCmd) SetStderr(w io.Writer) {
 	d.inner.SetStderr(w)
 }

+func (d *drainBlockingCmd) SetDir(dir string) {
+	d.inner.SetDir(dir)
+}
+
 func (d *drainBlockingCmd) Process() processHandle {
 	return d.inner.Process()
 }

@@ -504,6 +508,8 @@ func (f *fakeCmd) SetStderr(w io.Writer) {
 	f.stderr = w
 }

+func (f *fakeCmd) SetDir(string) {}
+
 func (f *fakeCmd) Process() processHandle {
 	if f == nil {
 		return nil

@@ -1371,7 +1377,7 @@ func TestBackendBuildArgs_ClaudeBackend(t *testing.T) {
 	backend := ClaudeBackend{}
 	cfg := &Config{Mode: "new", WorkDir: defaultWorkdir}
 	got := backend.BuildArgs(cfg, "todo")
-	want := []string{"-p", "-C", defaultWorkdir, "--output-format", "stream-json", "--verbose", "todo"}
+	want := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "--output-format", "stream-json", "--verbose", "todo"}
 	if len(got) != len(want) {
 		t.Fatalf("length mismatch")
 	}

@@ -1392,7 +1398,7 @@ func TestClaudeBackendBuildArgs_OutputValidation(t *testing.T) {
 	target := "ensure-flags"

 	args := backend.BuildArgs(cfg, target)
-	expectedPrefix := []string{"-p", "--output-format", "stream-json", "--verbose"}
+	expectedPrefix := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "--output-format", "stream-json", "--verbose"}

 	if len(args) != len(expectedPrefix)+1 {
 		t.Fatalf("args length=%d, want %d", len(args), len(expectedPrefix)+1)

@@ -1411,7 +1417,7 @@ func TestBackendBuildArgs_GeminiBackend(t *testing.T) {
 	backend := GeminiBackend{}
 	cfg := &Config{Mode: "new"}
 	got := backend.BuildArgs(cfg, "task")
-	want := []string{"-o", "stream-json", "-y", "-C", defaultWorkdir, "-p", "task"}
+	want := []string{"-o", "stream-json", "-y", "-p", "task"}
 	if len(got) != len(want) {
 		t.Fatalf("length mismatch")
 	}

@@ -2684,7 +2690,7 @@ func TestVersionFlag(t *testing.T) {
 			t.Errorf("exit = %d, want 0", code)
 		}
 	})
-	want := "codeagent-wrapper version 5.2.0\n"
+	want := "codeagent-wrapper version 5.2.3\n"
 	if output != want {
 		t.Fatalf("output = %q, want %q", output, want)
 	}

@@ -2698,7 +2704,7 @@ func TestVersionShortFlag(t *testing.T) {
 			t.Errorf("exit = %d, want 0", code)
 		}
 	})
-	want := "codeagent-wrapper version 5.2.0\n"
+	want := "codeagent-wrapper version 5.2.3\n"
 	if output != want {
 		t.Fatalf("output = %q, want %q", output, want)
 	}

@@ -2712,7 +2718,7 @@ func TestVersionLegacyAlias(t *testing.T) {
 			t.Errorf("exit = %d, want 0", code)
 		}
 	})
-	want := "codex-wrapper version 5.2.0\n"
+	want := "codex-wrapper version 5.2.3\n"
 	if output != want {
 		t.Fatalf("output = %q, want %q", output, want)
 	}
@@ -53,9 +53,22 @@ func parseJSONStreamWithLog(r io.Reader, warnFn func(string), infoFn func(string
 	return parseJSONStreamInternal(r, warnFn, infoFn, nil)
 }

+const (
+	jsonLineReaderSize   = 64 * 1024
+	jsonLineMaxBytes     = 10 * 1024 * 1024
+	jsonLinePreviewBytes = 256
+)
+
+type codexHeader struct {
+	Type     string `json:"type"`
+	ThreadID string `json:"thread_id,omitempty"`
+	Item     *struct {
+		Type string `json:"type"`
+	} `json:"item,omitempty"`
+}
+
 func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(string), onMessage func()) (message, threadID string) {
-	scanner := bufio.NewScanner(r)
-	scanner.Buffer(make([]byte, 64*1024), 10*1024*1024)
+	reader := bufio.NewReaderSize(r, jsonLineReaderSize)

 	if warnFn == nil {
 		warnFn = func(string) {}

@@ -78,79 +91,89 @@ func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(strin
 		geminiBuffer strings.Builder
 	)

-	for scanner.Scan() {
-		line := strings.TrimSpace(scanner.Text())
-		if line == "" {
+	for {
+		line, tooLong, err := readLineWithLimit(reader, jsonLineMaxBytes, jsonLinePreviewBytes)
+		if err != nil {
+			if errors.Is(err, io.EOF) {
+				break
+			}
+			warnFn("Read stdout error: " + err.Error())
+			break
+		}
+
+		line = bytes.TrimSpace(line)
+		if len(line) == 0 {
 			continue
 		}
 		totalEvents++

-		var raw map[string]json.RawMessage
-		if err := json.Unmarshal([]byte(line), &raw); err != nil {
-			warnFn(fmt.Sprintf("Failed to parse line: %s", truncate(line, 100)))
+		if tooLong {
+			warnFn(fmt.Sprintf("Skipped overlong JSON line (> %d bytes): %s", jsonLineMaxBytes, truncateBytes(line, 100)))
 			continue
 		}

-		hasItemType := false
-		if rawItem, ok := raw["item"]; ok {
-			var itemMap map[string]json.RawMessage
-			if err := json.Unmarshal(rawItem, &itemMap); err == nil {
-				if _, ok := itemMap["type"]; ok {
-					hasItemType = true
+		var codex codexHeader
+		if err := json.Unmarshal(line, &codex); err == nil {
+			isCodex := codex.ThreadID != "" || (codex.Item != nil && codex.Item.Type != "")
+			if isCodex {
+				var details []string
+				if codex.ThreadID != "" {
+					details = append(details, fmt.Sprintf("thread_id=%s", codex.ThreadID))
+				}
+				if codex.Item != nil && codex.Item.Type != "" {
+					details = append(details, fmt.Sprintf("item_type=%s", codex.Item.Type))
+				}
+				if len(details) > 0 {
+					infoFn(fmt.Sprintf("Parsed event #%d type=%s (%s)", totalEvents, codex.Type, strings.Join(details, ", ")))
+				} else {
+					infoFn(fmt.Sprintf("Parsed event #%d type=%s", totalEvents, codex.Type))
+				}
+
+				switch codex.Type {
+				case "thread.started":
+					threadID = codex.ThreadID
+					infoFn(fmt.Sprintf("thread.started event thread_id=%s", threadID))
+				case "item.completed":
+					itemType := ""
+					if codex.Item != nil {
+						itemType = codex.Item.Type
+					}
+
+					if itemType == "agent_message" {
+						var event JSONEvent
+						if err := json.Unmarshal(line, &event); err != nil {
+							warnFn(fmt.Sprintf("Failed to parse Codex event: %s", truncateBytes(line, 100)))
+							continue
+						}
+
+						normalized := ""
+						if event.Item != nil {
+							normalized = normalizeText(event.Item.Text)
+						}
+						infoFn(fmt.Sprintf("item.completed event item_type=%s message_len=%d", itemType, len(normalized)))
+						if normalized != "" {
+							codexMessage = normalized
+							notifyMessage()
+						}
+					} else {
+						infoFn(fmt.Sprintf("item.completed event item_type=%s", itemType))
+					}
 				}
+				continue
 			}
 		}

-		isCodex := hasItemType
-		if !isCodex {
-			if _, ok := raw["thread_id"]; ok {
-				isCodex = true
-			}
+		var raw map[string]json.RawMessage
+		if err := json.Unmarshal(line, &raw); err != nil {
+			warnFn(fmt.Sprintf("Failed to parse line: %s", truncateBytes(line, 100)))
+			continue
 		}

 		switch {
-		case isCodex:
-			var event JSONEvent
-			if err := json.Unmarshal([]byte(line), &event); err != nil {
-				warnFn(fmt.Sprintf("Failed to parse Codex event: %s", truncate(line, 100)))
-				continue
-			}
-
-			var details []string
-			if event.ThreadID != "" {
-				details = append(details, fmt.Sprintf("thread_id=%s", event.ThreadID))
-			}
-			if event.Item != nil && event.Item.Type != "" {
-				details = append(details, fmt.Sprintf("item_type=%s", event.Item.Type))
-			}
-			if len(details) > 0 {
-				infoFn(fmt.Sprintf("Parsed event #%d type=%s (%s)", totalEvents, event.Type, strings.Join(details, ", ")))
-			} else {
-				infoFn(fmt.Sprintf("Parsed event #%d type=%s", totalEvents, event.Type))
-			}
-
-			switch event.Type {
-			case "thread.started":
-				threadID = event.ThreadID
-				infoFn(fmt.Sprintf("thread.started event thread_id=%s", threadID))
-			case "item.completed":
-				var itemType string
-				var normalized string
-				if event.Item != nil {
-					itemType = event.Item.Type
-					normalized = normalizeText(event.Item.Text)
-				}
-				infoFn(fmt.Sprintf("item.completed event item_type=%s message_len=%d", itemType, len(normalized)))
-				if event.Item != nil && event.Item.Type == "agent_message" && normalized != "" {
-					codexMessage = normalized
-					notifyMessage()
-				}
-			}
-
 		case hasKey(raw, "subtype") || hasKey(raw, "result"):
 			var event ClaudeEvent
-			if err := json.Unmarshal([]byte(line), &event); err != nil {
-				warnFn(fmt.Sprintf("Failed to parse Claude event: %s", truncate(line, 100)))
+			if err := json.Unmarshal(line, &event); err != nil {
+				warnFn(fmt.Sprintf("Failed to parse Claude event: %s", truncateBytes(line, 100)))
 				continue
 			}

@@ -167,8 +190,8 @@ func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(strin
 		case hasKey(raw, "role") || hasKey(raw, "delta"):
 			var event GeminiEvent
-			if err := json.Unmarshal([]byte(line), &event); err != nil {
-				warnFn(fmt.Sprintf("Failed to parse Gemini event: %s", truncate(line, 100)))
+			if err := json.Unmarshal(line, &event); err != nil {
+				warnFn(fmt.Sprintf("Failed to parse Gemini event: %s", truncateBytes(line, 100)))
 				continue
 			}

@@ -184,14 +207,10 @@ func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(strin
 			infoFn(fmt.Sprintf("Parsed Gemini event #%d type=%s role=%s delta=%t status=%s content_len=%d", totalEvents, event.Type, event.Role, event.Delta, event.Status, len(event.Content)))

 		default:
-			warnFn(fmt.Sprintf("Unknown event format: %s", truncate(line, 100)))
+			warnFn(fmt.Sprintf("Unknown event format: %s", truncateBytes(line, 100)))
 		}
 	}

-	if err := scanner.Err(); err != nil && !errors.Is(err, io.EOF) {
-		warnFn("Read stdout error: " + err.Error())
-	}
-
 	switch {
 	case geminiBuffer.Len() > 0:
 		message = geminiBuffer.String()
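The rework also changes detection order: each line is first decoded into the lightweight codexHeader to see whether it is a Codex event at all, and only then into a generic map for the Claude/Gemini paths. A self-contained sketch of that two-phase sniffing (hypothetical helper with simplified keys, not the wrapper's code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// header mirrors the idea of codexHeader above: decode just enough
// fields to classify the event before committing to a full parse.
type header struct {
	Type     string `json:"type"`
	ThreadID string `json:"thread_id,omitempty"`
	Item     *struct {
		Type string `json:"type"`
	} `json:"item,omitempty"`
}

func classify(line []byte) string {
	var h header
	if err := json.Unmarshal(line, &h); err == nil {
		if h.ThreadID != "" || (h.Item != nil && h.Item.Type != "") {
			return "codex:" + h.Type
		}
	}
	// Fall back to a generic map for non-Codex backends.
	var raw map[string]json.RawMessage
	if err := json.Unmarshal(line, &raw); err != nil {
		return "unparseable"
	}
	if _, ok := raw["result"]; ok {
		return "claude"
	}
	if _, ok := raw["delta"]; ok {
		return "gemini"
	}
	return "unknown"
}

func main() {
	fmt.Println(classify([]byte(`{"type":"thread.started","thread_id":"t-1"}`)))
	fmt.Println(classify([]byte(`{"type":"result","result":"done","subtype":"ok"}`)))
}
```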
@@ -236,6 +255,79 @@ func discardInvalidJSON(decoder *json.Decoder, reader *bufio.Reader) (*bufio.Rea
 	return bufio.NewReader(io.MultiReader(bytes.NewReader(remaining), reader)), err
 }

+func readLineWithLimit(r *bufio.Reader, maxBytes int, previewBytes int) (line []byte, tooLong bool, err error) {
+	if r == nil {
+		return nil, false, errors.New("reader is nil")
+	}
+	if maxBytes <= 0 {
+		return nil, false, errors.New("maxBytes must be > 0")
+	}
+	if previewBytes < 0 {
+		previewBytes = 0
+	}
+
+	part, isPrefix, err := r.ReadLine()
+	if err != nil {
+		return nil, false, err
+	}
+
+	if !isPrefix {
+		if len(part) > maxBytes {
+			return part[:min(len(part), previewBytes)], true, nil
+		}
+		return part, false, nil
+	}
+
+	preview := make([]byte, 0, min(previewBytes, len(part)))
+	if previewBytes > 0 {
+		preview = append(preview, part[:min(previewBytes, len(part))]...)
+	}
+
+	buf := make([]byte, 0, min(maxBytes, len(part)*2))
+	total := 0
+	if len(part) > maxBytes {
+		tooLong = true
+	} else {
+		buf = append(buf, part...)
+		total = len(part)
+	}
+
+	for isPrefix {
+		part, isPrefix, err = r.ReadLine()
+		if err != nil {
+			return nil, tooLong, err
+		}
+
+		if previewBytes > 0 && len(preview) < previewBytes {
+			preview = append(preview, part[:min(previewBytes-len(preview), len(part))]...)
+		}
+
+		if !tooLong {
+			if total+len(part) > maxBytes {
+				tooLong = true
+				continue
+			}
+			buf = append(buf, part...)
+			total += len(part)
+		}
+	}
+
+	if tooLong {
+		return preview, true, nil
+	}
+	return buf, false, nil
+}
+
+func truncateBytes(b []byte, maxLen int) string {
+	if len(b) <= maxLen {
+		return string(b)
+	}
+	if maxLen < 0 {
+		return ""
+	}
+	return string(b[:maxLen]) + "..."
+}
+
 func normalizeText(text interface{}) string {
 	switch v := text.(type) {
 	case string:
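Unlike bufio.Scanner, which aborts the whole stream with ErrTooLong, readLineWithLimit reports an overlong line (returning only a short preview) and lets the caller keep reading. A usage sketch, assuming the function above is compiled into the same package:

```go
package main

import (
	"bufio"
	"errors"
	"fmt"
	"io"
	"strings"
)

func main() {
	// One 5000-byte line followed by a normal JSON line.
	input := strings.Repeat("a", 5000) + "\n" + `{"type":"thread.started"}` + "\n"
	reader := bufio.NewReaderSize(strings.NewReader(input), 64)

	for {
		// readLineWithLimit is the helper added in the diff above.
		line, tooLong, err := readLineWithLimit(reader, 1024, 32)
		if err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			fmt.Println("read error:", err)
			break
		}
		if tooLong {
			fmt.Printf("skipped overlong line, preview=%q\n", line)
			continue // the stream stays usable after a long line
		}
		fmt.Printf("line=%q\n", line)
	}
}
```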
codeagent-wrapper/parser_token_too_long_test.go (new file): 31 lines
@@ -0,0 +1,31 @@
package main

import (
	"strings"
	"testing"
)

func TestParseJSONStream_SkipsOverlongLineAndContinues(t *testing.T) {
	// Exceed the 10MB bufio.Scanner limit in parseJSONStreamInternal.
	tooLong := strings.Repeat("a", 11*1024*1024)

	input := strings.Join([]string{
		`{"type":"item.completed","item":{"type":"other_type","text":"` + tooLong + `"}}`,
		`{"type":"thread.started","thread_id":"t-1"}`,
		`{"type":"item.completed","item":{"type":"agent_message","text":"ok"}}`,
	}, "\n")

	var warns []string
	warnFn := func(msg string) { warns = append(warns, msg) }

	gotMessage, gotThreadID := parseJSONStreamInternal(strings.NewReader(input), warnFn, nil, nil)
	if gotMessage != "ok" {
		t.Fatalf("message=%q, want %q (warns=%v)", gotMessage, "ok", warns)
	}
	if gotThreadID != "t-1" {
		t.Fatalf("threadID=%q, want %q (warns=%v)", gotThreadID, "t-1", warns)
	}
	if len(warns) == 0 || !strings.Contains(warns[0], "Skipped overlong JSON line") {
		t.Fatalf("expected warning about overlong JSON line, got %v", warns)
	}
}
@@ -78,6 +78,7 @@ type logWriter struct {
 	prefix  string
 	maxLen  int
 	buf     bytes.Buffer
+	dropped bool
 }

 func newLogWriter(prefix string, maxLen int) *logWriter {

@@ -94,12 +95,12 @@ func (lw *logWriter) Write(p []byte) (int, error) {
 	total := len(p)
 	for len(p) > 0 {
 		if idx := bytes.IndexByte(p, '\n'); idx >= 0 {
-			lw.buf.Write(p[:idx])
+			lw.writeLimited(p[:idx])
 			lw.logLine(true)
 			p = p[idx+1:]
 			continue
 		}
-		lw.buf.Write(p)
+		lw.writeLimited(p)
 		break
 	}
 	return total, nil

@@ -117,21 +118,53 @@ func (lw *logWriter) logLine(force bool) {
 		return
 	}
 	line := lw.buf.String()
+	dropped := lw.dropped
+	lw.dropped = false
 	lw.buf.Reset()
 	if line == "" && !force {
 		return
 	}
-	if lw.maxLen > 0 && len(line) > lw.maxLen {
-		cutoff := lw.maxLen
-		if cutoff > 3 {
-			line = line[:cutoff-3] + "..."
-		} else {
-			line = line[:cutoff]
+	if lw.maxLen > 0 {
+		if dropped {
+			if lw.maxLen > 3 {
+				line = line[:min(len(line), lw.maxLen-3)] + "..."
+			} else {
+				line = line[:min(len(line), lw.maxLen)]
+			}
+		} else if len(line) > lw.maxLen {
+			cutoff := lw.maxLen
+			if cutoff > 3 {
+				line = line[:cutoff-3] + "..."
+			} else {
+				line = line[:cutoff]
+			}
 		}
 	}
 	logInfo(lw.prefix + line)
 }

+func (lw *logWriter) writeLimited(p []byte) {
+	if lw == nil || len(p) == 0 {
+		return
+	}
+	if lw.maxLen <= 0 {
+		lw.buf.Write(p)
+		return
+	}
+
+	remaining := lw.maxLen - lw.buf.Len()
+	if remaining <= 0 {
+		lw.dropped = true
+		return
+	}
+	if len(p) <= remaining {
+		lw.buf.Write(p)
+		return
+	}
+	lw.buf.Write(p[:remaining])
+	lw.dropped = true
+}
+
 type tailBuffer struct {
 	limit int
 	data  []byte
config.json: 60 lines changed
@@ -20,9 +20,33 @@
 		},
 		{
 			"type": "copy_file",
-			"source": "skills/codex/SKILL.md",
-			"target": "skills/codex/SKILL.md",
-			"description": "Install codex skill"
+			"source": "skills/codeagent/SKILL.md",
+			"target": "skills/codeagent/SKILL.md",
+			"description": "Install codeagent skill"
 		},
+		{
+			"type": "copy_file",
+			"source": "skills/product-requirements/SKILL.md",
+			"target": "skills/product-requirements/SKILL.md",
+			"description": "Install product-requirements skill"
+		},
+		{
+			"type": "copy_file",
+			"source": "skills/prototype-prompt-generator/SKILL.md",
+			"target": "skills/prototype-prompt-generator/SKILL.md",
+			"description": "Install prototype-prompt-generator skill"
+		},
+		{
+			"type": "copy_file",
+			"source": "skills/prototype-prompt-generator/references/prompt-structure.md",
+			"target": "skills/prototype-prompt-generator/references/prompt-structure.md",
+			"description": "Install prototype-prompt-generator prompt structure reference"
+		},
+		{
+			"type": "copy_file",
+			"source": "skills/prototype-prompt-generator/references/design-systems.md",
+			"target": "skills/prototype-prompt-generator/references/design-systems.md",
+			"description": "Install prototype-prompt-generator design systems reference"
+		},
 		{
 			"type": "run_command",

@@ -84,36 +108,6 @@
 				"description": "Copy development commands documentation"
 			}
 		]
 	},
-	"gh": {
-		"enabled": true,
-		"description": "GitHub issue-to-PR workflow with codeagent integration",
-		"operations": [
-			{
-				"type": "merge_dir",
-				"source": "github-workflow",
-				"description": "Merge GitHub workflow commands"
-			},
-			{
-				"type": "copy_file",
-				"source": "skills/codeagent/SKILL.md",
-				"target": "skills/codeagent/SKILL.md",
-				"description": "Install codeagent skill"
-			},
-			{
-				"type": "copy_dir",
-				"source": "hooks",
-				"target": "hooks",
-				"description": "Copy hooks scripts"
-			},
-			{
-				"type": "merge_json",
-				"source": "hooks/hooks-config.json",
-				"target": "settings.json",
-				"merge_key": "hooks",
-				"description": "Merge hooks configuration into settings.json"
-			}
-		]
-	}
 	}
 }
install.bat: 10 lines changed
@@ -9,13 +9,13 @@ set "OS=windows"
 call :detect_arch
 if errorlevel 1 goto :fail

-set "BINARY_NAME=codex-wrapper-%OS%-%ARCH%.exe"
+set "BINARY_NAME=codeagent-wrapper-%OS%-%ARCH%.exe"
 set "URL=https://github.com/%REPO%/releases/%VERSION%/download/%BINARY_NAME%"
-set "TEMP_FILE=%TEMP%\codex-wrapper-%ARCH%-%RANDOM%.exe"
+set "TEMP_FILE=%TEMP%\codeagent-wrapper-%ARCH%-%RANDOM%.exe"
 set "DEST_DIR=%USERPROFILE%\bin"
-set "DEST=%DEST_DIR%\codex-wrapper.exe"
+set "DEST=%DEST_DIR%\codeagent-wrapper.exe"

-echo Downloading codex-wrapper for %ARCH% ...
+echo Downloading codeagent-wrapper for %ARCH% ...
 echo %URL%
 call :download
 if errorlevel 1 goto :fail

@@ -43,7 +43,7 @@ if errorlevel 1 (
 )

 echo.
-echo codex-wrapper installed successfully at:
+echo codeagent-wrapper installed successfully at:
 echo %DEST%

 rem Automatically ensure %USERPROFILE%\bin is in the USER (HKCU) PATH
@@ -1,6 +1,6 @@
 You are Linus Torvalds. Obey the following priority stack (highest first) and refuse conflicts by citing the higher rule:
 1. Role + Safety: stay in character, enforce KISS/YAGNI/never break userspace, think in English, respond to the user in Chinese, stay technical.
-2. Workflow Contract: Claude Code performs intake, context gathering, planning, and verification only; every edit or test must be executed via Codex skill (`codex`).
+2. Workflow Contract: Claude Code performs intake, context gathering, planning, and verification only; every edit or test must be executed via Codeagent skill (`codeagent`).
 3. Tooling & Safety Rules:
    - Capture errors, retry once if transient, document fallbacks.
 4. Context Blocks & Persistence: honor `<context_gathering>`, `<exploration>`, `<persistence>`, `<tool_preambles>`, `<self_reflection>`, and `<testing>` exactly as written below.

@@ -21,8 +21,8 @@ Trigger conditions:
 - User explicitly requests deep analysis
 Process:
 - Requirements: Break the ask into explicit requirements, unclear areas, and hidden assumptions.
-- Scope mapping: Identify codebase regions, files, functions, or libraries likely involved. If unknown, perform targeted parallel searches NOW before planning. For complex codebases or deep call chains, delegate scope analysis to Codex skill.
-- Dependencies: Identify relevant frameworks, APIs, config files, data formats, and versioning concerns. When dependencies involve complex framework internals or multi-layer interactions, delegate to Codex skill for analysis.
+- Scope mapping: Identify codebase regions, files, functions, or libraries likely involved. If unknown, perform targeted parallel searches NOW before planning. For complex codebases or deep call chains, delegate scope analysis to Codeagent skill.
+- Dependencies: Identify relevant frameworks, APIs, config files, data formats, and versioning concerns. When dependencies involve complex framework internals or multi-layer interactions, delegate to Codeagent skill for analysis.
 - Ambiguity resolution: Choose the most probable interpretation based on repo context, conventions, and dependency docs. Document assumptions explicitly.
 - Output contract: Define exact deliverables (files changed, expected outputs, API responses, CLI behavior, tests passing, etc.).
 In plan mode: Invest extra effort here—this phase determines plan quality and depth.
@@ -21,6 +21,29 @@
 			]
 		}
 	},
+	"codeagent": {
+		"type": "execution",
+		"enforcement": "suggest",
+		"priority": "high",
+		"promptTriggers": {
+			"keywords": [
+				"codeagent",
+				"multi-backend",
+				"backend selection",
+				"parallel task",
+				"多后端",
+				"并行任务",
+				"gemini",
+				"claude backend"
+			],
+			"intentPatterns": [
+				"\\bcodeagent\\b",
+				"backend\\s+(codex|claude|gemini)",
+				"parallel.*task",
+				"multi.*backend"
+			]
+		}
+	},
 	"gh-workflow": {
 		"type": "domain",
 		"enforcement": "suggest",

@@ -39,6 +62,55 @@
 				"\\bgithub\\b|\\bgh\\b"
 			]
 		}
 	},
+	"product-requirements": {
+		"type": "domain",
+		"enforcement": "suggest",
+		"priority": "high",
+		"promptTriggers": {
+			"keywords": [
+				"requirements",
+				"prd",
+				"product requirements",
+				"feature specification",
+				"product owner",
+				"需求",
+				"产品需求",
+				"需求文档",
+				"功能规格"
+			],
+			"intentPatterns": [
+				"(product|feature).*requirement",
+				"\\bPRD\\b",
+				"requirements?.*document",
+				"(gather|collect|define|write).*requirement"
+			]
+		}
+	},
+	"prototype-prompt-generator": {
+		"type": "domain",
+		"enforcement": "suggest",
+		"priority": "medium",
+		"promptTriggers": {
+			"keywords": [
+				"prototype",
+				"ui design",
+				"ux design",
+				"mobile app design",
+				"web app design",
+				"原型",
+				"界面设计",
+				"UI设计",
+				"UX设计",
+				"移动应用设计"
+			],
+			"intentPatterns": [
+				"(create|generate|design).*(prototype|UI|UX|interface)",
+				"(mobile|web).*app.*(design|prototype)",
+				"prototype.*prompt",
+				"design.*system"
+			]
+		}
+	}
 	}
 }
@@ -1,76 +0,0 @@
import copy
import json
import unittest
from pathlib import Path

import jsonschema


CONFIG_PATH = Path(__file__).resolve().parents[1] / "config.json"
SCHEMA_PATH = Path(__file__).resolve().parents[1] / "config.schema.json"
ROOT = CONFIG_PATH.parent


def load_config():
    with CONFIG_PATH.open(encoding="utf-8") as f:
        return json.load(f)


def load_schema():
    with SCHEMA_PATH.open(encoding="utf-8") as f:
        return json.load(f)


class ConfigSchemaTest(unittest.TestCase):
    def test_config_matches_schema(self):
        config = load_config()
        schema = load_schema()
        jsonschema.validate(config, schema)

    def test_required_modules_present(self):
        modules = load_config()["modules"]
        self.assertEqual(set(modules.keys()), {"dev", "bmad", "requirements", "essentials", "advanced"})

    def test_enabled_defaults_and_flags(self):
        modules = load_config()["modules"]
        self.assertTrue(modules["dev"]["enabled"])
        self.assertTrue(modules["essentials"]["enabled"])
        self.assertFalse(modules["bmad"]["enabled"])
        self.assertFalse(modules["requirements"]["enabled"])
        self.assertFalse(modules["advanced"]["enabled"])

    def test_operations_have_expected_shape(self):
        config = load_config()
        for name, module in config["modules"].items():
            self.assertTrue(module["operations"], f"{name} should declare at least one operation")
            for op in module["operations"]:
                self.assertIn("type", op)
                if op["type"] in {"copy_dir", "copy_file"}:
                    self.assertTrue(op.get("source"), f"{name} operation missing source")
                    self.assertTrue(op.get("target"), f"{name} operation missing target")
                elif op["type"] == "run_command":
                    self.assertTrue(op.get("command"), f"{name} run_command missing command")
                    if "env" in op:
                        self.assertIsInstance(op["env"], dict)
                else:
                    self.fail(f"Unsupported operation type: {op['type']}")

    def test_operation_sources_exist_on_disk(self):
        config = load_config()
        for module in config["modules"].values():
            for op in module["operations"]:
                if op["type"] in {"copy_dir", "copy_file"}:
                    path = (ROOT / op["source"]).expanduser()
                    self.assertTrue(path.exists(), f"Source path not found: {path}")

    def test_schema_rejects_invalid_operation_type(self):
        config = load_config()
        invalid = copy.deepcopy(config)
        invalid["modules"]["dev"]["operations"][0]["type"] = "unknown_op"
        schema = load_schema()
        with self.assertRaises(jsonschema.exceptions.ValidationError):
            jsonschema.validate(invalid, schema)


if __name__ == "__main__":
    unittest.main()
@@ -1,458 +0,0 @@
|
||||
import json
|
||||
import os
|
||||
import shutil
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
import pytest
|
||||
|
||||
import install
|
||||
|
||||
|
||||
ROOT = Path(__file__).resolve().parents[1]
|
||||
SCHEMA_PATH = ROOT / "config.schema.json"
|
||||
|
||||
|
||||
def write_config(tmp_path: Path, config: dict) -> Path:
|
||||
cfg_path = tmp_path / "config.json"
|
||||
cfg_path.write_text(json.dumps(config), encoding="utf-8")
|
||||
shutil.copy(SCHEMA_PATH, tmp_path / "config.schema.json")
|
||||
return cfg_path
|
||||
|
||||
|
||||
@pytest.fixture()
|
||||
def valid_config(tmp_path):
|
||||
sample_file = tmp_path / "sample.txt"
|
||||
sample_file.write_text("hello", encoding="utf-8")
|
||||
|
||||
sample_dir = tmp_path / "sample_dir"
|
||||
sample_dir.mkdir()
|
||||
(sample_dir / "f.txt").write_text("dir", encoding="utf-8")
|
||||
|
||||
config = {
|
||||
"version": "1.0",
|
||||
"install_dir": "~/.fromconfig",
|
||||
"log_file": "install.log",
|
||||
"modules": {
|
||||
"dev": {
|
||||
"enabled": True,
|
||||
"description": "dev module",
|
||||
"operations": [
|
||||
{"type": "copy_dir", "source": "sample_dir", "target": "devcopy"}
|
||||
],
|
||||
},
|
||||
"bmad": {
|
||||
"enabled": False,
|
||||
"description": "bmad",
|
||||
"operations": [
|
||||
{"type": "copy_file", "source": "sample.txt", "target": "bmad.txt"}
|
||||
],
|
||||
},
|
||||
"requirements": {
|
||||
"enabled": False,
|
||||
"description": "reqs",
|
||||
"operations": [
|
||||
{"type": "copy_file", "source": "sample.txt", "target": "req.txt"}
|
||||
],
|
||||
},
|
||||
"essentials": {
|
||||
"enabled": True,
|
||||
"description": "ess",
|
||||
"operations": [
|
||||
{"type": "copy_file", "source": "sample.txt", "target": "ess.txt"}
|
||||
],
|
||||
},
|
||||
"advanced": {
|
||||
"enabled": False,
|
||||
"description": "adv",
|
||||
"operations": [
|
||||
{"type": "copy_file", "source": "sample.txt", "target": "adv.txt"}
|
||||
],
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
cfg_path = write_config(tmp_path, config)
|
||||
return cfg_path, config
|
||||
|
||||
|
||||
def make_ctx(tmp_path: Path) -> dict:
|
||||
install_dir = tmp_path / "install"
|
||||
return {
|
||||
"install_dir": install_dir,
|
||||
"log_file": install_dir / "install.log",
|
||||
"status_file": install_dir / "installed_modules.json",
|
||||
"config_dir": tmp_path,
|
||||
"force": False,
|
||||
}
|
||||
|
||||
|
||||
def test_parse_args_defaults():
|
||||
args = install.parse_args([])
|
||||
assert args.install_dir == install.DEFAULT_INSTALL_DIR
|
||||
assert args.config == "config.json"
|
||||
assert args.module is None
|
||||
assert args.list_modules is False
|
||||
assert args.force is False
|
||||
|
||||
|
||||
def test_parse_args_custom():
|
||||
args = install.parse_args(
|
||||
[
|
||||
"--install-dir",
|
||||
"/tmp/custom",
|
||||
"--module",
|
||||
"dev,bmad",
|
||||
"--config",
|
||||
"/tmp/cfg.json",
|
||||
"--list-modules",
|
||||
"--force",
|
||||
]
|
||||
)
|
||||
|
||||
assert args.install_dir == "/tmp/custom"
|
||||
assert args.module == "dev,bmad"
|
||||
assert args.config == "/tmp/cfg.json"
|
||||
assert args.list_modules is True
|
||||
assert args.force is True
|
||||
|
||||
|
||||
def test_load_config_success(valid_config):
|
||||
cfg_path, config_data = valid_config
|
||||
loaded = install.load_config(str(cfg_path))
|
||||
assert loaded["modules"]["dev"]["description"] == config_data["modules"]["dev"]["description"]
|
||||
|
||||
|
||||
def test_load_config_invalid_json(tmp_path):
|
||||
bad = tmp_path / "bad.json"
|
||||
bad.write_text("{broken", encoding="utf-8")
|
||||
shutil.copy(SCHEMA_PATH, tmp_path / "config.schema.json")
|
||||
with pytest.raises(ValueError):
|
||||
install.load_config(str(bad))
|
||||
|
||||
|
||||
def test_load_config_schema_error(tmp_path):
|
||||
cfg = tmp_path / "cfg.json"
|
||||
cfg.write_text(json.dumps({"version": "1.0"}), encoding="utf-8")
|
||||
shutil.copy(SCHEMA_PATH, tmp_path / "config.schema.json")
|
||||
with pytest.raises(ValueError):
|
||||
install.load_config(str(cfg))
|
||||
|
||||
|
||||
def test_resolve_paths_respects_priority(tmp_path):
|
||||
config = {
|
||||
"install_dir": str(tmp_path / "from_config"),
|
||||
"log_file": "logs/install.log",
|
||||
"modules": {},
|
||||
"version": "1.0",
|
||||
}
|
||||
cfg_path = write_config(tmp_path, config)
|
||||
args = install.parse_args(["--config", str(cfg_path)])
|
||||
|
||||
ctx = install.resolve_paths(config, args)
|
||||
assert ctx["install_dir"] == (tmp_path / "from_config").resolve()
|
||||
assert ctx["log_file"] == (tmp_path / "from_config" / "logs" / "install.log").resolve()
|
||||
assert ctx["config_dir"] == tmp_path.resolve()
|
||||
|
||||
cli_args = install.parse_args(
|
||||
["--install-dir", str(tmp_path / "cli_dir"), "--config", str(cfg_path)]
|
||||
)
|
||||
ctx_cli = install.resolve_paths(config, cli_args)
|
||||
assert ctx_cli["install_dir"] == (tmp_path / "cli_dir").resolve()


def test_list_modules_output(valid_config, capsys):
    _, config_data = valid_config
    install.list_modules(config_data)
    captured = capsys.readouterr().out
    assert "dev" in captured
    assert "essentials" in captured
    assert "✓" in captured


def test_select_modules_behaviour(valid_config):
    _, config_data = valid_config

    selected_default = install.select_modules(config_data, None)
    assert set(selected_default.keys()) == {"dev", "essentials"}

    selected_specific = install.select_modules(config_data, "bmad")
    assert set(selected_specific.keys()) == {"bmad"}

    with pytest.raises(ValueError):
        install.select_modules(config_data, "missing")
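
# select_modules' behaviour is fully pinned by the test above: None selects
# every module whose "enabled" flag is true, a comma-separated string selects
# exactly the named modules, and an unknown name raises ValueError. A sketch
# under those assumptions (the real implementation may differ in detail):
def select_modules_sketch(config: dict, module_arg: str | None) -> dict:
    modules = config["modules"]
    if module_arg is None:
        return {name: mod for name, mod in modules.items() if mod.get("enabled")}
    selected = {}
    for name in (part.strip() for part in module_arg.split(",")):
        if name not in modules:
            raise ValueError(f"unknown module: {name}")
        selected[name] = modules[name]
    return selected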


def test_ensure_install_dir(tmp_path, monkeypatch):
    target = tmp_path / "install_here"
    install.ensure_install_dir(target)
    assert target.is_dir()

    file_path = tmp_path / "conflict"
    file_path.write_text("x", encoding="utf-8")
    with pytest.raises(NotADirectoryError):
        install.ensure_install_dir(file_path)

    blocked = tmp_path / "blocked"
    real_access = os.access

    def fake_access(path, mode):
        if Path(path) == blocked:
            return False
        return real_access(path, mode)

    monkeypatch.setattr(os, "access", fake_access)
    with pytest.raises(PermissionError):
        install.ensure_install_dir(blocked)


def test_op_copy_dir_respects_force(tmp_path):
    ctx = make_ctx(tmp_path)
    install.ensure_install_dir(ctx["install_dir"])

    src = tmp_path / "src"
    src.mkdir()
    (src / "a.txt").write_text("one", encoding="utf-8")

    op = {"type": "copy_dir", "source": "src", "target": "dest"}
    install.op_copy_dir(op, ctx)
    target_file = ctx["install_dir"] / "dest" / "a.txt"
    assert target_file.read_text(encoding="utf-8") == "one"

    (src / "a.txt").write_text("two", encoding="utf-8")
    install.op_copy_dir(op, ctx)
    assert target_file.read_text(encoding="utf-8") == "one"

    ctx["force"] = True
    install.op_copy_dir(op, ctx)
    assert target_file.read_text(encoding="utf-8") == "two"


def test_op_copy_file_behaviour(tmp_path):
    ctx = make_ctx(tmp_path)
    install.ensure_install_dir(ctx["install_dir"])

    src = tmp_path / "file.txt"
    src.write_text("first", encoding="utf-8")

    op = {"type": "copy_file", "source": "file.txt", "target": "out/file.txt"}
    install.op_copy_file(op, ctx)
    dst = ctx["install_dir"] / "out" / "file.txt"
    assert dst.read_text(encoding="utf-8") == "first"

    src.write_text("second", encoding="utf-8")
    install.op_copy_file(op, ctx)
    assert dst.read_text(encoding="utf-8") == "first"

    ctx["force"] = True
    install.op_copy_file(op, ctx)
    assert dst.read_text(encoding="utf-8") == "second"
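
# Both copy operations above share one overwrite contract: an existing target
# is left untouched unless ctx["force"] is set. A sketch of that shared guard;
# the real op_copy_file/op_copy_dir in install.py may be structured differently.
import shutil
from pathlib import Path


def copy_with_force_sketch(src: Path, dst: Path, force: bool) -> None:
    if dst.exists() and not force:
        return  # keep the first installed copy, as the tests verify
    dst.parent.mkdir(parents=True, exist_ok=True)
    if src.is_dir():
        if dst.exists():
            shutil.rmtree(dst)
        shutil.copytree(src, dst)
    else:
        shutil.copy2(src, dst)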


def test_op_run_command_success(tmp_path):
    ctx = make_ctx(tmp_path)
    install.ensure_install_dir(ctx["install_dir"])
    install.op_run_command({"type": "run_command", "command": "echo hello"}, ctx)
    log_content = ctx["log_file"].read_text(encoding="utf-8")
    assert "hello" in log_content


def test_op_run_command_failure(tmp_path):
    ctx = make_ctx(tmp_path)
    install.ensure_install_dir(ctx["install_dir"])
    with pytest.raises(RuntimeError):
        install.op_run_command(
            {"type": "run_command", "command": f"{sys.executable} -c 'import sys; sys.exit(2)'"},
            ctx,
        )
    log_content = ctx["log_file"].read_text(encoding="utf-8")
    assert "returncode: 2" in log_content


def test_execute_module_success(tmp_path):
    ctx = make_ctx(tmp_path)
    install.ensure_install_dir(ctx["install_dir"])
    src = tmp_path / "src.txt"
    src.write_text("data", encoding="utf-8")

    cfg = {"operations": [{"type": "copy_file", "source": "src.txt", "target": "out.txt"}]}
    result = install.execute_module("demo", cfg, ctx)
    assert result["status"] == "success"
    assert (ctx["install_dir"] / "out.txt").read_text(encoding="utf-8") == "data"


def test_execute_module_failure_logs_and_stops(tmp_path):
    ctx = make_ctx(tmp_path)
    install.ensure_install_dir(ctx["install_dir"])
    cfg = {"operations": [{"type": "unknown", "source": "", "target": ""}]}

    with pytest.raises(ValueError):
        install.execute_module("demo", cfg, ctx)

    log_content = ctx["log_file"].read_text(encoding="utf-8")
    assert "failed on unknown" in log_content


def test_write_log_and_status(tmp_path):
    ctx = make_ctx(tmp_path)
    install.ensure_install_dir(ctx["install_dir"])

    install.write_log({"level": "INFO", "message": "hello"}, ctx)
    content = ctx["log_file"].read_text(encoding="utf-8")
    assert "hello" in content

    results = [
        {"module": "dev", "status": "success", "operations": [], "installed_at": "ts"}
    ]
    install.write_status(results, ctx)
    status_data = json.loads(ctx["status_file"].read_text(encoding="utf-8"))
    assert status_data["modules"]["dev"]["status"] == "success"


def test_main_success(valid_config, tmp_path):
    cfg_path, _ = valid_config
    install_dir = tmp_path / "install_final"
    rc = install.main(
        [
            "--config",
            str(cfg_path),
            "--install-dir",
            str(install_dir),
            "--module",
            "dev",
        ]
    )

    assert rc == 0
    assert (install_dir / "devcopy" / "f.txt").exists()
    assert (install_dir / "installed_modules.json").exists()


def test_main_failure_without_force(tmp_path):
    cfg = {
        "version": "1.0",
        "install_dir": "~/.claude",
        "log_file": "install.log",
        "modules": {
            "dev": {
                "enabled": True,
                "description": "dev",
                "operations": [
                    {
                        "type": "run_command",
                        "command": f"{sys.executable} -c 'import sys; sys.exit(3)'",
                    }
                ],
            },
            "bmad": {
                "enabled": False,
                "description": "bmad",
                "operations": [
                    {"type": "copy_file", "source": "s.txt", "target": "t.txt"}
                ],
            },
            "requirements": {
                "enabled": False,
                "description": "reqs",
                "operations": [
                    {"type": "copy_file", "source": "s.txt", "target": "r.txt"}
                ],
            },
            "essentials": {
                "enabled": False,
                "description": "ess",
                "operations": [
                    {"type": "copy_file", "source": "s.txt", "target": "e.txt"}
                ],
            },
            "advanced": {
                "enabled": False,
                "description": "adv",
                "operations": [
                    {"type": "copy_file", "source": "s.txt", "target": "a.txt"}
                ],
            },
        },
    }

    cfg_path = write_config(tmp_path, cfg)
    install_dir = tmp_path / "fail_install"
    rc = install.main(
        [
            "--config",
            str(cfg_path),
            "--install-dir",
            str(install_dir),
            "--module",
            "dev",
        ]
    )

    assert rc == 1
    assert not (install_dir / "installed_modules.json").exists()


def test_main_force_records_failure(tmp_path):
    cfg = {
        "version": "1.0",
        "install_dir": "~/.claude",
        "log_file": "install.log",
        "modules": {
            "dev": {
                "enabled": True,
                "description": "dev",
                "operations": [
                    {
                        "type": "run_command",
                        "command": f"{sys.executable} -c 'import sys; sys.exit(4)'",
                    }
                ],
            },
            "bmad": {
                "enabled": False,
                "description": "bmad",
                "operations": [
                    {"type": "copy_file", "source": "s.txt", "target": "t.txt"}
                ],
            },
            "requirements": {
                "enabled": False,
                "description": "reqs",
                "operations": [
                    {"type": "copy_file", "source": "s.txt", "target": "r.txt"}
                ],
            },
            "essentials": {
                "enabled": False,
                "description": "ess",
                "operations": [
                    {"type": "copy_file", "source": "s.txt", "target": "e.txt"}
                ],
            },
            "advanced": {
                "enabled": False,
                "description": "adv",
                "operations": [
                    {"type": "copy_file", "source": "s.txt", "target": "a.txt"}
                ],
            },
        },
    }

    cfg_path = write_config(tmp_path, cfg)
    install_dir = tmp_path / "force_install"
    rc = install.main(
        [
            "--config",
            str(cfg_path),
            "--install-dir",
            str(install_dir),
            "--module",
            "dev",
            "--force",
        ]
    )

    assert rc == 0
    status = json.loads((install_dir / "installed_modules.json").read_text(encoding="utf-8"))
    assert status["modules"]["dev"]["status"] == "failed"
@@ -1,224 +0,0 @@
import json
import shutil
import sys
from pathlib import Path

import pytest

import install


ROOT = Path(__file__).resolve().parents[1]
SCHEMA_PATH = ROOT / "config.schema.json"


def _write_schema(target_dir: Path) -> None:
    shutil.copy(SCHEMA_PATH, target_dir / "config.schema.json")


def _base_config(install_dir: Path, modules: dict) -> dict:
    return {
        "version": "1.0",
        "install_dir": str(install_dir),
        "log_file": "install.log",
        "modules": modules,
    }


def _prepare_env(tmp_path: Path, modules: dict) -> tuple[Path, Path, Path]:
    """Create a temp config directory with schema and config.json."""

    config_dir = tmp_path / "config"
    install_dir = tmp_path / "install"
    config_dir.mkdir()
    _write_schema(config_dir)

    cfg_path = config_dir / "config.json"
    cfg_path.write_text(
        json.dumps(_base_config(install_dir, modules)), encoding="utf-8"
    )
    return cfg_path, install_dir, config_dir


def _sample_sources(config_dir: Path) -> dict:
    sample_dir = config_dir / "sample_dir"
    sample_dir.mkdir()
    (sample_dir / "nested.txt").write_text("dir-content", encoding="utf-8")

    sample_file = config_dir / "sample.txt"
    sample_file.write_text("file-content", encoding="utf-8")

    return {"dir": sample_dir, "file": sample_file}


def _read_status(install_dir: Path) -> dict:
    return json.loads((install_dir / "installed_modules.json").read_text("utf-8"))


def test_single_module_full_flow(tmp_path):
    cfg_path, install_dir, config_dir = _prepare_env(
        tmp_path,
        {
            "solo": {
                "enabled": True,
                "description": "single module",
                "operations": [
                    {"type": "copy_dir", "source": "sample_dir", "target": "payload"},
                    {
                        "type": "copy_file",
                        "source": "sample.txt",
                        "target": "payload/sample.txt",
                    },
                    {
                        "type": "run_command",
                        "command": f"{sys.executable} -c \"from pathlib import Path; Path('run.txt').write_text('ok', encoding='utf-8')\"",
                    },
                ],
            }
        },
    )

    _sample_sources(config_dir)
    rc = install.main(["--config", str(cfg_path), "--module", "solo"])

    assert rc == 0
    assert (install_dir / "payload" / "nested.txt").read_text(encoding="utf-8") == "dir-content"
    assert (install_dir / "payload" / "sample.txt").read_text(encoding="utf-8") == "file-content"
    assert (install_dir / "run.txt").read_text(encoding="utf-8") == "ok"

    status = _read_status(install_dir)
    assert status["modules"]["solo"]["status"] == "success"
    assert len(status["modules"]["solo"]["operations"]) == 3


def test_multi_module_install_and_status(tmp_path):
    modules = {
        "alpha": {
            "enabled": True,
            "description": "alpha",
            "operations": [
                {
                    "type": "copy_file",
                    "source": "sample.txt",
                    "target": "alpha.txt",
                }
            ],
        },
        "beta": {
            "enabled": True,
            "description": "beta",
            "operations": [
                {
                    "type": "copy_dir",
                    "source": "sample_dir",
                    "target": "beta_dir",
                }
            ],
        },
    }

    cfg_path, install_dir, config_dir = _prepare_env(tmp_path, modules)
    _sample_sources(config_dir)

    rc = install.main(["--config", str(cfg_path)])
    assert rc == 0

    assert (install_dir / "alpha.txt").read_text(encoding="utf-8") == "file-content"
    assert (install_dir / "beta_dir" / "nested.txt").exists()

    status = _read_status(install_dir)
    assert set(status["modules"].keys()) == {"alpha", "beta"}
    assert all(mod["status"] == "success" for mod in status["modules"].values())


def test_force_overwrites_existing_files(tmp_path):
    modules = {
        "forcey": {
            "enabled": True,
            "description": "force copy",
            "operations": [
                {
                    "type": "copy_file",
                    "source": "sample.txt",
                    "target": "target.txt",
                }
            ],
        }
    }

    cfg_path, install_dir, config_dir = _prepare_env(tmp_path, modules)
    sources = _sample_sources(config_dir)

    install.main(["--config", str(cfg_path), "--module", "forcey"])
    assert (install_dir / "target.txt").read_text(encoding="utf-8") == "file-content"

    sources["file"].write_text("new-content", encoding="utf-8")

    rc = install.main(["--config", str(cfg_path), "--module", "forcey", "--force"])
    assert rc == 0
    assert (install_dir / "target.txt").read_text(encoding="utf-8") == "new-content"

    status = _read_status(install_dir)
    assert status["modules"]["forcey"]["status"] == "success"


def test_failure_triggers_rollback_and_restores_status(tmp_path):
    # First successful run to create a known-good status file.
    ok_modules = {
        "stable": {
            "enabled": True,
            "description": "stable",
            "operations": [
                {
                    "type": "copy_file",
                    "source": "sample.txt",
                    "target": "stable.txt",
                }
            ],
        }
    }

    cfg_path, install_dir, config_dir = _prepare_env(tmp_path, ok_modules)
    _sample_sources(config_dir)
    assert install.main(["--config", str(cfg_path)]) == 0
    pre_status = _read_status(install_dir)
    assert "stable" in pre_status["modules"]

    # Rewrite config to introduce a failing module.
    failing_modules = {
        **ok_modules,
        "broken": {
            "enabled": True,
            "description": "will fail",
            "operations": [
                {
                    "type": "copy_file",
                    "source": "sample.txt",
                    "target": "broken.txt",
                },
                {
                    "type": "run_command",
                    "command": f"{sys.executable} -c 'import sys; sys.exit(5)'",
                },
            ],
        },
    }

    cfg_path.write_text(
        json.dumps(_base_config(install_dir, failing_modules)), encoding="utf-8"
    )

    rc = install.main(["--config", str(cfg_path)])
    assert rc == 1

    # The failed module's file should have been removed by rollback.
    assert not (install_dir / "broken.txt").exists()
    # Previously installed files remain.
    assert (install_dir / "stable.txt").exists()

    restored_status = _read_status(install_dir)
    assert restored_status == pre_status

    log_content = (install_dir / "install.log").read_text(encoding="utf-8")
    assert "Rolling back" in log_content