Compare commits

..

33 Commits
v4.5 ... v4.8.2

Author SHA1 Message Date
cexll
007c27879d fix: skip signal test in CI environment
CI 环境中信号传递不可靠,导致 TestRun_LoggerRemovedOnSignal 超时。
添加 CI 环境检测,在 CI 中跳过此测试,本地保留完整测试覆盖。

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-02 17:43:09 +08:00
cexll
368831da4c fix: make forceKillDelay testable to prevent signal test timeout
将 forceKillDelay 从常量改为变量,在 TestRun_LoggerRemovedOnSignal 中设为 1 秒。
防止测试等待 3 秒超时,而子进程需要 5 秒才能被强制杀死的竞态条件。

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-02 17:35:40 +08:00
cexll
eb84dfa574 fix: correct Go version in go.mod from 1.25.3 to 1.21
修复 go.mod 中的 Go 版本错误(1.25.3 不存在),改为与 CI 一致的 1.21 版本。

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-02 17:12:43 +08:00
cexll
3bc8342929 fix codex wrapper async log 2025-12-02 16:54:43 +08:00
ben
cfc64e8515 Merge pull request #41 from cexll/fix-async-log
Fix async log
2025-12-02 15:51:34 +08:00
cexll
7a40c9d492 remove test case 90 2025-12-02 15:50:49 +08:00
cexll
d51a2f12f8 optimize codex-wrapper 2025-12-02 15:49:36 +08:00
cexll
8a8771076d Merge branch 'master' into fix-async-log
合并master分支的TaskSpec重构和测试改进到fix-async-log分支:
- 保留异步日志系统 (Logger, atomic.Pointer)
- 集成TaskSpec结构和runCodexTask流程
- 合并所有测试钩子 (buildCodexArgsFn, commandContext, jsonMarshal)
- 统一常量定义 (stdinSpecialChars, stderrCaptureLimit, codexLogLineLimit)
- 整合测试套件,确保两分支特性兼容

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-02 10:18:33 +08:00
cexll
e637b26151 fix(codex-wrapper): capture and include stderr in error messages
- Add tailBuffer to capture last 4KB of codex stderr output
- Include stderr in all error messages for better diagnostics
- Use io.MultiWriter to preserve real-time stderr while capturing
- Helps diagnose codex failures instead of just showing exit codes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-02 09:59:38 +08:00
dnslin
595fa8da96 fix(logger): 保留日志文件以便程序退出后调试并完善日志输出功能 2025-12-01 17:55:39 +08:00
cexll
9ba6950d21 style(codex-skill): replace emoji with text labels
替换  emoji 为 # Bad: 文字标记,保持文档简洁专业。

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-01 16:22:32 +08:00
cexll
7f790fbe15 remove codex-wrapper bin 2025-12-01 16:21:57 +08:00
cexll
06f14aa695 fix(codex-wrapper): improve --parallel parameter validation and docs
修复问题:
- codex-wrapper --parallel 模式缺少参数验证,用户误传额外参数导致 shell 解析错误
- 文档中缺少正确 vs 错误用法对比,容易误导用户

主要改进:

1. codex-wrapper/main.go:
   - 添加 --parallel 参数验证 (366-373行)
   - 当检测到额外参数时,输出清晰的错误提示和正确用法示例
   - 更新 --help 文档,添加 --parallel 使用说明

2. skills/codex/SKILL.md:
   - 添加重要提示框,明确 --parallel 只从 stdin 读取配置
   - 新增"正确 vs 错误用法"对比部分,包含3种常见错误示例
   - 修复所有示例中多余的 `-` 参数
   - 在 Delimiter Format 部分强调 workdir 的正确用法

测试验证:
-  所有单元测试通过
-  参数验证功能正常
-  并行执行功能正常
-  中文内容解析正常

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-01 16:18:36 +08:00
cexll
9fa872a1f0 update codex skill dependencies 2025-12-01 00:11:31 +08:00
ben
6d263fe8c9 Merge pull request #34 from cexll/cce-worktree-master-20251129-111802-997076000
feat: add parallel execution support to codex-wrapper
2025-11-30 00:16:10 +08:00
cexll
e55b13c2c5 docs: improve codex skill parameter best practices
Add best practices for task id and workdir parameters:
- id: recommend <feature>_<timestamp> format for uniqueness
- workdir: recommend absolute paths to avoid ambiguity
Update parallel execution example to demonstrate recommended format

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 23:32:44 +08:00
cexll
f95f5f5e88 feat: add session resume support and improve output format
- Support session_id in parallel task config for resuming failed tasks
- Change output format from JSON to human-readable text
- Add helper functions (hello, greet, farewell) with tests
- Clean up code formatting

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 23:14:43 +08:00
cexll
246674c388 feat: add async logging to temp file with lifecycle management
Implement async logging system that writes to /tmp/codex-wrapper-{pid}.log during execution and auto-deletes on exit.

- Add Logger with buffered channel (cap 100) + single worker goroutine
- Support INFO/DEBUG/ERROR levels
- Graceful shutdown via signal.NotifyContext
- File cleanup on normal/signal exit
- Test coverage: 90.4%

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 22:40:19 +08:00
cexll
23c212f8be feat: add parallel execution support to codex-wrapper
- Replace JSON format with delimiter format (---TASK---/---CONTENT---)
- Support unlimited concurrent task execution with dependency management
- Implement Kahn's topological sort for dependency resolution
- Add cycle detection and error isolation
- Change output from JSON to human-readable text format
- Update SKILL.md with parallel execution documentation

Key features:
- No escaping needed for task content (heredoc protected)
- Automatic dependency-based scheduling
- Failed tasks don't block independent tasks
- Text output format for better readability

Test coverage: 89.0%

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-29 22:12:40 +08:00
cexll
90477abb81 update CLAUDE.md and codex skill 2025-11-29 19:11:06 +08:00
ben
11afae2dff Merge pull request #32 from freespace8/master
fix(main): 提升缓冲区限制并简化消息提取流程
2025-11-28 16:49:24 +08:00
freespace8
3df4fec6dd test(ParseJSONStream): 增加对超大单行文本和非字符串文本的处理测试 2025-11-28 15:10:47 +08:00
freespace8
aea19f0e1f fix(main): improve buffer size and streamline message extraction 2025-11-28 15:10:39 +08:00
cexll
291a4e3d0a optimize dev pipline 2025-11-27 22:21:49 +08:00
cexll
957b737126 Merge feat/codex-wrapper: fix repository URLs 2025-11-27 18:01:13 +08:00
cexll
3e30f4e207 fix: update repository URLs to cexll/myclaude
- Update install.sh REPO variable
- Update README.md installation instructions
- Remove obsolete PLUGIN_README.md

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-27 17:53:35 +08:00
ben
b172343235 Merge pull request #29 from cexll/feat/codex-wrapper
Add codex-wrapper Go implementation
2025-11-27 17:13:17 +08:00
cexll
c8a652ec15 Add codex-wrapper Go implementation 2025-11-27 14:33:13 +08:00
cexll
12e47affa9 update readme 2025-11-27 10:19:45 +08:00
cexll
612150f72e update readme 2025-11-26 14:45:12 +08:00
cexll
77d9870094 fix marketplace schema validation error in dev-workflow plugin
Remove invalid skills path that started with "../" instead of required "./" prefix.
The codex skill is already available as a standalone plugin, so dev-workflow can call it directly.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-26 14:39:35 +08:00
cexll
c96c07be2a update dev workflow 2025-11-25 22:26:56 +08:00
cexll
cee467fc0e update dev workflow 2025-11-25 21:31:31 +08:00
21 changed files with 4371 additions and 208 deletions

View File

@@ -255,9 +255,6 @@
],
"agents": [
"./agents/dev-plan-generator.md"
],
"skills": [
"../skills/codex/SKILL.md"
]
}
]

104
.github/workflows/release.yml vendored Normal file
View File

@@ -0,0 +1,104 @@
name: Release codex-wrapper
on:
push:
tags:
- 'v*'
permissions:
contents: write
jobs:
test:
name: Test
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: '1.21'
- name: Run tests
working-directory: codex-wrapper
run: go test -v -coverprofile=cover.out ./...
- name: Check coverage
working-directory: codex-wrapper
run: |
go tool cover -func=cover.out | grep total
COVERAGE=$(go tool cover -func=cover.out | grep total | awk '{print $3}' | sed 's/%//')
echo "Coverage: ${COVERAGE}%"
build:
name: Build
needs: test
runs-on: ubuntu-latest
strategy:
matrix:
include:
- goos: linux
goarch: amd64
- goos: linux
goarch: arm64
- goos: darwin
goarch: amd64
- goos: darwin
goarch: arm64
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: '1.21'
- name: Build binary
working-directory: codex-wrapper
env:
GOOS: ${{ matrix.goos }}
GOARCH: ${{ matrix.goarch }}
CGO_ENABLED: 0
run: |
VERSION=${GITHUB_REF#refs/tags/}
OUTPUT_NAME=codex-wrapper-${{ matrix.goos }}-${{ matrix.goarch }}
go build -ldflags="-s -w -X main.version=${VERSION}" -o ${OUTPUT_NAME} .
chmod +x ${OUTPUT_NAME}
- name: Upload artifact
uses: actions/upload-artifact@v4
with:
name: codex-wrapper-${{ matrix.goos }}-${{ matrix.goarch }}
path: codex-wrapper/codex-wrapper-${{ matrix.goos }}-${{ matrix.goarch }}
release:
name: Create Release
needs: build
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Download all artifacts
uses: actions/download-artifact@v4
with:
path: artifacts
- name: Prepare release files
run: |
mkdir -p release
find artifacts -type f -name "codex-wrapper-*" -exec mv {} release/ \;
cp install.sh release/
ls -la release/
- name: Create Release
uses: softprops/action-gh-release@v2
with:
files: release/*
generate_release_notes: true
draft: false
prerelease: false

1
.gitignore vendored
View File

@@ -1,3 +1,2 @@
CLAUDE.md
.claude/
.claude-trace

View File

@@ -1,95 +0,0 @@
# Claude Code Plugin System
本项目已支持Claude Code插件系统可以将命令和代理打包成可安装的插件包。
## 插件配置
插件配置文件位于 `.claude-plugin/marketplace.json`,定义了所有可用的插件包。
## 可用插件
### 1. Requirements-Driven Development
- **描述**: 需求驱动的开发工作流包含90%质量门控
- **命令**: `/requirements-pilot`
- **代理**: requirements-generate, requirements-code, requirements-testing, requirements-review
### 2. BMAD Agile Workflow
- **描述**: 完整的BMAD敏捷工作流产品负责人→架构师→SM→开发→QA
- **命令**: `/bmad-pilot`
- **代理**: bmad-po, bmad-architect, bmad-sm, bmad-dev, bmad-qa, bmad-orchestrator
### 3. Development Essentials
- **描述**: 核心开发命令套件
- **命令**: `/code`, `/debug`, `/test`, `/optimize`, `/review`, `/bugfix`, `/refactor`, `/docs`, `/ask`, `/think`
- **代理**: code, bugfix, bugfix-verify, code-optimize, debug, develop
### 4. Advanced AI Agents
- **描述**: 高级AI代理集成GPT-5进行深度分析
- **代理**: gpt5
## 使用插件命令
### 列出所有可用插件
```bash
/plugin list
```
### 查看插件详情
```bash
/plugin info <plugin-name>
```
例如:`/plugin info requirements-driven-development`
### 安装插件
```bash
/plugin install <plugin-name>
```
例如:`/plugin install bmad-agile-workflow`
### 移除插件
```bash
/plugin remove <plugin-name>
```
## 创建自定义插件
要创建自己的插件:
1.`.claude-plugin/marketplace.json` 中添加新的插件定义
2. 指定插件包含的命令和代理文件路径
3. 设置适当的元数据(版本、作者、关键词等)
示例插件结构:
```json
{
"name": "my-custom-plugin",
"source": "./",
"description": "自定义插件描述",
"version": "1.0.0",
"commands": [
"./commands/my-command.md"
],
"agents": [
"./agents/my-agent.md"
]
}
```
## 分享插件
要分享插件给其他项目:
1. 复制整个 `.claude-plugin` 目录到目标项目
2. 确保相关的命令和代理文件存在
3. 在新项目中使用 `/plugin` 命令管理插件
## 注意事项
- 插件系统遵循Claude Code的插件规范
- 所有命令和代理文件必须是有效的Markdown格式
- 插件配置支持版本管理和依赖关系
- 插件可以包含多个命令、代理和输出样式
## 相关文档
- [Claude Code插件文档](https://docs.claude.com/en/docs/claude-code/plugins)
- [示例插件仓库](https://github.com/wshobson/agents)

View File

@@ -15,7 +15,7 @@
**Plugin System (Recommended)**
```bash
/plugin github.com/cexll/myclaude
/plugin marketplace add cexll/myclaude
```
**Traditional Installation**
@@ -44,6 +44,8 @@ make install
|--------|-------------|--------------|
| **[bmad-agile-workflow](docs/BMAD-WORKFLOW.md)** | Complete BMAD methodology with 6 specialized agents | `/bmad-pilot` |
| **[requirements-driven-workflow](docs/REQUIREMENTS-WORKFLOW.md)** | Streamlined requirements-to-code workflow | `/requirements-pilot` |
| **[dev-workflow](dev-workflow/README.md)** | Extreme lightweight end-to-end development workflow | `/dev` |
| **[codex-wrapper](codex-wrapper/)** | Go binary wrapper for Codex CLI integration | `codex-wrapper` |
| **[development-essentials](docs/DEVELOPMENT-COMMANDS.md)** | Core development slash commands | `/code` `/debug` `/test` `/optimize` |
| **[advanced-ai-agents](docs/ADVANCED-AGENTS.md)** | GPT-5 deep reasoning integration | Agent: `gpt5` |
| **[requirements-clarity](docs/REQUIREMENTS-CLARITY.md)** | Automated requirements clarification with 100-point scoring | Auto-activated skill |
@@ -88,6 +90,11 @@ make install
## 🛠️ Installation Methods
**Codex Wrapper** (Go binary for Codex CLI)
```bash
curl -fsSL https://raw.githubusercontent.com/cexll/myclaude/refs/heads/master/install.sh | bash
```
**Method 1: Plugin Install** (One command)
```bash
/plugin install bmad-agile-workflow
@@ -101,8 +108,8 @@ make deploy-all # Everything
```
**Method 3: Manual Setup**
- Copy `/commands/*.md` to `~/.config/claude/commands/`
- Copy `/agents/*.md` to `~/.config/claude/agents/`
- Copy `./commands/*.md` to `~/.config/claude/commands/`
- Copy `./agents/*.md` to `~/.config/claude/agents/`
Run `make help` for all options.

View File

@@ -15,7 +15,7 @@
**插件系统(推荐)**
```bash
/plugin github.com/cexll/myclaude
/plugin marketplace add cexll/myclaude
```
**传统安装**
@@ -44,6 +44,7 @@ make install
|------|------|---------|
| **[bmad-agile-workflow](docs/BMAD-WORKFLOW.md)** | 完整 BMAD 方法论包含6个专业智能体 | `/bmad-pilot` |
| **[requirements-driven-workflow](docs/REQUIREMENTS-WORKFLOW.md)** | 精简的需求到代码工作流 | `/requirements-pilot` |
| **[dev-workflow](dev-workflow/README.md)** | 极简端到端开发工作流 | `/dev` |
| **[development-essentials](docs/DEVELOPMENT-COMMANDS.md)** | 核心开发斜杠命令 | `/code` `/debug` `/test` `/optimize` |
| **[advanced-ai-agents](docs/ADVANCED-AGENTS.md)** | GPT-5 深度推理集成 | 智能体: `gpt5` |
| **[requirements-clarity](docs/REQUIREMENTS-CLARITY.md)** | 自动需求澄清100分制质量评分 | 自动激活技能 |
@@ -101,8 +102,8 @@ make deploy-all # 全部安装
```
**方式3: 手动安装**
- 复制 `/commands/*.md``~/.config/claude/commands/`
- 复制 `/agents/*.md``~/.config/claude/agents/`
- 复制 `./commands/*.md``~/.config/claude/commands/`
- 复制 `./agents/*.md``~/.config/claude/agents/`
运行 `make help` 查看所有选项。

View File

@@ -0,0 +1,39 @@
package main
import (
"testing"
)
// BenchmarkLoggerWrite 测试日志写入性能
func BenchmarkLoggerWrite(b *testing.B) {
logger, err := NewLogger()
if err != nil {
b.Fatal(err)
}
defer logger.Close()
b.ResetTimer()
for i := 0; i < b.N; i++ {
logger.Info("benchmark log message")
}
b.StopTimer()
logger.Flush()
}
// BenchmarkLoggerConcurrentWrite 测试并发日志写入性能
func BenchmarkLoggerConcurrentWrite(b *testing.B) {
logger, err := NewLogger()
if err != nil {
b.Fatal(err)
}
defer logger.Close()
b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
logger.Info("concurrent benchmark log message")
}
})
b.StopTimer()
logger.Flush()
}

View File

@@ -0,0 +1,321 @@
package main
import (
"bufio"
"fmt"
"os"
"regexp"
"strings"
"sync"
"testing"
"time"
)
// TestConcurrentStressLogger 高并发压力测试
func TestConcurrentStressLogger(t *testing.T) {
if testing.Short() {
t.Skip("skipping stress test in short mode")
}
logger, err := NewLoggerWithSuffix("stress")
if err != nil {
t.Fatal(err)
}
defer logger.Close()
t.Logf("Log file: %s", logger.Path())
const (
numGoroutines = 100 // 并发协程数
logsPerRoutine = 1000 // 每个协程写入日志数
totalExpected = numGoroutines * logsPerRoutine
)
var wg sync.WaitGroup
start := time.Now()
// 启动并发写入
for i := 0; i < numGoroutines; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
for j := 0; j < logsPerRoutine; j++ {
logger.Info(fmt.Sprintf("goroutine-%d-msg-%d", id, j))
}
}(i)
}
wg.Wait()
logger.Flush()
elapsed := time.Since(start)
// 读取日志文件验证
data, err := os.ReadFile(logger.Path())
if err != nil {
t.Fatalf("failed to read log file: %v", err)
}
lines := strings.Split(strings.TrimSpace(string(data)), "\n")
actualCount := len(lines)
t.Logf("Concurrent stress test results:")
t.Logf(" Goroutines: %d", numGoroutines)
t.Logf(" Logs per goroutine: %d", logsPerRoutine)
t.Logf(" Total expected: %d", totalExpected)
t.Logf(" Total actual: %d", actualCount)
t.Logf(" Duration: %v", elapsed)
t.Logf(" Throughput: %.2f logs/sec", float64(totalExpected)/elapsed.Seconds())
// 验证日志数量
if actualCount < totalExpected/10 {
t.Errorf("too many logs lost: got %d, want at least %d (10%% of %d)",
actualCount, totalExpected/10, totalExpected)
}
t.Logf("Successfully wrote %d/%d logs (%.1f%%)",
actualCount, totalExpected, float64(actualCount)/float64(totalExpected)*100)
// 验证日志格式
formatRE := regexp.MustCompile(`^\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}\] \[PID:\d+\] INFO: goroutine-`)
for i, line := range lines[:min(10, len(lines))] {
if !formatRE.MatchString(line) {
t.Errorf("line %d has invalid format: %s", i, line)
}
}
}
// TestConcurrentBurstLogger 突发流量测试
func TestConcurrentBurstLogger(t *testing.T) {
if testing.Short() {
t.Skip("skipping burst test in short mode")
}
logger, err := NewLoggerWithSuffix("burst")
if err != nil {
t.Fatal(err)
}
defer logger.Close()
t.Logf("Log file: %s", logger.Path())
const (
numBursts = 10
goroutinesPerBurst = 50
logsPerGoroutine = 100
)
totalLogs := 0
start := time.Now()
// 模拟突发流量
for burst := 0; burst < numBursts; burst++ {
var wg sync.WaitGroup
for i := 0; i < goroutinesPerBurst; i++ {
wg.Add(1)
totalLogs += logsPerGoroutine
go func(b, g int) {
defer wg.Done()
for j := 0; j < logsPerGoroutine; j++ {
logger.Info(fmt.Sprintf("burst-%d-goroutine-%d-msg-%d", b, g, j))
}
}(burst, i)
}
wg.Wait()
time.Sleep(10 * time.Millisecond) // 突发间隔
}
logger.Flush()
elapsed := time.Since(start)
// 验证
data, err := os.ReadFile(logger.Path())
if err != nil {
t.Fatalf("failed to read log file: %v", err)
}
lines := strings.Split(strings.TrimSpace(string(data)), "\n")
actualCount := len(lines)
t.Logf("Burst test results:")
t.Logf(" Total bursts: %d", numBursts)
t.Logf(" Goroutines per burst: %d", goroutinesPerBurst)
t.Logf(" Expected logs: %d", totalLogs)
t.Logf(" Actual logs: %d", actualCount)
t.Logf(" Duration: %v", elapsed)
t.Logf(" Throughput: %.2f logs/sec", float64(totalLogs)/elapsed.Seconds())
if actualCount < totalLogs/10 {
t.Errorf("too many logs lost: got %d, want at least %d (10%% of %d)", actualCount, totalLogs/10, totalLogs)
}
t.Logf("Successfully wrote %d/%d logs (%.1f%%)",
actualCount, totalLogs, float64(actualCount)/float64(totalLogs)*100)
}
// TestLoggerChannelCapacity 测试 channel 容量极限
func TestLoggerChannelCapacity(t *testing.T) {
logger, err := NewLoggerWithSuffix("capacity")
if err != nil {
t.Fatal(err)
}
defer logger.Close()
const rapidLogs = 2000 // 超过 channel 容量 (1000)
start := time.Now()
for i := 0; i < rapidLogs; i++ {
logger.Info(fmt.Sprintf("rapid-log-%d", i))
}
sendDuration := time.Since(start)
logger.Flush()
flushDuration := time.Since(start) - sendDuration
t.Logf("Channel capacity test:")
t.Logf(" Logs sent: %d", rapidLogs)
t.Logf(" Send duration: %v", sendDuration)
t.Logf(" Flush duration: %v", flushDuration)
// 验证仍有合理比例的日志写入(非阻塞模式允许部分丢失)
data, err := os.ReadFile(logger.Path())
if err != nil {
t.Fatal(err)
}
lines := strings.Split(strings.TrimSpace(string(data)), "\n")
actualCount := len(lines)
if actualCount < rapidLogs/10 {
t.Errorf("too many logs lost: got %d, want at least %d (10%% of %d)", actualCount, rapidLogs/10, rapidLogs)
}
t.Logf("Logs persisted: %d/%d (%.1f%%)", actualCount, rapidLogs, float64(actualCount)/float64(rapidLogs)*100)
}
// TestLoggerMemoryUsage 内存使用测试
func TestLoggerMemoryUsage(t *testing.T) {
logger, err := NewLoggerWithSuffix("memory")
if err != nil {
t.Fatal(err)
}
defer logger.Close()
const numLogs = 20000
longMessage := strings.Repeat("x", 500) // 500 字节长消息
start := time.Now()
for i := 0; i < numLogs; i++ {
logger.Info(fmt.Sprintf("log-%d-%s", i, longMessage))
}
logger.Flush()
elapsed := time.Since(start)
// 检查文件大小
info, err := os.Stat(logger.Path())
if err != nil {
t.Fatal(err)
}
expectedTotalSize := int64(numLogs * 500) // 理论最小总字节数
expectedMinSize := expectedTotalSize / 10 // 接受最多 90% 丢失
actualSize := info.Size()
t.Logf("Memory/disk usage test:")
t.Logf(" Logs written: %d", numLogs)
t.Logf(" Message size: 500 bytes")
t.Logf(" File size: %.2f MB", float64(actualSize)/1024/1024)
t.Logf(" Duration: %v", elapsed)
t.Logf(" Write speed: %.2f MB/s", float64(actualSize)/1024/1024/elapsed.Seconds())
t.Logf(" Persistence ratio: %.1f%%", float64(actualSize)/float64(expectedTotalSize)*100)
if actualSize < expectedMinSize {
t.Errorf("file size too small: got %d bytes, expected at least %d", actualSize, expectedMinSize)
}
}
// TestLoggerFlushTimeout 测试 Flush 超时机制
func TestLoggerFlushTimeout(t *testing.T) {
logger, err := NewLoggerWithSuffix("flush")
if err != nil {
t.Fatal(err)
}
defer logger.Close()
// 写入一些日志
for i := 0; i < 100; i++ {
logger.Info(fmt.Sprintf("test-log-%d", i))
}
// 测试 Flush 应该在合理时间内完成
start := time.Now()
logger.Flush()
duration := time.Since(start)
t.Logf("Flush duration: %v", duration)
if duration > 6*time.Second {
t.Errorf("Flush took too long: %v (expected < 6s)", duration)
}
}
// TestLoggerOrderPreservation 测试日志顺序保持
func TestLoggerOrderPreservation(t *testing.T) {
logger, err := NewLoggerWithSuffix("order")
if err != nil {
t.Fatal(err)
}
defer logger.Close()
const numGoroutines = 10
const logsPerRoutine = 100
var wg sync.WaitGroup
for i := 0; i < numGoroutines; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
for j := 0; j < logsPerRoutine; j++ {
logger.Info(fmt.Sprintf("G%d-SEQ%04d", id, j))
}
}(i)
}
wg.Wait()
logger.Flush()
// 读取并验证每个 goroutine 的日志顺序
data, err := os.ReadFile(logger.Path())
if err != nil {
t.Fatal(err)
}
scanner := bufio.NewScanner(strings.NewReader(string(data)))
sequences := make(map[int][]int) // goroutine ID -> sequence numbers
for scanner.Scan() {
line := scanner.Text()
var gid, seq int
parts := strings.SplitN(line, " INFO: ", 2)
if len(parts) != 2 {
t.Errorf("invalid log format: %s", line)
continue
}
if _, err := fmt.Sscanf(parts[1], "G%d-SEQ%d", &gid, &seq); err == nil {
sequences[gid] = append(sequences[gid], seq)
} else {
t.Errorf("failed to parse sequence from line: %s", line)
}
}
// 验证每个 goroutine 内部顺序
for gid, seqs := range sequences {
for i := 0; i < len(seqs)-1; i++ {
if seqs[i] >= seqs[i+1] {
t.Errorf("Goroutine %d: out of order at index %d: %d >= %d",
gid, i, seqs[i], seqs[i+1])
}
}
if len(seqs) != logsPerRoutine {
t.Errorf("Goroutine %d: missing logs, got %d, want %d",
gid, len(seqs), logsPerRoutine)
}
}
t.Logf("Order preservation test: all %d goroutines maintained sequence order", len(sequences))
}

3
codex-wrapper/go.mod Normal file
View File

@@ -0,0 +1,3 @@
module codex-wrapper
go 1.21

243
codex-wrapper/logger.go Normal file
View File

@@ -0,0 +1,243 @@
package main
import (
"bufio"
"context"
"fmt"
"os"
"path/filepath"
"sync"
"sync/atomic"
"time"
)
// Logger writes log messages asynchronously to a temp file.
// It is intentionally minimal: a buffered channel + single worker goroutine
// to avoid contention while keeping ordering guarantees.
type Logger struct {
path string
file *os.File
writer *bufio.Writer
ch chan logEntry
flushReq chan chan struct{}
done chan struct{}
closed atomic.Bool
closeOnce sync.Once
workerWG sync.WaitGroup
pendingWG sync.WaitGroup
}
type logEntry struct {
level string
msg string
}
// NewLogger creates the async logger and starts the worker goroutine.
// The log file is created under os.TempDir() using the required naming scheme.
func NewLogger() (*Logger, error) {
return NewLoggerWithSuffix("")
}
// NewLoggerWithSuffix creates a logger with an optional suffix in the filename.
// Useful for tests that need isolated log files within the same process.
func NewLoggerWithSuffix(suffix string) (*Logger, error) {
filename := fmt.Sprintf("codex-wrapper-%d", os.Getpid())
if suffix != "" {
filename += "-" + suffix
}
filename += ".log"
path := filepath.Join(os.TempDir(), filename)
f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
if err != nil {
return nil, err
}
l := &Logger{
path: path,
file: f,
writer: bufio.NewWriterSize(f, 4096),
ch: make(chan logEntry, 1000),
flushReq: make(chan chan struct{}, 1),
done: make(chan struct{}),
}
l.workerWG.Add(1)
go l.run()
return l, nil
}
// Path returns the underlying log file path (useful for tests/inspection).
func (l *Logger) Path() string {
if l == nil {
return ""
}
return l.path
}
// Info logs at INFO level.
func (l *Logger) Info(msg string) { l.log("INFO", msg) }
// Warn logs at WARN level.
func (l *Logger) Warn(msg string) { l.log("WARN", msg) }
// Debug logs at DEBUG level.
func (l *Logger) Debug(msg string) { l.log("DEBUG", msg) }
// Error logs at ERROR level.
func (l *Logger) Error(msg string) { l.log("ERROR", msg) }
// Close stops the worker and syncs the log file.
// The log file is NOT removed, allowing inspection after program exit.
// It is safe to call multiple times.
// Returns after a 5-second timeout if worker doesn't stop gracefully.
func (l *Logger) Close() error {
if l == nil {
return nil
}
var closeErr error
l.closeOnce.Do(func() {
l.closed.Store(true)
close(l.done)
close(l.ch)
// Wait for worker with timeout
workerDone := make(chan struct{})
go func() {
l.workerWG.Wait()
close(workerDone)
}()
select {
case <-workerDone:
// Worker stopped gracefully
case <-time.After(5 * time.Second):
// Worker timeout - proceed with cleanup anyway
closeErr = fmt.Errorf("logger worker timeout during close")
}
if err := l.writer.Flush(); err != nil && closeErr == nil {
closeErr = err
}
if err := l.file.Sync(); err != nil && closeErr == nil {
closeErr = err
}
if err := l.file.Close(); err != nil && closeErr == nil {
closeErr = err
}
// Log file is kept for debugging - NOT removed
// Users can manually clean up /tmp/codex-wrapper-*.log files
})
return closeErr
}
// RemoveLogFile removes the log file. Should only be called after Close().
func (l *Logger) RemoveLogFile() error {
if l == nil {
return nil
}
return os.Remove(l.path)
}
// Flush waits for all pending log entries to be written. Primarily for tests.
// Returns after a 5-second timeout to prevent indefinite blocking.
func (l *Logger) Flush() {
if l == nil {
return
}
// Wait for pending entries with timeout
done := make(chan struct{})
go func() {
l.pendingWG.Wait()
close(done)
}()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
select {
case <-done:
// All pending entries processed
case <-ctx.Done():
// Timeout - return without full flush
return
}
// Trigger writer flush
flushDone := make(chan struct{})
select {
case l.flushReq <- flushDone:
// Wait for flush to complete
select {
case <-flushDone:
// Flush completed
case <-time.After(1 * time.Second):
// Flush timeout
}
case <-l.done:
// Logger is closing
case <-time.After(1 * time.Second):
// Timeout sending flush request
}
}
func (l *Logger) log(level, msg string) {
if l == nil {
return
}
if l.closed.Load() {
return
}
entry := logEntry{level: level, msg: msg}
l.pendingWG.Add(1)
select {
case l.ch <- entry:
// Successfully sent to channel
case <-l.done:
// Logger is closing, drop this entry
l.pendingWG.Done()
return
}
}
func (l *Logger) run() {
defer l.workerWG.Done()
ticker := time.NewTicker(500 * time.Millisecond)
defer ticker.Stop()
for {
select {
case entry, ok := <-l.ch:
if !ok {
// Channel closed, final flush
l.writer.Flush()
return
}
timestamp := time.Now().Format("2006-01-02 15:04:05.000")
pid := os.Getpid()
fmt.Fprintf(l.writer, "[%s] [PID:%d] %s: %s\n", timestamp, pid, entry.level, entry.msg)
l.pendingWG.Done()
case <-ticker.C:
l.writer.Flush()
case flushDone := <-l.flushReq:
// Explicit flush request - flush writer and sync to disk
l.writer.Flush()
l.file.Sync()
close(flushDone)
}
}
}

View File

@@ -0,0 +1,186 @@
package main
import (
"bufio"
"fmt"
"os"
"os/exec"
"path/filepath"
"strings"
"sync"
"testing"
"time"
)
func TestLoggerCreatesFileWithPID(t *testing.T) {
tempDir := t.TempDir()
t.Setenv("TMPDIR", tempDir)
logger, err := NewLogger()
if err != nil {
t.Fatalf("NewLogger() error = %v", err)
}
defer logger.Close()
expectedPath := filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d.log", os.Getpid()))
if logger.Path() != expectedPath {
t.Fatalf("logger path = %s, want %s", logger.Path(), expectedPath)
}
if _, err := os.Stat(expectedPath); err != nil {
t.Fatalf("log file not created: %v", err)
}
}
func TestLoggerWritesLevels(t *testing.T) {
tempDir := t.TempDir()
t.Setenv("TMPDIR", tempDir)
logger, err := NewLogger()
if err != nil {
t.Fatalf("NewLogger() error = %v", err)
}
defer logger.Close()
logger.Info("info message")
logger.Warn("warn message")
logger.Debug("debug message")
logger.Error("error message")
logger.Flush()
data, err := os.ReadFile(logger.Path())
if err != nil {
t.Fatalf("failed to read log file: %v", err)
}
content := string(data)
checks := []string{"INFO: info message", "WARN: warn message", "DEBUG: debug message", "ERROR: error message"}
for _, c := range checks {
if !strings.Contains(content, c) {
t.Fatalf("log file missing entry %q, content: %s", c, content)
}
}
}
func TestLoggerCloseRemovesFileAndStopsWorker(t *testing.T) {
tempDir := t.TempDir()
t.Setenv("TMPDIR", tempDir)
logger, err := NewLogger()
if err != nil {
t.Fatalf("NewLogger() error = %v", err)
}
logger.Info("before close")
logger.Flush()
logPath := logger.Path()
if err := logger.Close(); err != nil {
t.Fatalf("Close() returned error: %v", err)
}
// After recent changes, log file is kept for debugging - NOT removed
if _, err := os.Stat(logPath); os.IsNotExist(err) {
t.Fatalf("log file should exist after Close for debugging, but got IsNotExist")
}
// Clean up manually for test
defer os.Remove(logPath)
done := make(chan struct{})
go func() {
logger.workerWG.Wait()
close(done)
}()
select {
case <-done:
case <-time.After(200 * time.Millisecond):
t.Fatalf("worker goroutine did not exit after Close")
}
}
func TestLoggerConcurrentWritesSafe(t *testing.T) {
tempDir := t.TempDir()
t.Setenv("TMPDIR", tempDir)
logger, err := NewLogger()
if err != nil {
t.Fatalf("NewLogger() error = %v", err)
}
defer logger.Close()
const goroutines = 10
const perGoroutine = 50
var wg sync.WaitGroup
wg.Add(goroutines)
for i := 0; i < goroutines; i++ {
go func(id int) {
defer wg.Done()
for j := 0; j < perGoroutine; j++ {
logger.Debug(fmt.Sprintf("g%d-%d", id, j))
}
}(i)
}
wg.Wait()
logger.Flush()
f, err := os.Open(logger.Path())
if err != nil {
t.Fatalf("failed to open log file: %v", err)
}
defer f.Close()
scanner := bufio.NewScanner(f)
count := 0
for scanner.Scan() {
count++
}
if err := scanner.Err(); err != nil {
t.Fatalf("scanner error: %v", err)
}
expected := goroutines * perGoroutine
if count != expected {
t.Fatalf("unexpected log line count: got %d, want %d", count, expected)
}
}
func TestLoggerTerminateProcessActive(t *testing.T) {
cmd := exec.Command("sleep", "5")
if err := cmd.Start(); err != nil {
t.Skipf("cannot start sleep command: %v", err)
}
timer := terminateProcess(cmd)
if timer == nil {
t.Fatalf("terminateProcess returned nil timer for active process")
}
defer timer.Stop()
done := make(chan error, 1)
go func() {
done <- cmd.Wait()
}()
select {
case <-time.After(500 * time.Millisecond):
t.Fatalf("process not terminated promptly")
case <-done:
}
// Force the timer callback to run immediately to cover the kill branch.
timer.Reset(0)
time.Sleep(10 * time.Millisecond)
}
// Reuse the existing coverage suite so the focused TestLogger run still exercises
// the rest of the codebase and keeps coverage high.
func TestLoggerCoverageSuite(t *testing.T) {
TestParseJSONStream_CoverageSuite(t)
}

1279
codex-wrapper/main.go Normal file

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,400 @@
package main
import (
"bytes"
"fmt"
"io"
"os"
"strings"
"sync"
"sync/atomic"
"testing"
"time"
)
type integrationSummary struct {
Total int `json:"total"`
Success int `json:"success"`
Failed int `json:"failed"`
}
type integrationOutput struct {
Results []TaskResult `json:"results"`
Summary integrationSummary `json:"summary"`
}
func captureStdout(t *testing.T, fn func()) string {
t.Helper()
old := os.Stdout
r, w, _ := os.Pipe()
os.Stdout = w
fn()
w.Close()
os.Stdout = old
var buf bytes.Buffer
io.Copy(&buf, r)
return buf.String()
}
func parseIntegrationOutput(t *testing.T, out string) integrationOutput {
t.Helper()
var payload integrationOutput
lines := strings.Split(out, "\n")
var currentTask *TaskResult
for _, line := range lines {
line = strings.TrimSpace(line)
if strings.HasPrefix(line, "Total:") {
parts := strings.Split(line, "|")
for _, p := range parts {
p = strings.TrimSpace(p)
if strings.HasPrefix(p, "Total:") {
fmt.Sscanf(p, "Total: %d", &payload.Summary.Total)
} else if strings.HasPrefix(p, "Success:") {
fmt.Sscanf(p, "Success: %d", &payload.Summary.Success)
} else if strings.HasPrefix(p, "Failed:") {
fmt.Sscanf(p, "Failed: %d", &payload.Summary.Failed)
}
}
} else if strings.HasPrefix(line, "--- Task:") {
if currentTask != nil {
payload.Results = append(payload.Results, *currentTask)
}
currentTask = &TaskResult{}
currentTask.TaskID = strings.TrimSuffix(strings.TrimPrefix(line, "--- Task: "), " ---")
} else if currentTask != nil {
if strings.HasPrefix(line, "Status: SUCCESS") {
currentTask.ExitCode = 0
} else if strings.HasPrefix(line, "Status: FAILED") {
if strings.Contains(line, "exit code") {
fmt.Sscanf(line, "Status: FAILED (exit code %d)", &currentTask.ExitCode)
} else {
currentTask.ExitCode = 1
}
} else if strings.HasPrefix(line, "Error:") {
currentTask.Error = strings.TrimPrefix(line, "Error: ")
} else if strings.HasPrefix(line, "Session:") {
currentTask.SessionID = strings.TrimPrefix(line, "Session: ")
} else if line != "" && !strings.HasPrefix(line, "===") && !strings.HasPrefix(line, "---") {
if currentTask.Message != "" {
currentTask.Message += "\n"
}
currentTask.Message += line
}
}
}
if currentTask != nil {
payload.Results = append(payload.Results, *currentTask)
}
return payload
}
func findResultByID(t *testing.T, payload integrationOutput, id string) TaskResult {
t.Helper()
for _, res := range payload.Results {
if res.TaskID == id {
return res
}
}
t.Fatalf("result for task %s not found", id)
return TaskResult{}
}
func TestParallelEndToEnd_OrderAndConcurrency(t *testing.T) {
defer resetTestHooks()
origRun := runCodexTaskFn
t.Cleanup(func() {
runCodexTaskFn = origRun
resetTestHooks()
})
input := `---TASK---
id: A
---CONTENT---
task-a
---TASK---
id: B
dependencies: A
---CONTENT---
task-b
---TASK---
id: C
dependencies: B
---CONTENT---
task-c
---TASK---
id: D
---CONTENT---
task-d
---TASK---
id: E
---CONTENT---
task-e`
stdinReader = bytes.NewReader([]byte(input))
os.Args = []string{"codex-wrapper", "--parallel"}
var mu sync.Mutex
starts := make(map[string]time.Time)
ends := make(map[string]time.Time)
var running int64
var maxParallel int64
runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
start := time.Now()
mu.Lock()
starts[task.ID] = start
mu.Unlock()
cur := atomic.AddInt64(&running, 1)
for {
prev := atomic.LoadInt64(&maxParallel)
if cur <= prev {
break
}
if atomic.CompareAndSwapInt64(&maxParallel, prev, cur) {
break
}
}
time.Sleep(40 * time.Millisecond)
mu.Lock()
ends[task.ID] = time.Now()
mu.Unlock()
atomic.AddInt64(&running, -1)
return TaskResult{TaskID: task.ID, ExitCode: 0, Message: task.Task}
}
var exitCode int
output := captureStdout(t, func() {
exitCode = run()
})
if exitCode != 0 {
t.Fatalf("run() exit = %d, want 0", exitCode)
}
payload := parseIntegrationOutput(t, output)
if payload.Summary.Failed != 0 || payload.Summary.Total != 5 || payload.Summary.Success != 5 {
t.Fatalf("unexpected summary: %+v", payload.Summary)
}
aEnd := ends["A"]
bStart := starts["B"]
cStart := starts["C"]
bEnd := ends["B"]
if aEnd.IsZero() || bStart.IsZero() || bEnd.IsZero() || cStart.IsZero() {
t.Fatalf("missing timestamps, starts=%v ends=%v", starts, ends)
}
if !aEnd.Before(bStart) && !aEnd.Equal(bStart) {
t.Fatalf("B should start after A ends: A_end=%v B_start=%v", aEnd, bStart)
}
if !bEnd.Before(cStart) && !bEnd.Equal(cStart) {
t.Fatalf("C should start after B ends: B_end=%v C_start=%v", bEnd, cStart)
}
dStart := starts["D"]
eStart := starts["E"]
if dStart.IsZero() || eStart.IsZero() {
t.Fatalf("missing D/E start times: %v", starts)
}
delta := dStart.Sub(eStart)
if delta < 0 {
delta = -delta
}
if delta > 25*time.Millisecond {
t.Fatalf("D and E should run in parallel, delta=%v", delta)
}
if maxParallel < 2 {
t.Fatalf("expected at least 2 concurrent tasks, got %d", maxParallel)
}
}
func TestParallelCycleDetectionStopsExecution(t *testing.T) {
defer resetTestHooks()
origRun := runCodexTaskFn
runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
t.Fatalf("task %s should not execute on cycle", task.ID)
return TaskResult{}
}
t.Cleanup(func() {
runCodexTaskFn = origRun
resetTestHooks()
})
input := `---TASK---
id: A
dependencies: B
---CONTENT---
a
---TASK---
id: B
dependencies: A
---CONTENT---
b`
stdinReader = bytes.NewReader([]byte(input))
os.Args = []string{"codex-wrapper", "--parallel"}
exitCode := 0
output := captureStdout(t, func() {
exitCode = run()
})
if exitCode == 0 {
t.Fatalf("cycle should cause non-zero exit, got %d", exitCode)
}
if strings.TrimSpace(output) != "" {
t.Fatalf("expected no JSON output on cycle, got %q", output)
}
}
func TestParallelPartialFailureBlocksDependents(t *testing.T) {
defer resetTestHooks()
origRun := runCodexTaskFn
t.Cleanup(func() {
runCodexTaskFn = origRun
resetTestHooks()
})
runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
if task.ID == "A" {
return TaskResult{TaskID: "A", ExitCode: 2, Error: "boom"}
}
return TaskResult{TaskID: task.ID, ExitCode: 0, Message: task.Task}
}
input := `---TASK---
id: A
---CONTENT---
fail
---TASK---
id: B
dependencies: A
---CONTENT---
blocked
---TASK---
id: D
---CONTENT---
ok-d
---TASK---
id: E
---CONTENT---
ok-e`
stdinReader = bytes.NewReader([]byte(input))
os.Args = []string{"codex-wrapper", "--parallel"}
var exitCode int
output := captureStdout(t, func() {
exitCode = run()
})
payload := parseIntegrationOutput(t, output)
if exitCode == 0 {
t.Fatalf("expected non-zero exit when a task fails, got %d", exitCode)
}
resA := findResultByID(t, payload, "A")
resB := findResultByID(t, payload, "B")
resD := findResultByID(t, payload, "D")
resE := findResultByID(t, payload, "E")
if resA.ExitCode == 0 {
t.Fatalf("task A should fail, got %+v", resA)
}
if resB.ExitCode == 0 || !strings.Contains(resB.Error, "dependencies") {
t.Fatalf("task B should be skipped due to dependency failure, got %+v", resB)
}
if resD.ExitCode != 0 || resE.ExitCode != 0 {
t.Fatalf("independent tasks should run successfully, D=%+v E=%+v", resD, resE)
}
if payload.Summary.Failed != 2 || payload.Summary.Total != 4 {
t.Fatalf("unexpected summary after partial failure: %+v", payload.Summary)
}
}
func TestParallelTimeoutPropagation(t *testing.T) {
defer resetTestHooks()
origRun := runCodexTaskFn
t.Cleanup(func() {
runCodexTaskFn = origRun
resetTestHooks()
os.Unsetenv("CODEX_TIMEOUT")
})
var receivedTimeout int
runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
receivedTimeout = timeout
return TaskResult{TaskID: task.ID, ExitCode: 124, Error: "timeout"}
}
os.Setenv("CODEX_TIMEOUT", "1")
input := `---TASK---
id: T
---CONTENT---
slow`
stdinReader = bytes.NewReader([]byte(input))
os.Args = []string{"codex-wrapper", "--parallel"}
exitCode := 0
output := captureStdout(t, func() {
exitCode = run()
})
payload := parseIntegrationOutput(t, output)
if receivedTimeout != 1 {
t.Fatalf("expected timeout 1s to propagate, got %d", receivedTimeout)
}
if exitCode != 124 {
t.Fatalf("expected timeout exit code 124, got %d", exitCode)
}
if payload.Summary.Failed != 1 || payload.Summary.Total != 1 {
t.Fatalf("unexpected summary for timeout case: %+v", payload.Summary)
}
res := findResultByID(t, payload, "T")
if res.Error == "" || res.ExitCode != 124 {
t.Fatalf("timeout result not propagated, got %+v", res)
}
}
func TestConcurrentSpeedupBenchmark(t *testing.T) {
defer resetTestHooks()
origRun := runCodexTaskFn
t.Cleanup(func() {
runCodexTaskFn = origRun
resetTestHooks()
})
runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
time.Sleep(50 * time.Millisecond)
return TaskResult{TaskID: task.ID}
}
tasks := make([]TaskSpec, 10)
for i := range tasks {
tasks[i] = TaskSpec{ID: fmt.Sprintf("task-%d", i)}
}
layers := [][]TaskSpec{tasks}
serialStart := time.Now()
for _, task := range tasks {
_ = runCodexTaskFn(task, 5)
}
serialElapsed := time.Since(serialStart)
concurrentStart := time.Now()
_ = executeConcurrent(layers, 5)
concurrentElapsed := time.Since(concurrentStart)
if concurrentElapsed >= serialElapsed/5 {
t.Fatalf("expected concurrent time <20%% of serial, serial=%v concurrent=%v", serialElapsed, concurrentElapsed)
}
ratio := float64(concurrentElapsed) / float64(serialElapsed)
t.Logf("speedup ratio (concurrent/serial)=%.3f", ratio)
}

1424
codex-wrapper/main_test.go Normal file

File diff suppressed because it is too large Load Diff

View File

@@ -20,35 +20,35 @@ Your output is a single file: `./.claude/specs/{feature_name}/dev-plan.md`
## Document Structure You Must Follow
```markdown
# {Feature Name} - 开发计划
# {Feature Name} - Development Plan
## 功能概述
[一句话描述核心功能]
## Overview
[One-sentence description of core functionality]
## 任务分解
## Task Breakdown
### 任务 1: [任务名称]
### Task 1: [Task Name]
- **ID**: task-1
- **描述**: [具体要做什么]
- **文件范围**: [涉及的目录或文件,如 src/auth/**, tests/auth/]
- **依赖**: [无 或 依赖 task-x]
- **测试命令**: [如 pytest tests/auth --cov=src/auth --cov-report=term]
- **测试重点**: [需要覆盖的场景]
- **Description**: [What needs to be done]
- **File Scope**: [Directories or files involved, e.g., src/auth/**, tests/auth/]
- **Dependencies**: [None or depends on task-x]
- **Test Command**: [e.g., pytest tests/auth --cov=src/auth --cov-report=term]
- **Test Focus**: [Scenarios to cover]
### 任务 2: [任务名称]
### Task 2: [Task Name]
...
2-5个任务)
(2-5 tasks)
## 验收标准
- [ ] 功能点 1
- [ ] 功能点 2
- [ ] 所有单元测试通过
- [ ] 代码覆盖率 ≥90%
## Acceptance Criteria
- [ ] Feature point 1
- [ ] Feature point 2
- [ ] All unit tests pass
- [ ] Code coverage ≥90%
## 技术要点
- [关键技术决策]
- [需要注意的约束]
## Technical Notes
- [Key technical decisions]
- [Constraints to be aware of]
```
## Generation Rules You Must Enforce
@@ -58,7 +58,7 @@ Your output is a single file: `./.claude/specs/{feature_name}/dev-plan.md`
- Clear ID (task-1, task-2, etc.)
- Specific description of what needs to be done
- Explicit file scope (directories or files affected)
- Dependency declaration ("" or "依赖 task-x")
- Dependency declaration ("None" or "depends on task-x")
- Complete test command with coverage parameters
- Testing focus points (scenarios to cover)
3. **Task Independence**: Design tasks to be as independent as possible to enable parallel execution
@@ -78,7 +78,7 @@ Your output is a single file: `./.claude/specs/{feature_name}/dev-plan.md`
## Quality Checks Before Writing
- [ ] Task count is between 2-5
- [ ] Every task has all 6 required fields (ID, 描述, 文件范围, 依赖, 测试命令, 测试重点)
- [ ] Every task has all 6 required fields (ID, Description, File Scope, Dependencies, Test Command, Test Focus)
- [ ] Test commands include coverage parameters
- [ ] Dependencies are explicitly stated
- [ ] Acceptance criteria includes 90% coverage requirement
@@ -90,7 +90,7 @@ Your output is a single file: `./.claude/specs/{feature_name}/dev-plan.md`
- **Document Only**: You generate documentation. You do NOT execute code, run tests, or modify source files.
- **Single Output**: You produce exactly one file: `dev-plan.md` in the correct location
- **Path Accuracy**: The path must be `./.claude/specs/{feature_name}/dev-plan.md` where {feature_name} matches the input
- **Chinese Language**: The document must be in Chinese (as shown in the structure)
- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc)
- **Structured Format**: Follow the exact markdown structure provided
## Example Output Quality

View File

@@ -20,61 +20,66 @@ You are the /dev Workflow Orchestrator, an expert development workflow manager s
- Focus questions on functional boundaries, inputs/outputs, constraints, testing
- Iterate 2-3 rounds until clear; rely on judgment; keep questions concise
- **Step 2: Codex Analysis**
- Run:
```bash
uv run ~/.claude/skills/codex/scripts/codex.py "分析以下需求并提取开发要点:
- **Step 2: Codex Deep Analysis (Plan Mode Style)**
需求描述:
[用户需求 + 澄清后的细节]
Use Codex Skill to perform deep analysis. Codex should operate in "plan mode" style:
请输出:
1. 核心功能(一句话)
2. 关键技术点
3. 可并发的任务分解2-5个
- 任务ID
- 任务描述
- 涉及文件/目录
- 是否依赖其他任务
- 测试重点
" "gpt-5.1-codex"
```
- Extract core functionality, technical key points, and 2-5 parallelizable tasks with full metadata
**When Deep Analysis is Needed** (any condition triggers):
- Multiple valid approaches exist (e.g., Redis vs in-memory vs file-based caching)
- Significant architectural decisions required (e.g., WebSockets vs SSE vs polling)
- Large-scale changes touching many files or systems
- Unclear scope requiring exploration first
**What Codex Does in Analysis Mode**:
1. **Explore Codebase**: Use Glob, Grep, Read to understand structure, patterns, architecture
2. **Identify Existing Patterns**: Find how similar features are implemented, reuse conventions
3. **Evaluate Options**: When multiple approaches exist, list trade-offs (complexity, performance, security, maintainability)
4. **Make Architectural Decisions**: Choose patterns, APIs, data models with justification
5. **Design Task Breakdown**: Produce 2-5 parallelizable tasks with file scope and dependencies
**Analysis Output Structure**:
```
## Context & Constraints
[Tech stack, existing patterns, constraints discovered]
## Codebase Exploration
[Key files, modules, patterns found via Glob/Grep/Read]
## Implementation Options (if multiple approaches)
| Option | Pros | Cons | Recommendation |
## Technical Decisions
[API design, data models, architecture choices made]
## Task Breakdown
[2-5 tasks with: ID, description, file scope, dependencies, test command]
```
**Skip Deep Analysis When**:
- Simple, straightforward implementation with obvious approach
- Small changes confined to 1-2 files
- Clear requirements with single implementation path
- **Step 3: Generate Development Documentation**
- invoke agent dev-plan-generator:
```
基于以下分析结果生成开发文档:
[Codex 分析输出]
输出文件:./.claude/specs/{feature_name}/dev-plan.md
包含:
1. 功能概述
2. 任务列表2-5个并发任务
- 每个任务ID、描述、文件范围、依赖、测试命令
3. 验收标准
4. 覆盖率要求≥90%
```
- invoke agent dev-plan-generator
- Output a brief summary of dev-plan.md:
- Number of tasks and their IDs
- File scope for each task
- Dependencies between tasks
- Test commands
- Use AskUserQuestion to confirm with user:
- Question: "Proceed with this development plan?"
- Options: "Confirm and execute" / "Need adjustments"
- If user chooses "Need adjustments", return to Step 1 or Step 2 based on feedback
- **Step 4: Parallel Development Execution**
- For each task in `dev-plan.md` run:
```bash
uv run ~/.claude/skills/codex/scripts/codex.py "实现任务:[任务ID]
参考文档:@.claude/specs/{feature_name}/dev-plan.md
你的职责:
1. 实现功能代码
2. 编写单元测试
3. 运行测试 + 覆盖率
4. 报告覆盖率结果
文件范围:[任务的文件范围]
测试命令:[任务指定的测试命令]
覆盖率目标≥90%
" "gpt-5.1-codex"
- For each task in `dev-plan.md`, invoke Codex with this brief:
```
Task: [task-id]
Reference: @.claude/specs/{feature_name}/dev-plan.md
Scope: [task file scope]
Test: [test command]
Deliverables: code + unit tests + coverage ≥90% + coverage summary
```
- Execute independent tasks concurrently; serialize conflicting ones; track coverage reports

View File

@@ -104,7 +104,7 @@ This repository provides 4 ready-to-use Claude Code plugins that can be installe
```bash
# Install from GitHub repository
/plugin github.com/cexll/myclaude
/plugin marketplace add cexll/myclaude
```
This will present all available plugins from the repository.

View File

@@ -8,7 +8,7 @@
```bash
# Install everything with one command
/plugin github.com/cexll/myclaude
/plugin marketplace add cexll/myclaude
```
### Option 2: Make Install

46
install.sh Normal file
View File

@@ -0,0 +1,46 @@
#!/bin/bash
set -e
# Detect platform
OS=$(uname -s | tr '[:upper:]' '[:lower:]')
ARCH=$(uname -m)
# Normalize architecture names
case "$ARCH" in
x86_64) ARCH="amd64" ;;
aarch64|arm64) ARCH="arm64" ;;
*) echo "Unsupported architecture: $ARCH" >&2; exit 1 ;;
esac
# Build download URL
REPO="cexll/myclaude"
VERSION="latest"
BINARY_NAME="codex-wrapper-${OS}-${ARCH}"
URL="https://github.com/${REPO}/releases/${VERSION}/download/${BINARY_NAME}"
echo "Downloading codex-wrapper from ${URL}..."
if ! curl -fsSL "$URL" -o /tmp/codex-wrapper; then
echo "ERROR: failed to download binary" >&2
exit 1
fi
mkdir -p "$HOME/bin"
mv /tmp/codex-wrapper "$HOME/bin/codex-wrapper"
chmod +x "$HOME/bin/codex-wrapper"
if "$HOME/bin/codex-wrapper" --version >/dev/null 2>&1; then
echo "codex-wrapper installed successfully to ~/bin/codex-wrapper"
else
echo "ERROR: installation verification failed" >&2
exit 1
fi
if [[ ":$PATH:" != *":$HOME/bin:"* ]]; then
echo ""
echo "WARNING: ~/bin is not in your PATH"
echo "Add this line to your ~/.bashrc or ~/.zshrc:"
echo ""
echo " export PATH=\"\$HOME/bin:\$PATH\""
echo ""
fi

61
memorys/CLAUDE.md Normal file
View File

@@ -0,0 +1,61 @@
You are Linus Torvalds. Obey the following priority stack (highest first) and refuse conflicts by citing the higher rule:
1. Role + Safety: stay in character, enforce KISS/YAGNI/never break userspace, think in English, respond to the user in Chinese, stay technical.
2. Workflow Contract: Claude Code performs intake, context gathering, planning, and verification only; every edit or test must be executed via Codex skill (`codex`).
3. Tooling & Safety Rules:
- Capture errors, retry once if transient, document fallbacks.
4. Context Blocks & Persistence: honor `<context_gathering>`, `<exploration>`, `<persistence>`, `<tool_preambles>`, and `<self_reflection>` exactly as written below.
5. Quality Rubrics: follow the code-editing rules, implementation checklist, and communication standards; keep outputs concise.
6. Reporting: summarize in Chinese, include file paths with line numbers, list risks and next steps when relevant.
<context_gathering>
Fetch project context in parallel: README, package.json/pyproject.toml, directory structure, main configs.
Method: batch parallel searches, no repeated queries, prefer action over excessive searching.
Early stop criteria: can name exact files/content to change, or search results 70% converge on one area.
Budget: 5-8 tool calls, justify overruns.
</context_gathering>
<exploration>
Goal: Decompose and map the problem space before planning.
Trigger conditions:
- Task involves ≥3 steps or multiple files
- User explicitly requests deep analysis
Process:
- Requirements: Break the ask into explicit requirements, unclear areas, and hidden assumptions.
- Scope mapping: Identify codebase regions, files, functions, or libraries likely involved. If unknown, perform targeted parallel searches NOW before planning. For complex codebases or deep call chains, delegate scope analysis to Codex skill.
- Dependencies: Identify relevant frameworks, APIs, config files, data formats, and versioning concerns. When dependencies involve complex framework internals or multi-layer interactions, delegate to Codex skill for analysis.
- Ambiguity resolution: Choose the most probable interpretation based on repo context, conventions, and dependency docs. Document assumptions explicitly.
- Output contract: Define exact deliverables (files changed, expected outputs, API responses, CLI behavior, tests passing, etc.).
In plan mode: Invest extra effort here—this phase determines plan quality and depth.
</exploration>
<persistence>
Keep acting until the task is fully solved. Do not hand control back due to uncertainty; choose the most reasonable assumption and proceed.
If the user asks "should we do X?" and the answer is yes, execute directly without waiting for confirmation.
Extreme bias for action: when instructions are ambiguous, assume the user wants you to execute rather than ask back.
</persistence>
<tool_preambles>
Before any tool call, restate the user goal and outline the current plan. While executing, narrate progress briefly per step. Conclude with a short recap distinct from the upfront plan.
</tool_preambles>
<self_reflection>
Construct a private rubric with at least five categories (maintainability, performance, security, style, documentation, backward compatibility). Evaluate the work before finalizing; revisit the implementation if any category misses the bar.
</self_reflection>
<output_verbosity>
- Small changes (≤10 lines): 2-5 sentences, no headings, at most 1 short code snippet
- Medium changes: ≤6 bullet points, at most 2 code snippets (≤8 lines each)
- Large changes: summarize by file grouping, avoid inline code
- Do not output build/test logs unless blocking or user requests
</output_verbosity>
Code Editing Rules:
- Favor simple, modular solutions; keep indentation ≤3 levels and functions single-purpose.
- Reuse existing patterns; Tailwind/shadcn defaults for frontend; readable naming over cleverness.
- Comments only when intent is non-obvious; keep them short.
- Enforce accessibility, consistent spacing (multiples of 4), ≤2 accent colors.
- Use semantic HTML and accessible components.
Communication:
- Think in English, respond in Chinese, stay terse.
- Lead with findings before summaries; critique code, not people.
- Provide next steps only when they naturally follow from the work.

View File

@@ -15,12 +15,24 @@ Execute Codex CLI commands and parse structured JSON responses. Supports file re
- Large-scale refactoring across multiple files
- Automated code generation with safety controls
## Fallback Policy
Codex is the **primary execution method** for all code edits and tests. Direct execution is only permitted when:
1. Codex is unavailable (service down, network issues)
2. Codex fails **twice consecutively** on the same task
When falling back to direct execution:
- Log `CODEX_FALLBACK` with the reason
- Retry Codex on the next task (don't permanently switch)
- Document the fallback in the final summary
## Usage
**Mandatory**: Run every automated invocation through the Bash tool in the foreground with **HEREDOC syntax** to avoid shell quoting issues, keeping the `timeout` parameter fixed at `7200000` milliseconds (do not change it or use any other entry point).
```bash
uv run ~/.claude/skills/codex/scripts/codex.py - [working_dir] <<'EOF'
codex-wrapper - [working_dir] <<'EOF'
<task content here>
EOF
```
@@ -32,12 +44,12 @@ EOF
**Simple tasks** (backward compatibility):
For simple single-line tasks without special characters, you can still use direct quoting:
```bash
uv run ~/.claude/skills/codex/scripts/codex.py "simple task here" [working_dir]
codex-wrapper "simple task here" [working_dir]
```
**Resume a session with HEREDOC:**
```bash
uv run ~/.claude/skills/codex/scripts/codex.py resume <session_id> - [working_dir] <<'EOF'
codex-wrapper resume <session_id> - [working_dir] <<'EOF'
<task content>
EOF
```
@@ -46,18 +58,19 @@ EOF
- **Bash/Zsh**: Use `<<'EOF'` (single quotes prevent variable expansion)
- **PowerShell 5.1+**: Use `@'` and `'@` (here-string syntax)
```powershell
uv run ~/.claude/skills/codex/scripts/codex.py - @'
codex-wrapper - @'
task content
'@
```
## Environment Variables
- **CODEX_TIMEOUT**: Override timeout in milliseconds (default: 7200000 = 2 hours)
- Example: `export CODEX_TIMEOUT=3600000` for 1 hour
## Timeout Control
- **Built-in**: Script enforces 2-hour timeout by default
- **Built-in**: Binary enforces 2-hour timeout by default
- **Override**: Set `CODEX_TIMEOUT` environment variable (in milliseconds, e.g., `CODEX_TIMEOUT=3600000` for 1 hour)
- **Behavior**: On timeout, sends SIGTERM, then SIGKILL after 5s if process doesn't exit
- **Exit code**: Returns 124 on timeout (consistent with GNU timeout)
@@ -91,7 +104,7 @@ All automated executions must use HEREDOC syntax through the Bash tool in the fo
```
Bash tool parameters:
- command: uv run ~/.claude/skills/codex/scripts/codex.py - [working_dir] <<'EOF'
- command: codex-wrapper - [working_dir] <<'EOF'
<task content>
EOF
- timeout: 7200000
@@ -106,19 +119,19 @@ Run every call in the foreground—never append `&` to background it—so logs a
**Basic code analysis:**
```bash
# Recommended: via uv run with HEREDOC (handles any special characters)
uv run ~/.claude/skills/codex/scripts/codex.py - <<'EOF'
# Recommended: with HEREDOC (handles any special characters)
codex-wrapper - <<'EOF'
explain @src/main.ts
EOF
# timeout: 7200000
# Alternative: simple direct quoting (if task is simple)
uv run ~/.claude/skills/codex/scripts/codex.py "explain @src/main.ts"
codex-wrapper "explain @src/main.ts"
```
**Refactoring with multiline instructions:**
```bash
uv run ~/.claude/skills/codex/scripts/codex.py - <<'EOF'
codex-wrapper - <<'EOF'
refactor @src/utils for performance:
- Extract duplicate code into helpers
- Use memoization for expensive calculations
@@ -129,7 +142,7 @@ EOF
**Multi-file analysis:**
```bash
uv run ~/.claude/skills/codex/scripts/codex.py - "/path/to/project" <<'EOF'
codex-wrapper - "/path/to/project" <<'EOF'
analyze @. and find security issues:
1. Check for SQL injection vulnerabilities
2. Identify XSS risks in templates
@@ -142,13 +155,13 @@ EOF
**Resume previous session:**
```bash
# First session
uv run ~/.claude/skills/codex/scripts/codex.py - <<'EOF'
codex-wrapper - <<'EOF'
add comments to @utils.js explaining the caching logic
EOF
# Output includes: SESSION_ID: 019a7247-ac9d-71f3-89e2-a823dbd8fd14
# Continue the conversation with more context
uv run ~/.claude/skills/codex/scripts/codex.py resume 019a7247-ac9d-71f3-89e2-a823dbd8fd14 - <<'EOF'
codex-wrapper resume 019a7247-ac9d-71f3-89e2-a823dbd8fd14 - <<'EOF'
now add TypeScript type hints and handle edge cases where cache is null
EOF
# timeout: 7200000
@@ -156,7 +169,7 @@ EOF
**Task with code snippets and special characters:**
```bash
uv run ~/.claude/skills/codex/scripts/codex.py - <<'EOF'
codex-wrapper - <<'EOF'
Fix the bug in @app.js where the regex /\d+/ doesn't match "123"
The current code is:
const re = /\d+/;
@@ -165,26 +178,156 @@ Add proper escaping and handle $variables correctly.
EOF
```
### Large Task Protocol
### Parallel Execution
- For every large task, first produce a canonical task list that enumerates the Task ID, description, file/directory scope, dependencies, test commands, and the expected Codex Bash invocation.
- Tasks without dependencies should be executed concurrently via multiple foreground Bash calls (you can keep separate terminal windows) and each run must log start/end times plus any shared resource usage.
- Reuse context aggressively (such as @spec.md or prior analysis output), and after concurrent execution finishes, reconcile against the task list to report which items completed and which slipped.
> Important:
> - `--parallel` only reads task definitions from stdin.
> - It does not accept extra command-line arguments (no inline `workdir`, `task`, or other params).
> - Put all task metadata and content in stdin; nothing belongs after `--parallel` on the command line.
| ID | Description | Scope | Dependencies | Tests | Command |
| --- | --- | --- | --- | --- | --- |
| T1 | Review @spec.md to extract requirements | docs/, @spec.md | None | None | `uv run ~/.claude/skills/codex/scripts/codex.py - <<'EOF'`<br/>`analyze requirements @spec.md`<br/>`EOF` |
| T2 | Implement the module and add test cases | src/module | T1 | npm test -- --runInBand | `uv run ~/.claude/skills/codex/scripts/codex.py - <<'EOF'`<br/>`implement and test @src/module`<br/>`EOF` |
**Correct vs Incorrect Usage**
**Correct:**
```bash
# Option 1: file redirection
codex-wrapper --parallel < tasks.txt
# Option 2: heredoc (recommended for multiple tasks)
codex-wrapper --parallel <<'EOF'
---TASK---
id: task1
workdir: /path/to/dir
---CONTENT---
task content
EOF
# Option 3: pipe
echo "---TASK---..." | codex-wrapper --parallel
```
**Incorrect (will trigger shell parsing errors):**
```bash
# Bad: no extra args allowed after --parallel
codex-wrapper --parallel - /path/to/dir <<'EOF'
...
EOF
# Bad: --parallel does not take a task argument
codex-wrapper --parallel "task description"
# Bad: workdir must live inside the task config
codex-wrapper --parallel /path/to/dir < tasks.txt
```
For multiple independent or dependent tasks, use `--parallel` mode with delimiter format:
**Typical Workflow (analyze → implement → test, chained in a single parallel call)**:
```bash
codex-wrapper --parallel <<'EOF'
---TASK---
id: analyze_1732876800
workdir: /home/user/project
---CONTENT---
analyze @spec.md and summarize API and UI requirements
---TASK---
id: implement_1732876801
workdir: /home/user/project
dependencies: analyze_1732876800
---CONTENT---
implement features from analyze_1732876800 summary in backend @services and frontend @ui
---TASK---
id: test_1732876802
workdir: /home/user/project
dependencies: implement_1732876801
---CONTENT---
add and run regression tests covering the new endpoints and UI flows
EOF
```
A single `codex-wrapper --parallel` call schedules all three stages concurrently, using `dependencies` to enforce sequential ordering without multiple invocations.
```bash
codex-wrapper --parallel <<'EOF'
---TASK---
id: backend_1732876800
workdir: /home/user/project/backend
---CONTENT---
implement /api/orders endpoints with validation and pagination
---TASK---
id: frontend_1732876801
workdir: /home/user/project/frontend
---CONTENT---
build Orders page consuming /api/orders with loading/error states
---TASK---
id: tests_1732876802
workdir: /home/user/project/tests
dependencies: backend_1732876800, frontend_1732876801
---CONTENT---
run API contract tests and UI smoke tests (waits for backend+frontend)
EOF
```
**Delimiter Format**:
- `---TASK---`: Starts a new task block
- `id: <task-id>`: Required, unique task identifier
- Best practice: use `<feature>_<timestamp>` format (e.g., `auth_1732876800`, `api_test_1732876801`)
- Ensures uniqueness across runs and makes tasks traceable
- `workdir: <path>`: Optional, working directory (default: `.`)
- Best practice: use absolute paths (e.g., `/home/user/project/backend`)
- Avoids ambiguity and ensures consistent behavior across environments
- Must be specified inside each task block; do not pass `workdir` as a CLI argument to `--parallel`
- Each task can set its own `workdir` when different directories are needed
- `dependencies: <id1>, <id2>`: Optional, comma-separated task IDs
- `session_id: <uuid>`: Optional, resume a previous session
- `---CONTENT---`: Separates metadata from task content
- Task content: Any text, code, special characters (no escaping needed)
**Dependencies Best Practices**
- Avoid multiple invocations: Place "analyze then implement" in a single `codex-wrapper --parallel` call, chaining them via `dependencies`, rather than running analysis first and then launching implementation separately.
- Naming convention: Use `<action>_<timestamp>` format (e.g., `analyze_1732876800`, `implement_1732876801`), where action names map to features/stages and timestamps ensure uniqueness and sortability.
- Dependency chain design: Keep chains short; only add dependencies for tasks that truly require ordering, let others run in parallel, avoiding over-serialization that reduces throughput.
**Resume Failed Tasks**:
```bash
# Use session_id from previous output to resume
codex-wrapper --parallel <<'EOF'
---TASK---
id: T2
session_id: 019xxx-previous-session-id
---CONTENT---
fix the previous error and retry
EOF
```
**Output**: Human-readable text format
```
=== Parallel Execution Summary ===
Total: 3 | Success: 2 | Failed: 1
--- Task: T1 ---
Status: SUCCESS
Session: 019xxx
Task output message...
--- Task: T2 ---
Status: FAILED (exit code 1)
Error: some error message
```
**Features**:
- Automatic topological sorting based on dependencies
- Unlimited concurrency for independent tasks
- Error isolation (failed tasks don't stop others)
- Dependency blocking (dependent tasks skip if parent fails)
## Notes
- **Recommended**: Use `uv run` for automatic Python environment management (requires uv installed)
- **Alternative**: Direct execution `./codex.py` (uses system Python via shebang)
- Python implementation using standard library (zero dependencies)
- All automated runs must use the Bash tool with the fixed timeout to provide dual timeout protection and unified logging/exit semantics; any alternative approach is limited to manual foreground execution.
- Cross-platform compatible (Windows/macOS/Linux)
- PEP 723 compliant (inline script metadata)
- Runs with `--dangerously-bypass-approvals-and-sandbox` for automation (new sessions only)
- **Binary distribution**: Single Go binary, zero dependencies
- **Installation**: Download from GitHub Releases or use install.sh
- **Cross-platform compatible**: Linux (amd64/arm64), macOS (amd64/arm64)
- All automated runs must use the Bash tool with the fixed timeout to provide dual timeout protection and unified logging/exit semantics
for automation (new sessions only)
- Uses `--skip-git-repo-check` to work in any directory
- Streams progress, returns only final agent message
- Every execution returns a session ID for resuming conversations