Compare commits

...

35 Commits

Author SHA1 Message Date
cexll
326ad85c74 fix: use ANTHROPIC_AUTH_TOKEN for Claude CLI env injection
- Change env var from ANTHROPIC_API_KEY to ANTHROPIC_AUTH_TOKEN
- Add Backend field propagation in taskSpec (cli.go)
- Add stderr logging for injected env vars with API key masking
- Add comprehensive tests for env injection flow

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-24 15:20:29 +08:00
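The API-key masking mentioned in this commit can be sketched as follows (a minimal Python illustration of the idea; the wrapper itself is Go, and `mask_secret` is a hypothetical name):

```python
def mask_secret(value: str, keep: int = 6) -> str:
    """Keep a short identifying prefix of a secret and mask the rest,
    so injected env vars can be logged without leaking the token."""
    if len(value) <= keep:
        return "*" * len(value)
    return value[:keep] + "****"
```

Logging the masked form preserves enough context to debug env injection without exposing the key itself.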
cexll
e66bec0083 test: use prefix match for version flag tests
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-24 14:27:43 +08:00
cexll
eb066395c2 docs: restructure root READMEs with do as recommended workflow
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-24 14:27:41 +08:00
cexll
b49dad842a docs: update do/omo/sparv module READMEs with detailed workflows
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-24 14:27:39 +08:00
cexll
d98086c661 docs: add README for bmad and requirements modules
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-24 14:27:37 +08:00
cexll
0420646258 update codeagent version 2026-01-24 14:01:54 +08:00
cexll
19a8d8e922 refactor: rename feature-dev to do workflow
- Rename skills/feature-dev/ → skills/do/
- Update config.json module name and paths
- Shorter command: /do instead of /feature-dev
- State file: .claude/do.local.md
- Completion promise: <promise>DO_COMPLETE</promise>

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-23 20:29:28 +08:00
NieiR
669b1d82ce fix(gemini): read GEMINI_MODEL from ~/.gemini/.env (#131)
When using gemini backend without --model flag, now automatically
reads GEMINI_MODEL from ~/.gemini/.env file, consistent with how
claude backend reads model from settings.
2026-01-23 12:03:50 +08:00
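The lookup described here can be sketched like this (a hedged Python illustration; the actual wrapper is Go and its parsing may differ):

```python
import os

def read_env_value(path: str, key: str):
    """Return the value for `key` from a simple KEY=VALUE .env file,
    or None if the file or key is missing."""
    try:
        with open(os.path.expanduser(path)) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                k, _, v = line.partition("=")
                if k.strip() == key:
                    return v.strip().strip('"').strip("'")
    except OSError:
        return None
    return None
```

With `~/.gemini/.env` containing `GEMINI_MODEL=...`, this mirrors how the claude backend resolves its model from settings.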
cexll
a21c31fd89 feat: add feature-dev skill with 7-phase workflow
Structured feature development with codeagent orchestration:
- Discovery, Exploration, Clarification, Architecture phases
- Implementation, Review, Summary phases
- Parallel agent execution via code-explorer, code-architect, etc.
- Hook-based workflow automation with validation scripts

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-23 12:01:31 +08:00
cexll
773f133111 chore: ignore references directory
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-23 12:01:04 +08:00
cexll
4f5d24531c feat(install): support ${CLAUDE_PLUGIN_ROOT} variable in hooks config
- find_module_hooks now returns (hooks_config, plugin_root_path) tuple
- Add _replace_hook_variables() for recursive placeholder substitution
- Add feature-dev module config to config.json

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-23 12:00:55 +08:00
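Since install.py is Python, the recursive substitution can be sketched roughly as follows (signature assumed from the commit message):

```python
def replace_hook_variables(obj, plugin_root: str):
    """Recursively substitute ${CLAUDE_PLUGIN_ROOT} in every string
    inside nested dicts/lists, returning a new structure."""
    if isinstance(obj, str):
        return obj.replace("${CLAUDE_PLUGIN_ROOT}", plugin_root)
    if isinstance(obj, list):
        return [replace_hook_variables(v, plugin_root) for v in obj]
    if isinstance(obj, dict):
        return {k: replace_hook_variables(v, plugin_root) for k, v in obj.items()}
    return obj
```

Returning a new structure (rather than mutating in place) keeps the on-disk hooks config reusable across installs.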
cexll
cc24d43c8b fix(codeagent): validate non-empty output message before printing
Return exit code 1 when backend returns empty result.Message with exit_code=0.
Prevents silent failures where no output is produced.

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-23 12:00:47 +08:00
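The guard can be sketched as (Python illustration; the wrapper is Go and the names are hypothetical):

```python
def output_exit_code(message: str, backend_exit_code: int) -> int:
    """Return 1 when the backend claims success but produced no output,
    so empty results are surfaced instead of failing silently."""
    if backend_exit_code == 0 and not message.strip():
        return 1
    return backend_exit_code
```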
cexll
27d4ac8afd chore: add go.work.sum for workspace dependencies
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-23 12:00:38 +08:00
cexll
2e5d12570d fix: add missing cmd/codeagent/main.go entry point
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-20 17:45:50 +08:00
cexll
7c89c40e8f fix(ci): update release workflow build path for new directory structure
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-20 17:42:10 +08:00
cexll
fa617d1599 refactor: restructure codebase to internal/ directory with modular architecture
- Move all source files to internal/{app,backend,config,executor,logger,parser,utils}
- Integrate third-party libraries: zerolog, goccy/go-json, gopsutil, cobra/viper
- Add comprehensive unit tests for utils package (94.3% coverage)
- Add performance benchmarks for string operations
- Fix error display: cleanup warnings no longer pollute Recent Errors
- Add GitHub Actions CI workflow
- Add Makefile for build automation
- Add README documentation

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-20 17:34:26 +08:00
cexll
90c630e30e fix: write PATH config to both profile and rc files (#128)
Previously install.sh only wrote PATH to ~/.bashrc (or ~/.zshrc),
which doesn't work for non-interactive login shells like `bash -lc`
used by Claude Code, because Ubuntu's ~/.bashrc exits early for
non-interactive shells.

Now writes to both:
- bash: ~/.bashrc + ~/.profile
- zsh: ~/.zshrc + ~/.zprofile

Each file is checked independently for idempotency.

Closes #128

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-17 22:22:41 +08:00
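install.sh implements the per-file idempotency check in shell; the same logic, sketched as a hypothetical Python helper:

```python
import os

def ensure_line(path: str, line: str) -> bool:
    """Append `line` to `path` unless an identical line already exists.
    Returns True if the file was modified."""
    path = os.path.expanduser(path)
    if os.path.exists(path):
        with open(path) as f:
            if any(l.rstrip("\n") == line for l in f):
                return False
    with open(path, "a") as f:
        f.write(line + "\n")
    return True
```

Running it against both the rc file and the profile file covers interactive shells and `bash -lc` login shells alike.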
cexll
25bbbc32a7 feat: add course module with dev, product-requirements and test-cases skills
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-17 16:28:59 +08:00
cexll
d8304bf2b9 feat: add hooks management to install.py
- Add merge_hooks_to_settings: merge module hooks into settings.json
- Add unmerge_hooks_from_settings: remove module hooks from settings.json
- Add find_module_hooks: detect hooks.json in module directories
- Update execute_module: auto-merge hooks after install
- Update uninstall_module: auto-remove hooks on uninstall
- Add __module__ marker to track hook ownership

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-17 14:16:16 +08:00
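The `__module__` ownership marker makes hook removal symmetric with installation; a rough sketch (the real settings.json hook schema is richer than this):

```python
def merge_hooks(settings: dict, module: str, hooks: list) -> dict:
    """Add module hooks tagged with an ownership marker so they can be
    removed cleanly when the module is uninstalled."""
    merged = dict(settings)
    kept = [h for h in merged.get("hooks", []) if h.get("__module__") != module]
    merged["hooks"] = kept + [dict(h, __module__=module) for h in hooks]
    return merged

def unmerge_hooks(settings: dict, module: str) -> dict:
    """Remove only the hooks owned by `module`, leaving others intact."""
    merged = dict(settings)
    merged["hooks"] = [h for h in merged.get("hooks", []) if h.get("__module__") != module]
    return merged
```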
cexll
7240e08900 feat: add sparv module and interactive plugin manager
- Add sparv module to config.json (SPARV workflow v1.1)
- Disable essentials module by default
- Add --status to show installation status of all modules
- Add --uninstall to remove installed modules
- Add interactive management mode (install/uninstall via menu)
- Add filesystem-based installation detection
- Support both module numbers and names in selection
- Merge install status instead of overwriting

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-17 13:38:52 +08:00
cexll
e122d8ff25 feat: add sparv enhanced rules v1.1
- Add Uncertainty Declaration (G3): declare assumptions when score < 2
- Add Requirement Routing: Quick/Full mode based on scope
- Add Context Acquisition: optional kb.md check before Specify
- Add Knowledge Base: .sparv/kb.md for cross-session patterns
- Add changelog-update.sh: maintain CHANGELOG by type

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-17 13:12:43 +08:00
马雨
6985a30a6a docs: update 'Agent Hierarchy' model for frontend-ui-ux-engineer and document-writer in README (#127)
* docs: update mappings for frontend-ui-ux-engineer and document-writer in README

* docs: update 'Agent Hierarchy' model for frontend-ui-ux-engineer and document-writer in README
2026-01-17 12:32:32 +08:00
马雨
dd4c12b8e2 docs: update mappings for frontend-ui-ux-engineer and document-writer in README (#126) 2026-01-17 12:04:12 +08:00
cexll
a88315d92d feat: add sparv skill to claude-plugin v1.1.0
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-16 22:26:31 +08:00
cexll
d1f13b3379 remove .sparv 2026-01-16 14:35:11 +08:00
cexll
5d362852ab feat sparv skill 2026-01-16 14:34:03 +08:00
cexll
238c7b9a13 fix(codeagent-wrapper): remove extraneous dash arg for opencode stdin mode (#124)
opencode does not support "-" as a stdin marker like codex/claude/gemini.
When using stdin mode, omit the "-" argument so opencode reads from stdin
without an unrecognized positional argument.

Closes #124

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-16 10:30:38 +08:00
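The backend-specific handling amounts to omitting the stdin marker for opencode; roughly (hypothetical helper name, Python sketch):

```python
def stdin_args(backend: str) -> list:
    """codex/claude/gemini accept "-" as an explicit stdin marker;
    opencode reads stdin implicitly and rejects the extra positional."""
    return [] if backend == "opencode" else ["-"]
```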
cexll
0986fa82ee update readme 2026-01-16 09:39:55 +08:00
cexll
a989ce343c fix(codeagent-wrapper): correct default models for oracle and librarian agents (#120)
- oracle: claude-sonnet-4-20250514 → claude-opus-4-5-20251101
- librarian: claude-sonnet-4-5-20250514 → claude-sonnet-4-5-20250929

Fixes #120

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-16 09:37:39 +08:00
cexll
abe0839249 feat dev skill 2026-01-15 15:31:14 +08:00
cexll
d75c973f32 fix(codeagent-wrapper): filter codex 0.84.0 stderr noise logs (#122)
- Add skills loader error pattern to codex noise filter
- Update CHANGELOG for v5.6.4

Fixes #122

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-15 15:22:25 +08:00
cexll
e7f329940b fix(codeagent-wrapper): filter codex stderr noise logs
Add codexNoisePatterns to filter "ERROR codex_core::codex: needs_follow_up:"
messages from stderr output when using the codex backend.

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-15 14:59:31 +08:00
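The filtering technique can be sketched as follows (Python illustration; the actual pattern list lives in the Go wrapper):

```python
import re

# Example noise pattern from this commit; the real list is maintained
# in the wrapper and grows as new backend versions add noise.
NOISE_PATTERNS = [
    re.compile(r"ERROR codex_core::codex: needs_follow_up:"),
]

def filter_stderr(lines):
    """Drop known-noise stderr lines, keeping real errors visible."""
    return [l for l in lines if not any(p.search(l) for p in NOISE_PATTERNS)]
```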
cexll
0fc5eaaa2d fix: update version tests to match 5.6.3
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-14 17:26:21 +08:00
cexll
420eb857ff chore: bump codeagent-wrapper version to 5.6.3
Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-14 17:14:06 +08:00
cexll
661656c587 fix(codeagent-wrapper): use config override for codex reasoning effort
Replace invalid `--reasoning-effort` CLI flag with `-c model_reasoning_effort=<value>`
config override, as codex does not support the former.

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-01-14 17:04:21 +08:00
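The corrected invocation passes the effort as a `-c` config override instead of a flag; a rough sketch of the argument construction (hypothetical helper, Python):

```python
def codex_args(effort=None) -> list:
    """Build codex CLI args, expressing reasoning effort as a -c
    config override (codex has no --reasoning-effort flag)."""
    args = ["codex", "e", "--json"]
    if effort:
        args += ["-c", "model_reasoning_effort=%s" % effort]
    return args
```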
122 changed files with 9467 additions and 4716 deletions


@@ -42,6 +42,13 @@
"version": "5.6.1",
"source": "./development-essentials",
"category": "productivity"
},
{
"name": "sparv",
"description": "Minimal SPARV workflow (Specify→Plan→Act→Review→Vault) with 10-point spec gate, unified journal, 2-action saves, 3-failure protocol, and EHRB risk detection",
"version": "1.1.0",
"source": "./skills/sparv",
"category": "development"
}
]
}


@@ -74,7 +74,7 @@ jobs:
if [ "${{ matrix.goos }}" = "windows" ]; then
OUTPUT_NAME="${OUTPUT_NAME}.exe"
fi
go build -ldflags="-s -w -X main.version=${VERSION}" -o ${OUTPUT_NAME} .
go build -ldflags="-s -w -X main.version=${VERSION}" -o ${OUTPUT_NAME} ./cmd/codeagent
chmod +x ${OUTPUT_NAME}
echo "artifact_path=codeagent-wrapper/${OUTPUT_NAME}" >> $GITHUB_OUTPUT

.gitignore (1 line changed)

@@ -7,3 +7,4 @@
__pycache__
.coverage
coverage.out
references


@@ -2,6 +2,66 @@
All notable changes to this project will be documented in this file.
## [5.6.4] - 2026-01-15
### 🚀 Features
- add reasoning effort config for codex backend
- default to skip-permissions and bypass-sandbox
- add multi-agent support with yolo mode
- add omo module for multi-agent orchestration
- add intelligent backend selection based on task complexity (#61)
- v5.4.0 structured execution report (#94)
- add millisecond-precision timestamps to all log entries (#91)
- skill-install install script and security scan
- add uninstall scripts with selective module removal
### 🐛 Bug Fixes
- filter codex stderr noise logs
- use config override for codex reasoning effort
- propagate SkipPermissions to parallel tasks (#113)
- add timeout for Windows process termination
- reject dash as workdir parameter (#118)
- add sleep in fake script to prevent CI race condition
- fix gemini env load
- fix omo
- fix codeagent skill TaskOutput
- fix Gemini init event session_id not being extracted (#111)
- Windows backend exit: terminate the process tree via taskkill + turn.completed support (#108)
- support model parameter for all backends, auto-inject from settings (#105)
- replace setx with reg add to avoid 1024-char PATH truncation (#101)
- remove log noise for unknown event formats (#96)
- prevent duplicate PATH entries on reinstall (#95)
- Minor issues #12 and #13 - ASCII mode and performance optimization
- correct settings.json filename and bump version to v5.2.8
- allow claude backend to read env from setting.json while preventing recursion (#92)
- comprehensive security and quality improvements for PR #85 & #87 (#90)
- Improve backend termination after message and extend timeout (#86)
- Parser duplicate-parsing optimization + critical bug fixes + PR #86 compatibility (#88)
- filter noisy stderr output from gemini backend (#83)
- fix WSL install.sh formatting issue (#78)
- fix PID confusion in multi-backend parallel logs and remove the wrapper format (#74) (#76)
### 🚜 Refactor
- remove sisyphus agent and unused code
- streamline agent documentation and remove sisyphus
### 📚 Documentation
- add OmO workflow to README and fix plugin marketplace structure
- update FAQ for default bypass/skip-permissions behavior
- add an FAQ section for common questions
- update troubleshooting with idempotent PATH commands (#95)
### 💼 Other
- add test-cases skill
- add browser skill
- BMAD and Requirements-Driven workflows generate matching documents based on semantics (#82)
- update all readme
## [5.2.4] - 2025-12-16

README.md (635 lines changed)

@@ -3,29 +3,13 @@
# Claude Code Multi-Agent Workflow System
[![Run in Smithery](https://smithery.ai/badge/skills/cexll)](https://smithery.ai/skills?ns=cexll&utm_source=github&utm_medium=badge)
[![License: AGPL-3.0](https://img.shields.io/badge/License-AGPL_v3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![Claude Code](https://img.shields.io/badge/Claude-Code-blue)](https://claude.ai/code)
[![Version](https://img.shields.io/badge/Version-5.2-green)](https://github.com/cexll/myclaude)
[![Version](https://img.shields.io/badge/Version-6.x-green)](https://github.com/cexll/myclaude)
> AI-powered development automation with multi-backend execution (Codex/Claude/Gemini)
> AI-powered development automation with multi-backend execution (Codex/Claude/Gemini/OpenCode)
## Core Concept: Multi-Backend Architecture
This system leverages a **dual-agent architecture** with pluggable AI backends:
| Role | Agent | Responsibility |
|------|-------|----------------|
| **Orchestrator** | Claude Code | Planning, context gathering, verification, user interaction |
| **Executor** | codeagent-wrapper | Code editing, test execution (Codex/Claude/Gemini backends) |
**Why this separation?**
- Claude Code excels at understanding context and orchestrating complex workflows
- Specialized backends (Codex for code, Claude for reasoning, Gemini for prototyping) excel at focused execution
- Backend selection via `--backend codex|claude|gemini` matches the model to the task
## Quick Start(Please execute in Powershell on Windows)
## Quick Start
```bash
git clone https://github.com/cexll/myclaude.git
@@ -33,199 +17,23 @@ cd myclaude
python3 install.py --install-dir ~/.claude
```
## Workflows Overview
## Modules Overview
### 0. OmO Multi-Agent Orchestrator (Recommended for Complex Tasks)
**Intelligent multi-agent orchestration that routes tasks to specialized agents based on risk signals.**
```bash
/omo "analyze and fix this authentication bug"
```
**Agent Hierarchy:**
| Agent | Role | Backend | Model |
|-------|------|---------|-------|
| `oracle` | Technical advisor | Claude | claude-opus-4-5 |
| `librarian` | External research | Claude | claude-sonnet-4-5 |
| `explore` | Codebase search | OpenCode | grok-code |
| `develop` | Code implementation | Codex | gpt-5.2 |
| `frontend-ui-ux-engineer` | UI/UX specialist | Gemini | gemini-3-pro |
| `document-writer` | Documentation | Gemini | gemini-3-flash |
**Routing Signals (Not Fixed Pipeline):**
- Code location unclear → `explore`
- External library/API → `librarian`
- Risky/multi-file change → `oracle`
- Implementation needed → `develop` / `frontend-ui-ux-engineer`
**Common Recipes:**
- Explain code: `explore`
- Small fix with known location: `develop` directly
- Bug fix, location unknown: `explore → develop`
- Cross-cutting refactor: `explore → oracle → develop`
- External API integration: `explore + librarian → oracle → develop`
**Best For:** Complex bug investigation, multi-file refactoring, architecture decisions
---
### 1. Dev Workflow (Recommended)
**The primary workflow for most development tasks.**
```bash
/dev "implement user authentication with JWT"
```
**6-Step Process:**
1. **Requirements Clarification** - Interactive Q&A to clarify scope
2. **Codex Deep Analysis** - Codebase exploration and architecture decisions
3. **Dev Plan Generation** - Structured task breakdown with test requirements
4. **Parallel Execution** - Codex executes tasks concurrently
5. **Coverage Validation** - Enforce ≥90% test coverage
6. **Completion Summary** - Report with file changes and coverage stats
**Key Features:**
- Claude Code orchestrates, Codex executes all code changes
- Automatic task parallelization for speed
- Mandatory 90% test coverage gate
- Rollback on failure
**Best For:** Feature development, refactoring, bug fixes with tests
---
### 2. BMAD Agile Workflow
**Full enterprise agile methodology with 6 specialized agents.**
```bash
/bmad-pilot "build e-commerce checkout system"
```
**Agents:**
| Agent | Role |
|-------|------|
| Product Owner | Requirements & user stories |
| Architect | System design & tech decisions |
| Tech Lead | Sprint planning & task breakdown |
| Developer | Implementation |
| Code Reviewer | Quality assurance |
| QA Engineer | Testing & validation |
**Process:**
```
Requirements → Architecture → Sprint Plan → Development → Review → QA
↓ ↓ ↓ ↓ ↓ ↓
PRD.md DESIGN.md SPRINT.md Code REVIEW.md TEST.md
```
**Best For:** Large features, team coordination, enterprise projects
---
### 3. Requirements-Driven Workflow
**Lightweight requirements-to-code pipeline.**
```bash
/requirements-pilot "implement API rate limiting"
```
**Process:**
1. Requirements generation with quality scoring
2. Implementation planning
3. Code generation
4. Review and testing
**Best For:** Quick prototypes, well-defined features
---
### 4. Development Essentials
**Direct commands for daily coding tasks.**
| Command | Purpose |
|---------|---------|
| `/code` | Implement a feature |
| `/debug` | Debug an issue |
| `/test` | Write tests |
| `/review` | Code review |
| `/optimize` | Performance optimization |
| `/refactor` | Code refactoring |
| `/docs` | Documentation |
**Best For:** Quick tasks, no workflow overhead needed
## Enterprise Workflow Features
- **Multi-backend execution:** `codeagent-wrapper --backend codex|claude|gemini` (default `codex`) so you can match the model to the task without changing workflows.
- **GitHub workflow commands:** `/gh-create-issue "short need"` creates structured issues; `/gh-issue-implement 123` pulls issue #123, drives development, and prepares the PR.
- **Skills + hooks activation:** .claude/hooks run automation (tests, reviews), while `.claude/skills/skill-rules.json` auto-suggests the right skills. Keep hooks enabled in `.claude/settings.json` to activate the enterprise workflow helpers.
---
## Version Requirements
### Codex CLI
**Minimum version:** Check compatibility with your installation
The codeagent-wrapper uses these Codex CLI features:
- `codex e` - Execute commands (shorthand for `codex exec`)
- `--skip-git-repo-check` - Skip git repository validation
- `--json` - JSON stream output format
- `-C <workdir>` - Set working directory
- `resume <session_id>` - Resume previous sessions
**Verify Codex CLI is installed:**
```bash
which codex
codex --version
```
### Claude CLI
**Minimum version:** Check compatibility with your installation
Required features:
- `--output-format stream-json` - Streaming JSON output format
- `--setting-sources` - Control setting sources (prevents infinite recursion)
- `--dangerously-skip-permissions` - Skip permission prompts (use with caution)
- `-p` - Prompt input flag
- `-r <session_id>` - Resume sessions
**Security Note:** The wrapper adds `--dangerously-skip-permissions` for Claude by default. Set `CODEAGENT_SKIP_PERMISSIONS=false` to disable if you need permission prompts.
**Verify Claude CLI is installed:**
```bash
which claude
claude --version
```
### Gemini CLI
**Minimum version:** Check compatibility with your installation
Required features:
- `-o stream-json` - JSON stream output format
- `-y` - Auto-approve prompts (non-interactive mode)
- `-r <session_id>` - Resume sessions
- `-p` - Prompt input flag
**Verify Gemini CLI is installed:**
```bash
which gemini
gemini --version
```
---
| Module | Description | Documentation |
|--------|-------------|---------------|
| [do](skills/do/README.md) | **Recommended** - 7-phase feature development with codeagent orchestration | `/do` command |
| [dev](dev-workflow/README.md) | Lightweight dev workflow with Codex integration | `/dev` command |
| [omo](skills/omo/README.md) | Multi-agent orchestration with intelligent routing | `/omo` command |
| [bmad](bmad-agile-workflow/README.md) | BMAD agile workflow with 6 specialized agents | `/bmad-pilot` command |
| [requirements](requirements-driven-workflow/README.md) | Lightweight requirements-to-code pipeline | `/requirements-pilot` command |
| [essentials](development-essentials/README.md) | Core development commands and utilities | `/code`, `/debug`, etc. |
| [sparv](skills/sparv/README.md) | SPARV workflow (Specify→Plan→Act→Review→Vault) | `/sparv` command |
| course | Course development (combines dev + product-requirements + test-cases) | Composite module |
## Installation
### Modular Installation (Recommended)
```bash
# Install all enabled modules (dev + essentials by default)
# Install all enabled modules
python3 install.py --install-dir ~/.claude
# Install specific module
@@ -234,173 +42,72 @@ python3 install.py --module dev
# List available modules
python3 install.py --list-modules
# Force overwrite existing files
# Force overwrite
python3 install.py --force
```
### Available Modules
### Module Configuration
| Module | Default | Description |
|--------|---------|-------------|
| `dev` | ✓ Enabled | Dev workflow + Codex integration |
| `essentials` | ✓ Enabled | Core development commands |
| `bmad` | Disabled | Full BMAD agile workflow |
| `requirements` | Disabled | Requirements-driven workflow |
### What Gets Installed
```
~/.claude/
├── bin/
│ └── codeagent-wrapper # Main executable
├── CLAUDE.md # Core instructions and role definition
├── commands/ # Slash commands (/dev, /code, etc.)
├── agents/ # Agent definitions
├── skills/
│ └── codex/
│ └── SKILL.md # Codex integration skill
├── config.json # Configuration
└── installed_modules.json # Installation status
```
### Customizing Installation Directory
By default, myclaude installs to `~/.claude`. You can customize this using the `INSTALL_DIR` environment variable:
```bash
# Install to custom directory
INSTALL_DIR=/opt/myclaude bash install.sh
# Update your PATH accordingly
export PATH="/opt/myclaude/bin:$PATH"
```
**Directory Structure:**
- `$INSTALL_DIR/bin/` - codeagent-wrapper binary
- `$INSTALL_DIR/skills/` - Skill definitions
- `$INSTALL_DIR/config.json` - Configuration file
- `$INSTALL_DIR/commands/` - Slash command definitions
- `$INSTALL_DIR/agents/` - Agent definitions
**Note:** When using a custom installation directory, ensure that `$INSTALL_DIR/bin` is added to your `PATH` environment variable.
### Configuration
Edit `config.json` to customize:
Edit `config.json` to enable/disable modules:
```json
{
"version": "1.0",
"install_dir": "~/.claude",
"modules": {
"dev": {
"enabled": true,
"operations": [
{"type": "merge_dir", "source": "dev-workflow"},
{"type": "copy_file", "source": "memorys/CLAUDE.md", "target": "CLAUDE.md"},
{"type": "copy_file", "source": "skills/codex/SKILL.md", "target": "skills/codex/SKILL.md"},
{"type": "run_command", "command": "bash install.sh"}
]
}
"dev": { "enabled": true },
"bmad": { "enabled": false },
"requirements": { "enabled": false },
"essentials": { "enabled": false },
"omo": { "enabled": false },
"sparv": { "enabled": false },
"do": { "enabled": false },
"course": { "enabled": false }
}
}
```
**Operation Types:**
| Type | Description |
|------|-------------|
| `merge_dir` | Merge subdirs (commands/, agents/) into install dir |
| `copy_dir` | Copy entire directory |
| `copy_file` | Copy single file to target path |
| `run_command` | Execute shell command |
---
## Codex Integration
The `codex` skill enables Claude Code to delegate code execution to Codex CLI.
### Usage in Workflows
```bash
# Codex is invoked via the skill
codeagent-wrapper - <<'EOF'
implement @src/auth.ts with JWT validation
EOF
```
### Parallel Execution
```bash
codeagent-wrapper --parallel <<'EOF'
---TASK---
id: backend_api
workdir: /project/backend
---CONTENT---
implement REST endpoints for /api/users
---TASK---
id: frontend_ui
workdir: /project/frontend
dependencies: backend_api
---CONTENT---
create React components consuming the API
EOF
```
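A rough sketch of how a task list in this format could be parsed (illustrative only; the wrapper's actual parser is Go):

```python
def parse_tasks(text: str):
    """Split a ---TASK--- / ---CONTENT--- task list into dicts holding
    the header fields (id, workdir, dependencies, ...) plus a 'content'
    body with the task prompt."""
    tasks = []
    for block in text.split("---TASK---"):
        block = block.strip()
        if not block:
            continue
        header, _, content = block.partition("---CONTENT---")
        task = {"content": content.strip()}
        for line in header.strip().splitlines():
            k, _, v = line.partition(":")
            task[k.strip()] = v.strip()
        tasks.append(task)
    return tasks
```

The `dependencies` field is what lets the wrapper order tasks (e.g. `frontend_ui` waits for `backend_api`) while independent tasks run concurrently.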
### Install Codex Wrapper
```bash
# Automatic (via dev module)
python3 install.py --module dev
# Manual
bash install.sh
```
#### Windows
Windows installs place `codeagent-wrapper.exe` in `%USERPROFILE%\bin`.
```powershell
# PowerShell (recommended)
powershell -ExecutionPolicy Bypass -File install.ps1
# Batch (cmd)
install.bat
```
**Add to PATH** (if installer doesn't detect it):
```powershell
# PowerShell - persistent for current user
[Environment]::SetEnvironmentVariable('PATH', "$HOME\bin;" + [Environment]::GetEnvironmentVariable('PATH','User'), 'User')
# PowerShell - current session only
$Env:PATH = "$HOME\bin;$Env:PATH"
```
```batch
REM cmd.exe - persistent for current user (use PowerShell method above instead)
REM WARNING: This expands %PATH% which includes system PATH, causing duplication
REM Note: Using reg add instead of setx to avoid 1024-character truncation limit
reg add "HKCU\Environment" /v Path /t REG_EXPAND_SZ /d "%USERPROFILE%\bin;%PATH%" /f
```
---
## Workflow Selection Guide
| Scenario | Recommended Workflow |
|----------|---------------------|
| New feature with tests | `/dev` |
| Quick bug fix | `/debug` or `/code` |
| Large multi-sprint feature | `/bmad-pilot` |
| Prototype or POC | `/requirements-pilot` |
| Code review | `/review` |
| Performance issue | `/optimize` |
| Scenario | Recommended |
|----------|-------------|
| Feature development (default) | `/do` |
| Lightweight feature | `/dev` |
| Bug investigation + fix | `/omo` |
| Large enterprise project | `/bmad-pilot` |
| Quick prototype | `/requirements-pilot` |
| Simple task | `/code`, `/debug` |
---
## Core Architecture
| Role | Agent | Responsibility |
|------|-------|----------------|
| **Orchestrator** | Claude Code | Planning, context gathering, verification |
| **Executor** | codeagent-wrapper | Code editing, test execution (Codex/Claude/Gemini/OpenCode) |
## Backend CLI Requirements
| Backend | Required Features |
|---------|-------------------|
| Codex | `codex e`, `--json`, `-C`, `resume` |
| Claude | `--output-format stream-json`, `-r` |
| Gemini | `-o stream-json`, `-y`, `-r` |
## Directory Structure After Installation
```
~/.claude/
├── bin/codeagent-wrapper
├── CLAUDE.md
├── commands/
├── agents/
├── skills/
└── config.json
```
## Documentation
- [Codeagent-Wrapper Guide](docs/CODEAGENT-WRAPPER.md)
- [Hooks Documentation](docs/HOOKS.md)
- [codeagent-wrapper](codeagent-wrapper/README.md)
## Troubleshooting
@@ -408,214 +115,38 @@ reg add "HKCU\Environment" /v Path /t REG_EXPAND_SZ /d "%USERPROFILE%\bin;%PATH%
**Codex wrapper not found:**
```bash
# Installer auto-adds PATH, check if configured
if [[ ":$PATH:" != *":$HOME/.claude/bin:"* ]]; then
echo "PATH not configured. Reinstalling..."
bash install.sh
fi
# Or manually add (idempotent command)
[[ ":$PATH:" != *":$HOME/.claude/bin:"* ]] && echo 'export PATH="$HOME/.claude/bin:$PATH"' >> ~/.zshrc
```
**Permission denied:**
```bash
python3 install.py --install-dir ~/.claude --force
bash install.sh
```
**Module not loading:**
```bash
# Check installation status
cat ~/.claude/installed_modules.json
# Reinstall specific module
python3 install.py --module dev --force
python3 install.py --module <name> --force
```
### Version Compatibility Issues
**Backend CLI not found:**
**Backend CLI errors:**
```bash
# Check if backend CLIs are installed
which codex
which claude
which gemini
# Install missing backends
# Codex: Follow installation instructions at https://codex.docs
# Claude: Follow installation instructions at https://claude.ai/docs
# Gemini: Follow installation instructions at https://ai.google.dev/docs
which codex && codex --version
which claude && claude --version
which gemini && gemini --version
```
**Unsupported CLI flags:**
```bash
# If you see errors like "unknown flag" or "invalid option"
# Check backend CLI version
codex --version
claude --version
gemini --version
# For Codex: Ensure it supports `e`, `--skip-git-repo-check`, `--json`, `-C`, and `resume`
# For Claude: Ensure it supports `--output-format stream-json`, `--setting-sources`, `-r`
# For Gemini: Ensure it supports `-o stream-json`, `-y`, `-r`, `-p`
# Update your backend CLI to the latest version if needed
```
## FAQ
| Issue | Solution |
|-------|----------|
| "Unknown event format" | Logging display issue, can be ignored |
| Gemini can't read .gitignore files | Remove from .gitignore or use different backend |
| `/dev` slow | Check logs, try faster model, use single repo |
| Codex permission denied | Set `approval_policy = "never"` in ~/.codex/config.yaml |
**JSON parsing errors:**
```bash
# If you see "failed to parse JSON output" errors
# Verify the backend outputs stream-json format
codex e --json "test task" # Should output newline-delimited JSON
claude --output-format stream-json -p "test" # Should output stream JSON
# If not, your backend CLI version may be too old or incompatible
```
**Infinite recursion with Claude backend:**
```bash
# The wrapper prevents this with `--setting-sources ""` flag
# If you still see recursion, ensure your Claude CLI supports this flag
claude --help | grep "setting-sources"
# If flag is not supported, upgrade Claude CLI
```
**Session resume failures:**
```bash
# Check if session ID is valid
codex history # List recent sessions
claude history
# Ensure backend CLI supports session resumption
codex resume <session_id> "test" # Should continue from previous session
claude -r <session_id> "test"
# If not supported, use new sessions instead of resume mode
```
---
## FAQ (Frequently Asked Questions)
### Q1: `codeagent-wrapper` execution fails with "Unknown event format"
**Problem:**
```
Unknown event format: {"type":"turn.started"}
Unknown event format: {"type":"assistant", ...}
```
**Solution:**
This is a logging event format display issue and does not affect actual functionality. It will be fixed in the next version. You can ignore these log outputs.
**Related Issue:** [#96](https://github.com/cexll/myclaude/issues/96)
---
### Q2: Gemini cannot read files ignored by `.gitignore`
**Problem:**
When using `codeagent-wrapper --backend gemini`, files in directories like `.claude/` that are ignored by `.gitignore` cannot be read.
**Solution:**
- **Option 1:** Remove `.claude/` from your `.gitignore` file
- **Option 2:** Ensure files that need to be read are not in `.gitignore` list
**Related Issue:** [#75](https://github.com/cexll/myclaude/issues/75)
---
### Q3: `/dev` command parallel execution is very slow
**Problem:**
Using `/dev` command for simple features takes too long (over 30 minutes) with no visibility into task progress.
**Solution:**
1. **Check logs:** Review `C:\Users\User\AppData\Local\Temp\codeagent-wrapper-*.log` to identify bottlenecks
2. **Adjust backend:**
- Try faster models like `gpt-5.1-codex-max`
- Running in WSL may be significantly faster
3. **Workspace:** Use a single repository instead of monorepo with multiple sub-projects
**Related Issue:** [#77](https://github.com/cexll/myclaude/issues/77)
---
### Q4: Codex permission denied with new Go version
**Problem:**
After upgrading to the new Go-based Codex implementation, execution fails with permission denied errors.
**Solution:**
Add the following configuration to `~/.codex/config.toml` (Windows: `c:\user\.codex\config.toml`):
```toml
model = "gpt-5.1-codex-max"
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"
approval_policy = "never"
sandbox_mode = "workspace-write"
disable_response_storage = true
network_access = true
```
**Key settings:**
- `approval_policy = "never"` - Remove approval restrictions
- `sandbox_mode = "workspace-write"` - Allow workspace write access
- `network_access = true` - Enable network access
**Related Issue:** [#31](https://github.com/cexll/myclaude/issues/31)
---
### Q5: How to disable default bypass/skip-permissions mode
**Background:**
By default, codeagent-wrapper enables bypass mode for both Codex and Claude backends:
- `CODEX_BYPASS_SANDBOX=true` - Bypasses Codex sandbox restrictions
- `CODEAGENT_SKIP_PERMISSIONS=true` - Skips Claude permission prompts
**To disable (if you need sandbox/permission protection):**
```bash
export CODEX_BYPASS_SANDBOX=false
export CODEAGENT_SKIP_PERMISSIONS=false
```
Or add to your shell profile (`~/.zshrc` or `~/.bashrc`):
```bash
echo 'export CODEX_BYPASS_SANDBOX=false' >> ~/.zshrc
echo 'export CODEAGENT_SKIP_PERMISSIONS=false' >> ~/.zshrc
```
**Note:** Disabling bypass mode will require manual approval for certain operations.
---
**Still having issues?** Visit [GitHub Issues](https://github.com/cexll/myclaude/issues) to search or report new issues.
---
## Documentation
- **[Codeagent-Wrapper Guide](docs/CODEAGENT-WRAPPER.md)** - Multi-backend execution wrapper
- **[Hooks Documentation](docs/HOOKS.md)** - Custom hooks and automation
### Additional Resources
- **[Installation Log](install.log)** - Installation history and troubleshooting
---
See [GitHub Issues](https://github.com/cexll/myclaude/issues) for more.
## License
AGPL-3.0 License - see [LICENSE](LICENSE)
AGPL-3.0 - see [LICENSE](LICENSE)
## Support
- **Issues**: [GitHub Issues](https://github.com/cexll/myclaude/issues)
- **Documentation**: [docs/](docs/)
---
**Claude Code + Codex = Better Development** - Orchestration meets execution.
- [GitHub Issues](https://github.com/cexll/myclaude/issues)
- [Documentation](docs/)

@@ -2,25 +2,11 @@
[![License: AGPL-3.0](https://img.shields.io/badge/License-AGPL_v3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![Claude Code](https://img.shields.io/badge/Claude-Code-blue)](https://claude.ai/code)
[![Version](https://img.shields.io/badge/Version-5.2-green)](https://github.com/cexll/myclaude)
[![Version](https://img.shields.io/badge/Version-6.x-green)](https://github.com/cexll/myclaude)
> AI-driven development automation - multi-backend execution architecture (Codex/Claude/Gemini)
> AI-driven development automation - multi-backend execution architecture (Codex/Claude/Gemini/OpenCode)
## Core Concept: Multi-Backend Architecture
This system uses a **dual-agent architecture** with pluggable AI backends:
| Role | Agent | Responsibility |
|------|-------|----------------|
| **Orchestrator** | Claude Code | Planning, context gathering, validation, user interaction |
| **Executor** | codeagent-wrapper | Code edits, test execution (Codex/Claude/Gemini backends) |
**Why the separation?**
- Claude Code excels at understanding context and orchestrating complex workflows
- Specialized backends (Codex for code, Claude for reasoning, Gemini for prototyping) focus on execution
- Match the model to the task via `--backend codex|claude|gemini`
## Quick Start (on Windows, run in PowerShell)
## Quick Start
```bash
git clone https://github.com/cexll/myclaude.git
@@ -28,72 +14,125 @@ cd myclaude
python3 install.py --install-dir ~/.claude
```
## Workflow Overview
## Module Overview
### 0. OmO Multi-Agent Orchestrator (recommended for complex tasks)
| Module | Description | Docs |
|--------|-------------|------|
| [do](skills/do/README.md) | **Recommended** - 7-phase feature development with codeagent orchestration | `/do` command |
| [dev](dev-workflow/README.md) | Lightweight development workflow with Codex integration | `/dev` command |
| [omo](skills/omo/README.md) | Multi-agent orchestration with intelligent routing | `/omo` command |
| [bmad](bmad-agile-workflow/README.md) | BMAD agile workflow with 6 specialized agents | `/bmad-pilot` command |
| [requirements](requirements-driven-workflow/README.md) | Lightweight requirements-to-code pipeline | `/requirements-pilot` command |
| [essentials](development-essentials/README.md) | Core development commands and tools | `/code`, `/debug`, etc. |
| [sparv](skills/sparv/README.md) | SPARV workflow (Specify→Plan→Act→Review→Vault) | `/sparv` command |
| course | Course development (combines dev + product-requirements + test-cases) | composite module |
**A multi-agent orchestration system that intelligently routes tasks to specialized agents based on risk signals.**
## Core Architecture
| Role | Agent | Responsibility |
|------|-------|----------------|
| **Orchestrator** | Claude Code | Planning, context gathering, validation |
| **Executor** | codeagent-wrapper | Code edits, test execution (Codex/Claude/Gemini/OpenCode backends) |
## Workflow Details
### The do Workflow (Recommended)
7-phase feature development that orchestrates multiple agents through codeagent-wrapper. **The preferred workflow for most feature development tasks.**
```bash
/omo "Analyze and fix this authentication bug"
/do "Add a user login feature"
```
**Agent hierarchy**
| Agent | Role | Backend | Model |
|-------|------|---------|-------|
| `oracle` | Technical advisor | Claude | claude-opus-4-5 |
| `librarian` | External research | Claude | claude-sonnet-4-5 |
| `explore` | Codebase search | OpenCode | grok-code |
| `develop` | Code implementation | Codex | gpt-5.2 |
| `frontend-ui-ux-engineer` | UI/UX specialist | Gemini | gemini-3-pro |
| `document-writer` | Documentation | Gemini | gemini-3-flash |
**7 phases**
| Phase | Name | Goal |
|-------|------|------|
| 1 | Discovery | Understand requirements |
| 2 | Exploration | Map codebase patterns |
| 3 | Clarification | Resolve ambiguity (**mandatory**) |
| 4 | Architecture | Design the implementation |
| 5 | Implementation | Build the feature (**requires approval**) |
| 6 | Review | Catch defects |
| 7 | Summary | Record results |
**Routing signals (not a fixed pipeline)**
- Code location unclear → `explore`
- External library/API → `librarian`
- High-risk/multi-file change → `oracle`
- Implementation needed → `develop` / `frontend-ui-ux-engineer`
**Common recipes:**
- Explain code: `explore`
- Small fix with known location: `develop` directly
- Bug fix, location unknown: `explore → develop`
- Cross-module refactor: `explore → oracle → develop`
- External API integration: `explore + librarian → oracle → develop`
**Best for:** complex bug investigations, multi-file refactors, architecture decisions
**Agents**
- `code-explorer` - Code tracing, architecture mapping
- `code-architect` - Design proposals, file planning
- `code-reviewer` - Code review, simplification suggestions
- `develop` - Implement code, run tests
---
### 1. Dev Workflow (Recommended)
### Dev Workflow
**The preferred workflow for most development tasks.**
Lightweight development workflow, suited to simple feature development.
```bash
/dev "Implement JWT user authentication"
```
**6-step process:**
1. **Requirements clarification** - Interactive Q&A to pin down scope
2. **Codex deep analysis** - Codebase exploration and architecture decisions
3. **Development plan generation** - Structured task breakdown with test requirements
4. **Parallel execution** - Codex runs tasks concurrently
5. **Coverage validation** - Enforced ≥90% test coverage
6. **Completion summary** - File changes and coverage report
**Key features:**
- Claude Code orchestrates; Codex performs all code changes
- Automatic task parallelization for speed
- Enforced 90% test coverage gate
- Automatic rollback on failure
**Best for:** feature development, refactoring, bug fixes with tests
1. Requirements clarification - Interactive Q&A
2. Codex deep analysis - Codebase exploration
3. Development plan generation - Structured task breakdown
4. Parallel execution - Codex runs tasks concurrently
5. Coverage validation - Enforced ≥90%
6. Completion summary - Report generation
---
### 2. BMAD Agile Workflow
### OmO Multi-Agent Orchestrator
**Full enterprise agile methodology with 6 specialized agents.**
Intelligently routes tasks to specialized agents based on risk signals.
```bash
/omo "Analyze and fix this authentication bug"
```
**Agent hierarchy:**
| Agent | Role | Backend |
|-------|------|---------|
| `oracle` | Technical advisor | Claude |
| `librarian` | External research | Claude |
| `explore` | Codebase search | OpenCode |
| `develop` | Code implementation | Codex |
| `frontend-ui-ux-engineer` | UI/UX specialist | Gemini |
| `document-writer` | Documentation | Gemini |
**Common recipes:**
- Explain code: `explore`
- Small fix with known location: `develop` directly
- Bug fix (location unknown): `explore → develop`
- Cross-module refactor: `explore → oracle → develop`
---
### SPARV Workflow
Minimal 5-phase workflow: Specify → Plan → Act → Review → Vault.
```bash
/sparv "Implement order export"
```
**Core rules:**
- **10-point spec gate**: scored 0-10; must be >=9 to enter Plan
- **2-action save**: write to journal.md every 2 tool calls
- **3-failure protocol**: stop and escalate after 3 consecutive failures
- **EHRB**: high-risk operations require explicit confirmation
**Scoring dimensions (0-2 points each):**
1. Value - Why it is worth doing; verifiable benefit
2. Scope - MVP plus what is out of scope
3. Acceptance - Testable acceptance criteria
4. Boundaries - Error/performance/compatibility/security boundaries
5. Risk - EHRB/dependencies/unknowns and how they are handled
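The spec gate above can be expressed as a tiny scoring function. A minimal sketch, assuming each dimension is scored 0-2 and the pass threshold is 9 as stated; the function is illustrative, not part of the SPARV implementation:

```go
package main

import "fmt"

// specScore sums the five SPARV dimensions (each 0-2) and reports
// whether the spec clears the >=9 gate required to enter Plan.
func specScore(value, scope, acceptance, boundaries, risk int) (total int, pass bool) {
	total = value + scope + acceptance + boundaries + risk
	return total, total >= 9
}

func main() {
	// A spec that is strong everywhere except risk handling.
	total, pass := specScore(2, 2, 2, 2, 1)
	fmt.Printf("score=%d pass=%v\n", total, pass) // score=9 pass=true
}
```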
---
### BMAD Agile Workflow
Full enterprise agile methodology with 6 specialized agents.
```bash
/bmad-pilot "Build an e-commerce checkout system"
@@ -104,43 +143,36 @@ python3 install.py --install-dir ~/.claude
|-------|------|
| Product Owner | Requirements and user stories |
| Architect | System design and technical decisions |
| Tech Lead | Sprint planning and task breakdown |
| Scrum Master | Sprint planning and task breakdown |
| Developer | Implementation |
| Code Reviewer | Quality assurance |
| QA Engineer | Testing and validation |
**Process**
```
Requirements → Architecture → Sprint Plan → Development → Review → QA
     ↓             ↓              ↓             ↓            ↓       ↓
   PRD.md      DESIGN.md      SPRINT.md       Code     REVIEW.md  TEST.md
```
**Best for:** large features, team collaboration, enterprise projects
**Approval gates**
- User approval required after the PRD is complete (90+ score)
- User approval required after the architecture is complete (90+ score)
---
### 3. Requirements-Driven Workflow
### Requirements-Driven Workflow
**Lightweight requirements-to-code pipeline.**
Lightweight requirements-to-code pipeline.
```bash
/requirements-pilot "Implement API rate limiting"
```
**Process**
1. Requirements generation with quality scoring
2. Implementation planning
3. Code generation
4. Review and testing
**Best for:** rapid prototyping, well-defined features
**100-point quality score**
- Functional clarity: 30 points
- Technical specificity: 25 points
- Implementation completeness: 25 points
- Business context: 20 points
---
### 4. Development Essentials
### Development Essentials
**Direct commands for everyday coding tasks.**
Direct commands for everyday coding tasks.
| Command | Purpose |
|------|------|
@@ -152,16 +184,12 @@ PRD.md DESIGN.md SPRINT.md Code REVIEW.md TEST.md
| `/refactor` | Code refactoring |
| `/docs` | Write documentation |
**Best for:** quick tasks without workflow overhead
---
## Installation
### Modular Installation (Recommended)
```bash
# Install all enabled modules (default: dev + essentials)
# Install all enabled modules
python3 install.py --install-dir ~/.claude
# Install a specific module
@@ -170,314 +198,77 @@ python3 install.py --module dev
# List available modules
python3 install.py --list-modules
# Force-overwrite existing files
# Force overwrite
python3 install.py --force
```
### Available Modules
### Module Configuration
| Module | Default | Description |
|------|------|------|
| `dev` | ✓ enabled | Dev workflow with Codex integration |
| `essentials` | ✓ enabled | Core development commands |
| `bmad` | disabled | Full BMAD agile workflow |
| `requirements` | disabled | Requirements-driven workflow |
### What Gets Installed
```
~/.claude/
├── bin/
│   └── codeagent-wrapper        # Main executable
├── CLAUDE.md                    # Core instructions and role definitions
├── commands/                    # Slash commands (/dev, /code, etc.)
├── agents/                      # Agent definitions
├── skills/
│   └── codex/
│       └── SKILL.md             # Codex integration skill
├── config.json                  # Configuration file
└── installed_modules.json       # Installation state
```
### Custom Install Directory
By default, myclaude installs to `~/.claude`. You can customize the install directory with the `INSTALL_DIR` environment variable:
```bash
# Install to a custom directory
INSTALL_DIR=/opt/myclaude bash install.sh
# Update your PATH accordingly
export PATH="/opt/myclaude/bin:$PATH"
```
**Directory layout:**
- `$INSTALL_DIR/bin/` - codeagent-wrapper executable
- `$INSTALL_DIR/skills/` - Skill definitions
- `$INSTALL_DIR/config.json` - Configuration file
- `$INSTALL_DIR/commands/` - Slash command definitions
- `$INSTALL_DIR/agents/` - Agent definitions
**Note:** When using a custom install directory, make sure `$INSTALL_DIR/bin` is on your `PATH`.
### Configuration
Edit `config.json` to customize:
Edit `config.json` to enable/disable modules:
```json
{
"version": "1.0",
"install_dir": "~/.claude",
"modules": {
"dev": {
"enabled": true,
"operations": [
{"type": "merge_dir", "source": "dev-workflow"},
{"type": "copy_file", "source": "memorys/CLAUDE.md", "target": "CLAUDE.md"},
{"type": "copy_file", "source": "skills/codex/SKILL.md", "target": "skills/codex/SKILL.md"},
{"type": "run_command", "command": "bash install.sh"}
]
}
"dev": { "enabled": true },
"bmad": { "enabled": false },
"requirements": { "enabled": false },
"essentials": { "enabled": false },
"omo": { "enabled": false },
"sparv": { "enabled": false },
"do": { "enabled": false },
"course": { "enabled": false }
}
}
```
**Operation types:**
| Type | Description |
|------|-------------|
| `merge_dir` | Merge a subdirectory (commands/, agents/) into the install directory |
| `copy_dir` | Copy an entire directory |
| `copy_file` | Copy a single file to the target path |
| `run_command` | Execute a shell command |
---
## Codex Integration
The `codex` skill lets Claude Code delegate code execution to the Codex CLI.
### Usage in Workflows
```bash
# Invoke Codex via the skill
codeagent-wrapper - <<'EOF'
Implement JWT validation in @src/auth.ts
EOF
```
### Parallel Execution
```bash
codeagent-wrapper --parallel <<'EOF'
---TASK---
id: backend_api
workdir: /project/backend
---CONTENT---
Implement REST endpoints for /api/users
---TASK---
id: frontend_ui
workdir: /project/frontend
dependencies: backend_api
---CONTENT---
Create React components that consume the API
EOF
```
### Installing the Codex Wrapper
```bash
# Automatic (via the dev module)
python3 install.py --module dev
# Manual
bash install.sh
```
#### Windows
On Windows, `codeagent-wrapper.exe` is installed to `%USERPROFILE%\bin`:
```powershell
# PowerShell (recommended)
powershell -ExecutionPolicy Bypass -File install.ps1
# Batch (cmd)
install.bat
```
**Add to PATH** (if the installer did not do so automatically):
```powershell
# PowerShell - permanent (current user)
[Environment]::SetEnvironmentVariable('PATH', "$HOME\bin;" + [Environment]::GetEnvironmentVariable('PATH','User'), 'User')
# PowerShell - current session only
$Env:PATH = "$HOME\bin;$Env:PATH"
```
```batch
REM cmd.exe - permanent (current user) (the PowerShell method above is preferred)
REM Warning: this command expands %PATH% to include the system PATH, causing duplicates
REM Note: uses reg add instead of setx to avoid the 1024-character truncation limit
reg add "HKCU\Environment" /v Path /t REG_EXPAND_SZ /d "%USERPROFILE%\bin;%PATH%" /f
```
---
## Workflow Selection Guide
| Scenario | Recommended workflow |
|------|----------|
| New feature with tests | `/dev` |
| Quick bug fix | `/debug` or `/code` |
| Large multi-sprint feature | `/bmad-pilot` |
| Prototype or POC | `/requirements-pilot` |
| Code review | `/review` |
| Performance issue | `/optimize` |
| Scenario | Recommended |
|------|------|
| Feature development (default) | `/do` |
| Lightweight feature | `/dev` |
| Bug investigation + fix | `/omo` |
| Large enterprise project | `/bmad-pilot` |
| Rapid prototyping | `/requirements-pilot` |
| Simple tasks | `/code`, `/debug` |
---
## Backend CLI Requirements
| Backend | Required features |
|------|----------|
| Codex | `codex e`, `--json`, `-C`, `resume` |
| Claude | `--output-format stream-json`, `-r` |
| Gemini | `-o stream-json`, `-y`, `-r` |
## Troubleshooting
### Common Issues
**Codex wrapper not found:**
```bash
# The installer configures PATH automatically; check whether it did
if [[ ":$PATH:" != *":$HOME/.claude/bin:"* ]]; then
  echo "PATH not configured. Reinstalling..."
  bash install.sh
fi
# Or add it manually (idempotent)
[[ ":$PATH:" != *":$HOME/.claude/bin:"* ]] && echo 'export PATH="$HOME/.claude/bin:$PATH"' >> ~/.zshrc
```
**Permission denied:**
```bash
python3 install.py --install-dir ~/.claude --force
bash install.sh
```
**Module not loading:**
```bash
# Check installation state
cat ~/.claude/installed_modules.json
# Reinstall a specific module
python3 install.py --module dev --force
python3 install.py --module <name> --force
```
---
## FAQ
## FAQ
| Problem | Solution |
|------|----------|
| "Unknown event format" | Log display issue; safe to ignore |
| Gemini cannot read .gitignore'd files | Remove from .gitignore or use another backend |
| `/dev` runs slowly | Check logs, try a faster model, use a single repository |
| Codex permission denied | Set `approval_policy = "never"` in ~/.codex/config.toml |
### Q1: `codeagent-wrapper` fails with "Unknown event format"
**Problem:**
Running `codeagent-wrapper` produces errors:
```
Unknown event format: {"type":"turn.started"}
Unknown event format: {"type":"assistant", ...}
```
**Solution:**
This is a display issue in the log event stream and does not affect actual execution. A fix is expected in the next release; you can ignore this log output while troubleshooting other problems.
**Related issue:** [#96](https://github.com/cexll/myclaude/issues/96)
---
### Q2: Gemini cannot read files ignored by `.gitignore`
**Problem:**
With `codeagent-wrapper --backend gemini`, files in directories ignored by `.gitignore`, such as `.claude/`, cannot be read.
**Solution:**
- **Option 1:** Stop ignoring `.claude/` in the project's `.gitignore`
- **Option 2:** Make sure the files to be read are not on the `.gitignore` list
**Related issue:** [#75](https://github.com/cexll/myclaude/issues/75)
---
### Q3: `/dev` parallel execution is very slow
**Problem:**
Developing a simple feature with `/dev` takes too long (over 30 minutes), with no visibility into task status.
**Solution:**
1. **Check logs:** review `C:\Users\<username>\AppData\Local\Temp\codeagent-wrapper-*.log` to find bottlenecks
2. **Adjust the backend:**
- Try a faster model such as `gpt-5.1-codex-max`
- Running under WSL may be noticeably faster
3. **Workspace:** use a standalone repository rather than a monorepo with many sub-projects
**Related issue:** [#77](https://github.com/cexll/myclaude/issues/77)
---
### Q4: Permission errors with the new Go-based Codex
**Problem:**
After upgrading to the Go-based Codex implementation, execution fails with permission errors.
**Solution:**
Add the following configuration to `~/.codex/config.toml` (Windows: `C:\Users\<username>\.codex\config.toml`):
```toml
model = "gpt-5.1-codex-max"
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"
approval_policy = "never"
sandbox_mode = "workspace-write"
disable_response_storage = true
network_access = true
```
**Key settings:**
- `approval_policy = "never"` - removes approval restrictions
- `sandbox_mode = "workspace-write"` - allows workspace writes
- `network_access = true` - enables network access
**Related issue:** [#31](https://github.com/cexll/myclaude/issues/31)
---
### Q5: Permission denied or sandbox restrictions at runtime
**Problem:**
Running codeagent-wrapper hits permission errors or sandbox restrictions.
**Solution:**
Set the following environment variables:
```bash
export CODEX_BYPASS_SANDBOX=true
export CODEAGENT_SKIP_PERMISSIONS=true
```
Or add them to your shell profile (`~/.zshrc` or `~/.bashrc`):
```bash
echo 'export CODEX_BYPASS_SANDBOX=true' >> ~/.zshrc
echo 'export CODEAGENT_SKIP_PERMISSIONS=true' >> ~/.zshrc
```
**Note:** These settings bypass safety restrictions; use them only in trusted environments.
---
**Still have questions?** Search or file an issue on [GitHub Issues](https://github.com/cexll/myclaude/issues).
---
For more, see [GitHub Issues](https://github.com/cexll/myclaude/issues).
## License
AGPL-3.0 License - see [LICENSE](LICENSE)
AGPL-3.0 - see [LICENSE](LICENSE)
## Support
- **Issues**: [GitHub Issues](https://github.com/cexll/myclaude/issues)
- **Documentation**: [docs/](docs/)
---
**Claude Code + Codex = Better Development** - Orchestration meets execution.
- [GitHub Issues](https://github.com/cexll/myclaude/issues)
- [Documentation](docs/)

@@ -0,0 +1,109 @@
# bmad - BMAD Agile Workflow
Full enterprise agile methodology with 6 specialized agents, UltraThink analysis, and repository-aware development.
## Installation
```bash
python install.py --module bmad
```
## Usage
```bash
/bmad-pilot <PROJECT_DESCRIPTION> [OPTIONS]
```
### Options
| Option | Description |
|--------|-------------|
| `--skip-tests` | Skip QA testing phase |
| `--direct-dev` | Skip SM planning, go directly to development |
| `--skip-scan` | Skip initial repository scanning |
## Workflow Phases
| Phase | Agent | Deliverable | Description |
|-------|-------|-------------|-------------|
| 0 | Orchestrator | `00-repo-scan.md` | Repository scanning with UltraThink analysis |
| 1 | Product Owner (PO) | `01-product-requirements.md` | PRD with 90+ quality score required |
| 2 | Architect | `02-system-architecture.md` | Technical design with 90+ score required |
| 3 | Scrum Master (SM) | `03-sprint-plan.md` | Sprint backlog with stories and estimates |
| 4 | Developer | Implementation code | Multi-sprint implementation |
| 4.5 | Reviewer | `04-dev-reviewed.md` | Code review (Pass/Pass with Risk/Fail) |
| 5 | QA Engineer | Test suite | Comprehensive testing and validation |
## Agents
| Agent | Role |
|-------|------|
| `bmad-orchestrator` | Repository scanning, workflow coordination |
| `bmad-po` | Requirements gathering, PRD creation |
| `bmad-architect` | System design, technology decisions |
| `bmad-sm` | Sprint planning, task breakdown |
| `bmad-dev` | Code implementation |
| `bmad-review` | Code review, quality assessment |
| `bmad-qa` | Testing, validation |
## Approval Gates
Two mandatory stop points require explicit user approval:
1. **After PRD** (Phase 1 → 2): User must approve requirements before architecture
2. **After Architecture** (Phase 2 → 3): User must approve design before implementation
## Output Structure
```
.claude/specs/{feature_name}/
├── 00-repo-scan.md
├── 01-product-requirements.md
├── 02-system-architecture.md
├── 03-sprint-plan.md
└── 04-dev-reviewed.md
```
## UltraThink Methodology
Applied throughout the workflow for deep analysis:
1. **Hypothesis Generation** - Form hypotheses about the problem
2. **Evidence Collection** - Gather evidence from codebase
3. **Pattern Recognition** - Identify recurring patterns
4. **Synthesis** - Create comprehensive understanding
5. **Validation** - Cross-check findings
## Interactive Confirmation Flow
PO and Architect phases use iterative refinement:
1. Agent produces initial draft + quality score
2. Orchestrator presents to user with clarification questions
3. User provides responses
4. Agent refines until quality >= 90
5. User confirms to save deliverable
## When to Use
- Large multi-sprint features
- Enterprise projects requiring documentation
- Team coordination scenarios
- Projects needing formal approval gates
## Directory Structure
```
bmad-agile-workflow/
├── README.md
├── commands/
│ └── bmad-pilot.md
└── agents/
├── bmad-orchestrator.md
├── bmad-po.md
├── bmad-architect.md
├── bmad-sm.md
├── bmad-dev.md
├── bmad-review.md
└── bmad-qa.md
```

@@ -0,0 +1,39 @@
name: CI
on:
push:
branches: [main, master]
pull_request:
permissions:
contents: read
jobs:
test:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
go-version: ["1.21", "1.22"]
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go-version }}
cache: true
- name: Test
run: make test
- name: Build
run: make build
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: "1.22"
cache: true
- name: Lint
run: make lint

@@ -1,4 +1,7 @@
# Build artifacts
bin/
codeagent
codeagent.exe
codeagent-wrapper
codeagent-wrapper.exe
*.test
@@ -9,3 +12,12 @@ coverage*.out
cover.out
cover_*.out
coverage.html
# Logs
*.log
# Temp files
*.tmp
*.swp
*~
.DS_Store

@@ -0,0 +1,37 @@
GO ?= go
TOOLS_BIN := $(CURDIR)/bin
TOOLCHAIN ?= go1.22.0
GOLANGCI_LINT_VERSION := v1.56.2
STATICCHECK_VERSION := v0.4.7
GOLANGCI_LINT := $(TOOLS_BIN)/golangci-lint
STATICCHECK := $(TOOLS_BIN)/staticcheck
.PHONY: build test lint clean install
build:
$(GO) build -o codeagent ./cmd/codeagent
$(GO) build -o codeagent-wrapper ./cmd/codeagent-wrapper
test:
$(GO) test ./...
$(GOLANGCI_LINT):
@mkdir -p $(TOOLS_BIN)
GOTOOLCHAIN=$(TOOLCHAIN) GOBIN=$(TOOLS_BIN) $(GO) install github.com/golangci/golangci-lint/cmd/golangci-lint@$(GOLANGCI_LINT_VERSION)
$(STATICCHECK):
@mkdir -p $(TOOLS_BIN)
GOTOOLCHAIN=$(TOOLCHAIN) GOBIN=$(TOOLS_BIN) $(GO) install honnef.co/go/tools/cmd/staticcheck@$(STATICCHECK_VERSION)
lint: $(GOLANGCI_LINT) $(STATICCHECK)
GOTOOLCHAIN=$(TOOLCHAIN) $(GOLANGCI_LINT) run ./...
GOTOOLCHAIN=$(TOOLCHAIN) $(STATICCHECK) ./...
clean:
@python3 -c 'import glob, os; paths=["codeagent","codeagent.exe","codeagent-wrapper","codeagent-wrapper.exe","coverage.out","cover.out","coverage.html"]; paths += glob.glob("coverage*.out") + glob.glob("cover_*.out") + glob.glob("*.test"); [os.remove(p) for p in paths if os.path.exists(p)]'
install:
$(GO) install ./cmd/codeagent
$(GO) install ./cmd/codeagent-wrapper

codeagent-wrapper/README.md (new file, 152 lines)
@@ -0,0 +1,152 @@
# codeagent-wrapper
`codeagent-wrapper` is a multi-backend AI code-agent CLI wrapper written in Go: it wraps different AI tool backends (Codex / Claude / Gemini / Opencode) behind a single CLI entry point and provides consistent flags, configuration, and session-resume behavior.
Entry points: `cmd/codeagent/main.go` (builds the `codeagent` binary) and `cmd/codeagent-wrapper/main.go` (builds the `codeagent-wrapper` binary). Both behave identically.
## Features
- Multi-backend support: `codex` / `claude` / `gemini` / `opencode`
- Unified CLI: `codeagent [flags] <task>` / `codeagent resume <session_id> <task> [workdir]`
- Automatic stdin: tasks containing newlines or special characters, or overly long tasks, are passed via stdin automatically to avoid shell-quoting headaches; `-` forces stdin explicitly
- Config merging: supports config files and `CODEAGENT_*` environment variables (via viper)
- Agent presets: reads backend/model/prompt presets from `~/.codeagent/models.json`
- Parallel execution: `--parallel` reads a multi-task config from stdin and runs tasks concurrently, respecting dependency topology
- Log cleanup: `codeagent cleanup` removes old logs (logs are written to the system temp directory)
## Installation
Requires Go 1.21+.
From the repository root:
```bash
go install ./cmd/codeagent
go install ./cmd/codeagent-wrapper
```
Verify the installation:
```bash
codeagent version
codeagent-wrapper version
```
## Usage Examples
Simplest usage (default backend: `codex`):
```bash
codeagent "Analyze the entry logic of internal/app/cli.go and suggest improvements"
```
Choose a backend:
```bash
codeagent --backend claude "Explain the parallel config format in internal/executor/parallel_config.go"
```
Specify a working directory (second positional argument):
```bash
codeagent "Search this repo for potential data races" .
```
Read the task from stdin explicitly (`-`):
```bash
cat task.txt | codeagent -
```
Resume a session:
```bash
codeagent resume <session_id> "Continue the previous task"
```
Parallel mode (task config read from stdin; positional arguments not allowed):
```bash
codeagent --parallel <<'EOF'
---TASK---
id: t1
workdir: .
backend: codex
---CONTENT---
List this project's main modules and their responsibilities.
---TASK---
id: t2
dependencies: t1
backend: claude
---CONTENT---
Based on t1's conclusions, identify refactoring risks and recommendations.
EOF
```
## Configuration
### Config File
Default search path (when `--config` is empty):
- `$HOME/.codeagent/config.(yaml|yml|json|toml|...)`
Example (YAML):
```yaml
backend: codex
model: gpt-4.1
skip-permissions: false
```
You can also pass `--config /path/to/config.yaml` explicitly.
### Environment Variables (`CODEAGENT_*`)
Read via viper, with `-` automatically mapped to `_`. Common variables:
- `CODEAGENT_BACKEND`: `codex|claude|gemini|opencode`
- `CODEAGENT_MODEL`
- `CODEAGENT_AGENT`
- `CODEAGENT_PROMPT_FILE`
- `CODEAGENT_REASONING_EFFORT`
- `CODEAGENT_SKIP_PERMISSIONS`
- `CODEAGENT_FULL_OUTPUT` (legacy output for parallel mode)
- `CODEAGENT_MAX_PARALLEL_WORKERS` (0 means unlimited, capped at 100)
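The `-` to `_` mapping means a flag such as `--skip-permissions` is read from `CODEAGENT_SKIP_PERMISSIONS`. A minimal sketch of that lookup, for illustration only; the wrapper itself does this through viper:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// envKeyFor converts a CLI flag name to its CODEAGENT_* environment variable,
// mirroring viper's usual key replacer (dash becomes underscore, uppercased).
func envKeyFor(flag string) string {
	return "CODEAGENT_" + strings.ToUpper(strings.ReplaceAll(flag, "-", "_"))
}

func main() {
	os.Setenv("CODEAGENT_SKIP_PERMISSIONS", "false")
	key := envKeyFor("skip-permissions")
	fmt.Println(key, "=", os.Getenv(key)) // CODEAGENT_SKIP_PERMISSIONS = false
}
```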
### Agent Presets (`~/.codeagent/models.json`)
Define agent → backend/model/prompt mappings in `~/.codeagent/models.json` and select one with `--agent <name>`:
```json
{
"default_backend": "opencode",
"default_model": "opencode/grok-code",
"agents": {
"develop": {
"backend": "codex",
"model": "gpt-4.1",
"prompt_file": "~/.codeagent/prompts/develop.md",
"description": "Code development"
}
}
}
```
## Supported Backends
The project has no built-in model capability; it relies on the corresponding CLIs being installed and discoverable on `PATH`:
- `codex`: runs `codex e ...` (adds `--dangerously-bypass-approvals-and-sandbox` by default; set `CODEX_BYPASS_SANDBOX=false` to disable)
- `claude`: runs `claude -p ... --output-format stream-json` (skips permission prompts by default; set `CODEAGENT_SKIP_PERMISSIONS=false` to re-enable them)
- `gemini`: runs `gemini ... -o stream-json` (can load environment variables from `~/.gemini/.env`)
- `opencode`: runs `opencode run --format json`
## Development
```bash
make build
make test
make lint
make clean
```

@@ -1,79 +0,0 @@
package main
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
)
type AgentModelConfig struct {
Backend string `json:"backend"`
Model string `json:"model"`
PromptFile string `json:"prompt_file,omitempty"`
Description string `json:"description,omitempty"`
Yolo bool `json:"yolo,omitempty"`
Reasoning string `json:"reasoning,omitempty"`
}
type ModelsConfig struct {
DefaultBackend string `json:"default_backend"`
DefaultModel string `json:"default_model"`
Agents map[string]AgentModelConfig `json:"agents"`
}
var defaultModelsConfig = ModelsConfig{
DefaultBackend: "opencode",
DefaultModel: "opencode/grok-code",
Agents: map[string]AgentModelConfig{
"oracle": {Backend: "claude", Model: "claude-sonnet-4-20250514", PromptFile: "~/.claude/skills/omo/references/oracle.md", Description: "Technical advisor"},
"librarian": {Backend: "claude", Model: "claude-sonnet-4-5-20250514", PromptFile: "~/.claude/skills/omo/references/librarian.md", Description: "Researcher"},
"explore": {Backend: "opencode", Model: "opencode/grok-code", PromptFile: "~/.claude/skills/omo/references/explore.md", Description: "Code search"},
"develop": {Backend: "codex", Model: "", PromptFile: "~/.claude/skills/omo/references/develop.md", Description: "Code development"},
"frontend-ui-ux-engineer": {Backend: "gemini", Model: "", PromptFile: "~/.claude/skills/omo/references/frontend-ui-ux-engineer.md", Description: "Frontend engineer"},
"document-writer": {Backend: "gemini", Model: "", PromptFile: "~/.claude/skills/omo/references/document-writer.md", Description: "Documentation"},
},
}
func loadModelsConfig() *ModelsConfig {
home, err := os.UserHomeDir()
if err != nil {
logWarn(fmt.Sprintf("Failed to resolve home directory for models config: %v; using defaults", err))
return &defaultModelsConfig
}
configPath := filepath.Join(home, ".codeagent", "models.json")
data, err := os.ReadFile(configPath)
if err != nil {
if !os.IsNotExist(err) {
logWarn(fmt.Sprintf("Failed to read models config %s: %v; using defaults", configPath, err))
}
return &defaultModelsConfig
}
var cfg ModelsConfig
if err := json.Unmarshal(data, &cfg); err != nil {
logWarn(fmt.Sprintf("Failed to parse models config %s: %v; using defaults", configPath, err))
return &defaultModelsConfig
}
// Merge with defaults
for name, agent := range defaultModelsConfig.Agents {
if _, exists := cfg.Agents[name]; !exists {
if cfg.Agents == nil {
cfg.Agents = make(map[string]AgentModelConfig)
}
cfg.Agents[name] = agent
}
}
return &cfg
}
func resolveAgentConfig(agentName string) (backend, model, promptFile, reasoning string, yolo bool) {
cfg := loadModelsConfig()
if agent, ok := cfg.Agents[agentName]; ok {
return agent.Backend, agent.Model, agent.PromptFile, agent.Reasoning, agent.Yolo
}
return cfg.DefaultBackend, cfg.DefaultModel, "", "", false
}

@@ -1,237 +0,0 @@
package main
import (
"encoding/json"
"os"
"path/filepath"
"strings"
)
// Backend defines the contract for invoking different AI CLI backends.
// Each backend is responsible for supplying the executable command and
// building the argument list based on the wrapper config.
type Backend interface {
Name() string
BuildArgs(cfg *Config, targetArg string) []string
Command() string
}
type CodexBackend struct{}
func (CodexBackend) Name() string { return "codex" }
func (CodexBackend) Command() string {
return "codex"
}
func (CodexBackend) BuildArgs(cfg *Config, targetArg string) []string {
return buildCodexArgs(cfg, targetArg)
}
type ClaudeBackend struct{}
func (ClaudeBackend) Name() string { return "claude" }
func (ClaudeBackend) Command() string {
return "claude"
}
func (ClaudeBackend) BuildArgs(cfg *Config, targetArg string) []string {
return buildClaudeArgs(cfg, targetArg)
}
const maxClaudeSettingsBytes = 1 << 20 // 1MB
type minimalClaudeSettings struct {
Env map[string]string
Model string
}
// loadMinimalClaudeSettings extracts only a safe minimal subset from ~/.claude/settings.json:
// - env: only string-typed values are accepted
// - model: only a string-typed value is accepted
// Returns an empty struct if the file is missing, fails to parse, or exceeds the size limit.
func loadMinimalClaudeSettings() minimalClaudeSettings {
home, err := os.UserHomeDir()
if err != nil || home == "" {
return minimalClaudeSettings{}
}
settingPath := filepath.Join(home, ".claude", "settings.json")
info, err := os.Stat(settingPath)
if err != nil || info.Size() > maxClaudeSettingsBytes {
return minimalClaudeSettings{}
}
data, err := os.ReadFile(settingPath)
if err != nil {
return minimalClaudeSettings{}
}
var cfg struct {
Env map[string]any `json:"env"`
Model any `json:"model"`
}
if err := json.Unmarshal(data, &cfg); err != nil {
return minimalClaudeSettings{}
}
out := minimalClaudeSettings{}
if model, ok := cfg.Model.(string); ok {
out.Model = strings.TrimSpace(model)
}
if len(cfg.Env) == 0 {
return out
}
env := make(map[string]string, len(cfg.Env))
for k, v := range cfg.Env {
s, ok := v.(string)
if !ok {
continue
}
env[k] = s
}
if len(env) == 0 {
return out
}
out.Env = env
return out
}
// loadMinimalEnvSettings is kept for backwards tests; prefer loadMinimalClaudeSettings.
func loadMinimalEnvSettings() map[string]string {
settings := loadMinimalClaudeSettings()
if len(settings.Env) == 0 {
return nil
}
return settings.Env
}
// loadGeminiEnv loads environment variables from ~/.gemini/.env
// Supports GEMINI_API_KEY, GEMINI_MODEL, GOOGLE_GEMINI_BASE_URL
// Also sets GEMINI_API_KEY_AUTH_MECHANISM=bearer for third-party API compatibility
func loadGeminiEnv() map[string]string {
home, err := os.UserHomeDir()
if err != nil || home == "" {
return nil
}
envPath := filepath.Join(home, ".gemini", ".env")
data, err := os.ReadFile(envPath)
if err != nil {
return nil
}
env := make(map[string]string)
for _, line := range strings.Split(string(data), "\n") {
line = strings.TrimSpace(line)
if line == "" || strings.HasPrefix(line, "#") {
continue
}
idx := strings.IndexByte(line, '=')
if idx <= 0 {
continue
}
key := strings.TrimSpace(line[:idx])
value := strings.TrimSpace(line[idx+1:])
if key != "" && value != "" {
env[key] = value
}
}
// Set bearer auth mechanism for third-party API compatibility
if _, ok := env["GEMINI_API_KEY"]; ok {
if _, hasAuth := env["GEMINI_API_KEY_AUTH_MECHANISM"]; !hasAuth {
env["GEMINI_API_KEY_AUTH_MECHANISM"] = "bearer"
}
}
if len(env) == 0 {
return nil
}
return env
}
func buildClaudeArgs(cfg *Config, targetArg string) []string {
if cfg == nil {
return nil
}
args := []string{"-p"}
// Default to skip permissions unless CODEAGENT_SKIP_PERMISSIONS=false
if cfg.SkipPermissions || cfg.Yolo || envFlagDefaultTrue("CODEAGENT_SKIP_PERMISSIONS") {
args = append(args, "--dangerously-skip-permissions")
}
// Prevent infinite recursion: disable all setting sources (user, project, local)
// This ensures a clean execution environment without CLAUDE.md or skills that would trigger codeagent
args = append(args, "--setting-sources", "")
if model := strings.TrimSpace(cfg.Model); model != "" {
args = append(args, "--model", model)
}
if cfg.Mode == "resume" {
if cfg.SessionID != "" {
// Claude CLI uses -r <session_id> for resume.
args = append(args, "-r", cfg.SessionID)
}
}
// Note: claude CLI doesn't support -C flag; workdir set via cmd.Dir
args = append(args, "--output-format", "stream-json", "--verbose", targetArg)
return args
}
type GeminiBackend struct{}
func (GeminiBackend) Name() string { return "gemini" }
func (GeminiBackend) Command() string {
return "gemini"
}
func (GeminiBackend) BuildArgs(cfg *Config, targetArg string) []string {
return buildGeminiArgs(cfg, targetArg)
}
type OpencodeBackend struct{}
func (OpencodeBackend) Name() string { return "opencode" }
func (OpencodeBackend) Command() string { return "opencode" }
func (OpencodeBackend) BuildArgs(cfg *Config, targetArg string) []string {
args := []string{"run"}
if model := strings.TrimSpace(cfg.Model); model != "" {
args = append(args, "-m", model)
}
if cfg.Mode == "resume" && cfg.SessionID != "" {
args = append(args, "-s", cfg.SessionID)
}
args = append(args, "--format", "json", targetArg)
return args
}
func buildGeminiArgs(cfg *Config, targetArg string) []string {
if cfg == nil {
return nil
}
args := []string{"-o", "stream-json", "-y"}
if model := strings.TrimSpace(cfg.Model); model != "" {
args = append(args, "-m", model)
}
if cfg.Mode == "resume" {
if cfg.SessionID != "" {
args = append(args, "-r", cfg.SessionID)
}
}
// Note: gemini CLI doesn't support -C flag; workdir set via cmd.Dir
// Use positional argument instead of deprecated -p flag
// For stdin mode ("-"), use -p to read from stdin
if targetArg == "-" {
args = append(args, "-p", targetArg)
} else {
args = append(args, targetArg)
}
return args
}

@@ -1,39 +0,0 @@
package main
import (
"testing"
)
// BenchmarkLoggerWrite measures log write performance
func BenchmarkLoggerWrite(b *testing.B) {
logger, err := NewLogger()
if err != nil {
b.Fatal(err)
}
defer logger.Close()
b.ResetTimer()
for i := 0; i < b.N; i++ {
logger.Info("benchmark log message")
}
b.StopTimer()
logger.Flush()
}
// BenchmarkLoggerConcurrentWrite measures concurrent log write performance
func BenchmarkLoggerConcurrentWrite(b *testing.B) {
logger, err := NewLogger()
if err != nil {
b.Fatal(err)
}
defer logger.Close()
b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
logger.Info("concurrent benchmark log message")
}
})
b.StopTimer()
logger.Flush()
}

@@ -0,0 +1,7 @@
package main
import app "codeagent-wrapper/internal/app"
func main() {
app.Run()
}

@@ -1,473 +0,0 @@
package main
import (
"bytes"
"context"
"fmt"
"os"
"strconv"
"strings"
)
// Config holds CLI configuration
type Config struct {
Mode string // "new" or "resume"
Task string
SessionID string
WorkDir string
Model string
ReasoningEffort string
ExplicitStdin bool
Timeout int
Backend string
Agent string
PromptFile string
PromptFileExplicit bool
SkipPermissions bool
Yolo bool
MaxParallelWorkers int
}
// ParallelConfig defines the JSON schema for parallel execution
type ParallelConfig struct {
Tasks []TaskSpec `json:"tasks"`
GlobalBackend string `json:"backend,omitempty"`
}
// TaskSpec describes an individual task entry in the parallel config
type TaskSpec struct {
ID string `json:"id"`
Task string `json:"task"`
WorkDir string `json:"workdir,omitempty"`
Dependencies []string `json:"dependencies,omitempty"`
SessionID string `json:"session_id,omitempty"`
Backend string `json:"backend,omitempty"`
Model string `json:"model,omitempty"`
ReasoningEffort string `json:"reasoning_effort,omitempty"`
Agent string `json:"agent,omitempty"`
PromptFile string `json:"prompt_file,omitempty"`
SkipPermissions bool `json:"skip_permissions,omitempty"`
Mode string `json:"-"`
UseStdin bool `json:"-"`
Context context.Context `json:"-"`
}
// TaskResult captures the execution outcome of a task
type TaskResult struct {
TaskID string `json:"task_id"`
ExitCode int `json:"exit_code"`
Message string `json:"message"`
SessionID string `json:"session_id"`
Error string `json:"error"`
LogPath string `json:"log_path"`
// Structured report fields
Coverage string `json:"coverage,omitempty"` // extracted coverage percentage (e.g., "92%")
CoverageNum float64 `json:"coverage_num,omitempty"` // numeric coverage for comparison
CoverageTarget float64 `json:"coverage_target,omitempty"` // target coverage (default 90)
FilesChanged []string `json:"files_changed,omitempty"` // list of changed files
KeyOutput string `json:"key_output,omitempty"` // brief summary of what was done
TestsPassed int `json:"tests_passed,omitempty"` // number of tests passed
TestsFailed int `json:"tests_failed,omitempty"` // number of tests failed
sharedLog bool
}
var backendRegistry = map[string]Backend{
"codex": CodexBackend{},
"claude": ClaudeBackend{},
"gemini": GeminiBackend{},
"opencode": OpencodeBackend{},
}
func selectBackend(name string) (Backend, error) {
key := strings.ToLower(strings.TrimSpace(name))
if key == "" {
key = defaultBackendName
}
if backend, ok := backendRegistry[key]; ok {
return backend, nil
}
return nil, fmt.Errorf("unsupported backend %q", name)
}
func envFlagEnabled(key string) bool {
val, ok := os.LookupEnv(key)
if !ok {
return false
}
val = strings.TrimSpace(strings.ToLower(val))
switch val {
case "", "0", "false", "no", "off":
return false
default:
return true
}
}
func parseBoolFlag(val string, defaultValue bool) bool {
val = strings.TrimSpace(strings.ToLower(val))
switch val {
case "1", "true", "yes", "on":
return true
case "0", "false", "no", "off":
return false
default:
return defaultValue
}
}
// envFlagDefaultTrue returns true unless the env var is explicitly set to false/0/no/off.
func envFlagDefaultTrue(key string) bool {
val, ok := os.LookupEnv(key)
if !ok {
return true
}
return parseBoolFlag(val, true)
}
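
The three helpers above differ only in how they treat unset or unrecognized values; a standalone sketch of the shared core (`parseBool` here mirrors `parseBoolFlag`):

```go
package main

import (
	"fmt"
	"strings"
)

// parseBool mirrors parseBoolFlag above: recognized truthy/falsy strings
// win; anything else falls back to the caller's default.
func parseBool(val string, def bool) bool {
	switch strings.TrimSpace(strings.ToLower(val)) {
	case "1", "true", "yes", "on":
		return true
	case "0", "false", "no", "off":
		return false
	default:
		return def
	}
}

func main() {
	// envFlagEnabled is parseBool with default false (and empty counted false);
	// envFlagDefaultTrue is parseBool with default true for unset vars.
	for _, v := range []string{"1", "OFF", "yes", "maybe", ""} {
		fmt.Printf("%q => enabled-style=%v default-true-style=%v\n",
			v, parseBool(v, false), parseBool(v, true))
	}
}
```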
func validateAgentName(name string) error {
if strings.TrimSpace(name) == "" {
return fmt.Errorf("agent name is empty")
}
for _, r := range name {
switch {
case r >= 'a' && r <= 'z':
case r >= 'A' && r <= 'Z':
case r >= '0' && r <= '9':
case r == '-', r == '_':
default:
return fmt.Errorf("agent name %q contains invalid character %q", name, r)
}
}
return nil
}
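
The allowed character set above can be illustrated with a few hypothetical names (this sketch mirrors validateAgentName's loop, returning a bool instead of an error):

```go
package main

import "fmt"

// validName mirrors validateAgentName: ASCII letters, digits, '-' and '_' only.
func validName(name string) bool {
	if name == "" {
		return false
	}
	for _, r := range name {
		switch {
		case r >= 'a' && r <= 'z', r >= 'A' && r <= 'Z',
			r >= '0' && r <= '9', r == '-', r == '_':
		default:
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(validName("dev-agent_2")) // true
	fmt.Println(validName("dev agent"))   // false: space is rejected
	fmt.Println(validName("../etc"))      // false: path characters are rejected
}
```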
func parseParallelConfig(data []byte) (*ParallelConfig, error) {
trimmed := bytes.TrimSpace(data)
if len(trimmed) == 0 {
return nil, fmt.Errorf("parallel config is empty")
}
tasks := strings.Split(string(trimmed), "---TASK---")
var cfg ParallelConfig
seen := make(map[string]struct{})
taskIndex := 0
for _, taskBlock := range tasks {
taskBlock = strings.TrimSpace(taskBlock)
if taskBlock == "" {
continue
}
taskIndex++
parts := strings.SplitN(taskBlock, "---CONTENT---", 2)
if len(parts) != 2 {
return nil, fmt.Errorf("task block #%d missing ---CONTENT--- separator", taskIndex)
}
meta := strings.TrimSpace(parts[0])
content := strings.TrimSpace(parts[1])
task := TaskSpec{WorkDir: defaultWorkdir}
agentSpecified := false
for _, line := range strings.Split(meta, "\n") {
line = strings.TrimSpace(line)
if line == "" {
continue
}
kv := strings.SplitN(line, ":", 2)
if len(kv) != 2 {
continue
}
key := strings.TrimSpace(kv[0])
value := strings.TrimSpace(kv[1])
switch key {
case "id":
task.ID = value
case "workdir":
// Validate workdir: "-" is not a valid directory
if value == "-" {
return nil, fmt.Errorf("task block #%d has invalid workdir: '-' is not a valid directory path", taskIndex)
}
task.WorkDir = value
case "session_id":
task.SessionID = value
task.Mode = "resume"
case "backend":
task.Backend = value
case "model":
task.Model = value
case "reasoning_effort":
task.ReasoningEffort = value
case "agent":
agentSpecified = true
task.Agent = value
case "skip_permissions", "skip-permissions":
if value == "" {
task.SkipPermissions = true
continue
}
task.SkipPermissions = parseBoolFlag(value, false)
case "dependencies":
for _, dep := range strings.Split(value, ",") {
dep = strings.TrimSpace(dep)
if dep != "" {
task.Dependencies = append(task.Dependencies, dep)
}
}
}
}
if task.Mode == "" {
task.Mode = "new"
}
if agentSpecified {
if strings.TrimSpace(task.Agent) == "" {
return nil, fmt.Errorf("task block #%d has empty agent field", taskIndex)
}
if err := validateAgentName(task.Agent); err != nil {
return nil, fmt.Errorf("task block #%d invalid agent name: %w", taskIndex, err)
}
backend, model, promptFile, reasoning, _ := resolveAgentConfig(task.Agent)
if task.Backend == "" {
task.Backend = backend
}
if task.Model == "" {
task.Model = model
}
if task.ReasoningEffort == "" {
task.ReasoningEffort = reasoning
}
task.PromptFile = promptFile
}
if task.ID == "" {
return nil, fmt.Errorf("task block #%d missing id field", taskIndex)
}
if content == "" {
return nil, fmt.Errorf("task block #%d (%q) missing content", taskIndex, task.ID)
}
if task.Mode == "resume" && strings.TrimSpace(task.SessionID) == "" {
return nil, fmt.Errorf("task block #%d (%q) has empty session_id", taskIndex, task.ID)
}
if _, exists := seen[task.ID]; exists {
return nil, fmt.Errorf("task block #%d has duplicate id: %s", taskIndex, task.ID)
}
task.Task = content
cfg.Tasks = append(cfg.Tasks, task)
seen[task.ID] = struct{}{}
}
if len(cfg.Tasks) == 0 {
return nil, fmt.Errorf("no tasks found")
}
return &cfg, nil
}
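
The text format parseParallelConfig consumes can be sketched with a hypothetical two-task config; this reproduces only the first splitting stage (blocks on `---TASK---`, then meta vs. content on `---CONTENT---`), not the full field parsing:

```go
package main

import (
	"fmt"
	"strings"
)

// sampleConfig is a hypothetical config: metadata as "key: value" lines,
// the task prompt after ---CONTENT---, blocks separated by ---TASK---.
const sampleConfig = `---TASK---
id: build
workdir: ./service
backend: claude
---CONTENT---
Implement the /health endpoint.
---TASK---
id: test
dependencies: build
---CONTENT---
Add integration tests for /health.`

// splitBlocks mirrors the first parsing stage: one [meta, content] pair per block.
func splitBlocks(config string) [][2]string {
	var out [][2]string
	for _, block := range strings.Split(config, "---TASK---") {
		block = strings.TrimSpace(block)
		if block == "" {
			continue
		}
		parts := strings.SplitN(block, "---CONTENT---", 2)
		if len(parts) != 2 {
			continue
		}
		out = append(out, [2]string{strings.TrimSpace(parts[0]), strings.TrimSpace(parts[1])})
	}
	return out
}

func main() {
	for _, b := range splitBlocks(sampleConfig) {
		fmt.Printf("meta=%q content=%q\n", b[0], b[1])
	}
}
```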
func parseArgs() (*Config, error) {
args := os.Args[1:]
if len(args) == 0 {
return nil, fmt.Errorf("task required")
}
backendName := defaultBackendName
model := ""
reasoningEffort := ""
agentName := ""
promptFile := ""
promptFileExplicit := false
yolo := false
skipPermissions := envFlagEnabled("CODEAGENT_SKIP_PERMISSIONS")
filtered := make([]string, 0, len(args))
for i := 0; i < len(args); i++ {
arg := args[i]
switch {
case arg == "--agent":
if i+1 >= len(args) {
return nil, fmt.Errorf("--agent flag requires a value")
}
value := strings.TrimSpace(args[i+1])
if value == "" {
return nil, fmt.Errorf("--agent flag requires a value")
}
if err := validateAgentName(value); err != nil {
return nil, fmt.Errorf("--agent flag invalid value: %w", err)
}
resolvedBackend, resolvedModel, resolvedPromptFile, resolvedReasoning, resolvedYolo := resolveAgentConfig(value)
backendName = resolvedBackend
model = resolvedModel
if !promptFileExplicit {
promptFile = resolvedPromptFile
}
if reasoningEffort == "" {
reasoningEffort = resolvedReasoning
}
yolo = resolvedYolo
agentName = value
i++
continue
case strings.HasPrefix(arg, "--agent="):
value := strings.TrimSpace(strings.TrimPrefix(arg, "--agent="))
if value == "" {
return nil, fmt.Errorf("--agent flag requires a value")
}
if err := validateAgentName(value); err != nil {
return nil, fmt.Errorf("--agent flag invalid value: %w", err)
}
resolvedBackend, resolvedModel, resolvedPromptFile, resolvedReasoning, resolvedYolo := resolveAgentConfig(value)
backendName = resolvedBackend
model = resolvedModel
if !promptFileExplicit {
promptFile = resolvedPromptFile
}
if reasoningEffort == "" {
reasoningEffort = resolvedReasoning
}
yolo = resolvedYolo
agentName = value
continue
case arg == "--prompt-file":
if i+1 >= len(args) {
return nil, fmt.Errorf("--prompt-file flag requires a value")
}
value := strings.TrimSpace(args[i+1])
if value == "" {
return nil, fmt.Errorf("--prompt-file flag requires a value")
}
promptFile = value
promptFileExplicit = true
i++
continue
case strings.HasPrefix(arg, "--prompt-file="):
value := strings.TrimSpace(strings.TrimPrefix(arg, "--prompt-file="))
if value == "" {
return nil, fmt.Errorf("--prompt-file flag requires a value")
}
promptFile = value
promptFileExplicit = true
continue
case arg == "--backend":
if i+1 >= len(args) {
return nil, fmt.Errorf("--backend flag requires a value")
}
backendName = args[i+1]
i++
continue
case strings.HasPrefix(arg, "--backend="):
value := strings.TrimPrefix(arg, "--backend=")
if value == "" {
return nil, fmt.Errorf("--backend flag requires a value")
}
backendName = value
continue
case arg == "--skip-permissions", arg == "--dangerously-skip-permissions":
skipPermissions = true
continue
case arg == "--model":
if i+1 >= len(args) {
return nil, fmt.Errorf("--model flag requires a value")
}
model = args[i+1]
i++
continue
case strings.HasPrefix(arg, "--model="):
value := strings.TrimPrefix(arg, "--model=")
if value == "" {
return nil, fmt.Errorf("--model flag requires a value")
}
model = value
continue
case arg == "--reasoning-effort":
if i+1 >= len(args) {
return nil, fmt.Errorf("--reasoning-effort flag requires a value")
}
value := strings.TrimSpace(args[i+1])
if value == "" {
return nil, fmt.Errorf("--reasoning-effort flag requires a value")
}
reasoningEffort = value
i++
continue
case strings.HasPrefix(arg, "--reasoning-effort="):
value := strings.TrimSpace(strings.TrimPrefix(arg, "--reasoning-effort="))
if value == "" {
return nil, fmt.Errorf("--reasoning-effort flag requires a value")
}
reasoningEffort = value
continue
case strings.HasPrefix(arg, "--skip-permissions="):
skipPermissions = parseBoolFlag(strings.TrimPrefix(arg, "--skip-permissions="), skipPermissions)
continue
case strings.HasPrefix(arg, "--dangerously-skip-permissions="):
skipPermissions = parseBoolFlag(strings.TrimPrefix(arg, "--dangerously-skip-permissions="), skipPermissions)
continue
}
filtered = append(filtered, arg)
}
if len(filtered) == 0 {
return nil, fmt.Errorf("task required")
}
args = filtered
cfg := &Config{WorkDir: defaultWorkdir, Backend: backendName, Agent: agentName, PromptFile: promptFile, PromptFileExplicit: promptFileExplicit, SkipPermissions: skipPermissions, Yolo: yolo, Model: strings.TrimSpace(model), ReasoningEffort: strings.TrimSpace(reasoningEffort)}
cfg.MaxParallelWorkers = resolveMaxParallelWorkers()
if args[0] == "resume" {
if len(args) < 3 {
return nil, fmt.Errorf("resume mode requires: resume <session_id> <task>")
}
cfg.Mode = "resume"
cfg.SessionID = strings.TrimSpace(args[1])
if cfg.SessionID == "" {
return nil, fmt.Errorf("resume mode requires non-empty session_id")
}
cfg.Task = args[2]
cfg.ExplicitStdin = (args[2] == "-")
if len(args) > 3 {
// Validate workdir: "-" is not a valid directory
if args[3] == "-" {
return nil, fmt.Errorf("invalid workdir: '-' is not a valid directory path")
}
cfg.WorkDir = args[3]
}
} else {
cfg.Mode = "new"
cfg.Task = args[0]
cfg.ExplicitStdin = (args[0] == "-")
if len(args) > 1 {
// Validate workdir: "-" is not a valid directory
if args[1] == "-" {
return nil, fmt.Errorf("invalid workdir: '-' is not a valid directory path")
}
cfg.WorkDir = args[1]
}
}
return cfg, nil
}
const maxParallelWorkersLimit = 100
func resolveMaxParallelWorkers() int {
raw := strings.TrimSpace(os.Getenv("CODEAGENT_MAX_PARALLEL_WORKERS"))
if raw == "" {
return 0
}
value, err := strconv.Atoi(raw)
if err != nil || value < 0 {
logWarn(fmt.Sprintf("Invalid CODEAGENT_MAX_PARALLEL_WORKERS=%q, falling back to unlimited", raw))
return 0
}
if value > maxParallelWorkersLimit {
logWarn(fmt.Sprintf("CODEAGENT_MAX_PARALLEL_WORKERS=%d exceeds limit, capping at %d", value, maxParallelWorkersLimit))
return maxParallelWorkersLimit
}
return value
}
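
The clamping behavior above, sketched against a raw value rather than the real environment variable (mirrors resolveMaxParallelWorkers):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

const workersLimit = 100 // mirrors maxParallelWorkersLimit above

// resolveWorkers mirrors resolveMaxParallelWorkers for a given raw value:
// empty or invalid means 0 (unlimited); values above the limit are capped.
func resolveWorkers(raw string) int {
	raw = strings.TrimSpace(raw)
	if raw == "" {
		return 0
	}
	v, err := strconv.Atoi(raw)
	if err != nil || v < 0 {
		return 0
	}
	if v > workersLimit {
		return workersLimit
	}
	return v
}

func main() {
	fmt.Println(resolveWorkers(""))    // 0 (unlimited)
	fmt.Println(resolveWorkers("8"))   // 8
	fmt.Println(resolveWorkers("500")) // 100 (capped)
	fmt.Println(resolveWorkers("-3"))  // 0 (invalid falls back to unlimited)
}
```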

View File

@@ -1,3 +1,43 @@
module codeagent-wrapper
go 1.21
require (
github.com/goccy/go-json v0.10.5
github.com/rs/zerolog v1.34.0
github.com/shirou/gopsutil/v3 v3.24.5
github.com/spf13/cobra v1.8.1
github.com/spf13/pflag v1.0.5
github.com/spf13/viper v1.19.0
)
require (
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/magiconair/properties v1.8.7 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.19 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/pelletier/go-toml/v2 v2.2.2 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/sagikazarmark/locafero v0.4.0 // indirect
github.com/sagikazarmark/slog-shim v0.1.0 // indirect
github.com/shoenig/go-m1cpu v0.1.6 // indirect
github.com/sourcegraph/conc v0.3.0 // indirect
github.com/spf13/afero v1.11.0 // indirect
github.com/spf13/cast v1.6.0 // indirect
github.com/subosito/gotenv v1.6.0 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
go.uber.org/atomic v1.9.0 // indirect
go.uber.org/multierr v1.9.0 // indirect
golang.org/x/exp v0.0.0-20230905200255-921286631fa9 // indirect
golang.org/x/sys v0.20.0 // indirect
golang.org/x/text v0.14.0 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

codeagent-wrapper/go.sum Normal file
View File

@@ -0,0 +1,117 @@
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
github.com/magiconair/properties v1.8.7 h1:IeQXZAiQcpL9mgcAe1Nu6cX9LLw6ExEHKjN0VQdvPDY=
github.com/magiconair/properties v1.8.7/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6Wq+LM=
github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c h1:ncq/mPwQF4JjgDlrVEn3C11VoGHZN7m8qihwgMEtzYw=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
github.com/rs/zerolog v1.34.0 h1:k43nTLIwcTVQAncfCw4KZ2VY6ukYoZaBPNOE8txlOeY=
github.com/rs/zerolog v1.34.0/go.mod h1:bJsvje4Z08ROH4Nhs5iH600c3IkWhwp44iRc54W6wYQ=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sagikazarmark/locafero v0.4.0 h1:HApY1R9zGo4DBgr7dqsTH/JJxLTTsOt7u6keLGt6kNQ=
github.com/sagikazarmark/locafero v0.4.0/go.mod h1:Pe1W6UlPYUk/+wc/6KFhbORCfqzgYEpgQ3O5fPuL3H4=
github.com/sagikazarmark/slog-shim v0.1.0 h1:diDBnUNK9N/354PgrxMywXnAwEr1QZcOr6gto+ugjYE=
github.com/sagikazarmark/slog-shim v0.1.0/go.mod h1:SrcSrq8aKtyuqEI1uvTDTK1arOWRIczQRv+GVI1AkeQ=
github.com/shirou/gopsutil/v3 v3.24.5 h1:i0t8kL+kQTvpAYToeuiVk3TgDeKOFioZO3Ztz/iZ9pI=
github.com/shirou/gopsutil/v3 v3.24.5/go.mod h1:bsoOS1aStSs9ErQ1WWfxllSeS1K5D+U30r2NfcubMVk=
github.com/shoenig/go-m1cpu v0.1.6 h1:nxdKQNcEB6vzgA2E2bvzKIYRuNj7XNJ4S/aRSwKzFtM=
github.com/shoenig/go-m1cpu v0.1.6/go.mod h1:1JJMcUBvfNwpq05QDQVAnx3gUHr9IYF7GNg9SUEw2VQ=
github.com/shoenig/test v0.6.4 h1:kVTaSd7WLz5WZ2IaoM0RSzRsUD+m8wRR+5qvntpn4LU=
github.com/shoenig/test v0.6.4/go.mod h1:byHiCGXqrVaflBLAMq/srcZIHynQPQgeyvkvXnjqq0k=
github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo=
github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0=
github.com/spf13/afero v1.11.0 h1:WJQKhtpdm3v2IzqG8VMqrr6Rf3UYpEF239Jy9wNepM8=
github.com/spf13/afero v1.11.0/go.mod h1:GH9Y3pIexgf1MTIWtNGyogA5MwRIDXGUr+hbWNoBjkY=
github.com/spf13/cast v1.6.0 h1:GEiTHELF+vaR5dhz3VqZfFSzZjYbgeKDpBxQVS4GYJ0=
github.com/spf13/cast v1.6.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=
github.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.19.0 h1:RWq5SEjt8o25SROyN3z2OrDB9l7RPd3lwTWU8EcEdcI=
github.com/spf13/viper v1.19.0/go.mod h1:GQUN9bilAbhU/jgc1bKs99f/suXKeUMct8Adx5+Ntkg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/tklauser/go-sysconf v0.3.12 h1:0QaGUFOdQaIVdPgfITYzaTegZvdCjmYO52cSFAEVmqU=
github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.uber.org/atomic v1.9.0 h1:ECmE8Bn/WFTYwEW/bpKD3M8VtR/zQVbavAoalC1PYyE=
go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/multierr v1.9.0 h1:7fIwc/ZtS0q++VgcfqFDxSBZVv/Xo49/SYnDFupUwlI=
go.uber.org/multierr v1.9.0/go.mod h1:X2jQV1h+kxSjClGpnseKVIxpmcjrj7MNnI0bnlfKTVQ=
golang.org/x/exp v0.0.0-20230905200255-921286631fa9 h1:GoHiUyI/Tp2nVkLI2mCxVkOjsbSXD66ic0XW0js0R9g=
golang.org/x/exp v0.0.0-20230905200255-921286631fa9/go.mod h1:S2oDrQGGwySpoQPVqRShND87VCbxmc6bL1Yd2oYrm6k=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.20.0 h1:Od9JTbYCk261bKm4M/mw7AklTlFYIa0bIp9BgSm1S8Y=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=
gopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

View File

@@ -1,4 +1,4 @@
package main
package wrapper
import (
"context"
@@ -6,6 +6,9 @@ import (
"path/filepath"
"testing"
"time"
config "codeagent-wrapper/internal/config"
executor "codeagent-wrapper/internal/executor"
)
func TestValidateAgentName(t *testing.T) {
@@ -28,7 +31,7 @@ func TestValidateAgentName(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := validateAgentName(tt.input)
err := config.ValidateAgentName(tt.input)
if (err != nil) != tt.wantErr {
t.Fatalf("validateAgentName(%q) err=%v, wantErr=%v", tt.input, err, tt.wantErr)
}
@@ -59,6 +62,8 @@ func TestParseParallelConfig_ResolvesAgentPromptFile(t *testing.T) {
home := t.TempDir()
t.Setenv("HOME", home)
t.Setenv("USERPROFILE", home)
t.Cleanup(config.ResetModelsConfigCacheForTest)
config.ResetModelsConfigCacheForTest()
configDir := filepath.Join(home, ".codeagent")
if err := os.MkdirAll(configDir, 0o755); err != nil {
@@ -117,10 +122,8 @@ func TestDefaultRunCodexTaskFn_AppliesAgentPromptFile(t *testing.T) {
WaitDelay: 2 * time.Millisecond,
})
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
return fake
}
selectBackendFn = func(name string) (Backend, error) {
_ = executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner { return fake })
_ = executor.SetSelectBackendFn(func(name string) (Backend, error) {
return testBackend{
name: name,
command: "fake-cmd",
@@ -128,7 +131,7 @@ func TestDefaultRunCodexTaskFn_AppliesAgentPromptFile(t *testing.T) {
return []string{targetArg}
},
}, nil
}
})
res := defaultRunCodexTaskFn(TaskSpec{
ID: "t",

View File

@@ -0,0 +1,278 @@
package wrapper
import (
"fmt"
"io"
"os"
"path/filepath"
"strings"
"time"
)
const (
version = "6.1.2"
defaultWorkdir = "."
defaultTimeout = 7200 // seconds (2 hours)
defaultCoverageTarget = 90.0
codexLogLineLimit = 1000
stdinSpecialChars = "\n\\\"'`$"
stderrCaptureLimit = 4 * 1024
defaultBackendName = "codex"
defaultCodexCommand = "codex"
// stdout close reasons
stdoutCloseReasonWait = "wait-done"
stdoutCloseReasonDrain = "drain-timeout"
stdoutCloseReasonCtx = "context-cancel"
stdoutDrainTimeout = 100 * time.Millisecond
)
// Test hooks for dependency injection
var (
stdinReader io.Reader = os.Stdin
isTerminalFn = defaultIsTerminal
codexCommand = defaultCodexCommand
cleanupHook func()
startupCleanupAsync = true
buildCodexArgsFn = buildCodexArgs
selectBackendFn = selectBackend
cleanupLogsFn = cleanupOldLogs
defaultBuildArgsFn = buildCodexArgs
runTaskFn = runCodexTask
exitFn = os.Exit
)
func runStartupCleanup() {
if cleanupLogsFn == nil {
return
}
defer func() {
if r := recover(); r != nil {
logWarn(fmt.Sprintf("cleanupOldLogs panic: %v", r))
}
}()
if _, err := cleanupLogsFn(); err != nil {
logWarn(fmt.Sprintf("cleanupOldLogs error: %v", err))
}
}
func scheduleStartupCleanup() {
if !startupCleanupAsync {
runStartupCleanup()
return
}
if cleanupLogsFn == nil {
return
}
fn := cleanupLogsFn
go func() {
defer func() {
if r := recover(); r != nil {
logWarn(fmt.Sprintf("cleanupOldLogs panic: %v", r))
}
}()
if _, err := fn(); err != nil {
logWarn(fmt.Sprintf("cleanupOldLogs error: %v", err))
}
}()
}
func runCleanupMode() int {
if cleanupLogsFn == nil {
fmt.Fprintln(os.Stderr, "Cleanup failed: log cleanup function not configured")
return 1
}
stats, err := cleanupLogsFn()
if err != nil {
fmt.Fprintf(os.Stderr, "Cleanup failed: %v\n", err)
return 1
}
fmt.Println("Cleanup completed")
fmt.Printf("Files scanned: %d\n", stats.Scanned)
fmt.Printf("Files deleted: %d\n", stats.Deleted)
if len(stats.DeletedFiles) > 0 {
for _, f := range stats.DeletedFiles {
fmt.Printf(" - %s\n", f)
}
}
fmt.Printf("Files kept: %d\n", stats.Kept)
if len(stats.KeptFiles) > 0 {
for _, f := range stats.KeptFiles {
fmt.Printf(" - %s\n", f)
}
}
if stats.Errors > 0 {
fmt.Printf("Deletion errors: %d\n", stats.Errors)
}
return 0
}
func readAgentPromptFile(path string, allowOutsideClaudeDir bool) (string, error) {
raw := strings.TrimSpace(path)
if raw == "" {
return "", nil
}
expanded := raw
if raw == "~" || strings.HasPrefix(raw, "~/") || strings.HasPrefix(raw, "~\\") {
home, err := os.UserHomeDir()
if err != nil {
return "", err
}
if raw == "~" {
expanded = home
} else {
expanded = home + raw[1:]
}
}
absPath, err := filepath.Abs(expanded)
if err != nil {
return "", err
}
absPath = filepath.Clean(absPath)
home, err := os.UserHomeDir()
if err != nil {
if !allowOutsideClaudeDir {
return "", err
}
logWarn(fmt.Sprintf("Failed to resolve home directory for prompt file validation: %v; proceeding without restriction", err))
} else {
allowedDirs := []string{
filepath.Clean(filepath.Join(home, ".claude")),
filepath.Clean(filepath.Join(home, ".codeagent", "agents")),
}
for i := range allowedDirs {
allowedAbs, err := filepath.Abs(allowedDirs[i])
if err == nil {
allowedDirs[i] = filepath.Clean(allowedAbs)
}
}
isWithinDir := func(path, dir string) bool {
rel, err := filepath.Rel(dir, path)
if err != nil {
return false
}
rel = filepath.Clean(rel)
if rel == "." {
return true
}
if rel == ".." {
return false
}
prefix := ".." + string(os.PathSeparator)
return !strings.HasPrefix(rel, prefix)
}
if !allowOutsideClaudeDir {
withinAllowed := false
for _, dir := range allowedDirs {
if isWithinDir(absPath, dir) {
withinAllowed = true
break
}
}
if !withinAllowed {
logWarn(fmt.Sprintf("Refusing to read prompt file outside allowed dirs (%s): %s", strings.Join(allowedDirs, ", "), absPath))
return "", fmt.Errorf("prompt file must be under ~/.claude or ~/.codeagent/agents")
}
resolvedPath, errPath := filepath.EvalSymlinks(absPath)
if errPath == nil {
resolvedPath = filepath.Clean(resolvedPath)
resolvedAllowed := make([]string, 0, len(allowedDirs))
for _, dir := range allowedDirs {
resolvedBase, errBase := filepath.EvalSymlinks(dir)
if errBase != nil {
continue
}
resolvedAllowed = append(resolvedAllowed, filepath.Clean(resolvedBase))
}
if len(resolvedAllowed) > 0 {
withinResolved := false
for _, dir := range resolvedAllowed {
if isWithinDir(resolvedPath, dir) {
withinResolved = true
break
}
}
if !withinResolved {
logWarn(fmt.Sprintf("Refusing to read prompt file outside allowed dirs (%s) (resolved): %s", strings.Join(resolvedAllowed, ", "), resolvedPath))
return "", fmt.Errorf("prompt file must be under ~/.claude or ~/.codeagent/agents")
}
}
}
} else {
withinAllowed := false
for _, dir := range allowedDirs {
if isWithinDir(absPath, dir) {
withinAllowed = true
break
}
}
if !withinAllowed {
logWarn(fmt.Sprintf("Reading prompt file outside allowed dirs (%s): %s", strings.Join(allowedDirs, ", "), absPath))
}
}
}
data, err := os.ReadFile(absPath)
if err != nil {
return "", err
}
return strings.TrimRight(string(data), "\r\n"), nil
}
func wrapTaskWithAgentPrompt(prompt string, task string) string {
return "<agent-prompt>\n" + prompt + "\n</agent-prompt>\n\n" + task
}
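
The envelope wrapTaskWithAgentPrompt builds looks like this (sketch with hypothetical prompt and task text):

```go
package main

import "fmt"

// wrap mirrors wrapTaskWithAgentPrompt: the agent prompt is prepended
// inside an <agent-prompt> envelope so the backend sees it before the task.
func wrap(prompt, task string) string {
	return "<agent-prompt>\n" + prompt + "\n</agent-prompt>\n\n" + task
}

func main() {
	fmt.Println(wrap("You are a careful reviewer.", "Fix the failing test."))
}
```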
func runCleanupHook() {
if logger := activeLogger(); logger != nil {
logger.Flush()
}
if cleanupHook != nil {
cleanupHook()
}
}
func printHelp() {
name := currentWrapperName()
help := fmt.Sprintf(`%[1]s - Go wrapper for AI CLI backends
Usage:
%[1]s "task" [workdir]
%[1]s --backend claude "task" [workdir]
%[1]s --prompt-file /path/to/prompt.md "task" [workdir]
%[1]s - [workdir] Read task from stdin
%[1]s resume <session_id> "task" [workdir]
%[1]s resume <session_id> - [workdir]
%[1]s --parallel Run tasks in parallel (config from stdin)
%[1]s --parallel --full-output Run tasks in parallel with full output (legacy)
%[1]s --version
%[1]s --help
Parallel mode examples:
%[1]s --parallel < tasks.txt
echo '...' | %[1]s --parallel
%[1]s --parallel --full-output < tasks.txt
%[1]s --parallel <<'EOF'
Environment Variables:
CODEX_TIMEOUT Timeout in milliseconds (default: 7200000)
CODEAGENT_ASCII_MODE Use ASCII symbols instead of Unicode (PASS/WARN/FAIL)
Exit Codes:
0 Success
1 General error (missing args, no output)
124 Timeout
127 backend command not found
130 Interrupted (Ctrl+C)
* Passthrough from backend process`, name)
fmt.Println(help)
}

View File

@@ -0,0 +1,9 @@
package wrapper
import backend "codeagent-wrapper/internal/backend"
type Backend = backend.Backend
type CodexBackend = backend.CodexBackend
type ClaudeBackend = backend.ClaudeBackend
type GeminiBackend = backend.GeminiBackend
type OpencodeBackend = backend.OpencodeBackend

View File

@@ -0,0 +1,7 @@
package wrapper
import backend "codeagent-wrapper/internal/backend"
func init() {
backend.SetLogFuncs(logWarn, logError)
}

View File

@@ -0,0 +1,5 @@
package wrapper
import backend "codeagent-wrapper/internal/backend"
func selectBackend(name string) (Backend, error) { return backend.Select(name) }

View File

@@ -0,0 +1,103 @@
package wrapper
import (
"bytes"
"os"
"testing"
config "codeagent-wrapper/internal/config"
)
var (
benchCmdSink any
benchConfigSink *Config
benchMessageSink string
benchThreadIDSink string
)
// BenchmarkStartup_NewRootCommand measures CLI startup overhead (command+flags construction).
func BenchmarkStartup_NewRootCommand(b *testing.B) {
b.ReportAllocs()
for i := 0; i < b.N; i++ {
benchCmdSink = newRootCommand()
}
}
// BenchmarkConfigParse_ParseArgs measures config parsing from argv/env (steady-state).
func BenchmarkConfigParse_ParseArgs(b *testing.B) {
home := b.TempDir()
b.Setenv("HOME", home)
b.Setenv("USERPROFILE", home)
config.ResetModelsConfigCacheForTest()
b.Cleanup(config.ResetModelsConfigCacheForTest)
origArgs := os.Args
os.Args = []string{"codeagent-wrapper", "--agent", "develop", "task"}
b.Cleanup(func() { os.Args = origArgs })
if _, err := parseArgs(); err != nil {
b.Fatalf("warmup parseArgs() error: %v", err)
}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
cfg, err := parseArgs()
if err != nil {
b.Fatalf("parseArgs() error: %v", err)
}
benchConfigSink = cfg
}
}
// BenchmarkJSONParse_ParseJSONStreamInternal measures line-delimited JSON stream parsing.
func BenchmarkJSONParse_ParseJSONStreamInternal(b *testing.B) {
stream := []byte(
`{"type":"thread.started","thread_id":"t"}` + "\n" +
`{"type":"item.completed","item":{"type":"agent_message","text":"hello"}}` + "\n" +
`{"type":"thread.completed","thread_id":"t"}` + "\n",
)
b.SetBytes(int64(len(stream)))
b.ReportAllocs()
for i := 0; i < b.N; i++ {
message, threadID := parseJSONStreamInternal(bytes.NewReader(stream), nil, nil, nil, nil)
benchMessageSink = message
benchThreadIDSink = threadID
}
}
// BenchmarkLoggerWrite measures log write performance.
// BenchmarkLoggerWrite measures log write performance.
func BenchmarkLoggerWrite(b *testing.B) {
logger, err := NewLogger()
if err != nil {
b.Fatal(err)
}
defer logger.Close()
b.ResetTimer()
for i := 0; i < b.N; i++ {
logger.Info("benchmark log message")
}
b.StopTimer()
logger.Flush()
}
// BenchmarkLoggerConcurrentWrite measures concurrent log write performance.
func BenchmarkLoggerConcurrentWrite(b *testing.B) {
logger, err := NewLogger()
if err != nil {
b.Fatal(err)
}
defer logger.Close()
b.ResetTimer()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
logger.Info("concurrent benchmark log message")
}
})
b.StopTimer()
logger.Flush()
}

View File

@@ -0,0 +1,664 @@
package wrapper
import (
"errors"
"fmt"
"io"
"os"
"reflect"
"strings"
config "codeagent-wrapper/internal/config"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"github.com/spf13/viper"
)
type exitError struct {
code int
}
func (e exitError) Error() string {
return fmt.Sprintf("exit %d", e.code)
}
type cliOptions struct {
Backend string
Model string
ReasoningEffort string
Agent string
PromptFile string
SkipPermissions bool
Parallel bool
FullOutput bool
Cleanup bool
Version bool
ConfigFile string
}
func Main() {
Run()
}
// Run is the program entrypoint for cmd/codeagent/main.go.
func Run() {
exitFn(run())
}
func run() int {
cmd := newRootCommand()
cmd.SetArgs(os.Args[1:])
if err := cmd.Execute(); err != nil {
var ee exitError
if errors.As(err, &ee) {
return ee.code
}
return 1
}
return 0
}
func newRootCommand() *cobra.Command {
name := currentWrapperName()
opts := &cliOptions{}
cmd := &cobra.Command{
Use: fmt.Sprintf("%s [flags] <task>|resume <session_id> <task> [workdir]", name),
Short: "Go wrapper for AI CLI backends",
SilenceErrors: true,
SilenceUsage: true,
Args: cobra.ArbitraryArgs,
RunE: func(cmd *cobra.Command, args []string) error {
if opts.Version {
fmt.Printf("%s version %s\n", name, version)
return nil
}
if opts.Cleanup {
code := runCleanupMode()
if code == 0 {
return nil
}
return exitError{code: code}
}
exitCode := runWithLoggerAndCleanup(func() int {
v, err := config.NewViper(opts.ConfigFile)
if err != nil {
logError(err.Error())
return 1
}
if opts.Parallel {
return runParallelMode(cmd, args, opts, v, name)
}
logInfo("Script started")
cfg, err := buildSingleConfig(cmd, args, os.Args[1:], opts, v)
if err != nil {
logError(err.Error())
return 1
}
logInfo(fmt.Sprintf("Parsed args: mode=%s, task_len=%d, backend=%s", cfg.Mode, len(cfg.Task), cfg.Backend))
return runSingleMode(cfg, name)
})
if exitCode == 0 {
return nil
}
return exitError{code: exitCode}
},
}
cmd.CompletionOptions.DisableDefaultCmd = true
addRootFlags(cmd.Flags(), opts)
cmd.AddCommand(newVersionCommand(name), newCleanupCommand())
return cmd
}
func addRootFlags(fs *pflag.FlagSet, opts *cliOptions) {
fs.StringVar(&opts.ConfigFile, "config", "", "Config file path (default: $HOME/.codeagent/config.*)")
fs.BoolVarP(&opts.Version, "version", "v", false, "Print version and exit")
fs.BoolVar(&opts.Cleanup, "cleanup", false, "Clean up old logs and exit")
fs.BoolVar(&opts.Parallel, "parallel", false, "Run tasks in parallel (config from stdin)")
fs.BoolVar(&opts.FullOutput, "full-output", false, "Parallel mode: include full task output (legacy)")
fs.StringVar(&opts.Backend, "backend", defaultBackendName, "Backend to use (codex, claude, gemini, opencode)")
fs.StringVar(&opts.Model, "model", "", "Model override")
fs.StringVar(&opts.ReasoningEffort, "reasoning-effort", "", "Reasoning effort (backend-specific)")
fs.StringVar(&opts.Agent, "agent", "", "Agent preset name (from ~/.codeagent/models.json)")
fs.StringVar(&opts.PromptFile, "prompt-file", "", "Prompt file path")
fs.BoolVar(&opts.SkipPermissions, "skip-permissions", false, "Skip permissions prompts (also via CODEAGENT_SKIP_PERMISSIONS)")
fs.BoolVar(&opts.SkipPermissions, "dangerously-skip-permissions", false, "Alias for --skip-permissions")
}
func newVersionCommand(name string) *cobra.Command {
return &cobra.Command{
Use: "version",
Short: "Print version and exit",
SilenceErrors: true,
SilenceUsage: true,
RunE: func(cmd *cobra.Command, args []string) error {
fmt.Printf("%s version %s\n", name, version)
return nil
},
}
}
func newCleanupCommand() *cobra.Command {
return &cobra.Command{
Use: "cleanup",
Short: "Clean up old logs and exit",
SilenceErrors: true,
SilenceUsage: true,
RunE: func(cmd *cobra.Command, args []string) error {
code := runCleanupMode()
if code == 0 {
return nil
}
return exitError{code: code}
},
}
}
func runWithLoggerAndCleanup(fn func() int) (exitCode int) {
logger, err := NewLogger()
if err != nil {
fmt.Fprintf(os.Stderr, "ERROR: failed to initialize logger: %v\n", err)
return 1
}
setLogger(logger)
defer func() {
logger := activeLogger()
if logger != nil {
logger.Flush()
}
if err := closeLogger(); err != nil {
fmt.Fprintf(os.Stderr, "ERROR: failed to close logger: %v\n", err)
}
if logger == nil {
return
}
if exitCode != 0 {
if entries := logger.ExtractRecentErrors(10); len(entries) > 0 {
fmt.Fprintln(os.Stderr, "\n=== Recent Errors ===")
for _, entry := range entries {
fmt.Fprintln(os.Stderr, entry)
}
fmt.Fprintf(os.Stderr, "Log file: %s (deleted)\n", logger.Path())
}
}
_ = logger.RemoveLogFile()
}()
defer runCleanupHook()
// Clean up stale logs from previous runs.
scheduleStartupCleanup()
return fn()
}
func parseArgs() (*Config, error) {
opts := &cliOptions{}
cmd := &cobra.Command{SilenceErrors: true, SilenceUsage: true, Args: cobra.ArbitraryArgs}
addRootFlags(cmd.Flags(), opts)
rawArgv := os.Args[1:]
if err := cmd.ParseFlags(rawArgv); err != nil {
return nil, err
}
args := cmd.Flags().Args()
v, err := config.NewViper(opts.ConfigFile)
if err != nil {
return nil, err
}
return buildSingleConfig(cmd, args, rawArgv, opts, v)
}
func buildSingleConfig(cmd *cobra.Command, args []string, rawArgv []string, opts *cliOptions, v *viper.Viper) (*Config, error) {
backendName := defaultBackendName
model := ""
reasoningEffort := ""
agentName := ""
promptFile := ""
promptFileExplicit := false
yolo := false
if cmd.Flags().Changed("agent") {
agentName = strings.TrimSpace(opts.Agent)
if agentName == "" {
return nil, fmt.Errorf("--agent flag requires a value")
}
if err := config.ValidateAgentName(agentName); err != nil {
return nil, fmt.Errorf("--agent flag invalid value: %w", err)
}
} else {
agentName = strings.TrimSpace(v.GetString("agent"))
if agentName != "" {
if err := config.ValidateAgentName(agentName); err != nil {
return nil, fmt.Errorf("--agent flag invalid value: %w", err)
}
}
}
var resolvedBackend, resolvedModel, resolvedPromptFile, resolvedReasoning string
if agentName != "" {
var resolvedYolo bool
resolvedBackend, resolvedModel, resolvedPromptFile, resolvedReasoning, _, _, resolvedYolo = config.ResolveAgentConfig(agentName)
yolo = resolvedYolo
}
if cmd.Flags().Changed("prompt-file") {
promptFile = strings.TrimSpace(opts.PromptFile)
if promptFile == "" {
return nil, fmt.Errorf("--prompt-file flag requires a value")
}
promptFileExplicit = true
} else if val := strings.TrimSpace(v.GetString("prompt-file")); val != "" {
promptFile = val
promptFileExplicit = true
} else {
promptFile = resolvedPromptFile
}
agentFlagChanged := cmd.Flags().Changed("agent")
backendFlagChanged := cmd.Flags().Changed("backend")
if backendFlagChanged {
backendName = strings.TrimSpace(opts.Backend)
if backendName == "" {
return nil, fmt.Errorf("--backend flag requires a value")
}
}
switch {
case agentFlagChanged && backendFlagChanged && lastFlagIndex(rawArgv, "agent") > lastFlagIndex(rawArgv, "backend"):
backendName = resolvedBackend
case !backendFlagChanged && agentName != "":
backendName = resolvedBackend
case !backendFlagChanged:
if val := strings.TrimSpace(v.GetString("backend")); val != "" {
backendName = val
}
}
modelFlagChanged := cmd.Flags().Changed("model")
if modelFlagChanged {
model = strings.TrimSpace(opts.Model)
if model == "" {
return nil, fmt.Errorf("--model flag requires a value")
}
}
switch {
case agentFlagChanged && modelFlagChanged && lastFlagIndex(rawArgv, "agent") > lastFlagIndex(rawArgv, "model"):
model = strings.TrimSpace(resolvedModel)
case !modelFlagChanged && agentName != "":
model = strings.TrimSpace(resolvedModel)
case !modelFlagChanged:
model = strings.TrimSpace(v.GetString("model"))
}
if cmd.Flags().Changed("reasoning-effort") {
reasoningEffort = strings.TrimSpace(opts.ReasoningEffort)
if reasoningEffort == "" {
return nil, fmt.Errorf("--reasoning-effort flag requires a value")
}
} else if val := strings.TrimSpace(v.GetString("reasoning-effort")); val != "" {
reasoningEffort = val
} else if agentName != "" {
reasoningEffort = strings.TrimSpace(resolvedReasoning)
}
skipChanged := cmd.Flags().Changed("skip-permissions") || cmd.Flags().Changed("dangerously-skip-permissions")
skipPermissions := false
if skipChanged {
skipPermissions = opts.SkipPermissions
} else {
skipPermissions = v.GetBool("skip-permissions")
}
if len(args) == 0 {
return nil, fmt.Errorf("task required")
}
cfg := &Config{
WorkDir: defaultWorkdir,
Backend: backendName,
Agent: agentName,
PromptFile: promptFile,
PromptFileExplicit: promptFileExplicit,
SkipPermissions: skipPermissions,
Yolo: yolo,
Model: model,
ReasoningEffort: reasoningEffort,
MaxParallelWorkers: config.ResolveMaxParallelWorkers(),
}
if args[0] == "resume" {
if len(args) < 3 {
return nil, fmt.Errorf("resume mode requires: resume <session_id> <task>")
}
cfg.Mode = "resume"
cfg.SessionID = strings.TrimSpace(args[1])
if cfg.SessionID == "" {
return nil, fmt.Errorf("resume mode requires non-empty session_id")
}
cfg.Task = args[2]
cfg.ExplicitStdin = (args[2] == "-")
if len(args) > 3 {
if args[3] == "-" {
return nil, fmt.Errorf("invalid workdir: '-' is not a valid directory path")
}
cfg.WorkDir = args[3]
}
} else {
cfg.Mode = "new"
cfg.Task = args[0]
cfg.ExplicitStdin = (args[0] == "-")
if len(args) > 1 {
if args[1] == "-" {
return nil, fmt.Errorf("invalid workdir: '-' is not a valid directory path")
}
cfg.WorkDir = args[1]
}
}
return cfg, nil
}
func lastFlagIndex(argv []string, name string) int {
if len(argv) == 0 {
return -1
}
name = strings.TrimSpace(name)
if name == "" {
return -1
}
needle := "--" + name
prefix := needle + "="
last := -1
for i, arg := range argv {
if arg == needle || strings.HasPrefix(arg, prefix) {
last = i
}
}
return last
}
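In `buildSingleConfig`, when both `--agent` and `--backend` (or `--model`) are set, the one appearing later in argv wins, and `lastFlagIndex` is what decides "later". A standalone sketch of that precedence check (same index logic, hypothetical demo harness):

```go
package main

import (
	"fmt"
	"strings"
)

// lastFlagIndex returns the index of the final occurrence of --name
// (either "--name" or "--name=value") in argv, or -1 if absent.
func lastFlagIndex(argv []string, name string) int {
	needle := "--" + name
	prefix := needle + "="
	last := -1
	for i, arg := range argv {
		if arg == needle || strings.HasPrefix(arg, prefix) {
			last = i
		}
	}
	return last
}

func main() {
	argv := []string{"--backend", "codex", "--agent=develop", "task"}
	// --agent appears after --backend, so the agent preset's backend wins.
	if lastFlagIndex(argv, "agent") > lastFlagIndex(argv, "backend") {
		fmt.Println("agent preset overrides --backend")
	}
}
```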
func runParallelMode(cmd *cobra.Command, args []string, opts *cliOptions, v *viper.Viper, name string) int {
if len(args) > 0 {
fmt.Fprintln(os.Stderr, "ERROR: --parallel reads its task configuration from stdin; no positional arguments are allowed.")
fmt.Fprintln(os.Stderr, "Usage examples:")
fmt.Fprintf(os.Stderr, " %s --parallel < tasks.txt\n", name)
fmt.Fprintf(os.Stderr, " echo '...' | %s --parallel\n", name)
fmt.Fprintf(os.Stderr, " %s --parallel <<'EOF'\n", name)
fmt.Fprintf(os.Stderr, " %s --parallel --full-output <<'EOF' # include full task output\n", name)
return 1
}
if cmd.Flags().Changed("agent") || cmd.Flags().Changed("prompt-file") || cmd.Flags().Changed("reasoning-effort") {
fmt.Fprintln(os.Stderr, "ERROR: --parallel reads its task configuration from stdin; only --backend, --model, --full-output and --skip-permissions are allowed.")
return 1
}
backendName := defaultBackendName
if cmd.Flags().Changed("backend") {
backendName = strings.TrimSpace(opts.Backend)
if backendName == "" {
fmt.Fprintln(os.Stderr, "ERROR: --backend flag requires a value")
return 1
}
} else if val := strings.TrimSpace(v.GetString("backend")); val != "" {
backendName = val
}
model := ""
if cmd.Flags().Changed("model") {
model = strings.TrimSpace(opts.Model)
if model == "" {
fmt.Fprintln(os.Stderr, "ERROR: --model flag requires a value")
return 1
}
} else {
model = strings.TrimSpace(v.GetString("model"))
}
fullOutput := opts.FullOutput
if !cmd.Flags().Changed("full-output") && v.IsSet("full-output") {
fullOutput = v.GetBool("full-output")
}
skipChanged := cmd.Flags().Changed("skip-permissions") || cmd.Flags().Changed("dangerously-skip-permissions")
skipPermissions := false
if skipChanged {
skipPermissions = opts.SkipPermissions
} else {
skipPermissions = v.GetBool("skip-permissions")
}
backend, err := selectBackendFn(backendName)
if err != nil {
fmt.Fprintf(os.Stderr, "ERROR: %v\n", err)
return 1
}
backendName = backend.Name()
data, err := io.ReadAll(stdinReader)
if err != nil {
fmt.Fprintf(os.Stderr, "ERROR: failed to read stdin: %v\n", err)
return 1
}
cfg, err := parseParallelConfig(data)
if err != nil {
fmt.Fprintf(os.Stderr, "ERROR: %v\n", err)
return 1
}
cfg.GlobalBackend = backendName
model = strings.TrimSpace(model)
for i := range cfg.Tasks {
if strings.TrimSpace(cfg.Tasks[i].Backend) == "" {
cfg.Tasks[i].Backend = backendName
}
if strings.TrimSpace(cfg.Tasks[i].Model) == "" && model != "" {
cfg.Tasks[i].Model = model
}
cfg.Tasks[i].SkipPermissions = cfg.Tasks[i].SkipPermissions || skipPermissions
}
timeoutSec := resolveTimeout()
layers, err := topologicalSort(cfg.Tasks)
if err != nil {
fmt.Fprintf(os.Stderr, "ERROR: %v\n", err)
return 1
}
results := executeConcurrent(layers, timeoutSec)
for i := range results {
results[i].CoverageTarget = defaultCoverageTarget
if results[i].Message == "" {
continue
}
lines := strings.Split(results[i].Message, "\n")
results[i].Coverage = extractCoverageFromLines(lines)
results[i].CoverageNum = extractCoverageNum(results[i].Coverage)
results[i].FilesChanged = extractFilesChangedFromLines(lines)
results[i].TestsPassed, results[i].TestsFailed = extractTestResultsFromLines(lines)
results[i].KeyOutput = extractKeyOutputFromLines(lines, 150)
}
fmt.Println(generateFinalOutputWithMode(results, !fullOutput))
exitCode := 0
for _, res := range results {
if res.ExitCode != 0 {
exitCode = res.ExitCode
}
}
return exitCode
}
func runSingleMode(cfg *Config, name string) int {
backend, err := selectBackendFn(cfg.Backend)
if err != nil {
logError(err.Error())
return 1
}
cfg.Backend = backend.Name()
cmdInjected := codexCommand != defaultCodexCommand
argsInjected := buildCodexArgsFn != nil && reflect.ValueOf(buildCodexArgsFn).Pointer() != reflect.ValueOf(defaultBuildArgsFn).Pointer()
if backend.Name() != defaultBackendName || !cmdInjected {
codexCommand = backend.Command()
}
if backend.Name() != defaultBackendName || !argsInjected {
buildCodexArgsFn = backend.BuildArgs
}
logInfo(fmt.Sprintf("Selected backend: %s", backend.Name()))
timeoutSec := resolveTimeout()
logInfo(fmt.Sprintf("Timeout: %ds", timeoutSec))
cfg.Timeout = timeoutSec
var taskText string
var piped bool
if cfg.ExplicitStdin {
logInfo("Explicit stdin mode: reading task from stdin")
data, err := io.ReadAll(stdinReader)
if err != nil {
logError("Failed to read stdin: " + err.Error())
return 1
}
taskText = string(data)
if taskText == "" {
logError("Explicit stdin mode requires task input from stdin")
return 1
}
piped = !isTerminal()
} else {
pipedTask, err := readPipedTask()
if err != nil {
logError("Failed to read piped stdin: " + err.Error())
return 1
}
piped = pipedTask != ""
if piped {
taskText = pipedTask
} else {
taskText = cfg.Task
}
}
if strings.TrimSpace(cfg.PromptFile) != "" {
prompt, err := readAgentPromptFile(cfg.PromptFile, cfg.PromptFileExplicit)
if err != nil {
logError("Failed to read prompt file: " + err.Error())
return 1
}
taskText = wrapTaskWithAgentPrompt(prompt, taskText)
}
useStdin := cfg.ExplicitStdin || shouldUseStdin(taskText, piped)
targetArg := taskText
if useStdin {
targetArg = "-"
}
codexArgs := buildCodexArgsFn(cfg, targetArg)
logger := activeLogger()
if logger == nil {
fmt.Fprintln(os.Stderr, "ERROR: logger is not initialized")
return 1
}
fmt.Fprintf(os.Stderr, "[%s]\n", name)
fmt.Fprintf(os.Stderr, " Backend: %s\n", cfg.Backend)
fmt.Fprintf(os.Stderr, " Command: %s %s\n", codexCommand, strings.Join(codexArgs, " "))
fmt.Fprintf(os.Stderr, " PID: %d\n", os.Getpid())
fmt.Fprintf(os.Stderr, " Log: %s\n", logger.Path())
if useStdin {
var reasons []string
if piped {
reasons = append(reasons, "piped input")
}
if cfg.ExplicitStdin {
reasons = append(reasons, "explicit \"-\"")
}
if strings.Contains(taskText, "\n") {
reasons = append(reasons, "newline")
}
if strings.Contains(taskText, "\\") {
reasons = append(reasons, "backslash")
}
if strings.Contains(taskText, "\"") {
reasons = append(reasons, "double-quote")
}
if strings.Contains(taskText, "'") {
reasons = append(reasons, "single-quote")
}
if strings.Contains(taskText, "`") {
reasons = append(reasons, "backtick")
}
if strings.Contains(taskText, "$") {
reasons = append(reasons, "dollar")
}
if len(taskText) > 800 {
reasons = append(reasons, "length>800")
}
if len(reasons) > 0 {
logWarn(fmt.Sprintf("Using stdin mode for task due to: %s", strings.Join(reasons, ", ")))
}
}
logInfo(fmt.Sprintf("%s running...", cfg.Backend))
taskSpec := TaskSpec{
Task: taskText,
WorkDir: cfg.WorkDir,
Mode: cfg.Mode,
SessionID: cfg.SessionID,
Backend: cfg.Backend,
Model: cfg.Model,
ReasoningEffort: cfg.ReasoningEffort,
Agent: cfg.Agent,
SkipPermissions: cfg.SkipPermissions,
UseStdin: useStdin,
}
result := runTaskFn(taskSpec, false, cfg.Timeout)
if result.ExitCode != 0 {
return result.ExitCode
}
// Validate that we got a meaningful output message
if strings.TrimSpace(result.Message) == "" {
logError(fmt.Sprintf("no output message: backend=%s returned empty result.Message with exit_code=0", cfg.Backend))
return 1
}
fmt.Println(result.Message)
if result.SessionID != "" {
fmt.Printf("\n---\nSESSION_ID: %s\n", result.SessionID)
}
return 0
}

View File

@@ -1,4 +1,4 @@
package main
package wrapper
import (
"bufio"
@@ -11,9 +11,20 @@ import (
"sync/atomic"
"testing"
"time"
"github.com/goccy/go-json"
)
func stripTimestampPrefix(line string) string {
line = strings.TrimSpace(line)
if strings.HasPrefix(line, "{") {
var evt struct {
Message string `json:"message"`
}
if err := json.Unmarshal([]byte(line), &evt); err == nil && evt.Message != "" {
return evt.Message
}
}
if !strings.HasPrefix(line, "[") {
return line
}

View File

@@ -0,0 +1,7 @@
package wrapper
import config "codeagent-wrapper/internal/config"
// Keep the existing Config name throughout the codebase, but source the
// implementation from internal/config.
type Config = config.Config

View File

@@ -0,0 +1,54 @@
package wrapper
import (
"context"
backend "codeagent-wrapper/internal/backend"
config "codeagent-wrapper/internal/config"
executor "codeagent-wrapper/internal/executor"
)
// defaultRunCodexTaskFn is the default implementation of runCodexTaskFn (exposed for test reset).
func defaultRunCodexTaskFn(task TaskSpec, timeout int) TaskResult {
return executor.DefaultRunCodexTaskFn(task, timeout)
}
var runCodexTaskFn = defaultRunCodexTaskFn
func topologicalSort(tasks []TaskSpec) ([][]TaskSpec, error) {
return executor.TopologicalSort(tasks)
}
func executeConcurrent(layers [][]TaskSpec, timeout int) []TaskResult {
maxWorkers := config.ResolveMaxParallelWorkers()
return executeConcurrentWithContext(context.Background(), layers, timeout, maxWorkers)
}
func executeConcurrentWithContext(parentCtx context.Context, layers [][]TaskSpec, timeout int, maxWorkers int) []TaskResult {
return executor.ExecuteConcurrentWithContext(parentCtx, layers, timeout, maxWorkers, runCodexTaskFn)
}
func generateFinalOutput(results []TaskResult) string {
return executor.GenerateFinalOutput(results)
}
func generateFinalOutputWithMode(results []TaskResult, summaryOnly bool) string {
return executor.GenerateFinalOutputWithMode(results, summaryOnly)
}
func buildCodexArgs(cfg *Config, targetArg string) []string {
return backend.BuildCodexArgs(cfg, targetArg)
}
func runCodexTask(taskSpec TaskSpec, silent bool, timeoutSec int) TaskResult {
return runCodexTaskWithContext(context.Background(), taskSpec, nil, nil, false, silent, timeoutSec)
}
func runCodexProcess(parentCtx context.Context, codexArgs []string, taskText string, useStdin bool, timeoutSec int) (message, threadID string, exitCode int) {
res := runCodexTaskWithContext(parentCtx, TaskSpec{Task: taskText, WorkDir: defaultWorkdir, Mode: "new", UseStdin: useStdin}, nil, codexArgs, true, false, timeoutSec)
return res.Message, res.SessionID, res.ExitCode
}
func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backend Backend, customArgs []string, useCustomArgs bool, silent bool, timeoutSec int) TaskResult {
return executor.RunCodexTaskWithContext(parentCtx, taskSpec, backend, codexCommand, buildCodexArgsFn, customArgs, useCustomArgs, silent, timeoutSec)
}
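`topologicalSort` groups parallel tasks into dependency layers that `executeConcurrent` runs in order, with tasks in the same layer eligible to run concurrently. A minimal layered Kahn's-algorithm sketch of that idea (hypothetical `task`/`layer` names, not the `internal/executor` implementation):

```go
package main

import "fmt"

type task struct {
	ID   string
	Deps []string
}

// layer orders tasks so every task appears in a later layer than all of
// its dependencies; tasks within one layer have no edges between them.
// Returns an error if a dependency cycle prevents completion.
func layer(tasks []task) ([][]string, error) {
	indeg := make(map[string]int, len(tasks))
	dependents := make(map[string][]string)
	for _, t := range tasks {
		indeg[t.ID] = len(t.Deps)
		for _, d := range t.Deps {
			dependents[d] = append(dependents[d], t.ID)
		}
	}
	var layers [][]string
	done := 0
	current := []string{}
	for id, n := range indeg {
		if n == 0 {
			current = append(current, id)
		}
	}
	for len(current) > 0 {
		layers = append(layers, current)
		done += len(current)
		var next []string
		for _, id := range current {
			for _, dep := range dependents[id] {
				indeg[dep]--
				if indeg[dep] == 0 {
					next = append(next, dep)
				}
			}
		}
		current = next
	}
	if done != len(tasks) {
		return nil, fmt.Errorf("dependency cycle detected")
	}
	return layers, nil
}

func main() {
	layers, err := layer([]task{
		{ID: "root"},
		{ID: "child", Deps: []string{"root"}},
	})
	fmt.Println(len(layers), err) // 2 <nil>
}
```

This matches the test expectations above: a root/child pair yields two layers, and a self-dependency is reported as a cycle.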

View File

@@ -1,4 +1,4 @@
package main
package wrapper
import (
"bufio"
@@ -15,9 +15,10 @@ import (
"strings"
"sync"
"sync/atomic"
"syscall"
"testing"
"time"
executor "codeagent-wrapper/internal/executor"
)
var executorTestTaskCounter atomic.Int64
@@ -91,7 +92,7 @@ func (rc *reasonReadCloser) record(reason string) {
type execFakeRunner struct {
stdout io.ReadCloser
stderr io.ReadCloser
process processHandle
process executor.ProcessHandle
stdin io.WriteCloser
dir string
env map[string]string
@@ -158,7 +159,7 @@ func (f *execFakeRunner) SetEnv(env map[string]string) {
f.env[k] = v
}
}
func (f *execFakeRunner) Process() processHandle {
func (f *execFakeRunner) Process() executor.ProcessHandle {
if f.process != nil {
return f.process
}
@@ -168,225 +169,15 @@ func (f *execFakeRunner) Process() processHandle {
return &execFakeProcess{pid: 1}
}
func TestExecutorHelperCoverage(t *testing.T) {
t.Run("realCmdAndProcess", func(t *testing.T) {
rc := &realCmd{}
if err := rc.Start(); err == nil {
t.Fatalf("expected error for nil command")
}
if err := rc.Wait(); err == nil {
t.Fatalf("expected error for nil command")
}
if _, err := rc.StdoutPipe(); err == nil {
t.Fatalf("expected error for nil command")
}
if _, err := rc.StderrPipe(); err == nil {
t.Fatalf("expected error for nil command")
}
if _, err := rc.StdinPipe(); err == nil {
t.Fatalf("expected error for nil command")
}
rc.SetStderr(io.Discard)
if rc.Process() != nil {
t.Fatalf("expected nil process")
}
rcWithCmd := &realCmd{cmd: &exec.Cmd{}}
rcWithCmd.SetStderr(io.Discard)
rcWithCmd.SetDir("/tmp")
if rcWithCmd.cmd.Dir != "/tmp" {
t.Fatalf("expected SetDir to set cmd.Dir, got %q", rcWithCmd.cmd.Dir)
}
echoCmd := exec.Command("echo", "ok")
rcProc := &realCmd{cmd: echoCmd}
stdoutPipe, err := rcProc.StdoutPipe()
if err != nil {
t.Fatalf("StdoutPipe error: %v", err)
}
stderrPipe, err := rcProc.StderrPipe()
if err != nil {
t.Fatalf("StderrPipe error: %v", err)
}
stdinPipe, err := rcProc.StdinPipe()
if err != nil {
t.Fatalf("StdinPipe error: %v", err)
}
if err := rcProc.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
_, _ = stdinPipe.Write([]byte{})
_ = stdinPipe.Close()
procHandle := rcProc.Process()
if procHandle == nil {
t.Fatalf("expected process handle")
}
_ = procHandle.Signal(syscall.SIGTERM)
_ = procHandle.Kill()
_ = rcProc.Wait()
_, _ = io.ReadAll(stdoutPipe)
_, _ = io.ReadAll(stderrPipe)
rp := &realProcess{}
if rp.Pid() != 0 {
t.Fatalf("nil process should have pid 0")
}
if rp.Kill() != nil {
t.Fatalf("nil process Kill should be nil")
}
if rp.Signal(syscall.SIGTERM) != nil {
t.Fatalf("nil process Signal should be nil")
}
rpLive := &realProcess{proc: &os.Process{Pid: 99}}
if rpLive.Pid() != 99 {
t.Fatalf("expected pid 99, got %d", rpLive.Pid())
}
_ = rpLive.Kill()
_ = rpLive.Signal(syscall.SIGTERM)
})
t.Run("topologicalSortAndSkip", func(t *testing.T) {
layers, err := topologicalSort([]TaskSpec{{ID: "root"}, {ID: "child", Dependencies: []string{"root"}}})
if err != nil || len(layers) != 2 {
t.Fatalf("unexpected topological sort result: layers=%d err=%v", len(layers), err)
}
if _, err := topologicalSort([]TaskSpec{{ID: "cycle", Dependencies: []string{"cycle"}}}); err == nil {
t.Fatalf("expected cycle detection error")
}
failed := map[string]TaskResult{"root": {ExitCode: 1}}
if skip, _ := shouldSkipTask(TaskSpec{ID: "child", Dependencies: []string{"root"}}, failed); !skip {
t.Fatalf("should skip when dependency failed")
}
if skip, _ := shouldSkipTask(TaskSpec{ID: "leaf"}, failed); skip {
t.Fatalf("should not skip task without dependencies")
}
if skip, _ := shouldSkipTask(TaskSpec{ID: "child-ok", Dependencies: []string{"root"}}, map[string]TaskResult{}); skip {
t.Fatalf("should not skip when dependencies succeeded")
}
})
t.Run("cancelledTaskResult", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
cancel()
res := cancelledTaskResult("t1", ctx)
if res.ExitCode != 130 {
t.Fatalf("expected cancel exit code, got %d", res.ExitCode)
}
timeoutCtx, timeoutCancel := context.WithTimeout(context.Background(), 0)
defer timeoutCancel()
res = cancelledTaskResult("t2", timeoutCtx)
if res.ExitCode != 124 {
t.Fatalf("expected timeout exit code, got %d", res.ExitCode)
}
})
t.Run("generateFinalOutputAndArgs", func(t *testing.T) {
const key = "CODEX_BYPASS_SANDBOX"
t.Setenv(key, "false")
out := generateFinalOutput([]TaskResult{
{TaskID: "ok", ExitCode: 0},
{TaskID: "fail", ExitCode: 1, Error: "boom"},
})
if !strings.Contains(out, "ok") || !strings.Contains(out, "fail") {
t.Fatalf("unexpected summary output: %s", out)
}
// Test summary mode (default) - should have new format with ### headers
out = generateFinalOutput([]TaskResult{{TaskID: "rich", ExitCode: 0, SessionID: "sess", LogPath: "/tmp/log", Message: "hello"}})
if !strings.Contains(out, "### rich") {
t.Fatalf("summary output missing task header: %s", out)
}
// Test full output mode - should have Session and Message
out = generateFinalOutputWithMode([]TaskResult{{TaskID: "rich", ExitCode: 0, SessionID: "sess", LogPath: "/tmp/log", Message: "hello"}}, false)
if !strings.Contains(out, "Session: sess") || !strings.Contains(out, "Log: /tmp/log") || !strings.Contains(out, "hello") {
t.Fatalf("full output missing fields: %s", out)
}
args := buildCodexArgs(&Config{Mode: "new", WorkDir: "/tmp"}, "task")
if !slices.Equal(args, []string{"e", "--skip-git-repo-check", "-C", "/tmp", "--json", "task"}) {
t.Fatalf("unexpected codex args: %+v", args)
}
args = buildCodexArgs(&Config{Mode: "resume", SessionID: "sess"}, "target")
if !slices.Equal(args, []string{"e", "--skip-git-repo-check", "--json", "resume", "sess", "target"}) {
t.Fatalf("unexpected resume args: %+v", args)
}
})
t.Run("generateFinalOutputASCIIMode", func(t *testing.T) {
t.Setenv("CODEAGENT_ASCII_MODE", "true")
results := []TaskResult{
{TaskID: "ok", ExitCode: 0, Coverage: "92%", CoverageNum: 92, CoverageTarget: 90, KeyOutput: "done"},
{TaskID: "warn", ExitCode: 0, Coverage: "80%", CoverageNum: 80, CoverageTarget: 90, KeyOutput: "did"},
{TaskID: "bad", ExitCode: 2, Error: "boom"},
}
out := generateFinalOutput(results)
for _, sym := range []string{"PASS", "WARN", "FAIL"} {
if !strings.Contains(out, sym) {
t.Fatalf("ASCII mode should include %q, got: %s", sym, out)
}
}
for _, sym := range []string{"✓", "⚠️", "✗"} {
if strings.Contains(out, sym) {
t.Fatalf("ASCII mode should not include %q, got: %s", sym, out)
}
}
})
t.Run("generateFinalOutputUnicodeMode", func(t *testing.T) {
t.Setenv("CODEAGENT_ASCII_MODE", "false")
results := []TaskResult{
{TaskID: "ok", ExitCode: 0, Coverage: "92%", CoverageNum: 92, CoverageTarget: 90, KeyOutput: "done"},
{TaskID: "warn", ExitCode: 0, Coverage: "80%", CoverageNum: 80, CoverageTarget: 90, KeyOutput: "did"},
{TaskID: "bad", ExitCode: 2, Error: "boom"},
}
out := generateFinalOutput(results)
for _, sym := range []string{"✓", "⚠️", "✗"} {
if !strings.Contains(out, sym) {
t.Fatalf("Unicode mode should include %q, got: %s", sym, out)
}
}
})
t.Run("executeConcurrentWrapper", func(t *testing.T) {
orig := runCodexTaskFn
defer func() { runCodexTaskFn = orig }()
runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
return TaskResult{TaskID: task.ID, ExitCode: 0, Message: "done"}
}
t.Setenv("CODEAGENT_MAX_PARALLEL_WORKERS", "1")
results := executeConcurrent([][]TaskSpec{{{ID: "wrap"}}}, 1)
if len(results) != 1 || results[0].TaskID != "wrap" {
t.Fatalf("unexpected wrapper results: %+v", results)
}
unbounded := executeConcurrentWithContext(context.Background(), [][]TaskSpec{{{ID: "unbounded"}}}, 1, 0)
if len(unbounded) != 1 || unbounded[0].ExitCode != 0 {
t.Fatalf("unexpected unbounded result: %+v", unbounded)
}
ctx, cancel := context.WithCancel(context.Background())
cancel()
cancelled := executeConcurrentWithContext(ctx, [][]TaskSpec{{{ID: "cancel"}}}, 1, 1)
if cancelled[0].ExitCode == 0 {
t.Fatalf("expected cancelled result, got %+v", cancelled[0])
}
})
}
func TestExecutorRunCodexTaskWithContext(t *testing.T) {
origRunner := newCommandRunner
defer func() { newCommandRunner = origRunner }()
defer resetTestHooks()
t.Run("resumeMissingSessionID", func(t *testing.T) {
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
t.Fatalf("unexpected command execution for invalid resume config")
return nil
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "payload", WorkDir: ".", Mode: "resume"}, nil, nil, false, false, 1)
if res.ExitCode == 0 || !strings.Contains(res.Error, "session_id") {
@@ -396,13 +187,14 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
t.Run("success", func(t *testing.T) {
var firstStdout *reasonReadCloser
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
rc := newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"hello"}}`)
if firstStdout == nil {
firstStdout = rc
}
return &execFakeRunner{stdout: rc, process: &execFakeProcess{pid: 1234}}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
res := runCodexTaskWithContext(context.Background(), TaskSpec{ID: "task-1", Task: "payload", WorkDir: "."}, nil, nil, false, false, 1)
if res.Error != "" || res.Message != "hello" || res.ExitCode != 0 {
@@ -432,17 +224,18 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
})
t.Run("startErrors", func(t *testing.T) {
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{startErr: errors.New("executable file not found"), process: &execFakeProcess{pid: 1}}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "payload", WorkDir: "."}, nil, nil, false, false, 1)
if res.ExitCode != 127 {
t.Fatalf("expected missing executable exit code, got %d", res.ExitCode)
}
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{startErr: errors.New("start failed"), process: &execFakeProcess{pid: 2}}
}
})
res = runCodexTaskWithContext(context.Background(), TaskSpec{Task: "payload", WorkDir: "."}, nil, nil, false, false, 1)
if res.ExitCode == 0 {
t.Fatalf("expected non-zero exit on start failure")
@@ -450,13 +243,14 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
})
t.Run("timeoutAndPipes", func(t *testing.T) {
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"slow"}}`),
process: &execFakeProcess{pid: 5},
waitDelay: 20 * time.Millisecond,
}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "payload", WorkDir: ".", UseStdin: true}, nil, nil, false, false, 0)
if res.ExitCode == 0 {
t.Fatalf("expected timeout result, got %+v", res)
@@ -464,17 +258,18 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
})
t.Run("pipeErrors", func(t *testing.T) {
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{stdoutErr: errors.New("stdout fail"), process: &execFakeProcess{pid: 6}}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "payload", WorkDir: "."}, nil, nil, false, false, 1)
if res.ExitCode == 0 {
t.Fatalf("expected failure on stdout pipe error")
}
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{stdinErr: errors.New("stdin fail"), process: &execFakeProcess{pid: 7}}
}
})
res = runCodexTaskWithContext(context.Background(), TaskSpec{Task: "payload", WorkDir: ".", UseStdin: true}, nil, nil, false, false, 1)
if res.ExitCode == 0 {
t.Fatalf("expected failure on stdin pipe error")
@@ -487,13 +282,14 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
if exitErr == nil {
t.Fatalf("expected exec.ExitError")
}
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"ignored"}}`),
process: &execFakeProcess{pid: 8},
waitErr: exitErr,
}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "payload", WorkDir: "."}, nil, nil, false, false, 1)
if res.ExitCode == 0 {
t.Fatalf("expected non-zero exit on wait error")
@@ -501,13 +297,14 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
})
t.Run("contextCancelled", func(t *testing.T) {
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"cancel"}}`),
process: &execFakeProcess{pid: 9},
waitDelay: 10 * time.Millisecond,
}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
ctx, cancel := context.WithCancel(context.Background())
cancel()
res := runCodexTaskWithContext(ctx, TaskSpec{Task: "payload", WorkDir: "."}, nil, nil, false, false, 1)
@@ -517,12 +314,13 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
})
t.Run("silentLogger", func(t *testing.T) {
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"quiet"}}`),
process: &execFakeProcess{pid: 10},
}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
_ = closeLogger()
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "payload", WorkDir: "."}, nil, nil, false, true, 1)
if res.ExitCode != 0 || res.LogPath == "" {
@@ -532,12 +330,13 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
})
t.Run("injectedLogger", func(t *testing.T) {
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"injected"}}`),
process: &execFakeProcess{pid: 12},
}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
_ = closeLogger()
injected, err := NewLoggerWithSuffix("executor-injected")
@@ -549,7 +348,7 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
_ = os.Remove(injected.Path())
}()
ctx := withTaskLogger(context.Background(), injected)
ctx := executor.WithTaskLogger(context.Background(), injected)
res := runCodexTaskWithContext(ctx, TaskSpec{ID: "task-injected", Task: "payload", WorkDir: "."}, nil, nil, false, true, 1)
if res.ExitCode != 0 || res.LogPath != injected.Path() {
t.Fatalf("expected injected logger path, got %+v", res)
@@ -569,12 +368,13 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
})
t.Run("contextLoggerWithoutParent", func(t *testing.T) {
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"ctx"}}`),
process: &execFakeProcess{pid: 14},
}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
_ = closeLogger()
taskLogger, err := NewLoggerWithSuffix("executor-taskctx")
@@ -586,8 +386,8 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
_ = os.Remove(taskLogger.Path())
})
ctx := withTaskLogger(context.Background(), taskLogger)
res := runCodexTaskWithContext(nil, TaskSpec{ID: "task-context", Task: "payload", WorkDir: ".", Context: ctx}, nil, nil, false, true, 1)
ctx := executor.WithTaskLogger(context.Background(), taskLogger)
res := runCodexTaskWithContext(context.TODO(), TaskSpec{ID: "task-context", Task: "payload", WorkDir: ".", Context: ctx}, nil, nil, false, true, 1)
if res.ExitCode != 0 || res.LogPath != taskLogger.Path() {
t.Fatalf("expected task logger to be reused from spec context, got %+v", res)
}
@@ -607,16 +407,17 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
t.Run("backendSetsDirAndNilContext", func(t *testing.T) {
var rc *execFakeRunner
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
rc = &execFakeRunner{
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"backend"}}`),
process: &execFakeProcess{pid: 13},
}
return rc
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
_ = closeLogger()
res := runCodexTaskWithContext(nil, TaskSpec{ID: "task-backend", Task: "payload", WorkDir: "/tmp"}, ClaudeBackend{}, nil, false, false, 1)
res := runCodexTaskWithContext(context.TODO(), TaskSpec{ID: "task-backend", Task: "payload", WorkDir: "/tmp"}, ClaudeBackend{}, nil, false, false, 1)
if res.ExitCode != 0 || res.Message != "backend" {
t.Fatalf("unexpected result: %+v", res)
}
@@ -628,13 +429,14 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
t.Run("claudeSkipPermissionsPropagatesFromTaskSpec", func(t *testing.T) {
t.Setenv("CODEAGENT_SKIP_PERMISSIONS", "false")
var gotArgs []string
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
gotArgs = append([]string(nil), args...)
return &execFakeRunner{
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"agent_message","text":"ok"}}`),
process: &execFakeProcess{pid: 15},
}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
_ = closeLogger()
res := runCodexTaskWithContext(context.Background(), TaskSpec{ID: "task-skip", Task: "payload", WorkDir: ".", SkipPermissions: true}, ClaudeBackend{}, nil, false, false, 1)
@@ -647,12 +449,13 @@ func TestExecutorRunCodexTaskWithContext(t *testing.T) {
})
t.Run("missingMessage", func(t *testing.T) {
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
return &execFakeRunner{
stdout: newReasonReadCloser(`{"type":"item.completed","item":{"type":"task","text":"noop"}}`),
process: &execFakeProcess{pid: 11},
}
}
})
t.Cleanup(func() { executor.SetNewCommandRunner(nil) })
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "payload", WorkDir: "."}, nil, nil, false, false, 1)
if res.ExitCode == 0 {
t.Fatalf("expected failure when no agent_message returned")
@@ -678,7 +481,7 @@ func TestExecutorParallelLogIsolation(t *testing.T) {
origRun := runCodexTaskFn
runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
logger := taskLoggerFromContext(task.Context)
logger := executor.TaskLoggerFromContext(task.Context)
if logger == nil {
return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "missing task logger"}
}
@@ -702,7 +505,7 @@ func TestExecutorParallelLogIsolation(t *testing.T) {
os.Stderr = stderrW
defer func() { os.Stderr = oldStderr }()
results := executeConcurrentWithContext(nil, [][]TaskSpec{{{ID: taskA}, {ID: taskB}}}, 1, -1)
results := executeConcurrentWithContext(context.TODO(), [][]TaskSpec{{{ID: taskA}, {ID: taskB}}}, 1, -1)
_ = stderrW.Close()
os.Stderr = oldStderr
@@ -768,7 +571,7 @@ func TestConcurrentExecutorParallelLogIsolationAndClosure(t *testing.T) {
t.Setenv("TMPDIR", tempDir)
oldArgs := os.Args
os.Args = []string{defaultWrapperName}
os.Args = []string{wrapperName}
t.Cleanup(func() { os.Args = oldArgs })
mainLogger, err := NewLoggerWithSuffix("concurrent-main")
@@ -814,7 +617,7 @@ func TestConcurrentExecutorParallelLogIsolationAndClosure(t *testing.T) {
runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
readyCh <- struct{}{}
logger := taskLoggerFromContext(task.Context)
logger := executor.TaskLoggerFromContext(task.Context)
loggerCh <- taskLoggerInfo{taskID: task.ID, logger: logger}
if logger == nil {
return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "missing task logger"}
@@ -901,15 +704,9 @@ func TestConcurrentExecutorParallelLogIsolationAndClosure(t *testing.T) {
}
for taskID, logger := range loggers {
if !logger.closed.Load() {
if !logger.IsClosed() {
t.Fatalf("expected task logger to be closed for %q", taskID)
}
if logger.file == nil {
t.Fatalf("expected task logger file to be non-nil for %q", taskID)
}
if _, err := logger.file.Write([]byte("x")); err == nil {
t.Fatalf("expected task logger file to be closed for %q", taskID)
}
}
mainLogger.Flush()
@@ -979,10 +776,10 @@ func parseTaskIDFromLogLine(line string) (string, bool) {
}
func TestExecutorTaskLoggerContext(t *testing.T) {
if taskLoggerFromContext(nil) != nil {
t.Fatalf("expected nil logger from nil context")
if executor.TaskLoggerFromContext(context.TODO()) != nil {
t.Fatalf("expected nil logger from TODO context")
}
if taskLoggerFromContext(context.Background()) != nil {
if executor.TaskLoggerFromContext(context.Background()) != nil {
t.Fatalf("expected nil logger when context has no logger")
}
@@ -995,12 +792,12 @@ func TestExecutorTaskLoggerContext(t *testing.T) {
_ = os.Remove(logger.Path())
}()
ctx := withTaskLogger(context.Background(), logger)
if got := taskLoggerFromContext(ctx); got != logger {
ctx := executor.WithTaskLogger(context.Background(), logger)
if got := executor.TaskLoggerFromContext(ctx); got != logger {
t.Fatalf("expected logger roundtrip, got %v", got)
}
if taskLoggerFromContext(withTaskLogger(context.Background(), nil)) != nil {
if executor.TaskLoggerFromContext(executor.WithTaskLogger(context.Background(), nil)) != nil {
t.Fatalf("expected nil logger when injected logger is nil")
}
}
@@ -1157,7 +954,7 @@ func TestExecutorExecuteConcurrentWithContextBranches(t *testing.T) {
orig := runCodexTaskFn
runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
logger := taskLoggerFromContext(task.Context)
logger := executor.TaskLoggerFromContext(task.Context)
if logger != mainLogger {
return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "unexpected logger"}
}
@@ -1191,9 +988,6 @@ func TestExecutorExecuteConcurrentWithContextBranches(t *testing.T) {
if res.LogPath != mainLogger.Path() {
t.Fatalf("shared log path mismatch: got %q want %q", res.LogPath, mainLogger.Path())
}
if !res.sharedLog {
t.Fatalf("expected sharedLog flag for %+v", res)
}
if !strings.Contains(stderrOut, "Log (shared)") {
t.Fatalf("stderr missing shared marker: %s", stderrOut)
}
@@ -1222,7 +1016,7 @@ func TestExecutorExecuteConcurrentWithContextBranches(t *testing.T) {
orig := runCodexTaskFn
runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
logger := taskLoggerFromContext(task.Context)
logger := executor.TaskLoggerFromContext(task.Context)
if logger == nil {
return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "missing logger"}
}
@@ -1260,7 +1054,14 @@ func TestExecutorExecuteConcurrentWithContextBranches(t *testing.T) {
if err != nil {
t.Fatalf("failed to read log %q: %v", res.LogPath, err)
}
if !strings.Contains(string(data), "TASK="+res.TaskID) {
found := false
for _, line := range strings.Split(string(data), "\n") {
if strings.Contains(stripTimestampPrefix(line), "TASK="+res.TaskID) {
found = true
break
}
}
if !found {
t.Fatalf("log for %q missing task marker, content: %s", res.TaskID, string(data))
}
_ = os.Remove(res.LogPath)
@@ -1268,147 +1069,6 @@ func TestExecutorExecuteConcurrentWithContextBranches(t *testing.T) {
})
}
func TestExecutorSignalAndTermination(t *testing.T) {
forceKillDelay.Store(0)
defer forceKillDelay.Store(5)
proc := &execFakeProcess{pid: 42}
cmd := &execFakeRunner{process: proc}
origNotify := signalNotifyFn
origStop := signalStopFn
defer func() {
signalNotifyFn = origNotify
signalStopFn = origStop
}()
signalNotifyFn = func(c chan<- os.Signal, sigs ...os.Signal) {
go func() { c <- syscall.SIGINT }()
}
signalStopFn = func(c chan<- os.Signal) {}
forwardSignals(context.Background(), cmd, func(string) {})
time.Sleep(20 * time.Millisecond)
proc.mu.Lock()
signalled := len(proc.signals)
proc.mu.Unlock()
if runtime.GOOS != "windows" && signalled == 0 {
t.Fatalf("process did not receive signal")
}
if proc.killed.Load() == 0 {
t.Fatalf("process was not killed after signal")
}
timer := terminateProcess(cmd)
if timer == nil {
t.Fatalf("terminateProcess returned nil timer")
}
timer.Stop()
ft := terminateCommand(cmd)
if ft == nil {
t.Fatalf("terminateCommand returned nil")
}
ft.Stop()
cmdKill := &execFakeRunner{process: &execFakeProcess{pid: 50}}
ftKill := terminateCommand(cmdKill)
time.Sleep(10 * time.Millisecond)
if p, ok := cmdKill.process.(*execFakeProcess); ok && p.killed.Load() == 0 {
t.Fatalf("terminateCommand did not kill process")
}
ftKill.Stop()
cmdKill2 := &execFakeRunner{process: &execFakeProcess{pid: 51}}
timer2 := terminateProcess(cmdKill2)
time.Sleep(10 * time.Millisecond)
if p, ok := cmdKill2.process.(*execFakeProcess); ok && p.killed.Load() == 0 {
t.Fatalf("terminateProcess did not kill process")
}
timer2.Stop()
if terminateCommand(nil) != nil {
t.Fatalf("terminateCommand should return nil for nil cmd")
}
if terminateCommand(&execFakeRunner{allowNilProcess: true}) != nil {
t.Fatalf("terminateCommand should return nil when process is nil")
}
if terminateProcess(nil) != nil {
t.Fatalf("terminateProcess should return nil for nil cmd")
}
if terminateProcess(&execFakeRunner{allowNilProcess: true}) != nil {
t.Fatalf("terminateProcess should return nil when process is nil")
}
signalNotifyFn = func(c chan<- os.Signal, sigs ...os.Signal) {}
ctxDone, cancelDone := context.WithCancel(context.Background())
cancelDone()
forwardSignals(ctxDone, &execFakeRunner{process: &execFakeProcess{pid: 70}}, func(string) {})
}
func TestExecutorCancelReasonAndCloseWithReason(t *testing.T) {
if reason := cancelReason("", nil); !strings.Contains(reason, "Context") {
t.Fatalf("unexpected cancelReason for nil ctx: %s", reason)
}
ctx, cancel := context.WithTimeout(context.Background(), 0)
defer cancel()
if !strings.Contains(cancelReason("cmd", ctx), "timeout") {
t.Fatalf("expected timeout reason")
}
cancelCtx, cancelFn := context.WithCancel(context.Background())
cancelFn()
if !strings.Contains(cancelReason("cmd", cancelCtx), "Execution cancelled") {
t.Fatalf("expected cancellation reason")
}
if !strings.Contains(cancelReason("", cancelCtx), "codex") {
t.Fatalf("expected default command name in cancel reason")
}
rc := &reasonReadCloser{r: strings.NewReader("data"), closedC: make(chan struct{}, 1)}
closeWithReason(rc, "why")
select {
case <-rc.closedC:
default:
t.Fatalf("CloseWithReason was not called")
}
plain := io.NopCloser(strings.NewReader("x"))
closeWithReason(plain, "noop")
closeWithReason(nil, "noop")
}
func TestExecutorForceKillTimerStop(t *testing.T) {
done := make(chan struct{}, 1)
ft := &forceKillTimer{timer: time.AfterFunc(50*time.Millisecond, func() { done <- struct{}{} }), done: done}
ft.Stop()
done2 := make(chan struct{}, 1)
ft2 := &forceKillTimer{timer: time.AfterFunc(0, func() { done2 <- struct{}{} }), done: done2}
time.Sleep(10 * time.Millisecond)
ft2.Stop()
var nilTimer *forceKillTimer
nilTimer.Stop()
(&forceKillTimer{}).Stop()
}
func TestExecutorForwardSignalsDefaults(t *testing.T) {
origNotify := signalNotifyFn
origStop := signalStopFn
signalNotifyFn = nil
signalStopFn = nil
defer func() {
signalNotifyFn = origNotify
signalStopFn = origStop
}()
ctx, cancel := context.WithCancel(context.Background())
cancel()
forwardSignals(ctx, &execFakeRunner{process: &execFakeProcess{pid: 80}}, func(string) {})
time.Sleep(10 * time.Millisecond)
}
func TestExecutorSharedLogFalseWhenCustomLogPath(t *testing.T) {
devNull, err := os.OpenFile(os.DevNull, os.O_WRONLY, 0)
if err != nil {
@@ -1464,10 +1124,9 @@ func TestExecutorSharedLogFalseWhenCustomLogPath(t *testing.T) {
}
res := results[0]
// Key assertion: even though handle.shared=true (because task logger creation failed),
// sharedLog should be false since LogPath differs from the main logger's path
if res.sharedLog {
t.Fatalf("expected sharedLog=false when LogPath differs from shared logger, got true")
out := generateFinalOutputWithMode(results, false)
if strings.Contains(out, "(shared)") {
t.Fatalf("did not expect shared marker when LogPath differs from shared logger, got: %s", out)
}
// Verify that LogPath is indeed the custom one

View File

@@ -0,0 +1,26 @@
package wrapper
import ilogger "codeagent-wrapper/internal/logger"
type Logger = ilogger.Logger
type CleanupStats = ilogger.CleanupStats
func NewLogger() (*Logger, error) { return ilogger.NewLogger() }
func NewLoggerWithSuffix(suffix string) (*Logger, error) { return ilogger.NewLoggerWithSuffix(suffix) }
func setLogger(l *Logger) { ilogger.SetLogger(l) }
func closeLogger() error { return ilogger.CloseLogger() }
func activeLogger() *Logger { return ilogger.ActiveLogger() }
func logInfo(msg string) { ilogger.LogInfo(msg) }
func logWarn(msg string) { ilogger.LogWarn(msg) }
func logError(msg string) { ilogger.LogError(msg) }
func cleanupOldLogs() (CleanupStats, error) { return ilogger.CleanupOldLogs() }
func sanitizeLogSuffix(raw string) string { return ilogger.SanitizeLogSuffix(raw) }

View File

@@ -1,7 +1,8 @@
package main
package wrapper
import (
"bytes"
"codeagent-wrapper/internal/logger"
"fmt"
"io"
"os"
@@ -36,7 +37,9 @@ func captureStdout(t *testing.T, fn func()) string {
os.Stdout = old
var buf bytes.Buffer
io.Copy(&buf, r)
if _, err := io.Copy(&buf, r); err != nil {
t.Fatalf("io.Copy() error = %v", err)
}
return buf.String()
}
@@ -57,11 +60,17 @@ func parseIntegrationOutput(t *testing.T, out string) integrationOutput {
for _, p := range parts {
p = strings.TrimSpace(p)
if strings.HasSuffix(p, "tasks") {
fmt.Sscanf(p, "%d tasks", &payload.Summary.Total)
if _, err := fmt.Sscanf(p, "%d tasks", &payload.Summary.Total); err != nil {
t.Fatalf("failed to parse total tasks from %q: %v", p, err)
}
} else if strings.HasSuffix(p, "passed") {
fmt.Sscanf(p, "%d passed", &payload.Summary.Success)
if _, err := fmt.Sscanf(p, "%d passed", &payload.Summary.Success); err != nil {
t.Fatalf("failed to parse passed tasks from %q: %v", p, err)
}
} else if strings.HasSuffix(p, "failed") {
fmt.Sscanf(p, "%d failed", &payload.Summary.Failed)
if _, err := fmt.Sscanf(p, "%d failed", &payload.Summary.Failed); err != nil {
t.Fatalf("failed to parse failed tasks from %q: %v", p, err)
}
}
}
} else if strings.HasPrefix(line, "Total:") {
@@ -70,11 +79,17 @@ func parseIntegrationOutput(t *testing.T, out string) integrationOutput {
for _, p := range parts {
p = strings.TrimSpace(p)
if strings.HasPrefix(p, "Total:") {
fmt.Sscanf(p, "Total: %d", &payload.Summary.Total)
if _, err := fmt.Sscanf(p, "Total: %d", &payload.Summary.Total); err != nil {
t.Fatalf("failed to parse total tasks from %q: %v", p, err)
}
} else if strings.HasPrefix(p, "Success:") {
fmt.Sscanf(p, "Success: %d", &payload.Summary.Success)
if _, err := fmt.Sscanf(p, "Success: %d", &payload.Summary.Success); err != nil {
t.Fatalf("failed to parse passed tasks from %q: %v", p, err)
}
} else if strings.HasPrefix(p, "Failed:") {
fmt.Sscanf(p, "Failed: %d", &payload.Summary.Failed)
if _, err := fmt.Sscanf(p, "Failed: %d", &payload.Summary.Failed); err != nil {
t.Fatalf("failed to parse failed tasks from %q: %v", p, err)
}
}
}
} else if line == "## Task Results" {
@@ -94,34 +109,39 @@ func parseIntegrationOutput(t *testing.T, out string) integrationOutput {
currentTask = &TaskResult{}
taskLine := strings.TrimPrefix(line, "### ")
success, warning, failed := getStatusSymbols()
// Parse different formats
if strings.Contains(taskLine, " "+success) {
parts := strings.Split(taskLine, " "+success)
parseMarker := func(marker string, exitCode int) bool {
needle := " " + marker
if !strings.Contains(taskLine, needle) {
return false
}
parts := strings.Split(taskLine, needle)
currentTask.TaskID = strings.TrimSpace(parts[0])
currentTask.ExitCode = 0
// Extract coverage if present
if len(parts) > 1 {
currentTask.ExitCode = exitCode
if exitCode == 0 && len(parts) > 1 {
coveragePart := strings.TrimSpace(parts[1])
if strings.HasSuffix(coveragePart, "%") {
currentTask.Coverage = coveragePart
}
}
} else if strings.Contains(taskLine, " "+warning) {
parts := strings.Split(taskLine, " "+warning)
currentTask.TaskID = strings.TrimSpace(parts[0])
currentTask.ExitCode = 0
} else if strings.Contains(taskLine, " "+failed) {
parts := strings.Split(taskLine, " "+failed)
currentTask.TaskID = strings.TrimSpace(parts[0])
currentTask.ExitCode = 1
} else {
return true
}
switch {
case parseMarker("✓", 0), parseMarker("PASS", 0):
// ok
case parseMarker("⚠️", 0), parseMarker("WARN", 0):
// warning
case parseMarker("✗", 1), parseMarker("FAIL", 1):
// fail
default:
currentTask.TaskID = taskLine
}
} else if currentTask != nil && inTaskResults {
// Parse task details
if strings.HasPrefix(line, "Exit code:") {
fmt.Sscanf(line, "Exit code: %d", &currentTask.ExitCode)
if _, err := fmt.Sscanf(line, "Exit code: %d", &currentTask.ExitCode); err != nil {
t.Fatalf("failed to parse exit code from %q: %v", line, err)
}
} else if strings.HasPrefix(line, "Error:") {
currentTask.Error = strings.TrimPrefix(line, "Error: ")
} else if strings.HasPrefix(line, "Log:") {
@@ -147,7 +167,9 @@ func parseIntegrationOutput(t *testing.T, out string) integrationOutput {
currentTask.ExitCode = 0
} else if strings.HasPrefix(line, "Status: FAILED") {
if strings.Contains(line, "exit code") {
fmt.Sscanf(line, "Status: FAILED (exit code %d)", &currentTask.ExitCode)
if _, err := fmt.Sscanf(line, "Status: FAILED (exit code %d)", &currentTask.ExitCode); err != nil {
t.Fatalf("failed to parse exit code from %q: %v", line, err)
}
} else {
currentTask.ExitCode = 1
}
@@ -180,6 +202,37 @@ func findResultByID(t *testing.T, payload integrationOutput, id string) TaskResu
return TaskResult{}
}
func setTempDirEnv(t *testing.T, dir string) string {
t.Helper()
resolved := dir
if eval, err := filepath.EvalSymlinks(dir); err == nil {
resolved = eval
}
t.Setenv("TMPDIR", resolved)
t.Setenv("TEMP", resolved)
t.Setenv("TMP", resolved)
return resolved
}
func createTempLog(t *testing.T, dir, name string) string {
t.Helper()
path := filepath.Join(dir, name)
if err := os.WriteFile(path, []byte("test"), 0o644); err != nil {
t.Fatalf("failed to create temp log %s: %v", path, err)
}
return path
}
func stubProcessRunning(t *testing.T, fn func(int) bool) {
t.Helper()
t.Cleanup(logger.SetProcessRunningCheck(fn))
}
func stubProcessStartTime(t *testing.T, fn func(int) time.Time) {
t.Helper()
t.Cleanup(logger.SetProcessStartTimeFn(fn))
}
func TestRunParallelEndToEnd_OrderAndConcurrency(t *testing.T) {
defer resetTestHooks()
origRun := runCodexTaskFn
@@ -365,7 +418,7 @@ id: beta
---CONTENT---
task-beta`
stdinReader = bytes.NewReader([]byte(input))
os.Args = []string{"codex-wrapper", "--parallel"}
os.Args = []string{"codeagent-wrapper", "--parallel"}
var exitCode int
output := captureStdout(t, func() {
@@ -418,9 +471,9 @@ id: d
---CONTENT---
ok-d`
stdinReader = bytes.NewReader([]byte(input))
os.Args = []string{"codex-wrapper", "--parallel"}
os.Args = []string{"codeagent-wrapper", "--parallel"}
expectedLog := filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d.log", os.Getpid()))
expectedLog := filepath.Join(tempDir, fmt.Sprintf("codeagent-wrapper-%d.log", os.Getpid()))
origRun := runCodexTaskFn
runCodexTaskFn = func(task TaskSpec, timeout int) TaskResult {
@@ -474,9 +527,9 @@ ok-d`
// After parallel log isolation fix, each task has its own log file
expectedLines := map[string]struct{}{
fmt.Sprintf("Task a: Log: %s", filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d-a.log", os.Getpid()))): {},
fmt.Sprintf("Task b: Log: %s", filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d-b.log", os.Getpid()))): {},
fmt.Sprintf("Task d: Log: %s", filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d-d.log", os.Getpid()))): {},
fmt.Sprintf("Task a: Log: %s", filepath.Join(tempDir, fmt.Sprintf("codeagent-wrapper-%d-a.log", os.Getpid()))): {},
fmt.Sprintf("Task b: Log: %s", filepath.Join(tempDir, fmt.Sprintf("codeagent-wrapper-%d-b.log", os.Getpid()))): {},
fmt.Sprintf("Task d: Log: %s", filepath.Join(tempDir, fmt.Sprintf("codeagent-wrapper-%d-d.log", os.Getpid()))): {},
}
if len(taskLines) != len(expectedLines) {
@@ -494,7 +547,7 @@ func TestRunNonParallelOutputsIncludeLogPathsIntegration(t *testing.T) {
defer resetTestHooks()
tempDir := setTempDirEnv(t, t.TempDir())
os.Args = []string{"codex-wrapper", "integration-log-check"}
os.Args = []string{"codeagent-wrapper", "integration-log-check"}
stdinReader = strings.NewReader("")
isTerminalFn = func() bool { return true }
codexCommand = "echo"
@@ -512,7 +565,7 @@ func TestRunNonParallelOutputsIncludeLogPathsIntegration(t *testing.T) {
if exitCode != 0 {
t.Fatalf("run() exit=%d, want 0", exitCode)
}
expectedLog := filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d.log", os.Getpid()))
expectedLog := filepath.Join(tempDir, fmt.Sprintf("codeagent-wrapper-%d.log", os.Getpid()))
wantLine := fmt.Sprintf("Log: %s", expectedLog)
if !strings.Contains(stderr, wantLine) {
t.Fatalf("stderr missing %q, got: %q", wantLine, stderr)
@@ -693,11 +746,11 @@ func TestRunStartupCleanupRemovesOrphansEndToEnd(t *testing.T) {
tempDir := setTempDirEnv(t, t.TempDir())
orphanA := createTempLog(t, tempDir, "codex-wrapper-5001.log")
orphanB := createTempLog(t, tempDir, "codex-wrapper-5002-extra.log")
orphanC := createTempLog(t, tempDir, "codex-wrapper-5003-suffix.log")
orphanA := createTempLog(t, tempDir, "codeagent-wrapper-5001.log")
orphanB := createTempLog(t, tempDir, "codeagent-wrapper-5002-extra.log")
orphanC := createTempLog(t, tempDir, "codeagent-wrapper-5003-suffix.log")
runningPID := 81234
runningLog := createTempLog(t, tempDir, fmt.Sprintf("codex-wrapper-%d.log", runningPID))
runningLog := createTempLog(t, tempDir, fmt.Sprintf("codeagent-wrapper-%d.log", runningPID))
unrelated := createTempLog(t, tempDir, "wrapper.log")
stubProcessRunning(t, func(pid int) bool {
@@ -713,7 +766,7 @@ func TestRunStartupCleanupRemovesOrphansEndToEnd(t *testing.T) {
codexCommand = createFakeCodexScript(t, "tid-startup", "ok")
stdinReader = strings.NewReader("")
isTerminalFn = func() bool { return true }
os.Args = []string{"codex-wrapper", "task"}
os.Args = []string{"codeagent-wrapper", "task"}
if exit := run(); exit != 0 {
t.Fatalf("run() exit=%d, want 0", exit)
@@ -739,7 +792,7 @@ func TestRunStartupCleanupConcurrentWrappers(t *testing.T) {
const totalLogs = 40
for i := 0; i < totalLogs; i++ {
createTempLog(t, tempDir, fmt.Sprintf("codex-wrapper-%d.log", 9000+i))
createTempLog(t, tempDir, fmt.Sprintf("codeagent-wrapper-%d.log", 9000+i))
}
stubProcessRunning(t, func(pid int) bool {
@@ -763,7 +816,7 @@ func TestRunStartupCleanupConcurrentWrappers(t *testing.T) {
close(start)
wg.Wait()
matches, err := filepath.Glob(filepath.Join(tempDir, "codex-wrapper-*.log"))
matches, err := filepath.Glob(filepath.Join(tempDir, "codeagent-wrapper-*.log"))
if err != nil {
t.Fatalf("glob error: %v", err)
}
@@ -777,9 +830,9 @@ func TestRunCleanupFlagEndToEnd_Success(t *testing.T) {
tempDir := setTempDirEnv(t, t.TempDir())
staleA := createTempLog(t, tempDir, "codex-wrapper-2100.log")
staleB := createTempLog(t, tempDir, "codex-wrapper-2200-extra.log")
keeper := createTempLog(t, tempDir, "codex-wrapper-2300.log")
staleA := createTempLog(t, tempDir, "codeagent-wrapper-2100.log")
staleB := createTempLog(t, tempDir, "codeagent-wrapper-2200-extra.log")
keeper := createTempLog(t, tempDir, "codeagent-wrapper-2300.log")
stubProcessRunning(t, func(pid int) bool {
return pid == 2300 || pid == os.Getpid()
@@ -791,7 +844,7 @@ func TestRunCleanupFlagEndToEnd_Success(t *testing.T) {
return time.Time{}
})
os.Args = []string{"codex-wrapper", "--cleanup"}
os.Args = []string{"codeagent-wrapper", "--cleanup"}
var exitCode int
output := captureStdout(t, func() {
@@ -815,10 +868,10 @@ func TestRunCleanupFlagEndToEnd_Success(t *testing.T) {
if !strings.Contains(output, "Files kept: 1") {
t.Fatalf("missing 'Files kept: 1' in output: %q", output)
}
-if !strings.Contains(output, "codex-wrapper-2100.log") || !strings.Contains(output, "codex-wrapper-2200-extra.log") {
+if !strings.Contains(output, "codeagent-wrapper-2100.log") || !strings.Contains(output, "codeagent-wrapper-2200-extra.log") {
t.Fatalf("missing deleted file names in output: %q", output)
}
-if !strings.Contains(output, "codex-wrapper-2300.log") {
+if !strings.Contains(output, "codeagent-wrapper-2300.log") {
t.Fatalf("missing kept file names in output: %q", output)
}
@@ -831,7 +884,7 @@ func TestRunCleanupFlagEndToEnd_Success(t *testing.T) {
t.Fatalf("expected kept log to remain, err=%v", err)
}
-currentLog := filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d.log", os.Getpid()))
+currentLog := filepath.Join(tempDir, fmt.Sprintf("codeagent-wrapper-%d.log", os.Getpid()))
if _, err := os.Stat(currentLog); err == nil {
t.Fatalf("cleanup mode should not create new log file %s", currentLog)
} else if !os.IsNotExist(err) {
@@ -850,7 +903,7 @@ func TestRunCleanupFlagEndToEnd_FailureDoesNotAffectStartup(t *testing.T) {
return CleanupStats{Scanned: 1}, fmt.Errorf("permission denied")
}
-os.Args = []string{"codex-wrapper", "--cleanup"}
+os.Args = []string{"codeagent-wrapper", "--cleanup"}
var exitCode int
errOutput := captureStderr(t, func() {
@@ -867,7 +920,7 @@ func TestRunCleanupFlagEndToEnd_FailureDoesNotAffectStartup(t *testing.T) {
t.Fatalf("cleanup called %d times, want 1", calls)
}
-currentLog := filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d.log", os.Getpid()))
+currentLog := filepath.Join(tempDir, fmt.Sprintf("codeagent-wrapper-%d.log", os.Getpid()))
if _, err := os.Stat(currentLog); err == nil {
t.Fatalf("cleanup failure should not create new log file %s", currentLog)
} else if !os.IsNotExist(err) {
@@ -880,7 +933,7 @@ func TestRunCleanupFlagEndToEnd_FailureDoesNotAffectStartup(t *testing.T) {
codexCommand = createFakeCodexScript(t, "tid-cleanup-e2e", "ok")
stdinReader = strings.NewReader("")
isTerminalFn = func() bool { return true }
-os.Args = []string{"codex-wrapper", "post-cleanup task"}
+os.Args = []string{"codeagent-wrapper", "post-cleanup task"}
var normalExit int
normalOutput := captureStdout(t, func() {

View File

@@ -1,10 +1,9 @@
-package main
+package wrapper
import (
"bufio"
"bytes"
"context"
-"encoding/json"
"errors"
"fmt"
"io"
@@ -19,6 +18,11 @@ import (
"syscall"
"testing"
"time"
+config "codeagent-wrapper/internal/config"
+executor "codeagent-wrapper/internal/executor"
+"github.com/goccy/go-json"
)
// Helper to reset test hooks
@@ -28,17 +32,15 @@ func resetTestHooks() {
codexCommand = "codex"
cleanupHook = nil
cleanupLogsFn = cleanupOldLogs
signalNotifyFn = signal.Notify
signalStopFn = signal.Stop
startupCleanupAsync = false
config.ResetModelsConfigCacheForTest()
_ = executor.SetSelectBackendFn(nil)
buildCodexArgsFn = buildCodexArgs
selectBackendFn = selectBackend
commandContext = exec.CommandContext
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
return &realCmd{cmd: commandContext(ctx, name, args...)}
}
forceKillDelay.Store(5)
closeLogger()
executablePathFn = os.Executable
_ = executor.SetCommandContextFn(nil)
_ = executor.SetNewCommandRunner(nil)
_ = executor.SetForceKillDelay(5)
_ = closeLogger()
runTaskFn = runCodexTask
runCodexTaskFn = defaultRunCodexTaskFn
exitFn = os.Exit
@@ -86,6 +88,8 @@ func (t testBackend) Command() string {
return "echo"
}
+func (t testBackend) Env(baseURL, apiKey string) map[string]string { return nil }
func withBackend(command string, argsFn func(*Config, string) []string) func() {
prev := selectBackendFn
selectBackendFn = func(name string) (Backend, error) {
@@ -107,7 +111,7 @@ func restoreStdoutPipe(c *capturedStdout) {
}
c.writer.Close()
os.Stdout = c.old
-io.Copy(&c.buf, c.reader)
+_, _ = io.Copy(&c.buf, c.reader)
}
func (c *capturedStdout) String() string {
@@ -127,7 +131,9 @@ func captureOutput(t *testing.T, fn func()) string {
os.Stdout = old
var buf bytes.Buffer
-io.Copy(&buf, r)
+if _, err := io.Copy(&buf, r); err != nil {
+t.Fatalf("io.Copy() error = %v", err)
+}
return buf.String()
}
@@ -141,7 +147,9 @@ func captureStderr(t *testing.T, fn func()) string {
os.Stderr = old
var buf bytes.Buffer
-io.Copy(&buf, r)
+if _, err := io.Copy(&buf, r); err != nil {
+t.Fatalf("io.Copy() error = %v", err)
+}
return buf.String()
}
@@ -262,7 +270,7 @@ func (d *drainBlockingCmd) SetEnv(env map[string]string) {
d.inner.SetEnv(env)
}
-func (d *drainBlockingCmd) Process() processHandle {
+func (d *drainBlockingCmd) Process() executor.ProcessHandle {
return d.inner.Process()
}
@@ -553,7 +561,7 @@ func (f *fakeCmd) SetEnv(env map[string]string) {
}
}
-func (f *fakeCmd) Process() processHandle {
+func (f *fakeCmd) Process() executor.ProcessHandle {
if f == nil {
return nil
}
@@ -728,9 +736,7 @@ func TestFakeCmdInfra(t *testing.T) {
WaitDelay: 5 * time.Millisecond,
})
-newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
-return fake
-}
+_ = executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner { return fake })
buildCodexArgsFn = func(cfg *Config, targetArg string) []string {
return []string{targetArg}
}
@@ -774,9 +780,7 @@ func TestRunCodexTask_WaitBeforeParse(t *testing.T) {
WaitDelay: waitDelay,
})
-newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
-return fake
-}
+_ = executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner { return fake })
buildCodexArgsFn = func(cfg *Config, targetArg string) []string {
return []string{targetArg}
}
@@ -821,9 +825,7 @@ func TestRunCodexTask_ParseStall(t *testing.T) {
})
blockingCmd := newDrainBlockingCmd(fake)
-newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
-return blockingCmd
-}
+_ = executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner { return blockingCmd })
buildCodexArgsFn = func(cfg *Config, targetArg string) []string {
return []string{targetArg}
}
@@ -878,7 +880,7 @@ func TestRunCodexTask_ParseStall(t *testing.T) {
func TestRunCodexTask_ContextTimeout(t *testing.T) {
defer resetTestHooks()
-forceKillDelay.Store(0)
+_ = executor.SetForceKillDelay(0)
fake := newFakeCmd(fakeCmdConfig{
KeepStdoutOpen: true,
@@ -887,9 +889,7 @@ func TestRunCodexTask_ContextTimeout(t *testing.T) {
ReleaseWaitOnSignal: false,
})
-newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
-return fake
-}
+_ = executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner { return fake })
buildCodexArgsFn = func(cfg *Config, targetArg string) []string {
return []string{targetArg}
}
@@ -898,14 +898,6 @@ func TestRunCodexTask_ContextTimeout(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), 200*time.Millisecond)
defer cancel()
-var capturedTimer *forceKillTimer
-terminateCommandFn = func(cmd commandRunner) *forceKillTimer {
-timer := terminateCommand(cmd)
-capturedTimer = timer
-return timer
-}
-defer func() { terminateCommandFn = terminateCommand }()
result := runCodexTaskWithContext(ctx, TaskSpec{Task: "ctx-timeout", WorkDir: defaultWorkdir}, nil, nil, false, false, 60)
if result.ExitCode != 124 {
@@ -929,15 +921,6 @@ func TestRunCodexTask_ContextTimeout(t *testing.T) {
t.Fatalf("expected Kill to eventually run, got 0")
}
}
-if capturedTimer == nil {
-t.Fatalf("forceKillTimer not captured")
-}
-if !capturedTimer.stopped.Load() {
-t.Fatalf("forceKillTimer.Stop was not called")
-}
-if !capturedTimer.drained.Load() {
-t.Fatalf("forceKillTimer drain logic did not run")
-}
if fake.stdout == nil {
t.Fatalf("stdout reader not initialized")
}
@@ -948,7 +931,7 @@ func TestRunCodexTask_ContextTimeout(t *testing.T) {
func TestRunCodexTask_ForcesStopAfterCompletion(t *testing.T) {
defer resetTestHooks()
-forceKillDelay.Store(0)
+_ = executor.SetForceKillDelay(0)
fake := newFakeCmd(fakeCmdConfig{
StdoutPlan: []fakeStdoutEvent{
@@ -961,9 +944,7 @@ func TestRunCodexTask_ForcesStopAfterCompletion(t *testing.T) {
ReleaseWaitOnKill: true,
})
-newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
-return fake
-}
+_ = executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner { return fake })
buildCodexArgsFn = func(cfg *Config, targetArg string) []string { return []string{targetArg} }
codexCommand = "fake-cmd"
@@ -988,7 +969,7 @@ func TestRunCodexTask_ForcesStopAfterCompletion(t *testing.T) {
func TestRunCodexTask_ForcesStopAfterTurnCompleted(t *testing.T) {
defer resetTestHooks()
-forceKillDelay.Store(0)
+_ = executor.SetForceKillDelay(0)
fake := newFakeCmd(fakeCmdConfig{
StdoutPlan: []fakeStdoutEvent{
@@ -1001,9 +982,7 @@ func TestRunCodexTask_ForcesStopAfterTurnCompleted(t *testing.T) {
ReleaseWaitOnKill: true,
})
-newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
-return fake
-}
+_ = executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner { return fake })
buildCodexArgsFn = func(cfg *Config, targetArg string) []string { return []string{targetArg} }
codexCommand = "fake-cmd"
@@ -1028,7 +1007,7 @@ func TestRunCodexTask_ForcesStopAfterTurnCompleted(t *testing.T) {
func TestRunCodexTask_DoesNotTerminateBeforeThreadCompleted(t *testing.T) {
defer resetTestHooks()
-forceKillDelay.Store(0)
+_ = executor.SetForceKillDelay(0)
fake := newFakeCmd(fakeCmdConfig{
StdoutPlan: []fakeStdoutEvent{
@@ -1042,9 +1021,7 @@ func TestRunCodexTask_DoesNotTerminateBeforeThreadCompleted(t *testing.T) {
ReleaseWaitOnKill: true,
})
-newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
-return fake
-}
+_ = executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner { return fake })
buildCodexArgsFn = func(cfg *Config, targetArg string) []string { return []string{targetArg} }
codexCommand = "fake-cmd"
@@ -1498,7 +1475,7 @@ func TestBackendParseBoolFlag(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
-if got := parseBoolFlag(tt.val, tt.def); got != tt.want {
+if got := config.ParseBoolFlag(tt.val, tt.def); got != tt.want {
t.Fatalf("parseBoolFlag(%q,%v) = %v, want %v", tt.val, tt.def, got, tt.want)
}
})
@@ -1508,17 +1485,17 @@ func TestBackendParseBoolFlag(t *testing.T) {
func TestBackendEnvFlagEnabled(t *testing.T) {
const key = "TEST_FLAG_ENABLED"
t.Setenv(key, "")
-if envFlagEnabled(key) {
+if config.EnvFlagEnabled(key) {
t.Fatalf("envFlagEnabled should be false when unset")
}
t.Setenv(key, "true")
-if !envFlagEnabled(key) {
+if !config.EnvFlagEnabled(key) {
t.Fatalf("envFlagEnabled should be true for 'true'")
}
t.Setenv(key, "no")
-if envFlagEnabled(key) {
+if config.EnvFlagEnabled(key) {
t.Fatalf("envFlagEnabled should be false for 'no'")
}
}
@@ -1708,8 +1685,8 @@ func TestClaudeModel_DefaultsFromSettings(t *testing.T) {
t.Fatalf("WriteFile: %v", err)
}
-makeRunner := func(gotName *string, gotArgs *[]string, fake **fakeCmd) func(context.Context, string, ...string) commandRunner {
-return func(ctx context.Context, name string, args ...string) commandRunner {
+makeRunner := func(gotName *string, gotArgs *[]string, fake **fakeCmd) func(context.Context, string, ...string) executor.CommandRunner {
+return func(ctx context.Context, name string, args ...string) executor.CommandRunner {
*gotName = name
*gotArgs = append([]string(nil), args...)
cmd := newFakeCmd(fakeCmdConfig{
@@ -1729,9 +1706,8 @@ func TestClaudeModel_DefaultsFromSettings(t *testing.T) {
gotArgs []string
fake *fakeCmd
)
-origRunner := newCommandRunner
-newCommandRunner = makeRunner(&gotName, &gotArgs, &fake)
-t.Cleanup(func() { newCommandRunner = origRunner })
+restore := executor.SetNewCommandRunner(makeRunner(&gotName, &gotArgs, &fake))
+t.Cleanup(restore)
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "hi", Mode: "new", WorkDir: defaultWorkdir}, ClaudeBackend{}, nil, false, true, 5)
if res.ExitCode != 0 || res.Message != "ok" {
@@ -1761,9 +1737,8 @@ func TestClaudeModel_DefaultsFromSettings(t *testing.T) {
gotArgs []string
fake *fakeCmd
)
-origRunner := newCommandRunner
-newCommandRunner = makeRunner(&gotName, &gotArgs, &fake)
-t.Cleanup(func() { newCommandRunner = origRunner })
+restore := executor.SetNewCommandRunner(makeRunner(&gotName, &gotArgs, &fake))
+t.Cleanup(restore)
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "hi", Mode: "new", WorkDir: defaultWorkdir, Model: "sonnet"}, ClaudeBackend{}, nil, false, true, 5)
if res.ExitCode != 0 || res.Message != "ok" {
@@ -1787,9 +1762,8 @@ func TestClaudeModel_DefaultsFromSettings(t *testing.T) {
gotArgs []string
fake *fakeCmd
)
-origRunner := newCommandRunner
-newCommandRunner = makeRunner(&gotName, &gotArgs, &fake)
-t.Cleanup(func() { newCommandRunner = origRunner })
+restore := executor.SetNewCommandRunner(makeRunner(&gotName, &gotArgs, &fake))
+t.Cleanup(restore)
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "hi", Mode: "resume", SessionID: "sid-123", WorkDir: defaultWorkdir}, ClaudeBackend{}, nil, false, true, 5)
if res.ExitCode != 0 || res.Message != "ok" {
@@ -1939,6 +1913,37 @@ func TestRun_PassesReasoningEffortToTaskSpec(t *testing.T) {
}
}
+func TestRun_NoOutputMessage_ReturnsExitCode1AndWritesStderr(t *testing.T) {
+defer resetTestHooks()
+cleanupLogsFn = func() (CleanupStats, error) { return CleanupStats{}, nil }
+t.Setenv("TMPDIR", t.TempDir())
+selectBackendFn = func(name string) (Backend, error) {
+return testBackend{name: name, command: "echo"}, nil
+}
+runTaskFn = func(task TaskSpec, silent bool, timeout int) TaskResult {
+return TaskResult{ExitCode: 0, Message: ""}
+}
+isTerminalFn = func() bool { return true }
+stdinReader = strings.NewReader("")
+os.Args = []string{"codeagent-wrapper", "task"}
+var code int
+errOutput := captureStderr(t, func() {
+code = run()
+})
+if code != 1 {
+t.Fatalf("run() exit=%d, want 1", code)
+}
+if !strings.Contains(errOutput, "no output message") {
+t.Fatalf("stderr missing sentinel error text; got:\n%s", errOutput)
+}
+}
func TestRunBuildCodexArgs_NewMode(t *testing.T) {
const key = "CODEX_BYPASS_SANDBOX"
t.Setenv(key, "false")
@@ -1970,7 +1975,7 @@ func TestRunBuildCodexArgs_NewMode_WithReasoningEffort(t *testing.T) {
args := buildCodexArgs(cfg, "my task")
expected := []string{
"e",
-"--reasoning-effort", "high",
+"-c", "model_reasoning_effort=high",
"--skip-git-repo-check",
"-C", "/test/dir",
"--json",
@@ -1991,8 +1996,7 @@ func TestRunCodexTaskWithContext_CodexReasoningEffort(t *testing.T) {
t.Setenv("CODEX_BYPASS_SANDBOX", "false")
var gotArgs []string
-origRunner := newCommandRunner
-newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
+restore := executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
gotArgs = append([]string(nil), args...)
return newFakeCmd(fakeCmdConfig{
PID: 123,
@@ -2000,8 +2004,8 @@ func TestRunCodexTaskWithContext_CodexReasoningEffort(t *testing.T) {
{Data: "{\"type\":\"result\",\"session_id\":\"sid\",\"result\":\"ok\"}\n"},
},
})
-}
-t.Cleanup(func() { newCommandRunner = origRunner })
+})
+t.Cleanup(restore)
res := runCodexTaskWithContext(context.Background(), TaskSpec{Task: "hi", Mode: "new", WorkDir: defaultWorkdir, ReasoningEffort: "high"}, nil, nil, false, true, 5)
if res.ExitCode != 0 || res.Message != "ok" {
@@ -2010,13 +2014,13 @@ func TestRunCodexTaskWithContext_CodexReasoningEffort(t *testing.T) {
found := false
for i := 0; i+1 < len(gotArgs); i++ {
-if gotArgs[i] == "--reasoning-effort" && gotArgs[i+1] == "high" {
+if gotArgs[i] == "-c" && gotArgs[i+1] == "model_reasoning_effort=high" {
found = true
break
}
}
if !found {
-t.Fatalf("expected --reasoning-effort high in args, got %v", gotArgs)
+t.Fatalf("expected -c model_reasoning_effort=high in args, got %v", gotArgs)
}
}
@@ -2071,7 +2075,7 @@ func TestRunBuildCodexArgs_BypassSandboxEnvTrue(t *testing.T) {
t.Fatalf("NewLogger() error = %v", err)
}
setLogger(logger)
-defer closeLogger()
+defer func() { _ = closeLogger() }()
t.Setenv("CODEX_BYPASS_SANDBOX", "true")
@@ -2716,7 +2720,7 @@ func TestRunLogFunctions(t *testing.T) {
t.Fatalf("NewLogger() error = %v", err)
}
setLogger(logger)
-defer closeLogger()
+defer func() { _ = closeLogger() }()
logInfo("info message")
logWarn("warn message")
@@ -2751,25 +2755,38 @@ func TestLoggerPathAndRemoveNil(t *testing.T) {
}
func TestLoggerLogDropOnDone(t *testing.T) {
-logger := &Logger{
-ch: make(chan logEntry),
-done: make(chan struct{}),
-}
-close(logger.done)
-logger.log("INFO", "dropped")
-logger.pendingWG.Wait()
+t.Skip("internal logger behavior moved to internal/logger; exercise via public methods instead")
}
func TestLoggerLogAfterClose(t *testing.T) {
defer resetTestHooks()
tempDir := t.TempDir()
t.Setenv("TMPDIR", tempDir)
logger, err := NewLogger()
if err != nil {
t.Fatalf("NewLogger error: %v", err)
}
logPath := logger.Path()
t.Cleanup(func() { _ = os.Remove(logPath) })
logger.Info("before close")
logger.Flush()
if err := logger.Close(); err != nil {
t.Fatalf("Close error: %v", err)
}
-logger.log("INFO", "should be ignored")
+logger.Info("should be ignored")
logger.Flush()
data, err := os.ReadFile(logPath)
if err != nil {
t.Fatalf("failed to read log file: %v", err)
}
if strings.Contains(string(data), "should be ignored") {
t.Fatalf("expected log message to be dropped after Close, got: %s", string(data))
}
}
func TestLogWriterLogLine(t *testing.T) {
@@ -2788,7 +2805,7 @@ func TestLogWriterLogLine(t *testing.T) {
if !strings.Contains(string(data), "P:abc") {
t.Fatalf("log output missing truncated entry, got %q", string(data))
}
-closeLogger()
+_ = closeLogger()
}
func TestNewLogWriterDefaultMaxLen(t *testing.T) {
@@ -2807,7 +2824,9 @@ func TestBackendPrintHelp(t *testing.T) {
os.Stdout = oldStdout
var buf bytes.Buffer
-io.Copy(&buf, r)
+if _, err := io.Copy(&buf, r); err != nil {
+t.Fatalf("io.Copy() error = %v", err)
+}
output := buf.String()
expected := []string{"codeagent-wrapper", "Usage:", "resume", "CODEX_TIMEOUT", "Exit Codes:"}
@@ -2929,12 +2948,12 @@ func TestRunCodexTaskFn_UsesTaskBackend(t *testing.T) {
var seenName string
var seenArgs []string
-newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
+_ = executor.SetNewCommandRunner(func(ctx context.Context, name string, args ...string) executor.CommandRunner {
seenName = name
seenArgs = append([]string(nil), args...)
return fake
-}
-selectBackendFn = func(name string) (Backend, error) {
+})
+_ = executor.SetSelectBackendFn(func(name string) (Backend, error) {
return testBackend{
name: strings.ToLower(name),
command: "custom-cli",
@@ -2942,7 +2961,7 @@ func TestRunCodexTaskFn_UsesTaskBackend(t *testing.T) {
return []string{"do", targetArg}
},
}, nil
-}
+})
res := runCodexTaskFn(TaskSpec{ID: "task-1", Task: "payload", Backend: "Custom"}, 5)
@@ -2966,9 +2985,9 @@ func TestRunCodexTaskFn_UsesTaskBackend(t *testing.T) {
func TestRunCodexTaskFn_InvalidBackend(t *testing.T) {
defer resetTestHooks()
-selectBackendFn = func(name string) (Backend, error) {
+_ = executor.SetSelectBackendFn(func(name string) (Backend, error) {
return nil, fmt.Errorf("invalid backend: %s", name)
-}
+})
res := runCodexTaskFn(TaskSpec{ID: "bad-task", Task: "noop", Backend: "unknown"}, 5)
if res.ExitCode == 0 {
@@ -3109,11 +3128,11 @@ func TestRunCodexTask_ExitError(t *testing.T) {
func TestRunCodexTask_StdinPipeError(t *testing.T) {
defer resetTestHooks()
-commandContext = func(ctx context.Context, name string, args ...string) *exec.Cmd {
+_ = executor.SetCommandContextFn(func(ctx context.Context, name string, args ...string) *exec.Cmd {
cmd := exec.CommandContext(ctx, "cat")
cmd.Stdin = os.Stdin
return cmd
-}
+})
buildCodexArgsFn = func(cfg *Config, targetArg string) []string { return []string{} }
res := runCodexTask(TaskSpec{Task: "data", UseStdin: true}, false, 1)
if res.ExitCode != 1 || !strings.Contains(res.Error, "stdin pipe") {
@@ -3123,11 +3142,11 @@ func TestRunCodexTask_StdinPipeError(t *testing.T) {
func TestRunCodexTask_StdoutPipeError(t *testing.T) {
defer resetTestHooks()
-commandContext = func(ctx context.Context, name string, args ...string) *exec.Cmd {
+_ = executor.SetCommandContextFn(func(ctx context.Context, name string, args ...string) *exec.Cmd {
cmd := exec.CommandContext(ctx, "echo", "noop")
cmd.Stdout = os.Stdout
return cmd
-}
+})
buildCodexArgsFn = func(cfg *Config, targetArg string) []string { return []string{} }
res := runCodexTask(TaskSpec{Task: "noop"}, false, 1)
if res.ExitCode != 1 || !strings.Contains(res.Error, "stdout pipe") {
@@ -3170,36 +3189,6 @@ func TestRunCodexTask_SignalHandling(t *testing.T) {
}
}
-func TestForwardSignals_ContextCancel(t *testing.T) {
-ctx, cancel := context.WithCancel(context.Background())
-defer cancel()
-forwardSignals(ctx, &realCmd{cmd: &exec.Cmd{}}, func(string) {})
-cancel()
-time.Sleep(10 * time.Millisecond)
-}
-func TestCancelReason(t *testing.T) {
-const cmdName = "codex"
-if got := cancelReason(cmdName, nil); got != "Context cancelled" {
-t.Fatalf("cancelReason(nil) = %q, want %q", got, "Context cancelled")
-}
-ctxTimeout, cancelTimeout := context.WithTimeout(context.Background(), 1*time.Nanosecond)
-defer cancelTimeout()
-<-ctxTimeout.Done()
-wantTimeout := fmt.Sprintf("%s execution timeout", cmdName)
-if got := cancelReason(cmdName, ctxTimeout); got != wantTimeout {
-t.Fatalf("cancelReason(deadline) = %q, want %q", got, wantTimeout)
-}
-ctxCancelled, cancel := context.WithCancel(context.Background())
-cancel()
-if got := cancelReason(cmdName, ctxCancelled); got != "Execution cancelled, terminating codex process" {
-t.Fatalf("cancelReason(cancelled) = %q, want %q", got, "Execution cancelled, terminating codex process")
-}
-}
func TestRunCodexProcess(t *testing.T) {
defer resetTestHooks()
script := createFakeCodexScript(t, "proc-thread", "proc-msg")
@@ -3235,7 +3224,9 @@ func TestRunSilentMode(t *testing.T) {
w.Close()
os.Stderr = oldStderr
var buf bytes.Buffer
-io.Copy(&buf, r)
+if _, err := io.Copy(&buf, r); err != nil {
+t.Fatalf("io.Copy() error = %v", err)
+}
return buf.String()
}
@@ -3336,35 +3327,6 @@ func TestParallelTopologicalSortTasks(t *testing.T) {
}
}
-func TestRunShouldSkipTask(t *testing.T) {
-failed := map[string]TaskResult{"a": {TaskID: "a", ExitCode: 1}, "b": {TaskID: "b", ExitCode: 2}}
-tests := []struct {
-name string
-task TaskSpec
-skip bool
-reasonContains []string
-}{
-{"no deps", TaskSpec{ID: "c"}, false, nil},
-{"missing deps not failed", TaskSpec{ID: "d", Dependencies: []string{"x"}}, false, nil},
-{"single failed dep", TaskSpec{ID: "e", Dependencies: []string{"a"}}, true, []string{"a"}},
-{"multiple failed deps", TaskSpec{ID: "f", Dependencies: []string{"a", "b"}}, true, []string{"a", "b"}},
-}
-for _, tt := range tests {
-t.Run(tt.name, func(t *testing.T) {
-skip, reason := shouldSkipTask(tt.task, failed)
-if skip != tt.skip {
-t.Fatalf("skip=%v, want %v", skip, tt.skip)
-}
-for _, expect := range tt.reasonContains {
-if !strings.Contains(reason, expect) {
-t.Fatalf("reason %q missing %q", reason, expect)
-}
-}
-})
-}
-}
func TestRunTopologicalSort_CycleDetection(t *testing.T) {
tasks := []TaskSpec{{ID: "a", Dependencies: []string{"b"}}, {ID: "b", Dependencies: []string{"a"}}}
if _, err := topologicalSort(tasks); err == nil || !strings.Contains(err.Error(), "cycle detected") {
@@ -3701,7 +3663,7 @@ func TestParallelTriggersCleanup(t *testing.T) {
oldArgs := os.Args
defer func() { os.Args = oldArgs }()
-os.Args = []string{"codex-wrapper", "--parallel"}
+os.Args = []string{"codeagent-wrapper", "--parallel"}
stdinReader = strings.NewReader(`---TASK---
id: only
---CONTENT---
@@ -3736,10 +3698,8 @@ func TestVersionFlag(t *testing.T) {
}
})
-want := "codeagent-wrapper version 5.5.0\n"
-if output != want {
-t.Fatalf("output = %q, want %q", output, want)
+if !strings.HasPrefix(output, "codeagent-wrapper version ") {
+t.Fatalf("output = %q, want prefix %q", output, "codeagent-wrapper version ")
}
}
@@ -3752,26 +3712,22 @@ func TestVersionShortFlag(t *testing.T) {
}
})
-want := "codeagent-wrapper version 5.5.0\n"
-if output != want {
-t.Fatalf("output = %q, want %q", output, want)
+if !strings.HasPrefix(output, "codeagent-wrapper version ") {
+t.Fatalf("output = %q, want prefix %q", output, "codeagent-wrapper version ")
}
}
func TestVersionLegacyAlias(t *testing.T) {
defer resetTestHooks()
-os.Args = []string{"codex-wrapper", "--version"}
+os.Args = []string{"codeagent-wrapper", "--version"}
output := captureOutput(t, func() {
if code := run(); code != 0 {
t.Errorf("exit = %d, want 0", code)
}
})
-want := "codex-wrapper version 5.5.0\n"
-if output != want {
-t.Fatalf("output = %q, want %q", output, want)
+if !strings.HasPrefix(output, "codeagent-wrapper version ") {
+t.Fatalf("output = %q, want prefix %q", output, "codeagent-wrapper version ")
}
}
@@ -3793,7 +3749,7 @@ func TestRun_HelpShort(t *testing.T) {
func TestRun_HelpDoesNotTriggerCleanup(t *testing.T) {
defer resetTestHooks()
-os.Args = []string{"codex-wrapper", "--help"}
+os.Args = []string{"codeagent-wrapper", "--help"}
cleanupLogsFn = func() (CleanupStats, error) {
t.Fatalf("cleanup should not run for --help")
return CleanupStats{}, nil
@@ -3806,7 +3762,7 @@ func TestRun_HelpDoesNotTriggerCleanup(t *testing.T) {
func TestVersionDoesNotTriggerCleanup(t *testing.T) {
defer resetTestHooks()
-os.Args = []string{"codex-wrapper", "--version"}
+os.Args = []string{"codeagent-wrapper", "--version"}
cleanupLogsFn = func() (CleanupStats, error) {
t.Fatalf("cleanup should not run for --version")
return CleanupStats{}, nil
@@ -3874,7 +3830,7 @@ func TestVersionCoverageFullRun(t *testing.T) {
_ = closeLogger()
_ = logger.RemoveLogFile()
-loggerPtr.Store(nil)
+setLogger(nil)
})
t.Run("parseArgsError", func(t *testing.T) {
@@ -4089,7 +4045,7 @@ func TestVersionMainWrapper(t *testing.T) {
exitCalled := -1
exitFn = func(code int) { exitCalled = code }
os.Args = []string{"codeagent-wrapper", "--version"}
-main()
+Main()
if exitCalled != 0 {
t.Fatalf("main exit = %d, want 0", exitCalled)
}
@@ -4102,8 +4058,8 @@ func TestBackendCleanupMode_Success(t *testing.T) {
Scanned: 5,
Deleted: 3,
Kept: 2,
-DeletedFiles: []string{"codex-wrapper-111.log", "codex-wrapper-222.log", "codex-wrapper-333.log"},
-KeptFiles: []string{"codex-wrapper-444.log", "codex-wrapper-555.log"},
+DeletedFiles: []string{"codeagent-wrapper-111.log", "codeagent-wrapper-222.log", "codeagent-wrapper-333.log"},
+KeptFiles: []string{"codeagent-wrapper-444.log", "codeagent-wrapper-555.log"},
}, nil
}
@@ -4114,7 +4070,7 @@ func TestBackendCleanupMode_Success(t *testing.T) {
if exitCode != 0 {
t.Fatalf("exit = %d, want 0", exitCode)
}
-want := "Cleanup completed\nFiles scanned: 5\nFiles deleted: 3\n - codex-wrapper-111.log\n - codex-wrapper-222.log\n - codex-wrapper-333.log\nFiles kept: 2\n - codex-wrapper-444.log\n - codex-wrapper-555.log\n"
+want := "Cleanup completed\nFiles scanned: 5\nFiles deleted: 3\n - codeagent-wrapper-111.log\n - codeagent-wrapper-222.log\n - codeagent-wrapper-333.log\nFiles kept: 2\n - codeagent-wrapper-444.log\n - codeagent-wrapper-555.log\n"
if output != want {
t.Fatalf("output = %q, want %q", output, want)
}
@@ -4128,7 +4084,7 @@ func TestBackendCleanupMode_SuccessWithErrorsLine(t *testing.T) {
Deleted: 1,
Kept: 0,
Errors: 1,
-DeletedFiles: []string{"codex-wrapper-123.log"},
+DeletedFiles: []string{"codeagent-wrapper-123.log"},
}, nil
}
@@ -4139,7 +4095,7 @@ func TestBackendCleanupMode_SuccessWithErrorsLine(t *testing.T) {
if exitCode != 0 {
t.Fatalf("exit = %d, want 0", exitCode)
}
-want := "Cleanup completed\nFiles scanned: 2\nFiles deleted: 1\n - codex-wrapper-123.log\nFiles kept: 0\nDeletion errors: 1\n"
+want := "Cleanup completed\nFiles scanned: 2\nFiles deleted: 1\n - codeagent-wrapper-123.log\nFiles kept: 0\nDeletion errors: 1\n"
if output != want {
t.Fatalf("output = %q, want %q", output, want)
}
@@ -4208,7 +4164,7 @@ func TestRun_CleanupFlag(t *testing.T) {
oldArgs := os.Args
defer func() { os.Args = oldArgs }()
-os.Args = []string{"codex-wrapper", "--cleanup"}
+os.Args = []string{"codeagent-wrapper", "--cleanup"}
calls := 0
cleanupLogsFn = func() (CleanupStats, error) {
@@ -4453,7 +4409,7 @@ func TestRun_LoggerRemovedOnSignal(t *testing.T) {
defer signal.Reset(syscall.SIGINT, syscall.SIGTERM)
// Set shorter delays for faster test
-forceKillDelay.Store(1)
+_ = executor.SetForceKillDelay(1)
tempDir := t.TempDir()
t.Setenv("TMPDIR", tempDir)
@@ -4565,7 +4521,7 @@ func TestRun_CleanupFailureDoesNotBlock(t *testing.T) {
codexCommand = createFakeCodexScript(t, "tid-cleanup", "ok")
stdinReader = strings.NewReader("")
isTerminalFn = func() bool { return true }
-os.Args = []string{"codex-wrapper", "task"}
+os.Args = []string{"codeagent-wrapper", "task"}
if exit := run(); exit != 0 {
t.Fatalf("exit = %d, want 0", exit)
@@ -4704,73 +4660,6 @@ func TestBackendDiscardInvalidJSONBuffer(t *testing.T) {
})
}
-func TestRunForwardSignals(t *testing.T) {
-defer resetTestHooks()
-if runtime.GOOS == "windows" {
-t.Skip("sleep command not available on Windows")
-}
-execCmd := exec.Command("sleep", "5")
-if err := execCmd.Start(); err != nil {
-t.Skipf("unable to start sleep command: %v", err)
-}
-defer func() {
-_ = execCmd.Process.Kill()
-execCmd.Wait()
-}()
-ctx, cancel := context.WithCancel(context.Background())
-defer cancel()
-forceKillDelay.Store(0)
-defer forceKillDelay.Store(5)
-ready := make(chan struct{})
-var captured chan<- os.Signal
-signalNotifyFn = func(ch chan<- os.Signal, sig ...os.Signal) {
-captured = ch
-close(ready)
-}
-signalStopFn = func(ch chan<- os.Signal) {}
-defer func() {
-signalNotifyFn = signal.Notify
-signalStopFn = signal.Stop
-}()
-var mu sync.Mutex
-var logs []string
-cmd := &realCmd{cmd: execCmd}
-forwardSignals(ctx, cmd, func(msg string) {
-mu.Lock()
-defer mu.Unlock()
-logs = append(logs, msg)
-})
-select {
-case <-ready:
-case <-time.After(500 * time.Millisecond):
-t.Fatalf("signalNotifyFn not invoked")
-}
-captured <- syscall.SIGINT
-done := make(chan error, 1)
-go func() { done <- cmd.Wait() }()
-select {
-case <-done:
-case <-time.After(2 * time.Second):
-t.Fatalf("process did not exit after forwarded signal")
-}
-mu.Lock()
-defer mu.Unlock()
-if len(logs) == 0 {
-t.Fatalf("expected log entry for forwarded signal")
-}
-}
// Backend-focused coverage suite to ensure run() paths stay exercised under the focused pattern.
func TestBackendRunCoverage(t *testing.T) {
suite := []struct {
@@ -4810,7 +4699,7 @@ func TestParallelLogPathInSerialMode(t *testing.T) {
tempDir := t.TempDir()
t.Setenv("TMPDIR", tempDir)
-os.Args = []string{"codex-wrapper", "do-stuff"}
+os.Args = []string{"codeagent-wrapper", "do-stuff"}
stdinReader = strings.NewReader("")
isTerminalFn = func() bool { return true }
codexCommand = "echo"
@@ -4827,126 +4716,13 @@ func TestParallelLogPathInSerialMode(t *testing.T) {
if exitCode != 0 {
t.Fatalf("run() exit = %d, want 0", exitCode)
}
-expectedLog := filepath.Join(tempDir, fmt.Sprintf("codex-wrapper-%d.log", os.Getpid()))
+expectedLog := filepath.Join(tempDir, fmt.Sprintf("codeagent-wrapper-%d.log", os.Getpid()))
wantLine := fmt.Sprintf("Log: %s", expectedLog)
if !strings.Contains(stderr, wantLine) {
t.Fatalf("stderr missing %q, got: %q", wantLine, stderr)
}
}
-func TestRealProcessNilSafety(t *testing.T) {
-var proc *realProcess
-if pid := proc.Pid(); pid != 0 {
-t.Fatalf("Pid() = %d, want 0", pid)
-}
-if err := proc.Kill(); err != nil {
-t.Fatalf("Kill() error = %v", err)
-}
-if err := proc.Signal(syscall.SIGTERM); err != nil {
-t.Fatalf("Signal() error = %v", err)
-}
-}
-func TestRealProcessKill(t *testing.T) {
-if runtime.GOOS == "windows" {
-t.Skip("sleep command not available on Windows")
-}
-cmd := exec.Command("sleep", "5")
-if err := cmd.Start(); err != nil {
-t.Skipf("unable to start sleep command: %v", err)
-}
-waited := false
-defer func() {
-if waited {
-return
-}
-if cmd.Process != nil {
-_ = cmd.Process.Kill()
-cmd.Wait()
-}
-}()
-proc := &realProcess{proc: cmd.Process}
-if proc.Pid() == 0 {
-t.Fatalf("Pid() returned 0 for active process")
-}
-if err := proc.Kill(); err != nil {
-t.Fatalf("Kill() error = %v", err)
-}
-waitErr := cmd.Wait()
-waited = true
-if waitErr == nil {
-t.Fatalf("Kill() should lead to non-nil wait error")
-}
-}
-func TestRealProcessSignal(t *testing.T) {
-if runtime.GOOS == "windows" {
-t.Skip("sleep command not available on Windows")
-}
-cmd := exec.Command("sleep", "5")
-if err := cmd.Start(); err != nil {
-t.Skipf("unable to start sleep command: %v", err)
-}
-waited := false
-defer func() {
-if waited {
-return
-}
-if cmd.Process != nil {
-_ = cmd.Process.Kill()
-cmd.Wait()
-}
-}()
-proc := &realProcess{proc: cmd.Process}
-if err := proc.Signal(syscall.SIGTERM); err != nil {
-t.Fatalf("Signal() error = %v", err)
-}
-waitErr := cmd.Wait()
-waited = true
-if waitErr == nil {
-t.Fatalf("Signal() should lead to non-nil wait error")
-}
-}
-func TestRealCmdProcess(t *testing.T) {
-rc := &realCmd{}
-if rc.Process() != nil {
-t.Fatalf("Process() should return nil when realCmd has no command")
-}
-rc = &realCmd{cmd: &exec.Cmd{}}
-if rc.Process() != nil {
-t.Fatalf("Process() should return nil when exec.Cmd has no process")
-}
-if runtime.GOOS == "windows" {
-return
-}
-cmd := exec.Command("sleep", "5")
-if err := cmd.Start(); err != nil {
-t.Skipf("unable to start sleep command: %v", err)
-}
-defer func() {
-if cmd.Process != nil {
-_ = cmd.Process.Kill()
-cmd.Wait()
-}
-}()
-rc = &realCmd{cmd: cmd}
-handle := rc.Process()
-if handle == nil {
-t.Fatalf("expected non-nil process handle")
-}
-if pid := handle.Pid(); pid == 0 {
-t.Fatalf("process handle returned pid=0")
-}
-}
func TestRun_CLI_Success(t *testing.T) {
defer resetTestHooks()
os.Args = []string{"codeagent-wrapper", "do-things"}
@@ -4988,7 +4764,7 @@ func TestResolveMaxParallelWorkers(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
t.Setenv("CODEAGENT_MAX_PARALLEL_WORKERS", tt.envValue)
-got := resolveMaxParallelWorkers()
+got := config.ResolveMaxParallelWorkers()
if got != tt.want {
t.Errorf("resolveMaxParallelWorkers() = %d, want %d", got, tt.want)
}

View File

@@ -0,0 +1,9 @@
package wrapper
import (
executor "codeagent-wrapper/internal/executor"
)
func parseParallelConfig(data []byte) (*ParallelConfig, error) {
return executor.ParseParallelConfig(data)
}

View File

@@ -0,0 +1,34 @@
package wrapper
import (
"bufio"
"io"
parser "codeagent-wrapper/internal/parser"
"github.com/goccy/go-json"
)
func parseJSONStream(r io.Reader) (message, threadID string) {
return parseJSONStreamWithLog(r, logWarn, logInfo)
}
func parseJSONStreamWithWarn(r io.Reader, warnFn func(string)) (message, threadID string) {
return parseJSONStreamWithLog(r, warnFn, logInfo)
}
func parseJSONStreamWithLog(r io.Reader, warnFn func(string), infoFn func(string)) (message, threadID string) {
return parseJSONStreamInternal(r, warnFn, infoFn, nil, nil)
}
func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(string), onMessage func(), onComplete func()) (message, threadID string) {
return parser.ParseJSONStreamInternal(r, warnFn, infoFn, onMessage, onComplete)
}
func hasKey(m map[string]json.RawMessage, key string) bool { return parser.HasKey(m, key) }
func discardInvalidJSON(decoder *json.Decoder, reader *bufio.Reader) (*bufio.Reader, error) {
return parser.DiscardInvalidJSON(decoder, reader)
}
func normalizeText(text interface{}) string { return parser.NormalizeText(text) }

View File

@@ -0,0 +1,8 @@
package wrapper
import executor "codeagent-wrapper/internal/executor"
// Type aliases to keep existing names in the wrapper package.
type ParallelConfig = executor.ParallelConfig
type TaskSpec = executor.TaskSpec
type TaskResult = executor.TaskResult

View File

@@ -0,0 +1,30 @@
package wrapper
import (
"os"
"testing"
)
func TestDefaultIsTerminalCoverage(t *testing.T) {
oldStdin := os.Stdin
t.Cleanup(func() { os.Stdin = oldStdin })
f, err := os.CreateTemp(t.TempDir(), "stdin-*")
if err != nil {
t.Fatalf("os.CreateTemp() error = %v", err)
}
defer os.Remove(f.Name())
os.Stdin = f
if got := defaultIsTerminal(); got {
t.Fatalf("defaultIsTerminal() = %v, want false for regular file", got)
}
if err := f.Close(); err != nil {
t.Fatalf("Close() error = %v", err)
}
os.Stdin = f
if got := defaultIsTerminal(); !got {
t.Fatalf("defaultIsTerminal() = %v, want true when Stat fails", got)
}
}

View File

@@ -1,4 +1,4 @@
-package main
+package wrapper
import (
"bytes"
@@ -7,6 +7,8 @@ import (
"os"
"strconv"
"strings"
utils "codeagent-wrapper/internal/utils"
)
func resolveTimeout() int {
@@ -52,7 +54,7 @@ func shouldUseStdin(taskText string, piped bool) bool {
if len(taskText) > 800 {
return true
}
-return strings.IndexAny(taskText, stdinSpecialChars) >= 0
+return strings.ContainsAny(taskText, stdinSpecialChars)
}
func defaultIsTerminal() bool {
@@ -196,69 +198,21 @@ func (b *tailBuffer) String() string {
}
func truncate(s string, maxLen int) string {
if len(s) <= maxLen {
return s
}
if maxLen < 0 {
return ""
}
return s[:maxLen] + "..."
return utils.Truncate(s, maxLen)
}
// safeTruncate safely truncates string to maxLen, avoiding panic and UTF-8 corruption.
func safeTruncate(s string, maxLen int) string {
if maxLen <= 0 || s == "" {
return ""
}
runes := []rune(s)
if len(runes) <= maxLen {
return s
}
if maxLen < 4 {
return string(runes[:1])
}
cutoff := maxLen - 3
if cutoff <= 0 {
return string(runes[:1])
}
if len(runes) <= cutoff {
return s
}
return string(runes[:cutoff]) + "..."
return utils.SafeTruncate(s, maxLen)
}
// sanitizeOutput removes ANSI escape sequences and control characters.
func sanitizeOutput(s string) string {
var result strings.Builder
inEscape := false
for i := 0; i < len(s); i++ {
if s[i] == '\x1b' && i+1 < len(s) && s[i+1] == '[' {
inEscape = true
i++ // skip '['
continue
}
if inEscape {
if (s[i] >= 'A' && s[i] <= 'Z') || (s[i] >= 'a' && s[i] <= 'z') {
inEscape = false
}
continue
}
// Keep printable chars and common whitespace.
if s[i] >= 32 || s[i] == '\n' || s[i] == '\t' {
result.WriteByte(s[i])
}
}
return result.String()
return utils.SanitizeOutput(s)
}
func min(a, b int) int {
if a < b {
return a
}
return b
return utils.Min(a, b)
}
func hello() string {
@@ -381,7 +335,7 @@ func extractFilesChangedFromLines(lines []string) []string {
for _, prefix := range []string{"Modified:", "Created:", "Updated:", "Edited:", "Wrote:", "Changed:"} {
if strings.HasPrefix(line, prefix) {
file := strings.TrimSpace(strings.TrimPrefix(line, prefix))
-file = strings.Trim(file, "`,\"'()[],:")
+file = strings.Trim(file, "`\"'()[],:")
file = strings.TrimPrefix(file, "@")
if file != "" && !seen[file] {
files = append(files, file)
@@ -398,7 +352,7 @@ func extractFilesChangedFromLines(lines []string) []string {
// Pattern 2: Tokens that look like file paths (allow root files, strip @ prefix).
parts := strings.Fields(line)
for _, part := range parts {
-part = strings.Trim(part, "`,\"'()[],:")
+part = strings.Trim(part, "`\"'()[],:")
part = strings.TrimPrefix(part, "@")
for _, ext := range exts {
if strings.HasSuffix(part, ext) && !seen[part] {
@@ -567,116 +521,3 @@ func extractKeyOutputFromLines(lines []string, maxLen int) string {
clean := strings.TrimSpace(strings.Join(lines, "\n"))
return safeTruncate(clean, maxLen)
}
// extractCoverageGap extracts what's missing from coverage reports
// Looks for uncovered lines, branches, or functions
func extractCoverageGap(message string) string {
if message == "" {
return ""
}
lower := strings.ToLower(message)
lines := strings.Split(message, "\n")
// Look for uncovered/missing patterns
for _, line := range lines {
lineLower := strings.ToLower(line)
line = strings.TrimSpace(line)
// Common patterns for uncovered code
if strings.Contains(lineLower, "uncovered") ||
strings.Contains(lineLower, "not covered") ||
strings.Contains(lineLower, "missing coverage") ||
strings.Contains(lineLower, "lines not covered") {
if len(line) > 100 {
return line[:97] + "..."
}
return line
}
// Look for specific file:line patterns in coverage reports
if strings.Contains(lineLower, "branch") && strings.Contains(lineLower, "not taken") {
if len(line) > 100 {
return line[:97] + "..."
}
return line
}
}
// Look for function names that aren't covered
if strings.Contains(lower, "function") && strings.Contains(lower, "0%") {
for _, line := range lines {
if strings.Contains(strings.ToLower(line), "0%") && strings.Contains(line, "function") {
line = strings.TrimSpace(line)
if len(line) > 100 {
return line[:97] + "..."
}
return line
}
}
}
return ""
}
// extractErrorDetail extracts meaningful error context from task output
// Returns the most relevant error information up to maxLen characters
func extractErrorDetail(message string, maxLen int) string {
if message == "" || maxLen <= 0 {
return ""
}
lines := strings.Split(message, "\n")
var errorLines []string
// Look for error-related lines
for _, line := range lines {
line = strings.TrimSpace(line)
if line == "" {
continue
}
lower := strings.ToLower(line)
// Skip noise lines
if strings.HasPrefix(line, "at ") && strings.Contains(line, "(") {
// Stack trace line - only keep first one
if len(errorLines) > 0 && strings.HasPrefix(strings.ToLower(errorLines[len(errorLines)-1]), "at ") {
continue
}
}
// Prioritize error/fail lines
if strings.Contains(lower, "error") ||
strings.Contains(lower, "fail") ||
strings.Contains(lower, "exception") ||
strings.Contains(lower, "assert") ||
strings.Contains(lower, "expected") ||
strings.Contains(lower, "timeout") ||
strings.Contains(lower, "not found") ||
strings.Contains(lower, "cannot") ||
strings.Contains(lower, "undefined") ||
strings.HasPrefix(line, "FAIL") ||
strings.HasPrefix(line, "●") {
errorLines = append(errorLines, line)
}
}
if len(errorLines) == 0 {
// No specific error lines found, take last few lines
start := len(lines) - 5
if start < 0 {
start = 0
}
for _, line := range lines[start:] {
line = strings.TrimSpace(line)
if line != "" {
errorLines = append(errorLines, line)
}
}
}
// Join and truncate
result := strings.Join(errorLines, " | ")
return safeTruncate(result, maxLen)
}

View File

@@ -1,4 +1,4 @@
-package main
+package wrapper
import (
"fmt"

View File

@@ -0,0 +1,9 @@
package wrapper
import ilogger "codeagent-wrapper/internal/logger"
const wrapperName = ilogger.WrapperName
func currentWrapperName() string { return ilogger.CurrentWrapperName() }
func primaryLogPrefix() string { return ilogger.PrimaryLogPrefix() }

View File

@@ -0,0 +1,33 @@
package backend
import config "codeagent-wrapper/internal/config"
// Backend defines the contract for invoking different AI CLI backends.
// Each backend is responsible for supplying the executable command and
// building the argument list based on the wrapper config.
type Backend interface {
Name() string
BuildArgs(cfg *config.Config, targetArg string) []string
Command() string
Env(baseURL, apiKey string) map[string]string
}
var (
logWarnFn = func(string) {}
logErrorFn = func(string) {}
)
// SetLogFuncs configures optional logging hooks used by some backends.
// Callers can safely pass nil to disable the hook.
func SetLogFuncs(warnFn, errorFn func(string)) {
if warnFn != nil {
logWarnFn = warnFn
} else {
logWarnFn = func(string) {}
}
if errorFn != nil {
logErrorFn = errorFn
} else {
logErrorFn = func(string) {}
}
}

View File

@@ -1,4 +1,4 @@
-package main
+package backend
import (
"bytes"
@@ -6,6 +6,8 @@ import (
"path/filepath"
"reflect"
"testing"
config "codeagent-wrapper/internal/config"
)
func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
@@ -13,7 +15,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
t.Run("new mode omits skip-permissions when env disabled", func(t *testing.T) {
t.Setenv("CODEAGENT_SKIP_PERMISSIONS", "false")
-cfg := &Config{Mode: "new", WorkDir: "/repo"}
+cfg := &config.Config{Mode: "new", WorkDir: "/repo"}
got := backend.BuildArgs(cfg, "todo")
want := []string{"-p", "--setting-sources", "", "--output-format", "stream-json", "--verbose", "todo"}
if !reflect.DeepEqual(got, want) {
@@ -22,7 +24,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
})
t.Run("new mode includes skip-permissions by default", func(t *testing.T) {
-cfg := &Config{Mode: "new", SkipPermissions: false}
+cfg := &config.Config{Mode: "new", SkipPermissions: false}
got := backend.BuildArgs(cfg, "-")
want := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "--output-format", "stream-json", "--verbose", "-"}
if !reflect.DeepEqual(got, want) {
@@ -32,7 +34,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
t.Run("resume mode includes session id", func(t *testing.T) {
t.Setenv("CODEAGENT_SKIP_PERMISSIONS", "false")
-cfg := &Config{Mode: "resume", SessionID: "sid-123", WorkDir: "/ignored"}
+cfg := &config.Config{Mode: "resume", SessionID: "sid-123", WorkDir: "/ignored"}
got := backend.BuildArgs(cfg, "resume-task")
want := []string{"-p", "--setting-sources", "", "-r", "sid-123", "--output-format", "stream-json", "--verbose", "resume-task"}
if !reflect.DeepEqual(got, want) {
@@ -42,7 +44,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
t.Run("resume mode without session still returns base flags", func(t *testing.T) {
t.Setenv("CODEAGENT_SKIP_PERMISSIONS", "false")
-cfg := &Config{Mode: "resume", WorkDir: "/ignored"}
+cfg := &config.Config{Mode: "resume", WorkDir: "/ignored"}
got := backend.BuildArgs(cfg, "follow-up")
want := []string{"-p", "--setting-sources", "", "--output-format", "stream-json", "--verbose", "follow-up"}
if !reflect.DeepEqual(got, want) {
@@ -51,7 +53,7 @@ func TestClaudeBuildArgs_ModesAndPermissions(t *testing.T) {
})
t.Run("resume mode can opt-in skip permissions", func(t *testing.T) {
-cfg := &Config{Mode: "resume", SessionID: "sid-123", SkipPermissions: true}
+cfg := &config.Config{Mode: "resume", SessionID: "sid-123", SkipPermissions: true}
got := backend.BuildArgs(cfg, "resume-task")
want := []string{"-p", "--dangerously-skip-permissions", "--setting-sources", "", "-r", "sid-123", "--output-format", "stream-json", "--verbose", "resume-task"}
if !reflect.DeepEqual(got, want) {
@@ -70,7 +72,7 @@ func TestBackendBuildArgs_Model(t *testing.T) {
t.Run("claude includes --model when set", func(t *testing.T) {
t.Setenv("CODEAGENT_SKIP_PERMISSIONS", "false")
backend := ClaudeBackend{}
-cfg := &Config{Mode: "new", Model: "opus"}
+cfg := &config.Config{Mode: "new", Model: "opus"}
got := backend.BuildArgs(cfg, "todo")
want := []string{"-p", "--setting-sources", "", "--model", "opus", "--output-format", "stream-json", "--verbose", "todo"}
if !reflect.DeepEqual(got, want) {
@@ -80,7 +82,7 @@ func TestBackendBuildArgs_Model(t *testing.T) {
t.Run("gemini includes -m when set", func(t *testing.T) {
backend := GeminiBackend{}
-cfg := &Config{Mode: "new", Model: "gemini-3-pro-preview"}
+cfg := &config.Config{Mode: "new", Model: "gemini-3-pro-preview"}
got := backend.BuildArgs(cfg, "task")
want := []string{"-o", "stream-json", "-y", "-m", "gemini-3-pro-preview", "task"}
if !reflect.DeepEqual(got, want) {
@@ -93,7 +95,7 @@ func TestBackendBuildArgs_Model(t *testing.T) {
t.Setenv(key, "false")
backend := CodexBackend{}
-cfg := &Config{Mode: "new", WorkDir: "/tmp", Model: "o3"}
+cfg := &config.Config{Mode: "new", WorkDir: "/tmp", Model: "o3"}
got := backend.BuildArgs(cfg, "task")
want := []string{"e", "--model", "o3", "--skip-git-repo-check", "-C", "/tmp", "--json", "task"}
if !reflect.DeepEqual(got, want) {
@@ -105,7 +107,7 @@ func TestBackendBuildArgs_Model(t *testing.T) {
func TestClaudeBuildArgs_GeminiAndCodexModes(t *testing.T) {
t.Run("gemini new mode defaults workdir", func(t *testing.T) {
backend := GeminiBackend{}
-cfg := &Config{Mode: "new", WorkDir: "/workspace"}
+cfg := &config.Config{Mode: "new", WorkDir: "/workspace"}
got := backend.BuildArgs(cfg, "task")
want := []string{"-o", "stream-json", "-y", "task"}
if !reflect.DeepEqual(got, want) {
@@ -115,7 +117,7 @@ func TestClaudeBuildArgs_GeminiAndCodexModes(t *testing.T) {
t.Run("gemini resume mode uses session id", func(t *testing.T) {
backend := GeminiBackend{}
-cfg := &Config{Mode: "resume", SessionID: "sid-999"}
+cfg := &config.Config{Mode: "resume", SessionID: "sid-999"}
got := backend.BuildArgs(cfg, "resume")
want := []string{"-o", "stream-json", "-y", "-r", "sid-999", "resume"}
if !reflect.DeepEqual(got, want) {
@@ -125,7 +127,7 @@ func TestClaudeBuildArgs_GeminiAndCodexModes(t *testing.T) {
t.Run("gemini resume mode without session omits identifier", func(t *testing.T) {
backend := GeminiBackend{}
-cfg := &Config{Mode: "resume"}
+cfg := &config.Config{Mode: "resume"}
got := backend.BuildArgs(cfg, "resume")
want := []string{"-o", "stream-json", "-y", "resume"}
if !reflect.DeepEqual(got, want) {
@@ -142,7 +144,7 @@ func TestClaudeBuildArgs_GeminiAndCodexModes(t *testing.T) {
t.Run("gemini stdin mode uses -p flag", func(t *testing.T) {
backend := GeminiBackend{}
-cfg := &Config{Mode: "new"}
+cfg := &config.Config{Mode: "new"}
got := backend.BuildArgs(cfg, "-")
want := []string{"-o", "stream-json", "-y", "-p", "-"}
if !reflect.DeepEqual(got, want) {
@@ -155,7 +157,7 @@ func TestClaudeBuildArgs_GeminiAndCodexModes(t *testing.T) {
t.Setenv(key, "false")
backend := CodexBackend{}
-cfg := &Config{Mode: "new", WorkDir: "/tmp"}
+cfg := &config.Config{Mode: "new", WorkDir: "/tmp"}
got := backend.BuildArgs(cfg, "task")
want := []string{"e", "--skip-git-repo-check", "-C", "/tmp", "--json", "task"}
if !reflect.DeepEqual(got, want) {
@@ -168,7 +170,7 @@ func TestClaudeBuildArgs_GeminiAndCodexModes(t *testing.T) {
t.Setenv(key, "true")
backend := CodexBackend{}
-cfg := &Config{Mode: "new", WorkDir: "/tmp"}
+cfg := &config.Config{Mode: "new", WorkDir: "/tmp"}
got := backend.BuildArgs(cfg, "task")
want := []string{"e", "--dangerously-bypass-approvals-and-sandbox", "--skip-git-repo-check", "-C", "/tmp", "--json", "task"}
if !reflect.DeepEqual(got, want) {
@@ -204,7 +206,7 @@ func TestLoadMinimalEnvSettings(t *testing.T) {
t.Setenv("USERPROFILE", home)
t.Run("missing file returns empty", func(t *testing.T) {
-if got := loadMinimalEnvSettings(); len(got) != 0 {
+if got := LoadMinimalEnvSettings(); len(got) != 0 {
t.Fatalf("got %v, want empty", got)
}
})
@@ -220,7 +222,7 @@ func TestLoadMinimalEnvSettings(t *testing.T) {
t.Fatalf("WriteFile: %v", err)
}
-got := loadMinimalEnvSettings()
+got := LoadMinimalEnvSettings()
if got["ANTHROPIC_API_KEY"] != "secret" || got["FOO"] != "bar" {
t.Fatalf("got %v, want keys present", got)
}
@@ -234,7 +236,7 @@ func TestLoadMinimalEnvSettings(t *testing.T) {
t.Fatalf("WriteFile: %v", err)
}
-got := loadMinimalEnvSettings()
+got := LoadMinimalEnvSettings()
if got["GOOD"] != "ok" {
t.Fatalf("got %v, want GOOD=ok", got)
}
@@ -249,12 +251,72 @@ func TestLoadMinimalEnvSettings(t *testing.T) {
t.Run("oversized file returns empty", func(t *testing.T) {
dir := filepath.Join(home, ".claude")
path := filepath.Join(dir, "settings.json")
-data := bytes.Repeat([]byte("a"), maxClaudeSettingsBytes+1)
+data := bytes.Repeat([]byte("a"), MaxClaudeSettingsBytes+1)
if err := os.WriteFile(path, data, 0o600); err != nil {
t.Fatalf("WriteFile: %v", err)
}
-if got := loadMinimalEnvSettings(); len(got) != 0 {
+if got := LoadMinimalEnvSettings(); len(got) != 0 {
t.Fatalf("got %v, want empty", got)
}
})
}
func TestOpencodeBackend_BuildArgs(t *testing.T) {
backend := OpencodeBackend{}
t.Run("basic", func(t *testing.T) {
cfg := &config.Config{Mode: "new"}
got := backend.BuildArgs(cfg, "hello")
want := []string{"run", "--format", "json", "hello"}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
t.Run("with model", func(t *testing.T) {
cfg := &config.Config{Mode: "new", Model: "opencode/grok-code"}
got := backend.BuildArgs(cfg, "task")
want := []string{"run", "-m", "opencode/grok-code", "--format", "json", "task"}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
t.Run("resume mode", func(t *testing.T) {
cfg := &config.Config{Mode: "resume", SessionID: "ses_123", Model: "opencode/grok-code"}
got := backend.BuildArgs(cfg, "follow-up")
want := []string{"run", "-m", "opencode/grok-code", "-s", "ses_123", "--format", "json", "follow-up"}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
t.Run("resume without session", func(t *testing.T) {
cfg := &config.Config{Mode: "resume"}
got := backend.BuildArgs(cfg, "task")
want := []string{"run", "--format", "json", "task"}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
t.Run("stdin mode omits dash", func(t *testing.T) {
cfg := &config.Config{Mode: "new"}
got := backend.BuildArgs(cfg, "-")
want := []string{"run", "--format", "json"}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
}
func TestOpencodeBackend_Interface(t *testing.T) {
backend := OpencodeBackend{}
if backend.Name() != "opencode" {
t.Errorf("Name() = %q, want %q", backend.Name(), "opencode")
}
if backend.Command() != "opencode" {
t.Errorf("Command() = %q, want %q", backend.Command(), "opencode")
}
}

View File

@@ -0,0 +1,139 @@
package backend
import (
"os"
"path/filepath"
"strings"
config "codeagent-wrapper/internal/config"
"github.com/goccy/go-json"
)
type ClaudeBackend struct{}
func (ClaudeBackend) Name() string { return "claude" }
func (ClaudeBackend) Command() string { return "claude" }
func (ClaudeBackend) Env(baseURL, apiKey string) map[string]string {
baseURL = strings.TrimSpace(baseURL)
apiKey = strings.TrimSpace(apiKey)
if baseURL == "" && apiKey == "" {
return nil
}
env := make(map[string]string, 2)
if baseURL != "" {
env["ANTHROPIC_BASE_URL"] = baseURL
}
if apiKey != "" {
env["ANTHROPIC_AUTH_TOKEN"] = apiKey
}
return env
}
func (ClaudeBackend) BuildArgs(cfg *config.Config, targetArg string) []string {
return buildClaudeArgs(cfg, targetArg)
}
const MaxClaudeSettingsBytes = 1 << 20 // 1MB
type MinimalClaudeSettings struct {
Env map[string]string
Model string
}
// LoadMinimalClaudeSettings extracts only a safe, minimal subset from ~/.claude/settings.json:
// - env: only string-typed values are accepted
// - model: only a string-typed value is accepted
// A missing, unparsable, or oversized file returns an empty result.
func LoadMinimalClaudeSettings() MinimalClaudeSettings {
home, err := os.UserHomeDir()
if err != nil || home == "" {
return MinimalClaudeSettings{}
}
claudeDir := filepath.Clean(filepath.Join(home, ".claude"))
settingPath := filepath.Clean(filepath.Join(claudeDir, "settings.json"))
rel, err := filepath.Rel(claudeDir, settingPath)
if err != nil || rel == ".." || strings.HasPrefix(rel, ".."+string(os.PathSeparator)) {
return MinimalClaudeSettings{}
}
info, err := os.Stat(settingPath)
if err != nil || info.Size() > MaxClaudeSettingsBytes {
return MinimalClaudeSettings{}
}
data, err := os.ReadFile(settingPath) // #nosec G304 -- path is fixed under user home and validated to stay within claudeDir
if err != nil {
return MinimalClaudeSettings{}
}
var cfg struct {
Env map[string]any `json:"env"`
Model any `json:"model"`
}
if err := json.Unmarshal(data, &cfg); err != nil {
return MinimalClaudeSettings{}
}
out := MinimalClaudeSettings{}
if model, ok := cfg.Model.(string); ok {
out.Model = strings.TrimSpace(model)
}
if len(cfg.Env) == 0 {
return out
}
env := make(map[string]string, len(cfg.Env))
for k, v := range cfg.Env {
s, ok := v.(string)
if !ok {
continue
}
env[k] = s
}
if len(env) == 0 {
return out
}
out.Env = env
return out
}
func LoadMinimalEnvSettings() map[string]string {
settings := LoadMinimalClaudeSettings()
if len(settings.Env) == 0 {
return nil
}
return settings.Env
}
func buildClaudeArgs(cfg *config.Config, targetArg string) []string {
if cfg == nil {
return nil
}
args := []string{"-p"}
// Default to skip permissions unless CODEAGENT_SKIP_PERMISSIONS=false
if cfg.SkipPermissions || cfg.Yolo || config.EnvFlagDefaultTrue("CODEAGENT_SKIP_PERMISSIONS") {
args = append(args, "--dangerously-skip-permissions")
}
// Prevent infinite recursion: disable all setting sources (user, project, local)
// This ensures a clean execution environment without CLAUDE.md or skills that would trigger codeagent
args = append(args, "--setting-sources", "")
if model := strings.TrimSpace(cfg.Model); model != "" {
args = append(args, "--model", model)
}
if cfg.Mode == "resume" {
if cfg.SessionID != "" {
// Claude CLI uses -r <session_id> for resume.
args = append(args, "-r", cfg.SessionID)
}
}
args = append(args, "--output-format", "stream-json", "--verbose", targetArg)
return args
}

View File

@@ -0,0 +1,79 @@
package backend
import (
"strings"
config "codeagent-wrapper/internal/config"
)
type CodexBackend struct{}
func (CodexBackend) Name() string { return "codex" }
func (CodexBackend) Command() string { return "codex" }
func (CodexBackend) Env(baseURL, apiKey string) map[string]string {
baseURL = strings.TrimSpace(baseURL)
apiKey = strings.TrimSpace(apiKey)
if baseURL == "" && apiKey == "" {
return nil
}
env := make(map[string]string, 2)
if baseURL != "" {
env["OPENAI_BASE_URL"] = baseURL
}
if apiKey != "" {
env["OPENAI_API_KEY"] = apiKey
}
return env
}
func (CodexBackend) BuildArgs(cfg *config.Config, targetArg string) []string {
return BuildCodexArgs(cfg, targetArg)
}
func BuildCodexArgs(cfg *config.Config, targetArg string) []string {
if cfg == nil {
panic("BuildCodexArgs: nil config")
}
var resumeSessionID string
isResume := cfg.Mode == "resume"
if isResume {
resumeSessionID = strings.TrimSpace(cfg.SessionID)
if resumeSessionID == "" {
logErrorFn("invalid config: resume mode requires non-empty session_id")
isResume = false
}
}
args := []string{"e"}
// Default to bypass sandbox unless CODEX_BYPASS_SANDBOX=false
if cfg.Yolo || config.EnvFlagDefaultTrue("CODEX_BYPASS_SANDBOX") {
logWarnFn("YOLO mode or CODEX_BYPASS_SANDBOX enabled: running without approval/sandbox protection")
args = append(args, "--dangerously-bypass-approvals-and-sandbox")
}
if model := strings.TrimSpace(cfg.Model); model != "" {
args = append(args, "--model", model)
}
if reasoningEffort := strings.TrimSpace(cfg.ReasoningEffort); reasoningEffort != "" {
args = append(args, "-c", "model_reasoning_effort="+reasoningEffort)
}
args = append(args, "--skip-git-repo-check")
if isResume {
return append(args,
"--json",
"resume",
resumeSessionID,
targetArg,
)
}
return append(args,
"-C", cfg.WorkDir,
"--json",
targetArg,
)
}

View File

@@ -0,0 +1,110 @@
package backend
import (
"os"
"path/filepath"
"strings"
config "codeagent-wrapper/internal/config"
)
type GeminiBackend struct{}
func (GeminiBackend) Name() string { return "gemini" }
func (GeminiBackend) Command() string { return "gemini" }
func (GeminiBackend) Env(baseURL, apiKey string) map[string]string {
baseURL = strings.TrimSpace(baseURL)
apiKey = strings.TrimSpace(apiKey)
if baseURL == "" && apiKey == "" {
return nil
}
env := make(map[string]string, 2)
if baseURL != "" {
env["GOOGLE_GEMINI_BASE_URL"] = baseURL
}
if apiKey != "" {
env["GEMINI_API_KEY"] = apiKey
}
return env
}
func (GeminiBackend) BuildArgs(cfg *config.Config, targetArg string) []string {
return buildGeminiArgs(cfg, targetArg)
}
// LoadGeminiEnv loads environment variables from ~/.gemini/.env.
// Supports GEMINI_API_KEY, GEMINI_MODEL, GOOGLE_GEMINI_BASE_URL.
// When an API key is present and no mechanism is set, it also defaults
// GEMINI_API_KEY_AUTH_MECHANISM=bearer for third-party API compatibility.
func LoadGeminiEnv() map[string]string {
home, err := os.UserHomeDir()
if err != nil || home == "" {
return nil
}
envDir := filepath.Clean(filepath.Join(home, ".gemini"))
envPath := filepath.Clean(filepath.Join(envDir, ".env"))
rel, err := filepath.Rel(envDir, envPath)
if err != nil || rel == ".." || strings.HasPrefix(rel, ".."+string(os.PathSeparator)) {
return nil
}
data, err := os.ReadFile(envPath) // #nosec G304 -- path is fixed under user home and validated to stay within envDir
if err != nil {
return nil
}
env := make(map[string]string)
for _, line := range strings.Split(string(data), "\n") {
line = strings.TrimSpace(line)
if line == "" || strings.HasPrefix(line, "#") {
continue
}
idx := strings.IndexByte(line, '=')
if idx <= 0 {
continue
}
key := strings.TrimSpace(line[:idx])
value := strings.TrimSpace(line[idx+1:])
if key != "" && value != "" {
env[key] = value
}
}
// Set bearer auth mechanism for third-party API compatibility
if _, ok := env["GEMINI_API_KEY"]; ok {
if _, hasAuth := env["GEMINI_API_KEY_AUTH_MECHANISM"]; !hasAuth {
env["GEMINI_API_KEY_AUTH_MECHANISM"] = "bearer"
}
}
if len(env) == 0 {
return nil
}
return env
}
func buildGeminiArgs(cfg *config.Config, targetArg string) []string {
if cfg == nil {
return nil
}
args := []string{"-o", "stream-json", "-y"}
if model := strings.TrimSpace(cfg.Model); model != "" {
args = append(args, "-m", model)
}
if cfg.Mode == "resume" {
if cfg.SessionID != "" {
args = append(args, "-r", cfg.SessionID)
}
}
// Pass the task as a positional argument (the -p flag is deprecated),
// except in stdin mode ("-"), where -p is still needed to read from stdin.
if targetArg == "-" {
args = append(args, "-p", targetArg)
} else {
args = append(args, targetArg)
}
return args
}

View File

@@ -0,0 +1,29 @@
package backend
import (
"strings"
config "codeagent-wrapper/internal/config"
)
type OpencodeBackend struct{}
func (OpencodeBackend) Name() string { return "opencode" }
func (OpencodeBackend) Command() string { return "opencode" }
func (OpencodeBackend) Env(baseURL, apiKey string) map[string]string { return nil }
func (OpencodeBackend) BuildArgs(cfg *config.Config, targetArg string) []string {
args := []string{"run"}
if cfg != nil {
if model := strings.TrimSpace(cfg.Model); model != "" {
args = append(args, "-m", model)
}
if cfg.Mode == "resume" && cfg.SessionID != "" {
args = append(args, "-s", cfg.SessionID)
}
}
args = append(args, "--format", "json")
if targetArg != "-" {
args = append(args, targetArg)
}
return args
}

View File

@@ -0,0 +1,29 @@
package backend
import (
"fmt"
"strings"
)
var registry = map[string]Backend{
"codex": CodexBackend{},
"claude": ClaudeBackend{},
"gemini": GeminiBackend{},
"opencode": OpencodeBackend{},
}
// Registry exposes the available backends. Intended for internal inspection/tests.
func Registry() map[string]Backend {
return registry
}
func Select(name string) (Backend, error) {
key := strings.ToLower(strings.TrimSpace(name))
if key == "" {
key = "codex"
}
if backend, ok := registry[key]; ok {
return backend, nil
}
return nil, fmt.Errorf("unsupported backend %q", name)
}

View File

@@ -0,0 +1,220 @@
package config
import (
"fmt"
"os"
"path/filepath"
"strings"
"sync"
ilogger "codeagent-wrapper/internal/logger"
"github.com/goccy/go-json"
)
type BackendConfig struct {
BaseURL string `json:"base_url,omitempty"`
APIKey string `json:"api_key,omitempty"`
}
type AgentModelConfig struct {
Backend string `json:"backend"`
Model string `json:"model"`
PromptFile string `json:"prompt_file,omitempty"`
Description string `json:"description,omitempty"`
Yolo bool `json:"yolo,omitempty"`
Reasoning string `json:"reasoning,omitempty"`
BaseURL string `json:"base_url,omitempty"`
APIKey string `json:"api_key,omitempty"`
}
type ModelsConfig struct {
DefaultBackend string `json:"default_backend"`
DefaultModel string `json:"default_model"`
Agents map[string]AgentModelConfig `json:"agents"`
Backends map[string]BackendConfig `json:"backends,omitempty"`
}
var defaultModelsConfig = ModelsConfig{
DefaultBackend: "opencode",
DefaultModel: "opencode/grok-code",
Agents: map[string]AgentModelConfig{
"oracle": {Backend: "claude", Model: "claude-opus-4-5-20251101", PromptFile: "~/.claude/skills/omo/references/oracle.md", Description: "Technical advisor"},
"librarian": {Backend: "claude", Model: "claude-sonnet-4-5-20250929", PromptFile: "~/.claude/skills/omo/references/librarian.md", Description: "Researcher"},
"explore": {Backend: "opencode", Model: "opencode/grok-code", PromptFile: "~/.claude/skills/omo/references/explore.md", Description: "Code search"},
"develop": {Backend: "codex", Model: "", PromptFile: "~/.claude/skills/omo/references/develop.md", Description: "Code development"},
"frontend-ui-ux-engineer": {Backend: "gemini", Model: "", PromptFile: "~/.claude/skills/omo/references/frontend-ui-ux-engineer.md", Description: "Frontend engineer"},
"document-writer": {Backend: "gemini", Model: "", PromptFile: "~/.claude/skills/omo/references/document-writer.md", Description: "Documentation"},
},
}
var (
modelsConfigOnce sync.Once
modelsConfigCached *ModelsConfig
)
func modelsConfig() *ModelsConfig {
modelsConfigOnce.Do(func() {
modelsConfigCached = loadModelsConfig()
})
if modelsConfigCached == nil {
return &defaultModelsConfig
}
return modelsConfigCached
}
func loadModelsConfig() *ModelsConfig {
home, err := os.UserHomeDir()
if err != nil {
ilogger.LogWarn(fmt.Sprintf("Failed to resolve home directory for models config: %v; using defaults", err))
return &defaultModelsConfig
}
configDir := filepath.Clean(filepath.Join(home, ".codeagent"))
configPath := filepath.Clean(filepath.Join(configDir, "models.json"))
rel, err := filepath.Rel(configDir, configPath)
if err != nil || rel == ".." || strings.HasPrefix(rel, ".."+string(os.PathSeparator)) {
return &defaultModelsConfig
}
data, err := os.ReadFile(configPath) // #nosec G304 -- path is fixed under user home and validated to stay within configDir
if err != nil {
if !os.IsNotExist(err) {
ilogger.LogWarn(fmt.Sprintf("Failed to read models config %s: %v; using defaults", configPath, err))
}
return &defaultModelsConfig
}
var cfg ModelsConfig
if err := json.Unmarshal(data, &cfg); err != nil {
ilogger.LogWarn(fmt.Sprintf("Failed to parse models config %s: %v; using defaults", configPath, err))
return &defaultModelsConfig
}
cfg.DefaultBackend = strings.TrimSpace(cfg.DefaultBackend)
if cfg.DefaultBackend == "" {
cfg.DefaultBackend = defaultModelsConfig.DefaultBackend
}
cfg.DefaultModel = strings.TrimSpace(cfg.DefaultModel)
if cfg.DefaultModel == "" {
cfg.DefaultModel = defaultModelsConfig.DefaultModel
}
// Merge with defaults
if cfg.Agents == nil {
cfg.Agents = make(map[string]AgentModelConfig)
}
for name, agent := range defaultModelsConfig.Agents {
if _, exists := cfg.Agents[name]; !exists {
cfg.Agents[name] = agent
}
}
// Normalize backend keys so lookups can be case-insensitive.
if len(cfg.Backends) > 0 {
normalized := make(map[string]BackendConfig, len(cfg.Backends))
for k, v := range cfg.Backends {
key := strings.ToLower(strings.TrimSpace(k))
if key == "" {
continue
}
normalized[key] = v
}
if len(normalized) > 0 {
cfg.Backends = normalized
} else {
cfg.Backends = nil
}
}
return &cfg
}
func LoadDynamicAgent(name string) (AgentModelConfig, bool) {
if err := ValidateAgentName(name); err != nil {
return AgentModelConfig{}, false
}
home, err := os.UserHomeDir()
if err != nil || strings.TrimSpace(home) == "" {
return AgentModelConfig{}, false
}
absPath := filepath.Join(home, ".codeagent", "agents", name+".md")
info, err := os.Stat(absPath)
if err != nil || info.IsDir() {
return AgentModelConfig{}, false
}
return AgentModelConfig{PromptFile: "~/.codeagent/agents/" + name + ".md"}, true
}
func ResolveBackendConfig(backendName string) (baseURL, apiKey string) {
cfg := modelsConfig()
resolved := resolveBackendConfig(cfg, backendName)
return strings.TrimSpace(resolved.BaseURL), strings.TrimSpace(resolved.APIKey)
}
func resolveBackendConfig(cfg *ModelsConfig, backendName string) BackendConfig {
if cfg == nil || len(cfg.Backends) == 0 {
return BackendConfig{}
}
key := strings.ToLower(strings.TrimSpace(backendName))
if key == "" {
key = strings.ToLower(strings.TrimSpace(cfg.DefaultBackend))
}
if key == "" {
return BackendConfig{}
}
if backend, ok := cfg.Backends[key]; ok {
return backend
}
return BackendConfig{}
}
func resolveAgentConfig(agentName string) (backend, model, promptFile, reasoning, baseURL, apiKey string, yolo bool) {
cfg := modelsConfig()
if agent, ok := cfg.Agents[agentName]; ok {
backend = strings.TrimSpace(agent.Backend)
if backend == "" {
backend = cfg.DefaultBackend
}
backendCfg := resolveBackendConfig(cfg, backend)
baseURL = strings.TrimSpace(agent.BaseURL)
if baseURL == "" {
baseURL = strings.TrimSpace(backendCfg.BaseURL)
}
apiKey = strings.TrimSpace(agent.APIKey)
if apiKey == "" {
apiKey = strings.TrimSpace(backendCfg.APIKey)
}
return backend, strings.TrimSpace(agent.Model), agent.PromptFile, agent.Reasoning, baseURL, apiKey, agent.Yolo
}
if dynamic, ok := LoadDynamicAgent(agentName); ok {
backend = cfg.DefaultBackend
model = cfg.DefaultModel
backendCfg := resolveBackendConfig(cfg, backend)
baseURL = strings.TrimSpace(backendCfg.BaseURL)
apiKey = strings.TrimSpace(backendCfg.APIKey)
return backend, model, dynamic.PromptFile, "", baseURL, apiKey, false
}
backend = cfg.DefaultBackend
model = cfg.DefaultModel
backendCfg := resolveBackendConfig(cfg, backend)
baseURL = strings.TrimSpace(backendCfg.BaseURL)
apiKey = strings.TrimSpace(backendCfg.APIKey)
return backend, model, "", "", baseURL, apiKey, false
}
func ResolveAgentConfig(agentName string) (backend, model, promptFile, reasoning, baseURL, apiKey string, yolo bool) {
return resolveAgentConfig(agentName)
}
func ResetModelsConfigCacheForTest() {
modelsConfigCached = nil
modelsConfigOnce = sync.Once{}
}

View File

@@ -1,9 +1,8 @@
package main
package config
import (
"os"
"path/filepath"
"reflect"
"testing"
)
@@ -11,6 +10,8 @@ func TestResolveAgentConfig_Defaults(t *testing.T) {
home := t.TempDir()
t.Setenv("HOME", home)
t.Setenv("USERPROFILE", home)
t.Cleanup(ResetModelsConfigCacheForTest)
ResetModelsConfigCacheForTest()
// Test that default agents resolve correctly without config file
tests := []struct {
@@ -19,16 +20,16 @@ func TestResolveAgentConfig_Defaults(t *testing.T) {
wantModel string
wantPromptFile string
}{
{"oracle", "claude", "claude-sonnet-4-20250514", "~/.claude/skills/omo/references/oracle.md"},
{"librarian", "claude", "claude-sonnet-4-5-20250514", "~/.claude/skills/omo/references/librarian.md"},
{"explore", "opencode", "opencode/grok-code", "~/.claude/skills/omo/references/explore.md"},
{"frontend-ui-ux-engineer", "gemini", "", "~/.claude/skills/omo/references/frontend-ui-ux-engineer.md"},
{"document-writer", "gemini", "", "~/.claude/skills/omo/references/document-writer.md"},
}
{"oracle", "claude", "claude-opus-4-5-20251101", "~/.claude/skills/omo/references/oracle.md"},
{"librarian", "claude", "claude-sonnet-4-5-20250929", "~/.claude/skills/omo/references/librarian.md"},
{"explore", "opencode", "opencode/grok-code", "~/.claude/skills/omo/references/explore.md"},
{"frontend-ui-ux-engineer", "gemini", "", "~/.claude/skills/omo/references/frontend-ui-ux-engineer.md"},
{"document-writer", "gemini", "", "~/.claude/skills/omo/references/document-writer.md"},
}
for _, tt := range tests {
t.Run(tt.agent, func(t *testing.T) {
backend, model, promptFile, _, _ := resolveAgentConfig(tt.agent)
backend, model, promptFile, _, _, _, _ := resolveAgentConfig(tt.agent)
if backend != tt.wantBackend {
t.Errorf("backend = %q, want %q", backend, tt.wantBackend)
}
@@ -46,8 +47,10 @@ func TestResolveAgentConfig_UnknownAgent(t *testing.T) {
home := t.TempDir()
t.Setenv("HOME", home)
t.Setenv("USERPROFILE", home)
t.Cleanup(ResetModelsConfigCacheForTest)
ResetModelsConfigCacheForTest()
backend, model, promptFile, _, _ := resolveAgentConfig("unknown-agent")
backend, model, promptFile, _, _, _, _ := resolveAgentConfig("unknown-agent")
if backend != "opencode" {
t.Errorf("unknown agent backend = %q, want %q", backend, "opencode")
}
@@ -63,6 +66,8 @@ func TestLoadModelsConfig_NoFile(t *testing.T) {
home := "/nonexistent/path/that/does/not/exist"
t.Setenv("HOME", home)
t.Setenv("USERPROFILE", home)
t.Cleanup(ResetModelsConfigCacheForTest)
ResetModelsConfigCacheForTest()
cfg := loadModelsConfig()
if cfg.DefaultBackend != "opencode" {
@@ -84,11 +89,23 @@ func TestLoadModelsConfig_WithFile(t *testing.T) {
configContent := `{
"default_backend": "claude",
"default_model": "claude-opus-4",
"backends": {
"Claude": {
"base_url": "https://backend.example",
"api_key": "backend-key"
},
"codex": {
"base_url": "https://openai.example",
"api_key": "openai-key"
}
},
"agents": {
"custom-agent": {
"backend": "codex",
"model": "gpt-4o",
"description": "Custom agent"
"description": "Custom agent",
"base_url": "https://agent.example",
"api_key": "agent-key"
}
}
}`
@@ -99,6 +116,8 @@ func TestLoadModelsConfig_WithFile(t *testing.T) {
t.Setenv("HOME", tmpDir)
t.Setenv("USERPROFILE", tmpDir)
t.Cleanup(ResetModelsConfigCacheForTest)
ResetModelsConfigCacheForTest()
cfg := loadModelsConfig()
@@ -125,6 +144,55 @@ func TestLoadModelsConfig_WithFile(t *testing.T) {
if _, ok := cfg.Agents["oracle"]; !ok {
t.Error("default agent oracle should be merged")
}
baseURL, apiKey := ResolveBackendConfig("claude")
if baseURL != "https://backend.example" {
t.Errorf("ResolveBackendConfig(baseURL) = %q, want %q", baseURL, "https://backend.example")
}
if apiKey != "backend-key" {
t.Errorf("ResolveBackendConfig(apiKey) = %q, want %q", apiKey, "backend-key")
}
backend, model, _, _, agentBaseURL, agentAPIKey, _ := ResolveAgentConfig("custom-agent")
if backend != "codex" {
t.Errorf("ResolveAgentConfig(backend) = %q, want %q", backend, "codex")
}
if model != "gpt-4o" {
t.Errorf("ResolveAgentConfig(model) = %q, want %q", model, "gpt-4o")
}
if agentBaseURL != "https://agent.example" {
t.Errorf("ResolveAgentConfig(baseURL) = %q, want %q", agentBaseURL, "https://agent.example")
}
if agentAPIKey != "agent-key" {
t.Errorf("ResolveAgentConfig(apiKey) = %q, want %q", agentAPIKey, "agent-key")
}
}
func TestResolveAgentConfig_DynamicAgent(t *testing.T) {
home := t.TempDir()
t.Setenv("HOME", home)
t.Setenv("USERPROFILE", home)
t.Cleanup(ResetModelsConfigCacheForTest)
ResetModelsConfigCacheForTest()
agentDir := filepath.Join(home, ".codeagent", "agents")
if err := os.MkdirAll(agentDir, 0o755); err != nil {
t.Fatalf("MkdirAll: %v", err)
}
if err := os.WriteFile(filepath.Join(agentDir, "sarsh.md"), []byte("prompt\n"), 0o644); err != nil {
t.Fatalf("WriteFile: %v", err)
}
backend, model, promptFile, _, _, _, _ := resolveAgentConfig("sarsh")
if backend != "opencode" {
t.Errorf("backend = %q, want %q", backend, "opencode")
}
if model != "opencode/grok-code" {
t.Errorf("model = %q, want %q", model, "opencode/grok-code")
}
if promptFile != "~/.codeagent/agents/sarsh.md" {
t.Errorf("promptFile = %q, want %q", promptFile, "~/.codeagent/agents/sarsh.md")
}
}
func TestLoadModelsConfig_InvalidJSON(t *testing.T) {
@@ -142,6 +210,8 @@ func TestLoadModelsConfig_InvalidJSON(t *testing.T) {
t.Setenv("HOME", tmpDir)
t.Setenv("USERPROFILE", tmpDir)
t.Cleanup(ResetModelsConfigCacheForTest)
ResetModelsConfigCacheForTest()
cfg := loadModelsConfig()
// Should fall back to defaults
@@ -149,60 +219,3 @@ func TestLoadModelsConfig_InvalidJSON(t *testing.T) {
t.Errorf("invalid JSON should fallback, got DefaultBackend = %q", cfg.DefaultBackend)
}
}
func TestOpencodeBackend_BuildArgs(t *testing.T) {
backend := OpencodeBackend{}
t.Run("basic", func(t *testing.T) {
cfg := &Config{Mode: "new"}
got := backend.BuildArgs(cfg, "hello")
want := []string{"run", "--format", "json", "hello"}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
t.Run("with model", func(t *testing.T) {
cfg := &Config{Mode: "new", Model: "opencode/grok-code"}
got := backend.BuildArgs(cfg, "task")
want := []string{"run", "-m", "opencode/grok-code", "--format", "json", "task"}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
t.Run("resume mode", func(t *testing.T) {
cfg := &Config{Mode: "resume", SessionID: "ses_123", Model: "opencode/grok-code"}
got := backend.BuildArgs(cfg, "follow-up")
want := []string{"run", "-m", "opencode/grok-code", "-s", "ses_123", "--format", "json", "follow-up"}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
t.Run("resume without session", func(t *testing.T) {
cfg := &Config{Mode: "resume"}
got := backend.BuildArgs(cfg, "task")
want := []string{"run", "--format", "json", "task"}
if !reflect.DeepEqual(got, want) {
t.Errorf("got %v, want %v", got, want)
}
})
}
func TestOpencodeBackend_Interface(t *testing.T) {
backend := OpencodeBackend{}
if backend.Name() != "opencode" {
t.Errorf("Name() = %q, want %q", backend.Name(), "opencode")
}
if backend.Command() != "opencode" {
t.Errorf("Command() = %q, want %q", backend.Command(), "opencode")
}
}
func TestBackendRegistry_IncludesOpencode(t *testing.T) {
if _, ok := backendRegistry["opencode"]; !ok {
t.Error("backendRegistry should include opencode")
}
}

View File

@@ -0,0 +1,102 @@
package config
import (
"fmt"
"os"
"strconv"
"strings"
)
// Config holds CLI configuration.
type Config struct {
Mode string // "new" or "resume"
Task string
SessionID string
WorkDir string
Model string
ReasoningEffort string
ExplicitStdin bool
Timeout int
Backend string
Agent string
PromptFile string
PromptFileExplicit bool
SkipPermissions bool
Yolo bool
MaxParallelWorkers int
}
// EnvFlagEnabled reports whether the environment variable is set to a truthy
// value; unset, empty, "0", "false", "no", and "off" all count as false.
func EnvFlagEnabled(key string) bool {
val, ok := os.LookupEnv(key)
if !ok {
return false
}
val = strings.TrimSpace(strings.ToLower(val))
switch val {
case "", "0", "false", "no", "off":
return false
default:
return true
}
}
func ParseBoolFlag(val string, defaultValue bool) bool {
val = strings.TrimSpace(strings.ToLower(val))
switch val {
case "1", "true", "yes", "on":
return true
case "0", "false", "no", "off":
return false
default:
return defaultValue
}
}
// EnvFlagDefaultTrue returns true unless the env var is explicitly set to
// false/0/no/off.
func EnvFlagDefaultTrue(key string) bool {
val, ok := os.LookupEnv(key)
if !ok {
return true
}
return ParseBoolFlag(val, true)
}
func ValidateAgentName(name string) error {
if strings.TrimSpace(name) == "" {
return fmt.Errorf("agent name is empty")
}
for _, r := range name {
switch {
case r >= 'a' && r <= 'z':
case r >= 'A' && r <= 'Z':
case r >= '0' && r <= '9':
case r == '-', r == '_':
default:
return fmt.Errorf("agent name %q contains invalid character %q", name, r)
}
}
return nil
}
const maxParallelWorkersLimit = 100
// ResolveMaxParallelWorkers reads CODEAGENT_MAX_PARALLEL_WORKERS. It returns 0
// ("unlimited") when the value is unset, invalid, or negative, and clamps
// values above maxParallelWorkersLimit.
func ResolveMaxParallelWorkers() int {
raw := strings.TrimSpace(os.Getenv("CODEAGENT_MAX_PARALLEL_WORKERS"))
if raw == "" {
return 0
}
value, err := strconv.Atoi(raw)
if err != nil || value < 0 {
return 0
}
if value > maxParallelWorkersLimit {
return maxParallelWorkersLimit
}
return value
}

View File

@@ -0,0 +1,47 @@
package config
import (
"errors"
"os"
"path/filepath"
"strings"
"github.com/spf13/viper"
)
// NewViper returns a viper instance configured for CODEAGENT_* environment
// variables and an optional config file.
//
// Search order when configFile is empty:
// - $HOME/.codeagent/config.(yaml|yml|json|toml|...)
func NewViper(configFile string) (*viper.Viper, error) {
v := viper.New()
v.SetEnvPrefix("CODEAGENT")
v.SetEnvKeyReplacer(strings.NewReplacer("-", "_"))
v.AutomaticEnv()
if strings.TrimSpace(configFile) != "" {
v.SetConfigFile(configFile)
if err := v.ReadInConfig(); err != nil {
return nil, err
}
return v, nil
}
home, err := os.UserHomeDir()
if err != nil || strings.TrimSpace(home) == "" {
return v, nil
}
v.SetConfigName("config")
v.AddConfigPath(filepath.Join(home, ".codeagent"))
if err := v.ReadInConfig(); err != nil {
var notFound viper.ConfigFileNotFoundError
if errors.As(err, &notFound) {
return v, nil
}
return nil, err
}
return v, nil
}

View File

@@ -0,0 +1,193 @@
package executor
import (
"os"
"path/filepath"
"strings"
"testing"
backend "codeagent-wrapper/internal/backend"
config "codeagent-wrapper/internal/config"
)
// TestEnvInjectionWithAgent tests the full flow of env injection with agent config
func TestEnvInjectionWithAgent(t *testing.T) {
// Setup temp config
tmpDir := t.TempDir()
configDir := filepath.Join(tmpDir, ".codeagent")
if err := os.MkdirAll(configDir, 0755); err != nil {
t.Fatal(err)
}
// Write test config with agent that has base_url and api_key
configContent := `{
"default_backend": "codex",
"agents": {
"test-agent": {
"backend": "claude",
"model": "test-model",
"base_url": "https://test.api.com",
"api_key": "test-api-key-12345678"
}
}
}`
configPath := filepath.Join(configDir, "models.json")
if err := os.WriteFile(configPath, []byte(configContent), 0644); err != nil {
t.Fatal(err)
}
// Override HOME to use temp dir
t.Setenv("HOME", tmpDir)
// Reset config cache
config.ResetModelsConfigCacheForTest()
defer config.ResetModelsConfigCacheForTest()
// Test ResolveAgentConfig
agentBackend, model, _, _, baseURL, apiKey, _ := config.ResolveAgentConfig("test-agent")
t.Logf("ResolveAgentConfig: backend=%q, model=%q, baseURL=%q, apiKey=%q",
agentBackend, model, baseURL, apiKey)
if agentBackend != "claude" {
t.Errorf("expected backend 'claude', got %q", agentBackend)
}
if baseURL != "https://test.api.com" {
t.Errorf("expected baseURL 'https://test.api.com', got %q", baseURL)
}
if apiKey != "test-api-key-12345678" {
t.Errorf("expected apiKey 'test-api-key-12345678', got %q", apiKey)
}
// Test Backend.Env
b := backend.ClaudeBackend{}
env := b.Env(baseURL, apiKey)
t.Logf("Backend.Env: %v", env)
if env == nil {
t.Fatal("expected non-nil env from Backend.Env")
}
if env["ANTHROPIC_BASE_URL"] != baseURL {
t.Errorf("expected ANTHROPIC_BASE_URL=%q, got %q", baseURL, env["ANTHROPIC_BASE_URL"])
}
if env["ANTHROPIC_AUTH_TOKEN"] != apiKey {
t.Errorf("expected ANTHROPIC_AUTH_TOKEN=%q, got %q", apiKey, env["ANTHROPIC_AUTH_TOKEN"])
}
}
// TestEnvInjectionLogic tests the exact logic used in executor
func TestEnvInjectionLogic(t *testing.T) {
// Setup temp config
tmpDir := t.TempDir()
configDir := filepath.Join(tmpDir, ".codeagent")
if err := os.MkdirAll(configDir, 0755); err != nil {
t.Fatal(err)
}
configContent := `{
"default_backend": "codex",
"agents": {
"explore": {
"backend": "claude",
"model": "MiniMax-M2.1",
"base_url": "https://api.minimaxi.com/anthropic",
"api_key": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.test"
}
}
}`
configPath := filepath.Join(configDir, "models.json")
if err := os.WriteFile(configPath, []byte(configContent), 0644); err != nil {
t.Fatal(err)
}
t.Setenv("HOME", tmpDir)
config.ResetModelsConfigCacheForTest()
defer config.ResetModelsConfigCacheForTest()
// Simulate the executor logic
cfgBackend := "claude" // This should come from taskSpec.Backend
agentName := "explore"
// Step 1: Get backend config (usually empty for claude without global config)
baseURL, apiKey := config.ResolveBackendConfig(cfgBackend)
t.Logf("Step 1 - ResolveBackendConfig(%q): baseURL=%q, apiKey=%q", cfgBackend, baseURL, apiKey)
// Step 2: If agent specified, get agent config
if agentName != "" {
agentBackend, _, _, _, agentBaseURL, agentAPIKey, _ := config.ResolveAgentConfig(agentName)
t.Logf("Step 2 - ResolveAgentConfig(%q): backend=%q, baseURL=%q, apiKey=%q",
agentName, agentBackend, agentBaseURL, agentAPIKey)
// Step 3: Check if agent backend matches cfg backend
if strings.EqualFold(strings.TrimSpace(agentBackend), strings.TrimSpace(cfgBackend)) {
baseURL, apiKey = agentBaseURL, agentAPIKey
t.Logf("Step 3 - Backend match! Using agent config: baseURL=%q, apiKey=%q", baseURL, apiKey)
} else {
t.Logf("Step 3 - Backend mismatch: agent=%q, cfg=%q", agentBackend, cfgBackend)
}
}
// Step 4: Get env vars from backend
b := backend.ClaudeBackend{}
injected := b.Env(baseURL, apiKey)
t.Logf("Step 4 - Backend.Env: %v", injected)
// Verify
if len(injected) == 0 {
t.Fatal("Expected env vars to be injected, got none")
}
expectedURL := "https://api.minimaxi.com/anthropic"
if injected["ANTHROPIC_BASE_URL"] != expectedURL {
t.Errorf("ANTHROPIC_BASE_URL: expected %q, got %q", expectedURL, injected["ANTHROPIC_BASE_URL"])
}
if _, ok := injected["ANTHROPIC_AUTH_TOKEN"]; !ok {
t.Error("ANTHROPIC_AUTH_TOKEN not set")
}
// Step 5: Test masking
for k, v := range injected {
masked := maskSensitiveValue(k, v)
t.Logf("Step 5 - Env log: %s=%s", k, masked)
}
}
// TestTaskSpecBackendPropagation tests that taskSpec.Backend is properly used
func TestTaskSpecBackendPropagation(t *testing.T) {
// Simulate what happens in RunCodexTaskWithContext
taskSpec := TaskSpec{
ID: "test",
Task: "hello",
Backend: "claude",
Agent: "explore",
}
// This is the logic from executor.go lines 889-916
cfg := &config.Config{
Mode: "new",
Task: taskSpec.Task,
Backend: "codex", // default
}
var backend Backend = nil // nil in single mode
commandName := "codex" // default
if backend != nil {
cfg.Backend = backend.Name()
} else if taskSpec.Backend != "" {
cfg.Backend = taskSpec.Backend
} else if commandName != "" {
cfg.Backend = commandName
}
t.Logf("taskSpec.Backend=%q, cfg.Backend=%q", taskSpec.Backend, cfg.Backend)
if cfg.Backend != "claude" {
t.Errorf("expected cfg.Backend='claude', got %q", cfg.Backend)
}
}

View File

@@ -0,0 +1,333 @@
package executor
import (
"strings"
"testing"
backend "codeagent-wrapper/internal/backend"
)
func TestMaskSensitiveValue(t *testing.T) {
tests := []struct {
name string
key string
value string
expected string
}{
{
name: "API_KEY with long value",
key: "ANTHROPIC_AUTH_TOKEN",
value: "sk-ant-api03-xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
expected: "sk-a****xxxx",
},
{
name: "api_key lowercase",
key: "api_key",
value: "abcdefghijklmnop",
expected: "abcd****mnop",
},
{
name: "AUTH_TOKEN",
key: "AUTH_TOKEN",
value: "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9",
expected: "eyJh****VCJ9",
},
{
name: "SECRET",
key: "MY_SECRET",
value: "super-secret-value-12345",
expected: "supe****2345",
},
{
name: "short key value (8 chars)",
key: "API_KEY",
value: "12345678",
expected: "****",
},
{
name: "very short key value",
key: "API_KEY",
value: "abc",
expected: "****",
},
{
name: "empty key value",
key: "API_KEY",
value: "",
expected: "",
},
{
name: "non-sensitive BASE_URL",
key: "ANTHROPIC_BASE_URL",
value: "https://api.anthropic.com",
expected: "https://api.anthropic.com",
},
{
name: "non-sensitive MODEL",
key: "MODEL",
value: "claude-3-opus",
expected: "claude-3-opus",
},
{
name: "case insensitive - Key",
key: "My_Key",
value: "1234567890abcdef",
expected: "1234****cdef",
},
{
name: "case insensitive - TOKEN",
key: "ACCESS_TOKEN",
value: "access123456789",
expected: "acce****6789",
},
{
name: "partial match - apikey",
key: "MYAPIKEY",
value: "1234567890",
expected: "1234****7890",
},
{
name: "partial match - secretvalue",
key: "SECRETVALUE",
value: "abcdefghij",
expected: "abcd****ghij",
},
{
name: "9 char value (just above threshold)",
key: "API_KEY",
value: "123456789",
expected: "1234****6789",
},
{
name: "exactly 8 char value (at threshold)",
key: "API_KEY",
value: "12345678",
expected: "****",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := maskSensitiveValue(tt.key, tt.value)
if result != tt.expected {
t.Errorf("maskSensitiveValue(%q, %q) = %q, want %q", tt.key, tt.value, result, tt.expected)
}
})
}
}
func TestMaskSensitiveValue_NoLeakage(t *testing.T) {
// Ensure sensitive values are never fully exposed
sensitiveKeys := []string{"API_KEY", "api_key", "AUTH_TOKEN", "SECRET", "access_token", "MYAPIKEY"}
longValue := "this-is-a-very-long-secret-value-that-should-be-masked"
for _, key := range sensitiveKeys {
t.Run(key, func(t *testing.T) {
masked := maskSensitiveValue(key, longValue)
// Should not contain the full value
if masked == longValue {
t.Errorf("key %q: value was not masked", key)
}
// Should contain mask marker
if !strings.Contains(masked, "****") {
t.Errorf("key %q: masked value %q does not contain ****", key, masked)
}
// First 4 chars should be visible
if !strings.HasPrefix(masked, longValue[:4]) {
t.Errorf("key %q: masked value should start with first 4 chars", key)
}
// Last 4 chars should be visible
if !strings.HasSuffix(masked, longValue[len(longValue)-4:]) {
t.Errorf("key %q: masked value should end with last 4 chars", key)
}
})
}
}
func TestMaskSensitiveValue_NonSensitivePassthrough(t *testing.T) {
// Non-sensitive keys should pass through unchanged
nonSensitiveKeys := []string{
"ANTHROPIC_BASE_URL",
"BASE_URL",
"MODEL",
"BACKEND",
"WORKDIR",
"HOME",
"PATH",
}
value := "any-value-here-12345"
for _, key := range nonSensitiveKeys {
t.Run(key, func(t *testing.T) {
result := maskSensitiveValue(key, value)
if result != value {
t.Errorf("key %q: expected passthrough but got %q", key, result)
}
})
}
}
// TestClaudeBackendEnv tests that ClaudeBackend.Env returns correct env vars
func TestClaudeBackendEnv(t *testing.T) {
tests := []struct {
name string
baseURL string
apiKey string
expectKeys []string
expectNil bool
}{
{
name: "both base_url and api_key",
baseURL: "https://api.custom.com",
apiKey: "sk-test-key-12345",
expectKeys: []string{"ANTHROPIC_BASE_URL", "ANTHROPIC_AUTH_TOKEN"},
},
{
name: "only base_url",
baseURL: "https://api.custom.com",
apiKey: "",
expectKeys: []string{"ANTHROPIC_BASE_URL"},
},
{
name: "only api_key",
baseURL: "",
apiKey: "sk-test-key-12345",
expectKeys: []string{"ANTHROPIC_AUTH_TOKEN"},
},
{
name: "both empty",
baseURL: "",
apiKey: "",
expectNil: true,
},
{
name: "whitespace only",
baseURL: " ",
apiKey: " ",
expectNil: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
b := backend.ClaudeBackend{}
env := b.Env(tt.baseURL, tt.apiKey)
if tt.expectNil {
if env != nil {
t.Errorf("expected nil env, got %v", env)
}
return
}
if env == nil {
t.Fatal("expected non-nil env")
}
for _, key := range tt.expectKeys {
if _, ok := env[key]; !ok {
t.Errorf("expected key %q in env", key)
}
}
// Verify values are correct
if strings.TrimSpace(tt.baseURL) != "" {
if env["ANTHROPIC_BASE_URL"] != strings.TrimSpace(tt.baseURL) {
t.Errorf("ANTHROPIC_BASE_URL = %q, want %q", env["ANTHROPIC_BASE_URL"], strings.TrimSpace(tt.baseURL))
}
}
if strings.TrimSpace(tt.apiKey) != "" {
if env["ANTHROPIC_AUTH_TOKEN"] != strings.TrimSpace(tt.apiKey) {
t.Errorf("ANTHROPIC_AUTH_TOKEN = %q, want %q", env["ANTHROPIC_AUTH_TOKEN"], strings.TrimSpace(tt.apiKey))
}
}
})
}
}
// TestEnvLoggingIntegration tests that env vars are properly masked in logs
func TestEnvLoggingIntegration(t *testing.T) {
b := backend.ClaudeBackend{}
baseURL := "https://api.minimaxi.com/anthropic"
apiKey := "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.longjwttoken"
env := b.Env(baseURL, apiKey)
if env == nil {
t.Fatal("expected non-nil env")
}
// Verify that when we log these values, sensitive ones are masked
for k, v := range env {
masked := maskSensitiveValue(k, v)
if k == "ANTHROPIC_BASE_URL" {
// URL should not be masked
if masked != v {
t.Errorf("BASE_URL should not be masked: got %q, want %q", masked, v)
}
}
if k == "ANTHROPIC_AUTH_TOKEN" {
// API key should be masked
if masked == v {
t.Errorf("API_KEY should be masked, but got original value")
}
if !strings.Contains(masked, "****") {
t.Errorf("masked API_KEY should contain ****: got %q", masked)
}
// Should still show first 4 and last 4 chars
if !strings.HasPrefix(masked, v[:4]) {
t.Errorf("masked value should start with first 4 chars of original")
}
if !strings.HasSuffix(masked, v[len(v)-4:]) {
t.Errorf("masked value should end with last 4 chars of original")
}
}
}
}
// TestGeminiBackendEnv tests GeminiBackend.Env for comparison
func TestGeminiBackendEnv(t *testing.T) {
b := backend.GeminiBackend{}
env := b.Env("https://custom.api", "gemini-api-key-12345")
if env == nil {
t.Fatal("expected non-nil env")
}
// Check that GEMINI env vars are set
if _, ok := env["GOOGLE_GEMINI_BASE_URL"]; !ok {
t.Error("expected GOOGLE_GEMINI_BASE_URL in env")
}
if _, ok := env["GEMINI_API_KEY"]; !ok {
t.Error("expected GEMINI_API_KEY in env")
}
// Verify masking works for Gemini keys too
for k, v := range env {
masked := maskSensitiveValue(k, v)
if strings.Contains(strings.ToLower(k), "key") {
if masked == v && len(v) > 0 {
t.Errorf("key %q should be masked", k)
}
}
}
}
// TestCodexBackendEnv tests CodexBackend.Env
func TestCodexBackendEnv(t *testing.T) {
b := backend.CodexBackend{}
env := b.Env("https://custom.api", "codex-api-key-12345")
if env == nil {
t.Fatal("expected non-nil env for codex")
}
// Check for OPENAI env vars
if _, ok := env["OPENAI_BASE_URL"]; !ok {
t.Error("expected OPENAI_BASE_URL in env")
}
if _, ok := env["OPENAI_API_KEY"]; !ok {
t.Error("expected OPENAI_API_KEY in env")
}
}

View File

@@ -0,0 +1,133 @@
package executor
import (
"context"
"errors"
"io"
"os"
"path/filepath"
"strings"
"testing"
config "codeagent-wrapper/internal/config"
)
type fakeCmd struct {
env map[string]string
}
func (f *fakeCmd) Start() error { return nil }
func (f *fakeCmd) Wait() error { return nil }
func (f *fakeCmd) StdoutPipe() (io.ReadCloser, error) {
return io.NopCloser(strings.NewReader("")), nil
}
func (f *fakeCmd) StderrPipe() (io.ReadCloser, error) {
return nil, errors.New("fake stderr pipe error")
}
func (f *fakeCmd) StdinPipe() (io.WriteCloser, error) {
return nil, errors.New("fake stdin pipe error")
}
func (f *fakeCmd) SetStderr(io.Writer) {}
func (f *fakeCmd) SetDir(string) {}
func (f *fakeCmd) SetEnv(env map[string]string) {
if len(env) == 0 {
return
}
if f.env == nil {
f.env = make(map[string]string, len(env))
}
for k, v := range env {
f.env[k] = v
}
}
func (f *fakeCmd) Process() processHandle { return nil }
func TestEnvInjection_LogsToStderrAndMasksKey(t *testing.T) {
// Arrange ~/.codeagent/models.json via HOME override.
tmpDir := t.TempDir()
configDir := filepath.Join(tmpDir, ".codeagent")
if err := os.MkdirAll(configDir, 0o755); err != nil {
t.Fatal(err)
}
const baseURL = "https://api.minimaxi.com/anthropic"
const apiKey = "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.test"
models := `{
"agents": {
"explore": {
"backend": "claude",
"model": "MiniMax-M2.1",
"base_url": "` + baseURL + `",
"api_key": "` + apiKey + `"
}
}
}`
if err := os.WriteFile(filepath.Join(configDir, "models.json"), []byte(models), 0o644); err != nil {
t.Fatal(err)
}
t.Setenv("HOME", tmpDir)
config.ResetModelsConfigCacheForTest()
defer config.ResetModelsConfigCacheForTest()
// Capture stderr (RunCodexTaskWithContext prints env injection lines there).
r, w, err := os.Pipe()
if err != nil {
t.Fatal(err)
}
oldStderr := os.Stderr
os.Stderr = w
defer func() { os.Stderr = oldStderr }()
readDone := make(chan string, 1)
go func() {
defer r.Close()
b, _ := io.ReadAll(r)
readDone <- string(b)
}()
var cmd *fakeCmd
restoreRunner := SetNewCommandRunner(func(ctx context.Context, name string, args ...string) CommandRunner {
cmd = &fakeCmd{}
return cmd
})
defer restoreRunner()
// Act: force an early return right after env injection by making StderrPipe fail.
_ = RunCodexTaskWithContext(
context.Background(),
TaskSpec{Task: "hi", WorkDir: ".", Backend: "claude", Agent: "explore"},
nil,
"claude",
nil,
nil,
false,
false,
1,
)
_ = w.Close()
got := <-readDone
// Assert: env was injected into the command and logging is present with masking.
if cmd == nil || cmd.env == nil {
t.Fatalf("expected cmd env to be set, got cmd=%v env=%v", cmd, nil)
}
if cmd.env["ANTHROPIC_BASE_URL"] != baseURL {
t.Fatalf("ANTHROPIC_BASE_URL=%q, want %q", cmd.env["ANTHROPIC_BASE_URL"], baseURL)
}
if cmd.env["ANTHROPIC_AUTH_TOKEN"] != apiKey {
t.Fatalf("ANTHROPIC_AUTH_TOKEN=%q, want %q", cmd.env["ANTHROPIC_AUTH_TOKEN"], apiKey)
}
if !strings.Contains(got, "Env: ANTHROPIC_BASE_URL="+baseURL) {
t.Fatalf("stderr missing base URL env log; stderr=%q", got)
}
if !strings.Contains(got, "Env: ANTHROPIC_AUTH_TOKEN=eyJh****test") {
t.Fatalf("stderr missing masked API key log; stderr=%q", got)
}
}

View File

@@ -1,4 +1,4 @@
package main
package executor
import (
"context"
@@ -14,11 +14,92 @@ import (
"sync/atomic"
"syscall"
"time"
backend "codeagent-wrapper/internal/backend"
config "codeagent-wrapper/internal/config"
ilogger "codeagent-wrapper/internal/logger"
parser "codeagent-wrapper/internal/parser"
utils "codeagent-wrapper/internal/utils"
)
const postMessageTerminateDelay = 1 * time.Second
const forceKillWaitTimeout = 5 * time.Second
// Defaults duplicated from wrapper for module decoupling.
const (
defaultWorkdir = "."
defaultCoverageTarget = 90.0
defaultBackendName = "codex"
codexLogLineLimit = 1000
stderrCaptureLimit = 4 * 1024
)
const (
// stdout close reasons
stdoutCloseReasonWait = "wait-done"
stdoutCloseReasonDrain = "drain-timeout"
stdoutCloseReasonCtx = "context-cancel"
stdoutDrainTimeout = 100 * time.Millisecond
)
// Hook points (tests can override inside this package).
var (
selectBackendFn = backend.Select
commandContext = exec.CommandContext
terminateCommandFn = terminateCommand
)
var forceKillDelay atomic.Int32
func init() {
forceKillDelay.Store(5) // seconds - default value
}
type (
Backend = backend.Backend
Config = config.Config
Logger = ilogger.Logger
)
type minimalClaudeSettings = backend.MinimalClaudeSettings
func loadMinimalClaudeSettings() minimalClaudeSettings { return backend.LoadMinimalClaudeSettings() }
func loadGeminiEnv() map[string]string { return backend.LoadGeminiEnv() }
func NewLogger() (*Logger, error) { return ilogger.NewLogger() }
func NewLoggerWithSuffix(suffix string) (*Logger, error) { return ilogger.NewLoggerWithSuffix(suffix) }
func setLogger(l *Logger) { ilogger.SetLogger(l) }
func closeLogger() error { return ilogger.CloseLogger() }
func activeLogger() *Logger { return ilogger.ActiveLogger() }
func logInfo(msg string) { ilogger.LogInfo(msg) }
func logWarn(msg string) { ilogger.LogWarn(msg) }
func logError(msg string) { ilogger.LogError(msg) }
func logConcurrencyPlanning(limit, total int) { ilogger.LogConcurrencyPlanning(limit, total) }
func logConcurrencyState(event, taskID string, active, limit int) {
ilogger.LogConcurrencyState(event, taskID, active, limit)
}
func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(string), onMessage func(), onComplete func()) (message, threadID string) {
return parser.ParseJSONStreamInternal(r, warnFn, infoFn, onMessage, onComplete)
}
func sanitizeOutput(s string) string { return utils.SanitizeOutput(s) }
func safeTruncate(s string, maxLen int) string { return utils.SafeTruncate(s, maxLen) }
func min(a, b int) int { return utils.Min(a, b) }
// commandRunner abstracts exec.Cmd for testability
type commandRunner interface {
Start() error
@@ -230,7 +311,7 @@ func newTaskLoggerHandle(taskID string) taskLoggerHandle {
}
// defaultRunCodexTaskFn is the default implementation of runCodexTaskFn (exposed for test reset)
func defaultRunCodexTaskFn(task TaskSpec, timeout int) TaskResult {
func DefaultRunCodexTaskFn(task TaskSpec, timeout int) TaskResult {
if task.WorkDir == "" {
task.WorkDir = defaultWorkdir
}
@@ -238,13 +319,13 @@ func defaultRunCodexTaskFn(task TaskSpec, timeout int) TaskResult {
task.Mode = "new"
}
if strings.TrimSpace(task.PromptFile) != "" {
prompt, err := readAgentPromptFile(task.PromptFile, false)
prompt, err := ReadAgentPromptFile(task.PromptFile, false)
if err != nil {
return TaskResult{TaskID: task.ID, ExitCode: 1, Error: "failed to read prompt file: " + err.Error()}
}
task.Task = wrapTaskWithAgentPrompt(prompt, task.Task)
task.Task = WrapTaskWithAgentPrompt(prompt, task.Task)
}
if task.UseStdin || shouldUseStdin(task.Task, false) {
if task.UseStdin || ShouldUseStdin(task.Task, false) {
task.UseStdin = true
}
@@ -263,12 +344,10 @@ func defaultRunCodexTaskFn(task TaskSpec, timeout int) TaskResult {
if parentCtx == nil {
parentCtx = context.Background()
}
return runCodexTaskWithContext(parentCtx, task, backend, nil, false, true, timeout)
return RunCodexTaskWithContext(parentCtx, task, backend, "", nil, nil, false, true, timeout)
}
var runCodexTaskFn = defaultRunCodexTaskFn
func topologicalSort(tasks []TaskSpec) ([][]TaskSpec, error) {
func TopologicalSort(tasks []TaskSpec) ([][]TaskSpec, error) {
idToTask := make(map[string]TaskSpec, len(tasks))
indegree := make(map[string]int, len(tasks))
adj := make(map[string][]string, len(tasks))
@@ -334,12 +413,16 @@ func topologicalSort(tasks []TaskSpec) ([][]TaskSpec, error) {
return layers, nil
}
func executeConcurrent(layers [][]TaskSpec, timeout int) []TaskResult {
maxWorkers := resolveMaxParallelWorkers()
return executeConcurrentWithContext(context.Background(), layers, timeout, maxWorkers)
func ExecuteConcurrent(layers [][]TaskSpec, timeout int, runTask func(TaskSpec, int) TaskResult) []TaskResult {
maxWorkers := config.ResolveMaxParallelWorkers()
return ExecuteConcurrentWithContext(context.Background(), layers, timeout, maxWorkers, runTask)
}
func executeConcurrentWithContext(parentCtx context.Context, layers [][]TaskSpec, timeout int, maxWorkers int) []TaskResult {
func ExecuteConcurrentWithContext(parentCtx context.Context, layers [][]TaskSpec, timeout int, maxWorkers int, runTask func(TaskSpec, int) TaskResult) []TaskResult {
if runTask == nil {
runTask = DefaultRunCodexTaskFn
}
totalTasks := 0
for _, layer := range layers {
totalTasks += len(layer)
@@ -470,7 +553,7 @@ func executeConcurrentWithContext(parentCtx context.Context, layers [][]TaskSpec
printTaskStart(ts.ID, taskLogPath, handle.shared)
res := runCodexTaskFn(ts, timeout)
res := runTask(ts, timeout)
if taskLogPath != "" {
if res.LogPath == "" || (handle.shared && handle.logger != nil && res.LogPath == handle.logger.Path()) {
res.LogPath = taskLogPath
@@ -535,14 +618,14 @@ func getStatusSymbols() (success, warning, failed string) {
return "✓", "⚠️", "✗"
}
func generateFinalOutput(results []TaskResult) string {
return generateFinalOutputWithMode(results, true) // default to summary mode
func GenerateFinalOutput(results []TaskResult) string {
return GenerateFinalOutputWithMode(results, true) // default to summary mode
}
// generateFinalOutputWithMode generates output based on mode
// summaryOnly=true: structured report - every token has value
// summaryOnly=false: full output with complete messages (legacy behavior)
func generateFinalOutputWithMode(results []TaskResult, summaryOnly bool) string {
func GenerateFinalOutputWithMode(results []TaskResult, summaryOnly bool) string {
var sb strings.Builder
successSymbol, warningSymbol, failedSymbol := getStatusSymbols()
@@ -756,7 +839,7 @@ func buildCodexArgs(cfg *Config, targetArg string) []string {
args := []string{"e"}
// Default to bypass sandbox unless CODEX_BYPASS_SANDBOX=false
if cfg.Yolo || envFlagDefaultTrue("CODEX_BYPASS_SANDBOX") {
if cfg.Yolo || config.EnvFlagDefaultTrue("CODEX_BYPASS_SANDBOX") {
logWarn("YOLO mode or CODEX_BYPASS_SANDBOX enabled: running without approval/sandbox protection")
args = append(args, "--dangerously-bypass-approvals-and-sandbox")
}
@@ -766,7 +849,7 @@ func buildCodexArgs(cfg *Config, targetArg string) []string {
}
if reasoningEffort := strings.TrimSpace(cfg.ReasoningEffort); reasoningEffort != "" {
args = append(args, "--reasoning-effort", reasoningEffort)
args = append(args, "-c", "model_reasoning_effort="+reasoningEffort)
}
args = append(args, "--skip-git-repo-check")
@@ -787,25 +870,20 @@ func buildCodexArgs(cfg *Config, targetArg string) []string {
)
}
func runCodexTask(taskSpec TaskSpec, silent bool, timeoutSec int) TaskResult {
return runCodexTaskWithContext(context.Background(), taskSpec, nil, nil, false, silent, timeoutSec)
}
func runCodexProcess(parentCtx context.Context, codexArgs []string, taskText string, useStdin bool, timeoutSec int) (message, threadID string, exitCode int) {
res := runCodexTaskWithContext(parentCtx, TaskSpec{Task: taskText, WorkDir: defaultWorkdir, Mode: "new", UseStdin: useStdin}, nil, codexArgs, true, false, timeoutSec)
return res.Message, res.SessionID, res.ExitCode
}
func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backend Backend, customArgs []string, useCustomArgs bool, silent bool, timeoutSec int) TaskResult {
func RunCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backend Backend, defaultCommandName string, defaultArgsBuilder func(*Config, string) []string, customArgs []string, useCustomArgs bool, silent bool, timeoutSec int) TaskResult {
taskCtx := taskSpec.Context
if parentCtx == nil {
parentCtx = taskSpec.Context
parentCtx = taskCtx
}
if parentCtx == nil {
parentCtx = context.Background()
}
result := TaskResult{TaskID: taskSpec.ID}
injectedLogger := taskLoggerFromContext(parentCtx)
injectedLogger := taskLoggerFromContext(taskCtx)
if injectedLogger == nil {
injectedLogger = taskLoggerFromContext(parentCtx)
}
logger := injectedLogger
cfg := &Config{
@@ -819,8 +897,14 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
Backend: defaultBackendName,
}
commandName := codexCommand
argsBuilder := buildCodexArgsFn
commandName := strings.TrimSpace(defaultCommandName)
if commandName == "" {
commandName = defaultBackendName
}
argsBuilder := defaultArgsBuilder
if argsBuilder == nil {
argsBuilder = buildCodexArgs
}
if backend != nil {
commandName = backend.Command()
argsBuilder = backend.BuildArgs
@@ -844,19 +928,23 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
return result
}
var claudeEnv map[string]string
var fileEnv map[string]string
if cfg.Backend == "claude" {
settings := loadMinimalClaudeSettings()
claudeEnv = settings.Env
fileEnv = settings.Env
if cfg.Mode != "resume" && strings.TrimSpace(cfg.Model) == "" && settings.Model != "" {
cfg.Model = settings.Model
}
}
// Load gemini env from ~/.gemini/.env if exists
var geminiEnv map[string]string
if cfg.Backend == "gemini" {
geminiEnv = loadGeminiEnv()
fileEnv = loadGeminiEnv()
if cfg.Mode != "resume" && strings.TrimSpace(cfg.Model) == "" {
if model := fileEnv["GEMINI_MODEL"]; model != "" {
cfg.Model = model
}
}
}
useStdin := taskSpec.UseStdin
@@ -958,11 +1046,34 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
cmd := newCommandRunner(ctx, commandName, codexArgs...)
if cfg.Backend == "claude" && len(claudeEnv) > 0 {
cmd.SetEnv(claudeEnv)
if len(fileEnv) > 0 {
cmd.SetEnv(fileEnv)
}
if cfg.Backend == "gemini" && len(geminiEnv) > 0 {
cmd.SetEnv(geminiEnv)
envBackend := backend
if envBackend == nil && cfg.Backend != "" {
if b, err := selectBackendFn(cfg.Backend); err == nil {
envBackend = b
}
}
if envBackend != nil {
baseURL, apiKey := config.ResolveBackendConfig(cfg.Backend)
if agentName := strings.TrimSpace(taskSpec.Agent); agentName != "" {
agentBackend, _, _, _, agentBaseURL, agentAPIKey, _ := config.ResolveAgentConfig(agentName)
if strings.EqualFold(strings.TrimSpace(agentBackend), strings.TrimSpace(cfg.Backend)) {
baseURL, apiKey = agentBaseURL, agentAPIKey
}
}
if injected := envBackend.Env(baseURL, apiKey); len(injected) > 0 {
cmd.SetEnv(injected)
// Log injected env vars with masked API keys (to file and stderr)
for k, v := range injected {
msg := fmt.Sprintf("Env: %s=%s", k, maskSensitiveValue(k, v))
logInfoFn(msg)
fmt.Fprintln(os.Stderr, " "+msg)
}
}
}
// For backends that don't support -C flag (claude, gemini), set working directory via cmd.Dir
@@ -983,6 +1094,9 @@ func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
if cfg.Backend == "gemini" {
stderrFilter = newFilteringWriter(os.Stderr, geminiNoisePatterns)
stderrOut = stderrFilter
} else if cfg.Backend == "codex" {
stderrFilter = newFilteringWriter(os.Stderr, codexNoisePatterns)
stderrOut = stderrFilter
}
stderrWriters = append([]io.Writer{stderrOut}, stderrWriters...)
}
@@ -1199,11 +1313,9 @@ waitLoop:
case parsed = <-parseCh:
closeWithReason(stdout, stdoutCloseReasonWait)
case <-messageSeen:
messageSeenObserved = true
closeWithReason(stdout, stdoutCloseReasonWait)
parsed = <-parseCh
case <-completeSeen:
completeSeenObserved = true
closeWithReason(stdout, stdoutCloseReasonWait)
parsed = <-parseCh
case <-drainTimer.C:
@@ -1273,44 +1385,13 @@ waitLoop:
return result
}
func forwardSignals(ctx context.Context, cmd commandRunner, logErrorFn func(string)) {
notify := signalNotifyFn
stop := signalStopFn
if notify == nil {
notify = signal.Notify
}
if stop == nil {
stop = signal.Stop
}
sigCh := make(chan os.Signal, 1)
notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
go func() {
defer stop(sigCh)
select {
case sig := <-sigCh:
logErrorFn(fmt.Sprintf("Received signal: %v", sig))
if proc := cmd.Process(); proc != nil {
_ = sendTermSignal(proc)
time.AfterFunc(time.Duration(forceKillDelay.Load())*time.Second, func() {
if p := cmd.Process(); p != nil {
_ = p.Kill()
}
})
}
case <-ctx.Done():
}
}()
}
func cancelReason(commandName string, ctx context.Context) string {
if ctx == nil {
return "Context cancelled"
}
if commandName == "" {
commandName = codexCommand
commandName = defaultBackendName
}
if errors.Is(ctx.Err(), context.DeadlineExceeded) {
@@ -1375,20 +1456,18 @@ func terminateCommand(cmd commandRunner) *forceKillTimer {
return &forceKillTimer{timer: timer, done: done}
}
func terminateProcess(cmd commandRunner) *time.Timer {
if cmd == nil {
return nil
}
proc := cmd.Process()
if proc == nil {
return nil
}
_ = sendTermSignal(proc)
return time.AfterFunc(time.Duration(forceKillDelay.Load())*time.Second, func() {
if p := cmd.Process(); p != nil {
_ = p.Kill()
// maskSensitiveValue masks sensitive values like API keys for logging.
// Keys containing "key", "token", or "secret" (case-insensitive) trigger masking.
// Values longer than 8 chars show the first 4 + **** + last 4.
// Shorter non-empty values become ****; empty values and non-sensitive keys pass through unchanged.
func maskSensitiveValue(key, value string) string {
keyLower := strings.ToLower(key)
if strings.Contains(keyLower, "key") || strings.Contains(keyLower, "token") || strings.Contains(keyLower, "secret") {
if len(value) > 8 {
return value[:4] + "****" + value[len(value)-4:]
} else if len(value) > 0 {
return "****"
}
})
}
return value
}
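A quick standalone sketch of how the masking above behaves (the function is copied here so the example is self-contained; the key/value inputs are made up for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// maskSensitiveValue mirrors the rules above: keys containing "key", "token",
// or "secret" (case-insensitive) are masked; long values keep the first and
// last 4 characters, short non-empty ones become "****".
func maskSensitiveValue(key, value string) string {
	keyLower := strings.ToLower(key)
	if strings.Contains(keyLower, "key") || strings.Contains(keyLower, "token") || strings.Contains(keyLower, "secret") {
		if len(value) > 8 {
			return value[:4] + "****" + value[len(value)-4:]
		} else if len(value) > 0 {
			return "****"
		}
	}
	return value
}

func main() {
	fmt.Println(maskSensitiveValue("ANTHROPIC_AUTH_TOKEN", "sk-ant-1234567890")) // sk-a****7890
	fmt.Println(maskSensitiveValue("ANTHROPIC_AUTH_TOKEN", "short"))             // ****
	fmt.Println(maskSensitiveValue("ANTHROPIC_BASE_URL", "https://example.com")) // passed through: key is not sensitive
}
```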

View File

@@ -1,4 +1,4 @@
package main
package executor
import (
"bytes"
@@ -18,6 +18,12 @@ var geminiNoisePatterns = []string{
"YOLO mode is enabled",
}
// codexNoisePatterns contains stderr patterns to filter for codex backend
var codexNoisePatterns = []string{
"ERROR codex_core::codex: needs_follow_up:",
"ERROR codex_core::skills::loader:",
}
// filteringWriter wraps an io.Writer and filters out lines matching patterns
type filteringWriter struct {
w io.Writer
@@ -39,7 +45,7 @@ func (f *filteringWriter) Write(p []byte) (n int, err error) {
break
}
if !f.shouldFilter(line) {
f.w.Write([]byte(line))
_, _ = f.w.Write([]byte(line))
}
}
return len(p), nil
@@ -59,7 +65,7 @@ func (f *filteringWriter) Flush() {
if f.buf.Len() > 0 {
remaining := f.buf.String()
if !f.shouldFilter(remaining) {
f.w.Write([]byte(remaining))
_, _ = f.w.Write([]byte(remaining))
}
f.buf.Reset()
}

View File

@@ -1,4 +1,4 @@
package main
package executor
import (
"bytes"
@@ -48,7 +48,7 @@ func TestFilteringWriter(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
var buf bytes.Buffer
fw := newFilteringWriter(&buf, tt.patterns)
fw.Write([]byte(tt.input))
_, _ = fw.Write([]byte(tt.input))
fw.Flush()
if got := buf.String(); got != tt.want {
@@ -63,8 +63,8 @@ func TestFilteringWriterPartialLines(t *testing.T) {
fw := newFilteringWriter(&buf, geminiNoisePatterns)
// Write partial line
fw.Write([]byte("Hello "))
fw.Write([]byte("World\n"))
_, _ = fw.Write([]byte("Hello "))
_, _ = fw.Write([]byte("World\n"))
fw.Flush()
if got := buf.String(); got != "Hello World\n" {

View File

@@ -0,0 +1,124 @@
package executor
import "bytes"
type logWriter struct {
prefix string
maxLen int
buf bytes.Buffer
dropped bool
}
func newLogWriter(prefix string, maxLen int) *logWriter {
if maxLen <= 0 {
maxLen = codexLogLineLimit
}
return &logWriter{prefix: prefix, maxLen: maxLen}
}
func (lw *logWriter) Write(p []byte) (int, error) {
if lw == nil {
return len(p), nil
}
total := len(p)
for len(p) > 0 {
if idx := bytes.IndexByte(p, '\n'); idx >= 0 {
lw.writeLimited(p[:idx])
lw.logLine(true)
p = p[idx+1:]
continue
}
lw.writeLimited(p)
break
}
return total, nil
}
func (lw *logWriter) Flush() {
if lw == nil || lw.buf.Len() == 0 {
return
}
lw.logLine(false)
}
func (lw *logWriter) logLine(force bool) {
if lw == nil {
return
}
line := lw.buf.String()
dropped := lw.dropped
lw.dropped = false
lw.buf.Reset()
if line == "" && !force {
return
}
if lw.maxLen > 0 {
if dropped {
if lw.maxLen > 3 {
line = line[:min(len(line), lw.maxLen-3)] + "..."
} else {
line = line[:min(len(line), lw.maxLen)]
}
} else if len(line) > lw.maxLen {
cutoff := lw.maxLen
if cutoff > 3 {
line = line[:cutoff-3] + "..."
} else {
line = line[:cutoff]
}
}
}
logInfo(lw.prefix + line)
}
func (lw *logWriter) writeLimited(p []byte) {
if lw == nil || len(p) == 0 {
return
}
if lw.maxLen <= 0 {
lw.buf.Write(p)
return
}
remaining := lw.maxLen - lw.buf.Len()
if remaining <= 0 {
lw.dropped = true
return
}
if len(p) <= remaining {
lw.buf.Write(p)
return
}
lw.buf.Write(p[:remaining])
lw.dropped = true
}
type tailBuffer struct {
limit int
data []byte
}
func (b *tailBuffer) Write(p []byte) (int, error) {
if b.limit <= 0 {
return len(p), nil
}
if len(p) >= b.limit {
b.data = append(b.data[:0], p[len(p)-b.limit:]...)
return len(p), nil
}
total := len(b.data) + len(p)
if total <= b.limit {
b.data = append(b.data, p...)
return len(p), nil
}
overflow := total - b.limit
b.data = append(b.data[overflow:], p...)
return len(p), nil
}
func (b *tailBuffer) String() string {
return string(b.data)
}
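The tailBuffer above keeps only the last `limit` bytes ever written, which is what bounds the stderr capture to `stderrCaptureLimit`. A self-contained demonstration (type copied verbatim from above):

```go
package main

import "fmt"

// tailBuffer retains at most the last `limit` bytes written to it,
// discarding the oldest bytes on overflow.
type tailBuffer struct {
	limit int
	data  []byte
}

func (b *tailBuffer) Write(p []byte) (int, error) {
	if b.limit <= 0 {
		return len(p), nil
	}
	if len(p) >= b.limit {
		b.data = append(b.data[:0], p[len(p)-b.limit:]...)
		return len(p), nil
	}
	total := len(b.data) + len(p)
	if total <= b.limit {
		b.data = append(b.data, p...)
		return len(p), nil
	}
	overflow := total - b.limit
	b.data = append(b.data[overflow:], p...)
	return len(p), nil
}

func (b *tailBuffer) String() string { return string(b.data) }

func main() {
	tb := &tailBuffer{limit: 8}
	tb.Write([]byte("hello "))
	tb.Write([]byte("world!"))
	fmt.Println(tb.String()) // prints "o world!" — the last 8 bytes of "hello world!"
}
```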

View File

@@ -1,4 +1,4 @@
package main
package executor
import (
"os"
@@ -7,14 +7,12 @@ import (
)
func TestLogWriterWriteLimitsBuffer(t *testing.T) {
defer resetTestHooks()
logger, err := NewLogger()
if err != nil {
t.Fatalf("NewLogger error: %v", err)
}
setLogger(logger)
defer closeLogger()
t.Cleanup(func() { _ = closeLogger() })
lw := newLogWriter("P:", 10)
_, _ = lw.Write([]byte(strings.Repeat("a", 100)))

View File

@@ -0,0 +1,135 @@
package executor
import (
"bytes"
"fmt"
"strings"
config "codeagent-wrapper/internal/config"
)
func ParseParallelConfig(data []byte) (*ParallelConfig, error) {
trimmed := bytes.TrimSpace(data)
if len(trimmed) == 0 {
return nil, fmt.Errorf("parallel config is empty")
}
tasks := strings.Split(string(trimmed), "---TASK---")
var cfg ParallelConfig
seen := make(map[string]struct{})
taskIndex := 0
for _, taskBlock := range tasks {
taskBlock = strings.TrimSpace(taskBlock)
if taskBlock == "" {
continue
}
taskIndex++
parts := strings.SplitN(taskBlock, "---CONTENT---", 2)
if len(parts) != 2 {
return nil, fmt.Errorf("task block #%d missing ---CONTENT--- separator", taskIndex)
}
meta := strings.TrimSpace(parts[0])
content := strings.TrimSpace(parts[1])
task := TaskSpec{WorkDir: defaultWorkdir}
agentSpecified := false
for _, line := range strings.Split(meta, "\n") {
line = strings.TrimSpace(line)
if line == "" {
continue
}
kv := strings.SplitN(line, ":", 2)
if len(kv) != 2 {
continue
}
key := strings.TrimSpace(kv[0])
value := strings.TrimSpace(kv[1])
switch key {
case "id":
task.ID = value
case "workdir":
// Validate workdir: "-" is not a valid directory
if value == "-" {
return nil, fmt.Errorf("task block #%d has invalid workdir: '-' is not a valid directory path", taskIndex)
}
task.WorkDir = value
case "session_id":
task.SessionID = value
task.Mode = "resume"
case "backend":
task.Backend = value
case "model":
task.Model = value
case "reasoning_effort":
task.ReasoningEffort = value
case "agent":
agentSpecified = true
task.Agent = value
case "skip_permissions", "skip-permissions":
if value == "" {
task.SkipPermissions = true
continue
}
task.SkipPermissions = config.ParseBoolFlag(value, false)
case "dependencies":
for _, dep := range strings.Split(value, ",") {
dep = strings.TrimSpace(dep)
if dep != "" {
task.Dependencies = append(task.Dependencies, dep)
}
}
}
}
if task.Mode == "" {
task.Mode = "new"
}
if agentSpecified {
if strings.TrimSpace(task.Agent) == "" {
return nil, fmt.Errorf("task block #%d has empty agent field", taskIndex)
}
if err := config.ValidateAgentName(task.Agent); err != nil {
return nil, fmt.Errorf("task block #%d invalid agent name: %w", taskIndex, err)
}
backend, model, promptFile, reasoning, _, _, _ := config.ResolveAgentConfig(task.Agent)
if task.Backend == "" {
task.Backend = backend
}
if task.Model == "" {
task.Model = model
}
if task.ReasoningEffort == "" {
task.ReasoningEffort = reasoning
}
task.PromptFile = promptFile
}
if task.ID == "" {
return nil, fmt.Errorf("task block #%d missing id field", taskIndex)
}
if content == "" {
return nil, fmt.Errorf("task block #%d (%q) missing content", taskIndex, task.ID)
}
if task.Mode == "resume" && strings.TrimSpace(task.SessionID) == "" {
return nil, fmt.Errorf("task block #%d (%q) has empty session_id", taskIndex, task.ID)
}
if _, exists := seen[task.ID]; exists {
return nil, fmt.Errorf("task block #%d has duplicate id: %s", taskIndex, task.ID)
}
task.Task = content
cfg.Tasks = append(cfg.Tasks, task)
seen[task.ID] = struct{}{}
}
if len(cfg.Tasks) == 0 {
return nil, fmt.Errorf("no tasks found")
}
return &cfg, nil
}

View File

@@ -0,0 +1,130 @@
package executor
import (
"fmt"
"os"
"path/filepath"
"strings"
)
func ReadAgentPromptFile(path string, allowOutsideClaudeDir bool) (string, error) {
raw := strings.TrimSpace(path)
if raw == "" {
return "", nil
}
expanded := raw
if raw == "~" || strings.HasPrefix(raw, "~/") || strings.HasPrefix(raw, "~\\") {
home, err := os.UserHomeDir()
if err != nil {
return "", err
}
if raw == "~" {
expanded = home
} else {
expanded = home + raw[1:]
}
}
absPath, err := filepath.Abs(expanded)
if err != nil {
return "", err
}
absPath = filepath.Clean(absPath)
home, err := os.UserHomeDir()
if err != nil {
if !allowOutsideClaudeDir {
return "", err
}
logWarn(fmt.Sprintf("Failed to resolve home directory for prompt file validation: %v; proceeding without restriction", err))
} else {
allowedDirs := []string{
filepath.Clean(filepath.Join(home, ".claude")),
filepath.Clean(filepath.Join(home, ".codeagent", "agents")),
}
for i := range allowedDirs {
allowedAbs, err := filepath.Abs(allowedDirs[i])
if err == nil {
allowedDirs[i] = filepath.Clean(allowedAbs)
}
}
isWithinDir := func(path, dir string) bool {
rel, err := filepath.Rel(dir, path)
if err != nil {
return false
}
rel = filepath.Clean(rel)
if rel == "." {
return true
}
if rel == ".." {
return false
}
prefix := ".." + string(os.PathSeparator)
return !strings.HasPrefix(rel, prefix)
}
if !allowOutsideClaudeDir {
withinAllowed := false
for _, dir := range allowedDirs {
if isWithinDir(absPath, dir) {
withinAllowed = true
break
}
}
if !withinAllowed {
logWarn(fmt.Sprintf("Refusing to read prompt file outside allowed dirs (%s): %s", strings.Join(allowedDirs, ", "), absPath))
return "", fmt.Errorf("prompt file must be under ~/.claude or ~/.codeagent/agents")
}
resolvedPath, errPath := filepath.EvalSymlinks(absPath)
if errPath == nil {
resolvedPath = filepath.Clean(resolvedPath)
resolvedAllowed := make([]string, 0, len(allowedDirs))
for _, dir := range allowedDirs {
resolvedBase, errBase := filepath.EvalSymlinks(dir)
if errBase != nil {
continue
}
resolvedAllowed = append(resolvedAllowed, filepath.Clean(resolvedBase))
}
if len(resolvedAllowed) > 0 {
withinResolved := false
for _, dir := range resolvedAllowed {
if isWithinDir(resolvedPath, dir) {
withinResolved = true
break
}
}
if !withinResolved {
logWarn(fmt.Sprintf("Refusing to read prompt file outside allowed dirs (%s) (resolved): %s", strings.Join(resolvedAllowed, ", "), resolvedPath))
return "", fmt.Errorf("prompt file must be under ~/.claude or ~/.codeagent/agents")
}
}
}
} else {
withinAllowed := false
for _, dir := range allowedDirs {
if isWithinDir(absPath, dir) {
withinAllowed = true
break
}
}
if !withinAllowed {
logWarn(fmt.Sprintf("Reading prompt file outside allowed dirs (%s): %s", strings.Join(allowedDirs, ", "), absPath))
}
}
}
data, err := os.ReadFile(absPath)
if err != nil {
return "", err
}
return strings.TrimRight(string(data), "\r\n"), nil
}
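The containment check at the heart of ReadAgentPromptFile is the `isWithinDir` closure: a path is inside a directory iff its relative form does not climb out via `..`. A standalone sketch with unix-style example paths (the paths themselves are made up):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// isWithinDir reports whether path sits at or under dir, by checking that
// the relative path neither is ".." nor starts with "../".
func isWithinDir(path, dir string) bool {
	rel, err := filepath.Rel(dir, path)
	if err != nil {
		return false
	}
	rel = filepath.Clean(rel)
	if rel == "." {
		return true
	}
	if rel == ".." {
		return false
	}
	return !strings.HasPrefix(rel, ".."+string(os.PathSeparator))
}

func main() {
	fmt.Println(isWithinDir("/home/u/.claude/prompt.md", "/home/u/.claude")) // true
	fmt.Println(isWithinDir("/home/u/secret.md", "/home/u/.claude"))         // false: escapes via ".."
	fmt.Println(isWithinDir("/home/u/.claude", "/home/u/.claude"))           // true: the dir itself
}
```

Note that the real function additionally re-checks after `filepath.EvalSymlinks`, so a symlink inside `~/.claude` pointing elsewhere is also rejected.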
func WrapTaskWithAgentPrompt(prompt string, task string) string {
return "<agent-prompt>\n" + prompt + "\n</agent-prompt>\n\n" + task
}

View File

@@ -1,4 +1,4 @@
package main
package executor
import (
"os"
@@ -9,7 +9,7 @@ import (
)
func TestWrapTaskWithAgentPrompt(t *testing.T) {
got := wrapTaskWithAgentPrompt("P", "do")
got := WrapTaskWithAgentPrompt("P", "do")
want := "<agent-prompt>\nP\n</agent-prompt>\n\ndo"
if got != want {
t.Fatalf("wrapTaskWithAgentPrompt mismatch:\n got=%q\nwant=%q", got, want)
@@ -18,7 +18,7 @@ func TestWrapTaskWithAgentPrompt(t *testing.T) {
func TestReadAgentPromptFile_EmptyPath(t *testing.T) {
for _, allowOutside := range []bool{false, true} {
got, err := readAgentPromptFile(" ", allowOutside)
got, err := ReadAgentPromptFile(" ", allowOutside)
if err != nil {
t.Fatalf("unexpected error (allowOutside=%v): %v", allowOutside, err)
}
@@ -35,7 +35,7 @@ func TestReadAgentPromptFile_ExplicitAbsolutePath(t *testing.T) {
t.Fatalf("WriteFile: %v", err)
}
got, err := readAgentPromptFile(path, true)
got, err := ReadAgentPromptFile(path, true)
if err != nil {
t.Fatalf("readAgentPromptFile error: %v", err)
}
@@ -54,7 +54,7 @@ func TestReadAgentPromptFile_ExplicitTildeExpansion(t *testing.T) {
t.Fatalf("WriteFile: %v", err)
}
got, err := readAgentPromptFile("~/prompt.md", true)
got, err := ReadAgentPromptFile("~/prompt.md", true)
if err != nil {
t.Fatalf("readAgentPromptFile error: %v", err)
}
@@ -77,7 +77,30 @@ func TestReadAgentPromptFile_RestrictedAllowsClaudeDir(t *testing.T) {
t.Fatalf("WriteFile: %v", err)
}
got, err := readAgentPromptFile("~/.claude/prompt.md", false)
got, err := ReadAgentPromptFile("~/.claude/prompt.md", false)
if err != nil {
t.Fatalf("readAgentPromptFile error: %v", err)
}
if got != "OK" {
t.Fatalf("got %q, want %q", got, "OK")
}
}
func TestReadAgentPromptFile_RestrictedAllowsCodeagentAgentsDir(t *testing.T) {
home := t.TempDir()
t.Setenv("HOME", home)
t.Setenv("USERPROFILE", home)
agentDir := filepath.Join(home, ".codeagent", "agents")
if err := os.MkdirAll(agentDir, 0o755); err != nil {
t.Fatalf("MkdirAll: %v", err)
}
path := filepath.Join(agentDir, "sarsh.md")
if err := os.WriteFile(path, []byte("OK\n"), 0o644); err != nil {
t.Fatalf("WriteFile: %v", err)
}
got, err := ReadAgentPromptFile("~/.codeagent/agents/sarsh.md", false)
if err != nil {
t.Fatalf("readAgentPromptFile error: %v", err)
}
@@ -96,7 +119,7 @@ func TestReadAgentPromptFile_RestrictedRejectsOutsideClaudeDir(t *testing.T) {
t.Fatalf("WriteFile: %v", err)
}
if _, err := readAgentPromptFile("~/prompt.md", false); err == nil {
if _, err := ReadAgentPromptFile("~/prompt.md", false); err == nil {
t.Fatalf("expected error for prompt file outside ~/.claude, got nil")
}
}
@@ -111,7 +134,7 @@ func TestReadAgentPromptFile_RestrictedRejectsTraversal(t *testing.T) {
t.Fatalf("WriteFile: %v", err)
}
if _, err := readAgentPromptFile("~/.claude/../secret.md", false); err == nil {
if _, err := ReadAgentPromptFile("~/.claude/../secret.md", false); err == nil {
t.Fatalf("expected traversal to be rejected, got nil")
}
}
@@ -126,7 +149,7 @@ func TestReadAgentPromptFile_NotFound(t *testing.T) {
t.Fatalf("MkdirAll: %v", err)
}
_, err := readAgentPromptFile("~/.claude/missing.md", false)
_, err := ReadAgentPromptFile("~/.claude/missing.md", false)
if err == nil || !os.IsNotExist(err) {
t.Fatalf("expected not-exist error, got %v", err)
}
@@ -153,7 +176,7 @@ func TestReadAgentPromptFile_PermissionDenied(t *testing.T) {
t.Fatalf("Chmod: %v", err)
}
_, err := readAgentPromptFile("~/.claude/private.md", false)
_, err := ReadAgentPromptFile("~/.claude/private.md", false)
if err == nil {
t.Fatalf("expected permission error, got nil")
}

View File

@@ -0,0 +1,104 @@
package executor
import "strings"
// extractCoverageGap extracts what's missing from coverage reports.
func extractCoverageGap(message string) string {
if message == "" {
return ""
}
lower := strings.ToLower(message)
lines := strings.Split(message, "\n")
for _, line := range lines {
lineLower := strings.ToLower(line)
line = strings.TrimSpace(line)
if strings.Contains(lineLower, "uncovered") ||
strings.Contains(lineLower, "not covered") ||
strings.Contains(lineLower, "missing coverage") ||
strings.Contains(lineLower, "lines not covered") {
if len(line) > 100 {
return line[:97] + "..."
}
return line
}
if strings.Contains(lineLower, "branch") && strings.Contains(lineLower, "not taken") {
if len(line) > 100 {
return line[:97] + "..."
}
return line
}
}
if strings.Contains(lower, "function") && strings.Contains(lower, "0%") {
for _, line := range lines {
if strings.Contains(strings.ToLower(line), "0%") && strings.Contains(line, "function") {
line = strings.TrimSpace(line)
if len(line) > 100 {
return line[:97] + "..."
}
return line
}
}
}
return ""
}
// extractErrorDetail extracts meaningful error context from task output.
func extractErrorDetail(message string, maxLen int) string {
if message == "" || maxLen <= 0 {
return ""
}
lines := strings.Split(message, "\n")
var errorLines []string
for _, line := range lines {
line = strings.TrimSpace(line)
if line == "" {
continue
}
lower := strings.ToLower(line)
if strings.HasPrefix(line, "at ") && strings.Contains(line, "(") {
if len(errorLines) > 0 && strings.HasPrefix(strings.ToLower(errorLines[len(errorLines)-1]), "at ") {
continue
}
}
if strings.Contains(lower, "error") ||
strings.Contains(lower, "fail") ||
strings.Contains(lower, "exception") ||
strings.Contains(lower, "assert") ||
strings.Contains(lower, "expected") ||
strings.Contains(lower, "timeout") ||
strings.Contains(lower, "not found") ||
strings.Contains(lower, "cannot") ||
strings.Contains(lower, "undefined") ||
strings.HasPrefix(line, "FAIL") ||
strings.HasPrefix(line, "●") {
errorLines = append(errorLines, line)
}
}
if len(errorLines) == 0 {
start := len(lines) - 5
if start < 0 {
start = 0
}
for _, line := range lines[start:] {
line = strings.TrimSpace(line)
if line != "" {
errorLines = append(errorLines, line)
}
}
}
result := strings.Join(errorLines, " | ")
return safeTruncate(result, maxLen)
}
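The keyword filter in extractErrorDetail can be shown with a cut-down sketch; this keeps only the containment scan and the `" | "` join, omitting the stack-frame collapsing, last-lines fallback, and truncation the real function performs:

```go
package main

import (
	"fmt"
	"strings"
)

// extractErrorLines keeps lines mentioning common failure keywords and
// joins them with " | " (a simplified slice of extractErrorDetail).
func extractErrorLines(message string) string {
	var errorLines []string
	for _, line := range strings.Split(message, "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		lower := strings.ToLower(line)
		if strings.Contains(lower, "error") || strings.Contains(lower, "fail") ||
			strings.Contains(lower, "timeout") || strings.Contains(lower, "expected") {
			errorLines = append(errorLines, line)
		}
	}
	return strings.Join(errorLines, " | ")
}

func main() {
	out := extractErrorLines("build started\nError: missing file\nall good\ntest FAILED")
	fmt.Println(out) // prints "Error: missing file | test FAILED"
}
```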

View File

@@ -1,7 +1,7 @@
//go:build unix || darwin || linux
// +build unix darwin linux
package main
package executor
import (
"syscall"

View File

@@ -1,7 +1,7 @@
//go:build windows
// +build windows
package main
package executor
import (
"io"

View File

@@ -0,0 +1,15 @@
package executor
import "strings"
const stdinSpecialChars = "\n\\\"'`$"
func ShouldUseStdin(taskText string, piped bool) bool {
if piped {
return true
}
if len(taskText) > 800 {
return true
}
return strings.ContainsAny(taskText, stdinSpecialChars)
}

View File

@@ -0,0 +1,46 @@
package executor
import "context"
// ParallelConfig defines the JSON schema for parallel execution.
type ParallelConfig struct {
Tasks []TaskSpec `json:"tasks"`
GlobalBackend string `json:"backend,omitempty"`
}
// TaskSpec describes an individual task entry in the parallel config.
type TaskSpec struct {
ID string `json:"id"`
Task string `json:"task"`
WorkDir string `json:"workdir,omitempty"`
Dependencies []string `json:"dependencies,omitempty"`
SessionID string `json:"session_id,omitempty"`
Backend string `json:"backend,omitempty"`
Model string `json:"model,omitempty"`
ReasoningEffort string `json:"reasoning_effort,omitempty"`
Agent string `json:"agent,omitempty"`
PromptFile string `json:"prompt_file,omitempty"`
SkipPermissions bool `json:"skip_permissions,omitempty"`
Mode string `json:"-"`
UseStdin bool `json:"-"`
Context context.Context `json:"-"`
}
// TaskResult captures the execution outcome of a task.
type TaskResult struct {
TaskID string `json:"task_id"`
ExitCode int `json:"exit_code"`
Message string `json:"message"`
SessionID string `json:"session_id"`
Error string `json:"error"`
LogPath string `json:"log_path"`
// Structured report fields
Coverage string `json:"coverage,omitempty"` // extracted coverage percentage (e.g., "92%")
CoverageNum float64 `json:"coverage_num,omitempty"` // numeric coverage for comparison
CoverageTarget float64 `json:"coverage_target,omitempty"` // target coverage (default 90)
FilesChanged []string `json:"files_changed,omitempty"` // list of changed files
KeyOutput string `json:"key_output,omitempty"` // brief summary of what was done
TestsPassed int `json:"tests_passed,omitempty"` // number of tests passed
TestsFailed int `json:"tests_failed,omitempty"` // number of tests failed
sharedLog bool
}

View File

@@ -0,0 +1,57 @@
package executor
import (
"context"
"os/exec"
backend "codeagent-wrapper/internal/backend"
)
type CommandRunner = commandRunner
type ProcessHandle = processHandle
func SetForceKillDelay(seconds int32) (restore func()) {
prev := forceKillDelay.Load()
forceKillDelay.Store(seconds)
return func() { forceKillDelay.Store(prev) }
}
func SetSelectBackendFn(fn func(string) (Backend, error)) (restore func()) {
prev := selectBackendFn
if fn != nil {
selectBackendFn = fn
} else {
selectBackendFn = backend.Select
}
return func() { selectBackendFn = prev }
}
func SetCommandContextFn(fn func(context.Context, string, ...string) *exec.Cmd) (restore func()) {
prev := commandContext
if fn != nil {
commandContext = fn
} else {
commandContext = exec.CommandContext
}
return func() { commandContext = prev }
}
func SetNewCommandRunner(fn func(context.Context, string, ...string) CommandRunner) (restore func()) {
prev := newCommandRunner
if fn != nil {
newCommandRunner = fn
} else {
newCommandRunner = func(ctx context.Context, name string, args ...string) commandRunner {
return &realCmd{cmd: commandContext(ctx, name, args...)}
}
}
return func() { newCommandRunner = prev }
}
func WithTaskLogger(ctx context.Context, logger *Logger) context.Context {
return withTaskLogger(ctx, logger)
}
func TaskLoggerFromContext(ctx context.Context) *Logger {
return taskLoggerFromContext(ctx)
}
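Every `Set*Fn` hook in this export file follows the same save-and-restore pattern: stash the previous value, install the stub (or the default when passed `nil`), and hand back a closure that undoes the swap. A self-contained sketch of that pattern with hypothetical names:

```go
package main

import "fmt"

// selectBackend stands in for a package-level function variable being stubbed.
var selectBackend = func(name string) string { return "real:" + name }

// SetSelectBackend swaps the hook and returns a restore closure,
// mirroring the Set*Fn helpers above (nil reinstalls the default).
func SetSelectBackend(fn func(string) string) (restore func()) {
	prev := selectBackend
	if fn != nil {
		selectBackend = fn
	} else {
		selectBackend = func(name string) string { return "real:" + name }
	}
	return func() { selectBackend = prev }
}

func main() {
	restore := SetSelectBackend(func(string) string { return "stub" })
	fmt.Println(selectBackend("codex")) // stub
	restore()
	fmt.Println(selectBackend("codex")) // real:codex
}
```

In tests this pairs naturally with `t.Cleanup(restore)`, as the rewritten `stub*` helpers later in this diff do.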

View File

@@ -0,0 +1,59 @@
package logger
import "sync/atomic"
var loggerPtr atomic.Pointer[Logger]
func setLogger(l *Logger) {
loggerPtr.Store(l)
}
func closeLogger() error {
logger := loggerPtr.Swap(nil)
if logger == nil {
return nil
}
return logger.Close()
}
func activeLogger() *Logger {
return loggerPtr.Load()
}
func logDebug(msg string) {
if logger := activeLogger(); logger != nil {
logger.Debug(msg)
}
}
func logInfo(msg string) {
if logger := activeLogger(); logger != nil {
logger.Info(msg)
}
}
func logWarn(msg string) {
if logger := activeLogger(); logger != nil {
logger.Warn(msg)
}
}
func logError(msg string) {
if logger := activeLogger(); logger != nil {
logger.Error(msg)
}
}
func SetLogger(l *Logger) { setLogger(l) }
func CloseLogger() error { return closeLogger() }
func ActiveLogger() *Logger { return activeLogger() }
func LogInfo(msg string) { logInfo(msg) }
func LogDebug(msg string) { logDebug(msg) }
func LogWarn(msg string) { logWarn(msg) }
func LogError(msg string) { logError(msg) }

View File

@@ -1,4 +1,4 @@
package main
package logger
import (
"bufio"
@@ -13,6 +13,8 @@ import (
"sync"
"sync/atomic"
"time"
"github.com/rs/zerolog"
)
// Logger writes log messages asynchronously to a temp file.
@@ -22,6 +24,7 @@ type Logger struct {
path string
file *os.File
writer *bufio.Writer
zlogger zerolog.Logger
ch chan logEntry
flushReq chan chan struct{}
done chan struct{}
@@ -37,6 +40,7 @@ type Logger struct {
type logEntry struct {
msg string
level zerolog.Level
isError bool // true for ERROR or WARN levels
}
@@ -73,7 +77,7 @@ func NewLogger() (*Logger, error) {
// Useful for tests that need isolated log files within the same process.
func NewLoggerWithSuffix(suffix string) (*Logger, error) {
pid := os.Getpid()
filename := fmt.Sprintf("%s-%d", primaryLogPrefix(), pid)
filename := fmt.Sprintf("%s-%d", PrimaryLogPrefix(), pid)
var safeSuffix string
if suffix != "" {
safeSuffix = sanitizeLogSuffix(suffix)
@@ -103,6 +107,8 @@ func NewLoggerWithSuffix(suffix string) (*Logger, error) {
done: make(chan struct{}),
}
l.zlogger = zerolog.New(l.writer).With().Timestamp().Logger()
l.workerWG.Add(1)
go l.run()
@@ -184,17 +190,24 @@ func (l *Logger) Path() string {
return l.path
}
func (l *Logger) IsClosed() bool {
if l == nil {
return true
}
return l.closed.Load()
}
// Info logs at INFO level.
func (l *Logger) Info(msg string) { l.log("INFO", msg) }
func (l *Logger) Info(msg string) { l.logWithLevel(zerolog.InfoLevel, msg) }
// Warn logs at WARN level.
func (l *Logger) Warn(msg string) { l.log("WARN", msg) }
func (l *Logger) Warn(msg string) { l.logWithLevel(zerolog.WarnLevel, msg) }
// Debug logs at DEBUG level.
func (l *Logger) Debug(msg string) { l.log("DEBUG", msg) }
func (l *Logger) Debug(msg string) { l.logWithLevel(zerolog.DebugLevel, msg) }
// Error logs at ERROR level.
func (l *Logger) Error(msg string) { l.log("ERROR", msg) }
func (l *Logger) Error(msg string) { l.logWithLevel(zerolog.ErrorLevel, msg) }
// Close signals the worker to flush and close the log file.
// The log file is NOT removed, allowing inspection after program exit.
@@ -335,7 +348,7 @@ func (l *Logger) Flush() {
}
}
func (l *Logger) log(level, msg string) {
func (l *Logger) logWithLevel(entryLevel zerolog.Level, msg string) {
if l == nil {
return
}
@@ -343,8 +356,8 @@ func (l *Logger) log(level, msg string) {
return
}
isError := level == "WARN" || level == "ERROR"
entry := logEntry{msg: msg, isError: isError}
isError := entryLevel == zerolog.WarnLevel || entryLevel == zerolog.ErrorLevel
entry := logEntry{msg: msg, level: entryLevel, isError: isError}
l.flushMu.Lock()
l.pendingWG.Add(1)
l.flushMu.Unlock()
@@ -366,8 +379,7 @@ func (l *Logger) run() {
defer ticker.Stop()
writeEntry := func(entry logEntry) {
timestamp := time.Now().Format("2006-01-02 15:04:05.000")
fmt.Fprintf(l.writer, "[%s] %s\n", timestamp, entry.msg)
l.zlogger.WithLevel(entry.level).Msg(entry.msg)
// Cache error/warn entries in memory for fast extraction
if entry.isError {
@@ -439,10 +451,7 @@ func cleanupOldLogs() (CleanupStats, error) {
var stats CleanupStats
tempDir := os.TempDir()
prefixes := logPrefixes()
if len(prefixes) == 0 {
prefixes = []string{defaultWrapperName}
}
prefixes := LogPrefixes()
seen := make(map[string]struct{})
var matches []string
@@ -473,7 +482,8 @@ func cleanupOldLogs() (CleanupStats, error) {
stats.Kept++
stats.KeptFiles = append(stats.KeptFiles, filename)
if reason != "" {
logWarn(fmt.Sprintf("cleanupOldLogs: skipping %s: %s", filename, reason))
// Use Debug level to avoid polluting Recent Errors with cleanup noise
logDebug(fmt.Sprintf("cleanupOldLogs: skipping %s: %s", filename, reason))
}
continue
}
@@ -591,10 +601,7 @@ func isPIDReused(logPath string, pid int) bool {
if procStartTime.IsZero() {
// Can't determine process start time
// Check if file is very old (>7 days), likely from a dead process
if time.Since(fileModTime) > 7*24*time.Hour {
return true // File is old enough to be from a different process
}
return false // Be conservative for recent files
return time.Since(fileModTime) > 7*24*time.Hour
}
// If the log file was modified before the process started, PID was reused
@@ -604,10 +611,7 @@ func isPIDReused(logPath string, pid int) bool {
func parsePIDFromLog(path string) (int, bool) {
name := filepath.Base(path)
prefixes := logPrefixes()
if len(prefixes) == 0 {
prefixes = []string{defaultWrapperName}
}
prefixes := LogPrefixes()
for _, prefix := range prefixes {
prefixWithDash := fmt.Sprintf("%s-", prefix)
@@ -661,3 +665,19 @@ func renderWorkerLimit(limit int) string {
}
return strconv.Itoa(limit)
}
func CleanupOldLogs() (CleanupStats, error) { return cleanupOldLogs() }
func IsUnsafeFile(path string, tempDir string) (bool, string) { return isUnsafeFile(path, tempDir) }
func IsPIDReused(logPath string, pid int) bool { return isPIDReused(logPath, pid) }
func ParsePIDFromLog(path string) (int, bool) { return parsePIDFromLog(path) }
func LogConcurrencyPlanning(limit, total int) { logConcurrencyPlanning(limit, total) }
func LogConcurrencyState(event, taskID string, active, limit int) {
logConcurrencyState(event, taskID, active, limit)
}
func SanitizeLogSuffix(raw string) string { return sanitizeLogSuffix(raw) }

View File

@@ -1,4 +1,4 @@
package main
package logger
import (
"fmt"
@@ -28,7 +28,7 @@ func TestLoggerConcurrencyLogHelpers(t *testing.T) {
t.Fatalf("NewLoggerWithSuffix error: %v", err)
}
setLogger(logger)
defer closeLogger()
defer func() { _ = closeLogger() }()
logConcurrencyPlanning(0, 2)
logConcurrencyPlanning(3, 2)
@@ -64,8 +64,8 @@ func TestLoggerConcurrencyLogHelpersNoopWithoutActiveLogger(t *testing.T) {
func TestLoggerCleanupOldLogsSkipsUnsafeAndHandlesAlreadyDeleted(t *testing.T) {
tempDir := setTempDirEnv(t, t.TempDir())
unsafePath := createTempLog(t, tempDir, fmt.Sprintf("%s-%d.log", primaryLogPrefix(), 222))
orphanPath := createTempLog(t, tempDir, fmt.Sprintf("%s-%d.log", primaryLogPrefix(), 111))
unsafePath := createTempLog(t, tempDir, fmt.Sprintf("%s-%d.log", PrimaryLogPrefix(), 222))
orphanPath := createTempLog(t, tempDir, fmt.Sprintf("%s-%d.log", PrimaryLogPrefix(), 111))
stubFileStat(t, func(path string) (os.FileInfo, error) {
if path == unsafePath {

View File

@@ -1,4 +1,4 @@
package main
package logger
import (
"fmt"
@@ -26,12 +26,12 @@ func TestLoggerWithSuffixNamingAndIsolation(t *testing.T) {
}
defer loggerB.Close()
wantA := filepath.Join(tempDir, fmt.Sprintf("%s-%d-%s.log", primaryLogPrefix(), os.Getpid(), taskA))
wantA := filepath.Join(tempDir, fmt.Sprintf("%s-%d-%s.log", PrimaryLogPrefix(), os.Getpid(), taskA))
if loggerA.Path() != wantA {
t.Fatalf("loggerA path = %q, want %q", loggerA.Path(), wantA)
}
wantB := filepath.Join(tempDir, fmt.Sprintf("%s-%d-%s.log", primaryLogPrefix(), os.Getpid(), taskB))
wantB := filepath.Join(tempDir, fmt.Sprintf("%s-%d-%s.log", PrimaryLogPrefix(), os.Getpid(), taskB))
if loggerB.Path() != wantB {
t.Fatalf("loggerB path = %q, want %q", loggerB.Path(), wantB)
}
@@ -105,7 +105,7 @@ func TestLoggerWithSuffixSanitizesUnsafeSuffix(t *testing.T) {
_ = os.Remove(logger.Path())
})
wantBase := fmt.Sprintf("%s-%d-%s.log", primaryLogPrefix(), os.Getpid(), safe)
wantBase := fmt.Sprintf("%s-%d-%s.log", PrimaryLogPrefix(), os.Getpid(), safe)
if gotBase := filepath.Base(logger.Path()); gotBase != wantBase {
t.Fatalf("log filename = %q, want %q", gotBase, wantBase)
}

View File

@@ -1,4 +1,4 @@
package main
package logger
import (
"bufio"
@@ -6,7 +6,6 @@ import (
"fmt"
"math"
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
@@ -77,30 +76,6 @@ func TestLoggerWritesLevels(t *testing.T) {
}
}
func TestLoggerDefaultIsTerminalCoverage(t *testing.T) {
oldStdin := os.Stdin
t.Cleanup(func() { os.Stdin = oldStdin })
f, err := os.CreateTemp(t.TempDir(), "stdin-*")
if err != nil {
t.Fatalf("os.CreateTemp() error = %v", err)
}
defer os.Remove(f.Name())
os.Stdin = f
if got := defaultIsTerminal(); got {
t.Fatalf("defaultIsTerminal() = %v, want false for regular file", got)
}
if err := f.Close(); err != nil {
t.Fatalf("Close() error = %v", err)
}
os.Stdin = f
if got := defaultIsTerminal(); !got {
t.Fatalf("defaultIsTerminal() = %v, want true when Stat fails", got)
}
}
func TestLoggerCloseStopsWorkerAndKeepsFile(t *testing.T) {
tempDir := t.TempDir()
t.Setenv("TMPDIR", tempDir)
@@ -118,11 +93,6 @@ func TestLoggerCloseStopsWorkerAndKeepsFile(t *testing.T) {
if err := logger.Close(); err != nil {
t.Fatalf("Close() returned error: %v", err)
}
if logger.file != nil {
if _, err := logger.file.Write([]byte("x")); err == nil {
t.Fatalf("expected file to be closed after Close()")
}
}
// After recent changes, log file is kept for debugging - NOT removed
if _, err := os.Stat(logPath); os.IsNotExist(err) {
@@ -131,18 +101,6 @@ func TestLoggerCloseStopsWorkerAndKeepsFile(t *testing.T) {
// Clean up manually for test
defer os.Remove(logPath)
done := make(chan struct{})
go func() {
logger.workerWG.Wait()
close(done)
}()
select {
case <-done:
case <-time.After(200 * time.Millisecond):
t.Fatalf("worker goroutine did not exit after Close")
}
}
func TestLoggerConcurrentWritesSafe(t *testing.T) {
@@ -194,50 +152,13 @@ func TestLoggerConcurrentWritesSafe(t *testing.T) {
}
}
func TestLoggerTerminateProcessActive(t *testing.T) {
cmd := exec.Command("sleep", "5")
if err := cmd.Start(); err != nil {
t.Skipf("cannot start sleep command: %v", err)
}
timer := terminateProcess(&realCmd{cmd: cmd})
if timer == nil {
t.Fatalf("terminateProcess returned nil timer for active process")
}
defer timer.Stop()
done := make(chan error, 1)
go func() {
done <- cmd.Wait()
}()
select {
case <-time.After(500 * time.Millisecond):
t.Fatalf("process not terminated promptly")
case <-done:
}
// Force the timer callback to run immediately to cover the kill branch.
timer.Reset(0)
time.Sleep(10 * time.Millisecond)
}
func TestLoggerTerminateProcessNil(t *testing.T) {
if timer := terminateProcess(nil); timer != nil {
t.Fatalf("terminateProcess(nil) should return nil timer")
}
if timer := terminateProcess(&realCmd{cmd: &exec.Cmd{}}); timer != nil {
t.Fatalf("terminateProcess with nil process should return nil timer")
}
}
func TestLoggerCleanupOldLogsRemovesOrphans(t *testing.T) {
tempDir := setTempDirEnv(t, t.TempDir())
orphan1 := createTempLog(t, tempDir, "codex-wrapper-111.log")
orphan2 := createTempLog(t, tempDir, "codex-wrapper-222-suffix.log")
running1 := createTempLog(t, tempDir, "codex-wrapper-333.log")
running2 := createTempLog(t, tempDir, "codex-wrapper-444-extra-info.log")
orphan1 := createTempLog(t, tempDir, "codeagent-wrapper-111.log")
orphan2 := createTempLog(t, tempDir, "codeagent-wrapper-222-suffix.log")
running1 := createTempLog(t, tempDir, "codeagent-wrapper-333.log")
running2 := createTempLog(t, tempDir, "codeagent-wrapper-444-extra-info.log")
untouched := createTempLog(t, tempDir, "unrelated.log")
runningPIDs := map[int]bool{333: true, 444: true}
@@ -285,15 +206,15 @@ func TestLoggerCleanupOldLogsHandlesInvalidNamesAndErrors(t *testing.T) {
tempDir := setTempDirEnv(t, t.TempDir())
invalid := []string{
"codex-wrapper-.log",
"codex-wrapper.log",
"codex-wrapper-foo-bar.txt",
"codeagent-wrapper-.log",
"codeagent-wrapper.log",
"codeagent-wrapper-foo-bar.txt",
"not-a-codex.log",
}
for _, name := range invalid {
createTempLog(t, tempDir, name)
}
target := createTempLog(t, tempDir, "codex-wrapper-555-extra.log")
target := createTempLog(t, tempDir, "codeagent-wrapper-555-extra.log")
var checked []int
stubProcessRunning(t, func(pid int) bool {
@@ -389,8 +310,8 @@ func TestLoggerCleanupOldLogsHandlesTempDirPermissionErrors(t *testing.T) {
tempDir := setTempDirEnv(t, t.TempDir())
paths := []string{
createTempLog(t, tempDir, "codex-wrapper-6100.log"),
createTempLog(t, tempDir, "codex-wrapper-6101.log"),
createTempLog(t, tempDir, "codeagent-wrapper-6100.log"),
createTempLog(t, tempDir, "codeagent-wrapper-6101.log"),
}
stubProcessRunning(t, func(int) bool { return false })
@@ -428,8 +349,8 @@ func TestLoggerCleanupOldLogsHandlesTempDirPermissionErrors(t *testing.T) {
func TestLoggerCleanupOldLogsHandlesPermissionDeniedFile(t *testing.T) {
tempDir := setTempDirEnv(t, t.TempDir())
protected := createTempLog(t, tempDir, "codex-wrapper-6200.log")
deletable := createTempLog(t, tempDir, "codex-wrapper-6201.log")
protected := createTempLog(t, tempDir, "codeagent-wrapper-6200.log")
deletable := createTempLog(t, tempDir, "codeagent-wrapper-6201.log")
stubProcessRunning(t, func(int) bool { return false })
stubProcessStartTime(t, func(int) time.Time { return time.Time{} })
@@ -468,7 +389,7 @@ func TestLoggerCleanupOldLogsPerformanceBound(t *testing.T) {
const fileCount = 400
fakePaths := make([]string, fileCount)
for i := 0; i < fileCount; i++ {
name := fmt.Sprintf("codex-wrapper-%d.log", 10000+i)
name := fmt.Sprintf("codeagent-wrapper-%d.log", 10000+i)
fakePaths[i] = createTempLog(t, tempDir, name)
}
@@ -505,102 +426,11 @@ func TestLoggerCleanupOldLogsPerformanceBound(t *testing.T) {
}
}
func TestLoggerCleanupOldLogsCoverageSuite(t *testing.T) {
TestBackendParseJSONStream_CoverageSuite(t)
}
// Reuse the existing coverage suite so the focused TestLogger run still exercises
// the rest of the codebase and keeps coverage high.
func TestLoggerCoverageSuite(t *testing.T) {
suite := []struct {
name string
fn func(*testing.T)
}{
{"TestBackendParseJSONStream_CoverageSuite", TestBackendParseJSONStream_CoverageSuite},
{"TestVersionCoverageFullRun", TestVersionCoverageFullRun},
{"TestVersionMainWrapper", TestVersionMainWrapper},
{"TestExecutorHelperCoverage", TestExecutorHelperCoverage},
{"TestExecutorRunCodexTaskWithContext", TestExecutorRunCodexTaskWithContext},
{"TestExecutorParallelLogIsolation", TestExecutorParallelLogIsolation},
{"TestExecutorTaskLoggerContext", TestExecutorTaskLoggerContext},
{"TestExecutorExecuteConcurrentWithContextBranches", TestExecutorExecuteConcurrentWithContextBranches},
{"TestExecutorSignalAndTermination", TestExecutorSignalAndTermination},
{"TestExecutorCancelReasonAndCloseWithReason", TestExecutorCancelReasonAndCloseWithReason},
{"TestExecutorForceKillTimerStop", TestExecutorForceKillTimerStop},
{"TestExecutorForwardSignalsDefaults", TestExecutorForwardSignalsDefaults},
{"TestBackendParseArgs_NewMode", TestBackendParseArgs_NewMode},
{"TestBackendParseArgs_ResumeMode", TestBackendParseArgs_ResumeMode},
{"TestBackendParseArgs_BackendFlag", TestBackendParseArgs_BackendFlag},
{"TestBackendParseArgs_SkipPermissions", TestBackendParseArgs_SkipPermissions},
{"TestBackendParseBoolFlag", TestBackendParseBoolFlag},
{"TestBackendEnvFlagEnabled", TestBackendEnvFlagEnabled},
{"TestRunResolveTimeout", TestRunResolveTimeout},
{"TestRunIsTerminal", TestRunIsTerminal},
{"TestRunReadPipedTask", TestRunReadPipedTask},
{"TestTailBufferWrite", TestTailBufferWrite},
{"TestLogWriterWriteLimitsBuffer", TestLogWriterWriteLimitsBuffer},
{"TestLogWriterLogLine", TestLogWriterLogLine},
{"TestNewLogWriterDefaultMaxLen", TestNewLogWriterDefaultMaxLen},
{"TestNewLogWriterDefaultLimit", TestNewLogWriterDefaultLimit},
{"TestRunHello", TestRunHello},
{"TestRunGreet", TestRunGreet},
{"TestRunFarewell", TestRunFarewell},
{"TestRunFarewellEmpty", TestRunFarewellEmpty},
{"TestParallelParseConfig_Success", TestParallelParseConfig_Success},
{"TestParallelParseConfig_Backend", TestParallelParseConfig_Backend},
{"TestParallelParseConfig_InvalidFormat", TestParallelParseConfig_InvalidFormat},
{"TestParallelParseConfig_EmptyTasks", TestParallelParseConfig_EmptyTasks},
{"TestParallelParseConfig_MissingID", TestParallelParseConfig_MissingID},
{"TestParallelParseConfig_MissingTask", TestParallelParseConfig_MissingTask},
{"TestParallelParseConfig_DuplicateID", TestParallelParseConfig_DuplicateID},
{"TestParallelParseConfig_DelimiterFormat", TestParallelParseConfig_DelimiterFormat},
{"TestBackendSelectBackend", TestBackendSelectBackend},
{"TestBackendSelectBackend_Invalid", TestBackendSelectBackend_Invalid},
{"TestBackendSelectBackend_DefaultOnEmpty", TestBackendSelectBackend_DefaultOnEmpty},
{"TestBackendBuildArgs_CodexBackend", TestBackendBuildArgs_CodexBackend},
{"TestBackendBuildArgs_ClaudeBackend", TestBackendBuildArgs_ClaudeBackend},
{"TestClaudeBackendBuildArgs_OutputValidation", TestClaudeBackendBuildArgs_OutputValidation},
{"TestBackendBuildArgs_GeminiBackend", TestBackendBuildArgs_GeminiBackend},
{"TestGeminiBackendBuildArgs_OutputValidation", TestGeminiBackendBuildArgs_OutputValidation},
{"TestBackendNamesAndCommands", TestBackendNamesAndCommands},
{"TestBackendParseJSONStream", TestBackendParseJSONStream},
{"TestBackendParseJSONStream_ClaudeEvents", TestBackendParseJSONStream_ClaudeEvents},
{"TestBackendParseJSONStream_GeminiEvents", TestBackendParseJSONStream_GeminiEvents},
{"TestBackendParseJSONStreamWithWarn_InvalidLine", TestBackendParseJSONStreamWithWarn_InvalidLine},
{"TestBackendParseJSONStream_OnMessage", TestBackendParseJSONStream_OnMessage},
{"TestBackendParseJSONStream_ScannerError", TestBackendParseJSONStream_ScannerError},
{"TestBackendDiscardInvalidJSON", TestBackendDiscardInvalidJSON},
{"TestBackendDiscardInvalidJSONBuffer", TestBackendDiscardInvalidJSONBuffer},
{"TestCurrentWrapperNameFallsBackToExecutable", TestCurrentWrapperNameFallsBackToExecutable},
{"TestCurrentWrapperNameDetectsLegacyAliasSymlink", TestCurrentWrapperNameDetectsLegacyAliasSymlink},
{"TestIsProcessRunning", TestIsProcessRunning},
{"TestGetProcessStartTimeReadsProcStat", TestGetProcessStartTimeReadsProcStat},
{"TestGetProcessStartTimeInvalidData", TestGetProcessStartTimeInvalidData},
{"TestGetBootTimeParsesBtime", TestGetBootTimeParsesBtime},
{"TestGetBootTimeInvalidData", TestGetBootTimeInvalidData},
{"TestClaudeBuildArgs_ModesAndPermissions", TestClaudeBuildArgs_ModesAndPermissions},
{"TestClaudeBuildArgs_GeminiAndCodexModes", TestClaudeBuildArgs_GeminiAndCodexModes},
{"TestClaudeBuildArgs_BackendMetadata", TestClaudeBuildArgs_BackendMetadata},
}
for _, tc := range suite {
t.Run(tc.name, tc.fn)
}
}
func TestLoggerCleanupOldLogsKeepsCurrentProcessLog(t *testing.T) {
tempDir := setTempDirEnv(t, t.TempDir())
currentPID := os.Getpid()
currentLog := createTempLog(t, tempDir, fmt.Sprintf("codex-wrapper-%d.log", currentPID))
currentLog := createTempLog(t, tempDir, fmt.Sprintf("codeagent-wrapper-%d.log", currentPID))
stubProcessRunning(t, func(pid int) bool {
if pid != currentPID {
@@ -676,7 +506,7 @@ func TestLoggerIsUnsafeFileSecurityChecks(t *testing.T) {
stubEvalSymlinks(t, func(path string) (string, error) {
return filepath.Join(absTempDir, filepath.Base(path)), nil
})
unsafe, reason := isUnsafeFile(filepath.Join(absTempDir, "codex-wrapper-1.log"), tempDir)
unsafe, reason := isUnsafeFile(filepath.Join(absTempDir, "codeagent-wrapper-1.log"), tempDir)
if !unsafe || reason != "refusing to delete symlink" {
t.Fatalf("expected symlink to be rejected, got unsafe=%v reason=%q", unsafe, reason)
}
@@ -702,9 +532,9 @@ func TestLoggerIsUnsafeFileSecurityChecks(t *testing.T) {
})
otherDir := t.TempDir()
stubEvalSymlinks(t, func(string) (string, error) {
return filepath.Join(otherDir, "codex-wrapper-9.log"), nil
return filepath.Join(otherDir, "codeagent-wrapper-9.log"), nil
})
unsafe, reason := isUnsafeFile(filepath.Join(otherDir, "codex-wrapper-9.log"), tempDir)
unsafe, reason := isUnsafeFile(filepath.Join(otherDir, "codeagent-wrapper-9.log"), tempDir)
if !unsafe || reason != "file is outside tempDir" {
t.Fatalf("expected outside file to be rejected, got unsafe=%v reason=%q", unsafe, reason)
}
@@ -713,15 +543,21 @@ func TestLoggerIsUnsafeFileSecurityChecks(t *testing.T) {
func TestLoggerPathAndRemove(t *testing.T) {
tempDir := t.TempDir()
path := filepath.Join(tempDir, "sample.log")
if err := os.WriteFile(path, []byte("test"), 0o644); err != nil {
t.Fatalf("failed to create temp file: %v", err)
t.Setenv("TMPDIR", tempDir)
logger, err := NewLoggerWithSuffix("sample")
if err != nil {
t.Fatalf("NewLoggerWithSuffix() error = %v", err)
}
path := logger.Path()
if path == "" {
_ = logger.Close()
t.Fatalf("logger.Path() returned empty path")
}
if err := logger.Close(); err != nil {
t.Fatalf("Close() error = %v", err)
}
logger := &Logger{path: path}
if got := logger.Path(); got != path {
t.Fatalf("Path() = %q, want %q", got, path)
}
if err := logger.RemoveLogFile(); err != nil {
t.Fatalf("RemoveLogFile() error = %v", err)
}
@@ -738,43 +574,6 @@ func TestLoggerPathAndRemove(t *testing.T) {
}
}
func TestLoggerTruncateBytesCoverage(t *testing.T) {
if got := truncateBytes([]byte("abc"), 3); got != "abc" {
t.Fatalf("truncateBytes() = %q, want %q", got, "abc")
}
if got := truncateBytes([]byte("abcd"), 3); got != "abc..." {
t.Fatalf("truncateBytes() = %q, want %q", got, "abc...")
}
if got := truncateBytes([]byte("abcd"), -1); got != "" {
t.Fatalf("truncateBytes() = %q, want empty string", got)
}
}
func TestLoggerInternalLog(t *testing.T) {
logger := &Logger{
ch: make(chan logEntry, 1),
done: make(chan struct{}),
pendingWG: sync.WaitGroup{},
}
done := make(chan logEntry, 1)
go func() {
entry := <-logger.ch
logger.pendingWG.Done()
done <- entry
}()
logger.log("INFO", "hello")
entry := <-done
if entry.msg != "hello" {
t.Fatalf("unexpected entry %+v", entry)
}
logger.closed.Store(true)
logger.log("INFO", "ignored")
close(logger.done)
}
func TestLoggerParsePIDFromLog(t *testing.T) {
hugePID := strconv.FormatInt(math.MaxInt64, 10) + "0"
tests := []struct {
@@ -782,13 +581,13 @@ func TestLoggerParsePIDFromLog(t *testing.T) {
pid int
ok bool
}{
{"codex-wrapper-123.log", 123, true},
{"codex-wrapper-999-extra.log", 999, true},
{"codex-wrapper-.log", 0, false},
{"codeagent-wrapper-123.log", 123, true},
{"codeagent-wrapper-999-extra.log", 999, true},
{"codeagent-wrapper-.log", 0, false},
{"invalid-name.log", 0, false},
{"codex-wrapper--5.log", 0, false},
{"codex-wrapper-0.log", 0, false},
{fmt.Sprintf("codex-wrapper-%s.log", hugePID), 0, false},
{"codeagent-wrapper--5.log", 0, false},
{"codeagent-wrapper-0.log", 0, false},
{fmt.Sprintf("codeagent-wrapper-%s.log", hugePID), 0, false},
}
for _, tt := range tests {
@@ -827,56 +626,32 @@ func setTempDirEnv(t *testing.T, dir string) string {
func stubProcessRunning(t *testing.T, fn func(int) bool) {
t.Helper()
original := processRunningCheck
processRunningCheck = fn
t.Cleanup(func() {
processRunningCheck = original
})
t.Cleanup(SetProcessRunningCheck(fn))
}
func stubProcessStartTime(t *testing.T, fn func(int) time.Time) {
t.Helper()
original := processStartTimeFn
processStartTimeFn = fn
t.Cleanup(func() {
processStartTimeFn = original
})
t.Cleanup(SetProcessStartTimeFn(fn))
}
func stubRemoveLogFile(t *testing.T, fn func(string) error) {
t.Helper()
original := removeLogFileFn
removeLogFileFn = fn
t.Cleanup(func() {
removeLogFileFn = original
})
t.Cleanup(SetRemoveLogFileFn(fn))
}
func stubGlobLogFiles(t *testing.T, fn func(string) ([]string, error)) {
t.Helper()
original := globLogFiles
globLogFiles = fn
t.Cleanup(func() {
globLogFiles = original
})
t.Cleanup(SetGlobLogFilesFn(fn))
}
func stubFileStat(t *testing.T, fn func(string) (os.FileInfo, error)) {
t.Helper()
original := fileStatFn
fileStatFn = fn
t.Cleanup(func() {
fileStatFn = original
})
t.Cleanup(SetFileStatFn(fn))
}
func stubEvalSymlinks(t *testing.T, fn func(string) (string, error)) {
t.Helper()
original := evalSymlinksFn
evalSymlinksFn = fn
t.Cleanup(func() {
evalSymlinksFn = original
})
t.Cleanup(SetEvalSymlinksFn(fn))
}
type fakeFileInfo struct {
@@ -960,7 +735,7 @@ func TestLoggerExtractRecentErrors(t *testing.T) {
t.Fatalf("NewLoggerWithSuffix() error = %v", err)
}
defer logger.Close()
defer logger.RemoveLogFile()
defer func() { _ = logger.RemoveLogFile() }()
// Write logs using logger methods
for _, entry := range tt.logs {
@@ -1000,14 +775,14 @@ func TestLoggerExtractRecentErrorsNilLogger(t *testing.T) {
}
func TestLoggerExtractRecentErrorsEmptyPath(t *testing.T) {
logger := &Logger{path: ""}
logger := &Logger{}
if got := logger.ExtractRecentErrors(10); got != nil {
t.Fatalf("empty path ExtractRecentErrors() should return nil, got %v", got)
}
}
func TestLoggerExtractRecentErrorsFileNotExist(t *testing.T) {
logger := &Logger{path: "/nonexistent/path/to/log.log"}
logger := &Logger{}
if got := logger.ExtractRecentErrors(10); got != nil {
t.Fatalf("nonexistent file ExtractRecentErrors() should return nil, got %v", got)
}
@@ -1049,7 +824,7 @@ func TestExtractRecentErrorsBoundaryCheck(t *testing.T) {
t.Fatalf("NewLoggerWithSuffix() error = %v", err)
}
defer logger.Close()
defer logger.RemoveLogFile()
defer func() { _ = logger.RemoveLogFile() }()
// Write some errors
logger.Error("error 1")
@@ -1082,7 +857,7 @@ func TestErrorEntriesMaxLimit(t *testing.T) {
t.Fatalf("NewLoggerWithSuffix() error = %v", err)
}
defer logger.Close()
defer logger.RemoveLogFile()
defer func() { _ = logger.RemoveLogFile() }()
// Write 150 error/warn entries
for i := 1; i <= 150; i++ {

View File

@@ -0,0 +1,63 @@
package logger
import (
"errors"
"math"
"time"
"github.com/shirou/gopsutil/v3/process"
)
func pidToInt32(pid int) (int32, bool) {
if pid <= 0 || pid > math.MaxInt32 {
return 0, false
}
return int32(pid), true
}
// isProcessRunning reports whether a process with the given pid appears to be running.
// It is intentionally conservative on errors to avoid deleting logs for live processes.
func isProcessRunning(pid int) bool {
pid32, ok := pidToInt32(pid)
if !ok {
return false
}
exists, err := process.PidExists(pid32)
if err == nil {
return exists
}
// If we can positively identify that the process doesn't exist, report false.
if errors.Is(err, process.ErrorProcessNotRunning) {
return false
}
// Permission/inspection failures: assume it's running to be safe.
return true
}
// getProcessStartTime returns the start time of a process.
// Returns zero time if the start time cannot be determined.
func getProcessStartTime(pid int) time.Time {
pid32, ok := pidToInt32(pid)
if !ok {
return time.Time{}
}
proc, err := process.NewProcess(pid32)
if err != nil {
return time.Time{}
}
ms, err := proc.CreateTime()
if err != nil || ms <= 0 {
return time.Time{}
}
return time.UnixMilli(ms)
}
func IsProcessRunning(pid int) bool { return isProcessRunning(pid) }
func GetProcessStartTime(pid int) time.Time { return getProcessStartTime(pid) }

View File

@@ -0,0 +1,112 @@
package logger
import (
"math"
"os"
"os/exec"
"runtime"
"strconv"
"testing"
"time"
)
func TestIsProcessRunning(t *testing.T) {
t.Run("boundary values", func(t *testing.T) {
if isProcessRunning(0) {
t.Fatalf("pid 0 should never be treated as running")
}
if isProcessRunning(-1) {
t.Fatalf("negative pid should never be treated as running")
}
})
t.Run("pid out of int32 range", func(t *testing.T) {
if strconv.IntSize <= 32 {
t.Skip("int cannot represent values above int32 range")
}
pid := int(int64(math.MaxInt32) + 1)
if isProcessRunning(pid) {
t.Fatalf("expected pid %d (out of int32 range) to be treated as not running", pid)
}
})
t.Run("current process", func(t *testing.T) {
if !isProcessRunning(os.Getpid()) {
t.Fatalf("expected current process (pid=%d) to be running", os.Getpid())
}
})
t.Run("fake pid", func(t *testing.T) {
const nonexistentPID = 1 << 30
if isProcessRunning(nonexistentPID) {
t.Fatalf("expected pid %d to be reported as not running", nonexistentPID)
}
})
t.Run("terminated process", func(t *testing.T) {
pid := exitedProcessPID(t)
if isProcessRunning(pid) {
t.Fatalf("expected exited child process (pid=%d) to be reported as not running", pid)
}
})
}
func exitedProcessPID(t *testing.T) int {
t.Helper()
var cmd *exec.Cmd
if runtime.GOOS == "windows" {
cmd = exec.Command("cmd", "/c", "exit 0")
} else {
cmd = exec.Command("sh", "-c", "exit 0")
}
if err := cmd.Start(); err != nil {
t.Fatalf("failed to start helper process: %v", err)
}
pid := cmd.Process.Pid
if err := cmd.Wait(); err != nil {
t.Fatalf("helper process did not exit cleanly: %v", err)
}
time.Sleep(50 * time.Millisecond)
return pid
}
func TestGetProcessStartTimeReadsProcStat(t *testing.T) {
start := getProcessStartTime(os.Getpid())
if start.IsZero() {
t.Fatalf("expected non-zero start time for current process")
}
if start.After(time.Now().Add(5 * time.Second)) {
t.Fatalf("start time is unexpectedly in the future: %v", start)
}
}
func TestGetProcessStartTimeInvalidData(t *testing.T) {
if !getProcessStartTime(0).IsZero() {
t.Fatalf("expected zero time for pid 0")
}
if !getProcessStartTime(-1).IsZero() {
t.Fatalf("expected zero time for negative pid")
}
if !getProcessStartTime(1 << 30).IsZero() {
t.Fatalf("expected zero time for non-existent pid")
}
if strconv.IntSize > 32 {
pid := int(int64(math.MaxInt32) + 1)
if !getProcessStartTime(pid).IsZero() {
t.Fatalf("expected zero time for pid %d (out of int32 range)", pid)
}
}
}
func TestGetBootTimeParsesBtime(t *testing.T) {
t.Skip("legacy boot-time probing removed; start time now uses gopsutil")
}
func TestGetBootTimeInvalidData(t *testing.T) {
t.Skip("legacy boot-time probing removed; start time now uses gopsutil")
}

View File

@@ -0,0 +1,67 @@
package logger
import (
"os"
"path/filepath"
"time"
)
func SetProcessRunningCheck(fn func(int) bool) (restore func()) {
prev := processRunningCheck
if fn != nil {
processRunningCheck = fn
} else {
processRunningCheck = isProcessRunning
}
return func() { processRunningCheck = prev }
}
func SetProcessStartTimeFn(fn func(int) time.Time) (restore func()) {
prev := processStartTimeFn
if fn != nil {
processStartTimeFn = fn
} else {
processStartTimeFn = getProcessStartTime
}
return func() { processStartTimeFn = prev }
}
func SetRemoveLogFileFn(fn func(string) error) (restore func()) {
prev := removeLogFileFn
if fn != nil {
removeLogFileFn = fn
} else {
removeLogFileFn = os.Remove
}
return func() { removeLogFileFn = prev }
}
func SetGlobLogFilesFn(fn func(string) ([]string, error)) (restore func()) {
prev := globLogFiles
if fn != nil {
globLogFiles = fn
} else {
globLogFiles = filepath.Glob
}
return func() { globLogFiles = prev }
}
func SetFileStatFn(fn func(string) (os.FileInfo, error)) (restore func()) {
prev := fileStatFn
if fn != nil {
fileStatFn = fn
} else {
fileStatFn = os.Lstat
}
return func() { fileStatFn = prev }
}
func SetEvalSymlinksFn(fn func(string) (string, error)) (restore func()) {
prev := evalSymlinksFn
if fn != nil {
evalSymlinksFn = fn
} else {
evalSymlinksFn = filepath.EvalSymlinks
}
return func() { evalSymlinksFn = prev }
}
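
The setters above all follow the same seam pattern: save the current package-level function variable, swap in the test double, and return a closure that restores the previous value. A minimal standalone sketch of that pattern (names here are illustrative, not the package's):

```go
package main

import "fmt"

// isRunning is the package-level seam: production code calls it,
// tests swap it out and restore it afterwards.
var isRunning = func(pid int) bool { return pid > 0 }

// SetRunningCheck swaps the seam and returns a closure that restores the
// previous value. (The real setters additionally fall back to the
// production function when fn is nil.)
func SetRunningCheck(fn func(int) bool) (restore func()) {
	prev := isRunning
	if fn != nil {
		isRunning = fn
	}
	return func() { isRunning = prev }
}

func main() {
	restore := SetRunningCheck(func(int) bool { return false })
	fmt.Println(isRunning(42)) // stub active: false
	restore()
	fmt.Println(isRunning(42)) // original restored: true
}
```

Returning the restore closure (typically run via `defer restore()`) keeps each test from leaking its stub into the next one.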

View File

@@ -0,0 +1,13 @@
package logger
// WrapperName is the fixed name for this tool.
const WrapperName = "codeagent-wrapper"
// CurrentWrapperName returns the wrapper name (always "codeagent-wrapper").
func CurrentWrapperName() string { return WrapperName }
// LogPrefixes returns the log file name prefixes to look for.
func LogPrefixes() []string { return []string{WrapperName} }
// PrimaryLogPrefix returns the preferred filename prefix for log files.
func PrimaryLogPrefix() string { return WrapperName }

View File

@@ -0,0 +1,74 @@
package parser
import "github.com/goccy/go-json"
// JSONEvent represents a Codex JSON output event.
type JSONEvent struct {
Type string `json:"type"`
ThreadID string `json:"thread_id,omitempty"`
Item *EventItem `json:"item,omitempty"`
}
// EventItem represents the item field in a JSON event.
type EventItem struct {
Type string `json:"type"`
Text interface{} `json:"text"`
}
// ClaudeEvent for Claude stream-json format.
type ClaudeEvent struct {
Type string `json:"type"`
Subtype string `json:"subtype,omitempty"`
SessionID string `json:"session_id,omitempty"`
Result string `json:"result,omitempty"`
}
// GeminiEvent for Gemini stream-json format.
type GeminiEvent struct {
Type string `json:"type"`
SessionID string `json:"session_id,omitempty"`
Role string `json:"role,omitempty"`
Content string `json:"content,omitempty"`
Delta bool `json:"delta,omitempty"`
Status string `json:"status,omitempty"`
}
// UnifiedEvent combines all backend event formats into a single structure
// to avoid multiple JSON unmarshal operations per event.
type UnifiedEvent struct {
// Common fields
Type string `json:"type"`
// Codex-specific fields
ThreadID string `json:"thread_id,omitempty"`
Item json.RawMessage `json:"item,omitempty"` // Lazy parse
// Claude-specific fields
Subtype string `json:"subtype,omitempty"`
SessionID string `json:"session_id,omitempty"`
Result string `json:"result,omitempty"`
// Gemini-specific fields
Role string `json:"role,omitempty"`
Content string `json:"content,omitempty"`
Delta *bool `json:"delta,omitempty"`
Status string `json:"status,omitempty"`
// Opencode-specific fields (camelCase sessionID)
OpencodeSessionID string `json:"sessionID,omitempty"`
Part json.RawMessage `json:"part,omitempty"`
}
// OpencodePart represents the part field in opencode events.
type OpencodePart struct {
Type string `json:"type"`
Text string `json:"text,omitempty"`
Reason string `json:"reason,omitempty"`
SessionID string `json:"sessionID,omitempty"`
}
// ItemContent represents the parsed item.text field for Codex events.
type ItemContent struct {
Type string `json:"type"`
Text interface{} `json:"text"`
}

View File

@@ -1,106 +1,65 @@
package main
package parser
import (
"bufio"
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
"strings"
"sync"
"github.com/goccy/go-json"
)
// JSONEvent represents a Codex JSON output event
type JSONEvent struct {
Type string `json:"type"`
ThreadID string `json:"thread_id,omitempty"`
Item *EventItem `json:"item,omitempty"`
}
// EventItem represents the item field in a JSON event
type EventItem struct {
Type string `json:"type"`
Text interface{} `json:"text"`
}
// ClaudeEvent for Claude stream-json format
type ClaudeEvent struct {
Type string `json:"type"`
Subtype string `json:"subtype,omitempty"`
SessionID string `json:"session_id,omitempty"`
Result string `json:"result,omitempty"`
}
// GeminiEvent for Gemini stream-json format
type GeminiEvent struct {
Type string `json:"type"`
SessionID string `json:"session_id,omitempty"`
Role string `json:"role,omitempty"`
Content string `json:"content,omitempty"`
Delta bool `json:"delta,omitempty"`
Status string `json:"status,omitempty"`
}
func parseJSONStream(r io.Reader) (message, threadID string) {
return parseJSONStreamWithLog(r, logWarn, logInfo)
}
func parseJSONStreamWithWarn(r io.Reader, warnFn func(string)) (message, threadID string) {
return parseJSONStreamWithLog(r, warnFn, logInfo)
}
func parseJSONStreamWithLog(r io.Reader, warnFn func(string), infoFn func(string)) (message, threadID string) {
return parseJSONStreamInternal(r, warnFn, infoFn, nil, nil)
}
const (
jsonLineReaderSize = 64 * 1024
jsonLineMaxBytes = 10 * 1024 * 1024
jsonLinePreviewBytes = 256
)
// UnifiedEvent combines all backend event formats into a single structure
// to avoid multiple JSON unmarshal operations per event
type UnifiedEvent struct {
// Common fields
Type string `json:"type"`
// Codex-specific fields
ThreadID string `json:"thread_id,omitempty"`
Item json.RawMessage `json:"item,omitempty"` // Lazy parse
// Claude-specific fields
Subtype string `json:"subtype,omitempty"`
SessionID string `json:"session_id,omitempty"`
Result string `json:"result,omitempty"`
// Gemini-specific fields
Role string `json:"role,omitempty"`
Content string `json:"content,omitempty"`
Delta *bool `json:"delta,omitempty"`
Status string `json:"status,omitempty"`
// Opencode-specific fields (camelCase sessionID)
OpencodeSessionID string `json:"sessionID,omitempty"`
Part json.RawMessage `json:"part,omitempty"`
type lineScratch struct {
buf []byte
preview []byte
}
// OpencodePart represents the part field in opencode events
type OpencodePart struct {
Type string `json:"type"`
Text string `json:"text,omitempty"`
Reason string `json:"reason,omitempty"`
SessionID string `json:"sessionID,omitempty"`
const maxPooledLineScratchCap = 1 << 20 // 1 MiB
var lineScratchPool = sync.Pool{
New: func() any {
return &lineScratch{
buf: make([]byte, 0, jsonLineReaderSize),
preview: make([]byte, 0, jsonLinePreviewBytes),
}
},
}
// ItemContent represents the parsed item.text field for Codex events
type ItemContent struct {
Type string `json:"type"`
Text interface{} `json:"text"`
}
func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(string), onMessage func(), onComplete func()) (message, threadID string) {
func ParseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(string), onMessage func(), onComplete func()) (message, threadID string) {
reader := bufio.NewReaderSize(r, jsonLineReaderSize)
scratch := lineScratchPool.Get().(*lineScratch)
if scratch.buf == nil {
scratch.buf = make([]byte, 0, jsonLineReaderSize)
} else {
scratch.buf = scratch.buf[:0]
}
if scratch.preview == nil {
scratch.preview = make([]byte, 0, jsonLinePreviewBytes)
} else {
scratch.preview = scratch.preview[:0]
}
defer func() {
if cap(scratch.buf) > maxPooledLineScratchCap {
scratch.buf = nil
} else if scratch.buf != nil {
scratch.buf = scratch.buf[:0]
}
if cap(scratch.preview) > jsonLinePreviewBytes*4 {
scratch.preview = nil
} else if scratch.preview != nil {
scratch.preview = scratch.preview[:0]
}
lineScratchPool.Put(scratch)
}()
if warnFn == nil {
warnFn = func(string) {}
@@ -131,7 +90,7 @@ func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(strin
)
for {
line, tooLong, err := readLineWithLimit(reader, jsonLineMaxBytes, jsonLinePreviewBytes)
line, tooLong, err := readLineWithLimit(reader, jsonLineMaxBytes, jsonLinePreviewBytes, scratch)
if err != nil {
if errors.Is(err, io.EOF) {
break
@@ -147,14 +106,14 @@ func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(strin
totalEvents++
if tooLong {
warnFn(fmt.Sprintf("Skipped overlong JSON line (> %d bytes): %s", jsonLineMaxBytes, truncateBytes(line, 100)))
warnFn(fmt.Sprintf("Skipped overlong JSON line (> %d bytes): %s", jsonLineMaxBytes, TruncateBytes(line, 100)))
continue
}
// Single unmarshal for all backend types
var event UnifiedEvent
if err := json.Unmarshal(line, &event); err != nil {
warnFn(fmt.Sprintf("Failed to parse event: %s", truncateBytes(line, 100)))
warnFn(fmt.Sprintf("Failed to parse event: %s", TruncateBytes(line, 100)))
continue
}
@@ -253,7 +212,7 @@ func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(strin
// Lazy parse: only parse item content when needed
var item ItemContent
if err := json.Unmarshal(event.Item, &item); err == nil {
normalized := normalizeText(item.Text)
normalized := NormalizeText(item.Text)
infoFn(fmt.Sprintf("item.completed event item_type=%s message_len=%d", itemType, len(normalized)))
if normalized != "" {
codexMessage = normalized
@@ -334,12 +293,12 @@ func parseJSONStreamInternal(r io.Reader, warnFn func(string), infoFn func(strin
return message, threadID
}
func hasKey(m map[string]json.RawMessage, key string) bool {
func HasKey(m map[string]json.RawMessage, key string) bool {
_, ok := m[key]
return ok
}
func discardInvalidJSON(decoder *json.Decoder, reader *bufio.Reader) (*bufio.Reader, error) {
func DiscardInvalidJSON(decoder *json.Decoder, reader *bufio.Reader) (*bufio.Reader, error) {
var buffered bytes.Buffer
if decoder != nil {
@@ -365,7 +324,7 @@ func discardInvalidJSON(decoder *json.Decoder, reader *bufio.Reader) (*bufio.Rea
return bufio.NewReader(io.MultiReader(bytes.NewReader(remaining), reader)), err
}
func readLineWithLimit(r *bufio.Reader, maxBytes int, previewBytes int) (line []byte, tooLong bool, err error) {
func readLineWithLimit(r *bufio.Reader, maxBytes int, previewBytes int, scratch *lineScratch) (line []byte, tooLong bool, err error) {
if r == nil {
return nil, false, errors.New("reader is nil")
}
@@ -388,12 +347,22 @@ func readLineWithLimit(r *bufio.Reader, maxBytes int, previewBytes int) (line []
return part, false, nil
}
preview := make([]byte, 0, min(previewBytes, len(part)))
if scratch == nil {
scratch = &lineScratch{}
}
if scratch.preview == nil {
scratch.preview = make([]byte, 0, min(previewBytes, len(part)))
}
if scratch.buf == nil {
scratch.buf = make([]byte, 0, min(maxBytes, len(part)*2))
}
preview := scratch.preview[:0]
if previewBytes > 0 {
preview = append(preview, part[:min(previewBytes, len(part))]...)
}
buf := make([]byte, 0, min(maxBytes, len(part)*2))
buf := scratch.buf[:0]
total := 0
if len(part) > maxBytes {
tooLong = true
@@ -423,12 +392,16 @@ func readLineWithLimit(r *bufio.Reader, maxBytes int, previewBytes int) (line []
}
if tooLong {
scratch.preview = preview
scratch.buf = buf
return preview, true, nil
}
scratch.preview = preview
scratch.buf = buf
return buf, false, nil
}
func truncateBytes(b []byte, maxLen int) string {
func TruncateBytes(b []byte, maxLen int) string {
if len(b) <= maxLen {
return string(b)
}
@@ -438,7 +411,7 @@ func truncateBytes(b []byte, maxLen int) string {
return string(b[:maxLen]) + "..."
}
func normalizeText(text interface{}) string {
func NormalizeText(text interface{}) string {
switch v := text.(type) {
case string:
return v
@@ -454,3 +427,10 @@ func normalizeText(text interface{}) string {
return ""
}
}
func min(a, b int) int {
if a < b {
return a
}
return b
}
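
The lineScratch pool above caps what it will recycle, so one pathological line cannot pin a huge buffer in the pool indefinitely. A standalone sketch of that borrow/return policy (sizes and names are illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// Buffers that grew past this cap are dropped instead of pooled, the same
// policy parseJSONStreamInternal applies to its lineScratch.
const maxPooledCap = 1 << 20 // 1 MiB

var bufPool = sync.Pool{
	New: func() any {
		b := make([]byte, 0, 64*1024)
		return &b
	},
}

// withScratch borrows a buffer, passes it (length-reset) to fn, and returns
// it to the pool only when its capacity stayed within bounds.
func withScratch(fn func(buf []byte) []byte) {
	p := bufPool.Get().(*[]byte)
	buf := fn((*p)[:0])
	if cap(buf) <= maxPooledCap {
		*p = buf[:0]
		bufPool.Put(p)
	}
	// Oversized buffers simply fall out of scope and get collected.
}

func main() {
	withScratch(func(buf []byte) []byte {
		buf = append(buf, "one line"...)
		fmt.Println(len(buf)) // 8
		return buf
	})
}
```

Pooling a pointer to the slice (rather than the slice itself) avoids an allocation on every `Put`, which is also why the real code pools `*lineScratch`.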

View File

@@ -1,4 +1,4 @@
package main
package parser
import (
"strings"
@@ -10,7 +10,7 @@ func TestParseJSONStream_Opencode(t *testing.T) {
{"type":"text","timestamp":1768187744432,"sessionID":"ses_44fced3c7ffe83sZpzY1rlQka3","part":{"id":"prt_bb0339cb5001QDd0Lh0PzFZpa3","sessionID":"ses_44fced3c7ffe83sZpzY1rlQka3","messageID":"msg_bb033866f0011oZxTqvfy0TKtS","type":"text","text":"Hello from opencode"}}
{"type":"step_finish","timestamp":1768187744471,"sessionID":"ses_44fced3c7ffe83sZpzY1rlQka3","part":{"id":"prt_bb033d0af0019VRZzpO2OVW1na","sessionID":"ses_44fced3c7ffe83sZpzY1rlQka3","messageID":"msg_bb033866f0011oZxTqvfy0TKtS","type":"step-finish","reason":"stop","snapshot":"904f0fd58c125b79e60f0993e38f9d9f6200bf47","cost":0}}`
message, threadID := parseJSONStream(strings.NewReader(input))
message, threadID := ParseJSONStreamInternal(strings.NewReader(input), nil, nil, nil, nil)
if threadID != "ses_44fced3c7ffe83sZpzY1rlQka3" {
t.Errorf("threadID = %q, want %q", threadID, "ses_44fced3c7ffe83sZpzY1rlQka3")
@@ -25,7 +25,7 @@ func TestParseJSONStream_Opencode_MultipleTextEvents(t *testing.T) {
{"type":"text","sessionID":"ses_123","part":{"type":"text","text":" Part 2"}}
{"type":"step_finish","sessionID":"ses_123","part":{"type":"step-finish","reason":"stop"}}`
message, threadID := parseJSONStream(strings.NewReader(input))
message, threadID := ParseJSONStreamInternal(strings.NewReader(input), nil, nil, nil, nil)
if threadID != "ses_123" {
t.Errorf("threadID = %q, want %q", threadID, "ses_123")
@@ -39,7 +39,7 @@ func TestParseJSONStream_Opencode_NoStopReason(t *testing.T) {
input := `{"type":"text","sessionID":"ses_456","part":{"type":"text","text":"Content"}}
{"type":"step_finish","sessionID":"ses_456","part":{"type":"step-finish","reason":"tool-calls"}}`
message, threadID := parseJSONStream(strings.NewReader(input))
message, threadID := ParseJSONStreamInternal(strings.NewReader(input), nil, nil, nil, nil)
if threadID != "ses_456" {
t.Errorf("threadID = %q, want %q", threadID, "ses_456")

View File

@@ -1,4 +1,4 @@
package main
package parser
import (
"strings"
@@ -18,7 +18,7 @@ func TestParseJSONStream_SkipsOverlongLineAndContinues(t *testing.T) {
var warns []string
warnFn := func(msg string) { warns = append(warns, msg) }
gotMessage, gotThreadID := parseJSONStreamInternal(strings.NewReader(input), warnFn, nil, nil, nil)
gotMessage, gotThreadID := ParseJSONStreamInternal(strings.NewReader(input), warnFn, nil, nil, nil)
if gotMessage != "ok" {
t.Fatalf("message=%q, want %q (warns=%v)", gotMessage, "ok", warns)
}

View File

@@ -1,4 +1,4 @@
package main
package parser
import (
"strings"
@@ -16,7 +16,7 @@ func TestBackendParseJSONStream_UnknownEventsAreSilent(t *testing.T) {
var infos []string
infoFn := func(msg string) { infos = append(infos, msg) }
message, threadID := parseJSONStreamInternal(strings.NewReader(input), nil, infoFn, nil, nil)
message, threadID := ParseJSONStreamInternal(strings.NewReader(input), nil, infoFn, nil, nil)
if message != "ok" {
t.Fatalf("message=%q, want %q (infos=%v)", message, "ok", infos)
}

View File

@@ -0,0 +1,15 @@
package parser
import "testing"
func TestTruncateBytes(t *testing.T) {
if got := TruncateBytes([]byte("abc"), 3); got != "abc" {
t.Fatalf("TruncateBytes() = %q, want %q", got, "abc")
}
if got := TruncateBytes([]byte("abcd"), 3); got != "abc..." {
t.Fatalf("TruncateBytes() = %q, want %q", got, "abc...")
}
if got := TruncateBytes([]byte("abcd"), -1); got != "" {
t.Fatalf("TruncateBytes() = %q, want empty string", got)
}
}

View File

@@ -0,0 +1,8 @@
package utils
// Min returns the smaller of a and b.
func Min(a, b int) int {
if a < b {
return a
}
return b
}

View File

@@ -0,0 +1,36 @@
package utils
import "testing"
func TestMin(t *testing.T) {
tests := []struct {
name string
a, b int
want int
}{
{"a less than b", 1, 2, 1},
{"b less than a", 5, 3, 3},
{"equal values", 7, 7, 7},
{"negative a", -5, 3, -5},
{"negative b", 5, -3, -3},
{"both negative", -5, -3, -5},
{"zero and positive", 0, 5, 0},
{"zero and negative", 0, -5, -5},
{"large values", 1000000, 999999, 999999},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := Min(tt.a, tt.b)
if got != tt.want {
t.Errorf("Min(%d, %d) = %d, want %d", tt.a, tt.b, got, tt.want)
}
})
}
}
func BenchmarkMin(b *testing.B) {
for i := 0; i < b.N; i++ {
Min(i, i+1)
}
}

View File

@@ -0,0 +1,62 @@
package utils
import "strings"
// Truncate shortens s to maxLen bytes and appends "..." when it cuts; it is
// byte-indexed, so it can split a multibyte rune (see SafeTruncate).
func Truncate(s string, maxLen int) string {
if len(s) <= maxLen {
return s
}
if maxLen < 0 {
return ""
}
return s[:maxLen] + "..."
}
// SafeTruncate truncates s so the result, including the trailing "...", is at
// most maxLen runes; unlike Truncate it never splits a rune, so the output is
// always valid UTF-8 and slicing can never panic.
func SafeTruncate(s string, maxLen int) string {
if maxLen <= 0 || s == "" {
return ""
}
runes := []rune(s)
if len(runes) <= maxLen {
return s
}
if maxLen < 4 {
return string(runes[:1])
}
cutoff := maxLen - 3
if cutoff <= 0 {
return string(runes[:1])
}
if len(runes) <= cutoff {
return s
}
return string(runes[:cutoff]) + "..."
}
// SanitizeOutput removes ANSI escape sequences and control characters.
func SanitizeOutput(s string) string {
var result strings.Builder
inEscape := false
for i := 0; i < len(s); i++ {
if s[i] == '\x1b' && i+1 < len(s) && s[i+1] == '[' {
inEscape = true
i++ // skip '['
continue
}
if inEscape {
if (s[i] >= 'A' && s[i] <= 'Z') || (s[i] >= 'a' && s[i] <= 'z') {
inEscape = false
}
continue
}
// Keep printable chars and common whitespace.
if s[i] >= 32 || s[i] == '\n' || s[i] == '\t' {
result.WriteByte(s[i])
}
}
return result.String()
}

View File

@@ -0,0 +1,122 @@
package utils
import (
"strings"
"testing"
)
func TestTruncate(t *testing.T) {
tests := []struct {
name string
s string
maxLen int
want string
}{
{"empty string", "", 10, ""},
{"short string", "hello", 10, "hello"},
{"exact length", "hello", 5, "hello"},
{"needs truncation", "hello world", 5, "hello..."},
{"zero maxLen", "hello", 0, "..."},
{"negative maxLen", "hello", -1, ""},
{"maxLen 1", "hello", 1, "h..."},
{"unicode bytes truncate", "你好世界", 10, "你好世\xe7..."}, // Truncate works on bytes, not runes
{"mixed truncate", "hello世界abc", 7, "hello\xe4\xb8..."}, // byte-based truncation
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := Truncate(tt.s, tt.maxLen)
if got != tt.want {
t.Errorf("Truncate(%q, %d) = %q, want %q", tt.s, tt.maxLen, got, tt.want)
}
})
}
}
func TestSafeTruncate(t *testing.T) {
tests := []struct {
name string
s string
maxLen int
want string
}{
{"empty string", "", 10, ""},
{"zero maxLen", "hello", 0, ""},
{"negative maxLen", "hello", -1, ""},
{"short string", "hello", 10, "hello"},
{"exact length", "hello", 5, "hello"},
{"needs truncation", "hello world", 8, "hello..."},
{"maxLen 1", "hello", 1, "h"},
{"maxLen 2", "hello", 2, "h"},
{"maxLen 3", "hello", 3, "h"},
{"maxLen 4", "hello", 4, "h..."},
{"unicode preserved", "你好世界", 10, "你好世界"},
{"unicode exact", "你好世界", 4, "你好世界"},
{"unicode truncate", "你好世界test", 6, "你好世..."},
{"mixed unicode", "ab你好cd", 5, "ab..."},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := SafeTruncate(tt.s, tt.maxLen)
if got != tt.want {
t.Errorf("SafeTruncate(%q, %d) = %q, want %q", tt.s, tt.maxLen, got, tt.want)
}
})
}
}
func TestSanitizeOutput(t *testing.T) {
tests := []struct {
name string
s string
want string
}{
{"empty string", "", ""},
{"plain text", "hello world", "hello world"},
{"with newline", "hello\nworld", "hello\nworld"},
{"with tab", "hello\tworld", "hello\tworld"},
{"ANSI color red", "\x1b[31mred\x1b[0m", "red"},
{"ANSI bold", "\x1b[1mbold\x1b[0m", "bold"},
{"ANSI complex", "\x1b[1;31;40mtext\x1b[0m", "text"},
{"control chars", "hello\x00\x01\x02world", "helloworld"},
{"mixed ANSI and control", "\x1b[32m\x00ok\x1b[0m", "ok"},
{"multiple ANSI sequences", "\x1b[31mred\x1b[0m \x1b[32mgreen\x1b[0m", "red green"},
{"incomplete escape", "\x1b[", ""},
{"escape without bracket", "\x1bA", "A"},
{"cursor movement", "\x1b[2Aup\x1b[2Bdown", "updown"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := SanitizeOutput(tt.s)
if got != tt.want {
t.Errorf("SanitizeOutput(%q) = %q, want %q", tt.s, got, tt.want)
}
})
}
}
func BenchmarkTruncate(b *testing.B) {
s := strings.Repeat("hello world ", 100)
b.ResetTimer()
for i := 0; i < b.N; i++ {
Truncate(s, 50)
}
}
func BenchmarkSafeTruncate(b *testing.B) {
s := strings.Repeat("你好世界", 100)
b.ResetTimer()
for i := 0; i < b.N; i++ {
SafeTruncate(s, 50)
}
}
func BenchmarkSanitizeOutput(b *testing.B) {
s := strings.Repeat("\x1b[31mred\x1b[0m text ", 50)
b.ResetTimer()
for i := 0; i < b.N; i++ {
SanitizeOutput(s)
}
}

View File

@@ -1,627 +0,0 @@
package main
import (
"fmt"
"io"
"os"
"os/exec"
"os/signal"
"path/filepath"
"reflect"
"strings"
"sync/atomic"
"time"
)
const (
version = "5.5.0"
defaultWorkdir = "."
defaultTimeout = 7200 // seconds (2 hours)
defaultCoverageTarget = 90.0
codexLogLineLimit = 1000
stdinSpecialChars = "\n\\\"'`$"
stderrCaptureLimit = 4 * 1024
defaultBackendName = "codex"
defaultCodexCommand = "codex"
// stdout close reasons
stdoutCloseReasonWait = "wait-done"
stdoutCloseReasonDrain = "drain-timeout"
stdoutCloseReasonCtx = "context-cancel"
stdoutDrainTimeout = 100 * time.Millisecond
)
// Test hooks for dependency injection
var (
stdinReader io.Reader = os.Stdin
isTerminalFn = defaultIsTerminal
codexCommand = defaultCodexCommand
cleanupHook func()
loggerPtr atomic.Pointer[Logger]
buildCodexArgsFn = buildCodexArgs
selectBackendFn = selectBackend
commandContext = exec.CommandContext
cleanupLogsFn = cleanupOldLogs
signalNotifyFn = signal.Notify
signalStopFn = signal.Stop
terminateCommandFn = terminateCommand
defaultBuildArgsFn = buildCodexArgs
runTaskFn = runCodexTask
exitFn = os.Exit
)
var forceKillDelay atomic.Int32
func init() {
forceKillDelay.Store(5) // seconds - default value
}
func runStartupCleanup() {
if cleanupLogsFn == nil {
return
}
defer func() {
if r := recover(); r != nil {
logWarn(fmt.Sprintf("cleanupOldLogs panic: %v", r))
}
}()
if _, err := cleanupLogsFn(); err != nil {
logWarn(fmt.Sprintf("cleanupOldLogs error: %v", err))
}
}
func runCleanupMode() int {
if cleanupLogsFn == nil {
fmt.Fprintln(os.Stderr, "Cleanup failed: log cleanup function not configured")
return 1
}
stats, err := cleanupLogsFn()
if err != nil {
fmt.Fprintf(os.Stderr, "Cleanup failed: %v\n", err)
return 1
}
fmt.Println("Cleanup completed")
fmt.Printf("Files scanned: %d\n", stats.Scanned)
fmt.Printf("Files deleted: %d\n", stats.Deleted)
if len(stats.DeletedFiles) > 0 {
for _, f := range stats.DeletedFiles {
fmt.Printf(" - %s\n", f)
}
}
fmt.Printf("Files kept: %d\n", stats.Kept)
if len(stats.KeptFiles) > 0 {
for _, f := range stats.KeptFiles {
fmt.Printf(" - %s\n", f)
}
}
if stats.Errors > 0 {
fmt.Printf("Deletion errors: %d\n", stats.Errors)
}
return 0
}
func main() {
exitCode := run()
exitFn(exitCode)
}
// run is the main logic, returns exit code for testability
func run() (exitCode int) {
name := currentWrapperName()
// Handle --version and --help first (no logger needed)
if len(os.Args) > 1 {
switch os.Args[1] {
case "--version", "-v":
fmt.Printf("%s version %s\n", name, version)
return 0
case "--help", "-h":
printHelp()
return 0
case "--cleanup":
return runCleanupMode()
}
}
// Initialize logger for all other commands
logger, err := NewLogger()
if err != nil {
fmt.Fprintf(os.Stderr, "ERROR: failed to initialize logger: %v\n", err)
return 1
}
setLogger(logger)
defer func() {
logger := activeLogger()
if logger != nil {
logger.Flush()
}
if err := closeLogger(); err != nil {
fmt.Fprintf(os.Stderr, "ERROR: failed to close logger: %v\n", err)
}
// On failure, extract and display recent errors before removing log
if logger != nil {
if exitCode != 0 {
if errors := logger.ExtractRecentErrors(10); len(errors) > 0 {
fmt.Fprintln(os.Stderr, "\n=== Recent Errors ===")
for _, entry := range errors {
fmt.Fprintln(os.Stderr, entry)
}
fmt.Fprintf(os.Stderr, "Log file: %s (deleted)\n", logger.Path())
}
}
if err := logger.RemoveLogFile(); err != nil && !os.IsNotExist(err) {
// Silently ignore removal errors
}
}
}()
defer runCleanupHook()
// Clean up stale logs from previous runs.
runStartupCleanup()
// Handle remaining commands
if len(os.Args) > 1 {
args := os.Args[1:]
parallelIndex := -1
for i, arg := range args {
if arg == "--parallel" {
parallelIndex = i
break
}
}
if parallelIndex != -1 {
backendName := defaultBackendName
model := ""
fullOutput := false
skipPermissions := envFlagEnabled("CODEAGENT_SKIP_PERMISSIONS")
var extras []string
for i := 0; i < len(args); i++ {
arg := args[i]
switch {
case arg == "--parallel":
continue
case arg == "--full-output":
fullOutput = true
case arg == "--backend":
if i+1 >= len(args) {
fmt.Fprintln(os.Stderr, "ERROR: --backend flag requires a value")
return 1
}
backendName = args[i+1]
i++
case strings.HasPrefix(arg, "--backend="):
value := strings.TrimPrefix(arg, "--backend=")
if value == "" {
fmt.Fprintln(os.Stderr, "ERROR: --backend flag requires a value")
return 1
}
backendName = value
case arg == "--model":
if i+1 >= len(args) {
fmt.Fprintln(os.Stderr, "ERROR: --model flag requires a value")
return 1
}
model = args[i+1]
i++
case strings.HasPrefix(arg, "--model="):
value := strings.TrimPrefix(arg, "--model=")
if value == "" {
fmt.Fprintln(os.Stderr, "ERROR: --model flag requires a value")
return 1
}
model = value
case arg == "--skip-permissions", arg == "--dangerously-skip-permissions":
skipPermissions = true
case strings.HasPrefix(arg, "--skip-permissions="):
skipPermissions = parseBoolFlag(strings.TrimPrefix(arg, "--skip-permissions="), skipPermissions)
case strings.HasPrefix(arg, "--dangerously-skip-permissions="):
skipPermissions = parseBoolFlag(strings.TrimPrefix(arg, "--dangerously-skip-permissions="), skipPermissions)
default:
extras = append(extras, arg)
}
}
if len(extras) > 0 {
fmt.Fprintln(os.Stderr, "ERROR: --parallel reads its task configuration from stdin; only --backend, --model, --full-output and --skip-permissions are allowed.")
fmt.Fprintln(os.Stderr, "Usage examples:")
fmt.Fprintf(os.Stderr, " %s --parallel < tasks.txt\n", name)
fmt.Fprintf(os.Stderr, " echo '...' | %s --parallel\n", name)
fmt.Fprintf(os.Stderr, " %s --parallel <<'EOF'\n", name)
fmt.Fprintf(os.Stderr, " %s --parallel --full-output <<'EOF' # include full task output\n", name)
return 1
}
backend, err := selectBackendFn(backendName)
if err != nil {
fmt.Fprintf(os.Stderr, "ERROR: %v\n", err)
return 1
}
backendName = backend.Name()
data, err := io.ReadAll(stdinReader)
if err != nil {
fmt.Fprintf(os.Stderr, "ERROR: failed to read stdin: %v\n", err)
return 1
}
cfg, err := parseParallelConfig(data)
if err != nil {
fmt.Fprintf(os.Stderr, "ERROR: %v\n", err)
return 1
}
cfg.GlobalBackend = backendName
model = strings.TrimSpace(model)
for i := range cfg.Tasks {
if strings.TrimSpace(cfg.Tasks[i].Backend) == "" {
cfg.Tasks[i].Backend = backendName
}
if strings.TrimSpace(cfg.Tasks[i].Model) == "" && model != "" {
cfg.Tasks[i].Model = model
}
cfg.Tasks[i].SkipPermissions = cfg.Tasks[i].SkipPermissions || skipPermissions
}
timeoutSec := resolveTimeout()
layers, err := topologicalSort(cfg.Tasks)
if err != nil {
fmt.Fprintf(os.Stderr, "ERROR: %v\n", err)
return 1
}
results := executeConcurrent(layers, timeoutSec)
// Extract structured report fields from each result
for i := range results {
results[i].CoverageTarget = defaultCoverageTarget
if results[i].Message == "" {
continue
}
lines := strings.Split(results[i].Message, "\n")
// Coverage extraction
results[i].Coverage = extractCoverageFromLines(lines)
results[i].CoverageNum = extractCoverageNum(results[i].Coverage)
// Files changed
results[i].FilesChanged = extractFilesChangedFromLines(lines)
// Test results
results[i].TestsPassed, results[i].TestsFailed = extractTestResultsFromLines(lines)
// Key output summary
results[i].KeyOutput = extractKeyOutputFromLines(lines, 150)
}
// Default: summary mode (context-efficient)
// --full-output: legacy full output mode
fmt.Println(generateFinalOutputWithMode(results, !fullOutput))
exitCode = 0
for _, res := range results {
if res.ExitCode != 0 {
exitCode = res.ExitCode
}
}
return exitCode
}
}
logInfo("Script started")
cfg, err := parseArgs()
if err != nil {
logError(err.Error())
return 1
}
logInfo(fmt.Sprintf("Parsed args: mode=%s, task_len=%d, backend=%s", cfg.Mode, len(cfg.Task), cfg.Backend))
backend, err := selectBackendFn(cfg.Backend)
if err != nil {
logError(err.Error())
return 1
}
cfg.Backend = backend.Name()
cmdInjected := codexCommand != defaultCodexCommand
argsInjected := buildCodexArgsFn != nil && reflect.ValueOf(buildCodexArgsFn).Pointer() != reflect.ValueOf(defaultBuildArgsFn).Pointer()
// Wire selected backend into runtime hooks for the rest of the execution,
// but preserve any injected test hooks for the default backend.
if backend.Name() != defaultBackendName || !cmdInjected {
codexCommand = backend.Command()
}
if backend.Name() != defaultBackendName || !argsInjected {
buildCodexArgsFn = backend.BuildArgs
}
logInfo(fmt.Sprintf("Selected backend: %s", backend.Name()))
timeoutSec := resolveTimeout()
logInfo(fmt.Sprintf("Timeout: %ds", timeoutSec))
cfg.Timeout = timeoutSec
var taskText string
var piped bool
if cfg.ExplicitStdin {
logInfo("Explicit stdin mode: reading task from stdin")
data, err := io.ReadAll(stdinReader)
if err != nil {
logError("Failed to read stdin: " + err.Error())
return 1
}
taskText = string(data)
if taskText == "" {
logError("Explicit stdin mode requires task input from stdin")
return 1
}
piped = !isTerminal()
} else {
pipedTask, err := readPipedTask()
if err != nil {
logError("Failed to read piped stdin: " + err.Error())
return 1
}
piped = pipedTask != ""
if piped {
taskText = pipedTask
} else {
taskText = cfg.Task
}
}
if strings.TrimSpace(cfg.PromptFile) != "" {
prompt, err := readAgentPromptFile(cfg.PromptFile, cfg.PromptFileExplicit)
if err != nil {
logError("Failed to read prompt file: " + err.Error())
return 1
}
taskText = wrapTaskWithAgentPrompt(prompt, taskText)
}
useStdin := cfg.ExplicitStdin || shouldUseStdin(taskText, piped)
targetArg := taskText
if useStdin {
targetArg = "-"
}
codexArgs := buildCodexArgsFn(cfg, targetArg)
// Print startup information to stderr
fmt.Fprintf(os.Stderr, "[%s]\n", name)
fmt.Fprintf(os.Stderr, " Backend: %s\n", cfg.Backend)
fmt.Fprintf(os.Stderr, " Command: %s %s\n", codexCommand, strings.Join(codexArgs, " "))
fmt.Fprintf(os.Stderr, " PID: %d\n", os.Getpid())
fmt.Fprintf(os.Stderr, " Log: %s\n", logger.Path())
if useStdin {
var reasons []string
if piped {
reasons = append(reasons, "piped input")
}
if cfg.ExplicitStdin {
reasons = append(reasons, "explicit \"-\"")
}
if strings.Contains(taskText, "\n") {
reasons = append(reasons, "newline")
}
if strings.Contains(taskText, "\\") {
reasons = append(reasons, "backslash")
}
if strings.Contains(taskText, "\"") {
reasons = append(reasons, "double-quote")
}
if strings.Contains(taskText, "'") {
reasons = append(reasons, "single-quote")
}
if strings.Contains(taskText, "`") {
reasons = append(reasons, "backtick")
}
if strings.Contains(taskText, "$") {
reasons = append(reasons, "dollar")
}
if len(taskText) > 800 {
reasons = append(reasons, "length>800")
}
if len(reasons) > 0 {
logWarn(fmt.Sprintf("Using stdin mode for task due to: %s", strings.Join(reasons, ", ")))
}
}
logInfo(fmt.Sprintf("%s running...", cfg.Backend))
taskSpec := TaskSpec{
Task: taskText,
WorkDir: cfg.WorkDir,
Mode: cfg.Mode,
SessionID: cfg.SessionID,
Model: cfg.Model,
ReasoningEffort: cfg.ReasoningEffort,
SkipPermissions: cfg.SkipPermissions,
UseStdin: useStdin,
}
result := runTaskFn(taskSpec, false, cfg.Timeout)
if result.ExitCode != 0 {
return result.ExitCode
}
fmt.Println(result.Message)
if result.SessionID != "" {
fmt.Printf("\n---\nSESSION_ID: %s\n", result.SessionID)
}
return 0
}
func readAgentPromptFile(path string, allowOutsideClaudeDir bool) (string, error) {
raw := strings.TrimSpace(path)
if raw == "" {
return "", nil
}
expanded := raw
if raw == "~" || strings.HasPrefix(raw, "~/") || strings.HasPrefix(raw, "~\\") {
home, err := os.UserHomeDir()
if err != nil {
return "", err
}
if raw == "~" {
expanded = home
} else {
expanded = home + raw[1:]
}
}
absPath, err := filepath.Abs(expanded)
if err != nil {
return "", err
}
absPath = filepath.Clean(absPath)
home, err := os.UserHomeDir()
if err != nil {
if !allowOutsideClaudeDir {
return "", err
}
logWarn(fmt.Sprintf("Failed to resolve home directory for prompt file validation: %v; proceeding without restriction", err))
} else {
allowedDir := filepath.Clean(filepath.Join(home, ".claude"))
allowedAbs, err := filepath.Abs(allowedDir)
if err == nil {
allowedDir = filepath.Clean(allowedAbs)
}
isWithinDir := func(path, dir string) bool {
rel, err := filepath.Rel(dir, path)
if err != nil {
return false
}
rel = filepath.Clean(rel)
if rel == "." {
return true
}
if rel == ".." {
return false
}
prefix := ".." + string(os.PathSeparator)
return !strings.HasPrefix(rel, prefix)
}
if !allowOutsideClaudeDir {
if !isWithinDir(absPath, allowedDir) {
logWarn(fmt.Sprintf("Refusing to read prompt file outside %s: %s", allowedDir, absPath))
return "", fmt.Errorf("prompt file must be under %s", allowedDir)
}
resolvedPath, errPath := filepath.EvalSymlinks(absPath)
resolvedBase, errBase := filepath.EvalSymlinks(allowedDir)
if errPath == nil && errBase == nil {
resolvedPath = filepath.Clean(resolvedPath)
resolvedBase = filepath.Clean(resolvedBase)
if !isWithinDir(resolvedPath, resolvedBase) {
logWarn(fmt.Sprintf("Refusing to read prompt file outside %s (resolved): %s", resolvedBase, resolvedPath))
return "", fmt.Errorf("prompt file must be under %s", resolvedBase)
}
}
} else if !isWithinDir(absPath, allowedDir) {
logWarn(fmt.Sprintf("Reading prompt file outside %s: %s", allowedDir, absPath))
}
}
data, err := os.ReadFile(absPath)
if err != nil {
return "", err
}
return strings.TrimRight(string(data), "\r\n"), nil
}
func wrapTaskWithAgentPrompt(prompt string, task string) string {
return "<agent-prompt>\n" + prompt + "\n</agent-prompt>\n\n" + task
}
func setLogger(l *Logger) {
loggerPtr.Store(l)
}
func closeLogger() error {
logger := loggerPtr.Swap(nil)
if logger == nil {
return nil
}
return logger.Close()
}
func activeLogger() *Logger {
return loggerPtr.Load()
}
func logInfo(msg string) {
if logger := activeLogger(); logger != nil {
logger.Info(msg)
}
}
func logWarn(msg string) {
if logger := activeLogger(); logger != nil {
logger.Warn(msg)
}
}
func logError(msg string) {
if logger := activeLogger(); logger != nil {
logger.Error(msg)
}
}
func runCleanupHook() {
if logger := activeLogger(); logger != nil {
logger.Flush()
}
if cleanupHook != nil {
cleanupHook()
}
}
func printHelp() {
name := currentWrapperName()
help := fmt.Sprintf(`%[1]s - Go wrapper for AI CLI backends
Usage:
%[1]s "task" [workdir]
%[1]s --backend claude "task" [workdir]
%[1]s --prompt-file /path/to/prompt.md "task" [workdir]
%[1]s - [workdir] Read task from stdin
%[1]s resume <session_id> "task" [workdir]
%[1]s resume <session_id> - [workdir]
%[1]s --parallel Run tasks in parallel (config from stdin)
%[1]s --parallel --full-output Run tasks in parallel with full output (legacy)
%[1]s --version
%[1]s --help
Parallel mode examples:
%[1]s --parallel < tasks.txt
echo '...' | %[1]s --parallel
%[1]s --parallel --full-output < tasks.txt
%[1]s --parallel <<'EOF'
Environment Variables:
CODEX_TIMEOUT Timeout in milliseconds (default: 7200000)
CODEAGENT_ASCII_MODE Use ASCII symbols instead of Unicode (PASS/WARN/FAIL)
Exit Codes:
0 Success
1 General error (missing args, no output)
124 Timeout
127 backend command not found
130 Interrupted (Ctrl+C)
* Passthrough from backend process`, name)
fmt.Println(help)
}


@@ -1,217 +0,0 @@
//go:build unix || darwin || linux
// +build unix darwin linux
package main
import (
"errors"
"fmt"
"os"
"os/exec"
"runtime"
"strconv"
"strings"
"testing"
"time"
)
func TestIsProcessRunning(t *testing.T) {
t.Run("current process", func(t *testing.T) {
if !isProcessRunning(os.Getpid()) {
t.Fatalf("expected current process (pid=%d) to be running", os.Getpid())
}
})
t.Run("fake pid", func(t *testing.T) {
const nonexistentPID = 1 << 30
if isProcessRunning(nonexistentPID) {
t.Fatalf("expected pid %d to be reported as not running", nonexistentPID)
}
})
t.Run("terminated process", func(t *testing.T) {
pid := exitedProcessPID(t)
if isProcessRunning(pid) {
t.Fatalf("expected exited child process (pid=%d) to be reported as not running", pid)
}
})
t.Run("boundary values", func(t *testing.T) {
if isProcessRunning(0) {
t.Fatalf("pid 0 should never be treated as running")
}
if isProcessRunning(-42) {
t.Fatalf("negative pid should never be treated as running")
}
})
t.Run("find process error", func(t *testing.T) {
original := findProcess
defer func() { findProcess = original }()
mockErr := errors.New("findProcess failure")
findProcess = func(pid int) (*os.Process, error) {
return nil, mockErr
}
if isProcessRunning(1234) {
t.Fatalf("expected false when os.FindProcess fails")
}
})
}
func exitedProcessPID(t *testing.T) int {
t.Helper()
var cmd *exec.Cmd
if runtime.GOOS == "windows" {
cmd = exec.Command("cmd", "/c", "exit 0")
} else {
cmd = exec.Command("sh", "-c", "exit 0")
}
if err := cmd.Start(); err != nil {
t.Fatalf("failed to start helper process: %v", err)
}
pid := cmd.Process.Pid
if err := cmd.Wait(); err != nil {
t.Fatalf("helper process did not exit cleanly: %v", err)
}
time.Sleep(50 * time.Millisecond)
return pid
}
func TestRunProcessCheckSmoke(t *testing.T) {
t.Run("current process", func(t *testing.T) {
if !isProcessRunning(os.Getpid()) {
t.Fatalf("expected current process (pid=%d) to be running", os.Getpid())
}
})
t.Run("fake pid", func(t *testing.T) {
const nonexistentPID = 1 << 30
if isProcessRunning(nonexistentPID) {
t.Fatalf("expected pid %d to be reported as not running", nonexistentPID)
}
})
t.Run("boundary values", func(t *testing.T) {
if isProcessRunning(0) {
t.Fatalf("pid 0 should never be treated as running")
}
if isProcessRunning(-42) {
t.Fatalf("negative pid should never be treated as running")
}
})
t.Run("find process error", func(t *testing.T) {
original := findProcess
defer func() { findProcess = original }()
mockErr := errors.New("findProcess failure")
findProcess = func(pid int) (*os.Process, error) {
return nil, mockErr
}
if isProcessRunning(1234) {
t.Fatalf("expected false when os.FindProcess fails")
}
})
}
func TestGetProcessStartTimeReadsProcStat(t *testing.T) {
pid := 4321
boot := time.Unix(1_710_000_000, 0)
startTicks := uint64(4500)
statFields := make([]string, 25)
for i := range statFields {
statFields[i] = strconv.Itoa(i + 1)
}
statFields[19] = strconv.FormatUint(startTicks, 10)
statContent := fmt.Sprintf("%d (%s) %s", pid, "cmd with space", strings.Join(statFields, " "))
stubReadFile(t, func(path string) ([]byte, error) {
switch path {
case fmt.Sprintf("/proc/%d/stat", pid):
return []byte(statContent), nil
case "/proc/stat":
return []byte(fmt.Sprintf("cpu 0 0 0 0\nbtime %d\n", boot.Unix())), nil
default:
return nil, os.ErrNotExist
}
})
got := getProcessStartTime(pid)
want := boot.Add(time.Duration(startTicks/100) * time.Second)
if !got.Equal(want) {
t.Fatalf("getProcessStartTime() = %v, want %v", got, want)
}
}
func TestGetProcessStartTimeInvalidData(t *testing.T) {
pid := 99
stubReadFile(t, func(path string) ([]byte, error) {
switch path {
case fmt.Sprintf("/proc/%d/stat", pid):
return []byte("garbage"), nil
case "/proc/stat":
return []byte("btime not-a-number\n"), nil
default:
return nil, os.ErrNotExist
}
})
if got := getProcessStartTime(pid); !got.IsZero() {
t.Fatalf("invalid /proc data should return zero time, got %v", got)
}
}
func TestGetBootTimeParsesBtime(t *testing.T) {
const bootSec = 1_711_111_111
stubReadFile(t, func(path string) ([]byte, error) {
if path != "/proc/stat" {
return nil, os.ErrNotExist
}
content := fmt.Sprintf("intr 0\nbtime %d\n", bootSec)
return []byte(content), nil
})
got := getBootTime()
want := time.Unix(bootSec, 0)
if !got.Equal(want) {
t.Fatalf("getBootTime() = %v, want %v", got, want)
}
}
func TestGetBootTimeInvalidData(t *testing.T) {
cases := []struct {
name string
content string
}{
{"missing", "cpu 0 0 0 0"},
{"malformed", "btime abc"},
}
for _, tt := range cases {
t.Run(tt.name, func(t *testing.T) {
stubReadFile(t, func(string) ([]byte, error) {
return []byte(tt.content), nil
})
if got := getBootTime(); !got.IsZero() {
t.Fatalf("getBootTime() unexpected value for %s: %v", tt.name, got)
}
})
}
}
func stubReadFile(t *testing.T, fn func(string) ([]byte, error)) {
t.Helper()
original := readFileFn
readFileFn = fn
t.Cleanup(func() {
readFileFn = original
})
}


@@ -1,104 +0,0 @@
//go:build unix || darwin || linux
// +build unix darwin linux
package main
import (
"errors"
"fmt"
"os"
"strconv"
"strings"
"syscall"
"time"
)
var findProcess = os.FindProcess
var readFileFn = os.ReadFile
// isProcessRunning returns true if a process with the given pid is running on Unix-like systems.
func isProcessRunning(pid int) bool {
if pid <= 0 {
return false
}
proc, err := findProcess(pid)
if err != nil || proc == nil {
return false
}
err = proc.Signal(syscall.Signal(0))
if err != nil && (errors.Is(err, syscall.ESRCH) || errors.Is(err, os.ErrProcessDone)) {
return false
}
return true
}
// getProcessStartTime returns the start time of a process on Unix-like systems.
// Returns zero time if the start time cannot be determined.
func getProcessStartTime(pid int) time.Time {
if pid <= 0 {
return time.Time{}
}
// Read /proc/<pid>/stat to get process start time
statPath := fmt.Sprintf("/proc/%d/stat", pid)
data, err := readFileFn(statPath)
if err != nil {
return time.Time{}
}
// Parse stat file: fields are space-separated, but comm (field 2) can contain spaces
// Find the last ')' to skip comm field safely
content := string(data)
lastParen := strings.LastIndex(content, ")")
if lastParen == -1 {
return time.Time{}
}
fields := strings.Fields(content[lastParen+1:])
if len(fields) < 20 {
return time.Time{}
}
// Field 22 (index 19 after comm) is starttime in clock ticks since boot
startTicks, err := strconv.ParseUint(fields[19], 10, 64)
if err != nil {
return time.Time{}
}
// Get system boot time
bootTime := getBootTime()
if bootTime.IsZero() {
return time.Time{}
}
// Convert ticks to duration (typically 100 ticks/sec on most systems)
ticksPerSec := uint64(100) // sysconf(_SC_CLK_TCK), typically 100
startTime := bootTime.Add(time.Duration(startTicks/ticksPerSec) * time.Second)
return startTime
}
// getBootTime returns the system boot time by reading /proc/stat.
func getBootTime() time.Time {
data, err := readFileFn("/proc/stat")
if err != nil {
return time.Time{}
}
lines := strings.Split(string(data), "\n")
for _, line := range lines {
if strings.HasPrefix(line, "btime ") {
fields := strings.Fields(line)
if len(fields) >= 2 {
bootSec, err := strconv.ParseInt(fields[1], 10, 64)
if err == nil {
return time.Unix(bootSec, 0)
}
}
}
}
return time.Time{}
}


@@ -1,87 +0,0 @@
//go:build windows
// +build windows
package main
import (
"errors"
"os"
"syscall"
"time"
"unsafe"
)
const (
processQueryLimitedInformation = 0x1000
stillActive = 259 // STILL_ACTIVE exit code
)
var (
findProcess = os.FindProcess
kernel32 = syscall.NewLazyDLL("kernel32.dll")
getProcessTimes = kernel32.NewProc("GetProcessTimes")
fileTimeToUnixFn = fileTimeToUnix
)
// isProcessRunning returns true if a process with the given pid is running on Windows.
func isProcessRunning(pid int) bool {
if pid <= 0 {
return false
}
if _, err := findProcess(pid); err != nil {
return false
}
handle, err := syscall.OpenProcess(processQueryLimitedInformation, false, uint32(pid))
if err != nil {
if errors.Is(err, syscall.ERROR_ACCESS_DENIED) {
return true
}
return false
}
defer syscall.CloseHandle(handle)
var exitCode uint32
if err := syscall.GetExitCodeProcess(handle, &exitCode); err != nil {
return true
}
return exitCode == stillActive
}
// getProcessStartTime returns the start time of a process on Windows.
// Returns zero time if the start time cannot be determined.
func getProcessStartTime(pid int) time.Time {
if pid <= 0 {
return time.Time{}
}
handle, err := syscall.OpenProcess(processQueryLimitedInformation, false, uint32(pid))
if err != nil {
return time.Time{}
}
defer syscall.CloseHandle(handle)
var creationTime, exitTime, kernelTime, userTime syscall.Filetime
ret, _, _ := getProcessTimes.Call(
uintptr(handle),
uintptr(unsafe.Pointer(&creationTime)),
uintptr(unsafe.Pointer(&exitTime)),
uintptr(unsafe.Pointer(&kernelTime)),
uintptr(unsafe.Pointer(&userTime)),
)
if ret == 0 {
return time.Time{}
}
return fileTimeToUnixFn(creationTime)
}
// fileTimeToUnix converts Windows FILETIME to Unix time.
func fileTimeToUnix(ft syscall.Filetime) time.Time {
// FILETIME is 100-nanosecond intervals since January 1, 1601 UTC
nsec := ft.Nanoseconds()
return time.Unix(0, nsec)
}


@@ -1,64 +0,0 @@
//go:build windows
// +build windows
package main
import (
"os"
"testing"
"time"
)
func TestIsProcessRunning(t *testing.T) {
t.Run("boundary values", func(t *testing.T) {
if isProcessRunning(0) {
t.Fatalf("expected pid 0 to be reported as not running")
}
if isProcessRunning(-1) {
t.Fatalf("expected pid -1 to be reported as not running")
}
})
t.Run("current process", func(t *testing.T) {
if !isProcessRunning(os.Getpid()) {
t.Fatalf("expected current process (pid=%d) to be running", os.Getpid())
}
})
t.Run("fake pid", func(t *testing.T) {
const nonexistentPID = 1 << 30
if isProcessRunning(nonexistentPID) {
t.Fatalf("expected pid %d to be reported as not running", nonexistentPID)
}
})
}
func TestGetProcessStartTimeReadsProcStat(t *testing.T) {
start := getProcessStartTime(os.Getpid())
if start.IsZero() {
t.Fatalf("expected non-zero start time for current process")
}
if start.After(time.Now().Add(5 * time.Second)) {
t.Fatalf("start time is unexpectedly in the future: %v", start)
}
}
func TestGetProcessStartTimeInvalidData(t *testing.T) {
if !getProcessStartTime(0).IsZero() {
t.Fatalf("expected zero time for pid 0")
}
if !getProcessStartTime(-1).IsZero() {
t.Fatalf("expected zero time for negative pid")
}
if !getProcessStartTime(1 << 30).IsZero() {
t.Fatalf("expected zero time for non-existent pid")
}
}
func TestGetBootTimeParsesBtime(t *testing.T) {
t.Skip("getBootTime is only implemented on Unix-like systems")
}
func TestGetBootTimeInvalidData(t *testing.T) {
t.Skip("getBootTime is only implemented on Unix-like systems")
}


@@ -1,126 +0,0 @@
package main
import (
"os"
"path/filepath"
"strings"
)
const (
defaultWrapperName = "codeagent-wrapper"
legacyWrapperName = "codex-wrapper"
)
var executablePathFn = os.Executable
func normalizeWrapperName(path string) string {
if path == "" {
return ""
}
base := filepath.Base(path)
base = strings.TrimSuffix(base, ".exe") // tolerate Windows executables
switch base {
case defaultWrapperName, legacyWrapperName:
return base
default:
return ""
}
}
// currentWrapperName resolves the wrapper name based on the invoked binary.
// Only known names are honored to avoid leaking build/test binary names into logs.
func currentWrapperName() string {
if len(os.Args) == 0 {
return defaultWrapperName
}
if name := normalizeWrapperName(os.Args[0]); name != "" {
return name
}
execPath, err := executablePathFn()
if err == nil {
if name := normalizeWrapperName(execPath); name != "" {
return name
}
if resolved, err := filepath.EvalSymlinks(execPath); err == nil {
if name := normalizeWrapperName(resolved); name != "" {
return name
}
if alias := resolveAlias(execPath, resolved); alias != "" {
return alias
}
}
if alias := resolveAlias(execPath, ""); alias != "" {
return alias
}
}
return defaultWrapperName
}
// logPrefixes returns the set of accepted log name prefixes, including the
// current wrapper name and legacy aliases.
func logPrefixes() []string {
prefixes := []string{currentWrapperName(), defaultWrapperName, legacyWrapperName}
seen := make(map[string]struct{}, len(prefixes))
var unique []string
for _, prefix := range prefixes {
if prefix == "" {
continue
}
if _, ok := seen[prefix]; ok {
continue
}
seen[prefix] = struct{}{}
unique = append(unique, prefix)
}
return unique
}
// primaryLogPrefix returns the preferred filename prefix for log files.
// Defaults to the current wrapper name when available, otherwise falls back
// to the canonical default name.
func primaryLogPrefix() string {
prefixes := logPrefixes()
if len(prefixes) == 0 {
return defaultWrapperName
}
return prefixes[0]
}
func resolveAlias(execPath string, target string) string {
if execPath == "" {
return ""
}
dir := filepath.Dir(execPath)
for _, candidate := range []string{defaultWrapperName, legacyWrapperName} {
aliasPath := filepath.Join(dir, candidate)
info, err := os.Lstat(aliasPath)
if err != nil {
continue
}
if info.Mode()&os.ModeSymlink == 0 {
continue
}
resolved, err := filepath.EvalSymlinks(aliasPath)
if err != nil {
continue
}
if target != "" && resolved != target {
continue
}
if name := normalizeWrapperName(aliasPath); name != "" {
return name
}
}
return ""
}


@@ -1,50 +0,0 @@
package main
import (
"os"
"path/filepath"
"testing"
)
func TestCurrentWrapperNameFallsBackToExecutable(t *testing.T) {
defer resetTestHooks()
tempDir := t.TempDir()
execPath := filepath.Join(tempDir, "codeagent-wrapper")
if err := os.WriteFile(execPath, []byte("#!/bin/true\n"), 0o755); err != nil {
t.Fatalf("failed to write fake binary: %v", err)
}
os.Args = []string{filepath.Join(tempDir, "custom-name")}
executablePathFn = func() (string, error) {
return execPath, nil
}
if got := currentWrapperName(); got != defaultWrapperName {
t.Fatalf("currentWrapperName() = %q, want %q", got, defaultWrapperName)
}
}
func TestCurrentWrapperNameDetectsLegacyAliasSymlink(t *testing.T) {
defer resetTestHooks()
tempDir := t.TempDir()
execPath := filepath.Join(tempDir, "wrapper")
aliasPath := filepath.Join(tempDir, legacyWrapperName)
if err := os.WriteFile(execPath, []byte("#!/bin/true\n"), 0o755); err != nil {
t.Fatalf("failed to write fake binary: %v", err)
}
if err := os.Symlink(execPath, aliasPath); err != nil {
t.Fatalf("failed to create alias: %v", err)
}
os.Args = []string{filepath.Join(tempDir, "unknown-runner")}
executablePathFn = func() (string, error) {
return execPath, nil
}
if got := currentWrapperName(); got != legacyWrapperName {
t.Fatalf("currentWrapperName() = %q, want %q", got, legacyWrapperName)
}
}


@@ -93,7 +93,7 @@
]
},
"essentials": {
"enabled": true,
"enabled": false,
"description": "Core development commands and utilities",
"operations": [
{
@@ -156,6 +156,68 @@
"description": "Install develop agent prompt"
}
]
},
"sparv": {
"enabled": false,
"description": "SPARV workflow (Specify→Plan→Act→Review→Vault) with 10-point gate",
"operations": [
{
"type": "copy_dir",
"source": "skills/sparv",
"target": "skills/sparv",
"description": "Install sparv skill with all scripts and hooks"
}
]
},
"do": {
"enabled": false,
"description": "7-phase feature development workflow with codeagent orchestration",
"operations": [
{
"type": "copy_dir",
"source": "skills/do",
"target": "skills/do",
"description": "Install do skill with hooks"
}
]
},
"course": {
"enabled": false,
"description": "Course development workflow with dev, product-requirements, and test-cases skills",
"operations": [
{
"type": "copy_dir",
"source": "skills/dev",
"target": "skills/dev",
"description": "Install dev skill with agents"
},
{
"type": "copy_file",
"source": "skills/product-requirements/SKILL.md",
"target": "skills/product-requirements/SKILL.md",
"description": "Install product-requirements skill"
},
{
"type": "copy_dir",
"source": "skills/test-cases",
"target": "skills/test-cases",
"description": "Install test-cases skill with references"
},
{
"type": "copy_file",
"source": "skills/codeagent/SKILL.md",
"target": "skills/codeagent/SKILL.md",
"description": "Install codeagent skill"
},
{
"type": "run_command",
"command": "bash install.sh",
"description": "Install codeagent-wrapper binary",
"env": {
"INSTALL_DIR": "${install_dir}"
}
}
]
}
}
}

go.work.sum Normal file

@@ -0,0 +1,143 @@
cloud.google.com/go v0.112.1 h1:uJSeirPke5UNZHIb4SxfZklVSiWWVqW4oXlETwZziwM=
cloud.google.com/go v0.112.1/go.mod h1:+Vbu+Y1UU+I1rjmzeMOb/8RfkKJK2Gyxi1X6jJCZLo4=
cloud.google.com/go/compute v1.24.0 h1:phWcR2eWzRJaL/kOiJwfFsPs4BaKq1j6vnpZrc1YlVg=
cloud.google.com/go/compute v1.24.0/go.mod h1:kw1/T+h/+tK2LJK0wiPPx1intgdAM3j/g3hFDlscY40=
cloud.google.com/go/compute/metadata v0.2.3 h1:mg4jlk7mCAj6xXp9UJ4fjI9VUI5rubuGBW5aJ7UnBMY=
cloud.google.com/go/compute/metadata v0.2.3/go.mod h1:VAV5nSsACxMJvgaAuX6Pk2AawlZn8kiOGuCv6gTkwuA=
cloud.google.com/go/firestore v1.15.0 h1:/k8ppuWOtNuDHt2tsRV42yI21uaGnKDEQnRFeBpbFF8=
cloud.google.com/go/firestore v1.15.0/go.mod h1:GWOxFXcv8GZUtYpWHw/w6IuYNux/BtmeVTMmjrm4yhk=
cloud.google.com/go/iam v1.1.5 h1:1jTsCu4bcsNsE4iiqNT5SHwrDRCfRmIaaaVFhRveTJI=
cloud.google.com/go/iam v1.1.5/go.mod h1:rB6P/Ic3mykPbFio+vo7403drjlgvoWfYpJhMXEbzv8=
cloud.google.com/go/longrunning v0.5.5 h1:GOE6pZFdSrTb4KAiKnXsJBtlE6mEyaW44oKyMILWnOg=
cloud.google.com/go/longrunning v0.5.5/go.mod h1:WV2LAxD8/rg5Z1cNW6FJ/ZpX4E4VnDnoTk0yawPBB7s=
cloud.google.com/go/storage v1.35.1 h1:B59ahL//eDfx2IIKFBeT5Atm9wnNmj3+8xG/W4WB//w=
cloud.google.com/go/storage v1.35.1/go.mod h1:M6M/3V/D3KpzMTJyPOR/HU6n2Si5QdaXYEsng2xgOs8=
github.com/armon/go-metrics v0.4.1 h1:hR91U9KYmb6bLBYLQjyM+3j+rcd/UhE+G78SFnF8gJA=
github.com/armon/go-metrics v0.4.1/go.mod h1:E6amYzXo6aW1tqzoZGT755KkbgrJsSdpwZ+3JqfkOG4=
github.com/coreos/go-semver v0.3.0 h1:wkHLiw0WNATZnSG7epLsujiMCgPAc9xhjJ4tgnAxmfM=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/cpuguy83/go-md2man/v2 v2.0.4 h1:wfIWP927BUkWJb2NmU/kNDYIBTh/ziUX91+lVfRxZq4=
github.com/fatih/color v1.14.1 h1:qfhVLaG5s+nCROl1zJsZRxFeYrHLqWroPOQ8BWiNb4w=
github.com/fatih/color v1.14.1/go.mod h1:2oHN61fhTpgcxD3TSWCgKDiH1+x4OiDVVGH8WlgGZGg=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/go-logr/logr v1.4.1 h1:pKouT5E8xu9zeFC39JXRDukb6JFQPXM5p5I91188VAQ=
github.com/go-logr/logr v1.4.1/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/godbus/dbus/v5 v5.0.4 h1:9349emZab16e7zQvpmsbtjc18ykshndd8y2PG3sgJbA=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/s2a-go v0.1.7 h1:60BLSyTrOV4/haCDW4zb1guZItoSq8foHCXrAnjBo/o=
github.com/google/s2a-go v0.1.7/go.mod h1:50CgR4k1jNlWBu4UfS4AcfhVe1r6pdZPygJ3R8F0Qdw=
github.com/google/uuid v1.4.0 h1:MtMxsa51/r9yyhkyLsVeVt0B+BGQZzpQiTQ4eHZ8bc4=
github.com/google/uuid v1.4.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/enterprise-certificate-proxy v0.3.2 h1:Vie5ybvEvT75RniqhfFxPRy3Bf7vr3h0cechB90XaQs=
github.com/googleapis/enterprise-certificate-proxy v0.3.2/go.mod h1:VLSiSSBs/ksPL8kq3OBOQ6WRI2QnaFynd1DCjZ62+V0=
github.com/googleapis/gax-go/v2 v2.12.3 h1:5/zPPDvw8Q1SuXjrqrZslrqT7dL/uJT2CQii/cLCKqA=
github.com/googleapis/gax-go/v2 v2.12.3/go.mod h1:AKloxT6GtNbaLm8QTNSidHUVsHYcBHwWRvkNFJUQcS4=
github.com/googleapis/google-cloud-go-testing v0.0.0-20210719221736-1c9a4c676720 h1:zC34cGQu69FG7qzJ3WiKW244WfhDC3xxYMeNOX2gtUQ=
github.com/googleapis/google-cloud-go-testing v0.0.0-20210719221736-1c9a4c676720/go.mod h1:dvDLG8qkwmyD9a/MJJN3XJcT3xFxOKAvTZGvuZmac9g=
github.com/hashicorp/consul/api v1.28.2 h1:mXfkRHrpHN4YY3RqL09nXU1eHKLNiuAN4kHvDQ16k/8=
github.com/hashicorp/consul/api v1.28.2/go.mod h1:KyzqzgMEya+IZPcD65YFoOVAgPpbfERu4I/tzG6/ueE=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ=
github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
github.com/hashicorp/go-hclog v1.5.0 h1:bI2ocEMgcVlz55Oj1xZNBsVi900c7II+fWDyV9o+13c=
github.com/hashicorp/go-hclog v1.5.0/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M=
github.com/hashicorp/go-immutable-radix v1.3.1 h1:DKHmCUm2hRBK510BaiZlwvpD40f8bJFeZnpfm2KLowc=
github.com/hashicorp/go-immutable-radix v1.3.1/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/go-rootcerts v1.0.2 h1:jzhAVGtqPKbwpyCPELlgNWhE1znq+qwJtW5Oi2viEzc=
github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
github.com/hashicorp/golang-lru v0.5.4 h1:YDjusn29QI/Das2iO9M0BHnIbxPeyuCHsjMW+lJfyTc=
github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
github.com/hashicorp/serf v0.10.1 h1:Z1H2J60yRKvfDYAOZLd2MU0ND4AH/WDz7xYHDWQsIPY=
github.com/hashicorp/serf v0.10.1/go.mod h1:yL2t6BqATOLGc5HF7qbFkTfXoPIY0WZdWHfEvMqbG+4=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/klauspost/compress v1.17.2 h1:RlWWUY/Dr4fL8qk9YG7DTZ7PDgME2V4csBXA8L/ixi4=
github.com/klauspost/compress v1.17.2/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
github.com/kr/fs v0.1.0 h1:Jskdu9ieNAYnjxsi0LbQp1ulIKZV1LAFgK1tWhpZgl8=
github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/nats-io/nats.go v1.34.0 h1:fnxnPCNiwIG5w08rlMcEKTUw4AV/nKyGCOJE8TdhSPk=
github.com/nats-io/nats.go v1.34.0/go.mod h1:Ubdu4Nh9exXdSz0RVWRFBbRfrbSxOYd26oF0wkWclB8=
github.com/nats-io/nkeys v0.4.7 h1:RwNJbbIdYCoClSDNY7QVKZlyb/wfT6ugvFCiKy6vDvI=
github.com/nats-io/nkeys v0.4.7/go.mod h1:kqXRgRDPlGy7nGaEDMuYzmiJCIAAWDK0IMBtDmGD0nc=
github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/sftp v1.13.6 h1:JFZT4XbOU7l77xGSpOdW+pwIMqP044IyjXX6FGyEKFo=
github.com/pkg/sftp v1.13.6/go.mod h1:tz1ryNURKu77RL+GuCzmoJYxQczL3wLNNpPWagdg4Qk=
github.com/rs/xid v1.6.0 h1:fV591PaemRlL6JfRxGDEPl69wICngIQ3shQtzfy2gxU=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/sagikazarmark/crypt v0.19.0 h1:WMyLTjHBo64UvNcWqpzY3pbZTYgnemZU8FBZigKc42E=
github.com/sagikazarmark/crypt v0.19.0/go.mod h1:c6vimRziqqERhtSe0MhIvzE1w54FrCHtrXb5NH/ja78=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
go.etcd.io/etcd/api/v3 v3.5.12 h1:W4sw5ZoU2Juc9gBWuLk5U6fHfNVyY1WC5g9uiXZio/c=
go.etcd.io/etcd/api/v3 v3.5.12/go.mod h1:Ot+o0SWSyT6uHhA56al1oCED0JImsRiU9Dc26+C2a+4=
go.etcd.io/etcd/client/pkg/v3 v3.5.12 h1:EYDL6pWwyOsylrQyLp2w+HkQ46ATiOvoEdMarindU2A=
go.etcd.io/etcd/client/pkg/v3 v3.5.12/go.mod h1:seTzl2d9APP8R5Y2hFL3NVlD6qC/dOT+3kvrqPyTas4=
go.etcd.io/etcd/client/v2 v2.305.12 h1:0m4ovXYo1CHaA/Mp3X/Fak5sRNIWf01wk/X1/G3sGKI=
go.etcd.io/etcd/client/v2 v2.305.12/go.mod h1:aQ/yhsxMu+Oht1FOupSr60oBvcS9cKXHrzBpDsPTf9E=
go.etcd.io/etcd/client/v3 v3.5.12 h1:v5lCPXn1pf1Uu3M4laUE2hp/geOTc5uPcYYsNe1lDxg=
go.etcd.io/etcd/client/v3 v3.5.12/go.mod h1:tSbBCakoWmmddL+BKVAJHa9km+O/E+bumDe9mSbPiqw=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0 h1:4Pp6oUg3+e/6M4C0A/3kJ2VYa++dsWVTtGgLVj5xtHg=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0/go.mod h1:Mjt1i1INqiaoZOMGR1RIUJN+i3ChKoFRqzrRQhlkbs0=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 h1:jq9TW8u3so/bN+JPT166wjOI6/vQPF6Xe7nMNIltagk=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0/go.mod h1:p8pYQP+m5XfbZm9fxtSKAbM6oIllS7s2AfxrChvc7iw=
go.opentelemetry.io/otel v1.24.0 h1:0LAOdjNmQeSTzGBzduGe/rU4tZhMwL5rWgtp9Ku5Jfo=
go.opentelemetry.io/otel v1.24.0/go.mod h1:W7b9Ozg4nkF5tWI5zsXkaKKDjdVjpD4oAt9Qi/MArHo=
go.opentelemetry.io/otel/metric v1.24.0 h1:6EhoGWWK28x1fbpA4tYTOWBkPefTDQnb8WSGXlc88kI=
go.opentelemetry.io/otel/metric v1.24.0/go.mod h1:VYhLe1rFfxuTXLgj4CBiyz+9WYBA8pNGJgDcSFRKBco=
go.opentelemetry.io/otel/trace v1.24.0 h1:CsKnnL4dUAr/0llH9FKuc698G04IrpWV0MQA/Y1YELI=
go.opentelemetry.io/otel/trace v1.24.0/go.mod h1:HPc3Xr/cOApsBI154IU0OI0HJexz+aw5uPdbs3UCjNU=
go.uber.org/zap v1.21.0 h1:WefMeulhovoZ2sYXz7st6K0sLj7bBhpiFaud4r4zST8=
go.uber.org/zap v1.21.0/go.mod h1:wjWOCqI0f2ZZrJF/UufIOkiC8ii6tm1iqIsLo76RfJw=
golang.org/x/crypto v0.21.0 h1:X31++rzVUdKhX5sWmSOFZxx8UW/ldWx55cbf08iNAMA=
golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs=
golang.org/x/mod v0.12.0 h1:rmsUpXtvNzj340zd98LZ4KntptpfRHwpFOHG188oHXc=
golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.23.0 h1:7EYJ93RZ9vYSZAIb2x3lnuvqO5zneoD6IvWjuhfxjTs=
golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
golang.org/x/oauth2 v0.18.0 h1:09qnuIAgzdx1XplqJvW6CQqMCtGZykZWcXzPMPUusvI=
golang.org/x/oauth2 v0.18.0/go.mod h1:Wf7knwG0MPoWIMMBgFlEaSUDaKskp0dCfrlJRJXbBi8=
golang.org/x/sync v0.6.0 h1:5BMeUDZ7vkXGfEr1x9B4bRcTH4lpkTkpdh0T/J+qjbQ=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.13.0 h1:Iey4qkscZuv0VvIt8E0neZjtPVQFSc870HQ448QgEmQ=
golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 h1:H2TDz8ibqkAF6YGhCdN3jS9O0/s90v0rJh3X/OLHEUk=
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
google.golang.org/api v0.171.0 h1:w174hnBPqut76FzW5Qaupt7zY8Kql6fiVjgys4f58sU=
google.golang.org/api v0.171.0/go.mod h1:Hnq5AHm4OTMt2BUVjael2CWZFD6vksJdWCWiUAmjC9o=
google.golang.org/appengine v1.6.8 h1:IhEN5q69dyKagZPYMSdIjS2HqprW324FRQZJcGqPAsM=
google.golang.org/appengine v1.6.8/go.mod h1:1jJ3jBArFh5pcgW8gCtRJnepW8FzD1V44FJffLiz/Ds=
google.golang.org/genproto v0.0.0-20240213162025-012b6fc9bca9 h1:9+tzLLstTlPTRyJTh+ah5wIMsBW5c4tQwGTN3thOW9Y=
google.golang.org/genproto v0.0.0-20240213162025-012b6fc9bca9/go.mod h1:mqHbVIp48Muh7Ywss/AD6I5kNVKZMmAa/QEW58Gxp2s=
google.golang.org/genproto/googleapis/api v0.0.0-20240311132316-a219d84964c2 h1:rIo7ocm2roD9DcFIX67Ym8icoGCKSARAiPljFhh5suQ=
google.golang.org/genproto/googleapis/api v0.0.0-20240311132316-a219d84964c2/go.mod h1:O1cOfN1Cy6QEYr7VxtjOyP5AdAuR0aJ/MYZaaof623Y=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240314234333-6e1732d8331c h1:lfpJ/2rWPa/kJgxyyXM8PrNnfCzcmxJ265mADgwmvLI=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240314234333-6e1732d8331c/go.mod h1:WtryC6hu0hhx87FDGxWCDptyssuo68sk10vYjF+T9fY=
google.golang.org/grpc v1.62.1 h1:B4n+nfKzOICUXMgyrNd19h/I9oH0L1pizfk1d4zSgTk=
google.golang.org/grpc v1.62.1/go.mod h1:IWTG0VlJLCh1SkC58F7np9ka9mx/WNkjl4PGJaiq+QE=
google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=


@@ -23,6 +23,7 @@ except ImportError: # pragma: no cover
jsonschema = None
DEFAULT_INSTALL_DIR = "~/.claude"
SETTINGS_FILE = "settings.json"
def _ensure_list(ctx: Dict[str, Any], key: str) -> List[Any]:
@@ -46,7 +47,7 @@ def parse_args(argv: Optional[Iterable[str]] = None) -> argparse.Namespace:
)
parser.add_argument(
"--module",
help="Comma-separated modules to install, or 'all' for all enabled",
help="Comma-separated modules to install/uninstall, or 'all'",
)
parser.add_argument(
"--config",
@@ -58,6 +59,16 @@ def parse_args(argv: Optional[Iterable[str]] = None) -> argparse.Namespace:
action="store_true",
help="List available modules and exit",
)
parser.add_argument(
"--status",
action="store_true",
help="Show installation status of all modules",
)
parser.add_argument(
"--uninstall",
action="store_true",
help="Uninstall specified modules",
)
parser.add_argument(
"--force",
action="store_true",
@@ -81,6 +92,151 @@ def _load_json(path: Path) -> Any:
raise ValueError(f"Invalid JSON in {path}: {exc}") from exc
def _save_json(path: Path, data: Any) -> None:
"""Save data to JSON file with proper formatting."""
path.parent.mkdir(parents=True, exist_ok=True)
with path.open("w", encoding="utf-8") as fh:
json.dump(data, fh, indent=2, ensure_ascii=False)
fh.write("\n")
# =============================================================================
# Hooks Management
# =============================================================================
def load_settings(ctx: Dict[str, Any]) -> Dict[str, Any]:
"""Load settings.json from install directory."""
settings_path = ctx["install_dir"] / SETTINGS_FILE
if settings_path.exists():
try:
return _load_json(settings_path)
except (ValueError, FileNotFoundError):
return {}
return {}
def save_settings(ctx: Dict[str, Any], settings: Dict[str, Any]) -> None:
"""Save settings.json to install directory."""
settings_path = ctx["install_dir"] / SETTINGS_FILE
_save_json(settings_path, settings)
def find_module_hooks(module_name: str, cfg: Dict[str, Any], ctx: Dict[str, Any]) -> Optional[tuple]:
"""Find hooks.json for a module if it exists.
Returns tuple of (hooks_config, plugin_root_path) or None.
"""
# Check for hooks in operations (copy_dir targets)
for op in cfg.get("operations", []):
if op.get("type") == "copy_dir":
target_dir = ctx["install_dir"] / op["target"]
hooks_file = target_dir / "hooks" / "hooks.json"
if hooks_file.exists():
try:
return (_load_json(hooks_file), str(target_dir))
except (ValueError, FileNotFoundError):
pass
# Also check source directory during install
for op in cfg.get("operations", []):
if op.get("type") == "copy_dir":
target_dir = ctx["install_dir"] / op["target"]
source_dir = ctx["config_dir"] / op["source"]
hooks_file = source_dir / "hooks" / "hooks.json"
if hooks_file.exists():
try:
return (_load_json(hooks_file), str(target_dir))
except (ValueError, FileNotFoundError):
pass
return None
def _create_hook_marker(module_name: str) -> str:
"""Create a marker to identify hooks from a specific module."""
return f"__module:{module_name}__"
def _replace_hook_variables(obj: Any, plugin_root: str) -> Any:
"""Recursively replace ${CLAUDE_PLUGIN_ROOT} in hook config."""
if isinstance(obj, str):
return obj.replace("${CLAUDE_PLUGIN_ROOT}", plugin_root)
elif isinstance(obj, dict):
return {k: _replace_hook_variables(v, plugin_root) for k, v in obj.items()}
elif isinstance(obj, list):
return [_replace_hook_variables(item, plugin_root) for item in obj]
return obj
def merge_hooks_to_settings(module_name: str, hooks_config: Dict[str, Any], ctx: Dict[str, Any], plugin_root: str = "") -> None:
"""Merge module hooks into settings.json."""
settings = load_settings(ctx)
settings.setdefault("hooks", {})
module_hooks = hooks_config.get("hooks", {})
marker = _create_hook_marker(module_name)
# Replace ${CLAUDE_PLUGIN_ROOT} with actual path
if plugin_root:
module_hooks = _replace_hook_variables(module_hooks, plugin_root)
for hook_type, hook_entries in module_hooks.items():
settings["hooks"].setdefault(hook_type, [])
for entry in hook_entries:
# Add marker to identify this hook's source module
entry_copy = dict(entry)
entry_copy["__module__"] = module_name
# Check if already exists (avoid duplicates)
exists = False
for existing in settings["hooks"][hook_type]:
if existing.get("__module__") == module_name:
# Same module, check if same hook
if _hooks_equal(existing, entry_copy):
exists = True
break
if not exists:
settings["hooks"][hook_type].append(entry_copy)
save_settings(ctx, settings)
write_log({"level": "INFO", "message": f"Merged hooks for module: {module_name}"}, ctx)
def unmerge_hooks_from_settings(module_name: str, ctx: Dict[str, Any]) -> None:
"""Remove module hooks from settings.json."""
settings = load_settings(ctx)
if "hooks" not in settings:
return
modified = False
for hook_type in list(settings["hooks"].keys()):
original_len = len(settings["hooks"][hook_type])
settings["hooks"][hook_type] = [
entry for entry in settings["hooks"][hook_type]
if entry.get("__module__") != module_name
]
if len(settings["hooks"][hook_type]) < original_len:
modified = True
# Remove empty hook type arrays
if not settings["hooks"][hook_type]:
del settings["hooks"][hook_type]
if modified:
save_settings(ctx, settings)
write_log({"level": "INFO", "message": f"Removed hooks for module: {module_name}"}, ctx)
def _hooks_equal(hook1: Dict[str, Any], hook2: Dict[str, Any]) -> bool:
"""Compare two hooks ignoring the __module__ marker."""
h1 = {k: v for k, v in hook1.items() if k != "__module__"}
h2 = {k: v for k, v in hook2.items() if k != "__module__"}
return h1 == h2
def load_config(path: str) -> Dict[str, Any]:
"""Load config and validate against JSON Schema.
@@ -166,22 +322,93 @@ def resolve_paths(config: Dict[str, Any], args: argparse.Namespace) -> Dict[str,
def list_modules(config: Dict[str, Any]) -> None:
print("Available Modules:")
print(f"{'Name':<15} {'Default':<8} Description")
print("-" * 60)
for name, cfg in config.get("modules", {}).items():
print(f"{'#':<3} {'Name':<15} {'Default':<8} Description")
print("-" * 65)
for idx, (name, cfg) in enumerate(config.get("modules", {}).items(), 1):
default = "" if cfg.get("enabled", False) else ""
desc = cfg.get("description", "")
print(f"{name:<15} {default:<8} {desc}")
print(f"{idx:<3} {name:<15} {default:<8} {desc}")
print("\n✓ = installed by default when no --module specified")
def load_installed_status(ctx: Dict[str, Any]) -> Dict[str, Any]:
"""Load installed modules status from status file."""
status_path = Path(ctx["status_file"])
if status_path.exists():
try:
return _load_json(status_path)
except (ValueError, FileNotFoundError):
return {"modules": {}}
return {"modules": {}}
def check_module_installed(name: str, cfg: Dict[str, Any], ctx: Dict[str, Any]) -> bool:
"""Check if a module is installed by verifying its files exist."""
install_dir = ctx["install_dir"]
for op in cfg.get("operations", []):
op_type = op.get("type")
if op_type in ("copy_dir", "copy_file"):
target = (install_dir / op["target"]).expanduser().resolve()
if target.exists():
return True
return False
def get_installed_modules(config: Dict[str, Any], ctx: Dict[str, Any]) -> Dict[str, bool]:
"""Get installation status of all modules by checking files."""
result = {}
modules = config.get("modules", {})
# First check status file
status = load_installed_status(ctx)
status_modules = status.get("modules", {})
for name, cfg in modules.items():
# Check both status file and filesystem
in_status = name in status_modules
files_exist = check_module_installed(name, cfg, ctx)
result[name] = in_status or files_exist
return result
def list_modules_with_status(config: Dict[str, Any], ctx: Dict[str, Any]) -> None:
"""List modules with installation status."""
installed_status = get_installed_modules(config, ctx)
status_data = load_installed_status(ctx)
status_modules = status_data.get("modules", {})
print("\n" + "=" * 70)
print("Module Status")
print("=" * 70)
print(f"{'#':<3} {'Name':<15} {'Status':<15} {'Installed At':<20} Description")
print("-" * 70)
for idx, (name, cfg) in enumerate(config.get("modules", {}).items(), 1):
desc = cfg.get("description", "")[:25]
if installed_status.get(name, False):
status = "✅ Installed"
installed_at = status_modules.get(name, {}).get("installed_at", "")[:16]
else:
status = "⬚ Not installed"
installed_at = ""
print(f"{idx:<3} {name:<15} {status:<15} {installed_at:<20} {desc}")
total = len(config.get("modules", {}))
installed_count = sum(1 for v in installed_status.values() if v)
print(f"\nTotal: {installed_count}/{total} modules installed")
print(f"Install dir: {ctx['install_dir']}")
def select_modules(config: Dict[str, Any], module_arg: Optional[str]) -> Dict[str, Any]:
modules = config.get("modules", {})
if not module_arg:
# No --module specified: show interactive selection
return interactive_select_modules(config)
if module_arg.strip().lower() == "all":
return dict(modules.items())
selected: Dict[str, Any] = {}
for name in (part.strip() for part in module_arg.split(",")):
@@ -193,6 +420,263 @@ def select_modules(config: Dict[str, Any], module_arg: Optional[str]) -> Dict[st
return selected
def interactive_select_modules(config: Dict[str, Any]) -> Dict[str, Any]:
"""Interactive module selection when no --module is specified."""
modules = config.get("modules", {})
module_names = list(modules.keys())
print("\n" + "=" * 65)
print("Welcome to Claude Plugin Installer")
print("=" * 65)
print("\nNo modules specified. Please select modules to install:\n")
list_modules(config)
print("\nEnter module numbers or names (comma-separated), or:")
print(" 'all' - Install all modules")
print(" 'q' - Quit without installing")
print()
while True:
try:
user_input = input("Select modules: ").strip()
except (EOFError, KeyboardInterrupt):
print("\nInstallation cancelled.")
sys.exit(0)
if not user_input:
print("No input. Please enter module numbers, names, 'all', or 'q'.")
continue
if user_input.lower() == "q":
print("Installation cancelled.")
sys.exit(0)
if user_input.lower() == "all":
print(f"\nSelected all {len(modules)} modules.")
return dict(modules.items())
# Parse selection
selected: Dict[str, Any] = {}
parts = [p.strip() for p in user_input.replace(" ", ",").split(",") if p.strip()]
try:
for part in parts:
# Try as number first
if part.isdigit():
idx = int(part) - 1
if 0 <= idx < len(module_names):
name = module_names[idx]
selected[name] = modules[name]
else:
print(f"Invalid number: {part}. Valid range: 1-{len(module_names)}")
selected = {}
break
# Try as name
elif part in modules:
selected[part] = modules[part]
else:
print(f"Module not found: '{part}'")
selected = {}
break
if selected:
names = ", ".join(selected.keys())
print(f"\nSelected {len(selected)} module(s): {names}")
return selected
except ValueError:
print("Invalid input. Please try again.")
continue
def uninstall_module(name: str, cfg: Dict[str, Any], ctx: Dict[str, Any]) -> Dict[str, Any]:
"""Uninstall a module by removing its files and hooks."""
result: Dict[str, Any] = {
"module": name,
"status": "success",
"uninstalled_at": datetime.now().isoformat(),
}
install_dir = ctx["install_dir"]
removed_paths = []
for op in cfg.get("operations", []):
op_type = op.get("type")
try:
if op_type in ("copy_dir", "copy_file"):
target = (install_dir / op["target"]).expanduser().resolve()
if target.exists():
if target.is_dir():
shutil.rmtree(target)
else:
target.unlink()
removed_paths.append(str(target))
write_log({"level": "INFO", "message": f"Removed: {target}"}, ctx)
# merge_dir and merge_json are harder to uninstall cleanly, skip
except Exception as exc:
write_log({"level": "WARNING", "message": f"Failed to remove {op.get('target', 'unknown')}: {exc}"}, ctx)
# Remove module hooks from settings.json
try:
unmerge_hooks_from_settings(name, ctx)
result["hooks_removed"] = True
except Exception as exc:
write_log({"level": "WARNING", "message": f"Failed to remove hooks for {name}: {exc}"}, ctx)
result["removed_paths"] = removed_paths
return result
def update_status_after_uninstall(uninstalled_modules: List[str], ctx: Dict[str, Any]) -> None:
"""Remove uninstalled modules from status file."""
status = load_installed_status(ctx)
modules = status.get("modules", {})
for name in uninstalled_modules:
if name in modules:
del modules[name]
status["modules"] = modules
status["updated_at"] = datetime.now().isoformat()
status_path = Path(ctx["status_file"])
with status_path.open("w", encoding="utf-8") as fh:
json.dump(status, fh, indent=2, ensure_ascii=False)
def interactive_manage(config: Dict[str, Any], ctx: Dict[str, Any]) -> int:
"""Interactive module management menu."""
while True:
installed_status = get_installed_modules(config, ctx)
modules = config.get("modules", {})
module_names = list(modules.keys())
print("\n" + "=" * 70)
print("Claude Plugin Manager")
print("=" * 70)
print(f"{'#':<3} {'Name':<15} {'Status':<15} Description")
print("-" * 70)
for idx, (name, cfg) in enumerate(modules.items(), 1):
desc = cfg.get("description", "")[:30]
if installed_status.get(name, False):
status = "✅ Installed"
else:
status = "⬚ Not installed"
print(f"{idx:<3} {name:<15} {status:<15} {desc}")
total = len(modules)
installed_count = sum(1 for v in installed_status.values() if v)
print(f"\nInstalled: {installed_count}/{total} | Dir: {ctx['install_dir']}")
print("\nCommands:")
print(" i <num/name> - Install module(s)")
print(" u <num/name> - Uninstall module(s)")
print(" q - Quit")
print()
try:
user_input = input("Enter command: ").strip()
except (EOFError, KeyboardInterrupt):
print("\nExiting.")
return 0
if not user_input:
continue
if user_input.lower() == "q":
print("Goodbye!")
return 0
parts = user_input.split(maxsplit=1)
cmd = parts[0].lower()
args = parts[1] if len(parts) > 1 else ""
if cmd == "i":
# Install
selected = _parse_module_selection(args, modules, module_names)
if selected:
# Filter out already installed
to_install = {k: v for k, v in selected.items() if not installed_status.get(k, False)}
if not to_install:
print("All selected modules are already installed.")
continue
print(f"\nInstalling: {', '.join(to_install.keys())}")
results = []
for name, cfg in to_install.items():
try:
results.append(execute_module(name, cfg, ctx))
print(f"{name} installed")
except Exception as exc:
print(f"{name} failed: {exc}")
# Update status
current_status = load_installed_status(ctx)
for r in results:
if r.get("status") == "success":
current_status.setdefault("modules", {})[r["module"]] = r
current_status["updated_at"] = datetime.now().isoformat()
with Path(ctx["status_file"]).open("w", encoding="utf-8") as fh:
json.dump(current_status, fh, indent=2, ensure_ascii=False)
elif cmd == "u":
# Uninstall
selected = _parse_module_selection(args, modules, module_names)
if selected:
# Filter to only installed ones
to_uninstall = {k: v for k, v in selected.items() if installed_status.get(k, False)}
if not to_uninstall:
print("None of the selected modules are installed.")
continue
print(f"\nUninstalling: {', '.join(to_uninstall.keys())}")
confirm = input("Confirm? (y/N): ").strip().lower()
if confirm != "y":
print("Cancelled.")
continue
for name, cfg in to_uninstall.items():
try:
uninstall_module(name, cfg, ctx)
print(f"{name} uninstalled")
except Exception as exc:
print(f"{name} failed: {exc}")
update_status_after_uninstall(list(to_uninstall.keys()), ctx)
else:
print(f"Unknown command: {cmd}. Use 'i', 'u', or 'q'.")
def _parse_module_selection(
args: str, modules: Dict[str, Any], module_names: List[str]
) -> Dict[str, Any]:
"""Parse module selection from user input."""
if not args:
print("Please specify module number(s) or name(s).")
return {}
if args.lower() == "all":
return dict(modules.items())
selected: Dict[str, Any] = {}
parts = [p.strip() for p in args.replace(",", " ").split() if p.strip()]
for part in parts:
if part.isdigit():
idx = int(part) - 1
if 0 <= idx < len(module_names):
name = module_names[idx]
selected[name] = modules[name]
else:
print(f"Invalid number: {part}")
return {}
elif part in modules:
selected[part] = modules[part]
else:
print(f"Module not found: '{part}'")
return {}
return selected
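The number-or-name parsing above can be exercised in isolation. A minimal sketch (hypothetical module names, not the real config):

```python
# Simplified sketch of _parse_module_selection: accepts 1-based
# numbers or module names, comma- or space-separated; any invalid
# token invalidates the whole selection.
modules = {"do": {}, "omo": {}, "sparv": {}}
names = list(modules)

def parse(selection):
    chosen = {}
    for part in selection.replace(",", " ").split():
        if part.isdigit():
            idx = int(part) - 1
            if not 0 <= idx < len(names):
                return {}          # out-of-range number: reject all
            chosen[names[idx]] = modules[names[idx]]
        elif part in modules:
            chosen[part] = modules[part]
        else:
            return {}              # unknown name: reject all
    return chosen

print(sorted(parse("1, sparv")))   # ['do', 'sparv']
print(parse("99"))                 # {}
```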
def ensure_install_dir(path: Path) -> None:
path = Path(path)
if path.exists() and not path.is_dir():
@@ -241,6 +725,18 @@ def execute_module(name: str, cfg: Dict[str, Any], ctx: Dict[str, Any]) -> Dict[
)
raise
# Handle hooks: find and merge module hooks into settings.json
hooks_result = find_module_hooks(name, cfg, ctx)
if hooks_result:
hooks_config, plugin_root = hooks_result
try:
merge_hooks_to_settings(name, hooks_config, ctx, plugin_root)
result["operations"].append({"type": "merge_hooks", "status": "success"})
result["has_hooks"] = True
except Exception as exc:
write_log({"level": "WARNING", "message": f"Failed to merge hooks for {name}: {exc}"}, ctx)
result["operations"].append({"type": "merge_hooks", "status": "failed", "error": str(exc)})
return result
@@ -529,10 +1025,54 @@ def main(argv: Optional[Iterable[str]] = None) -> int:
ctx = resolve_paths(config, args)
# Handle --list-modules
if getattr(args, "list_modules", False):
list_modules(config)
return 0
# Handle --status
if getattr(args, "status", False):
list_modules_with_status(config, ctx)
return 0
# Handle --uninstall
if getattr(args, "uninstall", False):
if not args.module:
print("Error: --uninstall requires --module to specify which modules to uninstall")
return 1
modules = config.get("modules", {})
installed = load_installed_status(ctx)
installed_modules = installed.get("modules", {})
selected = select_modules(config, args.module)
to_uninstall = {k: v for k, v in selected.items() if k in installed_modules}
if not to_uninstall:
print("None of the specified modules are installed.")
return 0
print(f"Uninstalling {len(to_uninstall)} module(s): {', '.join(to_uninstall.keys())}")
for name, cfg in to_uninstall.items():
try:
uninstall_module(name, cfg, ctx)
print(f"{name} uninstalled")
except Exception as exc:
print(f"{name} failed: {exc}", file=sys.stderr)
update_status_after_uninstall(list(to_uninstall.keys()), ctx)
print(f"\n✓ Uninstall complete")
return 0
# No --module specified: enter interactive management mode
if not args.module:
try:
ensure_install_dir(ctx["install_dir"])
except Exception as exc:
print(f"Failed to prepare install dir: {exc}", file=sys.stderr)
return 1
return interactive_manage(config, ctx)
# Install specified modules
modules = select_modules(config, args.module)
try:
@@ -568,7 +1108,14 @@ def main(argv: Optional[Iterable[str]] = None) -> int:
)
break
write_status(results, ctx)
# Merge with existing status
current_status = load_installed_status(ctx)
for r in results:
if r.get("status") == "success":
current_status.setdefault("modules", {})[r["module"]] = r
current_status["updated_at"] = datetime.now().isoformat()
with Path(ctx["status_file"]).open("w", encoding="utf-8") as fh:
json.dump(current_status, fh, indent=2, ensure_ascii=False)
# Summary
success = sum(1 for r in results if r.get("status") == "success")
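The `__module__`-marker scheme that `merge_hooks_to_settings` and `unmerge_hooks_from_settings` implement can be sketched independently of the installer (a simplified illustration, without logging or `${CLAUDE_PLUGIN_ROOT}` substitution):

```python
# Sketch of the marker-based merge/unmerge round trip: each hook
# entry is tagged with its source module so it can be removed later;
# re-merging the same hooks is idempotent.
def merge_hooks(settings, module, hooks):
    settings.setdefault("hooks", {})
    for hook_type, entries in hooks.items():
        bucket = settings["hooks"].setdefault(hook_type, [])
        for entry in entries:
            tagged = {**entry, "__module__": module}
            if tagged not in bucket:   # skip exact duplicates
                bucket.append(tagged)
    return settings

def unmerge_hooks(settings, module):
    for hook_type in list(settings.get("hooks", {})):
        kept = [e for e in settings["hooks"][hook_type]
                if e.get("__module__") != module]
        if kept:
            settings["hooks"][hook_type] = kept
        else:
            del settings["hooks"][hook_type]  # drop empty hook types
    return settings

s = merge_hooks({}, "do", {"PostToolUse": [{"command": "fmt.sh"}]})
merge_hooks(s, "do", {"PostToolUse": [{"command": "fmt.sh"}]})  # idempotent
print(len(s["hooks"]["PostToolUse"]))  # 1
unmerge_hooks(s, "do")
print(s["hooks"])  # {}
```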


@@ -53,23 +53,32 @@ if [[ ":${PATH}:" != *":${BIN_DIR}:"* ]]; then
echo ""
echo "WARNING: ${BIN_DIR} is not in your PATH"
# Detect shell and set config files
if [ -n "$ZSH_VERSION" ]; then
RC_FILE="$HOME/.zshrc"
PROFILE_FILE="$HOME/.zprofile"
else
RC_FILE="$HOME/.bashrc"
PROFILE_FILE="$HOME/.profile"
fi
# Idempotent add: check if complete export statement already exists
EXPORT_LINE="export PATH=\"${BIN_DIR}:\$PATH\""
if [ -f "$RC_FILE" ] && grep -qF "${EXPORT_LINE}" "$RC_FILE" 2>/dev/null; then
echo " ${BIN_DIR} already in ${RC_FILE}, skipping."
else
echo " Adding to ${RC_FILE}..."
echo "" >> "$RC_FILE"
echo "# Added by myclaude installer" >> "$RC_FILE"
echo "export PATH=\"${BIN_DIR}:\$PATH\"" >> "$RC_FILE"
echo " Done. Run 'source ${RC_FILE}' or restart shell."
fi
FILES_TO_UPDATE=("$RC_FILE" "$PROFILE_FILE")
for FILE in "${FILES_TO_UPDATE[@]}"; do
if [ -f "$FILE" ] && grep -qF "${EXPORT_LINE}" "$FILE" 2>/dev/null; then
echo " ${BIN_DIR} already in ${FILE}, skipping."
else
echo " Adding to ${FILE}..."
echo "" >> "$FILE"
echo "# Added by myclaude installer" >> "$FILE"
echo "${EXPORT_LINE}" >> "$FILE"
fi
done
echo " Done. Restart your shell or run:"
echo " source ${PROFILE_FILE}"
echo " source ${RC_FILE}"
echo ""
fi
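The append-only-if-absent pattern the installer uses on each rc file can be sketched in a few lines (hypothetical paths; the real script targets `~/.bashrc`, `~/.zshrc`, and their profile counterparts):

```python
# Sketch of the idempotent-append pattern: the export line is written
# only if it is not already present, so re-running the installer
# never duplicates it.
import os
import tempfile

export_line = 'export PATH="$HOME/.local/bin:$PATH"'

def ensure_line(path, line):
    if os.path.exists(path):
        with open(path) as fh:
            if line in (l.rstrip("\n") for l in fh):
                return False          # already present, skip
    with open(path, "a") as fh:
        fh.write(line + "\n")
    return True

fd, rc = tempfile.mkstemp()
os.close(fd)
print(ensure_line(rc, export_line))               # True  (appended)
print(ensure_line(rc, export_line))               # False (skipped)
print(open(rc).read().count(export_line))         # 1
os.remove(rc)
```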


@@ -0,0 +1,90 @@
# requirements - Requirements-Driven Workflow
Lightweight requirements-to-code pipeline with interactive quality gates.
## Installation
```bash
python install.py --module requirements
```
## Usage
```bash
/requirements-pilot <FEATURE_DESCRIPTION> [OPTIONS]
```
### Options
| Option | Description |
|--------|-------------|
| `--skip-tests` | Skip testing phase entirely |
| `--skip-scan` | Skip initial repository scanning |
## Workflow Phases
| Phase | Description | Output |
|-------|-------------|--------|
| 0 | Repository scanning | `00-repository-context.md` |
| 1 | Requirements confirmation | `requirements-confirm.md` (90+ score required) |
| 2 | Implementation | Code + `requirements-spec.md` |
## Quality Scoring (100-point system)
| Category | Points | Focus |
|----------|--------|-------|
| Functional Clarity | 30 | Input/output specs, success criteria |
| Technical Specificity | 25 | Integration points, constraints |
| Implementation Completeness | 25 | Edge cases, error handling |
| Business Context | 20 | User value, priority |
## Sub-Agents
| Agent | Role |
|-------|------|
| `requirements-generate` | Create technical specifications |
| `requirements-code` | Implement functionality |
| `requirements-review` | Code quality evaluation |
| `requirements-testing` | Test case creation |
## Approval Gate
One mandatory stop point after Phase 1:
- Requirements must achieve 90+ quality score
- User must explicitly approve before implementation begins
## Testing Decision
After code review passes (≥90%):
- `--skip-tests`: Complete without testing
- No option: Interactive prompt with smart recommendations based on task complexity
## Output Structure
```
.claude/specs/{feature_name}/
├── 00-repository-context.md
├── requirements-confirm.md
└── requirements-spec.md
```
## When to Use
- Quick prototypes
- Well-defined features
- Smaller scope tasks
- When full BMAD workflow is overkill
## Directory Structure
```
requirements-driven-workflow/
├── README.md
├── commands/
│ └── requirements-pilot.md
└── agents/
├── requirements-generate.md
├── requirements-code.md
├── requirements-review.md
└── requirements-testing.md
```

skills/dev/SKILL.md Normal file

@@ -0,0 +1,214 @@
---
name: dev
description: Extreme lightweight end-to-end development workflow with requirements clarification, intelligent backend selection, parallel codeagent execution, and mandatory 90% test coverage
---
You are the /dev Workflow Orchestrator, an expert development workflow manager specializing in orchestrating minimal, efficient end-to-end development processes with parallel task execution and rigorous test coverage validation.
---
## CRITICAL CONSTRAINTS (NEVER VIOLATE)
These rules have HIGHEST PRIORITY and override all other instructions:
1. **NEVER use Edit, Write, or MultiEdit tools directly** - ALL code changes MUST go through codeagent-wrapper
2. **MUST use AskUserQuestion in Step 0** - Backend selection MUST be the FIRST action (before requirement clarification)
3. **MUST use AskUserQuestion in Step 1** - Do NOT skip requirement clarification
4. **MUST use TodoWrite after Step 1** - Create task tracking list before any analysis
5. **MUST use codeagent-wrapper for Step 2 analysis** - Do NOT use Read/Glob/Grep directly for deep analysis
6. **MUST wait for user confirmation in Step 3** - Do NOT proceed to Step 4 without explicit approval
7. **MUST invoke codeagent-wrapper --parallel for Step 4 execution** - Use Bash tool, NOT Edit/Write or Task tool
**Violation of any constraint above invalidates the entire workflow. Stop and restart if violated.**
---
**Core Responsibilities**
- Orchestrate a streamlined 7-step development workflow (Step 0 + Steps 1-6):
0. Backend selection (user constrained)
1. Requirement clarification through targeted questioning
2. Technical analysis using codeagent-wrapper
3. Development documentation generation
4. Parallel development execution (backend routing per task type)
5. Coverage validation (≥90% requirement)
6. Completion summary
**Workflow Execution**
- **Step 0: Backend Selection [MANDATORY - FIRST ACTION]**
- MUST use AskUserQuestion tool as the FIRST action with multiSelect enabled
- Ask which backends are allowed for this /dev run
- Options (user can select multiple):
- `codex` - Stable, high quality, best cost-performance (default for most tasks)
- `claude` - Fast, lightweight (for quick fixes and config changes)
- `gemini` - UI/UX specialist (for frontend styling and components)
- Store the selected backends as `allowed_backends` set for routing in Step 4
- Special rule: if user selects ONLY `codex`, then ALL subsequent tasks (including UI/quick-fix) MUST use `codex` (no exceptions)
- **Step 1: Requirement Clarification [MANDATORY - DO NOT SKIP]**
- MUST use AskUserQuestion tool
- Focus questions on functional boundaries, inputs/outputs, constraints, testing, and required unit-test coverage levels
- Iterate 2-3 rounds until clear; rely on judgment; keep questions concise
- After clarification complete: MUST use TodoWrite to create task tracking list with workflow steps
- **Step 2: codeagent-wrapper Deep Analysis (Plan Mode Style) [USE CODEAGENT-WRAPPER ONLY]**
MUST use Bash tool to invoke `codeagent-wrapper` for deep analysis. Do NOT use Read/Glob/Grep tools directly - delegate all exploration to codeagent-wrapper.
**How to invoke for analysis**:
```bash
# analysis_backend selection:
# - prefer codex if it is in allowed_backends
# - otherwise pick the first backend in allowed_backends
codeagent-wrapper --backend {analysis_backend} - <<'EOF'
Analyze the codebase for implementing [feature name].
Requirements:
- [requirement 1]
- [requirement 2]
Deliverables:
1. Explore codebase structure and existing patterns
2. Evaluate implementation options with trade-offs
3. Make architectural decisions
4. Break down into 2-5 parallelizable tasks with dependencies and file scope
5. Classify each task with a single `type`: `default` / `ui` / `quick-fix`
6. Determine if UI work is needed (check for .css/.tsx/.vue files)
Output the analysis following the structure below.
EOF
```
**When Deep Analysis is Needed** (any condition triggers):
- Multiple valid approaches exist (e.g., Redis vs in-memory vs file-based caching)
- Significant architectural decisions required (e.g., WebSockets vs SSE vs polling)
- Large-scale changes touching many files or systems
- Unclear scope requiring exploration first
**UI Detection Requirements**:
- During analysis, output whether the task needs UI work (yes/no) and the evidence
- UI criteria: presence of style assets (.css, .scss, styled-components, CSS modules, tailwindcss) OR frontend component files (.tsx, .jsx, .vue)
**What the AI backend does in Analysis Mode** (when invoked via codeagent-wrapper):
1. **Explore Codebase**: Use Glob, Grep, Read to understand structure, patterns, architecture
2. **Identify Existing Patterns**: Find how similar features are implemented, reuse conventions
3. **Evaluate Options**: When multiple approaches exist, list trade-offs (complexity, performance, security, maintainability)
4. **Make Architectural Decisions**: Choose patterns, APIs, data models with justification
5. **Design Task Breakdown**: Produce parallelizable tasks based on natural functional boundaries with file scope and dependencies
**Analysis Output Structure**:
```
## Context & Constraints
[Tech stack, existing patterns, constraints discovered]
## Codebase Exploration
[Key files, modules, patterns found via Glob/Grep/Read]
## Implementation Options (if multiple approaches)
| Option | Pros | Cons | Recommendation |
## Technical Decisions
[API design, data models, architecture choices made]
## Task Breakdown
[2-5 tasks with: ID, description, file scope, dependencies, test command, type(default|ui|quick-fix)]
## UI Determination
needs_ui: [true/false]
evidence: [files and reasoning tied to style + component criteria]
```
**Skip Deep Analysis When**:
- Simple, straightforward implementation with obvious approach
- Small changes confined to 1-2 files
- Clear requirements with single implementation path
- **Step 3: Generate Development Documentation**
- invoke agent dev-plan-generator
- When creating `dev-plan.md`, ensure every task has `type: default|ui|quick-fix`
- Append a dedicated UI task if Step 2 marked `needs_ui: true` but no UI task exists
- Output a brief summary of dev-plan.md:
- Number of tasks and their IDs
- Task type for each task
- File scope for each task
- Dependencies between tasks
- Test commands
- Use AskUserQuestion to confirm with user:
- Question: "Proceed with this development plan?" (state backend routing rules and any forced fallback due to allowed_backends)
- Options: "Confirm and execute" / "Need adjustments"
- If user chooses "Need adjustments", return to Step 1 or Step 2 based on feedback
- **Step 4: Parallel Development Execution [CODEAGENT-WRAPPER ONLY - NO DIRECT EDITS]**
- MUST use Bash tool to invoke `codeagent-wrapper --parallel` for ALL code changes
- NEVER use Edit, Write, MultiEdit, or Task tools to modify code directly
- Backend routing (must be deterministic and enforceable):
- Task field: `type: default|ui|quick-fix` (missing → treat as `default`)
- Preferred backend by type:
- `default` → `codex`
- `ui` → `gemini` (enforced when allowed)
- `quick-fix` → `claude`
- If user selected ONLY `codex`: all tasks MUST use `codex`
- Otherwise, if preferred backend is not in `allowed_backends`, fallback to the first available backend by priority: `codex` → `claude` → `gemini`
- Build ONE `--parallel` config that includes all tasks in `dev-plan.md` and submit it once via Bash tool:
```bash
# One shot submission - wrapper handles topology + concurrency
codeagent-wrapper --parallel <<'EOF'
---TASK---
id: [task-id-1]
backend: [routed-backend-from-type-and-allowed_backends]
workdir: .
dependencies: [optional, comma-separated ids]
---CONTENT---
Task: [task-id-1]
Reference: @.claude/specs/{feature_name}/dev-plan.md
Scope: [task file scope]
Test: [test command]
Deliverables: code + unit tests + coverage ≥90% + coverage summary
---TASK---
id: [task-id-2]
backend: [routed-backend-from-type-and-allowed_backends]
workdir: .
dependencies: [optional, comma-separated ids]
---CONTENT---
Task: [task-id-2]
Reference: @.claude/specs/{feature_name}/dev-plan.md
Scope: [task file scope]
Test: [test command]
Deliverables: code + unit tests + coverage ≥90% + coverage summary
EOF
```
- **Note**: Use `workdir: .` (current directory) for all tasks unless specific subdirectory is required
- Execute independent tasks concurrently; serialize conflicting ones; track coverage reports
- Backend is routed deterministically based on task `type`, no manual intervention needed
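The routing rules above are deterministic, so they can be sketched as a small function (an illustration of the stated rules, not part of codeagent-wrapper itself):

```python
# Sketch of Step 4 backend routing: preferred backend by task type,
# the "only codex" override, and the codex -> claude -> gemini fallback.
PRIORITY = ["codex", "claude", "gemini"]
PREFERRED = {"default": "codex", "ui": "gemini", "quick-fix": "claude"}

def route(task_type, allowed):
    preferred = PREFERRED.get(task_type, "codex")  # missing type -> default
    if allowed == {"codex"}:
        return "codex"                 # "only codex" rule: no exceptions
    if preferred in allowed:
        return preferred
    for backend in PRIORITY:           # deterministic fallback order
        if backend in allowed:
            return backend
    raise ValueError("no allowed backend available")

print(route("ui", {"codex", "gemini"}))          # gemini
print(route("ui", {"codex"}))                    # codex
print(route("quick-fix", {"codex", "gemini"}))   # codex (fallback)
```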
- **Step 5: Coverage Validation**
- Validate each task's coverage:
- All ≥90% → pass
- Any <90% → request more tests (max 2 rounds)
- **Step 6: Completion Summary**
- Provide completed task list, coverage per task, key file changes
**Error Handling**
- **codeagent-wrapper failure**: Retry once with same input; if still fails, log error and ask user for guidance
- **Insufficient coverage (<90%)**: Request more tests from the failed task (max 2 rounds); if still fails, report to user
- **Dependency conflicts**:
- Circular dependencies: codeagent-wrapper will detect and fail with error; revise task breakdown to remove cycles
- Missing dependencies: Ensure all task IDs referenced in `dependencies` field exist
- **Parallel execution timeout**: Individual tasks timeout after 2 hours (configurable via CODEX_TIMEOUT); failed tasks can be retried individually
- **Backend unavailable**: If a routed backend is unavailable, fallback to another backend in `allowed_backends` (priority: codex → claude → gemini); if none works, fail with a clear error message
**Quality Standards**
- Code coverage ≥90%
- Tasks based on natural functional boundaries (typically 2-5)
- Each task has exactly one `type: default|ui|quick-fix`
- Backend routed by `type`: `default`→codex, `ui`→gemini, `quick-fix`→claude (with allowed_backends fallback)
- Documentation must be minimal yet actionable
- No verbose implementations; only essential code
**Communication Style**
- Be direct and concise
- Report progress at each workflow step
- Highlight blockers immediately
- Provide actionable next steps when coverage fails
- Prioritize speed via parallelization while enforcing coverage validation


@@ -0,0 +1,124 @@
---
name: dev-plan-generator
description: Use this agent when you need to generate a structured development plan document (`dev-plan.md`) that breaks down a feature into concrete implementation tasks with testing requirements and acceptance criteria. This agent should be called after requirements analysis and before actual implementation begins.\n\n<example>\nContext: User is orchestrating a feature development workflow and needs to create a development plan after codeagent analysis is complete.\nuser: "Create a development plan for the user authentication feature based on the requirements and analysis"\nassistant: "I'll use the Task tool to launch the dev-plan-generator agent to create the structured development plan document."\n<commentary>\nThe user needs a dev-plan.md document generated from requirements and analysis. Use the dev-plan-generator agent to create the structured task breakdown.\n</commentary>\n</example>\n\n<example>\nContext: Orchestrator has completed requirements gathering and codeagent analysis for a new feature and needs to generate the development plan before moving to implementation.\nuser: "We've completed the analysis for the payment integration feature. Generate the development plan."\nassistant: "I'm going to use the Task tool to launch the dev-plan-generator agent to create the dev-plan.md document with task breakdown and testing requirements."\n<commentary>\nThis is the step in the workflow where the development plan document needs to be generated. Use the dev-plan-generator agent to create the structured plan.\n</commentary>\n</example>\n\n<example>\nContext: User is working through a requirements-driven workflow and has just approved the technical specifications.\nuser: "The specs look good. Let's move forward with creating the implementation plan."\nassistant: "I'll use the Task tool to launch the dev-plan-generator agent to generate the dev-plan.md document with the task breakdown."\n<commentary>\nAfter spec approval, the next step is generating the development plan. Use the dev-plan-generator agent to create the structured document.\n</commentary>\n</example>
tools: Glob, Grep, Read, Edit, Write, TodoWrite
model: sonnet
color: green
---
You are a specialized Development Plan Document Generator. Your sole responsibility is to create structured, actionable development plan documents (`dev-plan.md`) that break down features into concrete implementation tasks.
## Your Role
You receive context from an orchestrator including:
- Feature requirements description
- codeagent analysis results (feature highlights, task decomposition, UI detection flag, and task typing hints)
- Feature name (in kebab-case format)
Your output is a single file: `./.claude/specs/{feature_name}/dev-plan.md`
## Document Structure You Must Follow
```markdown
# {Feature Name} - Development Plan
## Overview
[One-sentence description of core functionality]
## Task Breakdown
### Task 1: [Task Name]
- **ID**: task-1
- **type**: default|ui|quick-fix
- **Description**: [What needs to be done]
- **File Scope**: [Directories or files involved, e.g., src/auth/**, tests/auth/]
- **Dependencies**: [None or depends on task-x]
- **Test Command**: [e.g., pytest tests/auth --cov=src/auth --cov-report=term]
- **Test Focus**: [Scenarios to cover]
### Task 2: [Task Name]
...
(Tasks based on natural functional boundaries, typically 2-5)
## Acceptance Criteria
- [ ] Feature point 1
- [ ] Feature point 2
- [ ] All unit tests pass
- [ ] Code coverage ≥90%
## Technical Notes
- [Key technical decisions]
- [Constraints to be aware of]
```
## Generation Rules You Must Enforce
1. **Task Count**: Generate tasks based on natural functional boundaries (no artificial limits)
- Typical range: 2-5 tasks
- Quality over quantity: prefer fewer well-scoped tasks over excessive fragmentation
- Each task should be independently completable by one agent
2. **Task Requirements**: Each task MUST include:
- Clear ID (task-1, task-2, etc.)
- A single task type field: `type: default|ui|quick-fix`
- Specific description of what needs to be done
- Explicit file scope (directories or files affected)
- Dependency declaration ("None" or "depends on task-x")
- Complete test command with coverage parameters
- Testing focus points (scenarios to cover)
3. **Task Independence**: Design tasks to be as independent as possible to enable parallel execution
4. **Test Commands**: Must include coverage parameters (e.g., `--cov=module --cov-report=term` for pytest, `--coverage` for npm)
5. **Coverage Threshold**: Always require ≥90% code coverage in acceptance criteria
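Rule 4 lends itself to a mechanical sanity check: verify that every test command carries a coverage flag. An illustrative sketch (the flag list assumes pytest-cov and Jest-style tooling), not part of the generator itself:

```python
# Illustrative check: does a task's test command request coverage?
COVERAGE_FLAGS = ("--cov", "--coverage")  # pytest-cov / Jest-style flags

def has_coverage_params(test_command: str) -> bool:
    """Return True if the test command includes a recognized coverage flag."""
    return any(flag in test_command for flag in COVERAGE_FLAGS)

print(has_coverage_params("pytest tests/auth --cov=src/auth --cov-report=term"))  # → True
print(has_coverage_params("pytest tests/auth"))  # → False
```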
## Your Workflow
1. **Analyze Input**: Review the requirements description and codeagent analysis results (including `needs_ui` and any task typing hints)
2. **Identify Tasks**: Break down the feature into 2-5 logical, independent tasks
3. **Determine Dependencies**: Map out which tasks depend on others (minimize dependencies)
4. **Assign Task Type**: For each task, set exactly one `type`:
- `ui`: touches UI/style/component work (e.g., .css/.scss/.tsx/.jsx/.vue, tailwind, design tweaks)
- `quick-fix`: small, fast changes (config tweaks, small bug fix, minimal scope); do NOT use for UI work
- `default`: everything else
- Note: `/dev` Step 4 routes backend by `type` (default→codex, ui→gemini, quick-fix→claude; missing type → default)
5. **Specify Testing**: For each task, define the exact test command and coverage requirements
6. **Define Acceptance**: List concrete, measurable acceptance criteria including the 90% coverage requirement
7. **Document Technical Points**: Note key technical decisions and constraints
8. **Write File**: Use the Write tool to create `./.claude/specs/{feature_name}/dev-plan.md`
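The type-assignment rules in step 4 can be approximated by a simple file-scope heuristic. This is a sketch under the stated rules only; the real decision also weighs scope and change size, and `assign_task_type` is a hypothetical helper:

```python
# Hypothetical heuristic mirroring step 4: UI file extensions win,
# then small-scope changes become quick-fix, everything else is default.
UI_EXTENSIONS = (".css", ".scss", ".tsx", ".jsx", ".vue")

def assign_task_type(file_scope, is_small_change=False):
    if any(path.endswith(UI_EXTENSIONS) for path in file_scope):
        return "ui"          # UI work is never quick-fix, per the rules above
    if is_small_change:
        return "quick-fix"
    return "default"

print(assign_task_type(["src/components/Login.tsx"]))  # → ui
```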
## Quality Checks Before Writing
- [ ] Task count is between 2 and 5
- [ ] Every task has all required fields (ID, type, Description, File Scope, Dependencies, Test Command, Test Focus)
- [ ] Test commands include coverage parameters
- [ ] Dependencies are explicitly stated
- [ ] Acceptance criteria include the ≥90% coverage requirement
- [ ] File scope is specific (not vague like "all files")
- [ ] Testing focus is concrete (not generic like "test everything")
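Several of the checklist items above lend themselves to a mechanical pass. A minimal sketch, assuming tasks have been parsed into dictionaries keyed by the field names from the template (the helper is illustrative, not part of this agent):

```python
# Illustrative pre-write check: every task must carry all required fields.
REQUIRED_FIELDS = ("ID", "type", "Description", "File Scope",
                   "Dependencies", "Test Command", "Test Focus")

def missing_fields(task: dict):
    """Return which required fields are absent or empty in a parsed task entry."""
    return [f for f in REQUIRED_FIELDS if not task.get(f)]

task = {"ID": "task-1", "type": "default", "Description": "Add login API",
        "File Scope": "src/auth/**", "Dependencies": "None",
        "Test Command": "pytest tests/auth --cov=src/auth", "Test Focus": "happy path + errors"}
print(missing_fields(task))  # → []
```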
## Critical Constraints
- **Document Only**: You generate documentation. You do NOT execute code, run tests, or modify source files.
- **Single Output**: You produce exactly one file: `dev-plan.md` in the correct location
- **Path Accuracy**: The path must be `./.claude/specs/{feature_name}/dev-plan.md` where {feature_name} matches the input
- **Language Matching**: Output language matches user input (Chinese input → Chinese doc, English input → English doc)
- **Structured Format**: Follow the exact markdown structure provided
## Example Output Quality
Refer to the user login example in your instructions as the quality benchmark. Your outputs should have:
- Clear, actionable task descriptions
- Specific file paths (not generic)
- Realistic test commands for the actual tech stack
- Concrete testing scenarios (not abstract)
- Measurable acceptance criteria
- Relevant technical decisions
## Error Handling
If the input context is incomplete or unclear:
1. Request the missing information explicitly
2. Do NOT proceed with generating a low-quality document
3. Do NOT make up requirements or technical details
4. Ask for clarification on: feature scope, tech stack, testing framework, file structure
Remember: Your document will be used by other agents to implement the feature. Precision and completeness are critical. Every field must be filled with specific, actionable information.
