Compare commits


3 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| cexll | 1dd7b23942 | feat harness skill | 2026-02-14 23:58:38 +08:00 |
| cexll | 664d82795a | fixed do worktree use error | 2026-02-14 23:58:20 +08:00 |
| cexll | 7cc7f50f46 | docs: add bilingual README (EN + CN) reflecting current codebase | 2026-02-10 23:13:06 +08:00 |

Full message of 7cc7f50f46:

> Rewrite README.md in English and add README_CN.md in Chinese with
> language switcher links. Updated to cover all recent features:
> worktree isolation, skill auto-detection, dynamic agents, per-backend
> config, allowed/disallowed tools, stderr noise filtering, cross-platform
> support, and modular internal/ project structure.
>
> Generated with SWE-Agent.ai
>
> Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
4 changed files with 845 additions and 100 deletions


@@ -1,97 +1,158 @@
# codeagent-wrapper
[English](README.md) | [中文](README_CN.md)
A multi-backend AI code agent CLI wrapper written in Go. Provides a unified CLI entry point wrapping different AI tool backends (Codex / Claude / Gemini / OpenCode) with consistent flags, configuration, skill injection, and session resumption.
Entry point: `cmd/codeagent-wrapper/main.go` (binary: `codeagent-wrapper`).
## Features
- **Multi-backend support**: `codex` / `claude` / `gemini` / `opencode`
- **Unified CLI**: `codeagent-wrapper [flags] <task>` / `codeagent-wrapper resume <session_id> <task> [workdir]`
- **Auto stdin**: Automatically pipes the task via stdin when it contains newlines or special characters, or is too long for the command line; an explicit `-` is also supported
- **Config merging**: Config files + `CODEAGENT_*` environment variables (viper)
- **Agent presets**: Read backend/model/prompt/reasoning/yolo/allowed_tools from `~/.codeagent/models.json`
- **Dynamic agents**: Place a `{name}.md` prompt file in `~/.codeagent/agents/` to use as an agent
- **Skill auto-injection**: `--skills` for manual specification, or auto-detect from project tech stack (Go/Rust/Python/Node.js/Vue)
- **Git worktree isolation**: `--worktree` executes tasks in an isolated git worktree with auto-generated task_id and branch
- **Parallel execution**: `--parallel` reads multi-task config from stdin with dependency-aware topological concurrent execution and structured summary reports
- **Backend config**: `backends` section in `models.json` supports per-backend `base_url` / `api_key` injection
- **Claude tool control**: `allowed_tools` / `disallowed_tools` to restrict available tools for Claude backend
- **Stderr noise filtering**: Automatically filters noisy stderr output from Gemini and Codex backends
- **Log cleanup**: `codeagent-wrapper cleanup` cleans old logs (logs written to system temp directory)
- **Cross-platform**: macOS / Linux / Windows
## Installation
### Recommended (interactive installer)
```bash
npx github:cexll/myclaude
```
Select the `codeagent-wrapper` module to install.
### Manual build
Requires: Go 1.21+.
```bash
# Build from source
make build
# Or install to $GOPATH/bin
make install
```
Verify installation:
```bash
codeagent-wrapper --version
```
## Usage
Basic usage (default backend: `codex`):
```bash
codeagent-wrapper "analyze the entry logic of internal/app/cli.go"
```
Specify backend:
```bash
codeagent-wrapper --backend claude "explain the parallel config format in internal/executor/parallel_config.go"
```
Specify working directory (2nd positional argument):
```bash
codeagent-wrapper "search for potential data races in this repo" .
```
Explicit stdin (using `-`):
```bash
cat task.txt | codeagent-wrapper -
```
HEREDOC (recommended for multi-line tasks):
```bash
codeagent-wrapper --backend claude - <<'EOF'
Implement user authentication:
- JWT tokens
- bcrypt password hashing
- Session management
EOF
```
Resume session:
```bash
codeagent-wrapper resume <session_id> "continue the previous task"
```
Execute in isolated git worktree:
```bash
codeagent-wrapper --worktree "refactor the auth module"
```
Manual skill injection:
```bash
codeagent-wrapper --skills golang-base-practices "optimize database queries"
```
Parallel mode (task config from stdin):
```bash
codeagent-wrapper --parallel <<'EOF'
---TASK---
id: t1
workdir: .
backend: codex
---CONTENT---
List the main modules and their responsibilities.
---TASK---
id: t2
dependencies: t1
backend: claude
---CONTENT---
Based on t1's findings, identify refactoring risks and suggestions.
EOF
```
## CLI Flags
| Flag | Description |
|------|-------------|
| `--backend <name>` | Backend selection (codex/claude/gemini/opencode) |
| `--model <name>` | Model override |
| `--agent <name>` | Agent preset name (from models.json or ~/.codeagent/agents/) |
| `--prompt-file <path>` | Read prompt from file |
| `--skills <names>` | Comma-separated skill names for spec injection |
| `--reasoning-effort <level>` | Reasoning effort (backend-specific) |
| `--skip-permissions` | Skip permission prompts |
| `--dangerously-skip-permissions` | Alias for `--skip-permissions` |
| `--worktree` | Execute in a new git worktree (auto-generates task_id) |
| `--parallel` | Parallel task mode (config from stdin) |
| `--full-output` | Full output in parallel mode (default: summary only) |
| `--config <path>` | Config file path (default: `$HOME/.codeagent/config.*`) |
| `--version`, `-v` | Print version |
| `--cleanup` | Clean up old logs |
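Flags can be combined. A minimal sketch of one invocation (the model name and task text are placeholders; `golang-base-practices` is one of the skills listed under Skill Auto-Detection below):
```bash
# Sketch: pick a backend and model, inject a skill, and point at a specific config file
codeagent-wrapper \
  --backend claude \
  --model "<model-name>" \
  --skills golang-base-practices \
  --config "$HOME/.codeagent/config.yaml" \
  "add table-driven tests for internal/parser"
```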
## Configuration
### Config File
Default search path (when `--config` is empty):
- `$HOME/.codeagent/config.(yaml|yml|json|toml|...)`
Example (YAML):
```yaml
backend: codex
model: gpt-4.1
skip-permissions: false
```
Can also be specified explicitly via `--config /path/to/config.yaml`.
### Environment Variables (`CODEAGENT_*`)
Read via viper with automatic `-` to `_` mapping:
| Variable | Description |
|----------|-------------|
| `CODEAGENT_BACKEND` | Backend name (codex/claude/gemini/opencode) |
| `CODEAGENT_MODEL` | Model name |
| `CODEAGENT_AGENT` | Agent preset name |
| `CODEAGENT_PROMPT_FILE` | Prompt file path |
| `CODEAGENT_REASONING_EFFORT` | Reasoning effort |
| `CODEAGENT_SKIP_PERMISSIONS` | Skip permission prompts (default true; set `false` to disable) |
| `CODEAGENT_FULL_OUTPUT` | Full output in parallel mode |
| `CODEAGENT_MAX_PARALLEL_WORKERS` | Parallel worker count (0=unlimited, max 100) |
| `CODEAGENT_TMPDIR` | Custom temp directory (for macOS permission issues) |
| `CODEX_TIMEOUT` | Timeout in ms (default 7200000 = 2 hours) |
| `CODEX_BYPASS_SANDBOX` | Codex sandbox bypass (default true; set `false` to disable) |
| `DO_WORKTREE_DIR` | Reuse existing worktree directory (set by /do workflow) |
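The same knobs can be set per shell session; an illustrative sketch (the values are examples, not recommendations):
```bash
# Illustrative environment-based configuration
export CODEAGENT_BACKEND=claude
export CODEAGENT_MAX_PARALLEL_WORKERS=4
export CODEAGENT_TMPDIR="$HOME/.codeagent/tmp"   # macOS temp-dir permission workaround
codeagent-wrapper "summarize recent changes in internal/executor"
```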
### Agent Presets (`~/.codeagent/models.json`)
```json
{
"default_backend": "opencode",
"default_model": "opencode/grok-code",
"default_backend": "codex",
"default_model": "gpt-4.1",
"backends": {
"codex": { "api_key": "..." },
"claude": { "base_url": "http://localhost:23001", "api_key": "..." }
},
"agents": {
"develop": {
"backend": "codex",
"model": "gpt-4.1",
"prompt_file": "~/.codeagent/prompts/develop.md",
"description": "Code development"
"reasoning": "high",
"yolo": true,
"allowed_tools": ["Read", "Write", "Bash"],
"disallowed_tools": ["WebFetch"]
}
}
}
```
Use `--agent <name>` to select a preset. Agents inherit `base_url` / `api_key` from the corresponding `backends` entry.
### Dynamic Agents
Place a `{name}.md` file in `~/.codeagent/agents/` to use it via `--agent {name}`. The Markdown file is read as the prompt, using `default_backend` and `default_model`.
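A minimal sketch of defining and using a dynamic agent (the `reviewer` name and prompt text are hypothetical):
```bash
# Create a dynamic agent from a Markdown prompt file; the agent name comes from the file name
mkdir -p ~/.codeagent/agents
cat > ~/.codeagent/agents/reviewer.md <<'EOF'
You are a strict code reviewer. Report only concrete defects with file and line references.
EOF

# Select it like any other agent; it runs on default_backend / default_model
codeagent-wrapper --agent reviewer "review the changes in internal/worktree"
```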
### Skill Auto-Detection
When no skills are specified via `--skills`, codeagent-wrapper auto-detects the tech stack from files in the working directory:
| Detected Files | Injected Skills |
|----------------|-----------------|
| `go.mod` / `go.sum` | `golang-base-practices` |
| `Cargo.toml` | `rust-best-practices` |
| `pyproject.toml` / `setup.py` / `requirements.txt` | `python-best-practices` |
| `package.json` | `vercel-react-best-practices`, `frontend-design` |
| `vue.config.js` / `vite.config.ts` / `nuxt.config.ts` | `vue-web-app` |
Skill specs are read from `~/.claude/skills/{name}/SKILL.md`, subject to a 16000-character budget.
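To check which skill specs are installed locally (assuming the default `~/.claude/skills/` location):
```bash
# List available skill specs and confirm one fits the 16000-character budget
ls ~/.claude/skills/
wc -c ~/.claude/skills/golang-base-practices/SKILL.md
```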
## Supported Backends
This project does not embed model capabilities. It requires the corresponding CLI tools installed and available in `PATH`:
| Backend | Command | Notes |
|---------|---------|-------|
| `codex` | `codex e ...` | Adds `--dangerously-bypass-approvals-and-sandbox` by default; set `CODEX_BYPASS_SANDBOX=false` to disable |
| `claude` | `claude -p ... --output-format stream-json` | Skips permissions and disables setting-sources to prevent recursion; set `CODEAGENT_SKIP_PERMISSIONS=false` to enable prompts; auto-reads env and model from `~/.claude/settings.json` |
| `gemini` | `gemini -o stream-json -y ...` | Auto-loads env vars from `~/.gemini/.env` (GEMINI_API_KEY, GEMINI_MODEL, etc.) |
| `opencode` | `opencode run --format json` | — |
## Project Structure
```
cmd/codeagent-wrapper/main.go # CLI entry point
internal/
app/ # CLI command definitions, argument parsing, main orchestration
backend/ # Backend abstraction and implementations (codex/claude/gemini/opencode)
config/ # Config loading, agent resolution, viper bindings
executor/ # Task execution engine: single/parallel/worktree/skill injection
logger/ # Structured logging system
parser/ # JSON stream parser
utils/ # Common utility functions
worktree/ # Git worktree management
```
## Development
```bash
make build # Build binary
make test # Run tests
make lint # golangci-lint + staticcheck
make clean # Clean build artifacts
make install # Install to $GOPATH/bin
```
CI uses GitHub Actions with Go 1.21 / 1.22 matrix testing.
## Troubleshooting
- On macOS, if you see `permission denied` related to temp directories, set: `CODEAGENT_TMPDIR=$HOME/.codeagent/tmp`
- `claude` backend's `base_url` / `api_key` (from `~/.codeagent/models.json` `backends.claude`) are injected as `ANTHROPIC_BASE_URL` / `ANTHROPIC_API_KEY` env vars
- `gemini` backend's API key is loaded from `~/.gemini/.env`, injected as `GEMINI_API_KEY` with `GEMINI_API_KEY_AUTH_MECHANISM=bearer` auto-set
- Exit codes: 127 = backend not found, 124 = timeout, 130 = interrupted
- Parallel mode outputs structured summary by default; use `--full-output` for complete output when debugging
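When wrapping codeagent-wrapper in scripts, the documented exit codes can be branched on; a small sketch:
```bash
# Sketch: react to the documented exit codes
codeagent-wrapper --backend codex "run the parser test suite"
status=$?
case "$status" in
  0)   echo "task finished" ;;
  124) echo "backend timed out (consider raising CODEX_TIMEOUT)" ;;
  127) echo "backend CLI not found in PATH" ;;
  130) echo "interrupted" ;;
  *)   echo "failed with exit code $status" ;;
esac
```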


@@ -0,0 +1,272 @@
# codeagent-wrapper
[English](README.md) | [中文](README_CN.md)
`codeagent-wrapper` 是一个用 Go 编写的多后端 AI 代码代理命令行包装器:用统一的 CLI 入口封装不同的 AI 工具后端(Codex / Claude / Gemini / OpenCode),并提供一致的参数、配置、技能注入与会话恢复体验。
入口:`cmd/codeagent-wrapper/main.go`(生成二进制名:`codeagent-wrapper`)。
## 功能特性
- **多后端支持**:`codex` / `claude` / `gemini` / `opencode`
- **统一命令行**:`codeagent-wrapper [flags] <task>` / `codeagent-wrapper resume <session_id> <task> [workdir]`
- **自动 stdin**:遇到换行/特殊字符/超长任务自动走 stdin,避免 shell quoting 问题;也可显式使用 `-`
- **配置合并**:支持配置文件与 `CODEAGENT_*` 环境变量(viper)
- **Agent 预设**:从 `~/.codeagent/models.json` 读取 backend/model/prompt/reasoning/yolo/allowed_tools 等预设
- **动态 Agent**:在 `~/.codeagent/agents/{name}.md` 放置 prompt 文件即可作为 agent 使用
- **技能自动注入**:`--skills` 手动指定,或根据项目技术栈自动检测(Go/Rust/Python/Node.js/Vue)并注入对应技能规范
- **Git Worktree 隔离**:`--worktree` 在独立 git worktree 中执行任务,自动生成 task_id 和分支
- **并行执行**:`--parallel` 从 stdin 读取多任务配置,支持依赖拓扑并发执行,带结构化摘要报告
- **后端配置**:`models.json` 的 `backends` 节支持 per-backend 的 `base_url` / `api_key` 注入
- **Claude 工具控制**:`allowed_tools` / `disallowed_tools` 限制 Claude 后端可用工具
- **Stderr 降噪**:自动过滤 Gemini 和 Codex 后端的噪声 stderr 输出
- **日志清理**:`codeagent-wrapper cleanup` 清理旧日志(日志写入系统临时目录)
- **跨平台**:支持 macOS / Linux / Windows
## 安装
### 推荐方式(交互式安装器)
```bash
npx github:cexll/myclaude
```
选择 `codeagent-wrapper` 模块进行安装。
### 手动构建
要求:Go 1.21+。
```bash
# 从源码构建
make build
# 或直接安装到 $GOPATH/bin
make install
```
安装后确认:
```bash
codeagent-wrapper --version
```
## 使用示例
最简单用法(默认后端:`codex`):
```bash
codeagent-wrapper "分析 internal/app/cli.go 的入口逻辑,给出改进建议"
```
指定后端:
```bash
codeagent-wrapper --backend claude "解释 internal/executor/parallel_config.go 的并行配置格式"
```
指定工作目录(第 2 个位置参数):
```bash
codeagent-wrapper "在当前 repo 下搜索潜在数据竞争" .
```
显式从 stdin 读取 task(使用 `-`):
```bash
cat task.txt | codeagent-wrapper -
```
使用 HEREDOC(推荐用于多行任务):
```bash
codeagent-wrapper --backend claude - <<'EOF'
实现用户认证系统:
- JWT 令牌
- bcrypt 密码哈希
- 会话管理
EOF
```
恢复会话:
```bash
codeagent-wrapper resume <session_id> "继续上次任务"
```
在 git worktree 中隔离执行:
```bash
codeagent-wrapper --worktree "重构认证模块"
```
手动指定技能注入:
```bash
codeagent-wrapper --skills golang-base-practices "优化数据库查询"
```
并行模式(从 stdin 读取任务配置):
```bash
codeagent-wrapper --parallel <<'EOF'
---TASK---
id: t1
workdir: .
backend: codex
---CONTENT---
列出本项目的主要模块以及它们的职责。
---TASK---
id: t2
dependencies: t1
backend: claude
---CONTENT---
基于 t1 的结论,提出重构风险点与建议。
EOF
```
## CLI 参数
| 参数 | 说明 |
|------|------|
| `--backend <name>` | 后端选择(codex/claude/gemini/opencode) |
| `--model <name>` | 覆盖模型 |
| `--agent <name>` | Agent 预设名(来自 models.json 或 ~/.codeagent/agents/) |
| `--prompt-file <path>` | 从文件读取 prompt |
| `--skills <names>` | 逗号分隔的技能名,注入对应规范 |
| `--reasoning-effort <level>` | 推理力度(后端相关) |
| `--skip-permissions` | 跳过权限提示 |
| `--dangerously-skip-permissions` | `--skip-permissions` 的别名 |
| `--worktree` | 在新 git worktree 中执行(自动生成 task_id) |
| `--parallel` | 并行任务模式(从 stdin 读取配置) |
| `--full-output` | 并行模式下输出完整消息(默认仅输出摘要) |
| `--config <path>` | 配置文件路径(默认:`$HOME/.codeagent/config.*`) |
| `--version`, `-v` | 打印版本号 |
| `--cleanup` | 清理旧日志 |
## 配置说明
### 配置文件
默认查找路径(当 `--config` 为空时):
- `$HOME/.codeagent/config.(yaml|yml|json|toml|...)`
示例(YAML):
```yaml
backend: codex
model: gpt-4.1
skip-permissions: false
```
也可以通过 `--config /path/to/config.yaml` 显式指定。
### 环境变量(`CODEAGENT_*`)
通过 viper 读取并自动映射 `-` 到 `_`,常用项:
| 变量 | 说明 |
|------|------|
| `CODEAGENT_BACKEND` | 后端名(codex/claude/gemini/opencode) |
| `CODEAGENT_MODEL` | 模型名 |
| `CODEAGENT_AGENT` | Agent 预设名 |
| `CODEAGENT_PROMPT_FILE` | Prompt 文件路径 |
| `CODEAGENT_REASONING_EFFORT` | 推理力度 |
| `CODEAGENT_SKIP_PERMISSIONS` | 跳过权限提示(默认 true,设 `false` 关闭) |
| `CODEAGENT_FULL_OUTPUT` | 并行模式完整输出 |
| `CODEAGENT_MAX_PARALLEL_WORKERS` | 并行 worker 数(0=不限制,上限 100) |
| `CODEAGENT_TMPDIR` | 自定义临时目录(macOS 权限问题时使用) |
| `CODEX_TIMEOUT` | 超时(毫秒,默认 7200000 即 2 小时) |
| `CODEX_BYPASS_SANDBOX` | Codex sandbox bypass(默认 true,设 `false` 关闭) |
| `DO_WORKTREE_DIR` | 复用已有 worktree 目录(由 /do 工作流设置) |
### Agent 预设(`~/.codeagent/models.json`)
```json
{
"default_backend": "codex",
"default_model": "gpt-4.1",
"backends": {
"codex": { "api_key": "..." },
"claude": { "base_url": "http://localhost:23001", "api_key": "..." }
},
"agents": {
"develop": {
"backend": "codex",
"model": "gpt-4.1",
"prompt_file": "~/.codeagent/prompts/develop.md",
"reasoning": "high",
"yolo": true,
"allowed_tools": ["Read", "Write", "Bash"],
"disallowed_tools": ["WebFetch"]
}
}
}
```
用 `--agent <name>` 选择预设,agent 会继承 `backends` 下对应后端的 `base_url` / `api_key`。
### 动态 Agent
在 `~/.codeagent/agents/` 目录放置 `{name}.md` 文件,即可通过 `--agent {name}` 使用,自动读取该 Markdown 作为 prompt,使用 `default_backend` 和 `default_model`。
### 技能自动检测
当未通过 `--skills` 显式指定技能时,codeagent-wrapper 会根据工作目录中的文件自动检测技术栈:
| 检测文件 | 注入技能 |
|----------|----------|
| `go.mod` / `go.sum` | `golang-base-practices` |
| `Cargo.toml` | `rust-best-practices` |
| `pyproject.toml` / `setup.py` / `requirements.txt` | `python-best-practices` |
| `package.json` | `vercel-react-best-practices`, `frontend-design` |
| `vue.config.js` / `vite.config.ts` / `nuxt.config.ts` | `vue-web-app` |
技能规范从 `~/.claude/skills/{name}/SKILL.md` 读取,受 16000 字符预算限制。
## 支持的后端
该项目本身不内置模型能力,依赖本机安装并可在 `PATH` 中找到对应 CLI:
| 后端 | 执行命令 | 说明 |
|------|----------|------|
| `codex` | `codex e ...` | 默认添加 `--dangerously-bypass-approvals-and-sandbox`;设 `CODEX_BYPASS_SANDBOX=false` 关闭 |
| `claude` | `claude -p ... --output-format stream-json` | 默认跳过权限并禁用 setting-sources 防止递归;设 `CODEAGENT_SKIP_PERMISSIONS=false` 开启权限;自动读取 `~/.claude/settings.json` 中的 env 和 model |
| `gemini` | `gemini -o stream-json -y ...` | 自动从 `~/.gemini/.env` 加载环境变量(GEMINI_API_KEY, GEMINI_MODEL 等) |
| `opencode` | `opencode run --format json` | — |
## 项目结构
```
cmd/codeagent-wrapper/main.go # CLI 入口
internal/
app/ # CLI 命令定义、参数解析、主逻辑编排
backend/ # 后端抽象与实现(codex/claude/gemini/opencode)
config/ # 配置加载、agent 解析、viper 绑定
executor/ # 任务执行引擎:单任务/并行/worktree/技能注入
logger/ # 结构化日志系统
parser/ # JSON stream 解析器
utils/ # 通用工具函数
worktree/ # Git worktree 管理
```
## 开发
```bash
make build # 构建
make test # 运行测试
make lint # golangci-lint + staticcheck
make clean # 清理构建产物
make install # 安装到 $GOPATH/bin
```
CI 使用 GitHub Actions,Go 1.21 / 1.22 矩阵测试。
## 故障排查
- macOS 下如果看到临时目录相关的 `permission denied`,可设置:`CODEAGENT_TMPDIR=$HOME/.codeagent/tmp`
- `claude` 后端的 `base_url` / `api_key`(来自 `~/.codeagent/models.json``backends.claude`)会注入到子进程环境变量 `ANTHROPIC_BASE_URL` / `ANTHROPIC_API_KEY`
- `gemini` 后端的 API key 从 `~/.gemini/.env` 加载,注入 `GEMINI_API_KEY` 并自动设置 `GEMINI_API_KEY_AUTH_MECHANISM=bearer`
- 后端命令未找到时返回退出码 127超时返回 124中断返回 130
- 并行模式默认输出结构化摘要,使用 `--full-output` 查看完整输出以便调试


@@ -10,31 +10,17 @@ An orchestrator for systematic feature development. Invoke agents via `codeagent
## Loop Initialization (REQUIRED)
When triggered via `/do <task>`, initialize the task directory immediately without asking about worktree:
```bash
python3 ".claude/skills/do/scripts/setup-do.py" "<task description>"
```
This creates a task directory under `.claude/do-tasks/` with:
- `task.md`: Single file containing YAML frontmatter (metadata) + Markdown body (requirements/context)
**Worktree decision is deferred until Phase 4 (Implement).** Phases 1-3 are read-only and do not require worktree isolation.
## Task Directory Management
Use `task.py` to manage task state:
@@ -52,15 +38,23 @@ python3 ".claude/skills/do/scripts/task.py" list
## Worktree Mode
The worktree is created **only when needed** (right before Phase 4: Implement). If the user chooses worktree mode:
1. Run setup with `--worktree` flag to create the worktree:
```bash
python3 ".claude/skills/do/scripts/setup-do.py" --worktree "<task description>"
```
2. Use the `DO_WORKTREE_DIR` environment variable to direct the `codeagent-wrapper` develop agent into the worktree. **Do NOT pass `--worktree` to subsequent calls**, as that creates a new worktree each time.
```bash
# Save the worktree path from setup output, then prefix all develop calls:
DO_WORKTREE_DIR=<worktree_dir> codeagent-wrapper --agent develop - . <<'EOF'
...
EOF
```
Read-only agents (code-explorer, code-architect, code-reviewer) do NOT need `DO_WORKTREE_DIR`.
## Hard Constraints
@@ -69,7 +63,7 @@ Read-only agents (code-explorer, code-architect, code-reviewer) do NOT need `--w
3. **Update phase after each phase.** Use `task.py update-phase <N>`.
4. **Expect long-running `codeagent-wrapper` calls.** High-reasoning modes can take a long time.
5. **Timeouts are not an escape hatch.** If a call times out, retry with narrower scope.
6. **Defer worktree decision until Phase 4.** Only ask about worktree mode right before implementation. If enabled, prefix develop agent calls with `DO_WORKTREE_DIR=<path>`. Never pass `--worktree` after initialization.
## Agents
@@ -78,7 +72,7 @@ Read-only agents (code-explorer, code-architect, code-reviewer) do NOT need `--w
| `code-explorer` | Trace code, map architecture, find patterns | No (read-only) |
| `code-architect` | Design approaches, file plans, build sequences | No (read-only) |
| `code-reviewer` | Review for bugs, simplicity, conventions | No (read-only) |
| `develop` | Implement code, run tests | **Yes** — use `DO_WORKTREE_DIR` env prefix |
## Issue Severity Definitions
@@ -175,12 +169,39 @@ EOF
**Goal:** Build feature and review in one phase.
**Step 1: Decide on worktree mode (ONLY NOW)**
Use AskUserQuestion to ask:
```
Develop in a separate worktree? (Isolates changes from main branch)
- Yes (Recommended for larger changes)
- No (Work directly in current directory)
```
If user chooses worktree:
```bash
python3 ".claude/skills/do/scripts/setup-do.py" --worktree "<task description>"
# Save the worktree path from output for DO_WORKTREE_DIR
```
**Step 2: Invoke develop agent**
For full-stack projects, split into backend/frontend tasks with per-task `skills:` injection. Use `--parallel` when tasks can be split; use single agent when the change is small or single-domain.
**Single-domain example** (prefix with `DO_WORKTREE_DIR` if worktree enabled):
```bash
# With worktree:
DO_WORKTREE_DIR=<worktree_dir> codeagent-wrapper --agent develop --skills golang-base-practices - . <<'EOF'
Implement with minimal change set following the Phase 3 blueprint.
- Follow Phase 1 patterns
- Add/adjust tests per Phase 3 plan
- Run narrowest relevant tests
EOF
# Without worktree:
codeagent-wrapper --agent develop --skills golang-base-practices - . <<'EOF'
Implement with minimal change set following the Phase 3 blueprint.
- Follow Phase 1 patterns
- Add/adjust tests per Phase 3 plan
@@ -191,7 +212,8 @@ EOF
**Full-stack parallel example** (adapt task IDs, skills, and content based on Phase 3 design):
```bash
# With worktree:
DO_WORKTREE_DIR=<worktree_dir> codeagent-wrapper --parallel <<'EOF'
---TASK---
id: p4_backend
agent: develop
@@ -213,11 +235,17 @@ Implement frontend changes following Phase 3 blueprint.
- Follow Phase 1 patterns
- Add/adjust tests per Phase 3 plan
EOF
# Without worktree: remove DO_WORKTREE_DIR prefix
```
Note: Choose which skills to inject based on Phase 3 design output. Only inject skills relevant to each task's domain.
**Step 3: Review**
Run parallel reviews:
```bash
codeagent-wrapper --parallel <<'EOF'
@@ -239,9 +267,10 @@ Classify each issue as BLOCKING or MINOR.
EOF
```
**Step 4: Handle review results**
- **MINOR issues only** → Auto-fix via `develop`, no user interaction
- **BLOCKING issues** → Use AskUserQuestion: "Fix now / Proceed as-is"
### Phase 5: Complete (No Interaction)

skills/harness/SKILL.md (new file)

@@ -0,0 +1,329 @@
---
name: harness
description: "This skill should be used for multi-session autonomous agent work requiring progress checkpointing, failure recovery, and task dependency management. Triggers on '/harness' command, or when a task involves many subtasks needing progress persistence, sleep/resume cycles across context windows, recovery from mid-task failures with partial state, or distributed work across multiple agent sessions. Synthesized from Anthropic and OpenAI engineering practices for long-running agents."
---
# Harness — Long-Running Agent Framework
Executable protocol enabling any agent task to run continuously across multiple sessions with automatic progress recovery, task dependency resolution, failure rollback, and standardized error handling.
## Design Principles
1. **Design for the agent, not the human** — Test output, docs, and task structure are the agent's primary interface
2. **Progress files ARE the context** — When context window resets, progress files + git history = full recovery
3. **Premature completion is the #1 failure mode** — Structured task lists with explicit completion criteria prevent declaring victory early
4. **Standardize everything grep-able** — ERROR on same line, structured timestamps, consistent prefixes
5. **Fast feedback loops** — Pre-compute stats, run smoke tests before full validation
6. **Idempotent everything** — Init scripts, task execution, environment setup must all be safe to re-run
7. **Fail safe, not fail silent** — Every failure must have an explicit recovery strategy
## Commands
```
/harness init <project-path> # Initialize harness files in project
/harness run # Start/resume the infinite loop
/harness status # Show current progress and stats
/harness add "task description" # Add a task to the list
```
## Progress Persistence (Dual-File System)
Maintain two files in the project working directory:
### harness-progress.txt (Append-Only Log)
Free-text log of all agent actions across sessions. Never truncate.
```
[2025-07-01T10:00:00Z] [SESSION-1] INIT Harness initialized for project /path/to/project
[2025-07-01T10:00:05Z] [SESSION-1] INIT Environment health check: PASS
[2025-07-01T10:00:10Z] [SESSION-1] LOCK acquired (pid=12345)
[2025-07-01T10:00:11Z] [SESSION-1] Starting [task-001] Implement user authentication (base=def5678)
[2025-07-01T10:05:00Z] [SESSION-1] CHECKPOINT [task-001] step=2/4 "auth routes created, tests pending"
[2025-07-01T10:15:30Z] [SESSION-1] Completed [task-001] (commit abc1234)
[2025-07-01T10:15:31Z] [SESSION-1] Starting [task-002] Add rate limiting (base=abc1234)
[2025-07-01T10:20:00Z] [SESSION-1] ERROR [task-002] [TASK_EXEC] Redis connection refused
[2025-07-01T10:20:01Z] [SESSION-1] ROLLBACK [task-002] git reset --hard abc1234
[2025-07-01T10:20:02Z] [SESSION-1] STATS tasks_total=5 completed=1 failed=1 pending=3 blocked=0 attempts_total=2 checkpoints=1
```
### harness-tasks.json (Structured State)
```json
{
"version": 2,
"created": "2025-07-01T10:00:00Z",
"session_config": {
"max_tasks_per_session": 20,
"max_sessions": 50
},
"tasks": [
{
"id": "task-001",
"title": "Implement user authentication",
"status": "completed",
"priority": "P0",
"depends_on": [],
"attempts": 1,
"max_attempts": 3,
"started_at_commit": "def5678",
"validation": {
"command": "npm test -- --testPathPattern=auth",
"timeout_seconds": 300
},
"on_failure": {
"cleanup": null
},
"error_log": [],
"checkpoints": [],
"completed_at": "2025-07-01T10:15:30Z"
},
{
"id": "task-002",
"title": "Add rate limiting",
"status": "failed",
"priority": "P1",
"depends_on": [],
"attempts": 1,
"max_attempts": 3,
"started_at_commit": "abc1234",
"validation": {
"command": "npm test -- --testPathPattern=rate-limit",
"timeout_seconds": 120
},
"on_failure": {
"cleanup": "docker compose down redis"
},
"error_log": ["[TASK_EXEC] Redis connection refused"],
"checkpoints": [],
"completed_at": null
},
{
"id": "task-003",
"title": "Add OAuth providers",
"status": "pending",
"priority": "P1",
"depends_on": ["task-001"],
"attempts": 0,
"max_attempts": 3,
"started_at_commit": null,
"validation": {
"command": "npm test -- --testPathPattern=oauth",
"timeout_seconds": 180
},
"on_failure": {
"cleanup": null
},
"error_log": [],
"checkpoints": [],
"completed_at": null
}
],
"session_count": 1,
"last_session": "2025-07-01T10:20:02Z"
}
```
Task statuses: `pending``in_progress` (transient, set only during active execution) → `completed` or `failed`. A task found as `in_progress` at session start means the previous session was interrupted — handle via Context Window Recovery Protocol.
**Session boundary**: A session starts when the agent begins executing the Session Start protocol and ends when a Stopping Condition is met or the context window resets. Each session gets a unique `SESSION-N` identifier (N = `session_count` after increment).
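For quick inspection from the shell, the structured state can be queried with `jq` (not required by this skill, shown only as a convenience sketch):
```bash
# One line per task: status, id, title
jq -r '.tasks[] | "\(.status)\t\(.id)\t\(.title)"' harness-tasks.json
# Pending tasks that still declare dependencies
jq -r '.tasks[] | select(.status == "pending" and (.depends_on | length > 0)) | .id' harness-tasks.json
```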
## Concurrency Control
Before modifying `harness-tasks.json`, acquire an exclusive lock using portable `mkdir` (atomic on all POSIX systems, works on both macOS and Linux):
```bash
# Acquire lock (fail fast if another agent is running)
# Hash the project path so each working directory gets its own lock
PROJECT_HASH=$(printf '%s' "$(pwd)" | { shasum -a 256 2>/dev/null || sha256sum; } | cut -c1-8)
LOCKDIR="/tmp/harness-${PROJECT_HASH}.lock"
if ! mkdir "$LOCKDIR" 2>/dev/null; then
# Check if lock holder is still alive
LOCK_PID=$(cat "$LOCKDIR/pid" 2>/dev/null)
if [ -n "$LOCK_PID" ] && kill -0 "$LOCK_PID" 2>/dev/null; then
echo "ERROR: Another harness session is active (pid=$LOCK_PID)"; exit 1
fi
# Stale lock — atomically reclaim via mv to avoid TOCTOU race
STALE="$LOCKDIR.stale.$$"
if mv "$LOCKDIR" "$STALE" 2>/dev/null; then
rm -rf "$STALE"
mkdir "$LOCKDIR" || { echo "ERROR: Lock contention"; exit 1; }
echo "WARN: Removed stale lock${LOCK_PID:+ from pid=$LOCK_PID}"
else
echo "ERROR: Another agent reclaimed the lock"; exit 1
fi
fi
echo "$$" > "$LOCKDIR/pid"
trap 'rm -rf "$LOCKDIR"' EXIT
```
Log lock acquisition: `[timestamp] [SESSION-N] LOCK acquired (pid=<PID>)`
Log lock release: `[timestamp] [SESSION-N] LOCK released`
The lock is held for the entire session. The `trap EXIT` handler releases it automatically on normal exit, errors, or signals. Never release the lock between tasks within a session.
## Infinite Loop Protocol
### Session Start (Execute Every Time)
1. **Read state**: Read last 200 lines of `harness-progress.txt` + full `harness-tasks.json`. If JSON is unparseable, see JSON corruption recovery in Error Handling.
2. **Read git**: Run `git log --oneline -20` and `git diff --stat` to detect uncommitted work
3. **Acquire lock**: Fail if another session is active
4. **Recover interrupted tasks** (see Context Window Recovery below)
5. **Health check**: Run `harness-init.sh` if it exists
6. **Track session**: Increment `session_count` in JSON. Check `session_count` against `max_sessions` — if reached, log STATS and STOP. Initialize per-session task counter to 0.
7. **Pick next task** using Task Selection Algorithm below
### Task Selection Algorithm
Before selecting, run dependency validation:
1. **Cycle detection**: For each non-completed task, walk `depends_on` transitively. If any task appears in its own chain, mark it `failed` with `[DEPENDENCY] Circular dependency detected: task-A -> task-B -> task-A`. Self-references (`depends_on` includes own id) are also cycles.
2. **Blocked propagation**: If a task's `depends_on` includes a task that is `failed` and will never be retried (either `attempts >= max_attempts` OR its `error_log` contains a `[DEPENDENCY]` entry), mark the blocked task as `failed` with `[DEPENDENCY] Blocked by failed task-XXX`. Repeat until no more tasks can be propagated.
Then pick the next task in this priority order:
1. Tasks with `status: "pending"` where ALL `depends_on` tasks are `completed` — sorted by `priority` (P0 > P1 > P2), then by `id` (lowest first)
2. Tasks with `status: "failed"` where `attempts < max_attempts` and ALL `depends_on` are `completed` — sorted by priority, then oldest failure first
3. If no eligible tasks remain → log final STATS → STOP
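A sketch of pass 1 of this selection using `jq` (cycle detection and the retry pass are omitted; `P0` < `P1` < `P2` sorts correctly as plain strings):
```bash
# Pass 1 sketch: first pending task whose dependencies are all completed,
# ordered by priority then id
jq -r '
  .tasks as $all
  | [ .tasks[]
      | select(.status == "pending")
      | select([.depends_on[] as $d | ($all[] | select(.id == $d) | .status)] | all(. == "completed"))
    ]
  | sort_by(.priority, .id)
  | (.[0].id // "no eligible task")
' harness-tasks.json
```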
### Task Execution Cycle
For each task, execute this exact sequence:
1. **Claim**: Record `started_at_commit` = current HEAD hash. Set status to `in_progress`, log `Starting [<task-id>] <title> (base=<hash>)`
2. **Execute with checkpoints**: Perform the work. After each significant step, log:
```
[timestamp] [SESSION-N] CHECKPOINT [task-id] step=M/N "description of what was done"
```
Also append to the task's `checkpoints` array: `{ "step": M, "total": N, "description": "...", "timestamp": "ISO" }`
3. **Validate**: Run the task's `validation.command` wrapped with `timeout`: `timeout <timeout_seconds> <command>`. If no validation command, skip. Before running, verify the command exists (e.g., `command -v <binary>`) — if missing, treat as `ENV_SETUP` error.
- Command exits 0 → PASS
- Command exits non-zero → FAIL
- Command exceeds timeout → TIMEOUT
4. **Record outcome**:
- **Success**: status=`completed`, set `completed_at`, log `Completed [<task-id>] (commit <hash>)`, git commit
- **Failure**: increment `attempts`, append error to `error_log`. Verify `started_at_commit` exists via `git cat-file -t <hash>` — if missing, mark failed at max_attempts. Otherwise execute `git reset --hard <started_at_commit>` and `git clean -fd` to rollback ALL commits and remove untracked files. Execute `on_failure.cleanup` if defined. Log `ERROR [<task-id>] [<category>] <message>`. Set status=`failed` (Task Selection Algorithm pass 2 handles retries when attempts < max_attempts)
5. **Track**: Increment per-session task counter. If `max_tasks_per_session` reached, log STATS and STOP.
6. **Continue**: Immediately pick next task (zero idle time)
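A condensed sketch of steps 3 and 4 for a single task, using the `task-001` values from the example state above (JSON bookkeeping and error categorization omitted):
```bash
# Validate, then either commit or roll back to the task's base commit
BASE_COMMIT="def5678"                                  # started_at_commit
VALIDATE_CMD="npm test -- --testPathPattern=auth"      # validation.command
VALIDATE_TIMEOUT=300                                   # validation.timeout_seconds

if timeout "$VALIDATE_TIMEOUT" bash -c "$VALIDATE_CMD"; then
  git add -A && git commit -m "task-001: implement user authentication"
  echo "[$(date -u +%FT%TZ)] [SESSION-1] Completed [task-001] (commit $(git rev-parse --short HEAD))" >> harness-progress.txt
else
  # Roll back every commit and untracked file produced for this task
  git cat-file -t "$BASE_COMMIT" >/dev/null 2>&1 && git reset --hard "$BASE_COMMIT" && git clean -fd
  echo "[$(date -u +%FT%TZ)] [SESSION-1] ERROR [task-001] [TEST_FAIL] validation failed or timed out" >> harness-progress.txt
fi
```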
### Stopping Conditions
- All tasks `completed`
- All remaining tasks `failed` at max_attempts or blocked by failed dependencies
- `session_config.max_tasks_per_session` reached for this session
- `session_config.max_sessions` reached across all sessions
- User interrupts
## Context Window Recovery Protocol
When a new session starts and finds a task with `status: "in_progress"`:
1. **Check git state**:
```bash
git diff --stat # Uncommitted changes?
git log --oneline -5 # Recent commits since task started?
git stash list # Any stashed work?
```
2. **Check checkpoints**: Read the task's `checkpoints` array to determine last completed step
3. **Decision matrix** (verify recent commits belong to this task by checking commit messages for the task-id):
| Uncommitted? | Recent task commits? | Checkpoints? | Action |
|---|---|---|---|
| No | No | None | Mark `failed` with `[SESSION_TIMEOUT] No progress detected`, increment attempts |
| No | No | Some | Verify file state matches checkpoint claims. If files reflect checkpoint progress, resume from last step. If not, mark `failed` — work was lost |
| No | Yes | Any | Run `validation.command`. If passes → mark `completed`. If fails → `git reset --hard <started_at_commit>`, mark `failed` |
| Yes | No | Any | Run validation WITH uncommitted changes present. If passes → commit, mark `completed`. If fails → `git reset --hard <started_at_commit>` + `git clean -fd`, mark `failed` |
| Yes | Yes | Any | Commit uncommitted changes, run `validation.command`. If passes → mark `completed`. If fails → `git reset --hard <started_at_commit>` + `git clean -fd`, mark `failed` |
4. **Log recovery**: `[timestamp] [SESSION-N] RECOVERY [task-id] action="<action taken>" reason="<reason>"`
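One way to implement the "commits belong to this task" check from the decision matrix, assuming commit messages embed the task id as in the execution cycle above:
```bash
# abc1234 is task-002's started_at_commit in the example state above.
# Any commits since the task's base that mention the task id?
if git log --oneline "abc1234..HEAD" | grep -q "task-002"; then
  echo "recent commits belong to task-002"
fi
```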
## Error Handling & Recovery Strategies
Each error category has a default recovery strategy:
| Category | Default Recovery | Agent Action |
|----------|-----------------|--------------|
| `ENV_SETUP` | Re-run init, then STOP if still failing | Run `harness-init.sh` again immediately. If fails twice, log and stop — environment is broken |
| `TASK_EXEC` | Rollback via `git reset --hard <started_at_commit>`, retry | Verify `started_at_commit` exists (`git cat-file -t <hash>`). If missing, mark failed at max_attempts. Otherwise reset, run `on_failure.cleanup` if defined, retry if attempts < max_attempts |
| `TEST_FAIL` | Rollback via `git reset --hard <started_at_commit>`, retry | Reset to `started_at_commit`, analyze test output to identify fix, retry with targeted changes |
| `TIMEOUT` | Kill process, execute cleanup, retry | Wrap validation with `timeout <seconds> <command>`. On timeout, run `on_failure.cleanup`, retry (consider splitting task if repeated) |
| `DEPENDENCY` | Skip task, mark blocked | Log which dependency failed, mark task as `failed` with dependency reason |
| `SESSION_TIMEOUT` | Use Context Window Recovery Protocol | New session assesses partial progress via Recovery Protocol — may result in completion or failure depending on validation |
**JSON corruption**: If `harness-tasks.json` cannot be parsed, check for `harness-tasks.json.bak` (written before each modification). If backup exists and is valid, restore from it. If no valid backup, log `ERROR [ENV_SETUP] harness-tasks.json corrupted and unrecoverable` and STOP — task metadata (validation commands, dependencies, cleanup) cannot be reconstructed from logs alone.
**Backup protocol**: Before every write to `harness-tasks.json`, copy the current file to `harness-tasks.json.bak`.
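A sketch of the backup-then-restore flow (using `jq` to validate the JSON; any parser would do):
```bash
# Before every write: refresh the known-good backup
cp harness-tasks.json harness-tasks.json.bak

# At session start: restore from the backup if the live file no longer parses
if ! jq empty harness-tasks.json 2>/dev/null; then
  if jq empty harness-tasks.json.bak 2>/dev/null; then
    cp harness-tasks.json.bak harness-tasks.json
    echo "WARN: restored harness-tasks.json from backup"
  else
    echo "ERROR [ENV_SETUP] harness-tasks.json corrupted and unrecoverable"
    exit 1
  fi
fi
```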
## Environment Initialization
If `harness-init.sh` exists in the project root, run it at every session start. The script must be idempotent.
Example `harness-init.sh`:
```bash
#!/bin/bash
set -e
npm install 2>/dev/null || pip install -r requirements.txt 2>/dev/null || true
curl -sf http://localhost:5432 >/dev/null 2>&1 || echo "WARN: DB not reachable"
npm test -- --bail --silent 2>/dev/null || echo "WARN: Smoke test failed"
echo "Environment health check complete"
```
## Standardized Log Format
All log entries use grep-friendly format on a single line:
```
[ISO-timestamp] [SESSION-N] <TYPE> [task-id]? [category]? message
```
`[task-id]` and `[category]` are included when applicable (task-scoped entries). Session-level entries (`INIT`, `LOCK`, `STATS`) omit them.
Types: `INIT`, `Starting`, `Completed`, `ERROR`, `CHECKPOINT`, `ROLLBACK`, `RECOVERY`, `STATS`, `LOCK`, `WARN`
Error categories: `ENV_SETUP`, `TASK_EXEC`, `TEST_FAIL`, `TIMEOUT`, `DEPENDENCY`, `SESSION_TIMEOUT`
Filtering:
```bash
grep "ERROR" harness-progress.txt # All errors
grep "ERROR" harness-progress.txt | grep "TASK_EXEC" # Execution errors only
grep "SESSION-3" harness-progress.txt # All session 3 activity
grep "STATS" harness-progress.txt # All session summaries
grep "CHECKPOINT" harness-progress.txt # All checkpoints
grep "RECOVERY" harness-progress.txt # All recovery actions
```
## Session Statistics
At session end, update `harness-tasks.json`: set `last_session` to the current timestamp (`session_count` was already incremented at Session Start, step 6). Then append:
```
[timestamp] [SESSION-N] STATS tasks_total=10 completed=7 failed=1 pending=2 blocked=0 attempts_total=12 checkpoints=23
```
`blocked` is computed at stats time: count of pending tasks whose `depends_on` includes a permanently failed task. It is not a stored status value.
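The counters can be derived directly from `harness-tasks.json`; a `jq` sketch (blocked computed as described above, treating exhausted-attempt failures as permanent):
```bash
# Sketch: compute the STATS counters from the structured state
jq -r '
  .tasks as $t
  | [$t[] | select(.status == "failed" and .attempts >= .max_attempts) | .id] as $dead
  | "tasks_total=\($t | length)"
    + " completed=\([$t[] | select(.status == "completed")] | length)"
    + " failed=\([$t[] | select(.status == "failed")] | length)"
    + " pending=\([$t[] | select(.status == "pending")] | length)"
    + " blocked=\([$t[] | select(.status == "pending" and (.depends_on | any(. as $d | $dead | index($d))))] | length)"
    + " attempts_total=\([$t[].attempts] | add // 0)"
    + " checkpoints=\([$t[].checkpoints | length] | add // 0)"
' harness-tasks.json
```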
## Init Command (`/harness init`)
1. Create `harness-progress.txt` with initialization entry
2. Create `harness-tasks.json` with empty task list and default `session_config`
3. Optionally create `harness-init.sh` template (chmod +x)
4. Ask user: add harness files to `.gitignore`?
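A sketch of the files `/harness init` creates (timestamps are generated at run time; the defaults mirror the example state above):
```bash
# Sketch of /harness init: seed the dual-file state in the project root
cat > harness-progress.txt <<EOF
[$(date -u +%FT%TZ)] [SESSION-1] INIT Harness initialized for project $(pwd)
EOF

cat > harness-tasks.json <<EOF
{
  "version": 2,
  "created": "$(date -u +%FT%TZ)",
  "session_config": { "max_tasks_per_session": 20, "max_sessions": 50 },
  "tasks": [],
  "session_count": 0,
  "last_session": null
}
EOF

# Optional idempotent health-check template
printf '#!/bin/bash\nset -e\necho "Environment health check complete"\n' > harness-init.sh
chmod +x harness-init.sh
# Finally, ask the user whether to add harness-* files to .gitignore
```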
## Status Command (`/harness status`)
Read `harness-tasks.json` and `harness-progress.txt`, then display:
1. Task summary: count by status (completed, failed, pending, blocked). `blocked` = pending tasks whose `depends_on` includes a permanently failed task (computed, not a stored status).
2. Per-task one-liner: `[status] task-id: title (attempts/max_attempts)`
3. Last 5 lines from `harness-progress.txt`
4. Session count and last session timestamp
Does NOT acquire the lock (read-only operation).
## Add Command (`/harness add`)
Append a new task to `harness-tasks.json` with auto-incremented id (`task-NNN`), status `pending`, default `max_attempts: 3`, empty `depends_on`, and no validation command. Prompt user for optional fields: `priority`, `depends_on`, `validation.command`, `timeout_seconds`. Requires lock acquisition (modifies JSON).
## Tool Dependencies
Requires: Bash, file read/write, git. All harness operations must be executed from the project root directory.
Does NOT require: specific MCP servers, programming languages, or test frameworks.