Compare commits


4 Commits

Author SHA1 Message Date
cexll
1dd7b23942 feat harness skill 2026-02-14 23:58:38 +08:00
cexll
664d82795a fixed do worktree use error 2026-02-14 23:58:20 +08:00
cexll
7cc7f50f46 docs: add bilingual README (EN + CN) reflecting current codebase
Rewrite README.md in English and add README_CN.md in Chinese with
language switcher links. Updated to cover all recent features:
worktree isolation, skill auto-detection, dynamic agents, per-backend
config, allowed/disallowed tools, stderr noise filtering, cross-platform
support, and modular internal/ project structure.

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-02-10 23:13:06 +08:00
cexll
ebd795c583 feat(install): per-module agent merge and documentation overhaul
- Add per-module agent merge/unmerge for ~/.codeagent/models.json with
  __module__ tracking, user-customization protection, and agent restore
  on uninstall when shared by multiple modules
- Add post-install verification (wrapper version, PATH, backend CLIs)
- Install CLAUDE.md by default, best-effort (never crashes main flow)
- Fix 7-phase → 5-phase references across all docs
- Document 9 skills, 11 commands, claudekit module, OpenCode backend
- Add templates/models.json.example with all agent presets (do + omo)
- Fix empty parent directory cleanup on copy_file uninstall
- Update USER_GUIDE.md with 13 CLI flags and OpenCode backend

Generated with SWE-Agent.ai

Co-Authored-By: SWE-Agent.ai <noreply@swe-agent.ai>
2026-02-10 15:26:33 +08:00
13 changed files with 1341 additions and 127 deletions

View File

@@ -2,6 +2,47 @@
All notable changes to this project will be documented in this file.
## [6.7.0] - 2026-02-10
### 🚀 Features
- feat(install): per-module agent merge/unmerge for ~/.codeagent/models.json
- feat(install): post-install verification (wrapper version, PATH, backend CLIs)
- feat(install): install CLAUDE.md by default
- feat(docs): document 9 skills, 11 commands, claudekit module, OpenCode backend
### 🐛 Bug Fixes
- fix(docs): correct 7-phase → 5-phase for do skill across all docs
- fix(install): best-effort default config install (never crashes main flow)
- fix(install): interactive quit no longer triggers post-install actions
- fix(install): empty parent directory cleanup on copy_file uninstall
- fix(install): agent restore on uninstall when shared by multiple modules
- fix(docs): remove non-existent on-stop hook references
### 📚 Documentation
- Updated USER_GUIDE.md with 13 CLI flags and OpenCode backend
- Updated README.md/README_CN.md with complete module and skill listings
- Added templates/models.json.example with all agent presets (do + omo)
## [6.6.0] - 2026-02-10
### 🚀 Features
- feat(skills): add per-task skill spec auto-detection and injection
- feat: add worktree support and refactor do skill to Python
### 🐛 Bug Fixes
- fix(test): set USERPROFILE on Windows for skills tests
- fix(do): reuse worktree across phases via DO_WORKTREE_DIR env var
- fix(release): auto-generate release notes from git history
### 📚 Documentation
- audit and fix documentation, installation scripts, and default configuration
## [6.0.0] - 2026-01-26
### 🚀 Features

View File

@@ -19,13 +19,30 @@ npx github:cexll/myclaude
| Module | Description | Documentation |
|--------|-------------|---------------|
| [do](skills/do/README.md) | **Recommended** - 7-phase feature development with codeagent orchestration | `/do` command |
| [do](skills/do/README.md) | **Recommended** - 5-phase feature development with codeagent orchestration | `/do` command |
| [omo](skills/omo/README.md) | Multi-agent orchestration with intelligent routing | `/omo` command |
| [bmad](agents/bmad/README.md) | BMAD agile workflow with 6 specialized agents | `/bmad-pilot` command |
| [requirements](agents/requirements/README.md) | Lightweight requirements-to-code pipeline | `/requirements-pilot` command |
| [essentials](agents/development-essentials/README.md) | Core development commands and utilities | `/code`, `/debug`, etc. |
| [essentials](agents/development-essentials/README.md) | 11 core dev commands: ask, bugfix, code, debug, docs, enhance-prompt, optimize, refactor, review, test, think | `/code`, `/debug`, etc. |
| [sparv](skills/sparv/README.md) | SPARV workflow (Specify→Plan→Act→Review→Vault) | `/sparv` command |
| course | Course development (combines dev + product-requirements + test-cases) | Composite module |
| claudekit | ClaudeKit: do skill + global hooks (pre-bash, inject-spec, log-prompt) | Composite module |
### Available Skills
Individual skills can be installed separately via `npx github:cexll/myclaude --list` (skills bundled in modules like do, omo, sparv are listed above):
| Skill | Description |
|-------|-------------|
| browser | Browser automation for web testing and data extraction |
| codeagent | codeagent-wrapper invocation for multi-backend AI code tasks |
| codex | Direct Codex backend execution |
| dev | Lightweight end-to-end development workflow |
| gemini | Direct Gemini backend execution |
| product-requirements | Interactive PRD generation with quality scoring |
| prototype-prompt-generator | Structured UI/UX prototype prompt generation |
| skill-install | Install skills from GitHub with security scanning |
| test-cases | Comprehensive test case generation from requirements |
## Installation
@@ -87,17 +104,20 @@ Edit `config.json` to enable/disable modules:
| Codex | `codex e`, `--json`, `-C`, `resume` |
| Claude | `--output-format stream-json`, `-r` |
| Gemini | `-o stream-json`, `-y`, `-r` |
| OpenCode | `opencode`, stdin mode |
## Directory Structure After Installation
```
~/.claude/
├── bin/codeagent-wrapper
├── CLAUDE.md
├── commands/
├── agents/
├── skills/
└── config.json
├── CLAUDE.md (installed by default)
├── commands/ (from essentials module)
├── agents/ (from bmad/requirements modules)
├── skills/ (from do/omo/sparv/course modules)
├── hooks/ (from claudekit module)
├── settings.json (auto-generated, hooks config)
└── installed_modules.json (auto-generated, tracks modules)
```
## Documentation

View File

@@ -16,13 +16,30 @@ npx github:cexll/myclaude
| 模块 | 描述 | 文档 |
|------|------|------|
| [do](skills/do/README.md) | **推荐** - 7 阶段功能开发 + codeagent 编排 | `/do` 命令 |
| [do](skills/do/README.md) | **推荐** - 5 阶段功能开发 + codeagent 编排 | `/do` 命令 |
| [omo](skills/omo/README.md) | 多智能体编排 + 智能路由 | `/omo` 命令 |
| [bmad](agents/bmad/README.md) | BMAD 敏捷工作流 + 6 个专业智能体 | `/bmad-pilot` 命令 |
| [requirements](agents/requirements/README.md) | 轻量级需求到代码流水线 | `/requirements-pilot` 命令 |
| [essentials](agents/development-essentials/README.md) | 核心开发命令和工具 | `/code`, `/debug` 等 |
| [essentials](agents/development-essentials/README.md) | 11 个核心开发命令:ask、bugfix、code、debug、docs、enhance-prompt、optimize、refactor、review、test、think | `/code`, `/debug` 等 |
| [sparv](skills/sparv/README.md) | SPARV 工作流 (Specify→Plan→Act→Review→Vault) | `/sparv` 命令 |
| course | 课程开发(组合 dev + product-requirements + test-cases) | 组合模块 |
| claudekit | ClaudeKit:do 技能 + 全局钩子(pre-bash、inject-spec、log-prompt)| 组合模块 |
### 可用技能
可通过 `npx github:cexll/myclaude --list` 单独安装技能(模块内置技能如 do、omo、sparv 见上表):
| 技能 | 描述 |
|------|------|
| browser | 浏览器自动化测试和数据提取 |
| codeagent | codeagent-wrapper 多后端 AI 代码任务调用 |
| codex | Codex 后端直接执行 |
| dev | 轻量级端到端开发工作流 |
| gemini | Gemini 后端直接执行 |
| product-requirements | 交互式 PRD 生成(含质量评分)|
| prototype-prompt-generator | 结构化 UI/UX 原型提示词生成 |
| skill-install | 从 GitHub 安装技能(含安全扫描)|
| test-cases | 从需求生成全面测试用例 |
## 核心架构
@@ -35,22 +52,20 @@ npx github:cexll/myclaude
### do 工作流(推荐)
7 阶段功能开发,通过 codeagent-wrapper 编排多个智能体。**大多数功能开发任务的首选工作流。**
5 阶段功能开发,通过 codeagent-wrapper 编排多个智能体。**大多数功能开发任务的首选工作流。**
```bash
/do "添加用户登录功能"
```
**7 阶段:**
**5 阶段:**
| 阶段 | 名称 | 目标 |
|------|------|------|
| 1 | Discovery | 理解需求 |
| 2 | Exploration | 映射代码库模式 |
| 3 | Clarification | 解决歧义(**强制**|
| 4 | Architecture | 设计实现方案 |
| 5 | Implementation | 构建功能(**需审批**|
| 6 | Review | 捕获缺陷 |
| 7 | Summary | 记录结果 |
| 1 | Understand | 并行探索理解需求和映射代码库 |
| 2 | Clarify | 解决阻塞性歧义(条件触发)|
| 3 | Design | 产出最小变更实现方案 |
| 4 | Implement + Review | 构建功能并审查 |
| 5 | Complete | 记录构建结果 |
**智能体:**
- `code-explorer` - 代码追踪、架构映射
@@ -162,6 +177,10 @@ npx github:cexll/myclaude
| `/optimize` | 性能优化 |
| `/refactor` | 代码重构 |
| `/docs` | 编写文档 |
| `/ask` | 提问和咨询 |
| `/bugfix` | Bug 修复 |
| `/enhance-prompt` | 提示词优化 |
| `/think` | 深度思考分析 |
---
@@ -218,6 +237,7 @@ npx github:cexll/myclaude --install-dir ~/.claude --force
| Codex | `codex e`, `--json`, `-C`, `resume` |
| Claude | `--output-format stream-json`, `-r` |
| Gemini | `-o stream-json`, `-y`, `-r` |
| OpenCode | `opencode`, stdin 模式 |
## 故障排查

View File

@@ -8,7 +8,7 @@ const os = require("os");
const path = require("path");
const readline = require("readline");
const zlib = require("zlib");
const { spawn } = require("child_process");
const { spawn, spawnSync } = require("child_process");
const REPO = { owner: "cexll", name: "myclaude" };
const API_HEADERS = {
@@ -931,6 +931,63 @@ async function uninstallModule(moduleName, config, repoRoot, installDir, dryRun)
deleteModuleStatus(installDir, moduleName);
}
async function installDefaultConfigs(installDir, repoRoot) {
try {
const claudeMdTarget = path.join(installDir, "CLAUDE.md");
const claudeMdSrc = path.join(repoRoot, "memorys", "CLAUDE.md");
if (!fs.existsSync(claudeMdTarget) && fs.existsSync(claudeMdSrc)) {
await fs.promises.copyFile(claudeMdSrc, claudeMdTarget);
process.stdout.write(`Installed CLAUDE.md to ${claudeMdTarget}\n`);
}
} catch (err) {
process.stderr.write(`Warning: could not install default configs: ${err.message}\n`);
}
}
function printPostInstallInfo(installDir) {
process.stdout.write("\n");
// Check codeagent-wrapper version
const wrapperBin = path.join(installDir, "bin", "codeagent-wrapper");
let wrapperVersion = null;
try {
const r = spawnSync(wrapperBin, ["--version"], { timeout: 5000 });
if (r.status === 0 && r.stdout) {
wrapperVersion = r.stdout.toString().trim();
}
} catch {}
// Check PATH
const binDir = path.join(installDir, "bin");
const envPath = process.env.PATH || "";
const pathOk = envPath.split(path.delimiter).some((p) => {
try { return fs.realpathSync(p) === fs.realpathSync(binDir); } catch { return p === binDir; }
});
// Check backend CLIs
const whichCmd = process.platform === "win32" ? "where" : "which";
const backends = ["codex", "claude", "gemini", "opencode"];
const detected = {};
for (const name of backends) {
try {
const r = spawnSync(whichCmd, [name], { timeout: 3000 });
detected[name] = r.status === 0;
} catch {
detected[name] = false;
}
}
process.stdout.write("Setup Complete!\n");
process.stdout.write(` codeagent-wrapper: ${wrapperVersion || "(not found)"} ${wrapperVersion ? "✓" : "✗"}\n`);
process.stdout.write(` PATH: ${binDir} ${pathOk ? "✓" : "✗ (not in PATH)"}\n`);
process.stdout.write("\nBackend CLIs detected:\n");
process.stdout.write(" " + backends.map((b) => `${b} ${detected[b] ? "✓" : "✗"}`).join(" | ") + "\n");
process.stdout.write("\nNext steps:\n");
process.stdout.write(" 1. Configure API keys in ~/.codeagent/models.json\n");
process.stdout.write(' 2. Try: /do "your first task"\n');
process.stdout.write("\n");
}
async function installSelected(picks, tag, config, installDir, force, dryRun) {
const needRepo = picks.some((p) => p.kind !== "wrapper");
const needWrapper = picks.some((p) => p.kind === "wrapper");
@@ -985,6 +1042,9 @@ async function installSelected(picks, tag, config, installDir, force, dryRun) {
);
}
}
await installDefaultConfigs(installDir, repoRoot);
printPostInstallInfo(installDir);
} finally {
await rmTree(tmp);
}

View File

@@ -1,97 +1,158 @@
# codeagent-wrapper
`codeagent-wrapper` 是一个用 Go 编写的“多后端 AI 代码代理”命令行包装器:用统一的 CLI 入口封装不同的 AI 工具后端Codex / Claude / Gemini / Opencode并提供一致的参数、配置与会话恢复体验。
[English](README.md) | [中文](README_CN.md)
入口:`cmd/codeagent/main.go`(生成二进制名:`codeagent`)和 `cmd/codeagent-wrapper/main.go`(生成二进制名:`codeagent-wrapper`)。两者行为一致。
A multi-backend AI code agent CLI wrapper written in Go. Provides a unified CLI entry point wrapping different AI tool backends (Codex / Claude / Gemini / OpenCode) with consistent flags, configuration, skill injection, and session resumption.
## 功能特性
Entry point: `cmd/codeagent-wrapper/main.go` (binary: `codeagent-wrapper`).
- 多后端支持:`codex` / `claude` / `gemini` / `opencode`
- 统一命令行:`codeagent [flags] <task>` / `codeagent resume <session_id> <task> [workdir]`
- 自动 stdin遇到换行/特殊字符/超长任务自动走 stdin避免 shell quoting 地狱;也可显式使用 `-`
- 配置合并:支持配置文件与 `CODEAGENT_*` 环境变量viper
- Agent 预设:从 `~/.codeagent/models.json` 读取 backend/model/prompt 等预设
- 并行执行:`--parallel` 从 stdin 读取多任务配置,支持依赖拓扑并发执行
- 日志清理:`codeagent cleanup` 清理旧日志(日志写入系统临时目录)
## Features
## 安装
- **Multi-backend support**: `codex` / `claude` / `gemini` / `opencode`
- **Unified CLI**: `codeagent-wrapper [flags] <task>` / `codeagent-wrapper resume <session_id> <task> [workdir]`
- **Auto stdin**: Automatically pipes the task via stdin when it contains newlines or special characters, or is too long for the command line; explicit `-` is also supported
- **Config merging**: Config files + `CODEAGENT_*` environment variables (viper)
- **Agent presets**: Read backend/model/prompt/reasoning/yolo/allowed_tools from `~/.codeagent/models.json`
- **Dynamic agents**: Place a `{name}.md` prompt file in `~/.codeagent/agents/` to use as an agent
- **Skill auto-injection**: `--skills` for manual specification, or auto-detect from project tech stack (Go/Rust/Python/Node.js/Vue)
- **Git worktree isolation**: `--worktree` executes tasks in an isolated git worktree with auto-generated task_id and branch
- **Parallel execution**: `--parallel` reads multi-task config from stdin with dependency-aware topological concurrent execution and structured summary reports
- **Backend config**: `backends` section in `models.json` supports per-backend `base_url` / `api_key` injection
- **Claude tool control**: `allowed_tools` / `disallowed_tools` to restrict available tools for Claude backend
- **Stderr noise filtering**: Automatically filters noisy stderr output from Gemini and Codex backends
- **Log cleanup**: `codeagent-wrapper cleanup` cleans old logs (logs written to system temp directory)
- **Cross-platform**: macOS / Linux / Windows
要求Go 1.21+。
## Installation
在仓库根目录执行:
### Recommended (interactive installer)
```bash
go install ./cmd/codeagent
go install ./cmd/codeagent-wrapper
npx github:cexll/myclaude
```
安装后确认:
Select the `codeagent-wrapper` module to install.
### Manual build
Requires: Go 1.21+.
```bash
codeagent version
codeagent-wrapper version
# Build from source
make build
# Or install to $GOPATH/bin
make install
```
## 使用示例
最简单用法(默认后端:`codex`
Verify installation:
```bash
codeagent "分析 internal/app/cli.go 的入口逻辑,给出改进建议"
codeagent-wrapper --version
```
指定后端:
## Usage
Basic usage (default backend: `codex`):
```bash
codeagent --backend claude "解释 internal/executor/parallel_config.go 的并行配置格式"
codeagent-wrapper "analyze the entry logic of internal/app/cli.go"
```
指定工作目录(第 2 个位置参数):
Specify backend:
```bash
codeagent "在当前 repo 下搜索潜在数据竞争" .
codeagent-wrapper --backend claude "explain the parallel config format in internal/executor/parallel_config.go"
```
显式从 stdin 读取 task使用 `-`
Specify working directory (2nd positional argument):
```bash
cat task.txt | codeagent -
codeagent-wrapper "search for potential data races in this repo" .
```
恢复会话:
Explicit stdin (using `-`):
```bash
codeagent resume <session_id> "继续上次任务"
cat task.txt | codeagent-wrapper -
```
并行模式(从 stdin 读取任务配置;禁止位置参数):
HEREDOC (recommended for multi-line tasks):
```bash
codeagent --parallel <<'EOF'
codeagent-wrapper --backend claude - <<'EOF'
Implement user authentication:
- JWT tokens
- bcrypt password hashing
- Session management
EOF
```
Resume session:
```bash
codeagent-wrapper resume <session_id> "continue the previous task"
```
Execute in isolated git worktree:
```bash
codeagent-wrapper --worktree "refactor the auth module"
```
Manual skill injection:
```bash
codeagent-wrapper --skills golang-base-practices "optimize database queries"
```
Parallel mode (task config from stdin):
```bash
codeagent-wrapper --parallel <<'EOF'
---TASK---
id: t1
workdir: .
backend: codex
---CONTENT---
列出本项目的主要模块以及它们的职责。
List the main modules and their responsibilities.
---TASK---
id: t2
dependencies: t1
backend: claude
---CONTENT---
基于 t1 的结论,提出重构风险点与建议。
Based on t1's findings, identify refactoring risks and suggestions.
EOF
```
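
The blocks above are plain `key: value` headers (id, workdir, backend, dependencies) separated from free-form task content by `---CONTENT---`. As a rough, non-normative illustration of the format (the wrapper's real parser is Go and may handle edge cases differently), a Python sketch that splits such a config into task dicts:

```python
def parse_parallel_config(text: str) -> list[dict]:
    """Split a ---TASK--- / ---CONTENT--- config into task dicts (illustrative only)."""
    tasks = []
    for block in text.split("---TASK---"):
        block = block.strip()
        if not block:
            continue
        header, _, content = block.partition("---CONTENT---")
        task = {"content": content.strip()}
        for line in header.strip().splitlines():
            key, _, value = line.partition(":")
            if key.strip():
                task[key.strip()] = value.strip()
        # "dependencies" is a comma-separated list of task ids
        if "dependencies" in task:
            task["dependencies"] = [
                d.strip() for d in task["dependencies"].split(",") if d.strip()
            ]
        tasks.append(task)
    return tasks
```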
## 配置说明
## CLI Flags
### 配置文件
| Flag | Description |
|------|-------------|
| `--backend <name>` | Backend selection (codex/claude/gemini/opencode) |
| `--model <name>` | Model override |
| `--agent <name>` | Agent preset name (from models.json or ~/.codeagent/agents/) |
| `--prompt-file <path>` | Read prompt from file |
| `--skills <names>` | Comma-separated skill names for spec injection |
| `--reasoning-effort <level>` | Reasoning effort (backend-specific) |
| `--skip-permissions` | Skip permission prompts |
| `--dangerously-skip-permissions` | Alias for `--skip-permissions` |
| `--worktree` | Execute in a new git worktree (auto-generates task_id) |
| `--parallel` | Parallel task mode (config from stdin) |
| `--full-output` | Full output in parallel mode (default: summary only) |
| `--config <path>` | Config file path (default: `$HOME/.codeagent/config.*`) |
| `--version`, `-v` | Print version |
| `--cleanup` | Clean up old logs |
默认查找路径(当 `--config` 为空时):
## Configuration
### Config File
Default search path (when `--config` is empty):
- `$HOME/.codeagent/config.(yaml|yml|json|toml|...)`
示例(YAML
Example (YAML):
```yaml
backend: codex
@@ -99,59 +160,113 @@ model: gpt-4.1
skip-permissions: false
```
也可以通过 `--config /path/to/config.yaml` 显式指定。
Can also be specified explicitly via `--config /path/to/config.yaml`.
### 环境变量(`CODEAGENT_*`
### Environment Variables (`CODEAGENT_*`)
通过 viper 读取并自动映射 `-` `_`,常用项:
Read via viper with automatic `-` to `_` mapping:
- `CODEAGENT_BACKEND``codex|claude|gemini|opencode`
- `CODEAGENT_MODEL`
- `CODEAGENT_AGENT`
- `CODEAGENT_PROMPT_FILE`
- `CODEAGENT_REASONING_EFFORT`
- `CODEAGENT_SKIP_PERMISSIONS`
- `CODEAGENT_FULL_OUTPUT`(并行模式 legacy 输出)
- `CODEAGENT_MAX_PARALLEL_WORKERS`0 表示不限制,上限 100
| Variable | Description |
|----------|-------------|
| `CODEAGENT_BACKEND` | Backend name (codex/claude/gemini/opencode) |
| `CODEAGENT_MODEL` | Model name |
| `CODEAGENT_AGENT` | Agent preset name |
| `CODEAGENT_PROMPT_FILE` | Prompt file path |
| `CODEAGENT_REASONING_EFFORT` | Reasoning effort |
| `CODEAGENT_SKIP_PERMISSIONS` | Skip permission prompts (default true; set `false` to disable) |
| `CODEAGENT_FULL_OUTPUT` | Full output in parallel mode |
| `CODEAGENT_MAX_PARALLEL_WORKERS` | Parallel worker count (0=unlimited, max 100) |
| `CODEAGENT_TMPDIR` | Custom temp directory (for macOS permission issues) |
| `CODEX_TIMEOUT` | Timeout in ms (default 7200000 = 2 hours) |
| `CODEX_BYPASS_SANDBOX` | Codex sandbox bypass (default true; set `false` to disable) |
| `DO_WORKTREE_DIR` | Reuse existing worktree directory (set by /do workflow) |
### Agent 预设(`~/.codeagent/models.json`
可在 `~/.codeagent/models.json` 定义 agent → backend/model/prompt 等映射,用 `--agent <name>` 选择:
### Agent Presets (`~/.codeagent/models.json`)
```json
{
"default_backend": "opencode",
"default_model": "opencode/grok-code",
"default_backend": "codex",
"default_model": "gpt-4.1",
"backends": {
"codex": { "api_key": "..." },
"claude": { "base_url": "http://localhost:23001", "api_key": "..." }
},
"agents": {
"develop": {
"backend": "codex",
"model": "gpt-4.1",
"prompt_file": "~/.codeagent/prompts/develop.md",
"description": "Code development"
"reasoning": "high",
"yolo": true,
"allowed_tools": ["Read", "Write", "Bash"],
"disallowed_tools": ["WebFetch"]
}
}
}
```
## 支持的后端
Use `--agent <name>` to select a preset. Agents inherit `base_url` / `api_key` from the corresponding `backends` entry.
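
The inheritance rule is only stated, not shown; a hypothetical Python sketch of how a preset might be resolved against the `models.json` shape above (field precedence in the actual Go implementation may differ):

```python
def resolve_agent(models: dict, name: str) -> dict:
    """Illustrative resolution: agent preset + defaults + backend credentials."""
    agent = dict(models.get("agents", {}).get(name, {}))
    agent.setdefault("backend", models.get("default_backend", "codex"))
    agent.setdefault("model", models.get("default_model", ""))
    # Inherit base_url / api_key from the matching backends entry,
    # without overriding values set on the agent itself.
    creds = models.get("backends", {}).get(agent["backend"], {})
    for key in ("base_url", "api_key"):
        if key in creds:
            agent.setdefault(key, creds[key])
    return agent
```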
该项目本身不内置模型能力,依赖你本机安装并可在 `PATH` 中找到对应 CLI
### Dynamic Agents
- `codex`:执行 `codex e ...`(默认会添加 `--dangerously-bypass-approvals-and-sandbox`;如需关闭请设置 `CODEX_BYPASS_SANDBOX=false`
- `claude`:执行 `claude -p ... --output-format stream-json`(默认会跳过权限提示;如需开启请设置 `CODEAGENT_SKIP_PERMISSIONS=false`
- `gemini`:执行 `gemini ... -o stream-json`(可从 `~/.gemini/.env` 加载环境变量)
- `opencode`:执行 `opencode run --format json`
Place a `{name}.md` file in `~/.codeagent/agents/` to use it via `--agent {name}`. The Markdown file is read as the prompt, using `default_backend` and `default_model`.
## 开发
### Skill Auto-Detection
```bash
make build
make test
make lint
make clean
When no skills are specified via `--skills`, codeagent-wrapper auto-detects the tech stack from files in the working directory:
| Detected Files | Injected Skills |
|----------------|-----------------|
| `go.mod` / `go.sum` | `golang-base-practices` |
| `Cargo.toml` | `rust-best-practices` |
| `pyproject.toml` / `setup.py` / `requirements.txt` | `python-best-practices` |
| `package.json` | `vercel-react-best-practices`, `frontend-design` |
| `vue.config.js` / `vite.config.ts` / `nuxt.config.ts` | `vue-web-app` |
Skill specs are read from `~/.claude/skills/{name}/SKILL.md`, subject to a 16000-character budget.
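
Behaviorally, the detection is a marker-file lookup followed by a budgeted concatenation of SKILL.md specs. A minimal Python sketch mirroring the table above (the wrapper implements this in Go; ordering and budget handling may differ):

```python
from pathlib import Path

# Marker files -> skills, mirroring the detection table above.
SKILL_MARKERS = {
    ("go.mod", "go.sum"): ["golang-base-practices"],
    ("Cargo.toml",): ["rust-best-practices"],
    ("pyproject.toml", "setup.py", "requirements.txt"): ["python-best-practices"],
    ("package.json",): ["vercel-react-best-practices", "frontend-design"],
    ("vue.config.js", "vite.config.ts", "nuxt.config.ts"): ["vue-web-app"],
}

SKILL_BUDGET = 16000  # character budget for injected specs


def detect_skills(workdir: str) -> list[str]:
    """Return skill names whose marker files exist in the working directory."""
    root = Path(workdir)
    skills: list[str] = []
    for markers, names in SKILL_MARKERS.items():
        if any((root / m).exists() for m in markers):
            skills.extend(n for n in names if n not in skills)
    return skills


def load_skill_specs(skills: list[str], budget: int = SKILL_BUDGET) -> str:
    """Concatenate ~/.claude/skills/{name}/SKILL.md until the budget is spent."""
    parts, used = [], 0
    for name in skills:
        spec = Path.home() / ".claude" / "skills" / name / "SKILL.md"
        if not spec.exists():
            continue
        text = spec.read_text(encoding="utf-8")
        if used + len(text) > budget:
            break
        parts.append(text)
        used += len(text)
    return "\n\n".join(parts)
```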
## Supported Backends
This project does not embed model capabilities. It requires the corresponding CLI tools installed and available in `PATH`:
| Backend | Command | Notes |
|---------|---------|-------|
| `codex` | `codex e ...` | Adds `--dangerously-bypass-approvals-and-sandbox` by default; set `CODEX_BYPASS_SANDBOX=false` to disable |
| `claude` | `claude -p ... --output-format stream-json` | Skips permissions and disables setting-sources to prevent recursion; set `CODEAGENT_SKIP_PERMISSIONS=false` to enable prompts; auto-reads env and model from `~/.claude/settings.json` |
| `gemini` | `gemini -o stream-json -y ...` | Auto-loads env vars from `~/.gemini/.env` (GEMINI_API_KEY, GEMINI_MODEL, etc.) |
| `opencode` | `opencode run --format json` | — |
## Project Structure
```
cmd/codeagent-wrapper/main.go # CLI entry point
internal/
app/ # CLI command definitions, argument parsing, main orchestration
backend/ # Backend abstraction and implementations (codex/claude/gemini/opencode)
config/ # Config loading, agent resolution, viper bindings
executor/ # Task execution engine: single/parallel/worktree/skill injection
logger/ # Structured logging system
parser/ # JSON stream parser
utils/ # Common utility functions
worktree/ # Git worktree management
```
## 故障排查
## Development
- macOS 下如果看到临时目录相关的 `permission denied`(例如临时可执行文件无法在 `/var/folders/.../T` 执行),可设置一个可执行的临时目录:`CODEAGENT_TMPDIR=$HOME/.codeagent/tmp`
- `claude` 后端的 `base_url/api_key`(来自 `~/.codeagent/models.json`)会注入到子进程环境变量:`ANTHROPIC_BASE_URL` / `ANTHROPIC_API_KEY`。若 `base_url` 指向本地代理(如 `localhost:23001`),请确认代理进程在运行。
```bash
make build # Build binary
make test # Run tests
make lint # golangci-lint + staticcheck
make clean # Clean build artifacts
make install # Install to $GOPATH/bin
```
CI uses GitHub Actions with Go 1.21 / 1.22 matrix testing.
## Troubleshooting
- On macOS, if you see `permission denied` related to temp directories, set: `CODEAGENT_TMPDIR=$HOME/.codeagent/tmp`
- `claude` backend's `base_url` / `api_key` (from `~/.codeagent/models.json` `backends.claude`) are injected as `ANTHROPIC_BASE_URL` / `ANTHROPIC_API_KEY` env vars
- `gemini` backend's API key is loaded from `~/.gemini/.env`, injected as `GEMINI_API_KEY` with `GEMINI_API_KEY_AUTH_MECHANISM=bearer` auto-set
- Exit codes: 127 = backend not found, 124 = timeout, 130 = interrupted
- Parallel mode outputs structured summary by default; use `--full-output` for complete output when debugging

View File

@@ -0,0 +1,272 @@
# codeagent-wrapper
[English](README.md) | [中文](README_CN.md)
`codeagent-wrapper` 是一个用 Go 编写的多后端 AI 代码代理命令行包装器:用统一的 CLI 入口封装不同的 AI 工具后端(Codex / Claude / Gemini / OpenCode),并提供一致的参数、配置、技能注入与会话恢复体验。
入口:`cmd/codeagent-wrapper/main.go`(生成二进制名:`codeagent-wrapper`)。
## 功能特性
- **多后端支持**:`codex` / `claude` / `gemini` / `opencode`
- **统一命令行**:`codeagent-wrapper [flags] <task>` / `codeagent-wrapper resume <session_id> <task> [workdir]`
- **自动 stdin**:遇到换行/特殊字符/超长任务自动走 stdin,避免 shell quoting 问题;也可显式使用 `-`
- **配置合并**:支持配置文件与 `CODEAGENT_*` 环境变量(viper)
- **Agent 预设**:从 `~/.codeagent/models.json` 读取 backend/model/prompt/reasoning/yolo/allowed_tools 等预设
- **动态 Agent**:在 `~/.codeagent/agents/{name}.md` 放置 prompt 文件即可作为 agent 使用
- **技能自动注入**:`--skills` 手动指定,或根据项目技术栈自动检测(Go/Rust/Python/Node.js/Vue)并注入对应技能规范
- **Git Worktree 隔离**:`--worktree` 在独立 git worktree 中执行任务,自动生成 task_id 和分支
- **并行执行**:`--parallel` 从 stdin 读取多任务配置,支持依赖拓扑并发执行,带结构化摘要报告
- **后端配置**:`models.json` 的 `backends` 节支持 per-backend 的 `base_url` / `api_key` 注入
- **Claude 工具控制**:`allowed_tools` / `disallowed_tools` 限制 Claude 后端可用工具
- **Stderr 降噪**:自动过滤 Gemini 和 Codex 后端的噪声 stderr 输出
- **日志清理**:`codeagent-wrapper cleanup` 清理旧日志(日志写入系统临时目录)
- **跨平台**:支持 macOS / Linux / Windows
## 安装
### 推荐方式(交互式安装器)
```bash
npx github:cexll/myclaude
```
选择 `codeagent-wrapper` 模块进行安装。
### 手动构建
要求:Go 1.21+。
```bash
# 从源码构建
make build
# 或直接安装到 $GOPATH/bin
make install
```
安装后确认:
```bash
codeagent-wrapper --version
```
## 使用示例
最简单用法(默认后端:`codex`):
```bash
codeagent-wrapper "分析 internal/app/cli.go 的入口逻辑,给出改进建议"
```
指定后端:
```bash
codeagent-wrapper --backend claude "解释 internal/executor/parallel_config.go 的并行配置格式"
```
指定工作目录(第 2 个位置参数):
```bash
codeagent-wrapper "在当前 repo 下搜索潜在数据竞争" .
```
显式从 stdin 读取 task(使用 `-`):
```bash
cat task.txt | codeagent-wrapper -
```
使用 HEREDOC(推荐用于多行任务):
```bash
codeagent-wrapper --backend claude - <<'EOF'
实现用户认证系统:
- JWT 令牌
- bcrypt 密码哈希
- 会话管理
EOF
```
恢复会话:
```bash
codeagent-wrapper resume <session_id> "继续上次任务"
```
在 git worktree 中隔离执行:
```bash
codeagent-wrapper --worktree "重构认证模块"
```
手动指定技能注入:
```bash
codeagent-wrapper --skills golang-base-practices "优化数据库查询"
```
并行模式(从 stdin 读取任务配置):
```bash
codeagent-wrapper --parallel <<'EOF'
---TASK---
id: t1
workdir: .
backend: codex
---CONTENT---
列出本项目的主要模块以及它们的职责。
---TASK---
id: t2
dependencies: t1
backend: claude
---CONTENT---
基于 t1 的结论,提出重构风险点与建议。
EOF
```
## CLI 参数
| 参数 | 说明 |
|------|------|
| `--backend <name>` | 后端选择(codex/claude/gemini/opencode) |
| `--model <name>` | 覆盖模型 |
| `--agent <name>` | Agent 预设名(来自 models.json 或 ~/.codeagent/agents/) |
| `--prompt-file <path>` | 从文件读取 prompt |
| `--skills <names>` | 逗号分隔的技能名,注入对应规范 |
| `--reasoning-effort <level>` | 推理力度(后端相关) |
| `--skip-permissions` | 跳过权限提示 |
| `--dangerously-skip-permissions` | `--skip-permissions` 的别名 |
| `--worktree` | 在新 git worktree 中执行(自动生成 task_id) |
| `--parallel` | 并行任务模式(从 stdin 读取配置) |
| `--full-output` | 并行模式下输出完整消息(默认仅输出摘要) |
| `--config <path>` | 配置文件路径(默认:`$HOME/.codeagent/config.*`) |
| `--version`, `-v` | 打印版本号 |
| `--cleanup` | 清理旧日志 |
## 配置说明
### 配置文件
默认查找路径(当 `--config` 为空时):
- `$HOME/.codeagent/config.(yaml|yml|json|toml|...)`
示例(YAML):
```yaml
backend: codex
model: gpt-4.1
skip-permissions: false
```
也可以通过 `--config /path/to/config.yaml` 显式指定。
### 环境变量(`CODEAGENT_*`)
通过 viper 读取并自动映射 `-` → `_`,常用项:
| 变量 | 说明 |
|------|------|
| `CODEAGENT_BACKEND` | 后端名(codex/claude/gemini/opencode) |
| `CODEAGENT_MODEL` | 模型名 |
| `CODEAGENT_AGENT` | Agent 预设名 |
| `CODEAGENT_PROMPT_FILE` | Prompt 文件路径 |
| `CODEAGENT_REASONING_EFFORT` | 推理力度 |
| `CODEAGENT_SKIP_PERMISSIONS` | 跳过权限提示(默认 true,设 `false` 关闭) |
| `CODEAGENT_FULL_OUTPUT` | 并行模式完整输出 |
| `CODEAGENT_MAX_PARALLEL_WORKERS` | 并行 worker 数(0=不限制,上限 100) |
| `CODEAGENT_TMPDIR` | 自定义临时目录(macOS 权限问题时使用) |
| `CODEX_TIMEOUT` | 超时(毫秒,默认 7200000 即 2 小时) |
| `CODEX_BYPASS_SANDBOX` | Codex sandbox bypass(默认 true,设 `false` 关闭) |
| `DO_WORKTREE_DIR` | 复用已有 worktree 目录(由 /do 工作流设置) |
### Agent 预设(`~/.codeagent/models.json`)
```json
{
"default_backend": "codex",
"default_model": "gpt-4.1",
"backends": {
"codex": { "api_key": "..." },
"claude": { "base_url": "http://localhost:23001", "api_key": "..." }
},
"agents": {
"develop": {
"backend": "codex",
"model": "gpt-4.1",
"prompt_file": "~/.codeagent/prompts/develop.md",
"reasoning": "high",
"yolo": true,
"allowed_tools": ["Read", "Write", "Bash"],
"disallowed_tools": ["WebFetch"]
}
}
}
```
`--agent <name>` 选择预设agent 会继承 `backends` 下对应后端的 `base_url` / `api_key`
### 动态 Agent
`~/.codeagent/agents/` 目录放置 `{name}.md` 文件,即可通过 `--agent {name}` 使用,自动读取该 Markdown 作为 prompt使用 `default_backend``default_model`
### 技能自动检测
当未通过 `--skills` 显式指定技能时,codeagent-wrapper 会根据工作目录中的文件自动检测技术栈:
| 检测文件 | 注入技能 |
|----------|----------|
| `go.mod` / `go.sum` | `golang-base-practices` |
| `Cargo.toml` | `rust-best-practices` |
| `pyproject.toml` / `setup.py` / `requirements.txt` | `python-best-practices` |
| `package.json` | `vercel-react-best-practices`, `frontend-design` |
| `vue.config.js` / `vite.config.ts` / `nuxt.config.ts` | `vue-web-app` |
技能规范从 `~/.claude/skills/{name}/SKILL.md` 读取,受 16000 字符预算限制。
## 支持的后端
该项目本身不内置模型能力,依赖本机安装并可在 `PATH` 中找到对应 CLI:
| 后端 | 执行命令 | 说明 |
|------|----------|------|
| `codex` | `codex e ...` | 默认添加 `--dangerously-bypass-approvals-and-sandbox`;设 `CODEX_BYPASS_SANDBOX=false` 关闭 |
| `claude` | `claude -p ... --output-format stream-json` | 默认跳过权限并禁用 setting-sources 防止递归;设 `CODEAGENT_SKIP_PERMISSIONS=false` 开启权限;自动读取 `~/.claude/settings.json` 中的 env 和 model |
| `gemini` | `gemini -o stream-json -y ...` | 自动从 `~/.gemini/.env` 加载环境变量(GEMINI_API_KEY, GEMINI_MODEL 等) |
| `opencode` | `opencode run --format json` | — |
## 项目结构
```
cmd/codeagent-wrapper/main.go # CLI 入口
internal/
app/ # CLI 命令定义、参数解析、主逻辑编排
backend/ # 后端抽象与实现codex/claude/gemini/opencode
config/ # 配置加载、agent 解析、viper 绑定
executor/ # 任务执行引擎:单任务/并行/worktree/技能注入
logger/ # 结构化日志系统
parser/ # JSON stream 解析器
utils/ # 通用工具函数
worktree/ # Git worktree 管理
```
## 开发
```bash
make build # 构建
make test # 运行测试
make lint # golangci-lint + staticcheck
make clean # 清理构建产物
make install # 安装到 $GOPATH/bin
```
CI 使用 GitHub Actions,Go 1.21 / 1.22 矩阵测试。
## 故障排查
- macOS 下如果看到临时目录相关的 `permission denied`,可设置:`CODEAGENT_TMPDIR=$HOME/.codeagent/tmp`
- `claude` 后端的 `base_url` / `api_key`(来自 `~/.codeagent/models.json` 的 `backends.claude`)会注入到子进程环境变量 `ANTHROPIC_BASE_URL` / `ANTHROPIC_API_KEY`
- `gemini` 后端的 API key 从 `~/.gemini/.env` 加载,注入 `GEMINI_API_KEY` 并自动设置 `GEMINI_API_KEY_AUTH_MECHANISM=bearer`
- 后端命令未找到时返回退出码 127超时返回 124中断返回 130
- 并行模式默认输出结构化摘要,使用 `--full-output` 查看完整输出以便调试

View File

@@ -1,11 +1,11 @@
# Codeagent-Wrapper User Guide
Multi-backend AI code execution wrapper supporting Codex, Claude, and Gemini.
Multi-backend AI code execution wrapper supporting Codex, Claude, Gemini, and OpenCode.
## Overview
`codeagent-wrapper` is a Go-based CLI tool that provides a unified interface to multiple AI coding backends. It handles:
- Multi-backend execution (Codex, Claude, Gemini)
- Multi-backend execution (Codex, Claude, Gemini, OpenCode)
- JSON stream parsing and output formatting
- Session management and resumption
- Parallel task execution with dependency resolution
@@ -42,6 +42,24 @@ Implement user authentication:
EOF
```
### CLI Flags
| Flag | Description |
|------|-------------|
| `--backend <name>` | Select backend (codex/claude/gemini/opencode) |
| `--model <name>` | Override model for this invocation |
| `--agent <name>` | Agent preset name (from ~/.codeagent/models.json) |
| `--config <path>` | Path to models.json config file |
| `--cleanup` | Clean up log files on startup |
| `--worktree` | Execute in a new git worktree (auto-generates task ID) |
| `--skills <names>` | Comma-separated skill names for spec injection |
| `--prompt-file <path>` | Read prompt from file |
| `--reasoning-effort <level>` | Set reasoning effort (low/medium/high) |
| `--skip-permissions` | Skip permission prompts |
| `--parallel` | Enable parallel task execution |
| `--full-output` | Show full output in parallel mode |
| `--version`, `-v` | Print version and exit |
### Backend Selection
| Backend | Command | Best For |
@@ -49,6 +67,7 @@ EOF
| **Codex** | `--backend codex` | General code tasks (default) |
| **Claude** | `--backend claude` | Complex reasoning, architecture |
| **Gemini** | `--backend gemini` | Fast iteration, prototyping |
| **OpenCode** | `--backend opencode` | Open-source alternative |
## Core Features

View File

@@ -39,6 +39,36 @@
"omo": {
"enabled": false,
"description": "OmO multi-agent orchestration with Sisyphus coordinator",
"agents": {
"oracle": {
"backend": "claude",
"model": "claude-opus-4-5-20251101",
"yolo": true
},
"librarian": {
"backend": "claude",
"model": "claude-sonnet-4-5-20250929",
"yolo": true
},
"explore": {
"backend": "opencode",
"model": "opencode/grok-code"
},
"develop": {
"backend": "codex",
"model": "gpt-5.2",
"reasoning": "xhigh",
"yolo": true
},
"frontend-ui-ux-engineer": {
"backend": "gemini",
"model": "gemini-3-pro-preview"
},
"document-writer": {
"backend": "gemini",
"model": "gemini-3-flash-preview"
}
},
"operations": [
{
"type": "copy_file",
@@ -98,7 +128,27 @@
},
"do": {
"enabled": true,
"description": "7-phase feature development workflow with codeagent orchestration",
"description": "5-phase feature development workflow with codeagent orchestration",
"agents": {
"develop": {
"backend": "codex",
"model": "gpt-4.1",
"reasoning": "high",
"yolo": true
},
"code-explorer": {
"backend": "opencode",
"model": ""
},
"code-architect": {
"backend": "claude",
"model": ""
},
"code-reviewer": {
"backend": "claude",
"model": ""
}
},
"operations": [
{
"type": "copy_dir",
@@ -148,7 +198,7 @@
},
"claudekit": {
"enabled": false,
"description": "ClaudeKit workflow: skills/do + global hooks (pre-bash, inject-spec, log-prompt, on-stop)",
"description": "ClaudeKit workflow: skills/do + global hooks (pre-bash, inject-spec, log-prompt)",
"operations": [
{
"type": "copy_dir",
@@ -160,7 +210,7 @@
"type": "copy_dir",
"source": "hooks",
"target": "hooks",
"description": "Install global hooks (pre-bash, inject-spec, log-prompt, on-stop)"
"description": "Install global hooks (pre-bash, inject-spec, log-prompt)"
}
]
}

View File

@@ -244,6 +244,112 @@ def unmerge_hooks_from_settings(module_name: str, ctx: Dict[str, Any]) -> None:
write_log({"level": "INFO", "message": f"Removed hooks for module: {module_name}"}, ctx)
def merge_agents_to_models(module_name: str, agents: Dict[str, Any], ctx: Dict[str, Any]) -> None:
"""Merge module agent configs into ~/.codeagent/models.json."""
models_path = Path.home() / ".codeagent" / "models.json"
models_path.parent.mkdir(parents=True, exist_ok=True)
if models_path.exists():
with models_path.open("r", encoding="utf-8") as fh:
models = json.load(fh)
else:
template = ctx["config_dir"] / "templates" / "models.json.example"
if template.exists():
with template.open("r", encoding="utf-8") as fh:
models = json.load(fh)
# Clear template agents so modules populate with __module__ tags
models["agents"] = {}
else:
models = {
"default_backend": "codex",
"default_model": "gpt-4.1",
"backends": {},
"agents": {},
}
models.setdefault("agents", {})
for agent_name, agent_cfg in agents.items():
entry = dict(agent_cfg)
entry["__module__"] = module_name
existing = models["agents"].get(agent_name, {})
if not existing or existing.get("__module__"):
models["agents"][agent_name] = entry
with models_path.open("w", encoding="utf-8") as fh:
json.dump(models, fh, indent=2, ensure_ascii=False)
write_log(
{
"level": "INFO",
"message": (
f"Merged {len(agents)} agent(s) from {module_name} "
"into models.json"
),
},
ctx,
)
def unmerge_agents_from_models(module_name: str, ctx: Dict[str, Any]) -> None:
"""Remove module's agent configs from ~/.codeagent/models.json.
If another installed module also declares a removed agent, restore that
module's version so shared agents (e.g. 'develop') are not lost.
"""
models_path = Path.home() / ".codeagent" / "models.json"
if not models_path.exists():
return
with models_path.open("r", encoding="utf-8") as fh:
models = json.load(fh)
agents = models.get("agents", {})
to_remove = [
name
for name, cfg in agents.items()
if isinstance(cfg, dict) and cfg.get("__module__") == module_name
]
if not to_remove:
return
# Load config to find other modules that declare the same agents
config_path = ctx["config_dir"] / "config.json"
config = _load_json(config_path) if config_path.exists() else {}
installed = load_installed_status(ctx).get("modules", {})
for name in to_remove:
del agents[name]
# Check if another installed module also declares this agent
for other_mod, other_status in installed.items():
if other_mod == module_name:
continue
if other_status.get("status") != "success":
continue
other_cfg = config.get("modules", {}).get(other_mod, {})
other_agents = other_cfg.get("agents", {})
if name in other_agents:
restored = dict(other_agents[name])
restored["__module__"] = other_mod
agents[name] = restored
break
with models_path.open("w", encoding="utf-8") as fh:
json.dump(models, fh, indent=2, ensure_ascii=False)
write_log(
{
"level": "INFO",
"message": (
f"Removed {len(to_remove)} agent(s) from {module_name} "
"in models.json"
),
},
ctx,
)
def _hooks_equal(hook1: Dict[str, Any], hook2: Dict[str, Any]) -> bool:
"""Compare two hooks ignoring the __module__ marker."""
h1 = {k: v for k, v in hook1.items() if k != "__module__"}
@@ -545,6 +651,14 @@ def uninstall_module(name: str, cfg: Dict[str, Any], ctx: Dict[str, Any]) -> Dic
target.unlink()
removed_paths.append(str(target))
write_log({"level": "INFO", "message": f"Removed: {target}"}, ctx)
# Clean up empty parent directories up to install_dir
parent = target.parent
while parent != install_dir and parent.exists():
try:
parent.rmdir()
except OSError:
break
parent = parent.parent
elif op_type == "merge_dir":
if not merge_dir_files:
write_log(
@@ -604,6 +718,13 @@ def uninstall_module(name: str, cfg: Dict[str, Any], ctx: Dict[str, Any]) -> Dic
except Exception as exc:
write_log({"level": "WARNING", "message": f"Failed to remove hooks for {name}: {exc}"}, ctx)
# Remove module agents from ~/.codeagent/models.json
try:
unmerge_agents_from_models(name, ctx)
result["agents_removed"] = True
except Exception as exc:
write_log({"level": "WARNING", "message": f"Failed to remove agents for {name}: {exc}"}, ctx)
result["removed_paths"] = removed_paths
return result
@@ -626,7 +747,9 @@ def update_status_after_uninstall(uninstalled_modules: List[str], ctx: Dict[str,
def interactive_manage(config: Dict[str, Any], ctx: Dict[str, Any]) -> int:
"""Interactive module management menu."""
"""Interactive module management menu. Returns 0 on success, 1 on error.
Sets ctx['_did_install'] = True if any module was installed."""
ctx.setdefault("_did_install", False)
while True:
installed_status = get_installed_modules(config, ctx)
modules = config.get("modules", {})
@@ -695,6 +818,7 @@ def interactive_manage(config: Dict[str, Any], ctx: Dict[str, Any]) -> int:
for r in results:
if r.get("status") == "success":
current_status.setdefault("modules", {})[r["module"]] = r
ctx["_did_install"] = True
current_status["updated_at"] = datetime.now().isoformat()
with Path(ctx["status_file"]).open("w", encoding="utf-8") as fh:
json.dump(current_status, fh, indent=2, ensure_ascii=False)
@@ -819,6 +943,17 @@ def execute_module(name: str, cfg: Dict[str, Any], ctx: Dict[str, Any]) -> Dict[
write_log({"level": "WARNING", "message": f"Failed to merge hooks for {name}: {exc}"}, ctx)
result["operations"].append({"type": "merge_hooks", "status": "failed", "error": str(exc)})
# Handle agents: merge module agent configs into ~/.codeagent/models.json
module_agents = cfg.get("agents", {})
if module_agents:
try:
merge_agents_to_models(name, module_agents, ctx)
result["operations"].append({"type": "merge_agents", "status": "success"})
result["has_agents"] = True
except Exception as exc:
write_log({"level": "WARNING", "message": f"Failed to merge agents for {name}: {exc}"}, ctx)
result["operations"].append({"type": "merge_agents", "status": "failed", "error": str(exc)})
return result
@@ -1060,6 +1195,67 @@ def write_status(results: List[Dict[str, Any]], ctx: Dict[str, Any]) -> None:
json.dump(status, fh, indent=2, ensure_ascii=False)
def install_default_configs(ctx: Dict[str, Any]) -> None:
"""Copy default config files if they don't already exist. Best-effort: never raises."""
try:
install_dir = ctx["install_dir"]
config_dir = ctx["config_dir"]
# Copy memorys/CLAUDE.md -> {install_dir}/CLAUDE.md
claude_md_src = config_dir / "memorys" / "CLAUDE.md"
claude_md_dst = install_dir / "CLAUDE.md"
if not claude_md_dst.exists() and claude_md_src.exists():
shutil.copy2(claude_md_src, claude_md_dst)
print(f" Installed CLAUDE.md to {claude_md_dst}")
write_log({"level": "INFO", "message": f"Installed CLAUDE.md to {claude_md_dst}"}, ctx)
except Exception as exc:
print(f" Warning: could not install default configs: {exc}", file=sys.stderr)
def print_post_install_info(ctx: Dict[str, Any]) -> None:
"""Print post-install verification and setup guidance."""
install_dir = ctx["install_dir"]
# Check codeagent-wrapper version
wrapper_bin = install_dir / "bin" / "codeagent-wrapper"
wrapper_version = None
try:
result = subprocess.run(
[str(wrapper_bin), "--version"],
capture_output=True, text=True, timeout=5,
)
if result.returncode == 0:
wrapper_version = result.stdout.strip()
except Exception:
pass
# Check PATH
bin_dir = str(install_dir / "bin")
env_path = os.environ.get("PATH", "")
path_ok = any(
os.path.realpath(p) == os.path.realpath(bin_dir)
if os.path.exists(p) else p == bin_dir
for p in env_path.split(os.pathsep)
)
# Check backend CLIs
backends = ["codex", "claude", "gemini", "opencode"]
detected = {name: shutil.which(name) is not None for name in backends}
print("\nSetup Complete!")
v_mark = "" if wrapper_version else ""
print(f" codeagent-wrapper: {wrapper_version or '(not found)'} {v_mark}")
p_mark = "" if path_ok else "✗ (not in PATH)"
print(f" PATH: {bin_dir} {p_mark}")
print("\nBackend CLIs detected:")
cli_parts = [f"{b} {'' if detected[b] else ''}" for b in backends]
print(" " + " | ".join(cli_parts))
print("\nNext steps:")
print(" 1. Configure API keys in ~/.codeagent/models.json")
print(' 2. Try: /do "your first task"')
print()
def prepare_status_backup(ctx: Dict[str, Any]) -> None:
status_path = Path(ctx["status_file"])
if status_path.exists():
@@ -1208,6 +1404,8 @@ def main(argv: Optional[Iterable[str]] = None) -> int:
failed = len(results) - success
if failed == 0:
print(f"\n✓ Update complete: {success} module(s) updated")
install_default_configs(ctx)
print_post_install_info(ctx)
else:
print(f"\n⚠ Update finished with errors: {success} success, {failed} failed")
if not args.force:
@@ -1221,7 +1419,11 @@ def main(argv: Optional[Iterable[str]] = None) -> int:
except Exception as exc:
print(f"Failed to prepare install dir: {exc}", file=sys.stderr)
return 1
return interactive_manage(config, ctx)
result = interactive_manage(config, ctx)
if result == 0 and ctx.get("_did_install"):
install_default_configs(ctx)
print_post_install_info(ctx)
return result
# Install specified modules
modules = select_modules(config, args.module)
@@ -1280,6 +1482,10 @@ def main(argv: Optional[Iterable[str]] = None) -> int:
if not args.force:
return 1
if failed == 0:
install_default_configs(ctx)
print_post_install_info(ctx)
return 0

View File

@@ -1,6 +1,6 @@
{
"name": "myclaude",
"version": "0.0.0",
"version": "6.7.0",
"private": true,
"description": "Claude Code multi-agent workflows (npx installer)",
"license": "AGPL-3.0",
@@ -13,6 +13,7 @@
"agents/",
"skills/",
"memorys/",
"templates/",
"codeagent-wrapper/",
"config.json",
"install.py",

View File

@@ -10,31 +10,17 @@ An orchestrator for systematic feature development. Invoke agents via `codeagent
## Loop Initialization (REQUIRED)
When triggered via `/do <task>`, follow these steps:
### Step 1: Ask about worktree mode
Use AskUserQuestion to ask:
```
Develop in a separate worktree? (Isolates changes from main branch)
- Yes (Recommended for larger changes)
- No (Work directly in current directory)
```
### Step 2: Initialize task directory
When triggered via `/do <task>`, initialize the task directory immediately without asking about worktree:
```bash
# If worktree mode selected:
python3 ".claude/skills/do/scripts/setup-do.py" --worktree "<task description>"
# If no worktree:
python3 ".claude/skills/do/scripts/setup-do.py" "<task description>"
```
This creates a task directory under `.claude/do-tasks/` with:
- `task.md`: Single file containing YAML frontmatter (metadata) + Markdown body (requirements/context)
**Worktree decision is deferred until Phase 4 (Implement).** Phases 1-3 are read-only and do not require worktree isolation.
## Task Directory Management
Use `task.py` to manage task state:
@@ -52,15 +38,23 @@ python3 ".claude/skills/do/scripts/task.py" list
## Worktree Mode
When worktree mode is enabled in task.json, ALL `codeagent-wrapper` calls that modify code MUST include `--worktree`:
The worktree is created **only when needed** (right before Phase 4: Implement). If the user chooses worktree mode:
1. Run setup with `--worktree` flag to create the worktree:
```bash
python3 ".claude/skills/do/scripts/setup-do.py" --worktree "<task description>"
```
2. Use the `DO_WORKTREE_DIR` environment variable to direct the `codeagent-wrapper` develop agent into the worktree. **Do NOT pass `--worktree` to subsequent calls** — that creates a new worktree each time.
```bash
codeagent-wrapper --worktree --agent develop - . <<'EOF'
# Save the worktree path from setup output, then prefix all develop calls:
DO_WORKTREE_DIR=<worktree_dir> codeagent-wrapper --agent develop - . <<'EOF'
...
EOF
```
Read-only agents (code-explorer, code-architect, code-reviewer) do NOT need `--worktree`.
Read-only agents (code-explorer, code-architect, code-reviewer) do NOT need `DO_WORKTREE_DIR`.
## Hard Constraints
@@ -69,7 +63,7 @@ Read-only agents (code-explorer, code-architect, code-reviewer) do NOT need `--w
3. **Update phase after each phase.** Use `task.py update-phase <N>`.
4. **Expect long-running `codeagent-wrapper` calls.** High-reasoning modes can take a long time.
5. **Timeouts are not an escape hatch.** If a call times out, retry with narrower scope.
6. **Respect worktree setting.** If enabled, always pass `--worktree` to develop agent calls.
6. **Defer worktree decision until Phase 4.** Only ask about worktree mode right before implementation. If enabled, prefix develop agent calls with `DO_WORKTREE_DIR=<path>`. Never pass `--worktree` after initialization.
## Agents
@@ -78,7 +72,7 @@ Read-only agents (code-explorer, code-architect, code-reviewer) do NOT need `--w
| `code-explorer` | Trace code, map architecture, find patterns | No (read-only) |
| `code-architect` | Design approaches, file plans, build sequences | No (read-only) |
| `code-reviewer` | Review for bugs, simplicity, conventions | No (read-only) |
| `develop` | Implement code, run tests | **Yes** (if worktree enabled) |
| `develop` | Implement code, run tests | **Yes** — use `DO_WORKTREE_DIR` env prefix |
## Issue Severity Definitions
@@ -175,12 +169,39 @@ EOF
**Goal:** Build feature and review in one phase.
1. Invoke `develop` to implement. For full-stack projects, split into backend/frontend tasks with per-task `skills:` injection. Use `--parallel` when tasks can be split; use single agent when the change is small or single-domain.
**Step 1: Decide on worktree mode (ONLY NOW)**
**Single-domain example** (add `--worktree` if enabled):
Use AskUserQuestion to ask:
```
Develop in a separate worktree? (Isolates changes from main branch)
- Yes (Recommended for larger changes)
- No (Work directly in current directory)
```
If user chooses worktree:
```bash
python3 ".claude/skills/do/scripts/setup-do.py" --worktree "<task description>"
# Save the worktree path from output for DO_WORKTREE_DIR
```
**Step 2: Invoke develop agent**
For full-stack projects, split into backend/frontend tasks with per-task `skills:` injection. Use `--parallel` when tasks can be split; use single agent when the change is small or single-domain.
**Single-domain example** (prefix with `DO_WORKTREE_DIR` if worktree enabled):
```bash
codeagent-wrapper --worktree --agent develop --skills golang-base-practices - . <<'EOF'
# With worktree:
DO_WORKTREE_DIR=<worktree_dir> codeagent-wrapper --agent develop --skills golang-base-practices - . <<'EOF'
Implement with minimal change set following the Phase 3 blueprint.
- Follow Phase 1 patterns
- Add/adjust tests per Phase 3 plan
- Run narrowest relevant tests
EOF
# Without worktree:
codeagent-wrapper --agent develop --skills golang-base-practices - . <<'EOF'
Implement with minimal change set following the Phase 3 blueprint.
- Follow Phase 1 patterns
- Add/adjust tests per Phase 3 plan
@@ -191,7 +212,8 @@ EOF
**Full-stack parallel example** (adapt task IDs, skills, and content based on Phase 3 design):
```bash
codeagent-wrapper --worktree --parallel <<'EOF'
# With worktree:
DO_WORKTREE_DIR=<worktree_dir> codeagent-wrapper --parallel <<'EOF'
---TASK---
id: p4_backend
agent: develop
@@ -213,11 +235,17 @@ Implement frontend changes following Phase 3 blueprint.
- Follow Phase 1 patterns
- Add/adjust tests per Phase 3 plan
EOF
# Without worktree: remove DO_WORKTREE_DIR prefix
```
Note: Choose which skills to inject based on Phase 3 design output. Only inject skills relevant to each task's domain.
2. Run parallel reviews:
**Step 3: Review**
Run parallel reviews:
```bash
codeagent-wrapper --parallel <<'EOF'
@@ -239,9 +267,10 @@ Classify each issue as BLOCKING or MINOR.
EOF
```
3. Handle review results:
- **MINOR issues only** → Auto-fix via `develop`, no user interaction
- **BLOCKING issues** → Use AskUserQuestion: "Fix now / Proceed as-is"
**Step 4: Handle review results**
- **MINOR issues only** → Auto-fix via `develop`, no user interaction
- **BLOCKING issues** → Use AskUserQuestion: "Fix now / Proceed as-is"
### Phase 5: Complete (No Interaction)

329
skills/harness/SKILL.md Normal file
View File

@@ -0,0 +1,329 @@
---
name: harness
description: "This skill should be used for multi-session autonomous agent work requiring progress checkpointing, failure recovery, and task dependency management. Triggers on '/harness' command, or when a task involves many subtasks needing progress persistence, sleep/resume cycles across context windows, recovery from mid-task failures with partial state, or distributed work across multiple agent sessions. Synthesized from Anthropic and OpenAI engineering practices for long-running agents."
---
# Harness — Long-Running Agent Framework
Executable protocol enabling any agent task to run continuously across multiple sessions with automatic progress recovery, task dependency resolution, failure rollback, and standardized error handling.
## Design Principles
1. **Design for the agent, not the human** — Test output, docs, and task structure are the agent's primary interface
2. **Progress files ARE the context** — When context window resets, progress files + git history = full recovery
3. **Premature completion is the #1 failure mode** — Structured task lists with explicit completion criteria prevent declaring victory early
4. **Standardize everything grep-able** — ERROR on same line, structured timestamps, consistent prefixes
5. **Fast feedback loops** — Pre-compute stats, run smoke tests before full validation
6. **Idempotent everything** — Init scripts, task execution, environment setup must all be safe to re-run
7. **Fail safe, not fail silent** — Every failure must have an explicit recovery strategy
## Commands
```
/harness init <project-path> # Initialize harness files in project
/harness run # Start/resume the infinite loop
/harness status # Show current progress and stats
/harness add "task description" # Add a task to the list
```
## Progress Persistence (Dual-File System)
Maintain two files in the project working directory:
### harness-progress.txt (Append-Only Log)
Free-text log of all agent actions across sessions. Never truncate.
```
[2025-07-01T10:00:00Z] [SESSION-1] INIT Harness initialized for project /path/to/project
[2025-07-01T10:00:05Z] [SESSION-1] INIT Environment health check: PASS
[2025-07-01T10:00:10Z] [SESSION-1] LOCK acquired (pid=12345)
[2025-07-01T10:00:11Z] [SESSION-1] Starting [task-001] Implement user authentication (base=def5678)
[2025-07-01T10:05:00Z] [SESSION-1] CHECKPOINT [task-001] step=2/4 "auth routes created, tests pending"
[2025-07-01T10:15:30Z] [SESSION-1] Completed [task-001] (commit abc1234)
[2025-07-01T10:15:31Z] [SESSION-1] Starting [task-002] Add rate limiting (base=abc1234)
[2025-07-01T10:20:00Z] [SESSION-1] ERROR [task-002] [TASK_EXEC] Redis connection refused
[2025-07-01T10:20:01Z] [SESSION-1] ROLLBACK [task-002] git reset --hard abc1234
[2025-07-01T10:20:02Z] [SESSION-1] STATS tasks_total=5 completed=1 failed=1 pending=3 blocked=0 attempts_total=2 checkpoints=1
```
### harness-tasks.json (Structured State)
```json
{
"version": 2,
"created": "2025-07-01T10:00:00Z",
"session_config": {
"max_tasks_per_session": 20,
"max_sessions": 50
},
"tasks": [
{
"id": "task-001",
"title": "Implement user authentication",
"status": "completed",
"priority": "P0",
"depends_on": [],
"attempts": 1,
"max_attempts": 3,
"started_at_commit": "def5678",
"validation": {
"command": "npm test -- --testPathPattern=auth",
"timeout_seconds": 300
},
"on_failure": {
"cleanup": null
},
"error_log": [],
"checkpoints": [],
"completed_at": "2025-07-01T10:15:30Z"
},
{
"id": "task-002",
"title": "Add rate limiting",
"status": "failed",
"priority": "P1",
"depends_on": [],
"attempts": 1,
"max_attempts": 3,
"started_at_commit": "abc1234",
"validation": {
"command": "npm test -- --testPathPattern=rate-limit",
"timeout_seconds": 120
},
"on_failure": {
"cleanup": "docker compose down redis"
},
"error_log": ["[TASK_EXEC] Redis connection refused"],
"checkpoints": [],
"completed_at": null
},
{
"id": "task-003",
"title": "Add OAuth providers",
"status": "pending",
"priority": "P1",
"depends_on": ["task-001"],
"attempts": 0,
"max_attempts": 3,
"started_at_commit": null,
"validation": {
"command": "npm test -- --testPathPattern=oauth",
"timeout_seconds": 180
},
"on_failure": {
"cleanup": null
},
"error_log": [],
"checkpoints": [],
"completed_at": null
}
],
"session_count": 1,
"last_session": "2025-07-01T10:20:02Z"
}
```
Task statuses: `pending` → `in_progress` (transient, set only during active execution) → `completed` or `failed`. A task found as `in_progress` at session start means the previous session was interrupted — handle via Context Window Recovery Protocol.
**Session boundary**: A session starts when the agent begins executing the Session Start protocol and ends when a Stopping Condition is met or the context window resets. Each session gets a unique `SESSION-N` identifier (N = `session_count` after increment).
## Concurrency Control
Before modifying `harness-tasks.json`, acquire an exclusive lock using portable `mkdir` (atomic on all POSIX systems, works on both macOS and Linux):
```bash
# Acquire lock (fail fast if another agent is running)
LOCKDIR="/tmp/harness-$(printf '%s' "$(pwd)" | shasum -a 256 2>/dev/null || sha256sum | cut -c1-8).lock"
if ! mkdir "$LOCKDIR" 2>/dev/null; then
# Check if lock holder is still alive
LOCK_PID=$(cat "$LOCKDIR/pid" 2>/dev/null)
if [ -n "$LOCK_PID" ] && kill -0 "$LOCK_PID" 2>/dev/null; then
echo "ERROR: Another harness session is active (pid=$LOCK_PID)"; exit 1
fi
# Stale lock — atomically reclaim via mv to avoid TOCTOU race
STALE="$LOCKDIR.stale.$$"
if mv "$LOCKDIR" "$STALE" 2>/dev/null; then
rm -rf "$STALE"
mkdir "$LOCKDIR" || { echo "ERROR: Lock contention"; exit 1; }
echo "WARN: Removed stale lock${LOCK_PID:+ from pid=$LOCK_PID}"
else
echo "ERROR: Another agent reclaimed the lock"; exit 1
fi
fi
echo "$$" > "$LOCKDIR/pid"
trap 'rm -rf "$LOCKDIR"' EXIT
```
Log lock acquisition: `[timestamp] [SESSION-N] LOCK acquired (pid=<PID>)`
Log lock release: `[timestamp] [SESSION-N] LOCK released`
The lock is held for the entire session. The `trap EXIT` handler releases it automatically on normal exit, errors, or signals. Never release the lock between tasks within a session.
## Infinite Loop Protocol
### Session Start (Execute Every Time)
1. **Read state**: Read last 200 lines of `harness-progress.txt` + full `harness-tasks.json`. If JSON is unparseable, see JSON corruption recovery in Error Handling.
2. **Read git**: Run `git log --oneline -20` and `git diff --stat` to detect uncommitted work
3. **Acquire lock**: Fail if another session is active
4. **Recover interrupted tasks** (see Context Window Recovery below)
5. **Health check**: Run `harness-init.sh` if it exists
6. **Track session**: Increment `session_count` in JSON. Check `session_count` against `max_sessions` — if reached, log STATS and STOP. Initialize per-session task counter to 0.
7. **Pick next task** using Task Selection Algorithm below
### Task Selection Algorithm
Before selecting, run dependency validation:
1. **Cycle detection**: For each non-completed task, walk `depends_on` transitively. If any task appears in its own chain, mark it `failed` with `[DEPENDENCY] Circular dependency detected: task-A -> task-B -> task-A`. Self-references (`depends_on` includes own id) are also cycles.
2. **Blocked propagation**: If a task's `depends_on` includes a task that is `failed` and will never be retried (either `attempts >= max_attempts` OR its `error_log` contains a `[DEPENDENCY]` entry), mark the blocked task as `failed` with `[DEPENDENCY] Blocked by failed task-XXX`. Repeat until no more tasks can be propagated.
Then pick the next task in this priority order:
1. Tasks with `status: "pending"` where ALL `depends_on` tasks are `completed` — sorted by `priority` (P0 > P1 > P2), then by `id` (lowest first)
2. Tasks with `status: "failed"` where `attempts < max_attempts` and ALL `depends_on` are `completed` — sorted by priority, then oldest failure first
3. If no eligible tasks remain → log final STATS → STOP
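Pass 1 of this selection can be sketched with `jq` (not a required tool; same assumption that tasks live under a top-level `tasks` key):
```bash
# Next eligible pending task: all dependencies completed, ordered by priority then id
jq -r '
  [.tasks[] | select(.status == "completed") | .id] as $done
  | [ .tasks[]
      | select(.status == "pending")
      | select(all(.depends_on[]?; . as $d | $done | index($d))) ]
  | sort_by([.priority, .id])
  | (.[0] // empty)
  | .id
' harness-tasks.json
```
Sorting by the `[.priority, .id]` pair matches the P0 > P1 > P2 and lowest-id-first ordering, since both values compare lexicographically.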
### Task Execution Cycle
For each task, execute this exact sequence:
1. **Claim**: Record `started_at_commit` = current HEAD hash. Set status to `in_progress`, log `Starting [<task-id>] <title> (base=<hash>)`
2. **Execute with checkpoints**: Perform the work. After each significant step, log:
```
[timestamp] [SESSION-N] CHECKPOINT [task-id] step=M/N "description of what was done"
```
Also append to the task's `checkpoints` array: `{ "step": M, "total": N, "description": "...", "timestamp": "ISO" }`
3. **Validate**: Run the task's `validation.command` wrapped with `timeout`: `timeout <timeout_seconds> <command>` (sketched after this list). If no validation command, skip. Before running, verify the command exists (e.g., `command -v <binary>`) — if missing, treat as `ENV_SETUP` error.
- Command exits 0 → PASS
- Command exits non-zero → FAIL
- Command exceeds timeout → TIMEOUT
4. **Record outcome**:
- **Success**: status=`completed`, set `completed_at`, log `Completed [<task-id>] (commit <hash>)`, git commit
- **Failure**: increment `attempts`, append error to `error_log`. Verify `started_at_commit` exists via `git cat-file -t <hash>` — if missing, mark failed at max_attempts. Otherwise execute `git reset --hard <started_at_commit>` and `git clean -fd` to rollback ALL commits and remove untracked files. Execute `on_failure.cleanup` if defined. Log `ERROR [<task-id>] [<category>] <message>`. Set status=`failed` (Task Selection Algorithm pass 2 handles retries when attempts < max_attempts)
5. **Track**: Increment per-session task counter. If `max_tasks_per_session` reached, log STATS and STOP.
6. **Continue**: Immediately pick next task (zero idle time)
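A sketch of step 3, assuming GNU `timeout` from coreutils is on PATH (`gtimeout` on stock macOS); the task values shown are placeholders:
```bash
VALIDATION_CMD="npm test -- --testPathPattern=oauth"   # validation.command (example value)
TIMEOUT_SECONDS=180                                    # validation.timeout_seconds

BIN="${VALIDATION_CMD%% *}"                            # first word of the command
if ! command -v "$BIN" >/dev/null 2>&1; then
  echo "ERROR [task-001] [ENV_SETUP] validation binary not found: $BIN"
elif timeout "$TIMEOUT_SECONDS" sh -c "$VALIDATION_CMD"; then
  echo "PASS"
else
  RC=$?
  if [ "$RC" -eq 124 ]; then echo "TIMEOUT"; else echo "FAIL (exit $RC)"; fi
fi
```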
### Stopping Conditions
- All tasks `completed`
- All remaining tasks `failed` at max_attempts or blocked by failed dependencies
- `session_config.max_tasks_per_session` reached for this session
- `session_config.max_sessions` reached across all sessions
- User interrupts
## Context Window Recovery Protocol
When a new session starts and finds a task with `status: "in_progress"`:
1. **Check git state**:
```bash
git diff --stat # Uncommitted changes?
git log --oneline -5 # Recent commits since task started?
git stash list # Any stashed work?
```
2. **Check checkpoints**: Read the task's `checkpoints` array to determine last completed step
3. **Decision matrix** (verify recent commits belong to this task by checking commit messages for the task-id):
| Uncommitted? | Recent task commits? | Checkpoints? | Action |
|---|---|---|---|
| No | No | None | Mark `failed` with `[SESSION_TIMEOUT] No progress detected`, increment attempts |
| No | No | Some | Verify file state matches checkpoint claims. If files reflect checkpoint progress, resume from last step. If not, mark `failed` — work was lost |
| No | Yes | Any | Run `validation.command`. If passes → mark `completed`. If fails → `git reset --hard <started_at_commit>`, mark `failed` |
| Yes | No | Any | Run validation WITH uncommitted changes present. If passes → commit, mark `completed`. If fails → `git reset --hard <started_at_commit>` + `git clean -fd`, mark `failed` |
| Yes | Yes | Any | Commit uncommitted changes, run `validation.command`. If passes → mark `completed`. If fails → `git reset --hard <started_at_commit>` + `git clean -fd`, mark `failed` |
4. **Log recovery**: `[timestamp] [SESSION-N] RECOVERY [task-id] action="<action taken>" reason="<reason>"`
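The "Recent task commits?" column can be checked with a sketch like the following, where `STARTED_AT_COMMIT` is a placeholder for the task's recorded base hash:
```bash
TASK_ID="task-001"            # the interrupted task's id
BASE="$STARTED_AT_COMMIT"     # placeholder: the task's started_at_commit hash
if git log --oneline "${BASE}..HEAD" | grep -q "$TASK_ID"; then
  echo "Commits since ${BASE} reference ${TASK_ID}: treat the column as Yes"
else
  echo "No commits since ${BASE} reference ${TASK_ID}"
fi
```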
## Error Handling & Recovery Strategies
Each error category has a default recovery strategy:
| Category | Default Recovery | Agent Action |
|----------|-----------------|--------------|
| `ENV_SETUP` | Re-run init, then STOP if still failing | Run `harness-init.sh` again immediately. If fails twice, log and stop — environment is broken |
| `TASK_EXEC` | Rollback via `git reset --hard <started_at_commit>`, retry | Verify `started_at_commit` exists (`git cat-file -t <hash>`). If missing, mark failed at max_attempts. Otherwise reset, run `on_failure.cleanup` if defined, retry if attempts < max_attempts |
| `TEST_FAIL` | Rollback via `git reset --hard <started_at_commit>`, retry | Reset to `started_at_commit`, analyze test output to identify fix, retry with targeted changes |
| `TIMEOUT` | Kill process, execute cleanup, retry | Wrap validation with `timeout <seconds> <command>`. On timeout, run `on_failure.cleanup`, retry (consider splitting task if repeated) |
| `DEPENDENCY` | Skip task, mark blocked | Log which dependency failed, mark task as `failed` with dependency reason |
| `SESSION_TIMEOUT` | Use Context Window Recovery Protocol | New session assesses partial progress via Recovery Protocol — may result in completion or failure depending on validation |
**JSON corruption**: If `harness-tasks.json` cannot be parsed, check for `harness-tasks.json.bak` (written before each modification). If backup exists and is valid, restore from it. If no valid backup, log `ERROR [ENV_SETUP] harness-tasks.json corrupted and unrecoverable` and STOP — task metadata (validation commands, dependencies, cleanup) cannot be reconstructed from logs alone.
**Backup protocol**: Before every write to `harness-tasks.json`, copy the current file to `harness-tasks.json.bak`.
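A minimal sketch of the backup-and-restore sequence, assuming `python3` is available for JSON validation (any JSON parser would do):
```bash
cp harness-tasks.json harness-tasks.json.bak   # backup before every write
# ... write the updated harness-tasks.json ...
if ! python3 -m json.tool harness-tasks.json >/dev/null 2>&1; then
  if python3 -m json.tool harness-tasks.json.bak >/dev/null 2>&1; then
    cp harness-tasks.json.bak harness-tasks.json
    echo "WARN: restored harness-tasks.json from backup"
  else
    echo "ERROR [ENV_SETUP] harness-tasks.json corrupted and unrecoverable"
  fi
fi
```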
## Environment Initialization
If `harness-init.sh` exists in the project root, run it at every session start. The script must be idempotent.
Example `harness-init.sh`:
```bash
#!/bin/bash
set -e
npm install 2>/dev/null || pip install -r requirements.txt 2>/dev/null || true
nc -z localhost 5432 2>/dev/null || echo "WARN: DB (localhost:5432) not reachable"
npm test -- --bail --silent 2>/dev/null || echo "WARN: Smoke test failed"
echo "Environment health check complete"
```
## Standardized Log Format
All log entries use grep-friendly format on a single line:
```
[ISO-timestamp] [SESSION-N] <TYPE> [task-id]? [category]? message
```
`[task-id]` and `[category]` are included when applicable (task-scoped entries). Session-level entries (`INIT`, `LOCK`, `STATS`) omit them.
Types: `INIT`, `Starting`, `Completed`, `ERROR`, `CHECKPOINT`, `ROLLBACK`, `RECOVERY`, `STATS`, `LOCK`, `WARN`
Error categories: `ENV_SETUP`, `TASK_EXEC`, `TEST_FAIL`, `TIMEOUT`, `DEPENDENCY`, `SESSION_TIMEOUT`
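For example, a checkpoint entry could be appended like this (sketch; `SESSION_N` stands in for the current session number):
```bash
SESSION_N=3   # placeholder: current session number
printf '[%s] [SESSION-%s] CHECKPOINT [task-001] step=2/5 "wrote failing test"\n' \
  "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$SESSION_N" >> harness-progress.txt
```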
Filtering:
```bash
grep "ERROR" harness-progress.txt # All errors
grep "ERROR" harness-progress.txt | grep "TASK_EXEC" # Execution errors only
grep "SESSION-3" harness-progress.txt # All session 3 activity
grep "STATS" harness-progress.txt # All session summaries
grep "CHECKPOINT" harness-progress.txt # All checkpoints
grep "RECOVERY" harness-progress.txt # All recovery actions
```
## Session Statistics
At session end, set `last_session` in `harness-tasks.json` to the current timestamp (`session_count` was already incremented at session start). Then append:
```
[timestamp] [SESSION-N] STATS tasks_total=10 completed=7 failed=1 pending=2 blocked=0 attempts_total=12 checkpoints=23
```
`blocked` is computed at stats time: count of pending tasks whose `depends_on` includes a permanently failed task. It is not a stored status value.
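One way to compute it (sketch; same `jq` and `tasks`-key assumptions as above, treating `error_log` as searchable text):
```bash
jq '
  [.tasks[]
   | select(.status == "failed")
   | select(.attempts >= .max_attempts
            or ((.error_log | tostring) | contains("[DEPENDENCY]")))
   | .id] as $dead
  | [.tasks[]
     | select(.status == "pending")
     | select(any(.depends_on[]?; . as $d | $dead | index($d)))]
  | length
' harness-tasks.json
```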
## Init Command (`/harness init`)
1. Create `harness-progress.txt` with initialization entry
2. Create `harness-tasks.json` with empty task list and default `session_config`
3. Optionally create `harness-init.sh` template (chmod +x)
4. Ask user: add harness files to `.gitignore`?
## Status Command (`/harness status`)
Read `harness-tasks.json` and `harness-progress.txt`, then display:
1. Task summary: count by status (completed, failed, pending, blocked). `blocked` = pending tasks whose `depends_on` includes a permanently failed task (computed, not a stored status).
2. Per-task one-liner: `[status] task-id: title (attempts/max_attempts)`
3. Last 5 lines from `harness-progress.txt`
4. Session count and last session timestamp
Does NOT acquire the lock (read-only operation).
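Items 2–4 can be sketched as follows (item 1 can reuse the blocked computation shown under Session Statistics):
```bash
jq -r '.tasks[] | "[\(.status)] \(.id): \(.title) (\(.attempts)/\(.max_attempts))"' harness-tasks.json
tail -n 5 harness-progress.txt
jq -r '"sessions=\(.session_count) last_session=\(.last_session)"' harness-tasks.json
```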
## Add Command (`/harness add`)
Append a new task to `harness-tasks.json` with auto-incremented id (`task-NNN`), status `pending`, default `max_attempts: 3`, empty `depends_on`, and no validation command. Prompt user for optional fields: `priority`, `depends_on`, `validation.command`, `timeout_seconds`. Requires lock acquisition (modifies JSON).
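A sketch of the append after the lock is held; the `title` value is a placeholder, and defaults not named in the text (such as `priority: "P2"`) are assumptions:
```bash
NEXT=$(jq '[.tasks[].id | ltrimstr("task-") | tonumber] | (max // 0) + 1' harness-tasks.json)
NEW_ID=$(printf 'task-%03d' "$NEXT")
cp harness-tasks.json harness-tasks.json.bak   # backup before write
jq --arg id "$NEW_ID" --arg title "Describe the task here" '
  .tasks += [{
    id: $id, title: $title, status: "pending", priority: "P2",
    depends_on: [], attempts: 0, max_attempts: 3,
    started_at_commit: null,
    validation: { command: null, timeout_seconds: null },
    on_failure: { cleanup: null },
    error_log: [], checkpoints: [], completed_at: null
  }]' harness-tasks.json.bak > harness-tasks.json
```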
## Tool Dependencies
Requires: Bash, file read/write, git. All harness operations must be executed from the project root directory.
Does NOT require: specific MCP servers, programming languages, or test frameworks.

View File

@@ -0,0 +1,52 @@
{
"default_backend": "codex",
"default_model": "gpt-5.2",
"backends": {
"codex": { "api_key": "" },
"claude": { "api_key": "" },
"gemini": { "api_key": "" },
"opencode": { "api_key": "" }
},
"agents": {
"develop": {
"backend": "codex",
"model": "gpt-5.2",
"reasoning": "xhigh",
"yolo": true
},
"code-explorer": {
"backend": "opencode",
"model": ""
},
"code-architect": {
"backend": "claude",
"model": ""
},
"code-reviewer": {
"backend": "claude",
"model": ""
},
"oracle": {
"backend": "claude",
"model": "claude-opus-4-5-20251101",
"yolo": true
},
"librarian": {
"backend": "claude",
"model": "claude-sonnet-4-5-20250929",
"yolo": true
},
"explore": {
"backend": "opencode",
"model": "opencode/grok-code"
},
"frontend-ui-ux-engineer": {
"backend": "gemini",
"model": "gemini-3-pro-preview"
},
"document-writer": {
"backend": "gemini",
"model": "gemini-3-flash-preview"
}
}
}