mirror of https://github.com/cexll/myclaude.git (synced 2026-02-11 03:23:50 +08:00)

Compare commits: 12 commits
| SHA1 |
|---|
| 7cc7f50f46 |
| ebd795c583 |
| 8db49f198e |
| 97dfa907d9 |
| 5853539cab |
| 81fa6843d9 |
| 74e4d181c2 |
| 04fa1626ae |
| c0f61d5cc2 |
| 716d1eb173 |
| 4bc9ffa907 |
| c6c2f93e02 |
.github/workflows/release.yml (vendored, 31 changed lines)

```diff
@@ -74,7 +74,7 @@ jobs:
           if [ "${{ matrix.goos }}" = "windows" ]; then
             OUTPUT_NAME="${OUTPUT_NAME}.exe"
           fi
-          go build -ldflags="-s -w -X main.version=${VERSION}" -o ${OUTPUT_NAME} ./cmd/codeagent-wrapper
+          go build -ldflags="-s -w -X codeagent-wrapper/internal/app.version=${VERSION}" -o ${OUTPUT_NAME} ./cmd/codeagent-wrapper
           chmod +x ${OUTPUT_NAME}
           echo "artifact_path=codeagent-wrapper/${OUTPUT_NAME}" >> $GITHUB_OUTPUT

@@ -91,6 +91,33 @@ jobs:
       steps:
         - name: Checkout code
           uses: actions/checkout@v4
+          with:
+            fetch-depth: 0
+
+        - name: Generate Release Notes
+          id: release_notes
+          run: |
+            # Get previous tag
+            PREVIOUS_TAG=$(git tag --sort=-version:refname | grep -v "^${{ github.ref_name }}$" | head -n 1)
+
+            if [ -z "$PREVIOUS_TAG" ]; then
+              echo "No previous tag found, using all commits"
+              COMMITS=$(git log --pretty=format:"- %s (%h)" --no-merges)
+            else
+              echo "Generating notes from $PREVIOUS_TAG to ${{ github.ref_name }}"
+              COMMITS=$(git log ${PREVIOUS_TAG}..${{ github.ref_name }} --pretty=format:"- %s (%h)" --no-merges)
+            fi
+
+            # Create release notes
+            cat > release_notes.md <<EOF
+            ## What's Changed
+
+            ${COMMITS}
+
+            **Full Changelog**: https://github.com/${{ github.repository }}/compare/${PREVIOUS_TAG}...${{ github.ref_name }}
+            EOF
+
+            cat release_notes.md
+
         - name: Download all artifacts
           uses: actions/download-artifact@v4

@@ -108,6 +135,6 @@ jobs:
           uses: softprops/action-gh-release@v2
           with:
             files: release/*
-            generate_release_notes: true
+            body_path: release_notes.md
             draft: false
             prerelease: false
```
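The new release-notes step derives the previous tag with `git tag --sort=-version:refname` and builds the commit list from the tag range. A minimal local sketch of that logic, exercised against a throwaway repository so it is reproducible (`TAG` stands in for `${{ github.ref_name }}`; the user name and email are placeholder values):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "feat: first"
git tag v1.0.0
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "fix: second"
git tag v1.1.0

TAG=v1.1.0
# Newest tag that is not the current one
PREVIOUS_TAG=$(git tag --sort=-version:refname | grep -v "^${TAG}$" | head -n 1)
if [ -z "$PREVIOUS_TAG" ]; then
  COMMITS=$(git log --pretty=format:"- %s (%h)" --no-merges)
else
  COMMITS=$(git log ${PREVIOUS_TAG}..${TAG} --pretty=format:"- %s (%h)" --no-merges)
fi
echo "previous=$PREVIOUS_TAG"
echo "$COMMITS"
```

Only the commits after the previous tag appear in `$COMMITS`, which is why the workflow switched from `generate_release_notes: true` to an explicit `body_path`.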
.gitignore (vendored, 1 changed line)

```diff
@@ -9,3 +9,4 @@ __pycache__
 coverage.out
 references
 output/
+.worktrees/
```
CHANGELOG.md (41 changed lines)

```diff
@@ -2,6 +2,47 @@

 All notable changes to this project will be documented in this file.

+## [6.7.0] - 2026-02-10
+
+### 🚀 Features
+
+- feat(install): per-module agent merge/unmerge for ~/.codeagent/models.json
+- feat(install): post-install verification (wrapper version, PATH, backend CLIs)
+- feat(install): install CLAUDE.md by default
+- feat(docs): document 9 skills, 11 commands, claudekit module, OpenCode backend
+
+### 🐛 Bug Fixes
+
+- fix(docs): correct 7-phase → 5-phase for do skill across all docs
+- fix(install): best-effort default config install (never crashes main flow)
+- fix(install): interactive quit no longer triggers post-install actions
+- fix(install): empty parent directory cleanup on copy_file uninstall
+- fix(install): agent restore on uninstall when shared by multiple modules
+- fix(docs): remove non-existent on-stop hook references
+
+### 📚 Documentation
+
+- Updated USER_GUIDE.md with 13 CLI flags and OpenCode backend
+- Updated README.md/README_CN.md with complete module and skill listings
+- Added templates/models.json.example with all agent presets (do + omo)
+
+## [6.6.0] - 2026-02-10
+
+### 🚀 Features
+
+- feat(skills): add per-task skill spec auto-detection and injection
+- feat: add worktree support and refactor do skill to Python
+
+### 🐛 Bug Fixes
+
+- fix(test): set USERPROFILE on Windows for skills tests
+- fix(do): reuse worktree across phases via DO_WORKTREE_DIR env var
+- fix(release): auto-generate release notes from git history
+
+### 📚 Documentation
+
+- audit and fix documentation, installation scripts, and default configuration
+
 ## [6.0.0] - 2026-01-26

 ### 🚀 Features
```
README.md (34 changed lines)

````diff
@@ -19,13 +19,30 @@ npx github:cexll/myclaude

 | Module | Description | Documentation |
 |--------|-------------|---------------|
-| [do](skills/do/README.md) | **Recommended** - 7-phase feature development with codeagent orchestration | `/do` command |
+| [do](skills/do/README.md) | **Recommended** - 5-phase feature development with codeagent orchestration | `/do` command |
 | [omo](skills/omo/README.md) | Multi-agent orchestration with intelligent routing | `/omo` command |
 | [bmad](agents/bmad/README.md) | BMAD agile workflow with 6 specialized agents | `/bmad-pilot` command |
 | [requirements](agents/requirements/README.md) | Lightweight requirements-to-code pipeline | `/requirements-pilot` command |
-| [essentials](agents/development-essentials/README.md) | Core development commands and utilities | `/code`, `/debug`, etc. |
+| [essentials](agents/development-essentials/README.md) | 11 core dev commands: ask, bugfix, code, debug, docs, enhance-prompt, optimize, refactor, review, test, think | `/code`, `/debug`, etc. |
 | [sparv](skills/sparv/README.md) | SPARV workflow (Specify→Plan→Act→Review→Vault) | `/sparv` command |
 | course | Course development (combines dev + product-requirements + test-cases) | Composite module |
+| claudekit | ClaudeKit: do skill + global hooks (pre-bash, inject-spec, log-prompt) | Composite module |
+
+### Available Skills
+
+Individual skills can be installed separately via `npx github:cexll/myclaude --list` (skills bundled in modules like do, omo, sparv are listed above):
+
+| Skill | Description |
+|-------|-------------|
+| browser | Browser automation for web testing and data extraction |
+| codeagent | codeagent-wrapper invocation for multi-backend AI code tasks |
+| codex | Direct Codex backend execution |
+| dev | Lightweight end-to-end development workflow |
+| gemini | Direct Gemini backend execution |
+| product-requirements | Interactive PRD generation with quality scoring |
+| prototype-prompt-generator | Structured UI/UX prototype prompt generation |
+| skill-install | Install skills from GitHub with security scanning |
+| test-cases | Comprehensive test case generation from requirements |

 ## Installation

@@ -87,17 +104,20 @@ Edit `config.json` to enable/disable modules:
 | Codex | `codex e`, `--json`, `-C`, `resume` |
 | Claude | `--output-format stream-json`, `-r` |
 | Gemini | `-o stream-json`, `-y`, `-r` |
+| OpenCode | `opencode`, stdin mode |

 ## Directory Structure After Installation

 ```
 ~/.claude/
 ├── bin/codeagent-wrapper
-├── CLAUDE.md
-├── commands/
-├── agents/
-├── skills/
-└── config.json
+├── CLAUDE.md (installed by default)
+├── commands/ (from essentials module)
+├── agents/ (from bmad/requirements modules)
+├── skills/ (from do/omo/sparv/course modules)
+├── hooks/ (from claudekit module)
+├── settings.json (auto-generated, hooks config)
+└── installed_modules.json (auto-generated, tracks modules)
 ```

 ## Documentation
````
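The directory tree above can be checked mechanically after an install. A small sketch that verifies the expected entries, run here against a scratch directory standing in for `~/.claude` so the check is reproducible (point `ROOT` at your real install dir to use it for real):

```shell
set -e
ROOT=$(mktemp -d)   # stand-in for ~/.claude
# Simulate a completed install
mkdir -p "$ROOT/bin" "$ROOT/commands" "$ROOT/agents" "$ROOT/skills"
touch "$ROOT/bin/codeagent-wrapper" "$ROOT/CLAUDE.md"

missing=0
for p in bin/codeagent-wrapper CLAUDE.md commands agents skills; do
  if [ ! -e "$ROOT/$p" ]; then
    echo "missing: $p"
    missing=1
  fi
done
[ "$missing" -eq 0 ] && status="install tree ok" || status="incomplete install"
echo "$status"
```

The `hooks/`, `settings.json`, and `installed_modules.json` entries only appear when the claudekit module is installed, so they are left out of the required list here.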
README_CN.md (42 changed lines)

````diff
@@ -16,13 +16,30 @@ npx github:cexll/myclaude

 | 模块 | 描述 | 文档 |
 |------|------|------|
-| [do](skills/do/README.md) | **推荐** - 7 阶段功能开发 + codeagent 编排 | `/do` 命令 |
+| [do](skills/do/README.md) | **推荐** - 5 阶段功能开发 + codeagent 编排 | `/do` 命令 |
 | [omo](skills/omo/README.md) | 多智能体编排 + 智能路由 | `/omo` 命令 |
 | [bmad](agents/bmad/README.md) | BMAD 敏捷工作流 + 6 个专业智能体 | `/bmad-pilot` 命令 |
 | [requirements](agents/requirements/README.md) | 轻量级需求到代码流水线 | `/requirements-pilot` 命令 |
-| [essentials](agents/development-essentials/README.md) | 核心开发命令和工具 | `/code`, `/debug` 等 |
+| [essentials](agents/development-essentials/README.md) | 11 个核心开发命令:ask、bugfix、code、debug、docs、enhance-prompt、optimize、refactor、review、test、think | `/code`, `/debug` 等 |
 | [sparv](skills/sparv/README.md) | SPARV 工作流 (Specify→Plan→Act→Review→Vault) | `/sparv` 命令 |
 | course | 课程开发(组合 dev + product-requirements + test-cases) | 组合模块 |
+| claudekit | ClaudeKit:do 技能 + 全局钩子(pre-bash、inject-spec、log-prompt)| 组合模块 |
+
+### 可用技能
+
+可通过 `npx github:cexll/myclaude --list` 单独安装技能(模块内置技能如 do、omo、sparv 见上表):
+
+| 技能 | 描述 |
+|------|------|
+| browser | 浏览器自动化测试和数据提取 |
+| codeagent | codeagent-wrapper 多后端 AI 代码任务调用 |
+| codex | Codex 后端直接执行 |
+| dev | 轻量级端到端开发工作流 |
+| gemini | Gemini 后端直接执行 |
+| product-requirements | 交互式 PRD 生成(含质量评分)|
+| prototype-prompt-generator | 结构化 UI/UX 原型提示词生成 |
+| skill-install | 从 GitHub 安装技能(含安全扫描)|
+| test-cases | 从需求生成全面测试用例 |

 ## 核心架构

@@ -35,22 +52,20 @@ npx github:cexll/myclaude

 ### do 工作流(推荐)

-7 阶段功能开发,通过 codeagent-wrapper 编排多个智能体。**大多数功能开发任务的首选工作流。**
+5 阶段功能开发,通过 codeagent-wrapper 编排多个智能体。**大多数功能开发任务的首选工作流。**

 ```bash
 /do "添加用户登录功能"
 ```

-**7 阶段:**
+**5 阶段:**
 | 阶段 | 名称 | 目标 |
 |------|------|------|
-| 1 | Discovery | 理解需求 |
-| 2 | Exploration | 映射代码库模式 |
-| 3 | Clarification | 解决歧义(**强制**)|
-| 4 | Architecture | 设计实现方案 |
-| 5 | Implementation | 构建功能(**需审批**)|
-| 6 | Review | 捕获缺陷 |
-| 7 | Summary | 记录结果 |
+| 1 | Understand | 并行探索理解需求和映射代码库 |
+| 2 | Clarify | 解决阻塞性歧义(条件触发)|
+| 3 | Design | 产出最小变更实现方案 |
+| 4 | Implement + Review | 构建功能并审查 |
+| 5 | Complete | 记录构建结果 |

 **智能体:**
 - `code-explorer` - 代码追踪、架构映射

@@ -162,6 +177,10 @@ npx github:cexll/myclaude
 | `/optimize` | 性能优化 |
 | `/refactor` | 代码重构 |
 | `/docs` | 编写文档 |
+| `/ask` | 提问和咨询 |
+| `/bugfix` | Bug 修复 |
+| `/enhance-prompt` | 提示词优化 |
+| `/think` | 深度思考分析 |

 ---

@@ -218,6 +237,7 @@ npx github:cexll/myclaude --install-dir ~/.claude --force
 | Codex | `codex e`, `--json`, `-C`, `resume` |
 | Claude | `--output-format stream-json`, `-r` |
 | Gemini | `-o stream-json`, `-y`, `-r` |
+| OpenCode | `opencode`, stdin 模式 |

 ## 故障排查
````
bin/cli.js (79 changed lines)

```diff
@@ -8,7 +8,7 @@ const os = require("os");
 const path = require("path");
 const readline = require("readline");
 const zlib = require("zlib");
-const { spawn } = require("child_process");
+const { spawn, spawnSync } = require("child_process");

 const REPO = { owner: "cexll", name: "myclaude" };
 const API_HEADERS = {

@@ -501,7 +501,7 @@ async function updateInstalledModules(installDir, tag, config, dryRun) {
     await fs.promises.mkdir(installDir, { recursive: true });
     for (const name of toUpdate) {
       process.stdout.write(`Updating module: ${name}\n`);
-      const r = await applyModule(name, config, repoRoot, installDir, true);
+      const r = await applyModule(name, config, repoRoot, installDir, true, tag);
       upsertModuleStatus(installDir, r);
     }
   } finally {

@@ -560,6 +560,7 @@ async function promptMultiSelect(items, title) {
   function cleanup() {
     process.stdin.setRawMode(false);
     process.stdin.removeListener("keypress", onKey);
+    process.stdin.pause();
   }

   function onKey(_, key) {

@@ -749,14 +750,16 @@ async function mergeDir(src, installDir, force) {
   return installed;
 }

-function runInstallSh(repoRoot, installDir) {
+function runInstallSh(repoRoot, installDir, tag) {
   return new Promise((resolve, reject) => {
     const cmd = process.platform === "win32" ? "cmd.exe" : "bash";
     const args = process.platform === "win32" ? ["/c", "install.bat"] : ["install.sh"];
+    const env = { ...process.env, INSTALL_DIR: installDir };
+    if (tag) env.CODEAGENT_WRAPPER_VERSION = tag;
     const p = spawn(cmd, args, {
       cwd: repoRoot,
       stdio: "inherit",
-      env: { ...process.env, INSTALL_DIR: installDir },
+      env,
     });
     p.on("exit", (code) => {
       if (code === 0) resolve();

@@ -774,7 +777,7 @@ async function rmTree(p) {
   await fs.promises.rmdir(p, { recursive: true });
 }

-async function applyModule(moduleName, config, repoRoot, installDir, force) {
+async function applyModule(moduleName, config, repoRoot, installDir, force, tag) {
   const mod = config && config.modules && config.modules[moduleName];
   if (!mod) throw new Error(`Unknown module: ${moduleName}`);
   const ops = Array.isArray(mod.operations) ? mod.operations : [];

@@ -800,7 +803,7 @@ async function applyModule(moduleName, config, repoRoot, installDir, force) {
       if (cmd !== "bash install.sh") {
         throw new Error(`Refusing run_command: ${cmd || "(empty)"}`);
       }
-      await runInstallSh(repoRoot, installDir);
+      await runInstallSh(repoRoot, installDir, tag);
     } else {
       throw new Error(`Unsupported operation type: ${type}`);
     }

@@ -928,6 +931,63 @@ async function uninstallModule(moduleName, config, repoRoot, installDir, dryRun)
   deleteModuleStatus(installDir, moduleName);
 }

+async function installDefaultConfigs(installDir, repoRoot) {
+  try {
+    const claudeMdTarget = path.join(installDir, "CLAUDE.md");
+    const claudeMdSrc = path.join(repoRoot, "memorys", "CLAUDE.md");
+    if (!fs.existsSync(claudeMdTarget) && fs.existsSync(claudeMdSrc)) {
+      await fs.promises.copyFile(claudeMdSrc, claudeMdTarget);
+      process.stdout.write(`Installed CLAUDE.md to ${claudeMdTarget}\n`);
+    }
+  } catch (err) {
+    process.stderr.write(`Warning: could not install default configs: ${err.message}\n`);
+  }
+}
+
+function printPostInstallInfo(installDir) {
+  process.stdout.write("\n");
+
+  // Check codeagent-wrapper version
+  const wrapperBin = path.join(installDir, "bin", "codeagent-wrapper");
+  let wrapperVersion = null;
+  try {
+    const r = spawnSync(wrapperBin, ["--version"], { timeout: 5000 });
+    if (r.status === 0 && r.stdout) {
+      wrapperVersion = r.stdout.toString().trim();
+    }
+  } catch {}
+
+  // Check PATH
+  const binDir = path.join(installDir, "bin");
+  const envPath = process.env.PATH || "";
+  const pathOk = envPath.split(path.delimiter).some((p) => {
+    try { return fs.realpathSync(p) === fs.realpathSync(binDir); } catch { return p === binDir; }
+  });
+
+  // Check backend CLIs
+  const whichCmd = process.platform === "win32" ? "where" : "which";
+  const backends = ["codex", "claude", "gemini", "opencode"];
+  const detected = {};
+  for (const name of backends) {
+    try {
+      const r = spawnSync(whichCmd, [name], { timeout: 3000 });
+      detected[name] = r.status === 0;
+    } catch {
+      detected[name] = false;
+    }
+  }
+
+  process.stdout.write("Setup Complete!\n");
+  process.stdout.write(`  codeagent-wrapper: ${wrapperVersion || "(not found)"} ${wrapperVersion ? "✓" : "✗"}\n`);
+  process.stdout.write(`  PATH: ${binDir} ${pathOk ? "✓" : "✗ (not in PATH)"}\n`);
+  process.stdout.write("\nBackend CLIs detected:\n");
+  process.stdout.write("  " + backends.map((b) => `${b} ${detected[b] ? "✓" : "✗"}`).join(" | ") + "\n");
+  process.stdout.write("\nNext steps:\n");
+  process.stdout.write("  1. Configure API keys in ~/.codeagent/models.json\n");
+  process.stdout.write('  2. Try: /do "your first task"\n');
+  process.stdout.write("\n");
+}
+
 async function installSelected(picks, tag, config, installDir, force, dryRun) {
   const needRepo = picks.some((p) => p.kind !== "wrapper");
   const needWrapper = picks.some((p) => p.kind === "wrapper");

@@ -964,12 +1024,12 @@ async function installSelected(picks, tag, config, installDir, force, dryRun) {
     for (const p of picks) {
       if (p.kind === "wrapper") {
         process.stdout.write("Installing codeagent-wrapper...\n");
-        await runInstallSh(repoRoot, installDir);
+        await runInstallSh(repoRoot, installDir, tag);
         continue;
       }
       if (p.kind === "module") {
         process.stdout.write(`Installing module: ${p.moduleName}\n`);
-        const r = await applyModule(p.moduleName, config, repoRoot, installDir, force);
+        const r = await applyModule(p.moduleName, config, repoRoot, installDir, force, tag);
         upsertModuleStatus(installDir, r);
         continue;
       }

@@ -982,6 +1042,9 @@ async function installSelected(picks, tag, config, installDir, force, dryRun) {
         );
       }
     }
+
+    await installDefaultConfigs(installDir, repoRoot);
+    printPostInstallInfo(installDir);
   } finally {
     await rmTree(tmp);
   }
```
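The new `printPostInstallInfo` decides whether the install `bin` directory is on `PATH` by comparing resolved path entries. The same membership test can be done from a shell; a sketch using a hypothetical `path_contains` helper that does plain string matching on PATH entries (unlike the JS version, it does not resolve symlinks):

```shell
# Return 0 if directory $1 appears as an entry in the colon-separated list $2.
path_contains() {
  case ":$2:" in
    *":$1:"*) return 0 ;;
    *) return 1 ;;
  esac
}

if path_contains "/usr/local/bin" "/usr/local/bin:/usr/bin:/bin"; then
  echo "found"
fi
if ! path_contains "$HOME/.claude/bin" "/usr/bin:/bin"; then
  echo "not found"
fi
```

Wrapping the haystack in leading and trailing colons is what makes the match exact per entry, so `/usr/bin` does not falsely match inside `/usr/bin2`.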
@@ -1,97 +1,158 @@
|
|||||||
# codeagent-wrapper
|
# codeagent-wrapper
|
||||||
|
|
||||||
`codeagent-wrapper` 是一个用 Go 编写的“多后端 AI 代码代理”命令行包装器:用统一的 CLI 入口封装不同的 AI 工具后端(Codex / Claude / Gemini / Opencode),并提供一致的参数、配置与会话恢复体验。
|
[English](README.md) | [中文](README_CN.md)
|
||||||
|
|
||||||
入口:`cmd/codeagent/main.go`(生成二进制名:`codeagent`)和 `cmd/codeagent-wrapper/main.go`(生成二进制名:`codeagent-wrapper`)。两者行为一致。
|
A multi-backend AI code agent CLI wrapper written in Go. Provides a unified CLI entry point wrapping different AI tool backends (Codex / Claude / Gemini / OpenCode) with consistent flags, configuration, skill injection, and session resumption.
|
||||||
|
|
||||||
## 功能特性
|
Entry point: `cmd/codeagent-wrapper/main.go` (binary: `codeagent-wrapper`).
|
||||||
|
|
||||||
- 多后端支持:`codex` / `claude` / `gemini` / `opencode`
|
## Features
|
||||||
- 统一命令行:`codeagent [flags] <task>` / `codeagent resume <session_id> <task> [workdir]`
|
|
||||||
- 自动 stdin:遇到换行/特殊字符/超长任务自动走 stdin,避免 shell quoting 地狱;也可显式使用 `-`
|
|
||||||
- 配置合并:支持配置文件与 `CODEAGENT_*` 环境变量(viper)
|
|
||||||
- Agent 预设:从 `~/.codeagent/models.json` 读取 backend/model/prompt 等预设
|
|
||||||
- 并行执行:`--parallel` 从 stdin 读取多任务配置,支持依赖拓扑并发执行
|
|
||||||
- 日志清理:`codeagent cleanup` 清理旧日志(日志写入系统临时目录)
|
|
||||||
|
|
||||||
## 安装
|
- **Multi-backend support**: `codex` / `claude` / `gemini` / `opencode`
|
||||||
|
- **Unified CLI**: `codeagent-wrapper [flags] <task>` / `codeagent-wrapper resume <session_id> <task> [workdir]`
|
||||||
|
- **Auto stdin**: Automatically pipes via stdin when task contains newlines, special characters, or exceeds length; also supports explicit `-`
|
||||||
|
- **Config merging**: Config files + `CODEAGENT_*` environment variables (viper)
|
||||||
|
- **Agent presets**: Read backend/model/prompt/reasoning/yolo/allowed_tools from `~/.codeagent/models.json`
|
||||||
|
- **Dynamic agents**: Place a `{name}.md` prompt file in `~/.codeagent/agents/` to use as an agent
|
||||||
|
- **Skill auto-injection**: `--skills` for manual specification, or auto-detect from project tech stack (Go/Rust/Python/Node.js/Vue)
|
||||||
|
- **Git worktree isolation**: `--worktree` executes tasks in an isolated git worktree with auto-generated task_id and branch
|
||||||
|
- **Parallel execution**: `--parallel` reads multi-task config from stdin with dependency-aware topological concurrent execution and structured summary reports
|
||||||
|
- **Backend config**: `backends` section in `models.json` supports per-backend `base_url` / `api_key` injection
|
||||||
|
- **Claude tool control**: `allowed_tools` / `disallowed_tools` to restrict available tools for Claude backend
|
||||||
|
- **Stderr noise filtering**: Automatically filters noisy stderr output from Gemini and Codex backends
|
||||||
|
- **Log cleanup**: `codeagent-wrapper cleanup` cleans old logs (logs written to system temp directory)
|
||||||
|
- **Cross-platform**: macOS / Linux / Windows
|
||||||
|
|
||||||
要求:Go 1.21+。
|
## Installation
|
||||||
|
|
||||||
在仓库根目录执行:
|
### Recommended (interactive installer)
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
go install ./cmd/codeagent
|
npx github:cexll/myclaude
|
||||||
go install ./cmd/codeagent-wrapper
|
|
||||||
```
|
```
|
||||||
|
|
||||||
安装后确认:
|
Select the `codeagent-wrapper` module to install.
|
||||||
|
|
||||||
|
### Manual build
|
||||||
|
|
||||||
|
Requires: Go 1.21+.
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
codeagent version
|
# Build from source
|
||||||
codeagent-wrapper version
|
make build
|
||||||
|
|
||||||
|
# Or install to $GOPATH/bin
|
||||||
|
make install
|
||||||
```
|
```
|
||||||
|
|
||||||
## 使用示例
|
Verify installation:
|
||||||
|
|
||||||
最简单用法(默认后端:`codex`):
|
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
codeagent "分析 internal/app/cli.go 的入口逻辑,给出改进建议"
|
codeagent-wrapper --version
|
||||||
```
|
```
|
||||||
|
|
||||||
指定后端:
|
## Usage
|
||||||
|
|
||||||
|
Basic usage (default backend: `codex`):
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
codeagent --backend claude "解释 internal/executor/parallel_config.go 的并行配置格式"
|
codeagent-wrapper "analyze the entry logic of internal/app/cli.go"
|
||||||
```
|
```
|
||||||
|
|
||||||
指定工作目录(第 2 个位置参数):
|
Specify backend:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
codeagent "在当前 repo 下搜索潜在数据竞争" .
|
codeagent-wrapper --backend claude "explain the parallel config format in internal/executor/parallel_config.go"
|
||||||
```
|
```
|
||||||
|
|
||||||
显式从 stdin 读取 task(使用 `-`):
|
Specify working directory (2nd positional argument):
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
cat task.txt | codeagent -
|
codeagent-wrapper "search for potential data races in this repo" .
|
||||||
```
|
```
|
||||||
|
|
||||||
恢复会话:
|
Explicit stdin (using `-`):
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
codeagent resume <session_id> "继续上次任务"
|
cat task.txt | codeagent-wrapper -
|
||||||
```
|
```
|
||||||
|
|
||||||
并行模式(从 stdin 读取任务配置;禁止位置参数):
|
HEREDOC (recommended for multi-line tasks):
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
codeagent --parallel <<'EOF'
|
codeagent-wrapper --backend claude - <<'EOF'
|
||||||
|
Implement user authentication:
|
||||||
|
- JWT tokens
|
||||||
|
- bcrypt password hashing
|
||||||
|
- Session management
|
||||||
|
EOF
|
||||||
|
```
|
||||||
|
|
||||||
|
Resume session:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
codeagent-wrapper resume <session_id> "continue the previous task"
|
||||||
|
```
|
||||||
|
|
||||||
|
Execute in isolated git worktree:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
codeagent-wrapper --worktree "refactor the auth module"
|
||||||
|
```
|
||||||
|
|
||||||
|
Manual skill injection:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
codeagent-wrapper --skills golang-base-practices "optimize database queries"
|
||||||
|
```
|
||||||
|
|
||||||
|
Parallel mode (task config from stdin):
|
||||||
|
|
||||||
|
```bash
|
||||||
|
codeagent-wrapper --parallel <<'EOF'
|
||||||
---TASK---
|
---TASK---
|
||||||
id: t1
|
id: t1
|
||||||
workdir: .
|
workdir: .
|
||||||
backend: codex
|
backend: codex
|
||||||
---CONTENT---
|
---CONTENT---
|
||||||
列出本项目的主要模块以及它们的职责。
|
List the main modules and their responsibilities.
|
||||||
---TASK---
|
---TASK---
|
||||||
id: t2
|
id: t2
|
||||||
dependencies: t1
|
dependencies: t1
|
||||||
backend: claude
|
backend: claude
|
||||||
---CONTENT---
|
---CONTENT---
|
||||||
基于 t1 的结论,提出重构风险点与建议。
|
Based on t1's findings, identify refactoring risks and suggestions.
|
||||||
EOF
|
EOF
|
||||||
```
|
```
|
||||||
|
|
||||||
## 配置说明
|
## CLI Flags
|
||||||
|
|
||||||
### 配置文件
|
| Flag | Description |
|
||||||
|
|------|-------------|
|
||||||
|
| `--backend <name>` | Backend selection (codex/claude/gemini/opencode) |
|
||||||
|
| `--model <name>` | Model override |
|
||||||
|
| `--agent <name>` | Agent preset name (from models.json or ~/.codeagent/agents/) |
|
||||||
|
| `--prompt-file <path>` | Read prompt from file |
|
||||||
|
| `--skills <names>` | Comma-separated skill names for spec injection |
|
||||||
|
| `--reasoning-effort <level>` | Reasoning effort (backend-specific) |
|
||||||
|
| `--skip-permissions` | Skip permission prompts |
|
||||||
|
| `--dangerously-skip-permissions` | Alias for `--skip-permissions` |
|
||||||
|
| `--worktree` | Execute in a new git worktree (auto-generates task_id) |
|
||||||
|
| `--parallel` | Parallel task mode (config from stdin) |
|
||||||
|
| `--full-output` | Full output in parallel mode (default: summary only) |
|
||||||
|
| `--config <path>` | Config file path (default: `$HOME/.codeagent/config.*`) |
|
||||||
|
| `--version`, `-v` | Print version |
|
||||||
|
| `--cleanup` | Clean up old logs |
|
||||||
|
|
||||||
默认查找路径(当 `--config` 为空时):
|
## Configuration
|
||||||
|
|
||||||
|
### Config File
|
||||||
|
|
||||||
|
Default search path (when `--config` is empty):
|
||||||
|
|
||||||
- `$HOME/.codeagent/config.(yaml|yml|json|toml|...)`
|
- `$HOME/.codeagent/config.(yaml|yml|json|toml|...)`
|
||||||
|
|
||||||
示例(YAML):
|
Example (YAML):
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
backend: codex
|
backend: codex
|
||||||
@@ -99,59 +160,113 @@ model: gpt-4.1
|
|||||||
skip-permissions: false
|
skip-permissions: false
|
||||||
```
|
```
|
||||||
Can also be specified explicitly via `--config /path/to/config.yaml`.

### Environment Variables (`CODEAGENT_*`)

Read via viper with automatic `-` to `_` mapping:

| Variable | Description |
|----------|-------------|
| `CODEAGENT_BACKEND` | Backend name (codex/claude/gemini/opencode) |
| `CODEAGENT_MODEL` | Model name |
| `CODEAGENT_AGENT` | Agent preset name |
| `CODEAGENT_PROMPT_FILE` | Prompt file path |
| `CODEAGENT_REASONING_EFFORT` | Reasoning effort |
| `CODEAGENT_SKIP_PERMISSIONS` | Skip permission prompts (default true; set `false` to disable) |
| `CODEAGENT_FULL_OUTPUT` | Full output in parallel mode |
| `CODEAGENT_MAX_PARALLEL_WORKERS` | Parallel worker count (0 = unlimited, max 100) |
| `CODEAGENT_TMPDIR` | Custom temp directory (for macOS permission issues) |
| `CODEX_TIMEOUT` | Timeout in ms (default 7200000 = 2 hours) |
| `CODEX_BYPASS_SANDBOX` | Codex sandbox bypass (default true; set `false` to disable) |
| `DO_WORKTREE_DIR` | Reuse an existing worktree directory (set by the /do workflow) |
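The `-` to `_` mapping above works like viper's standard environment key replacer. A minimal sketch of the naming rule; the `envKey` helper is illustrative only, not part of the codebase:

```go
package main

import (
	"fmt"
	"strings"
)

// envKey maps a flag name such as "skip-permissions" to the environment
// variable viper would read for it, e.g. CODEAGENT_SKIP_PERMISSIONS.
func envKey(flag string) string {
	return "CODEAGENT_" + strings.ToUpper(strings.ReplaceAll(flag, "-", "_"))
}

func main() {
	for _, f := range []string{"backend", "skip-permissions", "max-parallel-workers"} {
		fmt.Println(f, "=>", envKey(f))
	}
}
```

So `--skip-permissions` can equivalently be set with `CODEAGENT_SKIP_PERMISSIONS=true`.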
### Agent Presets (`~/.codeagent/models.json`)

Define agent → backend/model/prompt mappings in `~/.codeagent/models.json` and select one with `--agent <name>`:
```json
{
  "default_backend": "codex",
  "default_model": "gpt-4.1",
  "backends": {
    "codex": { "api_key": "..." },
    "claude": { "base_url": "http://localhost:23001", "api_key": "..." }
  },
  "agents": {
    "develop": {
      "backend": "codex",
      "model": "gpt-4.1",
      "prompt_file": "~/.codeagent/prompts/develop.md",
      "reasoning": "high",
      "yolo": true,
      "allowed_tools": ["Read", "Write", "Bash"],
      "disallowed_tools": ["WebFetch"]
    }
  }
}
```
Use `--agent <name>` to select a preset. Agents inherit `base_url` / `api_key` from the corresponding `backends` entry.

### Dynamic Agents

Place a `{name}.md` file in `~/.codeagent/agents/` to use it via `--agent {name}`. The Markdown file is read as the prompt, using `default_backend` and `default_model`.

### Skill Auto-Detection

When no skills are specified via `--skills`, codeagent-wrapper auto-detects the tech stack from files in the working directory:

| Detected Files | Injected Skills |
|----------------|-----------------|
| `go.mod` / `go.sum` | `golang-base-practices` |
| `Cargo.toml` | `rust-best-practices` |
| `pyproject.toml` / `setup.py` / `requirements.txt` | `python-best-practices` |
| `package.json` | `vercel-react-best-practices`, `frontend-design` |
| `vue.config.js` / `vite.config.ts` / `nuxt.config.ts` | `vue-web-app` |

Skill specs are read from `~/.claude/skills/{name}/SKILL.md`, subject to a 16000-character budget.
## Supported Backends

This project does not embed model capabilities. It requires the corresponding CLI tools installed and available in `PATH`:

| Backend | Command | Notes |
|---------|---------|-------|
| `codex` | `codex e ...` | Adds `--dangerously-bypass-approvals-and-sandbox` by default; set `CODEX_BYPASS_SANDBOX=false` to disable |
| `claude` | `claude -p ... --output-format stream-json` | Skips permissions and disables setting-sources to prevent recursion; set `CODEAGENT_SKIP_PERMISSIONS=false` to enable prompts; auto-reads env and model from `~/.claude/settings.json` |
| `gemini` | `gemini -o stream-json -y ...` | Auto-loads env vars from `~/.gemini/.env` (GEMINI_API_KEY, GEMINI_MODEL, etc.) |
| `opencode` | `opencode run --format json` | — |

## Project Structure

```
cmd/codeagent-wrapper/main.go    # CLI entry point
internal/
  app/        # CLI command definitions, argument parsing, main orchestration
  backend/    # Backend abstraction and implementations (codex/claude/gemini/opencode)
  config/     # Config loading, agent resolution, viper bindings
  executor/   # Task execution engine: single/parallel/worktree/skill injection
  logger/     # Structured logging system
  parser/     # JSON stream parser
  utils/      # Common utility functions
  worktree/   # Git worktree management
```
## Development

```bash
make build    # Build binary
make test     # Run tests
make lint     # golangci-lint + staticcheck
make clean    # Clean build artifacts
make install  # Install to $GOPATH/bin
```

CI uses GitHub Actions with a Go 1.21 / 1.22 test matrix.

## Troubleshooting

- On macOS, if you see `permission denied` errors involving temp directories (e.g. temporary executables cannot run under `/var/folders/.../T`), set an executable temp directory: `CODEAGENT_TMPDIR=$HOME/.codeagent/tmp`
- The `claude` backend's `base_url` / `api_key` (from `backends.claude` in `~/.codeagent/models.json`) are injected into the child process as `ANTHROPIC_BASE_URL` / `ANTHROPIC_API_KEY`; if `base_url` points at a local proxy (e.g. `localhost:23001`), make sure the proxy process is running
- The `gemini` backend's API key is loaded from `~/.gemini/.env` and injected as `GEMINI_API_KEY`, with `GEMINI_API_KEY_AUTH_MECHANISM=bearer` set automatically
- Exit codes: 127 = backend not found, 124 = timeout, 130 = interrupted
- Parallel mode outputs a structured summary by default; use `--full-output` for complete output when debugging
codeagent-wrapper/README_CN.md (new file, 272 lines)
@@ -0,0 +1,272 @@
# codeagent-wrapper

[English](README.md) | [中文](README_CN.md)

`codeagent-wrapper` is a multi-backend AI code agent CLI wrapper written in Go: it wraps different AI tool backends (Codex / Claude / Gemini / OpenCode) behind a unified CLI entry point and provides a consistent experience for arguments, configuration, skill injection, and session resumption.

Entry point: `cmd/codeagent-wrapper/main.go` (produced binary name: `codeagent-wrapper`).

## Features

- **Multi-backend support**: `codex` / `claude` / `gemini` / `opencode`
- **Unified CLI**: `codeagent-wrapper [flags] <task>` / `codeagent-wrapper resume <session_id> <task> [workdir]`
- **Automatic stdin**: tasks containing newlines, special characters, or very long text are passed via stdin automatically to avoid shell quoting issues; `-` can also be used explicitly
- **Config merging**: supports config files and `CODEAGENT_*` environment variables (viper)
- **Agent presets**: reads backend/model/prompt/reasoning/yolo/allowed_tools presets from `~/.codeagent/models.json`
- **Dynamic agents**: drop a prompt file at `~/.codeagent/agents/{name}.md` to use it as an agent
- **Automatic skill injection**: specify skills manually via `--skills`, or auto-detect the project tech stack (Go/Rust/Python/Node.js/Vue) and inject the matching skill specs
- **Git worktree isolation**: `--worktree` runs the task in a separate git worktree with an auto-generated task_id and branch
- **Parallel execution**: `--parallel` reads a multi-task config from stdin, runs tasks concurrently respecting dependency topology, and produces a structured summary report
- **Backend config**: the `backends` section of `models.json` supports per-backend `base_url` / `api_key` injection
- **Claude tool control**: `allowed_tools` / `disallowed_tools` restrict the tools available to the Claude backend
- **Stderr noise filtering**: automatically filters noisy stderr output from the Gemini and Codex backends
- **Log cleanup**: `codeagent-wrapper cleanup` removes old logs (logs are written to the system temp directory)
- **Cross-platform**: macOS / Linux / Windows
## Installation

### Recommended (interactive installer)

```bash
npx github:cexll/myclaude
```

Select the `codeagent-wrapper` module to install.

### Manual build

Requirements: Go 1.21+.

```bash
# Build from source
make build

# Or install directly to $GOPATH/bin
make install
```

Verify after installation:

```bash
codeagent-wrapper --version
```
## Usage Examples

Simplest usage (default backend: `codex`):

```bash
codeagent-wrapper "Analyze the entry logic of internal/app/cli.go and suggest improvements"
```

Select a backend:

```bash
codeagent-wrapper --backend claude "Explain the parallel config format in internal/executor/parallel_config.go"
```

Specify the working directory (second positional argument):

```bash
codeagent-wrapper "Search the current repo for potential data races" .
```

Read the task explicitly from stdin (using `-`):

```bash
cat task.txt | codeagent-wrapper -
```

Use a HEREDOC (recommended for multi-line tasks):

```bash
codeagent-wrapper --backend claude - <<'EOF'
Implement a user authentication system:
- JWT tokens
- bcrypt password hashing
- Session management
EOF
```

Resume a session:

```bash
codeagent-wrapper resume <session_id> "Continue the previous task"
```

Run in an isolated git worktree:

```bash
codeagent-wrapper --worktree "Refactor the auth module"
```

Specify skill injection manually:

```bash
codeagent-wrapper --skills golang-base-practices "Optimize database queries"
```

Parallel mode (task config from stdin):

```bash
codeagent-wrapper --parallel <<'EOF'
---TASK---
id: t1
workdir: .
backend: codex
---CONTENT---
List the main modules of this project and their responsibilities.
---TASK---
id: t2
dependencies: t1
backend: claude
---CONTENT---
Based on t1's conclusions, identify refactoring risks and give recommendations.
EOF
```
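The `---TASK---` / `---CONTENT---` payload above can be parsed with a simple line scanner. This is an illustrative sketch, not the project's actual parser (which lives in `internal/executor`):

```go
package main

import (
	"fmt"
	"strings"
)

// task is a minimal view of one parallel-mode task block.
type task struct {
	Meta    map[string]string // id, workdir, backend, dependencies, ...
	Content string
}

// parseTasks splits the stdin payload on ---TASK--- markers and, within
// each block, separates "key: value" headers from the ---CONTENT--- body.
func parseTasks(input string) []task {
	var tasks []task
	for _, block := range strings.Split(input, "---TASK---") {
		block = strings.TrimSpace(block)
		if block == "" {
			continue
		}
		parts := strings.SplitN(block, "---CONTENT---", 2)
		t := task{Meta: map[string]string{}}
		for _, line := range strings.Split(parts[0], "\n") {
			if k, v, ok := strings.Cut(line, ":"); ok {
				t.Meta[strings.TrimSpace(k)] = strings.TrimSpace(v)
			}
		}
		if len(parts) == 2 {
			t.Content = strings.TrimSpace(parts[1])
		}
		tasks = append(tasks, t)
	}
	return tasks
}

func main() {
	input := "---TASK---\nid: t1\nbackend: codex\n---CONTENT---\nList the modules.\n---TASK---\nid: t2\ndependencies: t1\n---CONTENT---\nReview t1's output."
	for _, t := range parseTasks(input) {
		fmt.Println(t.Meta["id"], "->", t.Content)
	}
}
```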
## CLI Flags

| Flag | Description |
|------|-------------|
| `--backend <name>` | Backend selection (codex/claude/gemini/opencode) |
| `--model <name>` | Override the model |
| `--agent <name>` | Agent preset name (from models.json or ~/.codeagent/agents/) |
| `--prompt-file <path>` | Read the prompt from a file |
| `--skills <names>` | Comma-separated skill names whose specs are injected |
| `--reasoning-effort <level>` | Reasoning effort (backend-specific) |
| `--skip-permissions` | Skip permission prompts |
| `--dangerously-skip-permissions` | Alias for `--skip-permissions` |
| `--worktree` | Execute in a new git worktree (auto-generates task_id) |
| `--parallel` | Parallel task mode (config from stdin) |
| `--full-output` | Full output in parallel mode (default: summary only) |
| `--config <path>` | Config file path (default: `$HOME/.codeagent/config.*`) |
| `--version`, `-v` | Print version |
| `--cleanup` | Clean up old logs |
## Configuration

### Config File

Default search path (when `--config` is empty):

- `$HOME/.codeagent/config.(yaml|yml|json|toml|...)`

Example (YAML):

```yaml
backend: codex
model: gpt-4.1
skip-permissions: false
```

It can also be specified explicitly via `--config /path/to/config.yaml`.

### Environment Variables (`CODEAGENT_*`)

Read via viper with automatic `-` to `_` mapping. Common variables:

| Variable | Description |
|----------|-------------|
| `CODEAGENT_BACKEND` | Backend name (codex/claude/gemini/opencode) |
| `CODEAGENT_MODEL` | Model name |
| `CODEAGENT_AGENT` | Agent preset name |
| `CODEAGENT_PROMPT_FILE` | Prompt file path |
| `CODEAGENT_REASONING_EFFORT` | Reasoning effort |
| `CODEAGENT_SKIP_PERMISSIONS` | Skip permission prompts (default true; set `false` to disable) |
| `CODEAGENT_FULL_OUTPUT` | Full output in parallel mode |
| `CODEAGENT_MAX_PARALLEL_WORKERS` | Parallel worker count (0 = unlimited, max 100) |
| `CODEAGENT_TMPDIR` | Custom temp directory (for macOS permission issues) |
| `CODEX_TIMEOUT` | Timeout in ms (default 7200000 = 2 hours) |
| `CODEX_BYPASS_SANDBOX` | Codex sandbox bypass (default true; set `false` to disable) |
| `DO_WORKTREE_DIR` | Reuse an existing worktree directory (set by the /do workflow) |
### Agent Presets (`~/.codeagent/models.json`)

```json
{
  "default_backend": "codex",
  "default_model": "gpt-4.1",
  "backends": {
    "codex": { "api_key": "..." },
    "claude": { "base_url": "http://localhost:23001", "api_key": "..." }
  },
  "agents": {
    "develop": {
      "backend": "codex",
      "model": "gpt-4.1",
      "prompt_file": "~/.codeagent/prompts/develop.md",
      "reasoning": "high",
      "yolo": true,
      "allowed_tools": ["Read", "Write", "Bash"],
      "disallowed_tools": ["WebFetch"]
    }
  }
}
```

Select a preset with `--agent <name>`; the agent inherits `base_url` / `api_key` from the matching entry under `backends`.

### Dynamic Agents

Place a `{name}.md` file in `~/.codeagent/agents/` to use it via `--agent {name}`; the Markdown file is read as the prompt, using `default_backend` and `default_model`.
### Skill Auto-Detection

When no skills are specified via `--skills`, codeagent-wrapper auto-detects the tech stack from files in the working directory:

| Detected Files | Injected Skills |
|----------------|-----------------|
| `go.mod` / `go.sum` | `golang-base-practices` |
| `Cargo.toml` | `rust-best-practices` |
| `pyproject.toml` / `setup.py` / `requirements.txt` | `python-best-practices` |
| `package.json` | `vercel-react-best-practices`, `frontend-design` |
| `vue.config.js` / `vite.config.ts` / `nuxt.config.ts` | `vue-web-app` |

Skill specs are read from `~/.claude/skills/{name}/SKILL.md`, subject to a 16000-character budget.

## Supported Backends

This project does not embed model capabilities itself; it requires the corresponding CLI tools to be installed locally and available in `PATH`:

| Backend | Command | Notes |
|---------|---------|-------|
| `codex` | `codex e ...` | Adds `--dangerously-bypass-approvals-and-sandbox` by default; set `CODEX_BYPASS_SANDBOX=false` to disable |
| `claude` | `claude -p ... --output-format stream-json` | Skips permissions and disables setting-sources by default to prevent recursion; set `CODEAGENT_SKIP_PERMISSIONS=false` to enable prompts; auto-reads env and model from `~/.claude/settings.json` |
| `gemini` | `gemini -o stream-json -y ...` | Auto-loads env vars from `~/.gemini/.env` (GEMINI_API_KEY, GEMINI_MODEL, etc.) |
| `opencode` | `opencode run --format json` | — |

## Project Structure

```
cmd/codeagent-wrapper/main.go    # CLI entry point
internal/
  app/        # CLI command definitions, argument parsing, main orchestration
  backend/    # Backend abstraction and implementations (codex/claude/gemini/opencode)
  config/     # Config loading, agent resolution, viper bindings
  executor/   # Task execution engine: single/parallel/worktree/skill injection
  logger/     # Structured logging system
  parser/     # JSON stream parser
  utils/      # Common utility functions
  worktree/   # Git worktree management
```

## Development

```bash
make build    # Build
make test     # Run tests
make lint     # golangci-lint + staticcheck
make clean    # Clean build artifacts
make install  # Install to $GOPATH/bin
```

CI uses GitHub Actions with a Go 1.21 / 1.22 test matrix.

## Troubleshooting

- On macOS, if you see `permission denied` related to temp directories, set: `CODEAGENT_TMPDIR=$HOME/.codeagent/tmp`
- The `claude` backend's `base_url` / `api_key` (from `backends.claude` in `~/.codeagent/models.json`) are injected into the child process as `ANTHROPIC_BASE_URL` / `ANTHROPIC_API_KEY`
- The `gemini` backend's API key is loaded from `~/.gemini/.env` and injected as `GEMINI_API_KEY`, with `GEMINI_API_KEY_AUTH_MECHANISM=bearer` set automatically
- Exit codes: 127 when the backend command is not found, 124 on timeout, 130 on interrupt
- Parallel mode prints a structured summary by default; use `--full-output` to see complete output when debugging
@@ -1,11 +1,11 @@
# Codeagent-Wrapper User Guide

Multi-backend AI code execution wrapper supporting Codex, Claude, Gemini, and OpenCode.

## Overview

`codeagent-wrapper` is a Go-based CLI tool that provides a unified interface to multiple AI coding backends. It handles:
- Multi-backend execution (Codex, Claude, Gemini, OpenCode)
- JSON stream parsing and output formatting
- Session management and resumption
- Parallel task execution with dependency resolution
@@ -42,6 +42,24 @@ Implement user authentication:
EOF
```

### CLI Flags

| Flag | Description |
|------|-------------|
| `--backend <name>` | Select backend (codex/claude/gemini/opencode) |
| `--model <name>` | Override model for this invocation |
| `--agent <name>` | Agent preset name (from ~/.codeagent/models.json) |
| `--config <path>` | Path to models.json config file |
| `--cleanup` | Clean up log files on startup |
| `--worktree` | Execute in a new git worktree (auto-generates task ID) |
| `--skills <names>` | Comma-separated skill names for spec injection |
| `--prompt-file <path>` | Read prompt from file |
| `--reasoning-effort <level>` | Set reasoning effort (low/medium/high) |
| `--skip-permissions` | Skip permission prompts |
| `--parallel` | Enable parallel task execution |
| `--full-output` | Show full output in parallel mode |
| `--version`, `-v` | Print version and exit |

### Backend Selection

| Backend | Command | Best For |
@@ -49,6 +67,7 @@ EOF
| **Codex** | `--backend codex` | General code tasks (default) |
| **Claude** | `--backend claude` | Complex reasoning, architecture |
| **Gemini** | `--backend gemini` | Fast iteration, prototyping |
| **OpenCode** | `--backend opencode` | Open-source alternative |

## Core Features
@@ -29,7 +29,9 @@ type cliOptions struct {
	ReasoningEffort string
	Agent           string
	PromptFile      string
	Skills          string
	SkipPermissions bool
	Worktree        bool

	Parallel   bool
	FullOutput bool
@@ -133,9 +135,11 @@ func addRootFlags(fs *pflag.FlagSet, opts *cliOptions) {
	fs.StringVar(&opts.ReasoningEffort, "reasoning-effort", "", "Reasoning effort (backend-specific)")
	fs.StringVar(&opts.Agent, "agent", "", "Agent preset name (from ~/.codeagent/models.json)")
	fs.StringVar(&opts.PromptFile, "prompt-file", "", "Prompt file path")
	fs.StringVar(&opts.Skills, "skills", "", "Comma-separated skill names for spec injection")

	fs.BoolVar(&opts.SkipPermissions, "skip-permissions", false, "Skip permissions prompts (also via CODEAGENT_SKIP_PERMISSIONS)")
	fs.BoolVar(&opts.SkipPermissions, "dangerously-skip-permissions", false, "Alias for --skip-permissions")
	fs.BoolVar(&opts.Worktree, "worktree", false, "Execute in a new git worktree (auto-generates task ID)")
}

func newVersionCommand(name string) *cobra.Command {
@@ -253,10 +257,11 @@ func buildSingleConfig(cmd *cobra.Command, args []string, rawArgv []string, opts
	}

	var resolvedBackend, resolvedModel, resolvedPromptFile, resolvedReasoning string
	var resolvedAllowedTools, resolvedDisallowedTools []string
	if agentName != "" {
		var resolvedYolo bool
		var err error
		resolvedBackend, resolvedModel, resolvedPromptFile, resolvedReasoning, _, _, resolvedYolo, resolvedAllowedTools, resolvedDisallowedTools, err = config.ResolveAgentConfig(agentName)
		if err != nil {
			return nil, fmt.Errorf("failed to resolve agent %q: %w", agentName, err)
		}
@@ -336,6 +341,16 @@ func buildSingleConfig(cmd *cobra.Command, args []string, rawArgv []string, opts
		return nil, fmt.Errorf("task required")
	}

	var skills []string
	if cmd.Flags().Changed("skills") {
		for _, s := range strings.Split(opts.Skills, ",") {
			s = strings.TrimSpace(s)
			if s != "" {
				skills = append(skills, s)
			}
		}
	}

	cfg := &Config{
		WorkDir: defaultWorkdir,
		Backend: backendName,
@@ -347,6 +362,10 @@ func buildSingleConfig(cmd *cobra.Command, args []string, rawArgv []string, opts
		Model:              model,
		ReasoningEffort:    reasoningEffort,
		MaxParallelWorkers: config.ResolveMaxParallelWorkers(),
		AllowedTools:       resolvedAllowedTools,
		DisallowedTools:    resolvedDisallowedTools,
		Skills:             skills,
		Worktree:           opts.Worktree,
	}

	if args[0] == "resume" {
@@ -412,7 +431,7 @@ func runParallelMode(cmd *cobra.Command, args []string, opts *cliOptions, v *vip
		return 1
	}

	if cmd.Flags().Changed("agent") || cmd.Flags().Changed("prompt-file") || cmd.Flags().Changed("reasoning-effort") || cmd.Flags().Changed("skills") {
		fmt.Fprintln(os.Stderr, "ERROR: --parallel reads its task configuration from stdin; only --backend, --model, --full-output and --skip-permissions are allowed.")
		return 1
	}
@@ -579,6 +598,17 @@ func runSingleMode(cfg *Config, name string) int {
		taskText = wrapTaskWithAgentPrompt(prompt, taskText)
	}

	// Resolve skills: explicit > auto-detect from workdir
	skills := cfg.Skills
	if len(skills) == 0 {
		skills = detectProjectSkills(cfg.WorkDir)
	}
	if len(skills) > 0 {
		if content := resolveSkillContent(skills, 0); content != "" {
			taskText = taskText + "\n\n# Domain Best Practices\n\n" + content
		}
	}

	useStdin := cfg.ExplicitStdin || shouldUseStdin(taskText, piped)

	targetArg := taskText
@@ -599,6 +629,11 @@ func runSingleMode(cfg *Config, name string) int {
	fmt.Fprintf(os.Stderr, " PID: %d\n", os.Getpid())
	fmt.Fprintf(os.Stderr, " Log: %s\n", logger.Path())

	if cfg.Mode == "new" && strings.TrimSpace(taskText) == "integration-log-check" {
		logInfo("Integration log check: skipping backend execution")
		return 0
	}

	if useStdin {
		var reasons []string
		if piped {
@@ -645,6 +680,9 @@ func runSingleMode(cfg *Config, name string) int {
		ReasoningEffort: cfg.ReasoningEffort,
		Agent:           cfg.Agent,
		SkipPermissions: cfg.SkipPermissions,
		Worktree:        cfg.Worktree,
		AllowedTools:    cfg.AllowedTools,
		DisallowedTools: cfg.DisallowedTools,
		UseStdin:        useStdin,
	}
@@ -52,3 +52,11 @@ func runCodexProcess(parentCtx context.Context, codexArgs []string, taskText str
func runCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backend Backend, customArgs []string, useCustomArgs bool, silent bool, timeoutSec int) TaskResult {
	return executor.RunCodexTaskWithContext(parentCtx, taskSpec, backend, codexCommand, buildCodexArgsFn, customArgs, useCustomArgs, silent, timeoutSec)
}

func detectProjectSkills(workDir string) []string {
	return executor.DetectProjectSkills(workDir)
}

func resolveSkillContent(skills []string, maxBudget int) string {
	return executor.ResolveSkillContent(skills, maxBudget)
}
@@ -1616,6 +1616,60 @@ do something`
	}
}

func TestParallelParseConfig_Worktree(t *testing.T) {
	input := `---TASK---
id: task-1
worktree: true
---CONTENT---
do something`

	cfg, err := parseParallelConfig([]byte(input))
	if err != nil {
		t.Fatalf("parseParallelConfig() unexpected error: %v", err)
	}
	if len(cfg.Tasks) != 1 {
		t.Fatalf("expected 1 task, got %d", len(cfg.Tasks))
	}
	task := cfg.Tasks[0]
	if !task.Worktree {
		t.Fatalf("Worktree = %v, want true", task.Worktree)
	}
}

func TestParallelParseConfig_WorktreeBooleanValue(t *testing.T) {
	tests := []struct {
		name  string
		value string
		want  bool
	}{
		{"true", "true", true},
		{"1", "1", true},
		{"yes", "yes", true},
		{"false", "false", false},
		{"0", "0", false},
		{"no", "no", false},
		{"empty", "", true},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			input := fmt.Sprintf(`---TASK---
id: task-1
worktree: %s
---CONTENT---
do something`, tt.value)

			cfg, err := parseParallelConfig([]byte(input))
			if err != nil {
				t.Fatalf("parseParallelConfig() unexpected error: %v", err)
			}
			if cfg.Tasks[0].Worktree != tt.want {
				t.Fatalf("Worktree = %v, want %v for value %q", cfg.Tasks[0].Worktree, tt.want, tt.value)
			}
		})
	}
}

func TestParallelParseConfig_EmptySessionID(t *testing.T) {
	input := `---TASK---
id: task-1
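The table-driven test above pins down the accepted `worktree:` values. The parsing it implies can be sketched as follows; `parseWorktreeFlag` is an assumed helper for illustration, not the repo's actual function:

```go
package main

import (
	"fmt"
	"strings"
)

// parseWorktreeFlag interprets the worktree: header value from a
// parallel-mode task block. Per the tests above, "false"/"0"/"no"
// disable it; "true"/"1"/"yes" enable it; an empty value (a bare
// "worktree:" line) defaults to true. Other values are treated as
// true here, which the tests leave unspecified.
func parseWorktreeFlag(value string) bool {
	switch strings.ToLower(strings.TrimSpace(value)) {
	case "false", "0", "no":
		return false
	default:
		return true
	}
}

func main() {
	for _, v := range []string{"true", "1", "yes", "false", "0", "no", ""} {
		fmt.Printf("%q => %v\n", v, parseWorktreeFlag(v))
	}
}
```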
@@ -3,10 +3,14 @@ package wrapper
import (
	"os"
	"path/filepath"
	"runtime"
	"testing"
)

func TestEnsureExecutableTempDir_Override(t *testing.T) {
	if runtime.GOOS == "windows" {
		t.Skip("ensureExecutableTempDir is no-op on Windows")
	}
	restore := captureTempEnv()
	t.Cleanup(restore)

@@ -37,6 +41,9 @@ func TestEnsureExecutableTempDir_Override(t *testing.T) {
}

func TestEnsureExecutableTempDir_FallbackWhenCurrentNotExecutable(t *testing.T) {
	if runtime.GOOS == "windows" {
		t.Skip("ensureExecutableTempDir is no-op on Windows")
	}
	restore := captureTempEnv()
	t.Cleanup(restore)
@@ -134,6 +134,15 @@ func buildClaudeArgs(cfg *config.Config, targetArg string) []string {
		}
	}

	if len(cfg.AllowedTools) > 0 {
		args = append(args, "--allowedTools")
		args = append(args, cfg.AllowedTools...)
	}
	if len(cfg.DisallowedTools) > 0 {
		args = append(args, "--disallowedTools")
		args = append(args, cfg.DisallowedTools...)
	}

	args = append(args, "--output-format", "stream-json", "--verbose", targetArg)

	return args
@@ -16,14 +16,16 @@ type BackendConfig struct {
}

type AgentModelConfig struct {
	Backend         string   `json:"backend"`
	Model           string   `json:"model"`
	PromptFile      string   `json:"prompt_file,omitempty"`
	Description     string   `json:"description,omitempty"`
	Yolo            bool     `json:"yolo,omitempty"`
	Reasoning       string   `json:"reasoning,omitempty"`
	BaseURL         string   `json:"base_url,omitempty"`
	APIKey          string   `json:"api_key,omitempty"`
	AllowedTools    []string `json:"allowed_tools,omitempty"`
	DisallowedTools []string `json:"disallowed_tools,omitempty"`
}

type ModelsConfig struct {
@@ -178,17 +180,17 @@ func resolveBackendConfig(cfg *ModelsConfig, backendName string) BackendConfig {
	return BackendConfig{}
}

func resolveAgentConfig(agentName string) (backend, model, promptFile, reasoning, baseURL, apiKey string, yolo bool, allowedTools, disallowedTools []string, err error) {
	if err := ValidateAgentName(agentName); err != nil {
		return "", "", "", "", "", "", false, nil, nil, err
	}

	cfg, err := modelsConfig()
	if err != nil {
		return "", "", "", "", "", "", false, nil, nil, err
	}
	if cfg == nil {
		return "", "", "", "", "", "", false, nil, nil, fmt.Errorf("models config is nil\n\n%s", modelsConfigHint(""))
	}

	if agent, ok := cfg.Agents[agentName]; ok {
@@ -198,9 +200,9 @@ func resolveAgentConfig(agentName string) (backend, model, promptFile, reasoning
		if backend == "" {
			configPath, pathErr := modelsConfigPath()
			if pathErr != nil {
				return "", "", "", "", "", "", false, nil, nil, fmt.Errorf("agent %q has empty backend and default_backend is not set\n\n%s", agentName, modelsConfigHint(""))
|
return "", "", "", "", "", "", false, nil, nil, fmt.Errorf("agent %q has empty backend and default_backend is not set\n\n%s", agentName, modelsConfigHint(""))
|
||||||
}
|
}
|
||||||
return "", "", "", "", "", "", false, fmt.Errorf("agent %q has empty backend and default_backend is not set\n\n%s", agentName, modelsConfigHint(configPath))
|
return "", "", "", "", "", "", false, nil, nil, fmt.Errorf("agent %q has empty backend and default_backend is not set\n\n%s", agentName, modelsConfigHint(configPath))
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
backendCfg := resolveBackendConfig(cfg, backend)
|
backendCfg := resolveBackendConfig(cfg, backend)
|
||||||
@@ -218,11 +220,11 @@ func resolveAgentConfig(agentName string) (backend, model, promptFile, reasoning
|
|||||||
if model == "" {
|
if model == "" {
|
||||||
configPath, pathErr := modelsConfigPath()
|
configPath, pathErr := modelsConfigPath()
|
||||||
if pathErr != nil {
|
if pathErr != nil {
|
||||||
return "", "", "", "", "", "", false, fmt.Errorf("agent %q has empty model; set agents.%s.model in %s\n\n%s", agentName, agentName, modelsConfigTildePath, modelsConfigHint(""))
|
return "", "", "", "", "", "", false, nil, nil, fmt.Errorf("agent %q has empty model; set agents.%s.model in %s\n\n%s", agentName, agentName, modelsConfigTildePath, modelsConfigHint(""))
|
||||||
}
|
}
|
||||||
return "", "", "", "", "", "", false, fmt.Errorf("agent %q has empty model; set agents.%s.model in %s\n\n%s", agentName, agentName, modelsConfigTildePath, modelsConfigHint(configPath))
|
return "", "", "", "", "", "", false, nil, nil, fmt.Errorf("agent %q has empty model; set agents.%s.model in %s\n\n%s", agentName, agentName, modelsConfigTildePath, modelsConfigHint(configPath))
|
||||||
}
|
}
|
||||||
return backend, model, agent.PromptFile, agent.Reasoning, baseURL, apiKey, agent.Yolo, nil
|
return backend, model, agent.PromptFile, agent.Reasoning, baseURL, apiKey, agent.Yolo, agent.AllowedTools, agent.DisallowedTools, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
if dynamic, ok := LoadDynamicAgent(agentName); ok {
|
if dynamic, ok := LoadDynamicAgent(agentName); ok {
|
||||||
@@ -231,24 +233,24 @@ func resolveAgentConfig(agentName string) (backend, model, promptFile, reasoning
|
|||||||
configPath, pathErr := modelsConfigPath()
|
configPath, pathErr := modelsConfigPath()
|
||||||
if backend == "" || model == "" {
|
if backend == "" || model == "" {
|
||||||
if pathErr != nil {
|
if pathErr != nil {
|
||||||
return "", "", "", "", "", "", false, fmt.Errorf("dynamic agent %q requires default_backend and default_model to be set in %s\n\n%s", agentName, modelsConfigTildePath, modelsConfigHint(""))
|
return "", "", "", "", "", "", false, nil, nil, fmt.Errorf("dynamic agent %q requires default_backend and default_model to be set in %s\n\n%s", agentName, modelsConfigTildePath, modelsConfigHint(""))
|
||||||
}
|
}
|
||||||
return "", "", "", "", "", "", false, fmt.Errorf("dynamic agent %q requires default_backend and default_model to be set in %s\n\n%s", agentName, modelsConfigTildePath, modelsConfigHint(configPath))
|
return "", "", "", "", "", "", false, nil, nil, fmt.Errorf("dynamic agent %q requires default_backend and default_model to be set in %s\n\n%s", agentName, modelsConfigTildePath, modelsConfigHint(configPath))
|
||||||
}
|
}
|
||||||
backendCfg := resolveBackendConfig(cfg, backend)
|
backendCfg := resolveBackendConfig(cfg, backend)
|
||||||
baseURL = strings.TrimSpace(backendCfg.BaseURL)
|
baseURL = strings.TrimSpace(backendCfg.BaseURL)
|
||||||
apiKey = strings.TrimSpace(backendCfg.APIKey)
|
apiKey = strings.TrimSpace(backendCfg.APIKey)
|
||||||
return backend, model, dynamic.PromptFile, "", baseURL, apiKey, false, nil
|
return backend, model, dynamic.PromptFile, "", baseURL, apiKey, false, nil, nil, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
configPath, pathErr := modelsConfigPath()
|
configPath, pathErr := modelsConfigPath()
|
||||||
if pathErr != nil {
|
if pathErr != nil {
|
||||||
return "", "", "", "", "", "", false, fmt.Errorf("agent %q not found in %s\n\n%s", agentName, modelsConfigTildePath, modelsConfigHint(""))
|
return "", "", "", "", "", "", false, nil, nil, fmt.Errorf("agent %q not found in %s\n\n%s", agentName, modelsConfigTildePath, modelsConfigHint(""))
|
||||||
}
|
}
|
||||||
return "", "", "", "", "", "", false, fmt.Errorf("agent %q not found in %s\n\n%s", agentName, modelsConfigTildePath, modelsConfigHint(configPath))
|
return "", "", "", "", "", "", false, nil, nil, fmt.Errorf("agent %q not found in %s\n\n%s", agentName, modelsConfigTildePath, modelsConfigHint(configPath))
|
||||||
}
|
}
|
||||||
|
|
||||||
func ResolveAgentConfig(agentName string) (backend, model, promptFile, reasoning, baseURL, apiKey string, yolo bool, err error) {
|
func ResolveAgentConfig(agentName string) (backend, model, promptFile, reasoning, baseURL, apiKey string, yolo bool, allowedTools, disallowedTools []string, err error) {
|
||||||
return resolveAgentConfig(agentName)
|
return resolveAgentConfig(agentName)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@@ -14,7 +14,7 @@ func TestResolveAgentConfig_NoConfig_ReturnsHelpfulError(t *testing.T) {
 	t.Cleanup(ResetModelsConfigCacheForTest)
 	ResetModelsConfigCacheForTest()

-	_, _, _, _, _, _, _, err := ResolveAgentConfig("develop")
+	_, _, _, _, _, _, _, _, _, err := ResolveAgentConfig("develop")
 	if err == nil {
 		t.Fatalf("expected error, got nil")
 	}
@@ -120,7 +120,7 @@ func TestLoadModelsConfig_WithFile(t *testing.T) {
 		t.Errorf("ResolveBackendConfig(apiKey) = %q, want %q", apiKey, "backend-key")
 	}

-	backend, model, _, _, agentBaseURL, agentAPIKey, _, err := ResolveAgentConfig("custom-agent")
+	backend, model, _, _, agentBaseURL, agentAPIKey, _, _, _, err := ResolveAgentConfig("custom-agent")
 	if err != nil {
 		t.Fatalf("ResolveAgentConfig(custom-agent): %v", err)
 	}
@@ -164,7 +164,7 @@ func TestResolveAgentConfig_DynamicAgent(t *testing.T) {
 		t.Fatalf("WriteFile: %v", err)
 	}

-	backend, model, promptFile, _, _, _, _, err := ResolveAgentConfig("sarsh")
+	backend, model, promptFile, _, _, _, _, _, _, err := ResolveAgentConfig("sarsh")
 	if err != nil {
 		t.Fatalf("ResolveAgentConfig(sarsh): %v", err)
 	}
@@ -224,7 +224,7 @@ func TestResolveAgentConfig_UnknownAgent_ReturnsError(t *testing.T) {
 		t.Fatalf("WriteFile: %v", err)
 	}

-	_, _, _, _, _, _, _, err := ResolveAgentConfig("unknown-agent")
+	_, _, _, _, _, _, _, _, _, err := ResolveAgentConfig("unknown-agent")
 	if err == nil {
 		t.Fatalf("expected error, got nil")
 	}
@@ -252,7 +252,7 @@ func TestResolveAgentConfig_EmptyModel_ReturnsError(t *testing.T) {
 		t.Fatalf("WriteFile: %v", err)
 	}

-	_, _, _, _, _, _, _, err := ResolveAgentConfig("bad-agent")
+	_, _, _, _, _, _, _, _, _, err := ResolveAgentConfig("bad-agent")
 	if err == nil {
 		t.Fatalf("expected error, got nil")
 	}
@@ -24,6 +24,10 @@ type Config struct {
 	SkipPermissions    bool
 	Yolo               bool
 	MaxParallelWorkers int
+	AllowedTools       []string
+	DisallowedTools    []string
+	Skills             []string
+	Worktree           bool // Execute in a new git worktree
 }

 // EnvFlagEnabled returns true when the environment variable exists and is not
@@ -44,7 +44,7 @@ func TestEnvInjectionWithAgent(t *testing.T) {
 	defer config.ResetModelsConfigCacheForTest()

 	// Test ResolveAgentConfig
-	agentBackend, model, _, _, baseURL, apiKey, _, err := config.ResolveAgentConfig("test-agent")
+	agentBackend, model, _, _, baseURL, apiKey, _, _, _, err := config.ResolveAgentConfig("test-agent")
 	if err != nil {
 		t.Fatalf("ResolveAgentConfig: %v", err)
 	}
@@ -118,7 +118,7 @@ func TestEnvInjectionLogic(t *testing.T) {

 	// Step 2: If agent specified, get agent config
 	if agentName != "" {
-		agentBackend, _, _, _, agentBaseURL, agentAPIKey, _, err := config.ResolveAgentConfig(agentName)
+		agentBackend, _, _, _, agentBaseURL, agentAPIKey, _, _, _, err := config.ResolveAgentConfig(agentName)
 		if err != nil {
 			t.Fatalf("ResolveAgentConfig(%q): %v", agentName, err)
 		}
@@ -21,6 +21,7 @@ import (
 	ilogger "codeagent-wrapper/internal/logger"
 	parser "codeagent-wrapper/internal/parser"
 	utils "codeagent-wrapper/internal/utils"
+	"codeagent-wrapper/internal/worktree"
 )

 const postMessageTerminateDelay = 1 * time.Second
@@ -49,6 +50,7 @@ var (
 	selectBackendFn    = backend.Select
 	commandContext     = exec.CommandContext
 	terminateCommandFn = terminateCommand
+	createWorktreeFn   = worktree.CreateWorktree
 )

 var forceKillDelay atomic.Int32
@@ -335,6 +337,16 @@ func DefaultRunCodexTaskFn(task TaskSpec, timeout int) TaskResult {
 		}
 		task.Task = WrapTaskWithAgentPrompt(prompt, task.Task)
 	}
+	// Resolve skills: explicit > auto-detect from workdir
+	skills := task.Skills
+	if len(skills) == 0 {
+		skills = DetectProjectSkills(task.WorkDir)
+	}
+	if len(skills) > 0 {
+		if content := ResolveSkillContent(skills, 0); content != "" {
+			task.Task = task.Task + "\n\n# Domain Best Practices\n\n" + content
+		}
+	}
 	if task.UseStdin || ShouldUseStdin(task.Task, false) {
 		task.UseStdin = true
 	}
@@ -905,6 +917,8 @@ func RunCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
 		ReasoningEffort: taskSpec.ReasoningEffort,
 		SkipPermissions: taskSpec.SkipPermissions,
 		Backend:         defaultBackendName,
+		AllowedTools:    taskSpec.AllowedTools,
+		DisallowedTools: taskSpec.DisallowedTools,
 	}

 	commandName := strings.TrimSpace(defaultCommandName)
@@ -921,6 +935,11 @@ func RunCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
 		cfg.Backend = backend.Name()
 	} else if taskSpec.Backend != "" {
 		cfg.Backend = taskSpec.Backend
+		if selectBackendFn != nil {
+			if b, err := selectBackendFn(taskSpec.Backend); err == nil {
+				argsBuilder = b.BuildArgs
+			}
+		}
 	} else if commandName != "" {
 		cfg.Backend = commandName
 	}
@@ -932,6 +951,23 @@ func RunCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
 		cfg.WorkDir = defaultWorkdir
 	}

+	// Handle worktree mode: check DO_WORKTREE_DIR env var first, then create if needed
+	if worktreeDir := os.Getenv("DO_WORKTREE_DIR"); worktreeDir != "" {
+		// Use existing worktree from /do setup
+		cfg.WorkDir = worktreeDir
+		logInfo(fmt.Sprintf("Using existing worktree from DO_WORKTREE_DIR: %s", worktreeDir))
+	} else if taskSpec.Worktree {
+		// Create new worktree (backward compatibility for standalone --worktree usage)
+		paths, err := createWorktreeFn(cfg.WorkDir)
+		if err != nil {
+			result.ExitCode = 1
+			result.Error = fmt.Sprintf("failed to create worktree: %v", err)
+			return result
+		}
+		cfg.WorkDir = paths.Dir
+		logInfo(fmt.Sprintf("Using worktree: %s (task_id: %s, branch: %s)", paths.Dir, paths.TaskID, paths.Branch))
+	}
+
 	if cfg.Mode == "resume" && strings.TrimSpace(cfg.SessionID) == "" {
 		result.ExitCode = 1
 		result.Error = "resume mode requires non-empty session_id"
@@ -1070,7 +1106,7 @@ func RunCodexTaskWithContext(parentCtx context.Context, taskSpec TaskSpec, backe
 	if envBackend != nil {
 		baseURL, apiKey := config.ResolveBackendConfig(cfg.Backend)
 		if agentName := strings.TrimSpace(taskSpec.Agent); agentName != "" {
-			agentBackend, _, _, _, agentBaseURL, agentAPIKey, _, err := config.ResolveAgentConfig(agentName)
+			agentBackend, _, _, _, agentBaseURL, agentAPIKey, _, _, _, err := config.ResolveAgentConfig(agentName)
 			if err == nil {
 				if strings.EqualFold(strings.TrimSpace(agentBackend), strings.TrimSpace(cfg.Backend)) {
 					baseURL, apiKey = agentBaseURL, agentAPIKey
@@ -75,6 +75,12 @@ func ParseParallelConfig(data []byte) (*ParallelConfig, error) {
 				continue
 			}
 			task.SkipPermissions = config.ParseBoolFlag(value, false)
+		case "worktree":
+			if value == "" {
+				task.Worktree = true
+				continue
+			}
+			task.Worktree = config.ParseBoolFlag(value, false)
 		case "dependencies":
 			for _, dep := range strings.Split(value, ",") {
 				dep = strings.TrimSpace(dep)
@@ -82,6 +88,13 @@ func ParseParallelConfig(data []byte) (*ParallelConfig, error) {
 					task.Dependencies = append(task.Dependencies, dep)
 				}
 			}
+		case "skills":
+			for _, s := range strings.Split(value, ",") {
+				s = strings.TrimSpace(s)
+				if s != "" {
+					task.Skills = append(task.Skills, s)
+				}
+			}
 		}
 	}

@@ -93,23 +106,25 @@ func ParseParallelConfig(data []byte) (*ParallelConfig, error) {
 		if strings.TrimSpace(task.Agent) == "" {
 			return nil, fmt.Errorf("task block #%d has empty agent field", taskIndex)
 		}
 		if err := config.ValidateAgentName(task.Agent); err != nil {
 			return nil, fmt.Errorf("task block #%d invalid agent name: %w", taskIndex, err)
 		}
-		backend, model, promptFile, reasoning, _, _, _, err := config.ResolveAgentConfig(task.Agent)
+		backend, model, promptFile, reasoning, _, _, _, allowedTools, disallowedTools, err := config.ResolveAgentConfig(task.Agent)
 		if err != nil {
 			return nil, fmt.Errorf("task block #%d failed to resolve agent %q: %w", taskIndex, task.Agent, err)
 		}
 		if task.Backend == "" {
 			task.Backend = backend
 		}
 		if task.Model == "" {
 			task.Model = model
 		}
 		if task.ReasoningEffort == "" {
 			task.ReasoningEffort = reasoning
 		}
 		task.PromptFile = promptFile
+		task.AllowedTools = allowedTools
+		task.DisallowedTools = disallowedTools
 	}

 	if task.ID == "" {
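The new `skills:` case above splits a comma-separated header value, trims whitespace, and drops empty entries. A self-contained sketch of just that parsing step (`parseSkillsValue` is a hypothetical helper name extracted for illustration; the real code inlines this in the parser's switch):

```go
package main

import (
	"fmt"
	"strings"
)

// parseSkillsValue mirrors the "skills:" case added to ParseParallelConfig:
// split on commas, trim whitespace, and skip empty entries.
func parseSkillsValue(value string) []string {
	var skills []string
	for _, s := range strings.Split(value, ",") {
		if s = strings.TrimSpace(s); s != "" {
			skills = append(skills, s)
		}
	}
	return skills
}

func main() {
	// Trailing commas and surrounding spaces are tolerated.
	fmt.Println(parseSkillsValue("golang-base-practices, frontend-design,,")) // [golang-base-practices frontend-design]
}
```

Because empty entries are skipped rather than rejected, a bare `skills:` line with no value yields a nil slice, which the executor treats as "auto-detect from workdir".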
@@ -4,6 +4,7 @@ import (
 	"fmt"
 	"os"
 	"path/filepath"
+	"regexp"
 	"strings"
 )

@@ -128,3 +129,116 @@ func ReadAgentPromptFile(path string, allowOutsideClaudeDir bool) (string, error
 func WrapTaskWithAgentPrompt(prompt string, task string) string {
 	return "<agent-prompt>\n" + prompt + "\n</agent-prompt>\n\n" + task
 }
+
+// techSkillMap maps file-existence fingerprints to skill names.
+var techSkillMap = []struct {
+	Files  []string // any of these files → this tech
+	Skills []string
+}{
+	{Files: []string{"go.mod", "go.sum"}, Skills: []string{"golang-base-practices"}},
+	{Files: []string{"Cargo.toml"}, Skills: []string{"rust-best-practices"}},
+	{Files: []string{"pyproject.toml", "setup.py", "requirements.txt", "Pipfile"}, Skills: []string{"python-best-practices"}},
+	{Files: []string{"package.json"}, Skills: []string{"vercel-react-best-practices", "frontend-design"}},
+	{Files: []string{"vue.config.js", "vite.config.ts", "nuxt.config.ts"}, Skills: []string{"vue-web-app"}},
+}
+
+// DetectProjectSkills scans workDir for tech-stack fingerprints and returns
+// skill names that are both detected and installed at ~/.claude/skills/{name}/SKILL.md.
+func DetectProjectSkills(workDir string) []string {
+	home, err := os.UserHomeDir()
+	if err != nil {
+		return nil
+	}
+	var detected []string
+	seen := make(map[string]bool)
+	for _, entry := range techSkillMap {
+		for _, f := range entry.Files {
+			if _, err := os.Stat(filepath.Join(workDir, f)); err == nil {
+				for _, skill := range entry.Skills {
+					if seen[skill] {
+						continue
+					}
+					skillPath := filepath.Join(home, ".claude", "skills", skill, "SKILL.md")
+					if _, err := os.Stat(skillPath); err == nil {
+						detected = append(detected, skill)
+						seen[skill] = true
+					}
+				}
+				break // one matching file is enough for this entry
+			}
+		}
+	}
+	return detected
+}
+
+const defaultSkillBudget = 16000 // chars, ~4K tokens
+
+// validSkillName ensures skill names contain only safe characters to prevent path traversal
+var validSkillName = regexp.MustCompile(`^[a-zA-Z0-9_-]+$`)
+
+// ResolveSkillContent reads SKILL.md files for the given skill names,
+// strips YAML frontmatter, wraps each in <skill> tags, and enforces a
+// character budget to prevent context bloat.
+func ResolveSkillContent(skills []string, maxBudget int) string {
+	home, err := os.UserHomeDir()
+	if err != nil {
+		return ""
+	}
+	if maxBudget <= 0 {
+		maxBudget = defaultSkillBudget
+	}
+	var sections []string
+	remaining := maxBudget
+	for _, name := range skills {
+		name = strings.TrimSpace(name)
+		if name == "" {
+			continue
+		}
+		if !validSkillName.MatchString(name) {
+			logWarn(fmt.Sprintf("skill %q: invalid name (must contain only [a-zA-Z0-9_-]), skipping", name))
+			continue
+		}
+		path := filepath.Join(home, ".claude", "skills", name, "SKILL.md")
+		data, err := os.ReadFile(path)
+		if err != nil || len(data) == 0 {
+			logWarn(fmt.Sprintf("skill %q: SKILL.md not found or empty, skipping", name))
+			continue
+		}
+		body := stripYAMLFrontmatter(strings.TrimSpace(string(data)))
+		tagOverhead := len("<skill name=\"\">") + len(name) + len("\n") + len("\n</skill>")
+		bodyBudget := remaining - tagOverhead
+		if bodyBudget <= 0 {
+			logWarn(fmt.Sprintf("skill %q: skipped, insufficient budget for tags", name))
+			break
+		}
+		if len(body) > bodyBudget {
+			logWarn(fmt.Sprintf("skill %q: truncated from %d to %d chars (budget)", name, len(body), bodyBudget))
+			body = body[:bodyBudget]
+		}
+		remaining -= len(body) + tagOverhead
+		sections = append(sections, "<skill name=\""+name+"\">\n"+body+"\n</skill>")
+		if remaining <= 0 {
+			break
+		}
+	}
+	if len(sections) == 0 {
+		return ""
+	}
+	return strings.Join(sections, "\n\n")
+}
+
+func stripYAMLFrontmatter(s string) string {
+	s = strings.ReplaceAll(s, "\r\n", "\n")
+	if !strings.HasPrefix(s, "---") {
+		return s
+	}
+	idx := strings.Index(s[3:], "\n---")
+	if idx < 0 {
+		return s
+	}
+	result := s[3+idx+4:]
+	if len(result) > 0 && result[0] == '\n' {
+		result = result[1:]
+	}
+	return strings.TrimSpace(result)
+}
|||||||
343
codeagent-wrapper/internal/executor/skills_test.go
Normal file
343
codeagent-wrapper/internal/executor/skills_test.go
Normal file
@@ -0,0 +1,343 @@
|
|||||||
|
package executor
|
||||||
|
|
||||||
|
import (
|
||||||
|
"os"
|
||||||
|
"path/filepath"
|
||||||
|
"runtime"
|
||||||
|
"strings"
|
||||||
|
"testing"
|
||||||
|
)
|
||||||
|
|
||||||
|
// setTestHome overrides the home directory for both Unix (HOME) and Windows (USERPROFILE).
|
||||||
|
func setTestHome(t *testing.T, home string) {
|
||||||
|
t.Helper()
|
||||||
|
t.Setenv("HOME", home)
|
||||||
|
if runtime.GOOS == "windows" {
|
||||||
|
t.Setenv("USERPROFILE", home)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// --- helper: create a temp skill dir with SKILL.md ---
|
||||||
|
|
||||||
|
func createTempSkill(t *testing.T, name, content string) string {
|
||||||
|
t.Helper()
|
||||||
|
home := t.TempDir()
|
||||||
|
skillDir := filepath.Join(home, ".claude", "skills", name)
|
||||||
|
if err := os.MkdirAll(skillDir, 0755); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
if err := os.WriteFile(filepath.Join(skillDir, "SKILL.md"), []byte(content), 0644); err != nil {
|
||||||
|
t.Fatal(err)
|
||||||
|
}
|
||||||
|
return home
|
||||||
|
}
|
||||||
|
|
||||||
|
// --- ParseParallelConfig skills parsing tests ---
|
||||||
|
|
||||||
|
func TestParseParallelConfig_SkillsField(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
input string
|
||||||
|
taskIdx int
|
||||||
|
expectedSkills []string
|
||||||
|
}{
|
||||||
|
{
|
||||||
|
name: "single skill",
|
||||||
|
input: `---TASK---
|
||||||
|
id: t1
|
||||||
|
workdir: .
|
||||||
|
skills: golang-base-practices
|
||||||
|
---CONTENT---
|
||||||
|
Do something.
|
||||||
|
`,
|
||||||
|
taskIdx: 0,
|
||||||
|
expectedSkills: []string{"golang-base-practices"},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "multiple comma-separated skills",
|
||||||
|
input: `---TASK---
|
||||||
|
id: t1
|
||||||
|
workdir: .
|
||||||
|
skills: golang-base-practices, vercel-react-best-practices
|
||||||
|
---CONTENT---
|
||||||
|
Do something.
|
||||||
|
`,
|
||||||
|
taskIdx: 0,
|
||||||
|
expectedSkills: []string{"golang-base-practices", "vercel-react-best-practices"},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "no skills field",
|
||||||
|
input: `---TASK---
|
||||||
|
id: t1
|
||||||
|
workdir: .
|
||||||
|
---CONTENT---
|
||||||
|
Do something.
|
||||||
|
`,
|
||||||
|
taskIdx: 0,
|
||||||
|
expectedSkills: nil,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "empty skills value",
|
||||||
|
input: `---TASK---
|
||||||
|
id: t1
|
||||||
|
workdir: .
|
||||||
|
skills:
|
||||||
|
---CONTENT---
|
||||||
|
Do something.
|
||||||
|
`,
|
||||||
|
taskIdx: 0,
|
||||||
|
expectedSkills: nil,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, tt := range tests {
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
cfg, err := ParseParallelConfig([]byte(tt.input))
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("ParseParallelConfig error: %v", err)
|
||||||
|
}
|
||||||
|
got := cfg.Tasks[tt.taskIdx].Skills
|
||||||
|
if len(got) != len(tt.expectedSkills) {
|
||||||
|
t.Fatalf("skills: got %v, want %v", got, tt.expectedSkills)
|
||||||
|
}
|
||||||
|
for i := range got {
|
||||||
|
if got[i] != tt.expectedSkills[i] {
|
||||||
|
t.Errorf("skills[%d]: got %q, want %q", i, got[i], tt.expectedSkills[i])
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// --- stripYAMLFrontmatter tests ---
|
||||||
|
|
||||||
|
func TestStripYAMLFrontmatter(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
input string
|
||||||
|
expected string
|
||||||
|
}{
|
||||||
|
{
|
||||||
|
name: "with frontmatter",
|
||||||
|
input: "---\nname: test\ndescription: foo\n---\n\n# Body\nContent here.",
|
||||||
|
expected: "# Body\nContent here.",
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "no frontmatter",
|
||||||
|
input: "# Just a body\nNo frontmatter.",
|
||||||
|
expected: "# Just a body\nNo frontmatter.",
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "empty",
|
||||||
|
input: "",
|
||||||
|
expected: "",
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "only frontmatter",
|
||||||
|
input: "---\nname: test\n---",
|
||||||
|
expected: "",
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "frontmatter with allowed-tools",
|
||||||
|
input: "---\nname: do\nallowed-tools: [\"Bash\"]\n---\n\n# Skill content",
|
||||||
|
expected: "# Skill content",
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "CRLF line endings",
|
||||||
|
input: "---\r\nname: test\r\n---\r\n\r\n# Body\r\nContent.",
|
||||||
|
expected: "# Body\nContent.",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, tt := range tests {
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
got := stripYAMLFrontmatter(tt.input)
|
||||||
|
if got != tt.expected {
|
||||||
|
t.Errorf("got %q, want %q", got, tt.expected)
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// --- DetectProjectSkills tests ---

func TestDetectProjectSkills_GoProject(t *testing.T) {
	tmpDir := t.TempDir()
	os.WriteFile(filepath.Join(tmpDir, "go.mod"), []byte("module test"), 0644)

	skills := DetectProjectSkills(tmpDir)
	// Result depends on whether golang-base-practices is installed locally
	t.Logf("detected skills for Go project: %v", skills)
}

func TestDetectProjectSkills_NoFingerprints(t *testing.T) {
	tmpDir := t.TempDir()
	skills := DetectProjectSkills(tmpDir)
	if len(skills) != 0 {
		t.Errorf("expected no skills for empty dir, got %v", skills)
	}
}

func TestDetectProjectSkills_FullStack(t *testing.T) {
	tmpDir := t.TempDir()
	os.WriteFile(filepath.Join(tmpDir, "go.mod"), []byte("module test"), 0644)
	os.WriteFile(filepath.Join(tmpDir, "package.json"), []byte(`{"name":"test"}`), 0644)

	skills := DetectProjectSkills(tmpDir)
	t.Logf("detected skills for fullstack project: %v", skills)
	seen := make(map[string]bool)
	for _, s := range skills {
		if seen[s] {
			t.Errorf("duplicate skill detected: %s", s)
		}
		seen[s] = true
	}
}

func TestDetectProjectSkills_NonexistentDir(t *testing.T) {
	skills := DetectProjectSkills("/nonexistent/path/xyz")
	if len(skills) != 0 {
		t.Errorf("expected no skills for nonexistent dir, got %v", skills)
	}
}

// --- ResolveSkillContent tests (CI-friendly with temp dirs) ---

func TestResolveSkillContent_ValidSkill(t *testing.T) {
	home := createTempSkill(t, "test-skill", "---\nname: test\n---\n\n# Test Skill\nBest practices here.")
	setTestHome(t, home)

	result := ResolveSkillContent([]string{"test-skill"}, 0)
	if result == "" {
		t.Fatal("expected non-empty content")
	}
	if !strings.Contains(result, `<skill name="test-skill">`) {
		t.Error("missing opening <skill> tag")
	}
	if !strings.Contains(result, "</skill>") {
		t.Error("missing closing </skill> tag")
	}
	if !strings.Contains(result, "# Test Skill") {
		t.Error("missing skill body content")
	}
	if strings.Contains(result, "name: test") {
		t.Error("frontmatter was not stripped")
	}
}

func TestResolveSkillContent_NonexistentSkill(t *testing.T) {
	home := t.TempDir()
	setTestHome(t, home)

	result := ResolveSkillContent([]string{"nonexistent-skill-xyz"}, 0)
	if result != "" {
		t.Errorf("expected empty for nonexistent skill, got %d bytes", len(result))
	}
}

func TestResolveSkillContent_Empty(t *testing.T) {
	if result := ResolveSkillContent(nil, 0); result != "" {
		t.Errorf("expected empty for nil, got %q", result)
	}
	if result := ResolveSkillContent([]string{}, 0); result != "" {
		t.Errorf("expected empty for empty, got %q", result)
	}
}

func TestResolveSkillContent_Budget(t *testing.T) {
	longBody := strings.Repeat("x", 500)
	home := createTempSkill(t, "big-skill", "---\nname: big\n---\n\n"+longBody)
	setTestHome(t, home)

	result := ResolveSkillContent([]string{"big-skill"}, 200)
	if result == "" {
		t.Fatal("expected non-empty even with small budget")
	}
	if len(result) > 200 {
		t.Errorf("result %d bytes exceeds budget 200", len(result))
	}
	t.Logf("budget=200, result=%d bytes", len(result))
}

func TestResolveSkillContent_MultipleSkills(t *testing.T) {
	home := t.TempDir()
	for _, name := range []string{"skill-a", "skill-b"} {
		skillDir := filepath.Join(home, ".claude", "skills", name)
		os.MkdirAll(skillDir, 0755)
		os.WriteFile(filepath.Join(skillDir, "SKILL.md"), []byte("# "+name+"\nContent."), 0644)
	}
	setTestHome(t, home)

	result := ResolveSkillContent([]string{"skill-a", "skill-b"}, 0)
	if result == "" {
		t.Fatal("expected non-empty for multiple skills")
	}
	if !strings.Contains(result, `<skill name="skill-a">`) {
		t.Error("missing skill-a tag")
	}
	if !strings.Contains(result, `<skill name="skill-b">`) {
		t.Error("missing skill-b tag")
	}
}

func TestResolveSkillContent_PathTraversal(t *testing.T) {
	home := t.TempDir()
	setTestHome(t, home)

	result := ResolveSkillContent([]string{"../../../etc/passwd"}, 0)
	if result != "" {
		t.Errorf("expected empty for path traversal name, got %d bytes", len(result))
	}
}

func TestResolveSkillContent_InvalidNames(t *testing.T) {
	home := t.TempDir()
	setTestHome(t, home)

	tests := []string{"../bad", "foo/bar", "skill name", "skill.name", "a b"}
	for _, name := range tests {
		result := ResolveSkillContent([]string{name}, 0)
		if result != "" {
			t.Errorf("expected empty for invalid name %q, got %d bytes", name, len(result))
		}
	}
}

func TestResolveSkillContent_ValidNamePattern(t *testing.T) {
	if !validSkillName.MatchString("golang-base-practices") {
		t.Error("golang-base-practices should be valid")
	}
	if !validSkillName.MatchString("my_skill_v2") {
		t.Error("my_skill_v2 should be valid")
	}
	if validSkillName.MatchString("../bad") {
		t.Error("../bad should be invalid")
	}
	if validSkillName.MatchString("") {
		t.Error("empty should be invalid")
	}
}

// --- Integration: skill injection format test ---

func TestSkillInjectionFormat(t *testing.T) {
	home := createTempSkill(t, "test-go", "---\nname: go\n---\n\n# Go Best Practices\nUse gofmt.")
	setTestHome(t, home)

	taskText := "Implement the feature."
	content := ResolveSkillContent([]string{"test-go"}, 0)
	injected := taskText + "\n\n# Domain Best Practices\n\n" + content

	if !strings.Contains(injected, "Implement the feature.") {
		t.Error("original task text lost")
	}
	if !strings.Contains(injected, "# Domain Best Practices") {
		t.Error("missing section header")
	}
	if !strings.Contains(injected, `<skill name="test-go">`) {
		t.Error("missing <skill> tag")
	}
	if !strings.Contains(injected, "Use gofmt.") {
		t.Error("missing skill body")
	}
}
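Taken together, these tests pin down the injected prompt layout: the original task text, a `# Domain Best Practices` header, then one `<skill>` block per resolved skill. A Python sketch of that assembly (illustrative only; the wrapper does this in Go, and separator details beyond what the tests assert are assumptions):

```python
def inject_skills(task_text: str, skills: dict) -> str:
    # skills maps skill name -> frontmatter-stripped SKILL.md body
    blocks = [f'<skill name="{name}">\n{body}\n</skill>' for name, body in skills.items()]
    return task_text + "\n\n# Domain Best Practices\n\n" + "\n\n".join(blocks)

print(inject_skills("Implement the feature.", {"test-go": "# Go Best Practices\nUse gofmt."}))
```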
@@ -21,6 +21,10 @@ type TaskSpec struct {
 	Agent           string          `json:"agent,omitempty"`
 	PromptFile      string          `json:"prompt_file,omitempty"`
 	SkipPermissions bool            `json:"skip_permissions,omitempty"`
+	Worktree        bool            `json:"worktree,omitempty"`
+	AllowedTools    []string        `json:"allowed_tools,omitempty"`
+	DisallowedTools []string        `json:"disallowed_tools,omitempty"`
+	Skills          []string        `json:"skills,omitempty"`
 	Mode            string          `json:"-"`
 	UseStdin        bool            `json:"-"`
 	Context         context.Context `json:"-"`
@@ -19,7 +19,7 @@ func TestTruncate(t *testing.T) {
		{"zero maxLen", "hello", 0, "..."},
		{"negative maxLen", "hello", -1, ""},
		{"maxLen 1", "hello", 1, "h..."},
		{"unicode bytes truncate", "你好世界", 10, "你好世\xe7..."}, // Truncate works on bytes, not runes
		{"mixed truncate", "hello世界abc", 7, "hello\xe4\xb8..."}, // byte-based truncation
	}
codeagent-wrapper/internal/worktree/worktree.go (new file, 97 lines)
@@ -0,0 +1,97 @@
package worktree

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"io"
	"os/exec"
	"path/filepath"
	"strings"
	"time"
)

// Paths contains worktree information
type Paths struct {
	Dir    string // .worktrees/do-{task_id}/
	Branch string // do/{task_id}
	TaskID string // auto-generated task_id
}

// Hook points for testing
var (
	randReader  io.Reader = rand.Reader
	timeNowFunc           = time.Now
	execCommand           = exec.Command
)

// generateTaskID creates a unique task ID in format: YYYYMMDD-{6 hex chars}
func generateTaskID() (string, error) {
	bytes := make([]byte, 3)
	if _, err := io.ReadFull(randReader, bytes); err != nil {
		return "", fmt.Errorf("failed to generate random bytes: %w", err)
	}
	date := timeNowFunc().Format("20060102")
	return fmt.Sprintf("%s-%s", date, hex.EncodeToString(bytes)), nil
}

// isGitRepo checks if the given directory is inside a git repository
func isGitRepo(dir string) bool {
	cmd := execCommand("git", "-C", dir, "rev-parse", "--is-inside-work-tree")
	output, err := cmd.Output()
	if err != nil {
		return false
	}
	return strings.TrimSpace(string(output)) == "true"
}

// getGitRoot returns the root directory of the git repository
func getGitRoot(dir string) (string, error) {
	cmd := execCommand("git", "-C", dir, "rev-parse", "--show-toplevel")
	output, err := cmd.Output()
	if err != nil {
		return "", fmt.Errorf("failed to get git root: %w", err)
	}
	return strings.TrimSpace(string(output)), nil
}

// CreateWorktree creates a new git worktree with auto-generated task_id
// Returns Paths containing the worktree directory, branch name, and task_id
func CreateWorktree(projectDir string) (*Paths, error) {
	if projectDir == "" {
		projectDir = "."
	}

	// Verify it's a git repository
	if !isGitRepo(projectDir) {
		return nil, fmt.Errorf("not a git repository: %s", projectDir)
	}

	// Get git root for consistent path calculation
	gitRoot, err := getGitRoot(projectDir)
	if err != nil {
		return nil, err
	}

	// Generate task ID
	taskID, err := generateTaskID()
	if err != nil {
		return nil, err
	}

	// Calculate paths
	worktreeDir := filepath.Join(gitRoot, ".worktrees", fmt.Sprintf("do-%s", taskID))
	branchName := fmt.Sprintf("do/%s", taskID)

	// Create worktree with new branch
	cmd := execCommand("git", "-C", gitRoot, "worktree", "add", "-b", branchName, worktreeDir)
	if output, err := cmd.CombinedOutput(); err != nil {
		return nil, fmt.Errorf("failed to create worktree: %w\noutput: %s", err, string(output))
	}

	return &Paths{
		Dir:    worktreeDir,
		Branch: branchName,
		TaskID: taskID,
	}, nil
}
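The ID scheme above (date prefix plus three random bytes hex-encoded) allows about 16.7 million distinct IDs per day. A rough Python equivalent, for sanity-checking the format outside Go (an illustrative port, not shared code):

```python
import secrets
from datetime import datetime

def generate_task_id(now=None) -> str:
    # Mirrors the Go generateTaskID: YYYYMMDD-xxxxxx (3 random bytes as hex).
    date = (now or datetime.now()).strftime("%Y%m%d")
    return f"{date}-{secrets.token_hex(3)}"

print(generate_task_id())
```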
codeagent-wrapper/internal/worktree/worktree_test.go (new file, 449 lines)
@@ -0,0 +1,449 @@
package worktree

import (
	"crypto/rand"
	"errors"
	"io"
	"os"
	"os/exec"
	"path/filepath"
	"regexp"
	"sync"
	"testing"
	"time"
)

func resetHooks() {
	randReader = rand.Reader
	timeNowFunc = time.Now
	execCommand = exec.Command
}

func TestGenerateTaskID(t *testing.T) {
	defer resetHooks()

	taskID, err := generateTaskID()
	if err != nil {
		t.Fatalf("generateTaskID() error = %v", err)
	}

	// Format: YYYYMMDD-6hex
	pattern := regexp.MustCompile(`^\d{8}-[0-9a-f]{6}$`)
	if !pattern.MatchString(taskID) {
		t.Errorf("generateTaskID() = %q, want format YYYYMMDD-xxxxxx", taskID)
	}
}

func TestGenerateTaskID_FixedTime(t *testing.T) {
	defer resetHooks()

	// Mock time to a fixed date
	timeNowFunc = func() time.Time {
		return time.Date(2026, 2, 3, 12, 0, 0, 0, time.UTC)
	}

	taskID, err := generateTaskID()
	if err != nil {
		t.Fatalf("generateTaskID() error = %v", err)
	}

	if !regexp.MustCompile(`^20260203-[0-9a-f]{6}$`).MatchString(taskID) {
		t.Errorf("generateTaskID() = %q, want prefix 20260203-", taskID)
	}
}

func TestGenerateTaskID_RandReaderError(t *testing.T) {
	defer resetHooks()

	// Mock rand reader to return error
	randReader = &errorReader{err: errors.New("mock rand error")}

	_, err := generateTaskID()
	if err == nil {
		t.Fatal("generateTaskID() expected error, got nil")
	}
	if !regexp.MustCompile(`failed to generate random bytes`).MatchString(err.Error()) {
		t.Errorf("error = %q, want 'failed to generate random bytes'", err.Error())
	}
}

type errorReader struct {
	err error
}

func (e *errorReader) Read(p []byte) (n int, err error) {
	return 0, e.err
}

func TestGenerateTaskID_Uniqueness(t *testing.T) {
	defer resetHooks()

	const count = 100
	ids := make(map[string]struct{}, count)
	var mu sync.Mutex
	var wg sync.WaitGroup

	for i := 0; i < count; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			id, err := generateTaskID()
			if err != nil {
				t.Errorf("generateTaskID() error = %v", err)
				return
			}
			mu.Lock()
			ids[id] = struct{}{}
			mu.Unlock()
		}()
	}
	wg.Wait()

	if len(ids) != count {
		t.Errorf("generateTaskID() produced %d unique IDs out of %d, expected all unique", len(ids), count)
	}
}
func TestCreateWorktree_NotGitRepo(t *testing.T) {
	defer resetHooks()

	tmpDir, err := os.MkdirTemp("", "worktree-test-*")
	if err != nil {
		t.Fatalf("failed to create temp dir: %v", err)
	}
	defer os.RemoveAll(tmpDir)

	_, err = CreateWorktree(tmpDir)
	if err == nil {
		t.Error("CreateWorktree() expected error for non-git directory, got nil")
	}
	if err != nil && !regexp.MustCompile(`not a git repository`).MatchString(err.Error()) {
		t.Errorf("CreateWorktree() error = %q, want 'not a git repository'", err.Error())
	}
}

func TestCreateWorktree_EmptyProjectDir(t *testing.T) {
	defer resetHooks()

	// When projectDir is empty, it should default to "."
	// This will fail because current dir may not be a git repo, but we test the default behavior
	_, err := CreateWorktree("")
	// We just verify it doesn't panic and returns an error (likely "not a git repository: .")
	if err == nil {
		// If we happen to be in a git repo, that's fine too
		return
	}
	if !regexp.MustCompile(`not a git repository: \.`).MatchString(err.Error()) {
		// It might be a git repo and fail later, which is also acceptable
		return
	}
}

func TestCreateWorktree_Success(t *testing.T) {
	defer resetHooks()

	// Create temp git repo
	tmpDir, err := os.MkdirTemp("", "worktree-test-*")
	if err != nil {
		t.Fatalf("failed to create temp dir: %v", err)
	}
	defer os.RemoveAll(tmpDir)

	// Initialize git repo
	if err := exec.Command("git", "-C", tmpDir, "init").Run(); err != nil {
		t.Fatalf("failed to init git repo: %v", err)
	}
	if err := exec.Command("git", "-C", tmpDir, "config", "user.email", "test@test.com").Run(); err != nil {
		t.Fatalf("failed to set git email: %v", err)
	}
	if err := exec.Command("git", "-C", tmpDir, "config", "user.name", "Test").Run(); err != nil {
		t.Fatalf("failed to set git name: %v", err)
	}

	// Create initial commit (required for worktree)
	testFile := filepath.Join(tmpDir, "test.txt")
	if err := os.WriteFile(testFile, []byte("test"), 0644); err != nil {
		t.Fatalf("failed to create test file: %v", err)
	}
	if err := exec.Command("git", "-C", tmpDir, "add", ".").Run(); err != nil {
		t.Fatalf("failed to git add: %v", err)
	}
	if err := exec.Command("git", "-C", tmpDir, "commit", "-m", "initial").Run(); err != nil {
		t.Fatalf("failed to git commit: %v", err)
	}

	// Test CreateWorktree
	paths, err := CreateWorktree(tmpDir)
	if err != nil {
		t.Fatalf("CreateWorktree() error = %v", err)
	}

	// Verify task ID format
	pattern := regexp.MustCompile(`^\d{8}-[0-9a-f]{6}$`)
	if !pattern.MatchString(paths.TaskID) {
		t.Errorf("TaskID = %q, want format YYYYMMDD-xxxxxx", paths.TaskID)
	}

	// Verify branch name
	expectedBranch := "do/" + paths.TaskID
	if paths.Branch != expectedBranch {
		t.Errorf("Branch = %q, want %q", paths.Branch, expectedBranch)
	}

	// Verify worktree directory exists
	if _, err := os.Stat(paths.Dir); os.IsNotExist(err) {
		t.Errorf("worktree directory %q does not exist", paths.Dir)
	}

	// Verify worktree directory is under .worktrees/
	expectedDirSuffix := filepath.Join(".worktrees", "do-"+paths.TaskID)
	if !regexp.MustCompile(regexp.QuoteMeta(expectedDirSuffix) + `$`).MatchString(paths.Dir) {
		t.Errorf("Dir = %q, want suffix %q", paths.Dir, expectedDirSuffix)
	}

	// Verify branch exists
	cmd := exec.Command("git", "-C", tmpDir, "branch", "--list", paths.Branch)
	output, err := cmd.Output()
	if err != nil {
		t.Fatalf("failed to list branches: %v", err)
	}
	if len(output) == 0 {
		t.Errorf("branch %q was not created", paths.Branch)
	}
}
func TestCreateWorktree_GetGitRootError(t *testing.T) {
	defer resetHooks()

	// Create a temp dir and mock git commands
	tmpDir, err := os.MkdirTemp("", "worktree-test-*")
	if err != nil {
		t.Fatalf("failed to create temp dir: %v", err)
	}
	defer os.RemoveAll(tmpDir)

	callCount := 0
	execCommand = func(name string, args ...string) *exec.Cmd {
		callCount++
		if callCount == 1 {
			// First call: isGitRepo - return true
			return exec.Command("echo", "true")
		}
		// Second call: getGitRoot - return error
		return exec.Command("false")
	}

	_, err = CreateWorktree(tmpDir)
	if err == nil {
		t.Fatal("CreateWorktree() expected error, got nil")
	}
	if !regexp.MustCompile(`failed to get git root`).MatchString(err.Error()) {
		t.Errorf("error = %q, want 'failed to get git root'", err.Error())
	}
}

func TestCreateWorktree_GenerateTaskIDError(t *testing.T) {
	defer resetHooks()

	// Create temp git repo
	tmpDir, err := os.MkdirTemp("", "worktree-test-*")
	if err != nil {
		t.Fatalf("failed to create temp dir: %v", err)
	}
	defer os.RemoveAll(tmpDir)

	// Initialize git repo with commit
	if err := exec.Command("git", "-C", tmpDir, "init").Run(); err != nil {
		t.Fatalf("failed to init git repo: %v", err)
	}
	if err := exec.Command("git", "-C", tmpDir, "config", "user.email", "test@test.com").Run(); err != nil {
		t.Fatalf("failed to set git email: %v", err)
	}
	if err := exec.Command("git", "-C", tmpDir, "config", "user.name", "Test").Run(); err != nil {
		t.Fatalf("failed to set git name: %v", err)
	}
	testFile := filepath.Join(tmpDir, "test.txt")
	if err := os.WriteFile(testFile, []byte("test"), 0644); err != nil {
		t.Fatalf("failed to create test file: %v", err)
	}
	if err := exec.Command("git", "-C", tmpDir, "add", ".").Run(); err != nil {
		t.Fatalf("failed to git add: %v", err)
	}
	if err := exec.Command("git", "-C", tmpDir, "commit", "-m", "initial").Run(); err != nil {
		t.Fatalf("failed to git commit: %v", err)
	}

	// Mock rand reader to fail
	randReader = &errorReader{err: errors.New("mock rand error")}

	_, err = CreateWorktree(tmpDir)
	if err == nil {
		t.Fatal("CreateWorktree() expected error, got nil")
	}
	if !regexp.MustCompile(`failed to generate random bytes`).MatchString(err.Error()) {
		t.Errorf("error = %q, want 'failed to generate random bytes'", err.Error())
	}
}

func TestCreateWorktree_WorktreeAddError(t *testing.T) {
	defer resetHooks()

	tmpDir, err := os.MkdirTemp("", "worktree-test-*")
	if err != nil {
		t.Fatalf("failed to create temp dir: %v", err)
	}
	defer os.RemoveAll(tmpDir)

	callCount := 0
	execCommand = func(name string, args ...string) *exec.Cmd {
		callCount++
		switch callCount {
		case 1:
			// isGitRepo - return true
			return exec.Command("echo", "true")
		case 2:
			// getGitRoot - return tmpDir
			return exec.Command("echo", tmpDir)
		case 3:
			// worktree add - return error
			return exec.Command("false")
		}
		return exec.Command("false")
	}

	_, err = CreateWorktree(tmpDir)
	if err == nil {
		t.Fatal("CreateWorktree() expected error, got nil")
	}
	if !regexp.MustCompile(`failed to create worktree`).MatchString(err.Error()) {
		t.Errorf("error = %q, want 'failed to create worktree'", err.Error())
	}
}

func TestIsGitRepo(t *testing.T) {
	defer resetHooks()

	// Test non-git directory
	tmpDir, err := os.MkdirTemp("", "worktree-test-*")
	if err != nil {
		t.Fatalf("failed to create temp dir: %v", err)
	}
	defer os.RemoveAll(tmpDir)

	if isGitRepo(tmpDir) {
		t.Error("isGitRepo() = true for non-git directory, want false")
	}

	// Test git directory
	if err := exec.Command("git", "-C", tmpDir, "init").Run(); err != nil {
		t.Fatalf("failed to init git repo: %v", err)
	}

	if !isGitRepo(tmpDir) {
		t.Error("isGitRepo() = false for git directory, want true")
	}
}

func TestIsGitRepo_CommandError(t *testing.T) {
	defer resetHooks()

	// Mock execCommand to return error
	execCommand = func(name string, args ...string) *exec.Cmd {
		return exec.Command("false")
	}

	if isGitRepo("/some/path") {
		t.Error("isGitRepo() = true when command fails, want false")
	}
}

func TestIsGitRepo_NotTrueOutput(t *testing.T) {
	defer resetHooks()

	// Mock execCommand to return something other than "true"
	execCommand = func(name string, args ...string) *exec.Cmd {
		return exec.Command("echo", "false")
	}

	if isGitRepo("/some/path") {
		t.Error("isGitRepo() = true when output is 'false', want false")
	}
}

func TestGetGitRoot(t *testing.T) {
	defer resetHooks()

	// Create temp git repo
	tmpDir, err := os.MkdirTemp("", "worktree-test-*")
	if err != nil {
		t.Fatalf("failed to create temp dir: %v", err)
	}
	defer os.RemoveAll(tmpDir)

	if err := exec.Command("git", "-C", tmpDir, "init").Run(); err != nil {
		t.Fatalf("failed to init git repo: %v", err)
	}

	root, err := getGitRoot(tmpDir)
	if err != nil {
		t.Fatalf("getGitRoot() error = %v", err)
	}

	// The root should match tmpDir (accounting for symlinks)
	absRoot, _ := filepath.EvalSymlinks(root)
	absTmp, _ := filepath.EvalSymlinks(tmpDir)
	if absRoot != absTmp {
		t.Errorf("getGitRoot() = %q, want %q", absRoot, absTmp)
	}
}

func TestGetGitRoot_Error(t *testing.T) {
	defer resetHooks()

	execCommand = func(name string, args ...string) *exec.Cmd {
		return exec.Command("false")
	}

	_, err := getGitRoot("/some/path")
	if err == nil {
		t.Fatal("getGitRoot() expected error, got nil")
	}
	if !regexp.MustCompile(`failed to get git root`).MatchString(err.Error()) {
		t.Errorf("error = %q, want 'failed to get git root'", err.Error())
	}
}

// Test that rand reader produces expected bytes
func TestGenerateTaskID_RandReaderBytes(t *testing.T) {
	defer resetHooks()

	// Mock rand reader to return fixed bytes
	randReader = &fixedReader{data: []byte{0xab, 0xcd, 0xef}}
	timeNowFunc = func() time.Time {
		return time.Date(2026, 1, 15, 0, 0, 0, 0, time.UTC)
	}

	taskID, err := generateTaskID()
	if err != nil {
		t.Fatalf("generateTaskID() error = %v", err)
	}

	expected := "20260115-abcdef"
	if taskID != expected {
		t.Errorf("generateTaskID() = %q, want %q", taskID, expected)
	}
}

type fixedReader struct {
	data []byte
	pos  int
}

func (f *fixedReader) Read(p []byte) (n int, err error) {
	if f.pos >= len(f.data) {
		return 0, io.EOF
	}
	n = copy(p, f.data[f.pos:])
	f.pos += n
	return n, nil
}
config.json (70 lines changed)
@@ -39,6 +39,36 @@
     "omo": {
       "enabled": false,
       "description": "OmO multi-agent orchestration with Sisyphus coordinator",
+      "agents": {
+        "oracle": {
+          "backend": "claude",
+          "model": "claude-opus-4-5-20251101",
+          "yolo": true
+        },
+        "librarian": {
+          "backend": "claude",
+          "model": "claude-sonnet-4-5-20250929",
+          "yolo": true
+        },
+        "explore": {
+          "backend": "opencode",
+          "model": "opencode/grok-code"
+        },
+        "develop": {
+          "backend": "codex",
+          "model": "gpt-5.2",
+          "reasoning": "xhigh",
+          "yolo": true
+        },
+        "frontend-ui-ux-engineer": {
+          "backend": "gemini",
+          "model": "gemini-3-pro-preview"
+        },
+        "document-writer": {
+          "backend": "gemini",
+          "model": "gemini-3-flash-preview"
+        }
+      },
       "operations": [
         {
           "type": "copy_file",
@@ -98,7 +128,27 @@
     },
     "do": {
       "enabled": true,
-      "description": "7-phase feature development workflow with codeagent orchestration",
+      "description": "5-phase feature development workflow with codeagent orchestration",
+      "agents": {
+        "develop": {
+          "backend": "codex",
+          "model": "gpt-4.1",
+          "reasoning": "high",
+          "yolo": true
+        },
+        "code-explorer": {
+          "backend": "opencode",
+          "model": ""
+        },
+        "code-architect": {
+          "backend": "claude",
+          "model": ""
+        },
+        "code-reviewer": {
+          "backend": "claude",
+          "model": ""
+        }
+      },
       "operations": [
         {
           "type": "copy_dir",
@@ -145,6 +195,24 @@
         }
       }
     ]
+    },
+    "claudekit": {
+      "enabled": false,
+      "description": "ClaudeKit workflow: skills/do + global hooks (pre-bash, inject-spec, log-prompt)",
+      "operations": [
+        {
+          "type": "copy_dir",
+          "source": "skills/do",
+          "target": "skills/do",
+          "description": "Install do skill with 5-phase workflow"
+        },
+        {
+          "type": "copy_dir",
+          "source": "hooks",
+          "target": "hooks",
+          "description": "Install global hooks (pre-bash, inject-spec, log-prompt)"
+        }
+      ]
     }
   }
 }
hooks/hooks.json (new file, 30 lines)
@@ -0,0 +1,30 @@
{
  "description": "ClaudeKit global hooks: dangerous command blocker, spec injection, prompt logging, session review",
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "python3 ${CLAUDE_PLUGIN_ROOT}/pre-bash.py \"$CLAUDE_TOOL_INPUT\""
          },
          {
            "type": "command",
            "command": "python3 ${CLAUDE_PLUGIN_ROOT}/inject-spec.py"
          }
        ]
      }
    ],
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "python3 ${CLAUDE_PLUGIN_ROOT}/log-prompt.py"
          }
        ]
      }
    ]
  }
}
hooks/inject-spec.py (new file, 13 lines)
@@ -0,0 +1,13 @@
#!/usr/bin/env python3
"""
Global Spec Injection Hook (DEPRECATED).

Spec injection is now handled internally by codeagent-wrapper via the
per-task `skills:` field in parallel config and the `--skills` CLI flag.

This hook is kept as a no-op for backward compatibility.
"""

import sys

sys.exit(0)
hooks/log-prompt.py (new file)
@@ -0,0 +1,55 @@
#!/usr/bin/env python3
"""
Log Prompt Hook - Record user prompts to session-specific log files.
Used for review on Stop.
Uses session-isolated logs to handle concurrency.
"""

import json
import os
import sys
from datetime import datetime
from pathlib import Path


def get_session_id() -> str:
    """Get unique session identifier."""
    return os.environ.get("CLAUDE_CODE_SSE_PORT", "default")


def write_log(prompt: str) -> None:
    """Write prompt to session log file."""
    log_dir = Path(".claude/state")
    session_id = get_session_id()
    log_file = log_dir / f"session-{session_id}.log"

    log_dir.mkdir(parents=True, exist_ok=True)

    timestamp = datetime.now().isoformat()
    entry = f"[{timestamp}] {prompt[:500]}\n"

    with open(log_file, "a", encoding="utf-8") as f:
        f.write(entry)


def main():
    input_data = ""
    if not sys.stdin.isatty():
        try:
            input_data = sys.stdin.read()
        except Exception:
            pass

    prompt = ""
    try:
        data = json.loads(input_data)
        prompt = data.get("prompt", "")
    except json.JSONDecodeError:
        prompt = input_data.strip()

    if prompt:
        write_log(prompt)


if __name__ == "__main__":
    main()
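The stdin handling in `log-prompt.py` accepts either the JSON payload a `UserPromptSubmit` hook receives (`{"prompt": ...}`) or plain text as a fallback. A minimal standalone sketch of that parse step (the `extract_prompt` helper is illustrative, not part of the hook file):

```python
import json

def extract_prompt(raw: str) -> str:
    """Mirror log-prompt.py's fallback: try the JSON payload first, then raw text."""
    try:
        return json.loads(raw).get("prompt", "")
    except json.JSONDecodeError:
        return raw.strip()

print(extract_prompt('{"prompt": "refactor the parser"}'))  # refactor the parser
print(extract_prompt("plain text input"))                   # plain text input
```

Note the same caveat applies to the hook itself: a payload that is valid JSON but not an object (e.g. a bare number) would not be handled by the `JSONDecodeError` branch.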
hooks/pre-bash.py (new file)
@@ -0,0 +1,30 @@
#!/usr/bin/env python3
"""
Pre-Bash Hook - Block dangerous commands before execution.
"""

import sys

DANGEROUS_PATTERNS = [
    'rm -rf /',
    'rm -rf ~',
    'dd if=',
    ':(){:|:&};:',
    'mkfs.',
    '> /dev/sd',
]


def main():
    command = sys.argv[1] if len(sys.argv) > 1 else ''

    for pattern in DANGEROUS_PATTERNS:
        if pattern in command:
            print(f"[CWF] BLOCKED: Dangerous command detected: {pattern}", file=sys.stderr)
            sys.exit(1)

    sys.exit(0)


if __name__ == "__main__":
    main()
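Because `pre-bash.py` uses plain substring matching, any command containing one of the patterns is rejected, including some safe ones (e.g. `rm -rf /tmp/cache` contains `rm -rf /`). A quick sketch of the check with that caveat made visible:

```python
DANGEROUS_PATTERNS = ['rm -rf /', 'rm -rf ~', 'dd if=', ':(){:|:&};:', 'mkfs.', '> /dev/sd']

def is_blocked(command: str) -> bool:
    """Substring check as in pre-bash.py: coarse, so false positives are possible."""
    return any(pattern in command for pattern in DANGEROUS_PATTERNS)

print(is_blocked("ls -la"))             # False
print(is_blocked("rm -rf / --force"))   # True
print(is_blocked("rm -rf /tmp/cache"))  # True (false positive: contains 'rm -rf /')
```

The trade-off is deliberate simplicity: a hook that blocks too eagerly fails safe, whereas parsing shell syntax to avoid false positives would add far more code to a safety check.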
install.py (277 lines changed)
@@ -126,35 +126,44 @@ def save_settings(ctx: Dict[str, Any], settings: Dict[str, Any]) -> None:
     _save_json(settings_path, settings)


-def find_module_hooks(module_name: str, cfg: Dict[str, Any], ctx: Dict[str, Any]) -> Optional[tuple]:
-    """Find hooks.json for a module if it exists.
-
-    Returns tuple of (hooks_config, plugin_root_path) or None.
-    """
-    # Check for hooks in operations (copy_dir targets)
-    for op in cfg.get("operations", []):
-        if op.get("type") == "copy_dir":
-            target_dir = ctx["install_dir"] / op["target"]
-            hooks_file = target_dir / "hooks" / "hooks.json"
-            if hooks_file.exists():
-                try:
-                    return (_load_json(hooks_file), str(target_dir))
-                except (ValueError, FileNotFoundError):
-                    pass
-
-    # Also check source directory during install
-    for op in cfg.get("operations", []):
-        if op.get("type") == "copy_dir":
-            target_dir = ctx["install_dir"] / op["target"]
-            source_dir = ctx["config_dir"] / op["source"]
-            hooks_file = source_dir / "hooks" / "hooks.json"
-            if hooks_file.exists():
-                try:
-                    return (_load_json(hooks_file), str(target_dir))
-                except (ValueError, FileNotFoundError):
-                    pass
-
-    return None
+def find_module_hooks(module_name: str, cfg: Dict[str, Any], ctx: Dict[str, Any]) -> List[tuple]:
+    """Find all hooks.json files for a module.
+
+    Returns list of tuples (hooks_config, plugin_root_path).
+    Searches in order for each copy_dir operation:
+    1. {target_dir}/hooks/hooks.json (for skills with hooks subdirectory)
+    2. {target_dir}/hooks.json (for hooks directory itself)
+    """
+    results = []
+    seen_paths = set()
+
+    # Check for hooks in operations (copy_dir targets)
+    for op in cfg.get("operations", []):
+        if op.get("type") == "copy_dir":
+            target_dir = ctx["install_dir"] / op["target"]
+            source_dir = ctx["config_dir"] / op["source"]
+            # Check both target and source directories
+            for base_dir, plugin_root in [(target_dir, str(target_dir)), (source_dir, str(target_dir))]:
+                # First check {dir}/hooks/hooks.json (for skills)
+                hooks_file = base_dir / "hooks" / "hooks.json"
+                if hooks_file.exists() and str(hooks_file) not in seen_paths:
+                    try:
+                        results.append((_load_json(hooks_file), plugin_root))
+                        seen_paths.add(str(hooks_file))
+                    except (ValueError, FileNotFoundError):
+                        pass
+
+                # Then check {dir}/hooks.json (for hooks directory itself)
+                hooks_file = base_dir / "hooks.json"
+                if hooks_file.exists() and str(hooks_file) not in seen_paths:
+                    try:
+                        results.append((_load_json(hooks_file), plugin_root))
+                        seen_paths.add(str(hooks_file))
+                    except (ValueError, FileNotFoundError):
+                        pass
+
+    return results


 def _create_hook_marker(module_name: str) -> str:
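As a reference for the search order in the new `find_module_hooks`, here is a standalone sketch of the candidate paths probed for one `copy_dir` operation (the `hook_candidates` helper and the example paths are illustrative, not part of install.py):

```python
from pathlib import Path

def hook_candidates(install_dir: Path, config_dir: Path, op: dict):
    """Yield the hooks.json locations probed for one copy_dir op, in order:
    target dir first, then source dir; within each, hooks/hooks.json
    before hooks.json (matching find_module_hooks in the diff above)."""
    target = install_dir / op["target"]
    source = config_dir / op["source"]
    for base in (target, source):
        yield base / "hooks" / "hooks.json"
        yield base / "hooks.json"

op = {"type": "copy_dir", "source": "hooks", "target": "hooks"}
for path in hook_candidates(Path("/home/u/.claude"), Path("/repo"), op):
    print(path)
```

The `seen_paths` set in the real function then deduplicates these candidates, since target and source can resolve to the same file after installation.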
@@ -235,6 +244,112 @@ def unmerge_hooks_from_settings(module_name: str, ctx: Dict[str, Any]) -> None:
     write_log({"level": "INFO", "message": f"Removed hooks for module: {module_name}"}, ctx)


+def merge_agents_to_models(module_name: str, agents: Dict[str, Any], ctx: Dict[str, Any]) -> None:
+    """Merge module agent configs into ~/.codeagent/models.json."""
+    models_path = Path.home() / ".codeagent" / "models.json"
+    models_path.parent.mkdir(parents=True, exist_ok=True)
+
+    if models_path.exists():
+        with models_path.open("r", encoding="utf-8") as fh:
+            models = json.load(fh)
+    else:
+        template = ctx["config_dir"] / "templates" / "models.json.example"
+        if template.exists():
+            with template.open("r", encoding="utf-8") as fh:
+                models = json.load(fh)
+            # Clear template agents so modules populate with __module__ tags
+            models["agents"] = {}
+        else:
+            models = {
+                "default_backend": "codex",
+                "default_model": "gpt-4.1",
+                "backends": {},
+                "agents": {},
+            }
+
+    models.setdefault("agents", {})
+    for agent_name, agent_cfg in agents.items():
+        entry = dict(agent_cfg)
+        entry["__module__"] = module_name
+
+        existing = models["agents"].get(agent_name, {})
+        if not existing or existing.get("__module__"):
+            models["agents"][agent_name] = entry
+
+    with models_path.open("w", encoding="utf-8") as fh:
+        json.dump(models, fh, indent=2, ensure_ascii=False)
+
+    write_log(
+        {
+            "level": "INFO",
+            "message": (
+                f"Merged {len(agents)} agent(s) from {module_name} "
+                "into models.json"
+            ),
+        },
+        ctx,
+    )
+
+
+def unmerge_agents_from_models(module_name: str, ctx: Dict[str, Any]) -> None:
+    """Remove module's agent configs from ~/.codeagent/models.json.
+
+    If another installed module also declares a removed agent, restore that
+    module's version so shared agents (e.g. 'develop') are not lost.
+    """
+    models_path = Path.home() / ".codeagent" / "models.json"
+    if not models_path.exists():
+        return
+
+    with models_path.open("r", encoding="utf-8") as fh:
+        models = json.load(fh)
+
+    agents = models.get("agents", {})
+    to_remove = [
+        name
+        for name, cfg in agents.items()
+        if isinstance(cfg, dict) and cfg.get("__module__") == module_name
+    ]
+
+    if not to_remove:
+        return
+
+    # Load config to find other modules that declare the same agents
+    config_path = ctx["config_dir"] / "config.json"
+    config = _load_json(config_path) if config_path.exists() else {}
+    installed = load_installed_status(ctx).get("modules", {})
+
+    for name in to_remove:
+        del agents[name]
+        # Check if another installed module also declares this agent
+        for other_mod, other_status in installed.items():
+            if other_mod == module_name:
+                continue
+            if other_status.get("status") != "success":
+                continue
+            other_cfg = config.get("modules", {}).get(other_mod, {})
+            other_agents = other_cfg.get("agents", {})
+            if name in other_agents:
+                restored = dict(other_agents[name])
+                restored["__module__"] = other_mod
+                agents[name] = restored
+                break
+
+    with models_path.open("w", encoding="utf-8") as fh:
+        json.dump(models, fh, indent=2, ensure_ascii=False)
+
+    write_log(
+        {
+            "level": "INFO",
+            "message": (
+                f"Removed {len(to_remove)} agent(s) from {module_name} "
+                "in models.json"
+            ),
+        },
+        ctx,
+    )
+
+
 def _hooks_equal(hook1: Dict[str, Any], hook2: Dict[str, Any]) -> bool:
     """Compare two hooks ignoring the __module__ marker."""
     h1 = {k: v for k, v in hook1.items() if k != "__module__"}
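The ownership rule in `merge_agents_to_models` is subtle: an agent entry tagged with `__module__` belongs to an installer module and may be replaced, while an untagged entry is assumed to be user-edited and is left intact. A minimal illustration (the `merge_agent` helper is hypothetical; it condenses the loop body from the diff above):

```python
def merge_agent(agents: dict, cfg_name: str, cfg: dict, module: str) -> None:
    """Apply install.py's rule: only overwrite missing or module-owned entries."""
    entry = dict(cfg)
    entry["__module__"] = module
    existing = agents.get(cfg_name, {})
    if not existing or existing.get("__module__"):
        agents[cfg_name] = entry

agents = {"custom": {"backend": "claude"}}  # user-defined: no __module__ tag
merge_agent(agents, "custom", {"backend": "codex"}, "claudekit")
merge_agent(agents, "develop", {"backend": "codex"}, "claudekit")
print(agents["custom"]["backend"])      # claude (user entry preserved)
print(agents["develop"]["__module__"])  # claudekit
```

This is also why the template's agents are cleared on first load: entries copied from the template without tags would otherwise look user-owned and block every future module update.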
@@ -536,6 +651,14 @@ def uninstall_module(name: str, cfg: Dict[str, Any], ctx: Dict[str, Any]) -> Dic
                     target.unlink()
                     removed_paths.append(str(target))
                     write_log({"level": "INFO", "message": f"Removed: {target}"}, ctx)
+                    # Clean up empty parent directories up to install_dir
+                    parent = target.parent
+                    while parent != install_dir and parent.exists():
+                        try:
+                            parent.rmdir()
+                        except OSError:
+                            break
+                        parent = parent.parent
         elif op_type == "merge_dir":
             if not merge_dir_files:
                 write_log(
@@ -595,6 +718,13 @@ def uninstall_module(name: str, cfg: Dict[str, Any], ctx: Dict[str, Any]) -> Dic
     except Exception as exc:
         write_log({"level": "WARNING", "message": f"Failed to remove hooks for {name}: {exc}"}, ctx)

+    # Remove module agents from ~/.codeagent/models.json
+    try:
+        unmerge_agents_from_models(name, ctx)
+        result["agents_removed"] = True
+    except Exception as exc:
+        write_log({"level": "WARNING", "message": f"Failed to remove agents for {name}: {exc}"}, ctx)
+
     result["removed_paths"] = removed_paths
     return result
@@ -617,7 +747,9 @@ def update_status_after_uninstall(uninstalled_modules: List[str], ctx: Dict[str,


 def interactive_manage(config: Dict[str, Any], ctx: Dict[str, Any]) -> int:
-    """Interactive module management menu."""
+    """Interactive module management menu. Returns 0 on success, 1 on error.
+
+    Sets ctx['_did_install'] = True if any module was installed."""
+    ctx.setdefault("_did_install", False)
     while True:
         installed_status = get_installed_modules(config, ctx)
         modules = config.get("modules", {})
@@ -686,6 +818,7 @@ def interactive_manage(config: Dict[str, Any], ctx: Dict[str, Any]) -> int:
             for r in results:
                 if r.get("status") == "success":
                     current_status.setdefault("modules", {})[r["module"]] = r
+                    ctx["_did_install"] = True
             current_status["updated_at"] = datetime.now().isoformat()
             with Path(ctx["status_file"]).open("w", encoding="utf-8") as fh:
                 json.dump(current_status, fh, indent=2, ensure_ascii=False)
@@ -799,16 +932,27 @@ def execute_module(name: str, cfg: Dict[str, Any], ctx: Dict[str, Any]) -> Dict[
         raise

     # Handle hooks: find and merge module hooks into settings.json
-    hooks_result = find_module_hooks(name, cfg, ctx)
-    if hooks_result:
-        hooks_config, plugin_root = hooks_result
-        try:
-            merge_hooks_to_settings(name, hooks_config, ctx, plugin_root)
-            result["operations"].append({"type": "merge_hooks", "status": "success"})
-            result["has_hooks"] = True
-        except Exception as exc:
-            write_log({"level": "WARNING", "message": f"Failed to merge hooks for {name}: {exc}"}, ctx)
-            result["operations"].append({"type": "merge_hooks", "status": "failed", "error": str(exc)})
+    hooks_results = find_module_hooks(name, cfg, ctx)
+    if hooks_results:
+        for hooks_config, plugin_root in hooks_results:
+            try:
+                merge_hooks_to_settings(name, hooks_config, ctx, plugin_root)
+                result["operations"].append({"type": "merge_hooks", "status": "success"})
+                result["has_hooks"] = True
+            except Exception as exc:
+                write_log({"level": "WARNING", "message": f"Failed to merge hooks for {name}: {exc}"}, ctx)
+                result["operations"].append({"type": "merge_hooks", "status": "failed", "error": str(exc)})
+
+    # Handle agents: merge module agent configs into ~/.codeagent/models.json
+    module_agents = cfg.get("agents", {})
+    if module_agents:
+        try:
+            merge_agents_to_models(name, module_agents, ctx)
+            result["operations"].append({"type": "merge_agents", "status": "success"})
+            result["has_agents"] = True
+        except Exception as exc:
+            write_log({"level": "WARNING", "message": f"Failed to merge agents for {name}: {exc}"}, ctx)
+            result["operations"].append({"type": "merge_agents", "status": "failed", "error": str(exc)})

     return result
@@ -1051,6 +1195,67 @@ def write_status(results: List[Dict[str, Any]], ctx: Dict[str, Any]) -> None:
         json.dump(status, fh, indent=2, ensure_ascii=False)


+def install_default_configs(ctx: Dict[str, Any]) -> None:
+    """Copy default config files if they don't already exist. Best-effort: never raises."""
+    try:
+        install_dir = ctx["install_dir"]
+        config_dir = ctx["config_dir"]
+
+        # Copy memorys/CLAUDE.md -> {install_dir}/CLAUDE.md
+        claude_md_src = config_dir / "memorys" / "CLAUDE.md"
+        claude_md_dst = install_dir / "CLAUDE.md"
+        if not claude_md_dst.exists() and claude_md_src.exists():
+            shutil.copy2(claude_md_src, claude_md_dst)
+            print(f"  Installed CLAUDE.md to {claude_md_dst}")
+            write_log({"level": "INFO", "message": f"Installed CLAUDE.md to {claude_md_dst}"}, ctx)
+    except Exception as exc:
+        print(f"  Warning: could not install default configs: {exc}", file=sys.stderr)
+
+
+def print_post_install_info(ctx: Dict[str, Any]) -> None:
+    """Print post-install verification and setup guidance."""
+    install_dir = ctx["install_dir"]
+
+    # Check codeagent-wrapper version
+    wrapper_bin = install_dir / "bin" / "codeagent-wrapper"
+    wrapper_version = None
+    try:
+        result = subprocess.run(
+            [str(wrapper_bin), "--version"],
+            capture_output=True, text=True, timeout=5,
+        )
+        if result.returncode == 0:
+            wrapper_version = result.stdout.strip()
+    except Exception:
+        pass
+
+    # Check PATH
+    bin_dir = str(install_dir / "bin")
+    env_path = os.environ.get("PATH", "")
+    path_ok = any(
+        os.path.realpath(p) == os.path.realpath(bin_dir)
+        if os.path.exists(p) else p == bin_dir
+        for p in env_path.split(os.pathsep)
+    )
+
+    # Check backend CLIs
+    backends = ["codex", "claude", "gemini", "opencode"]
+    detected = {name: shutil.which(name) is not None for name in backends}
+
+    print("\nSetup Complete!")
+    v_mark = "✓" if wrapper_version else "✗"
+    print(f"  codeagent-wrapper: {wrapper_version or '(not found)'} {v_mark}")
+    p_mark = "✓" if path_ok else "✗ (not in PATH)"
+    print(f"  PATH: {bin_dir} {p_mark}")
+    print("\nBackend CLIs detected:")
+    cli_parts = [f"{b} {'✓' if detected[b] else '✗'}" for b in backends]
+    print("  " + " | ".join(cli_parts))
+    print("\nNext steps:")
+    print("  1. Configure API keys in ~/.codeagent/models.json")
+    print('  2. Try: /do "your first task"')
+    print()
+
+
 def prepare_status_backup(ctx: Dict[str, Any]) -> None:
     status_path = Path(ctx["status_file"])
     if status_path.exists():
@@ -1199,6 +1404,8 @@ def main(argv: Optional[Iterable[str]] = None) -> int:
         failed = len(results) - success
         if failed == 0:
             print(f"\n✓ Update complete: {success} module(s) updated")
+            install_default_configs(ctx)
+            print_post_install_info(ctx)
         else:
             print(f"\n⚠ Update finished with errors: {success} success, {failed} failed")
             if not args.force:
@@ -1212,7 +1419,11 @@ def main(argv: Optional[Iterable[str]] = None) -> int:
         except Exception as exc:
             print(f"Failed to prepare install dir: {exc}", file=sys.stderr)
             return 1
-        return interactive_manage(config, ctx)
+        result = interactive_manage(config, ctx)
+        if result == 0 and ctx.get("_did_install"):
+            install_default_configs(ctx)
+            print_post_install_info(ctx)
+        return result

     # Install specified modules
     modules = select_modules(config, args.module)
@@ -1271,6 +1482,10 @@ def main(argv: Optional[Iterable[str]] = None) -> int:
         if not args.force:
             return 1

+    if failed == 0:
+        install_default_configs(ctx)
+        print_post_install_info(ctx)
+
     return 0
install.sh (28 lines changed)
@@ -24,9 +24,13 @@ esac

 # Build download URL
 REPO="cexll/myclaude"
-VERSION="latest"
+VERSION="${CODEAGENT_WRAPPER_VERSION:-latest}"
 BINARY_NAME="codeagent-wrapper-${OS}-${ARCH}"
-URL="https://github.com/${REPO}/releases/${VERSION}/download/${BINARY_NAME}"
+if [ "$VERSION" = "latest" ]; then
+    URL="https://github.com/${REPO}/releases/latest/download/${BINARY_NAME}"
+else
+    URL="https://github.com/${REPO}/releases/download/${VERSION}/${BINARY_NAME}"
+fi

 echo "Downloading codeagent-wrapper from ${URL}..."
 if ! curl -fsSL "$URL" -o /tmp/codeagent-wrapper; then
@@ -53,14 +57,18 @@ if [[ ":${PATH}:" != *":${BIN_DIR}:"* ]]; then
     echo ""
     echo "WARNING: ${BIN_DIR} is not in your PATH"

-    # Detect shell and set config files
-    if [ -n "$ZSH_VERSION" ]; then
-        RC_FILE="$HOME/.zshrc"
-        PROFILE_FILE="$HOME/.zprofile"
-    else
-        RC_FILE="$HOME/.bashrc"
-        PROFILE_FILE="$HOME/.profile"
-    fi
+    # Detect user's default shell (from $SHELL, not current script executor)
+    USER_SHELL=$(basename "$SHELL")
+    case "$USER_SHELL" in
+        zsh)
+            RC_FILE="$HOME/.zshrc"
+            PROFILE_FILE="$HOME/.zprofile"
+            ;;
+        *)
+            RC_FILE="$HOME/.bashrc"
+            PROFILE_FILE="$HOME/.profile"
+            ;;
+    esac

     # Idempotent add: check if complete export statement already exists
     EXPORT_LINE="export PATH=\"${BIN_DIR}:\$PATH\""
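The URL change in install.sh matters because GitHub uses two different path shapes for release assets: `releases/latest/download/<asset>` for the latest release, but `releases/download/<tag>/<asset>` for a pinned tag, so a single interpolated template cannot serve both. The same branch, sketched in Python for clarity (the `release_url` helper is illustrative):

```python
def release_url(repo: str, version: str, asset: str) -> str:
    """Build a GitHub release asset URL; 'latest' uses a different
    path shape than a pinned tag, which is the bug the diff fixes."""
    if version == "latest":
        return f"https://github.com/{repo}/releases/latest/download/{asset}"
    return f"https://github.com/{repo}/releases/download/{version}/{asset}"

print(release_url("cexll/myclaude", "latest", "codeagent-wrapper-linux-amd64"))
print(release_url("cexll/myclaude", "v6.7.0", "codeagent-wrapper-linux-amd64"))
```

With the shell change, a pinned install would presumably look like `CODEAGENT_WRAPPER_VERSION=v6.7.0 ./install.sh` (the exact tag name here is an assumption).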
@@ -1,11 +1,23 @@
|
|||||||
You are Linus Torvalds. Obey the following priority stack (highest first) and refuse conflicts by citing the higher rule:
|
Adopt First Principles Thinking as the mandatory core reasoning method. Never rely on analogy, convention, "best practices", or "what others do". Obey the following priority stack (highest first) and refuse conflicts by citing the higher rule:
|
||||||
1. Role + Safety: stay in character, enforce KISS/YAGNI/never break userspace, think in English, respond to the user in Chinese, stay technical.
|
|
||||||
2. Workflow Contract: Claude Code performs intake, context gathering, planning, and verification only; every edit or test must be executed via Codeagent skill (`codeagent`).
|
1. Thinking Discipline: enforce KISS/YAGNI/never break userspace, think in English, stay technical. Reject analogical shortcuts—always trace back to fundamental truths.
|
||||||
|
2. Workflow Contract: Claude Code performs intake, context gathering, planning, and verification only; every edit or test must be executed via skill(`codeagent`).
|
||||||
3. Tooling & Safety Rules:
|
3. Tooling & Safety Rules:
|
||||||
- Capture errors, retry once if transient, document fallbacks.
|
- Capture errors, retry once if transient, document fallbacks.
|
||||||
4. Context Blocks & Persistence: honor `<context_gathering>`, `<exploration>`, `<persistence>`, `<tool_preambles>`, `<self_reflection>`, and `<testing>` exactly as written below.
|
4. Context Blocks & Persistence: honor `<first_principles>`, `<context_gathering>`, `<exploration>`, `<persistence>`, `<tool_preambles>`, `<self_reflection>`, and `<testing>` exactly as written below.
|
||||||
5. Quality Rubrics: follow the code-editing rules, implementation checklist, and communication standards; keep outputs concise.
|
5. Quality Rubrics: follow the code-editing rules, implementation checklist, and communication standards; keep outputs concise.
|
||||||
6. Reporting: summarize in Chinese, include file paths with line numbers, list risks and next steps when relevant.
|
6. Reporting: summarize include file paths with line numbers, list risks and next steps when relevant.
|
||||||
|
|
||||||
|
<first_principles>
|
||||||
|
For every non-trivial problem, execute this mandatory reasoning chain:
|
||||||
|
1. **Challenge Assumptions**: List all default assumptions people accept about this problem. Mark which are unverified, based on analogy, or potentially wrong.
|
||||||
|
2. **Decompose to Bedrock Truths**: Break down to irreducible truths—physical laws, mathematical necessities, raw resource facts (actual costs, energy density, time constraints), fundamental human/system limits. Do not stop at "frameworks" or "methods"—dig to atomic facts.
|
||||||
|
3. **Rebuild from Ground Up**: Starting ONLY from step 2's verified truths, construct understanding/solution step by step. Show reasoning chain explicitly. Forbidden phrases: "because others do it", "industry standard", "typically".
|
||||||
|
4. **Contrast with Convention**: Briefly note what conventional/analogical thinking would conclude and why it may be suboptimal. Identify the essential difference.
|
||||||
|
5. **Conclude**: State the clearest, most fundamental conclusion. If it conflicts with mainstream, say so with underlying logic.
|
||||||
|
|
||||||
|
Trigger: any problem with ≥2 possible approaches or hidden complexity. For simple factual queries, apply implicitly without full output.
|
||||||
|
</first_principles>
|
||||||
|
|
||||||
<context_gathering>
|
<context_gathering>
|
||||||
Fetch project context in parallel: README, package.json/pyproject.toml, directory structure, main configs.
|
Fetch project context in parallel: README, package.json/pyproject.toml, directory structure, main configs.
|
||||||
@@ -15,17 +27,17 @@ Budget: 5-8 tool calls, justify overruns.
|
|||||||
</context_gathering>
|
</context_gathering>
|
||||||
|
|
||||||
<exploration>
|
<exploration>
|
||||||
Goal: Decompose and map the problem space before planning.
|
Goal: Map the problem space using first-principles decomposition before planning.
|
||||||
Trigger conditions:
|
Trigger conditions:
|
||||||
- Task involves ≥3 steps or multiple files
|
- Task involves ≥3 steps or multiple files
|
||||||
- User explicitly requests deep analysis
|
- User explicitly requests deep analysis
|
||||||
Process:
|
Process:
|
||||||
- Requirements: Break the ask into explicit requirements, unclear areas, and hidden assumptions.
|
- Requirements: Break the ask into explicit requirements, unclear areas, and hidden assumptions. Apply <first_principles> step 1 here.
|
||||||
- Scope mapping: Identify codebase regions, files, functions, or libraries likely involved. If unknown, perform targeted parallel searches NOW before planning. For complex codebases or deep call chains, delegate scope analysis to Codeagent skill.
|
- Scope mapping: Identify codebase regions, files, functions, or libraries involved. Perform targeted parallel searches before planning. For complex call chains, delegate to skill(`codeagent`).
|
||||||
- Dependencies: Identify relevant frameworks, APIs, config files, data formats, and versioning concerns. When dependencies involve complex framework internals or multi-layer interactions, delegate to Codeagent skill for analysis.
|
- Dependencies: Identify frameworks, APIs, configs, data formats. For complex internals, delegate to skill(`codeagent`).
|
||||||
- Ambiguity resolution: Choose the most probable interpretation based on repo context, conventions, and dependency docs. Document assumptions explicitly.
|
- Ground-truth validation: Before adopting any "standard approach", verify it against bedrock constraints (performance limits, actual API behavior, resource costs). Apply <first_principles> steps 2-3.
|
||||||
- Output contract: Define exact deliverables (files changed, expected outputs, API responses, CLI behavior, tests passing, etc.).
|
- Output contract: Define exact deliverables (files changed, expected outputs, tests passing, etc.).
|
||||||
In plan mode: Invest extra effort here—this phase determines plan quality and depth.
|
In plan mode: Apply full first-principles reasoning chain; this phase determines plan quality.
|
||||||
</exploration>
|
</exploration>
|
||||||
|
|
||||||
<persistence>
|
<persistence>
|
||||||
@@ -73,6 +85,5 @@ Code Editing Rules:
|
|||||||
- Enforce accessibility, consistent spacing (multiples of 4), ≤2 accent colors.
|
- Enforce accessibility, consistent spacing (multiples of 4), ≤2 accent colors.
|
||||||
- Use semantic HTML and accessible components.
|
- Use semantic HTML and accessible components.
|
||||||
Communication:
|
Communication:
|
||||||
- Think in English, respond in Chinese, stay terse.
|
|
||||||
- Lead with findings before summaries; critique code, not people.
|
- Lead with findings before summaries; critique code, not people.
|
||||||
- Provide next steps only when they naturally follow from the work.
|
- Provide next steps only when they naturally follow from the work.
|
||||||
|
|||||||
@@ -1,6 +1,6 @@
|
|||||||
{
|
{
|
||||||
"name": "myclaude",
|
"name": "myclaude",
|
||||||
"version": "0.0.0",
|
"version": "6.7.0",
|
||||||
"private": true,
|
"private": true,
|
||||||
"description": "Claude Code multi-agent workflows (npx installer)",
|
"description": "Claude Code multi-agent workflows (npx installer)",
|
||||||
"license": "AGPL-3.0",
|
"license": "AGPL-3.0",
|
||||||
@@ -13,6 +13,7 @@
|
|||||||
"agents/",
|
"agents/",
|
||||||
"skills/",
|
"skills/",
|
||||||
"memorys/",
|
"memorys/",
|
||||||
|
"templates/",
|
||||||
"codeagent-wrapper/",
|
"codeagent-wrapper/",
|
||||||
"config.json",
|
"config.json",
|
||||||
"install.py",
|
"install.py",
|
||||||
|
|||||||
@@ -1,6 +1,6 @@
|
|||||||
# do - Feature Development Orchestrator
|
# do - Feature Development Orchestrator
|
||||||
|
|
||||||
7-phase feature development workflow orchestrating multiple agents via codeagent-wrapper.
|
5-phase feature development workflow orchestrating multiple agents via codeagent-wrapper.
|
||||||
|
|
||||||
## Installation
|
## Installation
|
||||||
|
|
||||||
@@ -24,17 +24,15 @@ Examples:
 /do implement order export to CSV
 ```
 
-## 7-Phase Workflow
+## 5-Phase Workflow
 
 | Phase | Name | Goal | Key Actions |
 |-------|------|------|-------------|
-| 1 | Discovery | Understand requirements | AskUserQuestion + code-architect draft |
-| 2 | Exploration | Map codebase patterns | 2-3 parallel code-explorer tasks |
-| 3 | Clarification | Resolve ambiguities | **MANDATORY** - must answer before proceeding |
-| 4 | Architecture | Design implementation | 2 parallel code-architect approaches |
-| 5 | Implementation | Build the feature | **Requires approval** - develop agent |
-| 6 | Review | Catch defects | 2-3 parallel code-reviewer tasks |
-| 7 | Summary | Document results | code-reviewer summary |
+| 1 | Understand | Gather requirements | AskUserQuestion + code-explorer analysis |
+| 2 | Clarify | Resolve ambiguities | **MANDATORY** - must answer before proceeding |
+| 3 | Design | Plan implementation | code-architect approaches |
+| 4 | Implement | Build the feature | **Requires approval** - develop agent |
+| 5 | Complete | Finalize and document | code-reviewer summary |
 
 ## Agents
 
@@ -50,11 +48,11 @@ To customize agents, create same-named files in `~/.codeagent/agents/` to overri
 ## Hard Constraints
 
 1. **Never write code directly** - delegate all changes to codeagent-wrapper agents
-2. **Phase 3 is mandatory** - do not proceed until questions are answered
-3. **Phase 5 requires approval** - stop after Phase 4 if not approved
+2. **Phase 2 is mandatory** - do not proceed until questions are answered
+3. **Phase 4 requires approval** - stop after Phase 3 if not approved
 4. **Pass complete context forward** - every agent gets the Context Pack
 5. **Parallel-first** - run independent tasks via `codeagent-wrapper --parallel`
-6. **Update state after each phase** - keep `.claude/do.{task_id}.local.md` current
+6. **Update state after each phase** - keep `.claude/do-tasks/{task_id}/task.json` current
 
 ## Context Pack Template
 
@@ -63,7 +61,7 @@ To customize agents, create same-named files in `~/.codeagent/agents/` to overri
 <verbatim request>
 
 ## Context Pack
-- Phase: <1-7 name>
+- Phase: <1-5 name>
 - Decisions: <requirements/constraints/choices>
 - Code-explorer output: <paste or "None">
 - Code-architect output: <paste or "None">
@@ -80,34 +78,52 @@ To customize agents, create same-named files in `~/.codeagent/agents/` to overri
 
 ## Loop State Management
 
-When triggered via `/do <task>`, initializes `.claude/do.{task_id}.local.md` with:
-- `active: true`
-- `current_phase: 1`
-- `max_phases: 7`
-- `completion_promise: "<promise>DO_COMPLETE</promise>"`
-
-After each phase, update frontmatter:
+When triggered via `/do <task>`, initializes `.claude/do-tasks/{task_id}/task.md` with YAML frontmatter:
 ```yaml
-current_phase: <next phase number>
-phase_name: "<next phase name>"
+---
+id: "<task_id>"
+title: "<task description>"
+status: "in_progress"
+current_phase: 1
+phase_name: "Understand"
+max_phases: 5
+use_worktree: false
+created_at: "<ISO timestamp>"
+completion_promise: "<promise>DO_COMPLETE</promise>"
+---
+
+# Requirements
+
+<task description>
+
+## Context
+
+## Progress
 ```
 
-When all 7 phases complete, output:
+The current task is tracked in `.claude/do-tasks/.current-task`.
+
+After each phase, update `task.md` frontmatter via:
+```bash
+python3 ".claude/skills/do/scripts/task.py" update-phase <N>
+```
+
+When all 5 phases complete, output:
 ```
 <promise>DO_COMPLETE</promise>
 ```
 
-To abort early, set `active: false` in the state file.
+To abort early, manually edit `task.md` and set `status: "cancelled"` in the frontmatter.
 
 ## Stop Hook
 
 A Stop hook is registered after installation:
-1. Creates `.claude/do.{task_id}.local.md` state file
-2. Updates `current_phase` after each phase
+1. Creates `.claude/do-tasks/{task_id}/task.md` state file
+2. Updates `current_phase` in frontmatter after each phase
 3. Stop hook checks state, blocks exit if incomplete
 4. Outputs `<promise>DO_COMPLETE</promise>` when finished
 
-Manual exit: Set `active` to `false` in the state file.
+Manual exit: Edit `task.md` and set `status: "cancelled"` in the frontmatter.
 
 ## Parallel Execution Examples
 
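The `update-phase` step described in this diff can be sketched in a few lines. This is an illustrative re-implementation, not the repository's actual `task.py`; it assumes only the `---`-delimited YAML frontmatter layout and the five phase names shown above.

```python
import re

# Phase names from the 5-phase workflow; unknown numbers fall back to a generic label.
PHASE_NAMES = {1: "Understand", 2: "Clarify", 3: "Design", 4: "Implement", 5: "Complete"}

def update_phase(task_md: str, n: int) -> str:
    """Rewrite current_phase and phase_name inside the frontmatter of task.md text."""
    def repl(match):
        body = match.group(1)
        body = re.sub(r"(?m)^current_phase:.*$", f"current_phase: {n}", body)
        name = PHASE_NAMES.get(n, f"Phase {n}")
        body = re.sub(r"(?m)^phase_name:.*$", f'phase_name: "{name}"', body)
        return f"---\n{body}\n---"
    # Only touch the leading frontmatter block; the Markdown body is left as-is.
    return re.sub(r"(?s)\A---\n(.*?)\n---", repl, task_md, count=1)
```

The Markdown body after the closing `---` passes through untouched, which is what lets the requirements and progress notes accumulate across phases.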
@@ -184,3 +200,29 @@ Required when using `agent:` in parallel tasks or `--agent`. Create `~/.codeagen
 ```bash
 python install.py --uninstall --module do
 ```
+
+## Worktree Mode
+
+Use `--worktree` to execute tasks in an isolated git worktree, preventing changes to your main branch:
+
+```bash
+codeagent-wrapper --worktree --agent develop "implement feature X" .
+```
+
+This automatically:
+1. Generates a unique task ID (format: `YYYYMMDD-xxxxxx`)
+2. Creates a new worktree at `.worktrees/do-{task_id}/`
+3. Creates a new branch `do/{task_id}`
+4. Executes the task in the isolated worktree
+
+Output includes: `Using worktree: .worktrees/do-{task_id}/ (task_id: {id}, branch: do/{id})`
+
+In parallel mode, add `worktree: true` to task blocks:
+```
+---TASK---
+id: feature_impl
+agent: develop
+worktree: true
+---CONTENT---
+Implement the feature
+```

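The naming scheme documented for worktree mode can be sketched as follows. This is a hypothetical helper for illustration only (the 6-character suffix is assumed here to be random hex; the actual generator in codeagent-wrapper may differ), but it follows the documented `YYYYMMDD-xxxxxx`, `.worktrees/do-{task_id}/`, and `do/{task_id}` conventions.

```python
import datetime
import secrets

def new_task_id() -> str:
    """Generate an ID in the documented YYYYMMDD-xxxxxx format (suffix assumed hex)."""
    date = datetime.date.today().strftime("%Y%m%d")
    suffix = secrets.token_hex(3)  # 3 random bytes -> 6 hex characters
    return f"{date}-{suffix}"

def worktree_paths(task_id: str) -> tuple[str, str]:
    """Return (worktree path, branch name) following the documented layout."""
    return f".worktrees/do-{task_id}/", f"do/{task_id}"
```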
@@ -1,7 +1,7 @@
 ---
 name: do
-description: This skill should be used for structured feature development with codebase understanding. Triggers on /do command. Provides a 7-phase workflow (Discovery, Exploration, Clarification, Architecture, Implementation, Review, Summary) using codeagent-wrapper to orchestrate code-explorer, code-architect, code-reviewer, and develop agents in parallel.
-allowed-tools: ["Bash(${SKILL_DIR}/scripts/setup-do.sh:*)"]
+description: This skill should be used for structured feature development with codebase understanding. Triggers on /do command. Provides a 5-phase workflow (Understand, Clarify, Design, Implement, Complete) using codeagent-wrapper to orchestrate code-explorer, code-architect, code-reviewer, and develop agents in parallel.
+allowed-tools: ["Bash(.claude/skills/do/scripts/setup-do.py:*)", "Bash(.claude/skills/do/scripts/task.py:*)"]
 ---
 
 # do - Feature Development Orchestrator
@@ -10,324 +10,255 @@ An orchestrator for systematic feature development. Invoke agents via `codeagent
 ## Loop Initialization (REQUIRED)
 
-When triggered via `/do <task>`, **first** initialize the loop state:
+When triggered via `/do <task>`, follow these steps:
+
+### Step 1: Ask about worktree mode
+
+Use AskUserQuestion to ask:
+
+```
+Develop in a separate worktree? (Isolates changes from main branch)
+- Yes (Recommended for larger changes)
+- No (Work directly in current directory)
+```
+
+### Step 2: Initialize task directory
 
 ```bash
-"${SKILL_DIR}/scripts/setup-do.sh" "<task description>"
+# If worktree mode selected:
+python3 ".claude/skills/do/scripts/setup-do.py" --worktree "<task description>"
+
+# If no worktree:
+python3 ".claude/skills/do/scripts/setup-do.py" "<task description>"
 ```
 
-This creates `.claude/do.{task_id}.local.md` with:
-- `active: true`
-- `current_phase: 1`
-- `max_phases: 7`
-- `completion_promise: "<promise>DO_COMPLETE</promise>"`
+This creates a task directory under `.claude/do-tasks/` with:
+- `task.md`: Single file containing YAML frontmatter (metadata) + Markdown body (requirements/context)
 
-## Loop State Management
+## Task Directory Management
 
-After each phase, update `.claude/do.{task_id}.local.md` frontmatter:
-```yaml
-current_phase: <next phase number>
-phase_name: "<next phase name>"
-```
+Use `task.py` to manage task state:
 
-When all 7 phases complete, output the completion signal:
-```
-<promise>DO_COMPLETE</promise>
-```
+```bash
+# Update phase
+python3 ".claude/skills/do/scripts/task.py" update-phase 2
+
+# Check status
+python3 ".claude/skills/do/scripts/task.py" status
+
+# List all tasks
+python3 ".claude/skills/do/scripts/task.py" list
+```
 
-To abort early, set `active: false` in the state file.
+## Worktree Mode
+
+When worktree mode is enabled in task.json, ALL `codeagent-wrapper` calls that modify code MUST include `--worktree`:
+
+```bash
+codeagent-wrapper --worktree --agent develop - . <<'EOF'
+...
+EOF
+```
+
+Read-only agents (code-explorer, code-architect, code-reviewer) do NOT need `--worktree`.
 
 ## Hard Constraints
 
 1. **Never write code directly.** Delegate all code changes to `codeagent-wrapper` agents.
-2. **Phase 3 (Clarification) is mandatory.** Do not proceed until questions are answered.
-3. **Phase 5 (Implementation) requires explicit approval.** Stop after Phase 4 if not approved.
-4. **Pass complete context forward.** Every agent invocation includes the Context Pack.
-5. **Parallel-first.** Run independent tasks via `codeagent-wrapper --parallel`.
-6. **Update state after each phase.** Keep `.claude/do.{task_id}.local.md` current.
-7. **Expect long-running `codeagent-wrapper` calls.** High-reasoning modes (e.g. `xhigh`) can take a long time; stay in the orchestrator role and wait for agents to complete.
-8. **Timeouts are not an escape hatch.** If a `codeagent-wrapper` invocation times out/errors, retry `codeagent-wrapper` (split/narrow the task if needed); never switch to direct implementation.
+2. **Parallel-first.** Run independent tasks via `codeagent-wrapper --parallel`.
+3. **Update phase after each phase.** Use `task.py update-phase <N>`.
+4. **Expect long-running `codeagent-wrapper` calls.** High-reasoning modes can take a long time.
+5. **Timeouts are not an escape hatch.** If a call times out, retry with narrower scope.
+6. **Respect worktree setting.** If enabled, always pass `--worktree` to develop agent calls.
 
 ## Agents
 
-| Agent | Purpose | Prompt |
-|-------|---------|--------|
-| `code-explorer` | Trace code, map architecture, find patterns | `agents/code-explorer.md` |
-| `code-architect` | Design approaches, file plans, build sequences | `agents/code-architect.md` |
-| `code-reviewer` | Review for bugs, simplicity, conventions | `agents/code-reviewer.md` |
-| `develop` | Implement code, run tests | (uses global config) |
+| Agent | Purpose | Needs --worktree |
+|-------|---------|------------------|
+| `code-explorer` | Trace code, map architecture, find patterns | No (read-only) |
+| `code-architect` | Design approaches, file plans, build sequences | No (read-only) |
+| `code-reviewer` | Review for bugs, simplicity, conventions | No (read-only) |
+| `develop` | Implement code, run tests | **Yes** (if worktree enabled) |
 
-## Context Pack Template
+## Issue Severity Definitions
 
-```text
-## Original User Request
-<verbatim request>
-
-## Context Pack
-- Phase: <1-7 name>
-- Decisions: <requirements/constraints/choices>
-- Code-explorer output: <paste or "None">
-- Code-architect output: <paste or "None">
-- Code-reviewer output: <paste or "None">
-- Develop output: <paste or "None">
-- Open questions: <list or "None">
-
-## Current Task
-<specific task>
-
-## Acceptance Criteria
-<checkable outputs>
-```
+**Blocking issues** (require user input):
+- Impacts core functionality or correctness
+- Security vulnerabilities
+- Architectural conflicts with existing patterns
+- Ambiguous requirements with multiple valid interpretations
+
+**Minor issues** (auto-fix without asking):
+- Code style inconsistencies
+- Naming improvements
+- Missing documentation
+- Non-critical test coverage gaps
 
-## 7-Phase Workflow
+## 5-Phase Workflow
 
-### Phase 1: Discovery
-
-**Goal:** Understand what to build.
-
-**Actions:**
-1. Use AskUserQuestion for: user-visible behavior, scope, constraints, acceptance criteria
-2. Invoke `code-architect` to draft requirements checklist and clarifying questions
-
-```bash
-codeagent-wrapper --agent code-architect - . <<'EOF'
-## Original User Request
-/do <request>
-
-## Context Pack
-- Code-explorer output: None
-- Code-architect output: None
-
-## Current Task
-Produce requirements checklist and identify missing information.
-Output: Requirements, Non-goals, Risks, Acceptance criteria, Questions (<= 10)
-
-## Acceptance Criteria
-Concrete, testable checklist; specific questions; no implementation.
-EOF
-```
+### Phase 1: Understand (Parallel, No Interaction)
+
+**Goal:** Understand requirements and map codebase simultaneously.
+
+**Actions:** Run `code-architect` and 2-3 `code-explorer` tasks in parallel.
+
+```bash
+codeagent-wrapper --parallel <<'EOF'
+---TASK---
+id: p1_requirements
+agent: code-architect
+workdir: .
+---CONTENT---
+Analyze requirements completeness (score 1-10):
+1. Extract explicit requirements, constraints, acceptance criteria
+2. Identify blocking questions (issues that prevent implementation)
+3. Identify minor clarifications (nice-to-have but can proceed without)
+
+Output format:
+- Completeness score: X/10
+- Requirements: [list]
+- Non-goals: [list]
+- Blocking questions: [list, if any]
+
+---TASK---
+id: p1_similar_features
+agent: code-explorer
+workdir: .
+---CONTENT---
+Find 1-3 similar features, trace end-to-end. Return: key files with line numbers, call flow, extension points.
+
+---TASK---
+id: p1_architecture
+agent: code-explorer
+workdir: .
+---CONTENT---
+Map architecture for relevant subsystem. Return: module map + 5-10 key files.
+
+---TASK---
+id: p1_conventions
+agent: code-explorer
+workdir: .
+---CONTENT---
+Identify testing patterns, conventions, config. Return: test commands + file locations.
+EOF
+```
 
-### Phase 2: Exploration
-
-**Goal:** Map codebase patterns and extension points.
-
-**Actions:** Run 2-3 `code-explorer` tasks in parallel (similar features, architecture, tests/conventions).
-
-```bash
-codeagent-wrapper --parallel <<'EOF'
----TASK---
-id: p2_similar_features
-agent: code-explorer
-workdir: .
----CONTENT---
-## Original User Request
-/do <request>
-
-## Context Pack
-- Code-architect output: <Phase 1 output>
-
-## Current Task
-Find 1-3 similar features, trace end-to-end. Return: key files with line numbers, call flow, extension points.
-
-## Acceptance Criteria
-Concrete file:line map + reuse points.
-
----TASK---
-id: p2_architecture
-agent: code-explorer
-workdir: .
----CONTENT---
-## Original User Request
-/do <request>
-
-## Context Pack
-- Code-architect output: <Phase 1 output>
-
-## Current Task
-Map architecture for relevant subsystem. Return: module map + 5-10 key files.
-
-## Acceptance Criteria
-Clear boundaries; file:line references.
-
----TASK---
-id: p2_conventions
-agent: code-explorer
-workdir: .
----CONTENT---
-## Original User Request
-/do <request>
-
-## Context Pack
-- Code-architect output: <Phase 1 output>
-
-## Current Task
-Identify testing patterns, conventions, config. Return: test commands + file locations.
-
-## Acceptance Criteria
-Test commands + relevant test file paths.
-EOF
-```
-
-### Phase 3: Clarification (MANDATORY)
-
-**Goal:** Resolve all ambiguities before design.
-
-**Actions:**
-1. Invoke `code-architect` to generate prioritized questions from Phase 1+2 outputs
-2. Use AskUserQuestion to present questions and wait for answers
-3. **Do not proceed until answered or defaults accepted**
-
-### Phase 4: Architecture
-
-**Goal:** Produce implementation plan fitting existing patterns.
-
-**Actions:** Run 2 `code-architect` tasks in parallel (minimal-change vs pragmatic-clean).
-
-```bash
-codeagent-wrapper --parallel <<'EOF'
----TASK---
-id: p4_minimal
-agent: code-architect
-workdir: .
----CONTENT---
-## Original User Request
-/do <request>
-
-## Context Pack
-- Code-explorer output: <ALL Phase 2 outputs>
-- Code-architect output: <Phase 1 + Phase 3 answers>
-
-## Current Task
-Propose minimal-change architecture: reuse existing abstractions, minimize new files.
-Output: file touch list, risks, edge cases.
-
-## Acceptance Criteria
-Concrete blueprint; minimal moving parts.
-
----TASK---
-id: p4_pragmatic
-agent: code-architect
-workdir: .
----CONTENT---
-## Original User Request
-/do <request>
-
-## Context Pack
-- Code-explorer output: <ALL Phase 2 outputs>
-- Code-architect output: <Phase 1 + Phase 3 answers>
-
-## Current Task
-Propose pragmatic-clean architecture: introduce seams for testability.
-Output: file touch list, testing plan, risks.
-
-## Acceptance Criteria
-Implementable blueprint with build sequence and tests.
-EOF
-```
-
-Use AskUserQuestion to let user choose approach.
-
-### Phase 5: Implementation (Approval Required)
-
-**Goal:** Build the feature.
-
-**Actions:**
-1. Use AskUserQuestion: "Approve starting implementation?" (Approve / Not yet)
-2. If approved, invoke `develop`:
+### Phase 2: Clarify (Conditional)
+
+**Goal:** Resolve blocking ambiguities only.
+
+**Actions:**
+1. Review `p1_requirements` output for blocking questions
+2. **IF blocking questions exist** → Use AskUserQuestion
+3. **IF no blocking questions (completeness >= 8)** → Skip to Phase 3
+
+### Phase 3: Design (No Interaction)
+
+**Goal:** Produce minimal-change implementation plan.
+
+```bash
+codeagent-wrapper --agent code-architect - . <<'EOF'
+Design minimal-change implementation:
+- Reuse existing abstractions
+- Minimize new files
+- Follow established patterns from Phase 1 exploration
+
+Output:
+- File touch list with specific changes
+- Build sequence
+- Test plan
+- Risks and mitigations
+EOF
+```
+
+### Phase 4: Implement + Review
+
+**Goal:** Build feature and review in one phase.
+
+1. Invoke `develop` to implement. For full-stack projects, split into backend/frontend tasks with per-task `skills:` injection. Use `--parallel` when tasks can be split; use single agent when the change is small or single-domain.
+
+**Single-domain example** (add `--worktree` if enabled):
 
 ```bash
-codeagent-wrapper --agent develop - . <<'EOF'
-## Original User Request
-/do <request>
-
-## Context Pack
-- Code-explorer output: <ALL Phase 2 outputs>
-- Code-architect output: <selected Phase 4 blueprint + Phase 3 answers>
-
-## Current Task
-Implement with minimal change set following chosen architecture.
-- Follow Phase 2 patterns
-- Add/adjust tests per Phase 4 plan
+codeagent-wrapper --worktree --agent develop --skills golang-base-practices - . <<'EOF'
+Implement with minimal change set following the Phase 3 blueprint.
+- Follow Phase 1 patterns
+- Add/adjust tests per Phase 3 plan
 - Run narrowest relevant tests
-
-## Acceptance Criteria
-Feature works end-to-end; tests pass; diff is minimal.
 EOF
 ```
 
-### Phase 6: Review
-
-**Goal:** Catch defects and unnecessary complexity.
-
-**Actions:** Run 2-3 `code-reviewer` tasks in parallel (correctness, simplicity).
+**Full-stack parallel example** (adapt task IDs, skills, and content based on Phase 3 design):
+
+```bash
+codeagent-wrapper --worktree --parallel <<'EOF'
+---TASK---
+id: p4_backend
+agent: develop
+workdir: .
+skills: golang-base-practices
+---CONTENT---
+Implement backend changes following Phase 3 blueprint.
+- Follow Phase 1 patterns
+- Add/adjust tests per Phase 3 plan
+
+---TASK---
+id: p4_frontend
+agent: develop
+workdir: .
+skills: frontend-design,vercel-react-best-practices
+dependencies: p4_backend
+---CONTENT---
+Implement frontend changes following Phase 3 blueprint.
+- Follow Phase 1 patterns
+- Add/adjust tests per Phase 3 plan
+EOF
+```
+
+Note: Choose which skills to inject based on Phase 3 design output. Only inject skills relevant to each task's domain.
+
+2. Run parallel reviews:
 
 ```bash
 codeagent-wrapper --parallel <<'EOF'
 ---TASK---
-id: p6_correctness
+id: p4_correctness
 agent: code-reviewer
 workdir: .
 ---CONTENT---
-## Original User Request
-/do <request>
-
-## Context Pack
-- Code-architect output: <Phase 4 blueprint>
-- Develop output: <Phase 5 output>
-
-## Current Task
-Review for correctness, edge cases, failure modes. Assume adversarial inputs.
-
-## Acceptance Criteria
-Issues with file:line references and concrete fixes.
+Review for correctness, edge cases, failure modes.
+Classify each issue as BLOCKING or MINOR.
 
 ---TASK---
-id: p6_simplicity
+id: p4_simplicity
 agent: code-reviewer
 workdir: .
 ---CONTENT---
-## Original User Request
-/do <request>
-
-## Context Pack
-- Code-architect output: <Phase 4 blueprint>
-- Develop output: <Phase 5 output>
-
-## Current Task
 Review for KISS: remove bloat, collapse needless abstractions.
-
-## Acceptance Criteria
-Actionable simplifications with justification.
+Classify each issue as BLOCKING or MINOR.
 EOF
 ```
 
-Use AskUserQuestion: Fix now / Fix later / Proceed as-is.
+3. Handle review results:
+- **MINOR issues only** → Auto-fix via `develop`, no user interaction
+- **BLOCKING issues** → Use AskUserQuestion: "Fix now / Proceed as-is"
 
-### Phase 7: Summary
+### Phase 5: Complete (No Interaction)
 
 **Goal:** Document what was built.
 
-**Actions:** Invoke `code-reviewer` to produce summary:
-
 ```bash
 codeagent-wrapper --agent code-reviewer - . <<'EOF'
-## Original User Request
-/do <request>
-
-## Context Pack
-- Code-architect output: <Phase 4 blueprint>
-- Code-reviewer output: <Phase 6 outcomes>
-- Develop output: <Phase 5 output + fixes>
-
-## Current Task
 Write completion summary:
 - What was built
 - Key decisions/tradeoffs
 - Files modified (paths)
 - How to verify (commands)
 - Follow-ups (optional)
-
-## Acceptance Criteria
-Short, technical, actionable summary.
 EOF
 ```
+
+Output the completion signal:
+```
+<promise>DO_COMPLETE</promise>
+```

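The `---TASK---` / `---CONTENT---` block format used throughout the parallel examples above is simple enough to parse with string splitting. This is an illustrative sketch based only on the format as shown in this diff (header lines of `key: value` pairs such as `id`, `agent`, `workdir`, `skills`, `dependencies`, followed by a free-text body), not codeagent-wrapper's actual parser.

```python
def parse_parallel_spec(spec: str) -> list[dict]:
    """Parse ---TASK--- blocks into dicts: header key/value pairs plus a 'content' body."""
    tasks = []
    for block in spec.split("---TASK---")[1:]:
        header, _, content = block.partition("---CONTENT---")
        task = {"content": content.strip()}
        for line in header.strip().splitlines():
            if ":" in line:
                key, value = line.split(":", 1)
                task[key.strip()] = value.strip()
        tasks.append(task)
    return tasks
```

Fields like `dependencies: p4_backend` come back as plain strings, so an orchestrator would still need to resolve them into an execution order.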
@@ -1,12 +1,23 @@
 {
-  "description": "do loop hook for 7-phase workflow",
+  "description": "do loop hooks for 5-phase workflow",
   "hooks": {
     "Stop": [
       {
         "hooks": [
           {
             "type": "command",
-            "command": "${CLAUDE_PLUGIN_ROOT}/hooks/stop-hook.sh"
+            "command": "python3 ${CLAUDE_PLUGIN_ROOT}/hooks/stop-hook.py"
+          }
+        ]
+      }
+    ],
+    "SubagentStop": [
+      {
+        "matcher": "code-reviewer",
+        "hooks": [
+          {
+            "type": "command",
+            "command": "python3 ${CLAUDE_PLUGIN_ROOT}/hooks/verify-loop.py"
           }
         ]
       }

107
skills/do/hooks/stop-hook.py
Executable file
107
skills/do/hooks/stop-hook.py
Executable file
@@ -0,0 +1,107 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
"""
|
||||||
|
Stop hook for do skill workflow.
|
||||||
|
|
||||||
|
Checks if the do loop is complete before allowing exit.
|
||||||
|
Uses the new task directory structure under .claude/do-tasks/.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import glob
|
||||||
|
import json
|
||||||
|
import os
|
||||||
|
import sys
|
||||||
|
|
||||||
|
DIR_TASKS = ".claude/do-tasks"
|
||||||
|
FILE_CURRENT_TASK = ".current-task"
|
||||||
|
FILE_TASK_JSON = "task.json"
|
||||||
|
|
||||||
|
PHASE_NAMES = {
|
||||||
|
1: "Understand",
|
||||||
|
2: "Clarify",
|
||||||
|
3: "Design",
|
||||||
|
4: "Implement",
|
||||||
|
5: "Complete",
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
def phase_name_for(n: int) -> str:
|
||||||
|
return PHASE_NAMES.get(n, f"Phase {n}")
|
||||||
|
|
||||||
|
|
||||||
|
def get_current_task(project_dir: str) -> str | None:
|
||||||
|
"""Read current task directory path."""
|
||||||
|
current_task_file = os.path.join(project_dir, DIR_TASKS, FILE_CURRENT_TASK)
|
||||||
|
if not os.path.exists(current_task_file):
|
||||||
|
return None
|
||||||
|
try:
|
||||||
|
with open(current_task_file, "r", encoding="utf-8") as f:
|
||||||
|
content = f.read().strip()
|
||||||
|
return content if content else None
|
||||||
|
except Exception:
|
||||||
|
return None
|
||||||
|
|
||||||
|
|
||||||
|
def get_task_info(project_dir: str, task_dir: str) -> dict | None:
|
||||||
|
"""Read task.json data."""
|
||||||
|
task_json_path = os.path.join(project_dir, task_dir, FILE_TASK_JSON)
|
||||||
|
if not os.path.exists(task_json_path):
|
||||||
|
return None
|
||||||
|
try:
|
||||||
|
with open(task_json_path, "r", encoding="utf-8") as f:
|
||||||
|
return json.load(f)
|
||||||
|
except Exception:
|
        return None


def check_task_complete(project_dir: str, task_dir: str) -> str:
    """Check if task is complete. Returns blocking reason or empty string."""
    task_info = get_task_info(project_dir, task_dir)
    if not task_info:
        return ""

    status = task_info.get("status", "")
    if status == "completed":
        return ""

    current_phase = task_info.get("current_phase", 1)
    max_phases = task_info.get("max_phases", 5)
    phase_name = task_info.get("phase_name", phase_name_for(current_phase))
    completion_promise = task_info.get("completion_promise", "<promise>DO_COMPLETE</promise>")

    if current_phase >= max_phases:
        # Task is at final phase, allow exit
        return ""

    return (
        f"do loop incomplete: current phase {current_phase}/{max_phases} ({phase_name}). "
        f"Continue with remaining phases; use 'task.py update-phase <N>' after each phase. "
        f"Include completion_promise in final output when done: {completion_promise}. "
        f"To exit early, set status to 'completed' in task.json."
    )


def main():
    project_dir = os.environ.get("CLAUDE_PROJECT_DIR", os.getcwd())

    task_dir = get_current_task(project_dir)
    if not task_dir:
        # No active task, allow exit
        sys.exit(0)

    stdin_payload = ""
    if not sys.stdin.isatty():
        try:
            stdin_payload = sys.stdin.read()
        except Exception:
            pass

    reason = check_task_complete(project_dir, task_dir)
    if not reason:
        sys.exit(0)

    print(json.dumps({"decision": "block", "reason": reason}))
    sys.exit(0)


if __name__ == "__main__":
    main()
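The phase gate implemented by `check_task_complete` reduces to a small rule: a `completed` status or reaching the final phase allows the stop; anything else yields a blocking reason. A minimal standalone sketch (`gate` is an illustrative name, not part of the hook):

```python
def gate(status: str, current_phase: int, max_phases: int) -> str:
    """Return "" to allow the stop, or a blocking reason string."""
    if status == "completed":
        return ""
    if current_phase >= max_phases:
        # Final phase reached: allow exit even without a completed status
        return ""
    return f"do loop incomplete: current phase {current_phase}/{max_phases}"

# gate("", 2, 5) -> "do loop incomplete: current phase 2/5"
```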
@@ -1,151 +0,0 @@
#!/usr/bin/env bash

set -euo pipefail

phase_name_for() {
  case "${1:-}" in
    1) echo "Discovery" ;;
    2) echo "Exploration" ;;
    3) echo "Clarification" ;;
    4) echo "Architecture" ;;
    5) echo "Implementation" ;;
    6) echo "Review" ;;
    7) echo "Summary" ;;
    *) echo "Phase ${1:-unknown}" ;;
  esac
}

json_escape() {
  local s="${1:-}"
  s=${s//\\/\\\\}
  s=${s//\"/\\\"}
  s=${s//$'\n'/\\n}
  s=${s//$'\r'/\\r}
  s=${s//$'\t'/\\t}
  printf "%s" "$s"
}

project_dir="${CLAUDE_PROJECT_DIR:-$PWD}"
state_dir="${project_dir}/.claude"

shopt -s nullglob
state_files=("${state_dir}"/do.*.local.md)
shopt -u nullglob

if [ ${#state_files[@]} -eq 0 ]; then
  exit 0
fi

stdin_payload=""
if [ ! -t 0 ]; then
  stdin_payload="$(cat || true)"
fi

frontmatter_get() {
  local file="$1" key="$2"
  awk -v k="$key" '
    BEGIN { in_fm=0 }
    NR==1 && $0=="---" { in_fm=1; next }
    in_fm==1 && $0=="---" { exit }
    in_fm==1 {
      if ($0 ~ "^"k":[[:space:]]*") {
        sub("^"k":[[:space:]]*", "", $0)
        gsub(/^[[:space:]]+|[[:space:]]+$/, "", $0)
        if ($0 ~ /^".*"$/) { sub(/^"/, "", $0); sub(/"$/, "", $0) }
        print $0
        exit
      }
    }
  ' "$file"
}

check_state_file() {
  local state_file="$1"

  local active_raw active_lc
  active_raw="$(frontmatter_get "$state_file" active || true)"
  active_lc="$(printf "%s" "$active_raw" | tr '[:upper:]' '[:lower:]')"
  case "$active_lc" in
    true|1|yes|on) ;;
    *) return 0 ;;
  esac

  local current_phase_raw max_phases_raw phase_name completion_promise
  current_phase_raw="$(frontmatter_get "$state_file" current_phase || true)"
  max_phases_raw="$(frontmatter_get "$state_file" max_phases || true)"
  phase_name="$(frontmatter_get "$state_file" phase_name || true)"
  completion_promise="$(frontmatter_get "$state_file" completion_promise || true)"

  local current_phase=1
  if [[ "${current_phase_raw:-}" =~ ^[0-9]+$ ]]; then
    current_phase="$current_phase_raw"
  fi

  local max_phases=7
  if [[ "${max_phases_raw:-}" =~ ^[0-9]+$ ]]; then
    max_phases="$max_phases_raw"
  fi

  if [ -z "${phase_name:-}" ]; then
    phase_name="$(phase_name_for "$current_phase")"
  fi

  if [ -z "${completion_promise:-}" ]; then
    completion_promise="<promise>DO_COMPLETE</promise>"
  fi

  local phases_done=0
  if [ "$current_phase" -ge "$max_phases" ]; then
    phases_done=1
  fi

  local promise_met=0
  if [ -n "$completion_promise" ]; then
    if [ -n "$stdin_payload" ] && printf "%s" "$stdin_payload" | grep -Fq -- "$completion_promise"; then
      promise_met=1
    else
      local body
      body="$(
        awk '
          BEGIN { in_fm=0; body=0 }
          NR==1 && $0=="---" { in_fm=1; next }
          in_fm==1 && $0=="---" { body=1; in_fm=0; next }
          body==1 { print }
        ' "$state_file"
      )"
      if [ -n "$body" ] && printf "%s" "$body" | grep -Fq -- "$completion_promise"; then
        promise_met=1
      fi
    fi
  fi

  if [ "$phases_done" -eq 1 ] && [ "$promise_met" -eq 1 ]; then
    rm -f "$state_file"
    return 0
  fi

  local reason
  if [ "$phases_done" -eq 0 ]; then
    reason="do loop incomplete: current phase ${current_phase}/${max_phases} (${phase_name}). Continue with remaining phases; update ${state_file} current_phase/phase_name after each phase. Include completion_promise in final output when done: ${completion_promise}. To exit early, set active to false."
  else
    reason="do reached final phase (current_phase=${current_phase} / max_phases=${max_phases}, phase_name=${phase_name}), but completion_promise not detected: ${completion_promise}. Please include this marker in your final output (or write it to ${state_file} body), then finish; to force exit, set active to false."
  fi

  printf "%s" "$reason"
}

blocking_reasons=()
for state_file in "${state_files[@]}"; do
  reason="$(check_state_file "$state_file")"
  if [ -n "$reason" ]; then
    blocking_reasons+=("$reason")
  fi
done

if [ ${#blocking_reasons[@]} -eq 0 ]; then
  exit 0
fi

combined_reason="${blocking_reasons[*]}"
printf '{"decision":"block","reason":"%s"}\n' "$(json_escape "$combined_reason")"
exit 0
218
skills/do/hooks/verify-loop.py
Normal file
@@ -0,0 +1,218 @@
#!/usr/bin/env python3
"""
Verify Loop Hook for do skill workflow.

SubagentStop hook that intercepts when code-reviewer agent tries to stop.
Runs verification commands to ensure code quality before allowing exit.

Mechanism:
- Intercepts SubagentStop event for code-reviewer agent
- Runs verify commands from task.json if configured
- Blocks stopping until verification passes
- Has max iterations as safety limit (MAX_ITERATIONS=5)

State file: .claude/do-tasks/.verify-state.json
"""

import json
import os
import subprocess
import sys
from datetime import datetime
from pathlib import Path

# Configuration
MAX_ITERATIONS = 5
STATE_TIMEOUT_MINUTES = 30
DIR_TASKS = ".claude/do-tasks"
FILE_CURRENT_TASK = ".current-task"
FILE_TASK_JSON = "task.json"
STATE_FILE = ".claude/do-tasks/.verify-state.json"

# Only control loop for code-reviewer agent
TARGET_AGENTS = {"code-reviewer"}


def get_project_root(cwd: str) -> str | None:
    """Find project root (directory with .claude folder)."""
    current = Path(cwd).resolve()
    while current != current.parent:
        if (current / ".claude").exists():
            return str(current)
        current = current.parent
    return None


def get_current_task(project_root: str) -> str | None:
    """Read current task directory path."""
    current_task_file = os.path.join(project_root, DIR_TASKS, FILE_CURRENT_TASK)
    if not os.path.exists(current_task_file):
        return None
    try:
        with open(current_task_file, "r", encoding="utf-8") as f:
            content = f.read().strip()
        return content if content else None
    except Exception:
        return None


def get_task_info(project_root: str, task_dir: str) -> dict | None:
    """Read task.json data."""
    task_json_path = os.path.join(project_root, task_dir, FILE_TASK_JSON)
    if not os.path.exists(task_json_path):
        return None
    try:
        with open(task_json_path, "r", encoding="utf-8") as f:
            return json.load(f)
    except Exception:
        return None


def get_verify_commands(task_info: dict) -> list[str]:
    """Get verify commands from task.json."""
    return task_info.get("verify_commands", [])


def run_verify_commands(project_root: str, commands: list[str]) -> tuple[bool, str]:
    """Run verify commands and return (success, message)."""
    for cmd in commands:
        try:
            result = subprocess.run(
                cmd,
                shell=True,
                cwd=project_root,
                capture_output=True,
                timeout=120,
            )
            if result.returncode != 0:
                stderr = result.stderr.decode("utf-8", errors="replace")
                stdout = result.stdout.decode("utf-8", errors="replace")
                error_output = stderr or stdout
                if len(error_output) > 500:
                    error_output = error_output[:500] + "..."
                return False, f"Command failed: {cmd}\n{error_output}"
        except subprocess.TimeoutExpired:
            return False, f"Command timed out: {cmd}"
        except Exception as e:
            return False, f"Command error: {cmd} - {str(e)}"
    return True, "All verify commands passed"


def load_state(project_root: str) -> dict:
    """Load verify loop state."""
    state_path = os.path.join(project_root, STATE_FILE)
    if not os.path.exists(state_path):
        return {"task": None, "iteration": 0, "started_at": None}
    try:
        with open(state_path, "r", encoding="utf-8") as f:
            return json.load(f)
    except Exception:
        return {"task": None, "iteration": 0, "started_at": None}


def save_state(project_root: str, state: dict) -> None:
    """Save verify loop state."""
    state_path = os.path.join(project_root, STATE_FILE)
    try:
        os.makedirs(os.path.dirname(state_path), exist_ok=True)
        with open(state_path, "w", encoding="utf-8") as f:
            json.dump(state, f, indent=2, ensure_ascii=False)
    except Exception:
        pass


def main():
    try:
        input_data = json.load(sys.stdin)
    except json.JSONDecodeError:
        sys.exit(0)

    hook_event = input_data.get("hook_event_name", "")
    if hook_event != "SubagentStop":
        sys.exit(0)

    subagent_type = input_data.get("subagent_type", "")
    agent_output = input_data.get("agent_output", "")
    cwd = input_data.get("cwd", os.getcwd())

    if subagent_type not in TARGET_AGENTS:
        sys.exit(0)

    project_root = get_project_root(cwd)
    if not project_root:
        sys.exit(0)

    task_dir = get_current_task(project_root)
    if not task_dir:
        sys.exit(0)

    task_info = get_task_info(project_root, task_dir)
    if not task_info:
        sys.exit(0)

    verify_commands = get_verify_commands(task_info)
    if not verify_commands:
        # No verify commands configured, allow exit
        sys.exit(0)

    # Load state
    state = load_state(project_root)

    # Reset state if task changed or too old
    should_reset = False
    if state.get("task") != task_dir:
        should_reset = True
    elif state.get("started_at"):
        try:
            started = datetime.fromisoformat(state["started_at"])
            if (datetime.now() - started).total_seconds() > STATE_TIMEOUT_MINUTES * 60:
                should_reset = True
        except (ValueError, TypeError):
            should_reset = True

    if should_reset:
        state = {
            "task": task_dir,
            "iteration": 0,
            "started_at": datetime.now().isoformat(),
        }

    # Increment iteration
    state["iteration"] = state.get("iteration", 0) + 1
    current_iteration = state["iteration"]
    save_state(project_root, state)

    # Safety check: max iterations
    if current_iteration >= MAX_ITERATIONS:
        state["iteration"] = 0
        save_state(project_root, state)
        output = {
            "decision": "allow",
            "reason": f"Max iterations ({MAX_ITERATIONS}) reached. Stopping to prevent infinite loop.",
        }
        print(json.dumps(output, ensure_ascii=False))
        sys.exit(0)

    # Run verify commands
    passed, message = run_verify_commands(project_root, verify_commands)

    if passed:
        state["iteration"] = 0
        save_state(project_root, state)
        output = {
            "decision": "allow",
            "reason": "All verify commands passed. Review phase complete.",
        }
        print(json.dumps(output, ensure_ascii=False))
        sys.exit(0)
    else:
        output = {
            "decision": "block",
            "reason": f"Iteration {current_iteration}/{MAX_ITERATIONS}. Verification failed:\n{message}\n\nPlease fix the issues and try again.",
        }
        print(json.dumps(output, ensure_ascii=False))
        sys.exit(0)


if __name__ == "__main__":
    main()
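The verify-command loop above can be exercised standalone. This sketch re-implements the core rule of `run_verify_commands` — run each command through the shell, stop at the first non-zero exit, truncate the captured error — assuming a POSIX shell is available (`run_verify` is an illustrative name, not the hook itself):

```python
import subprocess

def run_verify(commands: list[str], cwd: str = ".") -> tuple[bool, str]:
    # The first failing command blocks; its output is truncated to 500 chars.
    for cmd in commands:
        result = subprocess.run(cmd, shell=True, cwd=cwd,
                                capture_output=True, timeout=120)
        if result.returncode != 0:
            err = (result.stderr or result.stdout).decode("utf-8", "replace")
            return False, f"Command failed: {cmd}\n{err[:500]}"
    return True, "All verify commands passed"

ok, msg = run_verify(["true"])               # all commands exit 0
bad, reason = run_verify(["false", "true"])  # stops at "false", never runs "true"
```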
164
skills/do/install.py
Executable file
@@ -0,0 +1,164 @@
#!/usr/bin/env python3
"""Install/uninstall do skill to ~/.claude/skills/do"""
import argparse
import json
import os
import shutil
import sys
from pathlib import Path

SKILL_NAME = "do"
HOOK_PATH = "~/.claude/skills/do/hooks/stop-hook.py"

MODELS_JSON_TEMPLATE = {
    "agents": {
        "code-explorer": {
            "backend": "claude",
            "model": "claude-sonnet-4-5-20250929"
        },
        "code-architect": {
            "backend": "claude",
            "model": "claude-sonnet-4-5-20250929"
        },
        "code-reviewer": {
            "backend": "claude",
            "model": "claude-sonnet-4-5-20250929"
        }
    }
}


def get_settings_path() -> Path:
    return Path.home() / ".claude" / "settings.json"


def load_settings() -> dict:
    path = get_settings_path()
    if path.exists():
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    return {}


def save_settings(settings: dict):
    path = get_settings_path()
    path.parent.mkdir(parents=True, exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(settings, f, indent=2)


def add_hook(settings: dict) -> dict:
    hook_command = str(Path(HOOK_PATH).expanduser())
    hook_entry = {
        "type": "command",
        "command": f"python3 {hook_command}"
    }

    if "hooks" not in settings:
        settings["hooks"] = {}
    if "Stop" not in settings["hooks"]:
        settings["hooks"]["Stop"] = []

    stop_hooks = settings["hooks"]["Stop"]

    for item in stop_hooks:
        if "hooks" in item:
            for h in item["hooks"]:
                if "stop-hook" in h.get("command", "") and "do" in h.get("command", ""):
                    h["command"] = f"python3 {hook_command}"
                    return settings

    stop_hooks.append({"hooks": [hook_entry]})
    return settings


def remove_hook(settings: dict) -> dict:
    if "hooks" not in settings or "Stop" not in settings["hooks"]:
        return settings

    stop_hooks = settings["hooks"]["Stop"]
    new_stop_hooks = []

    for item in stop_hooks:
        if "hooks" in item:
            filtered = [h for h in item["hooks"]
                        if "stop-hook" not in h.get("command", "")
                        or "do" not in h.get("command", "")]
            if filtered:
                item["hooks"] = filtered
                new_stop_hooks.append(item)
        else:
            new_stop_hooks.append(item)

    settings["hooks"]["Stop"] = new_stop_hooks
    if not settings["hooks"]["Stop"]:
        del settings["hooks"]["Stop"]
    if not settings["hooks"]:
        del settings["hooks"]

    return settings


def install_models_json():
    """Install ~/.codeagent/models.json if not exists"""
    path = Path.home() / ".codeagent" / "models.json"
    if path.exists():
        print(f"⚠ {path} already exists, skipping")
        return

    path.parent.mkdir(parents=True, exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(MODELS_JSON_TEMPLATE, f, indent=2)
    print(f"✓ Created {path}")


def install():
    src = Path(__file__).parent.resolve()
    dest = Path.home() / ".claude" / "skills" / SKILL_NAME

    dest.mkdir(parents=True, exist_ok=True)

    exclude = {".git", "__pycache__", ".DS_Store", "install.py"}

    for item in src.iterdir():
        if item.name in exclude:
            continue
        target = dest / item.name
        if target.exists():
            if target.is_dir():
                shutil.rmtree(target)
            else:
                target.unlink()
        if item.is_dir():
            shutil.copytree(item, target)
        else:
            shutil.copy2(item, target)

    settings = load_settings()
    settings = add_hook(settings)
    save_settings(settings)

    install_models_json()

    print(f"✓ Installed to {dest}")
    print("✓ Hook added to settings.json")


def uninstall():
    dest = Path.home() / ".claude" / "skills" / SKILL_NAME

    settings = load_settings()
    settings = remove_hook(settings)
    save_settings(settings)
    print("✓ Hook removed from settings.json")

    if dest.exists():
        shutil.rmtree(dest)
        print(f"✓ Removed {dest}")
    else:
        print(f"⚠ {dest} not found")


def main():
    parser = argparse.ArgumentParser(description="Install/uninstall do skill")
    parser.add_argument("--uninstall", "-u", action="store_true", help="Uninstall the skill")
    args = parser.parse_args()

    if args.uninstall:
        uninstall()
    else:
        install()


if __name__ == "__main__":
    main()
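After a successful install, `add_hook` leaves a Stop entry of the following shape in `~/.claude/settings.json`. This sketch builds the same structure in memory (the hook path shown is illustrative, not your actual home directory):

```python
import json

hook_entry = {
    "type": "command",
    # Illustrative expansion of HOOK_PATH for a hypothetical user
    "command": "python3 /home/user/.claude/skills/do/hooks/stop-hook.py",
}
settings: dict = {}
# Same nesting add_hook produces: hooks -> Stop -> [{"hooks": [entry]}]
settings.setdefault("hooks", {}).setdefault("Stop", []).append({"hooks": [hook_entry]})
print(json.dumps(settings, indent=2))
```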
149
skills/do/scripts/get-context.py
Normal file
@@ -0,0 +1,149 @@
#!/usr/bin/env python3
"""
Get context for current task.

Reads the current task's jsonl files and returns context for specified agent.
Used by inject-context hook to build agent prompts.
"""

import json
import os
import sys
from pathlib import Path

DIR_TASKS = ".claude/do-tasks"
FILE_CURRENT_TASK = ".current-task"
FILE_TASK_JSON = "task.json"


def get_project_root() -> str:
    return os.environ.get("CLAUDE_PROJECT_DIR", os.getcwd())


def get_current_task(project_root: str) -> str | None:
    current_task_file = os.path.join(project_root, DIR_TASKS, FILE_CURRENT_TASK)
    if not os.path.exists(current_task_file):
        return None
    try:
        with open(current_task_file, "r", encoding="utf-8") as f:
            content = f.read().strip()
        return content if content else None
    except Exception:
        return None


def read_file_content(base_path: str, file_path: str) -> str | None:
    full_path = os.path.join(base_path, file_path)
    if os.path.exists(full_path) and os.path.isfile(full_path):
        try:
            with open(full_path, "r", encoding="utf-8") as f:
                return f.read()
        except Exception:
            return None
    return None


def read_jsonl_entries(base_path: str, jsonl_path: str) -> list[tuple[str, str]]:
    full_path = os.path.join(base_path, jsonl_path)
    if not os.path.exists(full_path):
        return []

    results = []
    try:
        with open(full_path, "r", encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                try:
                    item = json.loads(line)
                    file_path = item.get("file") or item.get("path")
                    if not file_path:
                        continue
                    content = read_file_content(base_path, file_path)
                    if content:
                        results.append((file_path, content))
                except json.JSONDecodeError:
                    continue
    except Exception:
        pass
    return results


def get_agent_context(project_root: str, task_dir: str, agent_type: str) -> str:
    """Get complete context for specified agent."""
    context_parts = []

    # Read agent-specific jsonl
    agent_jsonl = os.path.join(task_dir, f"{agent_type}.jsonl")
    agent_entries = read_jsonl_entries(project_root, agent_jsonl)

    for file_path, content in agent_entries:
        context_parts.append(f"=== {file_path} ===\n{content}")

    # Read prd.md
    prd_content = read_file_content(project_root, os.path.join(task_dir, "prd.md"))
    if prd_content:
        context_parts.append(f"=== {task_dir}/prd.md (Requirements) ===\n{prd_content}")

    return "\n\n".join(context_parts)


def get_task_info(project_root: str, task_dir: str) -> dict | None:
    """Get task.json data."""
    task_json_path = os.path.join(project_root, task_dir, FILE_TASK_JSON)
    if not os.path.exists(task_json_path):
        return None
    try:
        with open(task_json_path, "r", encoding="utf-8") as f:
            return json.load(f)
    except Exception:
        return None


def main():
    import argparse

    parser = argparse.ArgumentParser(description="Get context for current task")
    parser.add_argument("agent", nargs="?", choices=["implement", "check", "debug"],
                        help="Agent type (optional, returns task info if not specified)")
    parser.add_argument("--json", action="store_true", help="Output as JSON")
    args = parser.parse_args()

    project_root = get_project_root()
    task_dir = get_current_task(project_root)

    if not task_dir:
        if args.json:
            print(json.dumps({"error": "No active task"}))
        else:
            print("No active task.", file=sys.stderr)
        sys.exit(1)

    task_info = get_task_info(project_root, task_dir)

    if not args.agent:
        if args.json:
            print(json.dumps({"task_dir": task_dir, "task_info": task_info}))
        else:
            print(f"Task: {task_dir}")
            if task_info:
                print(f"Title: {task_info.get('title', 'N/A')}")
                print(f"Phase: {task_info.get('current_phase', '?')}/{task_info.get('max_phases', 5)}")
        sys.exit(0)

    context = get_agent_context(project_root, task_dir, args.agent)

    if args.json:
        print(json.dumps({
            "task_dir": task_dir,
            "agent": args.agent,
            "context": context,
            "task_info": task_info,
        }))
    else:
        print(context)


if __name__ == "__main__":
    main()
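The per-agent jsonl files that `read_jsonl_entries` consumes hold one JSON object per line, each naming a file via `"file"` (with `"path"` as a fallback); malformed lines and entries without either key are skipped. A sketch of just that parsing rule, on inline sample lines:

```python
import json

lines = [
    '{"file": "docs/design.md"}',
    '{"path": "src/api.py"}',
    'not json',             # malformed line: skipped
    '{"note": "no file"}',  # no file/path key: skipped
]
paths = []
for line in lines:
    try:
        item = json.loads(line)
    except json.JSONDecodeError:
        continue
    p = item.get("file") or item.get("path")
    if p:
        paths.append(p)
# paths == ["docs/design.md", "src/api.py"]
```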
58
skills/do/scripts/setup-do.py
Executable file
@@ -0,0 +1,58 @@
#!/usr/bin/env python3
"""
Initialize do skill workflow - wrapper around task.py.

Creates a task directory under .claude/do-tasks/ with:
- task.md: Task metadata (YAML frontmatter) + requirements (Markdown body)

If --worktree is specified, also creates a git worktree for isolated development.
"""

import argparse
import sys

from task import create_task, PHASE_NAMES


def die(msg: str):
    print(f"Error: {msg}", file=sys.stderr)
    sys.exit(1)


def main():
    parser = argparse.ArgumentParser(
        description="Initialize do skill workflow with task directory"
    )
    parser.add_argument("--max-phases", type=int, default=5, help="Default: 5")
    parser.add_argument(
        "--completion-promise",
        default="<promise>DO_COMPLETE</promise>",
        help="Default: <promise>DO_COMPLETE</promise>",
    )
    parser.add_argument("--worktree", action="store_true", help="Enable worktree mode")
    parser.add_argument("prompt", nargs="+", help="Task description")
    args = parser.parse_args()

    if args.max_phases < 1:
        die("--max-phases must be a positive integer")

    prompt = " ".join(args.prompt)
    result = create_task(title=prompt, use_worktree=args.worktree)

    task_data = result["task_data"]
    worktree_dir = result.get("worktree_dir", "")

    print(f"Initialized: {result['relative_path']}")
    print(f"task_id: {task_data['id']}")
    print(f"phase: 1/{task_data['max_phases']} ({PHASE_NAMES[1]})")
    print(f"completion_promise: {task_data['completion_promise']}")
    print(f"use_worktree: {task_data['use_worktree']}")
    print(f"export DO_TASK_DIR={result['relative_path']}")

    if worktree_dir:
        print(f"worktree_dir: {worktree_dir}")
        print(f"export DO_WORKTREE_DIR={worktree_dir}")


if __name__ == "__main__":
    main()
@@ -1,114 +0,0 @@
#!/usr/bin/env bash

set -euo pipefail

usage() {
  cat <<'EOF'
Usage: setup-do.sh [options] PROMPT...

Creates (or overwrites) project state file:
  .claude/do.local.md

Options:
  --max-phases N            Default: 7
  --completion-promise STR  Default: <promise>DO_COMPLETE</promise>
  -h, --help                Show this help
EOF
}

die() {
  echo "❌ $*" >&2
  exit 1
}

phase_name_for() {
  case "${1:-}" in
    1) echo "Discovery" ;;
    2) echo "Exploration" ;;
    3) echo "Clarification" ;;
    4) echo "Architecture" ;;
    5) echo "Implementation" ;;
    6) echo "Review" ;;
    7) echo "Summary" ;;
    *) echo "Phase ${1:-unknown}" ;;
  esac
}

max_phases=7
completion_promise="<promise>DO_COMPLETE</promise>"
declare -a prompt_parts=()

while [ $# -gt 0 ]; do
  case "$1" in
    -h|--help)
      usage
      exit 0
      ;;
    --max-phases)
      [ $# -ge 2 ] || die "--max-phases requires a value"
      max_phases="$2"
      shift 2
      ;;
    --completion-promise)
      [ $# -ge 2 ] || die "--completion-promise requires a value"
      completion_promise="$2"
      shift 2
      ;;
    --)
      shift
      while [ $# -gt 0 ]; do
        prompt_parts+=("$1")
        shift
      done
      break
      ;;
    -*)
      die "Unknown argument: $1 (use --help)"
      ;;
    *)
      prompt_parts+=("$1")
      shift
      ;;
  esac
done

prompt="${prompt_parts[*]:-}"
[ -n "$prompt" ] || die "PROMPT is required (use --help)"

if ! [[ "$max_phases" =~ ^[0-9]+$ ]] || [ "$max_phases" -lt 1 ]; then
  die "--max-phases must be a positive integer"
fi

project_dir="${CLAUDE_PROJECT_DIR:-$PWD}"
state_dir="${project_dir}/.claude"

task_id="$(date +%s)-$$-$(head -c 4 /dev/urandom | od -An -tx1 | tr -d ' \n')"
state_file="${state_dir}/do.${task_id}.local.md"

mkdir -p "$state_dir"

phase_name="$(phase_name_for 1)"

cat > "$state_file" << EOF
---
active: true
current_phase: 1
phase_name: "$phase_name"
max_phases: $max_phases
completion_promise: "$completion_promise"
---

# do loop state

## Prompt
$prompt

## Notes
- Update frontmatter current_phase/phase_name as you progress
- When complete, include the frontmatter completion_promise in your final output
EOF

echo "Initialized: $state_file"
echo "task_id: $task_id"
echo "phase: 1/$max_phases ($phase_name)"
echo "completion_promise: $completion_promise"
434
skills/do/scripts/task.py
Normal file
@@ -0,0 +1,434 @@
#!/usr/bin/env python3
"""
Task Directory Management CLI for do skill workflow.

Commands:
    create <title>    - Create a new task directory with task.md
    start <task-dir>  - Set current task pointer
    finish            - Clear current task pointer
    list              - List active tasks
    status            - Show current task status
    update-phase <N>  - Update current phase
"""

import argparse
import os
import random
import re
import string
import subprocess
import sys
from datetime import datetime
from pathlib import Path

# Directory constants
DIR_TASKS = ".claude/do-tasks"
FILE_CURRENT_TASK = ".current-task"
FILE_TASK_MD = "task.md"

PHASE_NAMES = {
    1: "Understand",
    2: "Clarify",
    3: "Design",
    4: "Implement",
    5: "Complete",
}


def get_project_root() -> str:
    """Get project root from env or cwd."""
    return os.environ.get("CLAUDE_PROJECT_DIR", os.getcwd())


def get_tasks_dir(project_root: str) -> str:
    """Get tasks directory path."""
    return os.path.join(project_root, DIR_TASKS)


def get_current_task_file(project_root: str) -> str:
    """Get current task pointer file path."""
    return os.path.join(project_root, DIR_TASKS, FILE_CURRENT_TASK)


def generate_task_id() -> str:
    """Generate short task ID: MMDD-XXXX format."""
    date_part = datetime.now().strftime("%m%d")
    random_part = ''.join(random.choices(string.ascii_lowercase + string.digits, k=4))
    return f"{date_part}-{random_part}"


def read_task_md(task_md_path: str) -> dict | None:
    """Read task.md and parse YAML frontmatter + body."""
    if not os.path.exists(task_md_path):
        return None

    try:
        with open(task_md_path, "r", encoding="utf-8") as f:
            content = f.read()
    except Exception:
        return None

    # Parse YAML frontmatter
    match = re.match(r'^---\n(.*?)\n---\n(.*)$', content, re.DOTALL)
    if not match:
        return None

    frontmatter_str = match.group(1)
    body = match.group(2)

    # Simple YAML parsing (no external deps)
    frontmatter = {}
    for line in frontmatter_str.split('\n'):
        if ':' in line:
            key, value = line.split(':', 1)
            key = key.strip()
            value = value.strip()
            # Handle quoted strings
            if value.startswith('"') and value.endswith('"'):
                value = value[1:-1]
            elif value == 'true':
                value = True
            elif value == 'false':
                value = False
            elif value.isdigit():
                value = int(value)
            frontmatter[key] = value

    return {"frontmatter": frontmatter, "body": body}


def write_task_md(task_md_path: str, frontmatter: dict, body: str) -> bool:
    """Write task.md with YAML frontmatter + body."""
    try:
        lines = ["---"]
        for key, value in frontmatter.items():
            if isinstance(value, bool):
                lines.append(f"{key}: {str(value).lower()}")
            elif isinstance(value, int):
                lines.append(f"{key}: {value}")
            elif isinstance(value, str) and ('<' in value or '>' in value or ':' in value):
                lines.append(f'{key}: "{value}"')
            else:
                lines.append(f'{key}: "{value}"' if isinstance(value, str) else f"{key}: {value}")
        lines.append("---")
        lines.append("")
        lines.append(body)

        with open(task_md_path, "w", encoding="utf-8") as f:
            f.write('\n'.join(lines))
        return True
    except Exception:
        return False


def create_worktree(project_root: str, task_id: str) -> str:
    """Create a git worktree for the task. Returns the worktree directory path."""
    # Get git root
    result = subprocess.run(
        ["git", "-C", project_root, "rev-parse", "--show-toplevel"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(f"Not a git repository: {project_root}")
    git_root = result.stdout.strip()

    # Calculate paths
    worktree_dir = os.path.join(git_root, ".worktrees", f"do-{task_id}")
    branch_name = f"do/{task_id}"

    # Create worktree with new branch
    result = subprocess.run(
        ["git", "-C", git_root, "worktree", "add", "-b", branch_name, worktree_dir],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(f"Failed to create worktree: {result.stderr}")

    return worktree_dir


def create_task(title: str, use_worktree: bool = False) -> dict:
    """Create a new task directory with task.md."""
    project_root = get_project_root()
    tasks_dir = get_tasks_dir(project_root)
    os.makedirs(tasks_dir, exist_ok=True)

    task_id = generate_task_id()
    task_dir = os.path.join(tasks_dir, task_id)

    os.makedirs(task_dir, exist_ok=True)

    # Create worktree if requested
    worktree_dir = ""
    if use_worktree:
        try:
            worktree_dir = create_worktree(project_root, task_id)
        except RuntimeError as e:
            print(f"Warning: {e}", file=sys.stderr)
            use_worktree = False

    frontmatter = {
        "id": task_id,
        "title": title,
        "status": "in_progress",
        "current_phase": 1,
        "phase_name": PHASE_NAMES[1],
        "max_phases": 5,
        "use_worktree": use_worktree,
        "worktree_dir": worktree_dir,
        "created_at": datetime.now().isoformat(),
        "completion_promise": "<promise>DO_COMPLETE</promise>",
    }

    body = f"""# Requirements

{title}

## Context

## Progress
"""

    task_md_path = os.path.join(task_dir, FILE_TASK_MD)
    write_task_md(task_md_path, frontmatter, body)

    current_task_file = get_current_task_file(project_root)
    relative_task_dir = os.path.relpath(task_dir, project_root)
    with open(current_task_file, "w", encoding="utf-8") as f:
        f.write(relative_task_dir)

    return {
        "task_dir": task_dir,
        "relative_path": relative_task_dir,
        "task_data": frontmatter,
        "worktree_dir": worktree_dir,
    }


def get_current_task(project_root: str) -> str | None:
    """Read current task directory path."""
    current_task_file = get_current_task_file(project_root)
    if not os.path.exists(current_task_file):
        return None

    try:
        with open(current_task_file, "r", encoding="utf-8") as f:
            content = f.read().strip()
            return content if content else None
    except Exception:
        return None


def start_task(task_dir: str) -> bool:
    """Set current task pointer."""
    project_root = get_project_root()
    tasks_dir = get_tasks_dir(project_root)

    if os.path.isabs(task_dir):
        full_path = task_dir
        relative_path = os.path.relpath(task_dir, project_root)
    else:
        if not task_dir.startswith(DIR_TASKS):
            full_path = os.path.join(tasks_dir, task_dir)
            relative_path = os.path.join(DIR_TASKS, task_dir)
        else:
            full_path = os.path.join(project_root, task_dir)
            relative_path = task_dir

    if not os.path.exists(full_path):
        print(f"Error: Task directory not found: {full_path}", file=sys.stderr)
        return False

    current_task_file = get_current_task_file(project_root)
    os.makedirs(os.path.dirname(current_task_file), exist_ok=True)

    with open(current_task_file, "w", encoding="utf-8") as f:
        f.write(relative_path)

    return True


def finish_task() -> bool:
    """Clear current task pointer."""
    project_root = get_project_root()
    current_task_file = get_current_task_file(project_root)

    if os.path.exists(current_task_file):
        os.remove(current_task_file)

    return True


def list_tasks() -> list[dict]:
    """List all task directories."""
    project_root = get_project_root()
    tasks_dir = get_tasks_dir(project_root)

    if not os.path.exists(tasks_dir):
        return []

    tasks = []
    current_task = get_current_task(project_root)

    for entry in sorted(os.listdir(tasks_dir), reverse=True):
        entry_path = os.path.join(tasks_dir, entry)
        if not os.path.isdir(entry_path):
            continue

        task_md_path = os.path.join(entry_path, FILE_TASK_MD)
        if not os.path.exists(task_md_path):
            continue

        parsed = read_task_md(task_md_path)
        if parsed:
            task_data = parsed["frontmatter"]
        else:
            task_data = {"id": entry, "title": entry, "status": "unknown"}

        relative_path = os.path.join(DIR_TASKS, entry)
        task_data["path"] = relative_path
        task_data["is_current"] = current_task == relative_path
        tasks.append(task_data)

    return tasks


def get_status() -> dict | None:
    """Get current task status."""
    project_root = get_project_root()
    current_task = get_current_task(project_root)

    if not current_task:
        return None

    task_dir = os.path.join(project_root, current_task)
    task_md_path = os.path.join(task_dir, FILE_TASK_MD)

    parsed = read_task_md(task_md_path)
    if not parsed:
        return None

    task_data = parsed["frontmatter"]
    task_data["path"] = current_task
    return task_data


def update_phase(phase: int) -> bool:
    """Update current task phase."""
    project_root = get_project_root()
    current_task = get_current_task(project_root)

    if not current_task:
        print("Error: No active task.", file=sys.stderr)
        return False

    task_dir = os.path.join(project_root, current_task)
    task_md_path = os.path.join(task_dir, FILE_TASK_MD)

    parsed = read_task_md(task_md_path)
    if not parsed:
        print("Error: task.md not found or invalid.", file=sys.stderr)
        return False

    frontmatter = parsed["frontmatter"]
    frontmatter["current_phase"] = phase
    frontmatter["phase_name"] = PHASE_NAMES.get(phase, f"Phase {phase}")

    if not write_task_md(task_md_path, frontmatter, parsed["body"]):
        print("Error: Failed to write task.md.", file=sys.stderr)
        return False

    return True


def main():
    parser = argparse.ArgumentParser(
        description="Task directory management for do skill workflow"
    )
    subparsers = parser.add_subparsers(dest="command", help="Available commands")

    # create command
    create_parser = subparsers.add_parser("create", help="Create a new task")
    create_parser.add_argument("title", nargs="+", help="Task title")
    create_parser.add_argument("--worktree", action="store_true", help="Enable worktree mode")

    # start command
    start_parser = subparsers.add_parser("start", help="Set current task")
    start_parser.add_argument("task_dir", help="Task directory path")

    # finish command
    subparsers.add_parser("finish", help="Clear current task")

    # list command
    subparsers.add_parser("list", help="List all tasks")

    # status command
    subparsers.add_parser("status", help="Show current task status")

    # update-phase command
    phase_parser = subparsers.add_parser("update-phase", help="Update current phase")
    phase_parser.add_argument("phase", type=int, help="Phase number (1-5)")

    args = parser.parse_args()

    if args.command == "create":
        title = " ".join(args.title)
        result = create_task(title, args.worktree)
        print(f"Created task: {result['relative_path']}")
        print(f"Task ID: {result['task_data']['id']}")
        print(f"Phase: 1/{result['task_data']['max_phases']} (Understand)")
        print(f"Worktree: {result['task_data']['use_worktree']}")

    elif args.command == "start":
        if start_task(args.task_dir):
            print(f"Started task: {args.task_dir}")
        else:
            sys.exit(1)

    elif args.command == "finish":
        if finish_task():
            print("Task finished, current task cleared.")
        else:
            sys.exit(1)

    elif args.command == "list":
        tasks = list_tasks()
        if not tasks:
            print("No tasks found.")
        else:
            for task in tasks:
                marker = "* " if task.get("is_current") else "  "
                phase = task.get("current_phase", "?")
                max_phase = task.get("max_phases", 5)
                status = task.get("status", "unknown")
                print(f"{marker}{task['id']} [{status}] phase {phase}/{max_phase}")
                print(f"  {task.get('title', 'No title')}")

    elif args.command == "status":
        status = get_status()
        if not status:
            print("No active task.")
        else:
            print(f"Task: {status['id']}")
            print(f"Title: {status.get('title', 'No title')}")
            print(f"Status: {status.get('status', 'unknown')}")
            print(f"Phase: {status.get('current_phase', '?')}/{status.get('max_phases', 5)}")
            print(f"Worktree: {status.get('use_worktree', False)}")
            print(f"Path: {status['path']}")

    elif args.command == "update-phase":
        if update_phase(args.phase):
            phase_name = PHASE_NAMES.get(args.phase, f"Phase {args.phase}")
            print(f"Updated to phase {args.phase} ({phase_name})")
        else:
            sys.exit(1)

    else:
        parser.print_help()
        sys.exit(1)


if __name__ == "__main__":
    main()
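The serialization rules in `write_task_md` (booleans lowercased, integers bare, strings containing `<`, `>`, or `:` quoted) are the mirror image of the coercions in `read_task_md`. A standalone sketch of that round-trip property, using simplified copies of both functions:

```python
import re

def dump_frontmatter(fm: dict, body: str) -> str:
    # Simplified mirror of write_task_md's rules: bools lowercased,
    # ints bare, everything else quoted.
    lines = ["---"]
    for key, value in fm.items():
        if isinstance(value, bool):
            lines.append(f"{key}: {str(value).lower()}")
        elif isinstance(value, int):
            lines.append(f"{key}: {value}")
        else:
            lines.append(f'{key}: "{value}"')
    lines += ["---", "", body]
    return '\n'.join(lines)

def load_frontmatter(text: str) -> dict:
    # Simplified mirror of read_task_md's parsing: unquote strings,
    # coerce true/false and digit runs back to bool/int.
    match = re.match(r'^---\n(.*?)\n---\n(.*)$', text, re.DOTALL)
    fm = {}
    for line in match.group(1).split('\n'):
        if ':' not in line:
            continue
        key, value = (s.strip() for s in line.split(':', 1))
        if value.startswith('"') and value.endswith('"'):
            value = value[1:-1]
        elif value == 'true':
            value = True
        elif value == 'false':
            value = False
        elif value.isdigit():
            value = int(value)
        fm[key] = value
    return fm

original = {
    "id": "0101-ab12",
    "current_phase": 1,
    "use_worktree": False,
    "completion_promise": "<promise>DO_COMPLETE</promise>",
}
round_tripped = load_frontmatter(dump_frontmatter(original, "# Requirements"))
print(round_tripped == original)  # → True
```

The bool check must come before the int check on the write side, since `isinstance(False, int)` is true in Python; `write_task_md` above orders its branches the same way.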
52 templates/models.json.example Normal file
@@ -0,0 +1,52 @@
{
  "default_backend": "codex",
  "default_model": "gpt-5.2",
  "backends": {
    "codex": { "api_key": "" },
    "claude": { "api_key": "" },
    "gemini": { "api_key": "" },
    "opencode": { "api_key": "" }
  },
  "agents": {
    "develop": {
      "backend": "codex",
      "model": "gpt-5.2",
      "reasoning": "xhigh",
      "yolo": true
    },
    "code-explorer": {
      "backend": "opencode",
      "model": ""
    },
    "code-architect": {
      "backend": "claude",
      "model": ""
    },
    "code-reviewer": {
      "backend": "claude",
      "model": ""
    },
    "oracle": {
      "backend": "claude",
      "model": "claude-opus-4-5-20251101",
      "yolo": true
    },
    "librarian": {
      "backend": "claude",
      "model": "claude-sonnet-4-5-20250929",
      "yolo": true
    },
    "explore": {
      "backend": "opencode",
      "model": "opencode/grok-code"
    },
    "frontend-ui-ux-engineer": {
      "backend": "gemini",
      "model": "gemini-3-pro-preview"
    },
    "document-writer": {
      "backend": "gemini",
      "model": "gemini-3-flash-preview"
    }
  }
}
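Several agents in the example leave `"model": ""`, which suggests per-agent settings fall back to the top-level defaults. A sketch of one plausible resolution rule — the empty-string fallback here is an assumption, not taken from the codeagent-wrapper source:

```python
import json

# Hypothetical resolver: per-agent values win; empty or missing values
# fall back to default_backend/default_model. The fallback rule is an
# assumption for illustration, not the wrapper's documented behavior.
config = json.loads("""
{
  "default_backend": "codex",
  "default_model": "gpt-5.2",
  "agents": {
    "develop": {"backend": "codex", "model": "gpt-5.2"},
    "code-explorer": {"backend": "opencode", "model": ""}
  }
}
""")

def resolve(agent: str) -> tuple[str, str]:
    entry = config["agents"].get(agent, {})
    backend = entry.get("backend") or config["default_backend"]
    model = entry.get("model") or config["default_model"]
    return backend, model

print(resolve("code-explorer"))  # → ('opencode', 'gpt-5.2')
```

Under this rule, an agent can pin its backend while still tracking the default model, which matches the shape of the `code-explorer` entry above.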