docs: consolidate documentation with 4-level workflow guide

- Add WORKFLOW_GUIDE.md (EN) and WORKFLOW_GUIDE_CN.md (CN)
- Simplify README.md to highlight 4-level workflow system
- Remove redundant docs: MCP_*.md, WORKFLOW_DECISION_GUIDE*.md, WORKFLOW_DIAGRAMS.md
- Move COMMAND_SPEC.md to docs/
- Move codex_mcp.md, CODEX_LENS_AUTO_HYBRID.md to codex-lens/docs/
- Delete temporary debug documents and outdated files

Root directory: 28 → 14 MD files
This commit is contained in:
catlog22
2026-01-17 10:38:06 +08:00
parent 464f3343f3
commit bc4176fda0
20 changed files with 1459 additions and 5515 deletions


@@ -0,0 +1,326 @@
# CodexLens Auto Hybrid Mode - Implementation Summary
## Overview

This change implements two main features:

1. **Automatic vector embedding generation**: the `init` command generates vector embeddings automatically when semantic-search dependencies are detected.
2. **Default hybrid search mode**: the `search` command automatically uses hybrid mode when it detects that embeddings exist.

## Modified Files

### 1. codex-lens CLI (`codex-lens/src/codexlens/cli/commands.py`)

#### 1.1 `init` command enhancements

**New parameters**:
- `--no-embeddings`: skip automatic embedding generation
- `--embedding-model`: specify the embedding model (default: "code")

**Auto embedding generation logic**:
```python
# After a successful init
if not no_embeddings:
    from codexlens.semantic import SEMANTIC_AVAILABLE
    if SEMANTIC_AVAILABLE:
        # Automatically call generate_embeddings()
        # with the specified embedding_model
        ...
```
**Behavior**:
- Detects whether `fastembed` and `numpy` are installed
- If available, embeddings are generated automatically (skip with `--no-embeddings`)
- Uses the "code" model by default (jinaai/jina-embeddings-v2-base-code)
- Shows embedding-generation progress and statistics in the output

#### 1.2 `search` command enhancements

**Mode changes**:
- Default mode changed from `"exact"` to `"auto"`
- Added `"auto"` to the list of valid modes

**Auto mode detection logic**:
```python
if mode == "auto":
    # Check whether the project has embeddings
    project_record = registry.find_by_source_path(str(search_path))
    if project_record:
        embed_status = check_embeddings_status(index_path)
        if embed_status.has_embeddings:
            actual_mode = "hybrid"  # use hybrid mode
        else:
            actual_mode = "exact"   # fall back to exact mode
```
**Behavior**:
- Uses `auto` mode by default
- Automatically detects whether the index has embeddings
- Embeddings present → uses `hybrid` mode (exact + fuzzy + vector fusion)
- No embeddings → uses `exact` mode (full-text search only)
- Users can still specify a mode manually to override auto-detection

### 2. MCP tool simplification (`ccw/src/tools/codex-lens.ts`)

#### 2.1 Simplified action enum

**Only core operations are exposed**:
- `init`: initialize the index (auto-generates embeddings)
- `search`: search code (auto hybrid mode)
- `search_files`: search file paths

**Removed advanced operations** (still available via the CLI):
- ~~`symbol`~~: symbol extraction → use `codexlens symbol`
- ~~`status`~~: status check → use `codexlens status`
- ~~`config_show/set/migrate`~~: configuration management → use `codexlens config`
- ~~`clean`~~: clean the index → use `codexlens clean`
- ~~`bootstrap/check`~~: installation management → handled automatically

**Simplified ParamsSchema**:
```typescript
const ParamsSchema = z.object({
  action: z.enum(['init', 'search', 'search_files']),
  path: z.string().optional(),
  query: z.string().optional(),
  mode: z.enum(['auto', 'text', 'semantic', 'exact', 'fuzzy', 'hybrid', 'vector', 'pure-vector']).default('auto'),
  languages: z.array(z.string()).optional(),
  limit: z.number().default(20),
});
```
#### 2.2 Extended mode enum with a default value

**Supported modes**:
```typescript
mode: z.enum(['auto', 'text', 'semantic', 'exact', 'fuzzy', 'hybrid', 'vector', 'pure-vector']).default('auto')
```

**Mode mapping** (MCP → CLI):
```typescript
const modeMap: Record<string, string> = {
  'text': 'exact',
  'semantic': 'pure-vector',
  'auto': 'auto', // default: auto-detect
  'exact': 'exact',
  'fuzzy': 'fuzzy',
  'hybrid': 'hybrid',
  'vector': 'vector',
  'pure-vector': 'pure-vector',
};
```
#### 2.3 Pass the mode parameter to the CLI
```typescript
const args = ['search', query, '--limit', limit.toString(), '--mode', cliMode, '--json'];
```
### 3. Documentation updates (`.claude/rules/context-requirements.md`)

#### 3.1 Updated `init` notes

Emphasize automatic embedding generation:
```markdown
**NEW**: `init` automatically generates vector embeddings if semantic dependencies are installed (fastembed).
- Auto-detects if `numpy` and `fastembed` are available
- Uses "code" model by default (jinaai/jina-embeddings-v2-base-code)
- Skip with `--no-embeddings` flag if needed
```
#### 3.2 Updated `search` notes

Emphasize the automatic hybrid mode:
```markdown
**Search Code** (Auto Hybrid Mode - DEFAULT):
# Simple call - auto-detects mode (hybrid if embeddings exist, exact otherwise):
codex_lens(action="search", query="authentication", path=".", limit=20)
```
#### 3.3 Detailed mode notes

Added a complete mode list and default-behavior notes:
- `auto`: **DEFAULT** - Uses hybrid if embeddings exist, exact otherwise
- `hybrid`: Exact + Fuzzy + Vector fusion (best results, auto-selected if embeddings exist)
- Other modes...

## Usage Examples

### Scenario 1: First run (fastembed installed)
```bash
# Initialize the index (embeddings generated automatically)
codexlens init .
# Output:
# OK Indexed 150 files in 12 directories
#
# Generating embeddings...
# Model: code
# ✓ Generated 1234 embeddings in 45.2s

# Search (hybrid mode used automatically)
codexlens search "authentication"
# Mode: hybrid | Searched 12 directories in 15.2ms
```
### Scenario 2: First run (fastembed not installed)
```bash
# Initialize the index (embeddings skipped)
codexlens init .
# Output:
# OK Indexed 150 files in 12 directories
# (no embedding-generation message)

# Search (falls back to exact mode)
codexlens search "authentication"
# Mode: exact | Searched 12 directories in 8.5ms
```
### Scenario 3: Manual control
```bash
# Skip embedding generation
codexlens init . --no-embeddings

# Force a specific mode
codexlens search "auth" --mode exact
codexlens search "how to authenticate" --mode hybrid
```
### Scenario 4: MCP tool usage (simplified)
```python
# Initialize (embeddings generated automatically)
codex_lens(action="init", path=".")

# Search (default auto mode: hybrid if embeddings exist, exact otherwise)
codex_lens(action="search", query="authentication")

# Force hybrid mode
codex_lens(action="search", query="authentication", mode="hybrid")

# Force exact mode
codex_lens(action="search", query="authenticate_user", mode="exact")

# Return file paths only
codex_lens(action="search_files", query="payment processing")
```
**Advanced operations use the CLI**:
```bash
# Check status
codexlens status

# Extract symbols
codexlens symbol src/auth/login.js

# Manage configuration
codexlens config show
codexlens config set index_dir /custom/path

# Clean the index
codexlens clean .
```
## Technical Details

### Embedding detection logic

1. Look up the project's record in the registry
2. Get the index path `index_root/_index.db`
3. Call `check_embeddings_status()` to check:
   - whether the `chunks` table exists
   - `chunks_count > 0`
4. Choose the mode based on the result
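A minimal sketch of such a check, assuming the index is a SQLite database with a `chunks` table as described above (the actual `check_embeddings_status()` in codexlens may differ):

```python
import sqlite3

def check_embeddings_status(index_db_path):
    """Sketch: report whether an index DB has populated embedding chunks."""
    conn = sqlite3.connect(index_db_path)
    try:
        # Does a `chunks` table exist at all?
        table = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' AND name='chunks'"
        ).fetchone()
        if table is None:
            return {"has_embeddings": False, "chunks_count": 0}
        # The table exists: count its rows.
        (count,) = conn.execute("SELECT COUNT(*) FROM chunks").fetchone()
        return {"has_embeddings": count > 0, "chunks_count": count}
    finally:
        conn.close()

print(check_embeddings_status(":memory:"))  # an empty DB has no chunks table
# -> {'has_embeddings': False, 'chunks_count': 0}
```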
### Hybrid search weights

Default RRF weights:
- Exact FTS: 0.4
- Fuzzy FTS: 0.3
- Vector: 0.3

Customizable via the `--weights` parameter:
```bash
codexlens search "query" --mode hybrid --weights 0.5,0.3,0.2
```
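The fusion step can be sketched as weighted Reciprocal Rank Fusion (a sketch under assumptions: the damping constant `k = 60` and the exact formula are not confirmed by this document):

```python
def weighted_rrf(ranked_lists, weights, k=60):
    """Fuse several ranked result lists with weighted RRF.

    ranked_lists: dict mapping source name -> ordered list of result ids.
    weights: dict mapping source name -> weight (e.g. exact 0.4, fuzzy 0.3, vector 0.3).
    """
    scores = {}
    for source, results in ranked_lists.items():
        w = weights[source]
        for rank, doc_id in enumerate(results, start=1):
            # Each source contributes weight / (k + rank) for every hit.
            scores[doc_id] = scores.get(doc_id, 0.0) + w / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

ranked = {
    "exact":  ["auth.py", "login.py"],
    "fuzzy":  ["login.py", "auth.py"],
    "vector": ["session.py", "auth.py"],
}
print(weighted_rrf(ranked, {"exact": 0.4, "fuzzy": 0.3, "vector": 0.3}))
# -> ['auth.py', 'login.py', 'session.py']
```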
### Model options

| Model | Model name | Dimensions | Size | Recommended for |
|------|---------|------|------|---------|
| fast | BAAI/bge-small-en-v1.5 | 384 | ~80MB | quick prototyping |
| code | jinaai/jina-embeddings-v2-base-code | 768 | ~150MB | **recommended** for code search |
| multilingual | intfloat/multilingual-e5-large | 1024 | ~1GB | multilingual projects |
| balanced | mixedbread-ai/mxbai-embed-large-v1 | 1024 | ~600MB | balanced performance |
## Compatibility

### Backward compatibility

- All existing commands keep working
- Manually specifying `--mode` overrides auto-detection
- Use `--no-embeddings` to restore the old behavior

### Dependency requirements

**Core features** (no extra dependencies):
- FTS indexing: exact, fuzzy
- Symbol extraction

**Semantic search features** (installation required):
```bash
pip install codexlens[semantic]
# or
pip install numpy fastembed
```
## Performance Impact

### Initialization time

- FTS indexing: ~2-5 s (100 files)
- Embedding generation: +30-60 s (the first run downloads the model)
- Subsequent embedding runs: +10-20 s

### Search performance

| Mode | Latency | Recall | Recommended for |
|------|------|--------|---------|
| exact | 5ms | medium | exact code identifiers |
| fuzzy | 7ms | medium | typo-tolerant search |
| hybrid | 15ms | **highest** | **general search (recommended)** |
| vector | 12ms | high | semantic queries |
| pure-vector | 10ms | medium | natural language |

## Principle of Minimal Change

All changes follow a minimal-change principle:

1. **Backward compatible**: no existing functionality is broken
2. **Smart defaults**: the best mode is auto-detected
3. **User control**: parameters can override automatic behavior
4. **Progressive enhancement**: graceful degradation when fastembed is not installed

## Summary

✅ **`init` auto-generates embeddings** (skip with `--no-embeddings`)
✅ **`search` defaults to hybrid mode** (auto-enabled when embeddings exist)
✅ **MCP tool simplified to core operations**: init, search, search_files
✅ **All search modes supported**: auto, exact, fuzzy, hybrid, vector, pure-vector
✅ **Documentation updated** to reflect the new default behavior
✅ **Backward compatibility preserved**
✅ **Graceful degradation** (exact mode when fastembed is absent)
### MCP vs CLI feature comparison

| Feature | MCP tool | CLI |
|------|---------|-----|
| Initialize index | ✅ `codex_lens(action="init")` | ✅ `codexlens init` |
| Search code | ✅ `codex_lens(action="search")` | ✅ `codexlens search` |
| Search files | ✅ `codex_lens(action="search_files")` | ✅ `codexlens search --files-only` |
| Check status | ❌ use the CLI | ✅ `codexlens status` |
| Extract symbols | ❌ use the CLI | ✅ `codexlens symbol` |
| Manage configuration | ❌ use the CLI | ✅ `codexlens config` |
| Clean index | ❌ use the CLI | ✅ `codexlens clean` |

**Design philosophy**: the MCP tool focuses on high-frequency core operations (indexing, search); advanced management operations go through the CLI.


@@ -0,0 +1,459 @@
# MCP integration

## mcp_servers

You can configure Codex to use MCP servers to give Codex access to external applications, resources, or services.

## Server configuration

### STDIO

STDIO servers are MCP servers that you launch directly via commands on your computer.
```toml
# The top-level table name must be `mcp_servers`.
# The sub-table name (`server_name` in this example) can be anything you like.
[mcp_servers.server_name]
command = "npx"

# Optional
args = ["-y", "mcp-server"]

# Optional: propagate additional env vars to the MCP server.
# A default whitelist of env vars is propagated to the MCP server:
# https://github.com/openai/codex/blob/main/codex-rs/rmcp-client/src/utils.rs#L82
env = { "API_KEY" = "value" }
# or, equivalently:
# [mcp_servers.server_name.env]
# API_KEY = "value"

# Optional: additional environment variables to whitelist in the MCP server's environment.
env_vars = ["API_KEY2"]

# Optional: cwd that the command will be run from
cwd = "/Users/<user>/code/my-server"
```
### Streamable HTTP

Streamable HTTP servers enable Codex to talk to resources that are accessed via an HTTP URL (either on localhost or another domain).

```toml
[mcp_servers.figma]
url = "https://mcp.figma.com/mcp"

# Optional: environment variable containing a bearer token to use for auth
bearer_token_env_var = "ENV_VAR"

# Optional: map of headers with hard-coded values.
http_headers = { "HEADER_NAME" = "HEADER_VALUE" }

# Optional: map of headers whose values are read from the named environment variables.
env_http_headers = { "HEADER_NAME" = "ENV_VAR" }
```
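The substitution behavior of `env_http_headers` can be sketched in Python (an illustrative sketch; `resolve_headers` is a hypothetical helper, not part of Codex):

```python
import os

def resolve_headers(http_headers=None, env_http_headers=None):
    """Build the final header map: hard-coded headers are taken as-is,
    while env_http_headers values name environment variables whose
    contents become the header value."""
    headers = dict(http_headers or {})
    for name, env_var in (env_http_headers or {}).items():
        value = os.environ.get(env_var)
        if value is not None:
            headers[name] = value
    return headers

# Hypothetical demo values:
os.environ["EXAMPLE_TOKEN"] = "s3cr3t"
print(resolve_headers({"X-Static": "value"},
                      {"Authorization": "EXAMPLE_TOKEN"}))
# -> {'X-Static': 'value', 'Authorization': 's3cr3t'}
```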
Streamable HTTP connections always use the experimental Rust MCP client under the hood, so expect occasional rough edges. OAuth login flows are gated behind the `rmcp_client = true` feature flag:

```toml
[features]
rmcp_client = true
```

After enabling it, run `codex mcp login <server-name>` when the server supports OAuth.
## Other configuration options

```toml
# Optional: override the default 10s startup timeout
startup_timeout_sec = 20

# Optional: override the default 60s per-tool timeout
tool_timeout_sec = 30

# Optional: disable a server without removing it
enabled = false

# Optional: only expose a subset of tools from this server
enabled_tools = ["search", "summarize"]

# Optional: hide specific tools (applied after `enabled_tools`, if set)
disabled_tools = ["search"]
```

When both `enabled_tools` and `disabled_tools` are specified, Codex first restricts the server to the allow-list and then removes any tools that appear in the deny-list.
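The allow-list-then-deny-list ordering can be sketched in Python (illustrative only; `filter_tools` is a hypothetical helper, not Codex code):

```python
def filter_tools(tools, enabled_tools=None, disabled_tools=None):
    """Apply the allow-list first, then remove deny-listed tools."""
    if enabled_tools is not None:
        tools = [t for t in tools if t in enabled_tools]
    if disabled_tools is not None:
        tools = [t for t in tools if t not in disabled_tools]
    return tools

# With enabled_tools = ["search", "summarize"] and disabled_tools = ["search"],
# only "summarize" survives:
print(filter_tools(["search", "summarize", "index"],
                   enabled_tools=["search", "summarize"],
                   disabled_tools=["search"]))
# -> ['summarize']
```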
## MCP CLI commands

```bash
# List all available commands
codex mcp --help

# Add a server (env can be repeated; `--` separates the launcher command)
codex mcp add docs -- docs-server --port 4000

# List configured servers (pretty table or JSON)
codex mcp list
codex mcp list --json

# Show one server (table or JSON)
codex mcp get docs
codex mcp get docs --json

# Remove a server
codex mcp remove docs

# Log in to a streamable HTTP server that supports OAuth
codex mcp login SERVER_NAME

# Log out of a streamable HTTP server that supports OAuth
codex mcp logout SERVER_NAME
```
## Examples of useful MCPs

There is an ever-growing list of useful MCP servers that can be helpful while you are working with Codex. Some of the most common MCPs we've seen are:

- Context7 — connect to a wide range of up-to-date developer documentation
- Figma (Local and Remote) — access to your Figma designs
- Playwright — control and inspect a browser using Playwright
- Chrome Developer Tools — control and inspect a Chrome browser
- Sentry — access to your Sentry logs
- GitHub — control over your GitHub account beyond what git allows (controlling PRs, issues, etc.)
# Example config.toml
Use this example configuration as a starting point. For an explanation of each field and additional context, see [Configuration](./config.md). Copy the snippet below to `~/.codex/config.toml` and adjust values as needed.
```toml
# Codex example configuration (config.toml)
#
# This file lists all keys Codex reads from config.toml, their default values,
# and concise explanations. Values here mirror the effective defaults compiled
# into the CLI. Adjust as needed.
#
# Notes
# - Root keys must appear before tables in TOML.
# - Optional keys that default to "unset" are shown commented out with notes.
# - MCP servers, profiles, and model providers are examples; remove or edit.
################################################################################
# Core Model Selection
################################################################################
# Primary model used by Codex. Default: "gpt-5.1-codex-max" on all platforms.
model = "gpt-5.1-codex-max"
# Model used by the /review feature (code reviews). Default: "gpt-5.1-codex-max".
review_model = "gpt-5.1-codex-max"
# Provider id selected from [model_providers]. Default: "openai".
model_provider = "openai"
# Optional manual model metadata. When unset, Codex auto-detects from model.
# Uncomment to force values.
# model_context_window = 128000 # tokens; default: auto for model
# model_auto_compact_token_limit = 0 # disable/override auto; default: model family specific
# tool_output_token_limit = 10000 # tokens stored per tool output; default: 10000 for gpt-5.1-codex-max
################################################################################
# Reasoning & Verbosity (Responses API capable models)
################################################################################
# Reasoning effort: minimal | low | medium | high | xhigh (default: medium; xhigh on gpt-5.1-codex-max and gpt-5.2)
model_reasoning_effort = "medium"
# Reasoning summary: auto | concise | detailed | none (default: auto)
model_reasoning_summary = "auto"
# Text verbosity for GPT-5 family (Responses API): low | medium | high (default: medium)
model_verbosity = "medium"
# Force-enable reasoning summaries for current model (default: false)
model_supports_reasoning_summaries = false
# Force reasoning summary format: none | experimental (default: none)
model_reasoning_summary_format = "none"
################################################################################
# Instruction Overrides
################################################################################
# Additional user instructions appended after AGENTS.md. Default: unset.
# developer_instructions = ""
# Optional legacy base instructions override (prefer AGENTS.md). Default: unset.
# instructions = ""
# Inline override for the history compaction prompt. Default: unset.
# compact_prompt = ""
# Override built-in base instructions with a file path. Default: unset.
# experimental_instructions_file = "/absolute/or/relative/path/to/instructions.txt"
# Load the compact prompt override from a file. Default: unset.
# experimental_compact_prompt_file = "/absolute/or/relative/path/to/compact_prompt.txt"
################################################################################
# Approval & Sandbox
################################################################################
# When to ask for command approval:
# - untrusted: only known-safe read-only commands auto-run; others prompt
# - on-failure: auto-run in sandbox; prompt only on failure for escalation
# - on-request: model decides when to ask (default)
# - never: never prompt (risky)
approval_policy = "on-request"
# Filesystem/network sandbox policy for tool calls:
# - read-only (default)
# - workspace-write
# - danger-full-access (no sandbox; extremely risky)
sandbox_mode = "read-only"
# Extra settings used only when sandbox_mode = "workspace-write".
[sandbox_workspace_write]
# Additional writable roots beyond the workspace (cwd). Default: []
writable_roots = []
# Allow outbound network access inside the sandbox. Default: false
network_access = false
# Exclude $TMPDIR from writable roots. Default: false
exclude_tmpdir_env_var = false
# Exclude /tmp from writable roots. Default: false
exclude_slash_tmp = false
################################################################################
# Shell Environment Policy for spawned processes
################################################################################
[shell_environment_policy]
# inherit: all (default) | core | none
inherit = "all"
# Skip default excludes for names containing KEY/TOKEN (case-insensitive). Default: false
ignore_default_excludes = false
# Case-insensitive glob patterns to remove (e.g., "AWS_*", "AZURE_*"). Default: []
exclude = []
# Explicit key/value overrides (always win). Default: {}
set = {}
# Whitelist; if non-empty, keep only matching vars. Default: []
include_only = []
# Experimental: run via user shell profile. Default: false
experimental_use_profile = false
################################################################################
# History & File Opener
################################################################################
[history]
# save-all (default) | none
persistence = "save-all"
# Maximum bytes for history file; oldest entries are trimmed when exceeded. Example: 5242880
# max_bytes = 0
# URI scheme for clickable citations: vscode (default) | vscode-insiders | windsurf | cursor | none
file_opener = "vscode"
################################################################################
# UI, Notifications, and Misc
################################################################################
[tui]
# Desktop notifications from the TUI: boolean or filtered list. Default: true
# Examples: false | ["agent-turn-complete", "approval-requested"]
notifications = false
# Enables welcome/status/spinner animations. Default: true
animations = true
# Suppress internal reasoning events from output. Default: false
hide_agent_reasoning = false
# Show raw reasoning content when available. Default: false
show_raw_agent_reasoning = false
# Disable burst-paste detection in the TUI. Default: false
disable_paste_burst = false
# Track Windows onboarding acknowledgement (Windows only). Default: false
windows_wsl_setup_acknowledged = false
# External notifier program (argv array). When unset: disabled.
# Example: notify = ["notify-send", "Codex"]
# notify = [ ]
# In-product notices (mostly set automatically by Codex).
[notice]
# hide_full_access_warning = true
# hide_rate_limit_model_nudge = true
################################################################################
# Authentication & Login
################################################################################
# Where to persist CLI login credentials: file (default) | keyring | auto
cli_auth_credentials_store = "file"
# Base URL for ChatGPT auth flow (not the OpenAI API). Default shown below.
chatgpt_base_url = "https://chatgpt.com/backend-api/"
# Restrict ChatGPT login to a specific workspace id. Default: unset.
# forced_chatgpt_workspace_id = ""
# Force login mechanism when Codex would normally auto-select. Default: unset.
# Allowed values: chatgpt | api
# forced_login_method = "chatgpt"
# Preferred store for MCP OAuth credentials: auto (default) | file | keyring
mcp_oauth_credentials_store = "auto"
################################################################################
# Project Documentation Controls
################################################################################
# Max bytes from AGENTS.md to embed into first-turn instructions. Default: 32768
project_doc_max_bytes = 32768
# Ordered fallbacks when AGENTS.md is missing at a directory level. Default: []
project_doc_fallback_filenames = []
################################################################################
# Tools (legacy toggles kept for compatibility)
################################################################################
[tools]
# Enable web search tool (alias: web_search_request). Default: false
web_search = false
# Enable the view_image tool so the agent can attach local images. Default: true
view_image = true
# (Alias accepted) You can also write:
# web_search_request = false
################################################################################
# Centralized Feature Flags (preferred)
################################################################################
[features]
# Leave this table empty to accept defaults. Set explicit booleans to opt in/out.
unified_exec = false
rmcp_client = false
apply_patch_freeform = false
view_image_tool = true
web_search_request = false
ghost_commit = false
enable_experimental_windows_sandbox = false
skills = false
################################################################################
# Experimental toggles (legacy; prefer [features])
################################################################################
# Include apply_patch via freeform editing path (affects default tool set). Default: false
experimental_use_freeform_apply_patch = false
# Define MCP servers under this table. Leave empty to disable.
[mcp_servers]
# --- Example: STDIO transport ---
# [mcp_servers.docs]
# command = "docs-server" # required
# args = ["--port", "4000"] # optional
# env = { "API_KEY" = "value" } # optional key/value pairs copied as-is
# env_vars = ["ANOTHER_SECRET"] # optional: forward these from the parent env
# cwd = "/path/to/server" # optional working directory override
# startup_timeout_sec = 10.0 # optional; default 10.0 seconds
# # startup_timeout_ms = 10000 # optional alias for startup timeout (milliseconds)
# tool_timeout_sec = 60.0 # optional; default 60.0 seconds
# enabled_tools = ["search", "summarize"] # optional allow-list
# disabled_tools = ["slow-tool"] # optional deny-list (applied after allow-list)
# --- Example: Streamable HTTP transport ---
# [mcp_servers.github]
# url = "https://github-mcp.example.com/mcp" # required
# bearer_token_env_var = "GITHUB_TOKEN" # optional; Authorization: Bearer <token>
# http_headers = { "X-Example" = "value" } # optional static headers
# env_http_headers = { "X-Auth" = "AUTH_ENV" } # optional headers populated from env vars
# startup_timeout_sec = 10.0 # optional
# tool_timeout_sec = 60.0 # optional
# enabled_tools = ["list_issues"] # optional allow-list
################################################################################
# Model Providers (extend/override built-ins)
################################################################################
# Built-ins include:
# - openai (Responses API; requires login or OPENAI_API_KEY via auth flow)
# - oss (Chat Completions API; defaults to http://localhost:11434/v1)
[model_providers]
# --- Example: override OpenAI with explicit base URL or headers ---
# [model_providers.openai]
# name = "OpenAI"
# base_url = "https://api.openai.com/v1" # default if unset
# wire_api = "responses" # "responses" | "chat" (default varies)
# # requires_openai_auth = true # built-in OpenAI defaults to true
# # request_max_retries = 4 # default 4; max 100
# # stream_max_retries = 5 # default 5; max 100
# # stream_idle_timeout_ms = 300000 # default 300_000 (5m)
# # experimental_bearer_token = "sk-example" # optional dev-only direct bearer token
# # http_headers = { "X-Example" = "value" }
# # env_http_headers = { "OpenAI-Organization" = "OPENAI_ORGANIZATION", "OpenAI-Project" = "OPENAI_PROJECT" }
# --- Example: Azure (Chat/Responses depending on endpoint) ---
# [model_providers.azure]
# name = "Azure"
# base_url = "https://YOUR_PROJECT_NAME.openai.azure.com/openai"
# wire_api = "responses" # or "chat" per endpoint
# query_params = { api-version = "2025-04-01-preview" }
# env_key = "AZURE_OPENAI_API_KEY"
# # env_key_instructions = "Set AZURE_OPENAI_API_KEY in your environment"
# --- Example: Local OSS (e.g., Ollama-compatible) ---
# [model_providers.ollama]
# name = "Ollama"
# base_url = "http://localhost:11434/v1"
# wire_api = "chat"
################################################################################
# Profiles (named presets)
################################################################################
# Active profile name. When unset, no profile is applied.
# profile = "default"
[profiles]
# [profiles.default]
# model = "gpt-5.1-codex-max"
# model_provider = "openai"
# approval_policy = "on-request"
# sandbox_mode = "read-only"
# model_reasoning_effort = "medium"
# model_reasoning_summary = "auto"
# model_verbosity = "medium"
# chatgpt_base_url = "https://chatgpt.com/backend-api/"
# experimental_compact_prompt_file = "compact_prompt.txt"
# include_apply_patch_tool = false
# experimental_use_freeform_apply_patch = false
# tools_web_search = false
# tools_view_image = true
# features = { unified_exec = false }
################################################################################
# Projects (trust levels)
################################################################################
# Mark specific worktrees as trusted. Only "trusted" is recognized.
[projects]
# [projects."/absolute/path/to/project"]
# trust_level = "trusted"
################################################################################
# OpenTelemetry (OTEL) disabled by default
################################################################################
[otel]
# Include user prompt text in logs. Default: false
log_user_prompt = false
# Environment label applied to telemetry. Default: "dev"
environment = "dev"
# Exporter: none (default) | otlp-http | otlp-grpc
exporter = "none"
# Example OTLP/HTTP exporter configuration
# [otel.exporter."otlp-http"]
# endpoint = "https://otel.example.com/v1/logs"
# protocol = "binary" # "binary" | "json"
# [otel.exporter."otlp-http".headers]
# "x-otlp-api-key" = "${OTLP_TOKEN}"
# Example OTLP/gRPC exporter configuration
# [otel.exporter."otlp-grpc"]
# endpoint = "https://otel.example.com:4317"
# headers = { "x-otlp-meta" = "abc123" }
# Example OTLP exporter with mutual TLS
# [otel.exporter."otlp-http"]
# endpoint = "https://otel.example.com/v1/logs"
# protocol = "binary"
# [otel.exporter."otlp-http".headers]
# "x-otlp-api-key" = "${OTLP_TOKEN}"
# [otel.exporter."otlp-http".tls]
# ca-certificate = "certs/otel-ca.pem"
# client-certificate = "/etc/codex/certs/client.pem"
# client-private-key = "/etc/codex/certs/client-key.pem"
```