
# API Settings

## One-Liner

API Settings manages AI model endpoint configuration, centralizing API keys, base URLs, and model selection for all supported AI backends.


## Configuration Locations

| Location | Scope | Priority |
|----------|-------|----------|
| `~/.claude/cli-tools.json` | Global | Base |
| `.claude/settings.json` | Project | Override |
| `.claude/settings.local.json` | Local | Highest |
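The layered lookup above can be sketched as a merge applied in ascending priority order, so later files override earlier ones. This is a minimal illustration of the precedence rules, not the tool's actual implementation:

```python
import json
from pathlib import Path

# Config files in ascending priority order (later entries override earlier).
CONFIG_PATHS = [
    Path.home() / ".claude" / "cli-tools.json",   # global base
    Path(".claude/settings.json"),                # project override
    Path(".claude/settings.local.json"),          # local, highest priority
]

def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base, returning a new dict."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

def load_settings() -> dict:
    """Load and merge all config layers that exist on disk."""
    settings: dict = {}
    for path in CONFIG_PATHS:
        if path.is_file():
            settings = deep_merge(settings, json.loads(path.read_text()))
    return settings
```

Because the merge is recursive, a local file can override a single model field without repeating the whole `tools` section.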

## Supported Backends

| Backend | Type | Models |
|---------|------|--------|
| Gemini | Builtin | `gemini-2.5-flash`, `gemini-2.5-pro` |
| Qwen | Builtin | `coder-model` |
| Codex | Builtin | `gpt-5.2` |
| Claude | Builtin | `sonnet`, `haiku` |
| OpenCode | Builtin | `opencode/glm-4.7-free` |

## Configuration Example

```jsonc
// ~/.claude/cli-tools.json
{
  "version": "3.3.0",
  "tools": {
    "gemini": {
      "enabled": true,
      "primaryModel": "gemini-2.5-flash",
      "secondaryModel": "gemini-2.5-flash",
      "tags": ["analysis", "debug"],
      "type": "builtin"
    },
    "codex": {
      "enabled": true,
      "primaryModel": "gpt-5.2",
      "secondaryModel": "gpt-5.2",
      "tags": [],
      "type": "builtin"
    }
  }
}
```

## Environment Variables

```bash
# API Keys
LITELLM_API_KEY=your-api-key
LITELLM_API_BASE=https://api.example.com/v1
LITELLM_MODEL=gpt-4o-mini

# Reranker (optional)
RERANKER_API_KEY=your-reranker-key
RERANKER_API_BASE=https://api.siliconflow.cn
RERANKER_PROVIDER=siliconflow
RERANKER_MODEL=BAAI/bge-reranker-v2-m3
```
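A configuration loader might read these variables with a required key and illustrative defaults. The variable names come from the list above; the `LiteLLMConfig` type and the fail-fast behavior are assumptions for the sketch:

```python
import os
from dataclasses import dataclass

@dataclass
class LiteLLMConfig:
    api_key: str
    api_base: str
    model: str

def load_litellm_config() -> LiteLLMConfig:
    """Read LiteLLM settings from the environment, failing fast on a missing key."""
    api_key = os.environ.get("LITELLM_API_KEY")
    if not api_key:
        raise RuntimeError("LITELLM_API_KEY is not set")
    return LiteLLMConfig(
        api_key=api_key,
        api_base=os.environ.get("LITELLM_API_BASE", "https://api.example.com/v1"),
        model=os.environ.get("LITELLM_MODEL", "gpt-4o-mini"),
    )
```

Only the API key is mandatory here; the base URL and model fall back to defaults so a minimal `.env` still works.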

## Model Selection

### Via CLI Flag

```bash
ccw cli -p "Analyze code" --tool gemini --model gemini-2.5-pro
```

### Via Config

```json
{
  "tools": {
    "gemini": {
      "primaryModel": "gemini-2.5-flash",
      "secondaryModel": "gemini-2.5-pro"
    }
  }
}
```