# ccw-litellm

A unified LiteLLM interface layer shared by the ccw and codex-lens projects.
## Features

- **Unified LLM Interface**: Abstract interface for LLM operations (chat, completion)
- **Unified Embedding Interface**: Abstract interface for text embeddings
- **Multi-Provider Support**: OpenAI, Anthropic, Azure, and more via LiteLLM
- **Configuration Management**: YAML-based configuration with environment variable substitution
- **Type Safety**: Full type annotations with Pydantic models
## Installation

```bash
pip install -e .
```
## Quick Start

### Configuration

Create a configuration file at `~/.ccw/config/litellm-config.yaml`:
```yaml
version: 1

default_provider: openai

providers:
  openai:
    api_key: ${OPENAI_API_KEY}
    api_base: https://api.openai.com/v1

llm_models:
  default:
    provider: openai
    model: gpt-4

embedding_models:
  default:
    provider: openai
    model: text-embedding-3-small
    dimensions: 1536
```
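The Custom Configuration example further down refers to a `fast` model alias; aliases beyond `default` are defined the same way under `llm_models`. A sketch (the alias name and model choice here are illustrative, not part of any shipped config):

```yaml
llm_models:
  default:
    provider: openai
    model: gpt-4
  fast:
    provider: openai
    model: gpt-4o-mini  # illustrative; any LiteLLM-supported model id works
```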
### Usage

#### LLM Client
```python
from ccw_litellm import LiteLLMClient, ChatMessage

# Initialize client with the "default" model alias
client = LiteLLMClient(model="default")

# Chat completion
messages = [
    ChatMessage(role="user", content="Hello, how are you?"),
]
response = client.chat(messages)
print(response.content)

# Text completion
response = client.complete("Once upon a time")
print(response.content)
```
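Because `chat` takes a full message list, a multi-turn conversation can be built by appending each reply before the next call. A minimal sketch using only the types shown above (the `assistant` role value is an assumption based on the usual chat-message convention):

```python
from ccw_litellm import LiteLLMClient, ChatMessage

client = LiteLLMClient(model="default")
history = [ChatMessage(role="user", content="Name three prime numbers.")]
reply = client.chat(history)

# Feed the assistant's answer back in so the next turn has context
history.append(ChatMessage(role="assistant", content=reply.content))
history.append(ChatMessage(role="user", content="Now double each of them."))
print(client.chat(history).content)
```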
#### Embedder
```python
from ccw_litellm import LiteLLMEmbedder

# Initialize embedder with the "default" model alias
embedder = LiteLLMEmbedder(model="default")

# Embed a single text
vector = embedder.embed("Hello world")
print(vector.shape)  # (1, 1536)

# Embed multiple texts
vectors = embedder.embed(["Text 1", "Text 2", "Text 3"])
print(vectors.shape)  # (3, 1536)
```
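The `.shape` output suggests `embed` returns a NumPy-style array of row vectors, so the usual similarity computations apply directly. A sketch computing pairwise cosine similarity (assuming the return value is a NumPy array):

```python
import numpy as np
from ccw_litellm import LiteLLMEmbedder

embedder = LiteLLMEmbedder(model="default")
vectors = embedder.embed([
    "The cat sat on the mat.",
    "A feline rested on a rug.",
    "Quarterly revenue grew 8%.",
])

# Normalize each row to unit length; a dot product then gives cosine similarity
unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
similarity = unit @ unit.T  # (3, 3) matrix; entry [i, j] compares text i with text j
print(similarity)
```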
#### Custom Configuration
```python
from ccw_litellm import LiteLLMClient, load_config

# Load a custom configuration file
config = load_config("/path/to/custom-config.yaml")

# Use it when constructing the client (the "fast" alias must be defined in that file)
client = LiteLLMClient(model="fast", config=config)
```
## Configuration Reference

### Provider Configuration

```yaml
providers:
  <provider_name>:
    api_key: <api_key_or_${ENV_VAR}>
    api_base: <base_url>
```
Supported providers include `openai`, `anthropic`, `azure`, `vertex_ai`, and `bedrock`, plus any other provider LiteLLM recognizes.
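For example, an Anthropic entry follows the same shape (the environment variable name here is the conventional one, shown for illustration):

```yaml
providers:
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
```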
### LLM Model Configuration

```yaml
llm_models:
  <model_name>:
    provider: <provider_name>
    model: <model_identifier>
```
### Embedding Model Configuration

```yaml
embedding_models:
  <model_name>:
    provider: <provider_name>
    model: <model_identifier>
    dimensions: <embedding_dimensions>
```
### Environment Variables

The configuration supports environment variable substitution using the `${VAR}` or `${VAR:-default}` syntax:

```yaml
providers:
  openai:
    api_key: ${OPENAI_API_KEY}                              # Required
    api_base: ${OPENAI_API_BASE:-https://api.openai.com/v1} # With default
```
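This makes it possible to retarget the client without editing the YAML, for example when routing through a local proxy (the endpoint below is illustrative):

```bash
export OPENAI_API_KEY="..."                       # picked up via ${OPENAI_API_KEY}
export OPENAI_API_BASE="http://localhost:4000/v1" # overrides the :-default fallback
```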
## API Reference

### Interfaces

- `AbstractLLMClient`: Abstract base class for LLM clients
- `AbstractEmbedder`: Abstract base class for embedders
- `ChatMessage`: Message data class (`role`, `content`)
- `LLMResponse`: Response data class (`content`, `raw`)
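The abstract base classes allow swapping in non-LiteLLM backends. A minimal sketch of a stub client for tests, assuming `chat` and `complete` are the required abstract methods (their exact signatures are an assumption based on the usage shown above):

```python
from ccw_litellm import AbstractLLMClient, ChatMessage, LLMResponse

class EchoClient(AbstractLLMClient):
    """Deterministic stand-in for tests: echoes input instead of calling an API."""

    def chat(self, messages: list[ChatMessage]) -> LLMResponse:
        # raw would normally carry the provider payload; nothing to report here
        return LLMResponse(content=messages[-1].content, raw=None)

    def complete(self, prompt: str) -> LLMResponse:
        return LLMResponse(content=prompt, raw=None)
```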
### Implementations

- `LiteLLMClient`: LiteLLM implementation of `AbstractLLMClient`
- `LiteLLMEmbedder`: LiteLLM implementation of `AbstractEmbedder`
### Configuration

- `LiteLLMConfig`: Root configuration model
- `ProviderConfig`: Provider configuration model
- `LLMModelConfig`: LLM model configuration model
- `EmbeddingModelConfig`: Embedding model configuration model
- `load_config(path)`: Load configuration from a YAML file
- `get_config(path, reload)`: Get the global configuration singleton
- `reset_config()`: Reset the global configuration (for testing)
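Since `get_config` caches globally, tests that load different configurations should clear that state between cases with `reset_config`. A sketch of a pytest fixture doing so (the fixture name is illustrative):

```python
import pytest
from ccw_litellm import reset_config

@pytest.fixture(autouse=True)
def isolated_config():
    # Ensure each test starts and ends with no cached global configuration
    reset_config()
    yield
    reset_config()
```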
## Development

### Running Tests

```bash
pytest tests/ -v
```

### Type Checking

```bash
mypy src/ccw_litellm
```
## License

MIT