LLM Providers¶
Eval AI Library supports multiple LLM providers through a unified interface. All metrics work with any provider — just change the model string.
Supported Providers¶
| Provider | Prefix | Example | API Key Variable |
|---|---|---|---|
| OpenAI | openai: (default) | gpt-4o | OPENAI_API_KEY |
| Azure OpenAI | azure: | azure:gpt-4o | AZURE_OPENAI_API_KEY |
| Google Gemini | google: | google:gemini-2.0-flash | GOOGLE_API_KEY |
| Anthropic Claude | anthropic: | anthropic:claude-3-5-sonnet-latest | ANTHROPIC_API_KEY |
| Ollama | ollama: | ollama:llama3 | — |
| Custom | — | Pass CustomLLMClient instance | — |
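Each hosted provider reads its key from the environment variable listed above. A minimal sketch of setting keys from Python before running an evaluation (the values below are placeholders, not real keys):

```python
import os

# Placeholder keys for illustration only; in practice, export these
# variables in your shell or load them from a secrets manager.
os.environ.setdefault("OPENAI_API_KEY", "sk-...")
os.environ.setdefault("ANTHROPIC_API_KEY", "sk-ant-...")

print("OPENAI_API_KEY" in os.environ)
```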
Model Specification Format¶
```python
# Short form (OpenAI is the default provider)
model = "gpt-4o"

# Full form with provider prefix
model = "provider:model_name"

# Examples
model = "openai:gpt-4o"
model = "anthropic:claude-3-5-sonnet-latest"
model = "google:gemini-2.0-flash"
model = "ollama:llama3"
model = "azure:gpt-4o"
```
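The prefix convention can be illustrated with a small, hypothetical parser; `parse_model` is not part of the library, it only shows how a `provider:model` string splits at the first colon, with bare names falling back to OpenAI:

```python
def parse_model(spec: str, default_provider: str = "openai"):
    """Split a 'provider:model' string into (provider, model_name)."""
    provider, sep, name = spec.partition(":")
    if not sep:  # no colon: the whole string is the model name
        return default_provider, spec
    return provider, name

print(parse_model("gpt-4o"))                             # ('openai', 'gpt-4o')
print(parse_model("anthropic:claude-3-5-sonnet-latest"))  # ('anthropic', 'claude-3-5-sonnet-latest')
```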
Using LLMDescriptor¶
For programmatic provider selection:
```python
from eval_lib import LLMDescriptor, Provider

model = LLMDescriptor(provider=Provider.OPENAI, model="gpt-4o")
model = LLMDescriptor(provider=Provider.ANTHROPIC, model="claude-3-5-sonnet-latest")
model = LLMDescriptor(provider=Provider.GOOGLE, model="gemini-2.0-flash")
```
Mix Providers in One Evaluation¶
You can use different providers for different metrics:
```python
from eval_lib import (
    AnswerRelevancyMetric,
    FaithfulnessMetric,
    CustomEvalMetric,
)

metrics = [
    # OpenAI for answer relevancy
    AnswerRelevancyMetric(model="gpt-4o", threshold=0.7),
    # Claude for faithfulness
    FaithfulnessMetric(model="anthropic:claude-3-5-sonnet-latest", threshold=0.7),
    # Gemini for custom evaluation
    CustomEvalMetric(
        model="google:gemini-2.0-flash",
        threshold=0.7,
        name="Quality",
        criteria="Evaluate response quality",
    ),
]
```
Direct LLM Calls¶
You can also make direct LLM calls using the library's client:
```python
from eval_lib import chat_complete, get_embeddings

# Chat completion (returns the response text and the call cost in USD)
response, cost = await chat_complete(
    llm="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0.7,
)

# Embeddings
embeddings, cost = await get_embeddings(
    model="openai:text-embedding-3-small",
    texts=["Hello world", "How are you?"],
)
```
Cost Tracking¶
All API calls return their cost in USD when available. The evaluation engine aggregates these costs across all metrics and test cases, so a run reports one total alongside the per-metric figures.
See Pricing for model pricing details.
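Aggregation itself is simple accumulation. A sketch of the idea with made-up per-metric costs (the names and figures below are illustrative, not library output):

```python
# Hypothetical per-metric costs (USD) returned by individual LLM calls
costs = {
    "answer_relevancy": 0.0021,
    "faithfulness": 0.0034,
    "custom_quality": 0.0018,
}

total_cost = sum(costs.values())
print(f"Total evaluation cost: ${total_cost:.4f}")
```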