CORE supports multiple AI model providers. Set your provider and API key in your environment variables, then choose which models to use for chat and embeddings.

## Configuration

Two environment variables control which provider and model CORE uses:

```bash
CHAT_PROVIDER=openai       # openai | anthropic | google | azure | ollama
MODEL=gpt-5-2025-08-07     # model ID for the selected provider
```
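It can help to sanity-check this pair at startup before any API call is made. A minimal sketch, assuming only the variable names and provider list documented above (the `load_chat_config` helper itself is illustrative, not part of CORE):

```python
import os

# Providers accepted by CHAT_PROVIDER, per the comment above
KNOWN_PROVIDERS = {"openai", "anthropic", "google", "azure", "ollama"}

def load_chat_config() -> dict:
    """Read CHAT_PROVIDER and MODEL from the environment and validate them."""
    provider = os.environ.get("CHAT_PROVIDER", "openai").lower()
    model = os.environ.get("MODEL")
    if provider not in KNOWN_PROVIDERS:
        raise ValueError(f"Unknown CHAT_PROVIDER: {provider!r}")
    if not model:
        raise ValueError("MODEL must be set to a model ID for the chosen provider")
    return {"provider": provider, "model": model}
```

Failing fast here gives a clearer error than a rejected API request later.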

## OpenAI

Get your API key from platform.openai.com.

```bash
CHAT_PROVIDER=openai
OPENAI_API_KEY=sk-...
MODEL=gpt-5-2025-08-07
```

### Chat models

| Model ID | Label | Complexity |
| --- | --- | --- |
| `gpt-5.2-2025-12-11` | GPT-5.2 | Medium |
| `gpt-5-2025-08-07` | GPT-5 | Medium |
| `gpt-5-mini-2025-08-07` | GPT-5 Mini | Low |

### Embedding models

| Model ID | Label | Dimensions |
| --- | --- | --- |
| `text-embedding-3-small` | Text Embedding 3 Small | 1536 |
| `text-embedding-3-large` | Text Embedding 3 Large | 3072 |

```bash
EMBEDDINGS_PROVIDER=openai
EMBEDDING_MODEL=text-embedding-3-small
EMBEDDING_MODEL_SIZE=1536
```

**Optional: OpenAI-compatible proxy.** If you’re using a proxy or a service with an OpenAI-compatible API, set `OPENAI_BASE_URL` and `OPENAI_API_MODE`:

```bash
OPENAI_BASE_URL=https://your-proxy.example.com/v1
OPENAI_API_MODE=chat_completions   # use chat_completions for most proxies
```
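`EMBEDDING_MODEL_SIZE` must agree with the dimensions of the chosen embedding model, or stored vectors will not match what the provider returns. A small startup check catches the mismatch early; the dimension values below come from the tables on this page, while the helper itself is an illustrative sketch:

```python
import os

# Dimensions per embedding model, taken from the tables on this page
EMBEDDING_DIMS = {
    "text-embedding-3-small": 1536,
    "text-embedding-3-large": 3072,
    "text-embedding-004": 768,
    "mxbai-embed-large": 1024,
}

def check_embedding_size() -> int:
    """Verify EMBEDDING_MODEL_SIZE matches the known dimension of EMBEDDING_MODEL."""
    model = os.environ["EMBEDDING_MODEL"]
    size = int(os.environ["EMBEDDING_MODEL_SIZE"])
    expected = EMBEDDING_DIMS.get(model)  # unknown models are not checked
    if expected is not None and size != expected:
        raise ValueError(f"{model} produces {expected}-dim vectors, not {size}")
    return size
```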

## Anthropic

Get your API key from console.anthropic.com.

```bash
CHAT_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
MODEL=claude-sonnet-4-6
```

### Chat models

| Model ID | Label | Complexity |
| --- | --- | --- |
| `claude-sonnet-4-6` | Claude Sonnet 4.6 | Medium |
| `claude-haiku-4-5` | Claude Haiku 4.5 | Low |

## Google

Get your API key from aistudio.google.com.

```bash
CHAT_PROVIDER=google
GOOGLE_GENERATIVE_AI_API_KEY=AIza...
MODEL=gemini-2.5-pro-preview-03-25
```

### Chat models

| Model ID | Label | Complexity |
| --- | --- | --- |
| `gemini-2.5-pro-preview-03-25` | Gemini 2.5 Pro | Medium |
| `gemini-2.5-flash-preview-04-17` | Gemini 2.5 Flash | Low |

### Embedding models

| Model ID | Label | Dimensions |
| --- | --- | --- |
| `text-embedding-004` | Text Embedding 004 | 768 |

```bash
EMBEDDINGS_PROVIDER=google
EMBEDDING_MODEL=text-embedding-004
EMBEDDING_MODEL_SIZE=768
```

## Azure OpenAI

Set up a deployment in the Azure portal, then configure:

```bash
CHAT_PROVIDER=azure
AZURE_API_KEY=...
AZURE_BASE_URL=https://<your-resource>.openai.azure.com
MODEL=azure/gpt-4.1
```

### Chat models

| Model ID | Label | Complexity |
| --- | --- | --- |
| `azure/gpt-4o` | GPT-4o (Azure) | Medium |
| `azure/gpt-4o-mini` | GPT-4o Mini (Azure) | Low |
| `azure/gpt-4.1` | GPT-4.1 (Azure) | High |

## Ollama (Local)

Run models locally with Ollama. No API key required.

```bash
CHAT_PROVIDER=ollama
OLLAMA_URL=http://ollama:11434
MODEL=llama3.2
```

Pull a model before starting CORE:

```bash
ollama pull llama3.2
```

Ollama can also be used for embeddings and reranking:

```bash
EMBEDDINGS_PROVIDER=ollama
EMBEDDING_MODEL=mxbai-embed-large
EMBEDDING_MODEL_SIZE=1024

RERANK_PROVIDER=ollama
OLLAMA_RERANK_MODEL=dengcao/Qwen3-Reranker-8B:Q5_K_M
```
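You can confirm a model has actually been pulled by querying Ollama's `/api/tags` endpoint, which lists installed models. A sketch, with the response parsing factored out (the `/api/tags` route and its `{"models": [...]}` shape are Ollama's real API; the helper functions are illustrative):

```python
import json
import urllib.request

def model_available(tags_json: str, model: str) -> bool:
    """Check whether a model name appears in an Ollama /api/tags response."""
    names = [m["name"] for m in json.loads(tags_json).get("models", [])]
    # Ollama reports names like "llama3.2:latest"; match with or without the tag
    return any(n == model or n.split(":")[0] == model for n in names)

def check_ollama(base_url: str, model: str) -> bool:
    """Query a running Ollama server (e.g. OLLAMA_URL) for a pulled model."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return model_available(resp.read().decode(), model)
```

Running `check_ollama("http://ollama:11434", "llama3.2")` before starting CORE avoids a confusing failure on the first chat request.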

## OpenRouter

OpenRouter gives access to models from many providers through a single API. It uses an OpenAI-compatible interface.

```bash
CHAT_PROVIDER=openai
OPENAI_API_KEY=sk-or-...
OPENAI_BASE_URL=https://openrouter.ai/api/v1
OPENAI_API_MODE=chat_completions
MODEL=openrouter/anthropic/claude-3.5-sonnet
```

## DeepSeek

Get your API key from platform.deepseek.com.

```bash
CHAT_PROVIDER=openai
OPENAI_API_KEY=sk-...
OPENAI_BASE_URL=https://api.deepseek.com/v1
OPENAI_API_MODE=chat_completions
MODEL=deepseek-chat
```

### Models

| Model ID | Description |
| --- | --- |
| `deepseek-chat` | DeepSeek V3: fast, general-purpose |
| `deepseek-reasoner` | DeepSeek R1: reasoning model |

## Groq

Get your API key from console.groq.com.

```bash
CHAT_PROVIDER=openai
OPENAI_API_KEY=gsk_...
OPENAI_BASE_URL=https://api.groq.com/openai/v1
OPENAI_API_MODE=chat_completions
MODEL=llama-3.3-70b-versatile
```

### Models

| Model ID | Description |
| --- | --- |
| `llama-3.3-70b-versatile` | Llama 3.3 70B: fast, general-purpose |
| `llama-3.1-8b-instant` | Llama 3.1 8B: ultra-fast |
| `gemma2-9b-it` | Gemma 2 9B |

## Mistral

Get your API key from console.mistral.ai.

```bash
CHAT_PROVIDER=openai
OPENAI_API_KEY=...
OPENAI_BASE_URL=https://api.mistral.ai/v1
OPENAI_API_MODE=chat_completions
MODEL=mistral-large-latest
```

### Models

| Model ID | Description |
| --- | --- |
| `mistral-large-latest` | Mistral Large: most capable |
| `mistral-small-latest` | Mistral Small: fast, cost-efficient |

## xAI (Grok)

Get your API key from console.x.ai.

```bash
CHAT_PROVIDER=openai
OPENAI_API_KEY=xai-...
OPENAI_BASE_URL=https://api.x.ai/v1
OPENAI_API_MODE=chat_completions
MODEL=grok-3
```

### Models

| Model ID | Description |
| --- | --- |
| `grok-3` | Grok 3: flagship model |
| `grok-3-mini` | Grok 3 Mini: fast, lightweight |

## Kimi (Moonshot)

Get your API key from platform.moonshot.cn.

```bash
CHAT_PROVIDER=openai
OPENAI_API_KEY=sk-...
OPENAI_BASE_URL=https://api.moonshot.cn/v1
OPENAI_API_MODE=chat_completions
MODEL=moonshot-v1-8k
```

### Models

| Model ID | Context | Description |
| --- | --- | --- |
| `moonshot-v1-8k` | 8K | Fast, short context |
| `moonshot-v1-32k` | 32K | Balanced |
| `moonshot-v1-128k` | 128K | Long context |

## Any OpenAI-compatible API

CORE can connect to any provider that exposes an OpenAI-compatible API (e.g. LM Studio, vLLM, Together AI, Fireworks):

```bash
CHAT_PROVIDER=openai
OPENAI_API_KEY=<your-key>
OPENAI_BASE_URL=https://<provider-endpoint>/v1
OPENAI_API_MODE=chat_completions
MODEL=<model-id>
```
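All of the OpenAI-compatible providers above speak the same wire protocol: a POST to `<base-url>/chat/completions` with a JSON body naming the model and the messages. A minimal request builder as a sketch (the payload shape is the standard Chat Completions format; the `build_chat_request` helper is illustrative):

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build a Chat Completions request for any OpenAI-compatible endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending is one call away: urllib.request.urlopen(req) returns the JSON reply.
```

Because only `base_url`, `api_key`, and `model` change, switching between the providers in this section is purely a configuration change.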