Pastures works with any AI provider that implements the OpenAI chat completions API. This includes OpenAI, Anthropic, Ollama, Azure OpenAI, and any custom endpoint.
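Every provider in the table below accepts the same request shape. As a rough sketch (the URL, model, and key here are placeholders, not required values), a chat completions call looks like this:

```bash
# Minimal OpenAI-style chat completions request.
# Substitute your provider's base URL, model, and key.
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```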

## Supported providers

| Provider | API Base URL | Model example | Auth |
| --- | --- | --- | --- |
| OpenAI | `https://api.openai.com/v1` | `gpt-4o` | API key |
| Anthropic | `https://api.anthropic.com/v1` | `claude-sonnet-4-20250514` | API key |
| Ollama | `http://localhost:11434/v1` | `llama3.2` | None |
| Azure OpenAI | `https://{resource}.openai.azure.com` | Your deployment name | API key |
| Custom | Any OpenAI-compatible endpoint | Provider-specific | Provider-specific |
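Azure OpenAI is the one provider above with a different URL shape: in the classic data-plane API, the deployment name goes in the path and the key goes in an `api-key` header rather than a bearer token. A hedged sketch (`my-resource`, `my-deployment`, and the `api-version` value are placeholders; check Azure's docs for current versions):

```bash
# Classic Azure OpenAI request shape: deployment name in the path,
# api-version as a query parameter, key in an api-key header.
curl "https://my-resource.openai.azure.com/openai/deployments/my-deployment/chat/completions?api-version=2024-02-01" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```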

## Configure in Pastures

Navigate to Pastures → Settings and enter:

| Field | Value |
| --- | --- |
| AI Provider | Your provider (OpenAI, Anthropic, Ollama, etc.) |
| API Base URL | The provider’s API endpoint |
| API Key | Your provider’s API key (leave blank for Ollama) |
| Model | The model identifier |
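To sanity-check these values before saving, you can list the provider’s models from the command line; most OpenAI-compatible endpoints, including OpenAI and Ollama, expose `GET /models` (Azure uses its own URL shape, as noted above):

```bash
# Rough sanity check of base URL and key: substitute the values
# you entered in Settings. A 200 response listing models means
# the endpoint and key are valid.
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```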

## Ollama local setup

Ollama lets you run models locally with no API key and no data leaving your network.
### 1. Install Ollama

```bash
brew install ollama
```

Or download it from [ollama.com](https://ollama.com).
### 2. Pull a model

```bash
ollama pull llama3.2
```

This downloads the model weights (~4 GB for llama3.2).
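To confirm the pull finished, `ollama list` shows the models available locally:

```bash
# llama3.2 should appear in this list once the pull completes.
ollama list
```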
### 3. Start Ollama

```bash
ollama serve
```

Ollama serves an OpenAI-compatible API at `http://localhost:11434/v1`.
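Before pointing Pastures at it, you can smoke-test the endpoint directly. This is the same chat completions shape shown earlier; no key is needed for a default local install:

```bash
# Smoke-test the local OpenAI-compatible endpoint.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```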
### 4. Configure in Pastures

In Pastures → Settings:

| Field | Value |
| --- | --- |
| AI Provider | Ollama |
| API Base URL | `http://localhost:11434/v1` |
| API Key | (leave blank) |
| Model | `llama3.2` |

If Rancher runs in a VM or container, `localhost` resolves to the VM or container itself, so replace it with the host machine’s IP address.
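One related caveat: Ollama listens only on `127.0.0.1` by default, so a VM or container may not reach it even with the host’s IP until you bind to all interfaces. Ollama’s `OLLAMA_HOST` variable controls this; a sketch, with `192.168.1.50` standing in for your host’s IP:

```bash
# Bind Ollama to all interfaces so VMs/containers can reach it.
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# From inside the VM/container, verify connectivity to the host.
curl http://192.168.1.50:11434/v1/models
```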

## Limitations of BYO mode

BYO (bring-your-own) mode uses your model’s general knowledge for diagnostics. The following features are only available with Rancher Oracle:

- **Source citations**: GitHub issue links in diagnostic responses
- **Fleet correlation**: cross-cluster pattern matching
- **Fix verification**: post-fix monitoring

See Oracle Setup to enable these features.