GET /health
Basic health check for the Pastures Engine. Returns 200 OK if the Engine is running and ready to accept requests.
The extension uses this endpoint in Settings when the user clicks Test Connection to verify that the configured Engine URL is reachable.
Request
No headers or body required (authentication is not enforced on this endpoint).
Response
Success (200): the Engine is running and ready to accept requests.
Failure: Any non-200 status code (or a network error) indicates the Engine is unreachable or unhealthy. The extension displays an error message in Settings.
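The Test Connection behavior described above can be sketched as a small client helper. This is an illustrative sketch, not the extension's actual code; `check_engine_health` and its signature are assumptions.

```python
import urllib.error
import urllib.request

def check_engine_health(engine_url: str, timeout: float = 5.0) -> bool:
    """Return True only if GET {engine_url}/health answers 200 OK."""
    try:
        with urllib.request.urlopen(f"{engine_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # A network error counts as unreachable, same as a non-200 status.
        return False
```

No headers or body are sent, matching the endpoint's unauthenticated contract; any failure mode collapses to `False`, which is all the Settings page needs to show an error.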
GET /api/llm/status
Returns the current AI provider configuration and health status. The extension displays this information in Settings so operators can confirm which model and provider the Engine is using.
| Header | Required | Description |
|---|---|---|
| Authorization | No | `Bearer <api-key>` (if configured) |
Response
```json
{
  "provider": "openai",
  "model": "gpt-4o",
  "status": "connected",
  "latency": "245ms",
  "air_gap_safe": false
}
```
Response Fields
| Field | Type | Description |
|---|---|---|
| provider | string | AI provider name (e.g., `openai`, `anthropic`, `ollama`, `vllm`). |
| model | string | Model identifier currently in use. |
| status | string | Connection status: `connected`, `degraded`, or `disconnected`. |
| latency | string | Round-trip latency to the AI provider. |
| air_gap_safe | boolean | `true` if the provider runs locally (no external network calls). Relevant for air-gapped environments. |
The `air_gap_safe` field helps operators in restricted environments verify that no data leaves the network. When using a local provider such as Ollama or vLLM, this field is `true`.
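A client interpreting this response might surface warnings from the two fields operators care about most, `status` and `air_gap_safe`. This is a sketch using the documented field names; the warning wording and the `summarize_llm_status` helper are assumptions.

```python
import json

def summarize_llm_status(payload: str) -> list[str]:
    """Turn a GET /api/llm/status body into human-readable warnings."""
    status = json.loads(payload)
    warnings = []
    if status["status"] != "connected":
        # "degraded" and "disconnected" both warrant operator attention.
        warnings.append(f"provider {status['provider']} is {status['status']}")
    if not status["air_gap_safe"]:
        # Flag remote providers for operators in air-gapped environments.
        warnings.append(f"provider {status['provider']} makes external network calls")
    return warnings
```

Applied to the sample response above, this would flag only the external-network call, since the provider is connected but `air_gap_safe` is `false`.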