## How Oracle works
1. **User asks a question.** A query is submitted through the Pastures AI drawer (e.g., “Why is my etcd cluster degraded?”).
2. **Semantic search.** Oracle embeds the query and performs semantic search across a corpus of 48,000+ real issue resolutions from 13 open-source repositories.
3. **Context retrieval.** The top-k matching issues, pull requests, and discussions are retrieved from the vector database.
4. **Grounded generation.** The retrieved context is passed to the LLM, which generates a response grounded in real resolutions rather than generic model knowledge.
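The retrieve-then-generate flow above can be sketched in a few lines. The corpus entries, bag-of-words "embedding," and prompt format below are illustrative stand-ins, not Oracle's actual implementation (which uses a dense embedding model and a vector database):

```python
# Sketch of embed -> search -> retrieve -> prompt, with toy data.
from collections import Counter
import math

# Hypothetical corpus entries; real ones are GitHub issue resolutions.
corpus = [
    {"id": "etcd-1234", "text": "etcd cluster degraded member lost quorum after disk latency spike"},
    {"id": "rke2-567", "text": "node NotReady caused by kubelet certificate rotation failure"},
]

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system uses a dense model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1):
    # Top-k semantic search over the corpus.
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d["text"])), reverse=True)[:k]

def build_prompt(query: str, docs) -> str:
    # The retrieved context grounds the LLM's answer.
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in docs)
    return f"Answer using only these resolutions:\n{context}\n\nQuestion: {query}"

hits = retrieve("Why is my etcd cluster degraded")
print(hits[0]["id"])  # etcd-1234
```

The retrieved IDs double as citation anchors, which is what makes the generated answer verifiable rather than free-form.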
## Key differentiators
**Source citations.** Every response cites actual GitHub issues rather than hallucinated references; users can click through to verify the source.
**Fleet correlation.** Oracle detects when the same issue is active across multiple clusters, automatically surfacing fleet-wide patterns.
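A minimal sketch of how fleet-wide patterns might be detected, assuming each per-cluster diagnosis records the matched issue ID (the record shape here is hypothetical):

```python
from collections import defaultdict

# Hypothetical per-cluster diagnosis records: (cluster, matched issue id).
diagnoses = [
    ("cluster-a", "etcd-1234"),
    ("cluster-b", "etcd-1234"),
    ("cluster-c", "rke2-567"),
]

def fleet_patterns(records):
    by_issue = defaultdict(set)
    for cluster, issue in records:
        by_issue[issue].add(cluster)
    # An issue active on two or more clusters is a fleet-wide pattern.
    return {i: sorted(c) for i, c in by_issue.items() if len(c) > 1}

print(fleet_patterns(diagnoses))  # {'etcd-1234': ['cluster-a', 'cluster-b']}
```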
**Confidence scoring.** Each response includes a confidence score reflecting how well the query matches known resolutions in the corpus.
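One way such a score could be derived from retrieval similarities; this scheme is purely an assumption for illustration, not Oracle's documented formula:

```python
def confidence(similarities):
    # Assumed scheme: confidence tracks the best match's similarity,
    # damped when few corpus resolutions support the answer.
    if not similarities:
        return 0.0
    top = max(similarities)
    support = sum(1 for s in similarities if s >= 0.5 * top)
    return round(top * min(1.0, 0.5 + 0.25 * support), 2)

print(confidence([0.91, 0.88, 0.40]))  # 0.91
```

The intuition is the one stated above: a query that matches known resolutions closely, and more than once, earns a higher score than a near-miss singleton.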
**Fix verification.** After a fix is applied, Oracle can monitor the cluster and verify that the issue is resolved.
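The verification step can be sketched as a poll-until-resolved loop. The `check` callable below is a stub; the assumption is that a real check inspects live cluster state rather than a canned sequence:

```python
import time

def verify_fix(check, timeout_s=300, interval_s=1, clock=time.monotonic, sleep=time.sleep):
    # Poll until `check` reports the symptom has cleared or the deadline passes.
    deadline = clock() + timeout_s
    while clock() < deadline:
        if check():
            return True
        sleep(interval_s)
    return False

# Usage with a stub that "resolves" on the third poll:
polls = iter([False, False, True])
print(verify_fix(lambda: next(polls), timeout_s=10, interval_s=0, sleep=lambda s: None))  # True
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting.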
## What Oracle is not
- Oracle is not the extension. The Pastures extension is fully open source (Apache 2.0) and works without Oracle by connecting to any OpenAI-compatible model.
- Oracle is not a general-purpose LLM. It is specifically tuned for Kubernetes and Rancher ecosystem troubleshooting.
- Oracle is not required. You can use Pastures in demo mode or with your own model. Oracle is the optional paid service that adds grounded responses, citations, and fleet intelligence.
## Architecture
Clients submit queries via POST /api/diagnose. The Engine performs RAG against the corpus and returns a structured diagnosis with source citations. See the API Reference for full endpoint documentation.
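Assuming a JSON request and response (the field names below are illustrative, not the authoritative schema; consult the API Reference), a diagnose exchange might look like:

```python
import json

# Hypothetical request body for POST /api/diagnose.
request_body = json.dumps({
    "query": "Why is my etcd cluster degraded?",
    "cluster": "cluster-a",  # illustrative field, not a documented parameter
})

# A structured diagnosis as described above: answer, confidence, citations.
sample_response = json.loads("""{
  "diagnosis": "etcd lost quorum after a disk latency spike.",
  "confidence": 0.91,
  "citations": [{"repo": "etcd-io/etcd", "issue": 1234}]
}""")

print(sample_response["citations"][0]["repo"])  # etcd-io/etcd
```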