We are using Google's Gemini 2.5 model to generate answers.

The system is RAG-driven (Retrieval-Augmented Generation), meaning the LLM is only responsible for reading and synthesizing the relevant information we retrieve and provide to it, rather than answering from its parametric knowledge alone.
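A minimal sketch of this retrieve-then-synthesize flow is shown below. The function names, the toy keyword-overlap retriever, and the prompt template are all hypothetical illustrations, not RavenPack's actual pipeline; in practice retrieval would use proper search/embedding infrastructure and the prompt would be sent to the Gemini API.

```python
# Illustrative RAG flow: retrieve relevant passages, then hand the LLM
# only that context to read and synthesize. All names are hypothetical.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    q_terms = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Constrain the model to the retrieved context only."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

corpus = [
    "Acme Corp reported record Q3 revenue of $2.1B.",
    "The weather in Madrid was sunny all week.",
]
query = "What was Acme Corp's Q3 revenue?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)  # This prompt would then be sent to the LLM for synthesis.
```

The key property is that the model's answer is grounded in the retrieved passages rather than whatever it memorized during pre-training.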
We prefer publicly available models, to which we can optionally apply fine-tuning to specialize them for the task at hand using RavenPack proprietary content.