📄 Overview
We at AI Planet are excited to introduce BeyondLLM, an open-source framework designed to streamline the development of RAG and LLM applications, complete with evaluations, all in just 5-7 lines of code.
Yes, you read that correctly. Only 5-7 lines of code.
Let's look at what BeyondLLM is and why you might need it.
Why BeyondLLM?
Easily build RAG and Evals in 5 lines of code
Building a robust RAG (Retrieval-Augmented Generation) system involves integrating various components and managing the associated hyperparameters. BeyondLLM offers an optimal framework for quickly experimenting with RAG applications. With components like `source` and `auto_retriever`, which support several parameters, most of the integration work is automated, eliminating the need for manual coding. Additionally, we are actively working on features such as hyperparameter tuning for RAG applications, the next key item on our development roadmap.
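To make the "5-7 lines" claim concrete, here is a minimal pipeline sketch using the `source` and `auto_retriever` components mentioned above. This is a hedged illustration: the exact parameter names (`dtype`, `chunk_size`, `top_k`, etc.) follow the library's quickstart and may differ across versions, and running it requires `pip install beyondllm` plus an API key for the default Gemini model.

```python
# Minimal RAG pipeline sketch with BeyondLLM (assumes `pip install beyondllm`
# and a GOOGLE_API_KEY in the environment for the default Gemini model;
# parameter names are based on the quickstart and may vary by version).
from beyondllm import source, retrieve, generator

# 1. Ingest and chunk the data (here: a hypothetical URL)
data = source.fit(path="https://example.com/article", dtype="url",
                  chunk_size=512, chunk_overlap=50)

# 2. Build a retriever over the chunks with default embeddings
retriever = retrieve.auto_retriever(data, type="normal", top_k=4)

# 3. Generate a grounded answer from the retrieved context
pipeline = generator.Generate(question="What is the article about?",
                              retriever=retriever)
print(pipeline.call())
```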
Customised Evaluation Support
Most RAG evaluation tooling on the market relies on an OpenAI API key and closed-source LLMs. With BeyondLLM, however, you have the flexibility to select any LLM for evaluating both LLMs and embeddings.
We offer support for 2 evaluation metrics for embeddings: Hit Rate and MRR (Mean Reciprocal Rank), allowing users to choose the most suitable model for their specific needs. Additionally, we provide 4 evaluation metrics for assessing Large Language Models across various criteria, in line with current research standards.
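To show what the two embedding metrics actually measure, here is a plain-Python sketch of Hit Rate and MRR over a single query (this illustrates the standard definitions, not BeyondLLM's internal implementation):

```python
def hit_rate(ranked_ids, relevant_id, k=4):
    """1.0 if the ground-truth chunk appears in the top-k retrieved results."""
    return 1.0 if relevant_id in ranked_ids[:k] else 0.0

def mrr(ranked_ids, relevant_id):
    """Reciprocal of the rank at which the relevant chunk first appears (0.0 if absent)."""
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id == relevant_id:
            return 1.0 / rank
    return 0.0

# The retriever returned chunks [b, a, c]; the ground-truth chunk is `a`.
print(hit_rate(["b", "a", "c"], "a"))  # 1.0 -> it is within the top 4
print(mrr(["b", "a", "c"], "a"))       # 0.5 -> first found at rank 2
```

Averaging these per-query scores over an evaluation set gives the dataset-level Hit Rate and MRR.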
Support for Various Custom LLMs to Suit Your Needs
HuggingFace: Easy access to open-source LLMs for everyone
Ollama: Run LLMs locally
Gemini (default LLM): Build multimodal applications
OpenAI: Powerful chat models with best-in-class response quality
Azure: Support for large 32K-context models with good response quality
Reduce LLM Hallucination
The primary objective is to minimize or eliminate hallucinations within the RAG framework. To support this goal, we've developed the Advanced RAG section, which enables rapid experimentation for building RAG pipelines with reduced hallucination risk. BeyondLLM features, including `source` and `auto_retriever`, incorporate functionality such as a Markdown splitter, chunking strategies, re-ranking (cross-encoders and flag embeddings), and hybrid search, enhancing the reliability of RAG applications. It's worth noting Andrej Karpathy's insight: "Hallucination is a LLM's greatest feature and not a bug," underscoring the inherent capabilities of language models.
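As an illustration of one of these reliability techniques, here is how hybrid search can merge a keyword ranking with a vector-similarity ranking using reciprocal rank fusion, sketched in plain Python. RRF is a common fusion scheme for hybrid retrieval; the document names, the `k=60` constant, and the fusion choice itself are illustrative assumptions, not a description of BeyondLLM's internals.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse ranked lists: each document scores sum(1 / (k + rank)) across lists."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc3", "doc1", "doc7"]  # lexical (BM25-style) ranking
vector_hits = ["doc1", "doc4", "doc3"]   # embedding-similarity ranking
print(reciprocal_rank_fusion([keyword_hits, vector_hits]))
# -> ['doc1', 'doc3', 'doc4', 'doc7']
```

Documents that rank well in both lists (like `doc1` and `doc3` here) rise to the top, which is what makes hybrid search more robust than either signal alone.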
Enough talking, let's build.