Overview

We at AI Planet are excited to introduce BeyondLLM, an open-source framework designed to streamline the development of RAG and LLM applications, complete with evaluations, all in just 5-7 lines of code.

Yes, you read that correctly. Only 5-7 lines of code.

Let's understand what BeyondLLM is and why you need it.

Why BeyondLLM?

Easily build RAG and Evals in 5 lines of code

  • Building a robust RAG (Retrieval-Augmented Generation) system involves integrating various components and managing the associated hyperparameters. BeyondLLM offers a streamlined framework for quickly experimenting with RAG applications.

  • With components like source and auto_retriever, which expose several parameters, most of the integration work is automated, eliminating the need for manual glue code (a minimal pipeline is sketched just after this list).

  • Additionally, we are actively working on features such as hyperparameter tuning for RAG applications, the next key item on our development roadmap.
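
To make this concrete, here is a minimal sketch of such a pipeline, loosely following the Quickstart Guide. The YouTube URL and question are placeholders, Gemini is assumed as the default LLM, and exact parameter values may differ across versions:

```python
import os
from beyondllm import source, retrieve, generator

os.environ['GOOGLE_API_KEY'] = "your-google-api-key"  # Gemini is the default LLM

# Load and chunk a data source (placeholder URL)
data = source.fit(path="https://www.youtube.com/watch?v=example",
                  dtype="youtube", chunk_size=512, chunk_overlap=50)

# Build a retriever over the chunked data with the default embedding model
retriever = retrieve.auto_retriever(data, type="normal", top_k=4)

# Retrieve relevant chunks, generate an answer, and print it
pipeline = generator.Generate(question="What is this video about?",
                              retriever=retriever)
print(pipeline.call())
```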

Customised Evaluation Support

  • RAG evaluation on the market today largely relies on an OpenAI API key and closed-source LLMs. With BeyondLLM, you have the flexibility to select any LLM for evaluating both LLMs and embeddings.

  • We support 2 evaluation metrics for embeddings, hit rate and MRR (Mean Reciprocal Rank), allowing users to choose the most suitable embedding model for their specific needs.

  • Additionally, we provide 4 evaluation metrics for assessing Large Language Model responses across various criteria, in line with current research standards (see the evaluation sketch after this list).
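
For illustration, both kinds of evaluation hook directly onto the retriever and pipeline objects from the sketch above. The method names below follow the Evaluation and Evaluate retriever pages, but treat them as an assumption and check those pages for the exact API in your version:

```python
from beyondllm import llms

# Any LLM can act as the evaluator; Gemini shown here as the default
judge = llms.GeminiModel()

# Embedding metrics for the retriever: hit rate and MRR
print(retriever.evaluate(judge))

# LLM response metrics across the RAG triad, computed in one call
pipeline.get_rag_triad_evals()
```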

Support for Various Custom LLMs

  • HuggingFace: Easy access to open-source LLMs for everyone

  • Ollama: Run LLMs locally

  • Gemini (default LLM): Build multimodal applications

  • OpenAI: Powerful chat models with high-quality responses

  • Azure: Large 32K-token context windows with good response quality. (Swapping between these providers is sketched below.)
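
Switching providers is intended to be a one-line change: instantiate a different model class and pass it to the generator. The class names below are indicative and the HuggingFace model ID is only an example; consult the LLMs page for the exact signatures in your version. The retriever is reused from the first sketch:

```python
from beyondllm import llms, generator

# Pick one provider; each assumes the corresponding API key is configured
llm = llms.GeminiModel()                                    # default, multimodal
# llm = llms.HuggingFaceHubModel(model="mistralai/Mistral-7B-Instruct-v0.2")
# llm = llms.OllamaModel(model="llama2")                    # runs locally
# llm = llms.ChatOpenAIModel()                              # OpenAI chat models
# llm = llms.AzureOpenAIModel()                             # 32K context on Azure

pipeline = generator.Generate(question="Summarise the source.",
                              retriever=retriever, llm=llm)
print(pipeline.call())
```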

Reduce LLM Hallucination

  • The primary objective is to minimize, or eliminate, hallucination within the RAG framework.

  • To support this goal, we've developed the Advanced RAG section, which facilitates rapid experimentation for building RAG pipelines with reduced hallucination risk.

  • BeyondLLM components such as source and auto_retriever incorporate functionality such as a Markdown splitter, chunking strategies, re-ranking (cross encoders and flag embeddings), and hybrid search, enhancing the reliability of RAG applications (see the sketch after this list).
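
As an example, these options surface as retriever types on auto_retriever. The sketch reuses the data object from the first sketch; the type strings follow the Re-ranker and Hybrid Retrievers pages, but treat the exact values as an assumption and verify them against those pages:

```python
from beyondllm import retrieve

# Cross-encoder re-ranking: retrieve a wider candidate set, then re-score it
reranker = retrieve.auto_retriever(data, type="cross-rerank", top_k=5)

# Hybrid search: combine dense vector similarity with keyword matching
hybrid = retrieve.auto_retriever(data, type="hybrid", top_k=5)

print(hybrid.retrieve("your query here"))
```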

Done talking, let's build.

It's worth noting Andrej Karpathy's insight: "Hallucination is a LLM's greatest feature and not a bug," underscoring the inherent capabilities of language models.
