Generator

What is a Generator?

The generator is the core component that produces the final response. Besides generating responses, it also lets you evaluate your pipeline from within the generator. The Generate function combines the retriever and the LLM to answer the user query.

Parameters

  • question : The query from the user.

  • system_prompt Optional[str] : The system prompt that directs the LLM's responses.

  • retriever : The retriever that fetches relevant information from the knowledge base based on the user query.

  • llm [default: Gemini model] : The language model that generates the response from the information fetched by the retriever.

Code Snippet

from beyondllm import generator

user_prompt = "......"
# using the default LLM (Gemini)
pipeline = generator.Generate(question=user_prompt, retriever=retriever)


from beyondllm.llms import OllamaModel
llm = OllamaModel(model="llama2")
system_prompt = "You are an AI assistant...."
pipeline = generator.Generate(
                 question=user_prompt,
                 system_prompt=system_prompt,
                 llm=llm,
                 retriever=retriever
)

Call

Once the pipeline is set up, use the call function to get the generated response from the LLM, which acts as the generator in the RAG pipeline.

print(pipeline.call())

Evaluation

Evaluation is an integral part of BeyondLLM, as it pinpoints the weak spots in your pipeline. The generator lets you evaluate the pipeline against a list of important benchmarks. For more information, refer to: Evaluation
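As a sketch of how this fits together (the `get_rag_triad_evals` method is assumed here from the Evaluation page; exact names and output format may differ in your version):

```python
from beyondllm import generator

# Build the pipeline as above, then run the RAG triad evaluations
# (context relevancy, answer relevancy, groundedness) on it.
pipeline = generator.Generate(question=user_prompt, retriever=retriever)
print(pipeline.get_rag_triad_evals())
```

This requires the same retriever and LLM credentials used to build the pipeline, since the evaluations call the LLM under the hood.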

Last updated 1 year ago