
Chat with PowerPoint Presentation

Import the required libraries

from beyondllm import source, retrieve, embeddings, llms, generator

Setup API key

import os
from getpass import getpass
os.environ['GOOGLE_API_KEY'] = getpass('Enter your Google API key: ')

Load the Source Data

Here we use a sample PowerPoint presentation on Document Generation Using ChatGPT. Replace the path below with the path to your own PowerPoint file.

data = source.fit("path/to/your/powerpoint/file", dtype="ppt", chunk_size=512, chunk_overlap=51)
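The chunk_size and chunk_overlap arguments control how the presentation's text is split before embedding. Conceptually, the split works like the following simplified sketch (illustrative only; BeyondLLM's actual splitter is more sophisticated):

```python
def chunk_text(text, chunk_size=512, chunk_overlap=51):
    """Split text into overlapping fixed-size windows."""
    step = chunk_size - chunk_overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

chunks = chunk_text("a" * 1000, chunk_size=512, chunk_overlap=51)
# consecutive chunks share 51 characters, so context is not cut mid-thought
```

The overlap means a sentence that straddles a chunk boundary still appears whole in at least one chunk.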

Embedding model

BeyondLLM uses GeminiEmbeddings as the default embedding model, so no extra code is needed here. It reads your API key from the GOOGLE_API_KEY environment variable set above.
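The embedding model maps each chunk to a numeric vector; at query time the retriever compares the question's vector against the chunk vectors, typically via cosine similarity. A minimal sketch of that comparison (plain Python, not BeyondLLM's internals):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # same direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```

Chunks whose vectors point in nearly the same direction as the query vector are treated as the most relevant.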

Auto retriever to retrieve documents

retriever = retrieve.auto_retriever(data, type="normal", top_k=3)
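Here top_k=3 means the retriever returns the three chunks whose embeddings score highest against the query embedding. A self-contained sketch of that ranking step, using hypothetical 2-d vectors (not BeyondLLM's code):

```python
def top_k(query_vec, chunk_vecs, k=3):
    """Return indices of the k chunks with the highest dot-product score."""
    def score(vec):
        return sum(q * c for q, c in zip(query_vec, vec))
    ranked = sorted(range(len(chunk_vecs)),
                    key=lambda i: score(chunk_vecs[i]), reverse=True)
    return ranked[:k]

# hypothetical embeddings for five chunks
chunk_vecs = [[0.1, 0.9], [0.9, 0.1], [0.5, 0.5], [0.8, 0.2], [0.2, 0.8]]
print(top_k([1.0, 0.0], chunk_vecs, k=3))  # -> [1, 3, 2]
```

Increasing top_k gives the generator more context at the cost of a longer prompt and more noise.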

Run Generator Model

pipeline = generator.Generate(question="what is this powerpoint presentation about?", retriever=retriever)
print(pipeline.call())

Output

The presentation focuses on exploring the depths of document generation using GPT-3.5. It entails a detailed walkthrough of the methodologies employed, shedding light on the current state, and presenting avenues for future advancements.
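Under the hood, Generate combines the retrieved chunks and the question into a single prompt for the LLM (Gemini by default). The exact template is BeyondLLM's own; a simplified sketch of the idea:

```python
def build_rag_prompt(question, retrieved_chunks):
    """Assemble a grounded prompt from retrieved context plus the question."""
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "what is this powerpoint presentation about?",
    ["Slide 1: Document Generation Using ChatGPT", "Slide 2: Methodology"],
)
```

Grounding the prompt in retrieved slide text is what keeps the answer specific to your presentation rather than the model's general knowledge.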

Deploy Inference - Gradio

import gradio as gr

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    msg = gr.Textbox()
    clear = gr.ClearButton([msg, chatbot])

    def predict(message, chat_history):
        # Build the pipeline with the user's current message so each
        # question is retrieved and answered, not the fixed one above
        pipeline = generator.Generate(question=message, retriever=retriever)
        chat_history.append((message, pipeline.call()))
        return "", chat_history

    msg.submit(predict, [msg, chatbot], [msg, chatbot])

demo.launch(share=True)
