LlamaIndex
This section demonstrates how to evaluate a LlamaIndex pipeline using BeyondLLM. We'll walk through the process step-by-step, explaining each component and its purpose.
LlamaIndex Evaluation
Setup and Imports
import os
from getpass import getpass
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, StorageContext
from llama_index.vector_stores.chroma import ChromaVectorStore
from llama_index.embeddings.fastembed import FastEmbedEmbedding
from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI
import chromadb
from beyondllm.utils import CONTEXT_RELEVENCE, GROUNDEDNESS, ANSWER_RELEVENCE
import re
import numpy as np
import pysbd
# Set up Hugging Face API token
HUGGINGFACEHUB_API_TOKEN = getpass("Enter your Hugging Face API token: ")
os.environ["HUGGINGFACEHUB_API_TOKEN"] = HUGGINGFACEHUB_API_TOKEN
Document Loading and Model Configuration
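A minimal sketch of this step, assuming your documents live in a local `./data` folder and using placeholder model ids ("BAAI/bge-small-en-v1.5" for embeddings and "mistralai/Mistral-7B-Instruct-v0.2" for the LLM are assumptions, not choices mandated by BeyondLLM — substitute any models you have access to):

```python
import os
from llama_index.core import SimpleDirectoryReader
from llama_index.embeddings.fastembed import FastEmbedEmbedding
from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI

# Load documents from a local directory; "./data" is a placeholder path --
# point it at the folder containing your own files.
documents = SimpleDirectoryReader("./data").load_data()

# Embedding model: FastEmbed runs locally, so no API key is required.
embed_model = FastEmbedEmbedding(model_name="BAAI/bge-small-en-v1.5")

# LLM served through the Hugging Face Inference API, authenticated with the
# token captured above.
llm = HuggingFaceInferenceAPI(
    model_name="mistralai/Mistral-7B-Instruct-v0.2",
    token=os.environ["HUGGINGFACEHUB_API_TOKEN"],
)
```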
Vector Store and Index Setup
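This step can be sketched as follows, assuming the `documents`, `embed_model`, and `llm` objects from the previous step are in scope; the collection name is arbitrary:

```python
import chromadb
from llama_index.core import VectorStoreIndex, StorageContext
from llama_index.vector_stores.chroma import ChromaVectorStore

# In-memory Chroma client; use chromadb.PersistentClient(path=...) instead
# to keep the index on disk between runs.
chroma_client = chromadb.EphemeralClient()
chroma_collection = chroma_client.create_collection("llamaindex_eval")

# Wire the Chroma collection into LlamaIndex via a StorageContext.
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Build the index over the loaded documents with the configured embedding
# model, then expose it as a query engine backed by the Hugging Face LLM.
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
    embed_model=embed_model,
)
query_engine = index.as_query_engine(llm=llm)
```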
Utility Functions
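The evaluation relies on small helpers for parsing judge-model replies and aggregating scores (the `re` and `numpy` imports above; `pysbd` is imported for sentence segmentation when scoring groundedness per sentence). A sketch of two such helpers — the names `extract_number` and `mean_score` are illustrative, not BeyondLLM API:

```python
import re
import numpy as np

def extract_number(text: str) -> float:
    """Pull the first numeric score out of a judge model's free-text reply.

    Returns 0.0 when no number is found, so a malformed reply does not
    crash the evaluation loop.
    """
    match = re.search(r"\d+(?:\.\d+)?", text)
    return float(match.group()) if match else 0.0

def mean_score(scores) -> float:
    """Average a list of per-item scores (e.g. per-sentence groundedness)."""
    return float(np.mean(scores)) if len(scores) else 0.0
```

For example, `extract_number("Score: 7.5/10")` yields `7.5`, and an empty score list averages to `0.0` rather than raising.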
Evaluation Functions
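Each metric fills a prompt template, asks the judge LLM for a rating, and parses the number out of the reply. The real templates are the `CONTEXT_RELEVENCE`, `GROUNDEDNESS`, and `ANSWER_RELEVENCE` constants imported from `beyondllm.utils` above; the template below is a simplified stand-in, and the `{question}`/`{context}` field names are assumptions about its shape:

```python
import re

# Simplified stand-in for the BeyondLLM prompt templates; the real ones
# live in beyondllm.utils (CONTEXT_RELEVENCE, GROUNDEDNESS, ANSWER_RELEVENCE).
SCORING_TEMPLATE = (
    "Rate from 0 to 10 how relevant the context is to the question.\n"
    "Question: {question}\nContext: {context}\nScore:"
)

def extract_number(text: str) -> float:
    """First number in the judge's reply, or 0.0 if none."""
    match = re.search(r"\d+(?:\.\d+)?", text)
    return float(match.group()) if match else 0.0

def score_with_llm(llm, template: str, **fields) -> float:
    """Fill a scoring template, query the judge LLM, and parse the score.

    `llm` is any object with a `complete(prompt)` method, such as the
    HuggingFaceInferenceAPI instance configured earlier.
    """
    prompt = template.format(**fields)
    response = llm.complete(prompt)
    return extract_number(str(response))
```

Because `score_with_llm` only assumes a `complete()` method, the same function scores all three metrics by swapping in the appropriate template and fields.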
Evaluation Execution
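The driver ties everything together: run a question through the query engine, collect the retrieved context from the response's source nodes, and score the three metrics. A sketch, assuming the `query_engine` and judge LLM from the earlier steps; the inline prompts are simplified stand-ins for the BeyondLLM templates, and `evaluate_query` is an illustrative name, not library API:

```python
import re

def evaluate_query(query_engine, judge_llm, question: str) -> dict:
    """Run one question through the pipeline and score all three metrics.

    `query_engine` is the LlamaIndex query engine built earlier; `judge_llm`
    is any object with a `complete(prompt)` method.
    """
    def extract_number(text):
        m = re.search(r"\d+(?:\.\d+)?", text)
        return float(m.group()) if m else 0.0

    def score(prompt):
        return extract_number(str(judge_llm.complete(prompt)))

    response = query_engine.query(question)
    answer = str(response)
    # Concatenate the retrieved chunks that grounded the answer.
    context = " ".join(node.get_content() for node in response.source_nodes)

    return {
        "context_relevancy": score(
            "Rate 0-10 the relevance of this context to the question.\n"
            f"Question: {question}\nContext: {context}\nScore:"),
        "groundedness": score(
            "Rate 0-10 how grounded the answer is in the context.\n"
            f"Context: {context}\nAnswer: {answer}\nScore:"),
        "answer_relevancy": score(
            "Rate 0-10 the relevance of the answer to the question.\n"
            f"Question: {question}\nAnswer: {answer}\nScore:"),
    }
```

Calling `evaluate_query(query_engine, llm, "your question")` returns a dict of the three scores, which can be averaged over a question set with the utilities above.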
Sample Output
Last updated