In my previous post, I walked you through setting up a rather simple RAG pipeline in Python, using OpenAI's API, LangChain, and your local files. In that post, I cover the very basics of creating embeddings from your local files with LangChain, storing them in a vector database with FAISS, making API calls to OpenAI's API, and ultimately generating responses relevant to your files. 🌟
However, in that simple example, I only demonstrate how to use a tiny .txt file. In this post, I further elaborate on how you can utilize larger files with your RAG pipeline by adding an extra step to the process: chunking.
What about chunking?
Chunking refers to the process of parsing a text into smaller pieces of text, called chunks, which are then transformed into embeddings. This is crucial because it allows us to effectively process, and create embeddings for, larger files. All embedding models come with various limitations on the size of the text that is passed to them; I'll get into more detail about these limitations in a moment. These limitations allow for better performance and low-latency responses. If the text we provide doesn't meet these size limitations, it will get truncated or rejected.
If we wanted to build a RAG pipeline reading from, say, Leo Tolstoy's War and Peace (a rather large book), we wouldn't be able to directly load it and transform it into a single embedding. Instead, we need to do the chunking first: create smaller chunks of text, and create an embedding for each one. Because each chunk stays under the size limits of whatever embedding model we use, this lets us effectively transform any file into embeddings. So, a somewhat more realistic picture of a RAG pipeline would look as follows:

There are several parameters for further customizing the chunking process and fitting it to our specific needs. A key parameter is the chunk size, which lets us specify how large each chunk will be (in characters or in tokens). The trick here is that the chunks we create must be small enough to be processed within the size limitations of the embedding model, but at the same time, they must also be large enough to contain meaningful information.
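To make chunk size concrete, here is a toy sketch in plain Python (no libraries, purely illustrative; `chunk_text` is a hypothetical helper of mine) that cuts a text into fixed-size character chunks:

```python
def chunk_text(text: str, chunk_size: int) -> list[str]:
    """Cut text into consecutive chunks of at most chunk_size characters."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

sentence = "But is it not all the same now? thought he."
chunks = chunk_text(sentence, chunk_size=16)
# every chunk respects the size limit; joining the chunks restores the text
```

Real splitters avoid cutting mid-word or mid-sentence, but the constraint is the same: no chunk may exceed the configured size.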
For instance, let's assume we want to process the following sentence from War and Peace, where Prince Andrew contemplates the battle:

Let's also assume we created the following (rather small) chunks:

Then, if we were to ask something like "What does Prince Andrew mean by 'all the same now'?", we would not get a good answer, because the chunk "But is it not all the same now?" thought he. doesn't contain any context and is vague. Instead, the meaning is scattered across several chunks. So even though that chunk is similar to the question we ask and may be retrieved, it doesn't carry enough meaning to produce a relevant response. Therefore, selecting the appropriate chunk size for the chunking process, in line with the type of documents we use for the RAG, can largely influence the quality of the responses we'll be getting. In general, the content of a chunk should make sense to a human reading it without any other information, in order to also make sense to the model. Ultimately, there is a trade-off for the chunk size: chunks need to be small enough to meet the embedding model's size limitations, but large enough to preserve meaning.
• • •
Another important parameter is the chunk overlap, that is, how much overlap we want consecutive chunks to have with one another. For instance, in the War and Peace example, we would get something like the following chunks if we chose a chunk overlap of 5 characters.

This is also a crucial decision we have to make, because:
- Larger overlap means more calls and tokens spent on embedding creation, which means more expensive and slower
- Smaller overlap means a higher chance of losing relevant information at the chunk boundaries
Choosing the right chunk overlap largely depends on the type of text we want to process. For example, a recipe book, where the language is simple and straightforward, probably won't require a sophisticated chunking approach. On the flip side, a classic literature book like War and Peace, where the language is very complex and meaning is interconnected across different paragraphs and sections, will probably require a more thoughtful approach to chunking in order for the RAG to produce meaningful results.
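As a rough illustration of how overlap works, here is a minimal sliding-window sketch (plain Python, a hypothetical helper of mine; real splitters like LangChain's also try to respect word boundaries), where each new chunk starts `chunk_size - overlap` characters after the previous one:

```python
def chunk_with_overlap(text: str, chunk_size: int, overlap: int) -> list[str]:
    """Sliding-window chunking: consecutive chunks share `overlap` characters."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = chunk_with_overlap(
    "But is it not all the same now? thought he.",
    chunk_size=20,
    overlap=5,
)
# the last 5 characters of each full chunk reappear
# at the start of the next chunk
```

The duplicated characters at each boundary are exactly what reduces the risk of splitting a thought in half.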
• • •
But what if all we need is a simpler RAG that looks up a couple of documents, each fitting within the size limitations of whatever embedding model we use in just one chunk? Do we still need the chunking step, or can we just directly make one single embedding for the entire text? The short answer is that it's always better to perform the chunking step, even for a knowledge base that does fit within the size limits. That's because, as it turns out, when dealing with large documents, we face the problem of getting lost in the middle: missing relevant information that's buried inside large documents and their respective large embeddings.
What are these mysterious 'size limitations'?
In general, a request to an embedding model can include several chunks of text. There are a few different kinds of limitations we have to consider with respect to the size of the text we need to create embeddings for and its processing. Each of these types of limits takes different values depending on the embedding model we use. More specifically, these are:
- Chunk Size, also called maximum tokens per input, or context window. This is the maximum size in tokens for each chunk. For instance, for OpenAI's text-embedding-3-small embedding model, the chunk size limit is 8,191 tokens. If we provide a chunk that's larger than the chunk size limit, in general, it will be silently truncated‼️ (an embedding is going to be created, but only for the first part that fits within the chunk size limit), without producing any error.
- Number of Chunks per Request, also called number of inputs. There's also a limit on the number of chunks that can be included in each request. For instance, all of OpenAI's embedding models have a limit of 2,048 inputs; that is, a maximum of 2,048 chunks per request.
- Total Tokens per Request: There's also a limitation on the total number of tokens across all chunks in a request. For all of OpenAI's embedding models, the total maximum number of tokens across all chunks in a single request is 300,000 tokens.
So, what happens if our documents add up to more than 300,000 tokens? As you may have imagined, the answer is that we make multiple consecutive or parallel requests of 300,000 tokens or fewer. Many Python libraries do this automatically behind the scenes. For example, LangChain's OpenAIEmbeddings, which I use in my previous post, automatically batches the documents we provide into batches under 300,000 tokens, provided that the documents are already split into chunks.
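The batching logic is easy to picture. Here is a simplified sketch of what such a library might do internally (my own illustrative code, not LangChain's actual implementation; the per-chunk token counts are assumed to be precomputed, e.g. with a tokenizer):

```python
def batch_chunks(chunks, token_counts,
                 max_tokens_per_request=300_000,
                 max_chunks_per_request=2_048):
    """Greedily group chunks into batches that respect both request limits."""
    batches, current, current_tokens = [], [], 0
    for chunk, n_tokens in zip(chunks, token_counts):
        # start a new batch if adding this chunk would break either limit
        if current and (current_tokens + n_tokens > max_tokens_per_request
                        or len(current) == max_chunks_per_request):
            batches.append(current)
            current, current_tokens = [], 0
        current.append(chunk)
        current_tokens += n_tokens
    if current:
        batches.append(current)
    return batches

chunks = [f"chunk {i}" for i in range(5)]
token_counts = [120_000] * 5  # pretend each chunk is 120k tokens
batches = batch_chunks(chunks, token_counts)
# two 120k chunks fit in one request; a third would exceed 300k
```

Each batch can then be sent as a separate embeddings request, consecutively or in parallel.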
Reading larger files into the RAG pipeline
Let's take a look at how all of this plays out in a simple Python example, using the War and Peace text as a document to retrieve in the RAG. The data I'm using, Leo Tolstoy's War and Peace, is licensed as Public Domain and can be found on Project Gutenberg.
So, first of all, let's try to read from the War and Peace text without any setup for chunking. For this tutorial, you'll need to have the langchain, openai, and faiss Python libraries installed. We can easily install the required packages as follows:
pip install openai langchain langchain-community langchain-openai faiss-cpu
After making sure the required libraries are installed, our code for a very simple RAG looks like this, and works fine for a small and simple .txt file in the text_folder.
import os

from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import FAISS

api_key = "your key"  # your OpenAI API key

# initialize LLM
llm = ChatOpenAI(openai_api_key=api_key, model="gpt-4o-mini", temperature=0.3)

# loading documents to be used for RAG
text_folder = "RAG files"
documents = []
for filename in os.listdir(text_folder):
    if filename.lower().endswith(".txt"):
        file_path = os.path.join(text_folder, filename)
        loader = TextLoader(file_path)
        documents.extend(loader.load())

# generate embeddings
embeddings = OpenAIEmbeddings(openai_api_key=api_key)

# create vector database with FAISS
vector_store = FAISS.from_documents(documents, embeddings)
retriever = vector_store.as_retriever()

def main():
    print("Welcome to the RAG Assistant. Type 'exit' to quit.\n")
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() == "exit":
            print("Exiting…")
            break

        # get relevant documents
        relevant_docs = retriever.invoke(user_input)
        retrieved_context = "\n\n".join([doc.page_content for doc in relevant_docs])

        # system prompt
        system_prompt = (
            "You are a helpful assistant. "
            "Use ONLY the following knowledge base context to answer the user. "
            "If the answer is not in the context, say you don't know.\n\n"
            f"Context:\n{retrieved_context}"
        )

        # messages for LLM
        messages = [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_input},
        ]

        # generate response
        response = llm.invoke(messages)
        assistant_message = response.content.strip()
        print(f"\nAssistant: {assistant_message}\n")

if __name__ == "__main__":
    main()
However, if I add the War and Peace .txt file to the same folder and try to directly create an embedding for it, I get the following error:

ughh 🙃
So what happens here? LangChain's OpenAIEmbeddings cannot split the text into separate requests of fewer than 300,000 tokens each, because we didn't provide it in chunks. It doesn't split the single chunk, which is 777,181 tokens, resulting in a request that exceeds the 300,000-token maximum per request.
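One way to anticipate this error before making the request is to estimate the token count up front. The sketch below uses the common rule of thumb of roughly 4 characters per token for English text (an approximation of mine; an exact count would require the model's tokenizer, e.g. tiktoken):

```python
MAX_TOKENS_PER_REQUEST = 300_000  # OpenAI's total-tokens-per-request limit

def rough_token_estimate(text: str) -> int:
    """Crude heuristic: English text averages about 4 characters per token."""
    return len(text) // 4

def fits_in_one_request(text: str) -> bool:
    return rough_token_estimate(text) <= MAX_TOKENS_PER_REQUEST

# War and Peace is on the order of 3 million characters,
# so a single-request embedding call is bound to fail
```

A quick check like this makes it obvious that the full text must be chunked and batched before embedding.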
• • •
Now, let's set up the chunking process to create multiple embeddings from this large file. To do this, I'll be using the text_splitter module provided by LangChain, and more specifically, the RecursiveCharacterTextSplitter. In RecursiveCharacterTextSplitter, the chunk size and chunk overlap parameters are specified as a number of characters, but other splitters like TokenTextSplitter or OpenAITokenSplitter also allow you to set these parameters as a number of tokens.
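To get an intuition for what "recursive" means here, the sketch below is a heavily simplified toy version of the idea (my own code, not LangChain's: the real splitter also merges small pieces back together and applies the overlap). It tries coarse separators like paragraph breaks first, and only falls back to finer ones when a piece is still too large:

```python
def recursive_split(text, chunk_size, separators=("\n\n", "\n", " ", "")):
    """Simplified sketch of recursive splitting: try the coarsest separator
    first; any piece still over chunk_size is re-split with finer separators."""
    if len(text) <= chunk_size:
        return [text]
    sep, *rest = separators
    if sep == "":
        # last resort: hard cut by characters
        return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    pieces = []
    for part in text.split(sep):
        if len(part) <= chunk_size:
            if part:
                pieces.append(part)
        else:
            pieces.extend(recursive_split(part, chunk_size, tuple(rest)))
    return pieces

text = "para one.\n\npara two is a bit longer here.\n\nshort"
chunks = recursive_split(text, chunk_size=20)
```

The point is that chunks tend to align with natural boundaries like paragraphs and sentences, and hard character cuts happen only as a last resort.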
So, we can set up an instance of the text splitter as below:
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
… and then use it to split our initial documents into chunks…
from langchain_core.documents import Document

split_docs = []
for doc in documents:
    chunks = splitter.split_text(doc.page_content)
    for chunk in chunks:
        split_docs.append(Document(page_content=chunk))
… and then use these chunks to create the embeddings…
documents = split_docs

# create embeddings + FAISS index
embeddings = OpenAIEmbeddings(openai_api_key=api_key)
vector_store = FAISS.from_documents(documents, embeddings)
retriever = vector_store.as_retriever()
.....
… and voilà 🌟
Now our code can effectively parse the provided document, even when it's quite a bit larger, and provide relevant responses.

On my mind
Choosing a chunking approach that fits the size and complexity of the documents we want to feed into our RAG pipeline is crucial for the quality of the responses we'll be receiving. For sure, there are several other parameters and different chunking methodologies one needs to keep in mind. Nonetheless, understanding and fine-tuning chunk size and overlap is the foundation for building RAG pipelines that produce meaningful results.
• • •
Loved this post? Got an interesting data or AI project?
Let's be friends! Join me on
📰Substack 📝Medium 💼LinkedIn ☕Buy me a coffee!
• • •