Building a ChatGPT-like Q&A on PDFs
In this guide, we will create a ChatGPT-like question & answer widget for your PDFs.
Getting started
Let's start by creating our chains backend! Create a repository with a `chains` folder inside it. Our CLI will automatically deploy all chains we create in this folder.
Speaking of the CLI, let's install it!
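Assuming the CLI ships with the `@relevanceai/chain` package on npm (check the package docs for the exact command):

```bash
# Install the Relevance AI chain SDK + CLI globally via npm
npm install -g @relevanceai/chain
```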
Next, let's authenticate:
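The exact command name may differ, but it will look something like:

```bash
# Authenticate the CLI with your Relevance AI account
relevance login
```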
After following the instructions, you'll be set up and ready to go! Make sure to also add your OpenAI key.
Want to skip to the end? We have a GitHub repo with a ChatGPT-style frontend and a `chains` folder ready-made! Click here.
Loading your PDFs
The first thing we need to do is index our PDFs in a database. Ideally, we want this to be a vector database, where we can store embeddings of the PDF text.
What is a vector database?
Vector databases allow you to store and query "embeddings" alongside your metadata. You can think of embeddings as numerical representations of your text, capturing its meaning across many semantic dimensions.
By querying embeddings against embeddings (vector similarity search), you're able to find content based on meaning. This is in contrast to traditional search techniques such as keyword matching.
For example, in a vector database, the following two sentences would have similar embeddings:
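- "The weather outside is very hot today."
- "It's scorching out there."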
This is because the meaning of the two sentences is similar, even though the words are different. Traditional keyword matching would not find these two sentences similar.
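To make "similar embeddings" concrete, here is a minimal sketch of how vector similarity is typically scored, using cosine similarity over two embedding vectors (the vectors below are made up for illustration):

```js
// Cosine similarity: ~1 means near-identical direction (similar meaning),
// ~0 means unrelated, negative means opposing.
function cosineSimilarity(a, b) {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy 3-dimensional "embeddings" -- real ones have hundreds of dimensions.
console.log(cosineSimilarity([0.9, 0.1, 0.3], [0.85, 0.15, 0.35])); // ~0.99: similar
console.log(cosineSimilarity([0.9, 0.1, 0.3], [-0.2, 0.9, -0.4]));  // negative: dissimilar
```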
How do they help with LLMs?
A very common technique is to use vector databases to provide additional context to LLMs. We can search the vector database with the user's input and retrieve relevant information. This can then be added into the prompt.
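In code, that "retrieve, then prompt" pattern looks roughly like the sketch below. The `searchVectorDatabase` helper and the `llm` client are hypothetical stand-ins; in our chain, the retrieval will be a Relevance AI dataset search step.

```js
// Hypothetical retrieval helper -- stubbed with canned chunks for illustration.
async function searchVectorDatabase(query, topK = 3) {
  return [
    'Water boils at 100 degrees Celsius at sea level.',
    'The boiling point drops as altitude increases.',
  ].slice(0, topK);
}

async function answerWithContext(llm, question) {
  const chunks = await searchVectorDatabase(question);

  // Inject the retrieved chunks into the prompt as context.
  const prompt = `Answer the question using only the context below.

Context:
${chunks.join('\n---\n')}

Question: ${question}`;

  return llm.complete(prompt); // hypothetical LLM client
}
```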
Loading PDFs into our built-in vector database
Good news! Relevance has a built-in vector database. We call these “datasets”. We want to:
- Scrape PDFs from URLs
- Split up the PDF text into chunks
- Create embeddings with OpenAI for each chunk
- Upload the chunks into a Relevance AI dataset
We’ll be creating more tooling and content around this process, but for now you can run this Google Colab demo. Make sure to fill in your authentication details at the top of the notebook.
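If you'd like to see the shape of that pipeline in JavaScript, here is a rough sketch of the chunking and embedding stages using the `openai` npm package. The chunk size is arbitrary, and the final upload into a Relevance AI dataset is left to the notebook, which is the supported path.

```js
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Naive fixed-size chunking; real pipelines often split on sentences or
// paragraphs, with some overlap between chunks.
function chunkText(text, chunkSize = 1000) {
  const chunks = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    chunks.push(text.slice(i, i + chunkSize));
  }
  return chunks;
}

async function embedPdfText(pdfText) {
  const chunks = chunkText(pdfText);

  // One embedding per chunk, in a single batched API call.
  const response = await openai.embeddings.create({
    model: 'text-embedding-ada-002',
    input: chunks,
  });

  // Pair each chunk with its embedding, ready to upload into a
  // Relevance AI dataset (see the Colab notebook for that part).
  return chunks.map((text, i) => ({
    text,
    vector: response.data[i].embedding,
  }));
}
```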
Planning the chain
Now let’s build the chain. We want it to:
- Receive a question
- Use the question to search our vector database for relevant text chunks from our PDFs
- Feed those chunks into the LLM as context, and have the LLM answer our question
Additionally, we want the chain to be able to consider the chat history when answering questions. This will allow us to have a conversation, like with ChatGPT.
To do that, we need to alter our chain to also accept a history param. The trick is that we want to use that history to re-word the user’s question.
For example, if their question is “What’s that in Fahrenheit?”, standalone, this will make no sense to the LLM. However, if we have just asked the LLM “What is the boiling point of water in Celsius?”, then by passing in the history we can re-word the new question to “What is the boiling point of water in Fahrenheit?”. An LLM is able to answer that successfully!
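The re-wording itself is just another LLM call. A sketch of the kind of prompt involved (the exact wording here is ours for illustration, not a fixed part of the product):

```js
// Build a "condense question" prompt from the chat history + new question.
function buildRewordPrompt(history, question) {
  const transcript = history
    .map(({ role, message }) => `${role}: ${message}`)
    .join('\n');

  return `Given the conversation below, reword the follow-up question so it
makes sense on its own, keeping all relevant details.

Conversation:
${transcript}

Follow-up question: ${question}
Standalone question:`;
}
```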
So with that in mind, our chain should look like:
- Receive a question and chat history
- If chat history exists, use an LLM to reword the question based on the history
- Use the question to search our vector database for relevant text chunks from our PDFs
- Feed those chunks into the LLM as context, and have the LLM answer our question
Defining our chain input
Let’s start by creating a new file in the `chains` folder called `pdf-chat.js`. This will be our chain!
We start by importing the `defineChain` method from the `@relevanceai/chain` package:
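```js
import { defineChain } from '@relevanceai/chain';
```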
Next, we want to define the input parameters for this chain. As discussed, those should be `question` and `history`. We then give the params types, based on JSONSchema. Our history will be an array of objects, each with the message and who sent it (`role`).
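A sketch of how that looks in `pdf-chat.js`. The schema follows standard JSONSchema; the `title` field is an assumed option, and the property names `message` and `role` match the description above:

```js
import { defineChain } from '@relevanceai/chain';

export default defineChain({
  title: 'PDF chat', // assumed field for the chain's display name
  params: {
    question: { type: 'string' },
    history: {
      type: 'array',
      items: {
        type: 'object',
        properties: {
          role: { type: 'string' },    // who sent the message, e.g. 'user' or 'ai'
          message: { type: 'string' }, // the message text
        },
      },
    },
  },
  setup({ params, step }) {
    // steps go here -- see the next section
  },
});
```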
Defining our chain steps
The meat and potatoes of any chain are the "steps". These are the transformations that execute when a chain runs. They run in our hosted Relevance AI runtime.
We have a whole library of pre-built steps, which you can see here! We also support more advanced use cases where you can self-host any code as a chain step. Speak to our team if this is you.
To build this chain, we’ll need to use the following steps as specified earlier:
- Modify question based on chat history
- Search vector database for relevant text chunks
- Feed chunks into LLM and get answer
We do this in the `setup` function in `defineChain`, which receives `params` and a `step` function as arguments. The `step` function is used to define the steps of the chain.
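Putting it together, here is a sketch of what the chain body might look like. `prompt_completion` follows the pattern of the pre-built steps; the vector search step ID (`search` here), the `dataset_id` name, and each step's exact parameters are assumptions, so check the step library for the real transformation IDs:

```js
import { defineChain } from '@relevanceai/chain';

export default defineChain({
  params: { /* question + history, as defined above */ },
  setup({ params, step }) {
    const { question, history } = params;

    // 1. Reword the question so it stands alone. In the full chain this is
    //    wrapped in runIf(...) so it only runs when history exists (see below).
    const { answer: standaloneQuestion } = step('prompt_completion', {
      prompt: `Conversation so far:\n${history}\n\nReword this follow-up question so it stands alone: ${question}`,
    });

    // 2. Vector search over our PDF chunks. The step ID 'search' and its
    //    params are assumptions -- substitute the real vector search step.
    const { results } = step('search', {
      dataset_id: 'pdf-chunks', // hypothetical dataset name
      query: standaloneQuestion,
    });

    // 3. Answer the question with the retrieved chunks as context.
    const { answer } = step('prompt_completion', {
      prompt: `Context:\n${results}\n\nUsing only the context above, answer: ${standaloneQuestion}`,
    });

    return { answer };
  },
});
```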
Note: the `setup` function only allows you to declare each step of a chain. No JavaScript you write outside of these steps (other than your returned output) will be executed.
However, you can use the `code` function provided in `setup` to run custom JavaScript. You can also use the `runIf` function to conditionally run steps, as well as `forEach` to iterate over arrays. Learn more.
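For our chain, `runIf` is what gates the rewording step on the history param. A sketch, noting that the exact signature of `runIf` is an assumption -- see the linked docs:

```js
// Inside setup({ params, step, runIf }):
// Only run the rewording step when chat history was provided.
const reworded = runIf(params.history, () => {
  const { answer } = step('prompt_completion', {
    prompt: `Reword "${params.question}" using this history: ${params.history}`,
  });
  return { answer };
});
```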
Demo
Let’s show off how to run this bad boy client-side! I’ve created a simple Nuxt.js application containing the above chain and a chat frontend. You’re free to clone this template and use it as a starting point for your own projects.
To get it working with your own chain, make sure to:
The relevant code exists in the `useChain` composable. We run the chain by its ID, which is the file name of the chain.
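As a rough sketch of the idea -- the endpoint URL and response shape below are hypothetical, and the actual client code lives in the template's `useChain` composable:

```js
// Hypothetical endpoint for the deployed chain; the chain ID 'pdf-chat'
// comes from the file name pdf-chat.js.
const CHAIN_ENDPOINT = 'https://example.com/chains/pdf-chat/run'; // placeholder URL

async function askPdf(question, history) {
  const res = await fetch(CHAIN_ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ params: { question, history } }),
  });
  const { output } = await res.json(); // assumed response shape
  return output.answer;
}
```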