Building an agent that dashboards like an analyst
In this guide we will create an agent that analyses your SQL schema and creates a dashboard full of charts.
Getting started
Let's start by creating our chains backend! Create a repository with a chains folder inside it. Our CLI will automatically deploy all chains we create in this folder.
Speaking of CLI, let's install it!
Next, let's authenticate:
After following the instructions, you'll be set up and ready to go! Make sure to also add your OpenAI API key.
Don’t want to read the details? There’s a demo repo and video at the bottom of this guide! You can check out the code and run it yourself.
Check out a video of the final product!
What’s the plan?
We want to create an agent that will look at a table of data and think “what interesting insight can I find here?”. It should then write SQL queries to find those insights and build a dashboard full of charts.
To do this, we want to create three key chains that interact with each other, plus a step that runs the generated SQL against our database:
- We want a chain that takes in the SQL table metadata and starts thinking of interesting questions to ask the data.
- The next chain should take those questions and figure out what SQL queries will help us find the data to answer them.
- We then want to run these SQL queries on our database.
- The final chain should take the retrieved data, as well as the question we are trying to answer, and create a chart for our dashboard.
Let’s get started!
Asking questions of the data
Our first chain needs to:
- Receive metadata about the SQL table we want to analyse.
- Think of some interesting questions to ask, based on the metadata.
To do this, we first have to define the params our chain should receive: in this case, the SQL table's name and its columns. We can tell the chain how many questions to ask via the amountToGenerate param.
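To make this concrete, here is a rough sketch of those params as a TypeScript type. The exact way you declare params depends on your chains setup, and the names here are purely illustrative:

```ts
// Hypothetical param shape for the question-generation chain.
// Names and types are illustrative - declare them however your chains setup expects.
interface QuestionChainParams {
  tableName: string;                              // e.g. "orders"
  tableColumns: { name: string; type: string }[]; // column metadata from your SQL schema
  amountToGenerate: number;                       // how many questions the LLM should come up with
}
```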
Next, let’s start declaring the chain steps.
We’ll start by converting our array of table columns into a string so it can be used in an LLM prompt. We can do this with our code utility, which allows us to run JavaScript as part of our chain.
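Roughly, the code step does something like this (assuming the columns arrive as a tableColumns param, as above):

```ts
// Code step: flatten the column metadata into a single string for the LLM prompt.
// The param name `tableColumns` is illustrative - use whatever your chain passes in.
function columnsToString(params: { tableColumns: { name: string; type: string }[] }): string {
  return params.tableColumns
    .map((column) => `${column.name} (${column.type})`)
    .join(", ");
}
```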
Next, let’s create an LLM prompt step that asks some questions based on the table name and the columns.
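The prompt itself can be fairly simple. Here's an illustrative version, where the {{...}} placeholders stand in for the chain's params and previous step outputs:

```ts
// Illustrative prompt template - placeholder names are assumptions, not fixed syntax.
const questionPrompt = `
You are a data analyst. A SQL table called "{{tableName}}" has the following columns:
{{columnsString}}

Suggest {{amountToGenerate}} interesting questions that could be answered with this data.
Return them as a JSON array of strings, e.g. ["Question 1", "Question 2"], and nothing else.
`;
```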
Note that we ask the LLM to return a string that looks like an array. To turn it into an actual array, we can use a JavaScript code step again:
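For example, a parsing step along these lines (the llmOutput param name is just illustrative):

```ts
// Code step: turn the LLM's array-looking string into a real array of questions.
function parseQuestions(params: { llmOutput: string }): string[] {
  try {
    return JSON.parse(params.llmOutput);
  } catch {
    // Fallback in case the model returns slightly malformed JSON:
    // strip brackets and quotes, then split on commas.
    return params.llmOutput
      .replace(/[\[\]"]/g, "")
      .split(",")
      .map((question) => question.trim())
      .filter(Boolean);
  }
}
```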
Finally, our chain should return the arrayQueries:
Now in our frontend code, we’re able to run this chain:
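How you trigger the chain depends on your setup; as a sketch, assuming the deployed chain is exposed over HTTP at a hypothetical endpoint, it could look like this:

```ts
// Hypothetical frontend call - swap the URL for your deployed chain's endpoint or SDK call.
async function generateQuestions() {
  const response = await fetch("https://example.com/chains/generate-questions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      tableName: "orders",
      tableColumns: [
        { name: "id", type: "int" },
        { name: "total", type: "decimal" },
        { name: "created_at", type: "timestamp" },
      ],
      amountToGenerate: 5,
    }),
  });
  const { arrayQueries } = await response.json();
  return arrayQueries as string[];
}
```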
Figuring out how to answer the questions with SQL queries
Now we have a list of natural language questions, based on our SQL table - let’s figure out how to retrieve the data we need to answer them!
We’ll start by defining the params our chain should receive. This chain will convert one question at a time to SQL, so it should receive the natural language query as well as the table metadata.
You could make this chain receive and handle all the questions if you’d prefer! I opted for one at a time, purely for frontend effect (the ability to mock a streaming-esque interface).
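As before, here's an illustrative sketch of the params; the exact shapes are assumptions that simply mirror the first chain:

```ts
// Hypothetical params for the SQL-generation chain: one question at a time,
// plus the same table metadata we used in the first chain.
interface SqlChainParams {
  question: string;                               // a natural language question from chain one
  tableName: string;
  tableColumns: { name: string; type: string }[];
}
```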
We’ll start our setup with the same reduction as before, to get our table metadata ready for LLMs.
Next, we’ll create an LLM prompt to identify which part of the natural language query is most relevant to generating an SQL query.
This way, if the question contains a lot of extra information, we can extract the part that is pertinent to querying the database.
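Something along these lines works as the prompt (the placeholders are illustrative):

```ts
// Illustrative prompt for isolating the part of the question a database can answer.
const extractPrompt = `
Here is a question about the SQL table "{{tableName}}":
"{{question}}"

Return only the part of this question that is relevant to querying a database,
with any greetings, context or commentary removed.
`;
```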
Next, let’s take this relevant section and ask the LLM to generate a valid SQL query.
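Again as a sketch, the prompt could look like this:

```ts
// Illustrative prompt for generating the SQL query itself.
const sqlPrompt = `
You are an expert SQL analyst. The table "{{tableName}}" has these columns:
{{columnsString}}

Write a single valid SQL query that answers:
"{{relevantQuestion}}"

Return only the SQL query, with no explanation or formatting.
`;
```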
Perfect!
Generating charts
Once we have our SQL queries, we can use them to retrieve data from our database. In this guide’s accompanying repo, we use Prisma to connect to a Planetscale database.
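The accompanying repo has the full setup, but the core of running an LLM-generated query with Prisma is roughly this (the function name is just for illustration):

```ts
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Run the LLM-generated SQL against the database.
// $queryRawUnsafe executes an arbitrary SQL string, so only run queries
// against data you're happy for the agent to read.
async function runGeneratedQuery(sql: string) {
  const rows = await prisma.$queryRawUnsafe<Record<string, unknown>[]>(sql);
  return rows;
}
```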
We take the results from the database and feed them into our final chain, which generates a chart!
We’ll ask it to generate a configuration for Chart.js, so we can display the charts in the frontend.
For each set of results, we’ll pass in the results as well as the original question. This way the chart can be designed to answer the question.
In the first chain step, we’ll ask the LLM to select an appropriate type of chart for this data.
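An illustrative version of that prompt:

```ts
// Illustrative prompt for picking a chart type.
const chartTypePrompt = `
Here is some data retrieved to answer the question "{{question}}":
{{results}}

Which Chart.js chart type would best visualise this data?
Answer with exactly one of: bar, line, pie, doughnut, scatter.
`;
```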
Then, we use a code block to prepare an example configuration for that chart type.
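For example, something like this, keyed on the chart type the LLM chose (the example configs are just minimal Chart.js shapes):

```ts
// Code step: pick a minimal example Chart.js config to show the LLM
// what shape of configuration we expect for the chosen chart type.
function exampleConfigFor(chartType: string) {
  const examples: Record<string, object> = {
    bar: {
      type: "bar",
      data: { labels: ["Jan", "Feb"], datasets: [{ label: "Revenue", data: [1200, 1900] }] },
    },
    line: {
      type: "line",
      data: { labels: ["Jan", "Feb"], datasets: [{ label: "Signups", data: [30, 45] }] },
    },
    pie: {
      type: "pie",
      data: { labels: ["Free", "Paid"], datasets: [{ data: [300, 120] }] },
    },
  };
  return examples[chartType] ?? examples.bar;
}
```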
Finally, we ask the LLM to create the chart configuration! Note that we also ask it to return a custom key called "original_question", so we can display the question in the frontend.
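Putting it together, the final prompt might look roughly like this:

```ts
// Illustrative final prompt: turn the data into a Chart.js config,
// plus the custom "original_question" key for the frontend.
const chartConfigPrompt = `
Here is data answering the question "{{question}}":
{{results}}

Create a valid Chart.js configuration of type "{{chartType}}" that visualises this data.
Here is an example of the expected shape:
{{exampleConfig}}

Return only JSON. Include one extra top-level key, "original_question",
set to the question above so we can display it in the frontend.
`;
```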
Piecing it all together!
I’ve created a repo that pieces these chains together to create a really cool business analyst agent!
You’ll notice I even added the ability for the user to ask questions at the end! There are some edge cases you can find that break the agent, but it’s a great start! With more prompt engineering, particularly if you know the sort of data that will be processed, you can make this agent even more robust.
Check it out on GitHub! It uses Planetscale for the database.
Check out this video of it in action: