Modifying the LLM Prompt
In our initial chain, we’ve already received the input for the message we want the AI chain to examine.
To obtain an explanation for the sentiment score, we could either introduce a separate LLM prompt or simply modify our original
prompt and have it return the result as JSON. We’ll proceed with the latter.
To accomplish this, we adjust our prompt:
You are an expert sentiment classifier. Only ever respond with JSON in the format "{ sentiment, reason }". Do not say anything else.
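As a sketch of how this adjusted prompt slots into a chat-style request, the system message carries the classifier instruction and the user message carries the text to examine. The model name and helper function here are illustrative assumptions, not part of the original chain:

```python
# Sketch: wiring the adjusted system prompt into a chat-style request payload.
# The model name ("gpt-4o-mini") and build_request helper are assumptions.
SYSTEM_PROMPT = (
    'You are an expert sentiment classifier. Only ever respond with JSON '
    'in the format "{ sentiment, reason }". Do not say anything else.'
)

def build_request(message: str) -> dict:
    """Build a chat-completion-style payload for the sentiment chain."""
    return {
        "model": "gpt-4o-mini",  # assumed model name
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": message},
        ],
    }

payload = build_request("The delivery was late and the box was damaged.")
print(payload["messages"][0]["role"])  # system
```

Keeping the instruction in the system message means every user message in the dataset is examined under the same JSON-only contract.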
Configuring Your LLM Output
After adjusting our prompt, we can copy one of the messages from our dataset that we want the chain to examine and hit run.
The output is now JSON-like, including both the sentiment and a reason.
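Because the output is only JSON-like, it pays to parse it defensively before using the fields downstream. This is a minimal sketch, assuming the model may wrap its answer in Markdown code fences or add stray text around the braces:

```python
import json
import re

def parse_sentiment(raw: str) -> dict:
    """Parse the model's JSON-like answer into {sentiment, reason}."""
    # Strip Markdown code fences the model may have added.
    cleaned = re.sub(r"```(?:json)?", "", raw).strip()
    # Fall back to the first {...} block if extra prose slipped in.
    match = re.search(r"\{.*\}", cleaned, re.DOTALL)
    if match:
        cleaned = match.group(0)
    data = json.loads(cleaned)
    if not {"sentiment", "reason"} <= data.keys():
        raise ValueError("response is missing expected keys")
    return data

answer = '{"sentiment": "negative", "reason": "The customer reports a damaged product."}'
print(parse_sentiment(answer)["sentiment"])  # negative
```

If parsing fails despite these fallbacks, the safest move is to retry the call rather than guess at the fields.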
The {{llm.answer}} variable now carries the response to our updated instruction: a sentiment classification along with a rationale for it.
We can then populate sentiment and reason with the output from our chain.
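To illustrate how a template variable like {{llm.answer}} might be filled with the chain's output, here is a naive substitution sketch; the render_template helper is a hypothetical stand-in for whatever templating the chain tool actually performs:

```python
# Sketch: naive {{var}} substitution, mimicking the chain's template
# variables. render_template is a hypothetical helper, not the tool's API.
def render_template(template: str, variables: dict) -> str:
    out = template
    for name, value in variables.items():
        out = out.replace("{{" + name + "}}", str(value))
    return out

filled = render_template(
    "Sentiment: {{sentiment}} | Reason: {{reason}}",
    {"sentiment": "negative", "reason": "The customer reports a damaged product."},
)
print(filled)
```

Downstream steps of the chain can then consume sentiment and reason as ordinary string fields.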
