409. Multi-Stage-Reasoning Using LLMs

▮ LLM Tasks VS LLM-Based Workflows

LLMs are great at single tasks, but real-world applications rarely involve just one task. Typical applications are more than a simple prompt-response system.

Task VS Workflows

The goal of multi-stage reasoning is to build a workflow that is modular, in the sense that you could take one LLM out and put another one in without rebuilding the rest of the pipeline.
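
To make that modularity concrete, here is a minimal sketch (not tied to any particular library): a stage only assumes a "prompt in, text out" callable, so the model behind it can be swapped freely. The call_llm parameter and the prompt wording are illustrative assumptions, not part of any specific API.

from typing import Callable

# A stage only needs something that maps a prompt string to a completion string,
# so the LLM behind `call_llm` can be replaced without touching the workflow.
def summarize_stage(article: str, call_llm: Callable[[str], str]) -> str:
    return call_llm(f"Summarize this article briefly:\n{article}")

# Swapping models is then just a matter of passing a different callable.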

In this post, I want to share how you can apply multi-stage reasoning to an application.

▮ Multi-LLM Problem Case Study

Let’s say we want to get the sentiment of many articles on a certain topic. If we were to accomplish this task with a single LLM, the workflow would look like the image below.

Single LLMs

However, this approach can quickly exceed the model’s input length, since every article would have to fit into a single prompt. To solve this, we can use a two-stage process like the one in the image below instead.

Multi-LLMs
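
As a rough sketch of the two-stage idea (no LLM calls yet, and the prompt wording is only illustrative): stage one produces one short summary per article, so no single prompt has to hold every article, and stage two judges the overall sentiment over the much shorter summaries.

# Illustrative prompts only; the actual chain in the next section uses LangChain templates.
summary_prompt = "Summarize this article briefly:\n{article}"
sentiment_prompt = "Decide the overall sentiment of these summaries:\n{summaries}"

articles = ["article text 1", "article text 2", "article text 3"]

# Stage 1: one small prompt per article.
stage_1_prompts = [summary_prompt.format(article=a) for a in articles]

# Stage 2: a single prompt over the stage-1 outputs (placeholder strings here).
summaries = ["summary 1", "summary 2", "summary 3"]
stage_2_prompt = sentiment_prompt.format(summaries="\n".join(summaries))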

▮ LLM Chains

To chain multiple LLMs as in the case study above, we can use libraries such as LangChain. All you need to do is the following.

  1. Load models
    from langchain.llms import OpenAI
    
    # Load a model for each stage of the pipeline; the OpenAI wrapper
    # expects your API key in the OPENAI_API_KEY environment variable.
    llm_1 = OpenAI()
    llm_2 = OpenAI()
    
  2. Initialize the chain for the 1st LLM
    from langchain import PromptTemplate
    from langchain.chains import LLMChain
    
    # Tell the first LLM what it is supposed to do
    llm1_template = """
    You are a summarizer for articles, you will respond to the following article with a brief summary of it. 
    Article:" {article}"
    Summary: 
    """
    
    llm1_prompt_template = PromptTemplate(
        input_variables=["article"],
        template=llm1_template,
    )
    
    llm1_chain = LLMChain(
        llm=llm_1,
        prompt=llm1_prompt_template,
        output_key="llm_1_said",
        verbose=False,
    )
    
  3. Initialize the chain for the 2nd LLM
    # Tell the second LLM what it is supposed to do
    llm2_template = """
    You are going to decide the overall sentiment of the following summarized articles.
    Summarized Articles: {llm_1_said}
    Overall Sentiment:
    """
    
    llm2_prompt_template = PromptTemplate(
        input_variables=["llm_1_said"],
        template=llm2_template,
    )
    
    llm2_chain = LLMChain(
        llm=llm_2,
        prompt=llm2_prompt_template,
        output_key="llm_2_said",
        verbose=False,
    )
    
    
  4. Run the chain sequence
    from langchain.chains import SequentialChain
    
    llm_chain = SequentialChain(
        chains=[llm1_chain, llm2_chain],
        input_variables=["article"],
        verbose=True,
    )
    
    # We can now run the chain with our article (a multi-article variant is shown below).
    my_article = ""  # put the article text here
    llm_chain.run({"article": my_article})
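
To mirror the case study, where we have many articles rather than one, a simple option (just a sketch, assuming the chains defined above) is to run the summarizer chain once per article and feed the joined summaries to the sentiment chain:

# `articles` is a hypothetical list of article strings.
articles = ["first article text...", "second article text...", "third article text..."]

summaries = [llm1_chain.run(article=a) for a in articles]
overall_sentiment = llm2_chain.run(llm_1_said="\n".join(summaries))
print(overall_sentiment)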
    

▮ LLM Agents

In addition to chaining LLMs, we can also give an LLM the ability to delegate tasks to specified tools. LLM agents are LLM-based systems that execute a Reason + Act (ReAct) loop: the model reasons about what to do next, chooses a tool, acts with it, observes the result, and repeats until the task is solved.

LLM Agents

To build an LLM agent, we need the following.

  • A task to be solved
  • An LLM as the reasoning/decision-making entity
  • A set of tools that the LLM will select and execute to perform steps to achieve the task

In Python code, it would look like the following.

task_to_be_solved = "Create a detailed dataset (DO NOT try to download one, you MUST create one based on what you find) on the performance of each driver in the Mercedes AMG F1 team in 2020 and do some analysis with at least 3 plots, use a subplot for each graph so they can be shown at the same time, use seaborn to plot the graphs."

from langchain.agents import AgentType, initialize_agent, load_tools

llm_for_decision_making = OpenAI()

# Give the agent a set of tools it can select from; note that some of these
# (e.g. "wikipedia", "serpapi") need extra packages or API keys to be set up.
tools_for_agent = load_tools(["wikipedia", "serpapi", "python_repl", "terminal"], llm=llm_for_decision_making)

# We now create my_agent using "initialize_agent"; the ZERO_SHOT_REACT_DESCRIPTION
# agent type makes the LLM pick tools based solely on their descriptions.
my_agent = initialize_agent(
    tools_for_agent, llm_for_decision_making, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)

my_agent.run(task_to_be_solved)