OpenAI functions
Certain OpenAI models (like gpt-3.5-turbo and gpt-4) have been fine-tuned to detect when a function should be called and respond with the inputs that should be passed to the function.
In an API call, you can describe functions and have the model intelligently choose to output a JSON object containing arguments to call those functions.
The goal of the OpenAI Function APIs is to more reliably return valid and useful function calls than a generic text completion or chat API.
The OpenAI Functions Agent is designed to work with these models.
It must be used with a model that supports OpenAI function calling.
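For intuition, here's a rough sketch of the shapes involved (illustrative only; the get_current_weather function below is hypothetical and not part of the agent example that follows). You describe a function as a JSON schema, and the model may respond with a function_call naming that function and supplying JSON-encoded arguments:
// A hypothetical function description in the JSON-schema format OpenAI expects:
const getCurrentWeatherFunction = {
  name: "get_current_weather",
  description: "Get the current weather in a given location",
  parameters: {
    type: "object",
    properties: {
      location: { type: "string", description: "The city, e.g. New York" },
    },
    required: ["location"],
  },
};
// When the model decides to call it, the assistant message it returns
// looks roughly like this:
// {
//   "role": "assistant",
//   "content": null,
//   "function_call": {
//     "name": "get_current_weather",
//     "arguments": "{\"location\":\"New York\"}"
//   }
// }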
With LCEL
In this example we'll use LCEL to construct a customizable agent that is given two tools: search and calculator.
We'll then construct a prompt template and pass that to our runnable agent.
Lastly we'll use the default OpenAI functions output parser, OpenAIFunctionsAgentOutputParser.
This output parser contains a method parseAIMessage which, when provided with a message, returns either an instance of FunctionsAgentAction if there is another action to be taken by the agent, or AgentFinish if the agent has completed its objective.
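As a minimal sketch of what that means in practice (the AgentExecutor below runs this loop for you, so you normally won't write this yourself), the two cases can be distinguished by checking for the returnValues field that only AgentFinish carries:
// Illustrative only; `aiMessage` is assumed to be an AIMessage returned by the model.
const parsed = new OpenAIFunctionsAgentOutputParser().parseAIMessage(aiMessage);
if ("returnValues" in parsed) {
  // AgentFinish: the agent is done and `returnValues` holds the final output.
  console.log(parsed.returnValues);
} else {
  // FunctionsAgentAction: run `parsed.tool` with `parsed.toolInput`, then loop.
  console.log(parsed.tool, parsed.toolInput);
}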
import { AgentExecutor } from "langchain/agents";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ChatPromptTemplate, MessagesPlaceholder } from "langchain/prompts";
import {
AIMessage,
AgentStep,
BaseMessage,
FunctionMessage,
} from "langchain/schema";
import { RunnableSequence } from "langchain/schema/runnable";
import { SerpAPI, formatToOpenAIFunction } from "langchain/tools";
import { Calculator } from "langchain/tools/calculator";
import { OpenAIFunctionsAgentOutputParser } from "langchain/agents/openai/output_parser";
/** Define your list of tools. SerpAPI reads your API key from the SERPAPI_API_KEY environment variable by default. */
const tools = [new Calculator(), new SerpAPI()];
/**
* Define your chat model to use.
* In this example we'll use gpt-4 as it is much better
* at following directions in an agent than other models.
*/
const model = new ChatOpenAI({ modelName: "gpt-4", temperature: 0 });
/**
* Define your prompt for the agent to follow
* Here we're using `MessagesPlaceholder` to contain our agent scratchpad
* This is important as later we'll use a util function which formats the agent
* steps into a list of `BaseMessages` which can be passed into `MessagesPlaceholder`
*/
const prompt = ChatPromptTemplate.fromMessages([
["ai", "You are a helpful assistant"],
["human", "{input}"],
new MessagesPlaceholder("agent_scratchpad"),
]);
/**
* Bind the tools to the LLM.
* Here we're using the `formatToOpenAIFunction` util function
* to format our tools into the proper schema for OpenAI functions.
*/
const modelWithFunctions = model.bind({
functions: [...tools.map((tool) => formatToOpenAIFunction(tool))],
});
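For reference, each entry produced by formatToOpenAIFunction is a plain object in OpenAI's function schema. For a simple tool like Calculator it looks roughly like this (illustrative, abbreviated):
// Approximate shape of `formatToOpenAIFunction(new Calculator())`:
// {
//   name: "calculator",
//   description: "Useful for getting the result of a math expression.",
//   parameters: {
//     type: "object",
//     properties: { input: { type: "string" } },
//     required: ["input"],
//   },
// }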
/**
 * Define a function that formats the intermediate agent steps as messages.
 */
const formatAgentSteps = (steps: AgentStep[]): BaseMessage[] =>
steps.flatMap(({ action, observation }) => {
if ("messageLog" in action && action.messageLog !== undefined) {
const log = action.messageLog as BaseMessage[];
return log.concat(new FunctionMessage(observation, action.tool));
} else {
return [new AIMessage(action.log)];
}
});
/**
 * Construct the runnable agent.
 *
 * We're using a `RunnableSequence` which takes two inputs:
 * - input --> the user's input
 * - agent_scratchpad --> the previous agent steps
 *
 * We're using the `formatAgentSteps` function defined above to format the agent
 * steps into a list of `BaseMessages` which can be passed into `MessagesPlaceholder`
 */
const runnableAgent = RunnableSequence.from([
{
input: (i: { input: string; steps: AgentStep[] }) => i.input,
agent_scratchpad: (i: { input: string; steps: AgentStep[] }) =>
formatAgentSteps(i.steps),
},
prompt,
modelWithFunctions,
new OpenAIFunctionsAgentOutputParser(),
]);
/** Pass the runnable along with the tools to create the Agent Executor */
const executor = AgentExecutor.fromAgentAndTools({
agent: runnableAgent,
tools,
});
console.log("Loaded agent executor");
const query = "What is the weather in New York?";
console.log(`Calling agent executor with query: ${query}`);
const result = await executor.invoke({
input: query,
});
console.log(result);
/*
Loaded agent executor
Calling agent executor with query: What is the weather in New York?
{
output: 'The current weather in New York is sunny with a temperature of 66 degrees Fahrenheit. The humidity is at 54% and the wind is blowing at 6 mph. There is 0% chance of precipitation.'
}
*/
API Reference:
- AgentExecutor from langchain/agents
- ChatOpenAI from langchain/chat_models/openai
- ChatPromptTemplate from langchain/prompts
- MessagesPlaceholder from langchain/prompts
- AIMessage from langchain/schema
- AgentStep from langchain/schema
- BaseMessage from langchain/schema
- FunctionMessage from langchain/schema
- RunnableSequence from langchain/schema/runnable
- SerpAPI from langchain/tools
- formatToOpenAIFunction from langchain/tools
- Calculator from langchain/tools/calculator
- OpenAIFunctionsAgentOutputParser from langchain/agents/openai/output_parser
Adding memory
We can also use memory to save our previous agent inputs/outputs and pass them through to each agent iteration.
Using memory can give the agent better context on past interactions, which can lead to more accurate responses beyond what the agent_scratchpad alone provides.
Adding memory only requires a few changes to the above example.
First, import and instantiate your memory class. In this example we'll use BufferMemory.
import { BufferMemory } from "langchain/memory";
const memory = new BufferMemory({
memoryKey: "history", // The object key to store the memory under
inputKey: "question", // The object key for the input
outputKey: "answer", // The object key for the output
returnMessages: true,
});
Then, update your prompt to include another MessagesPlaceholder. This time we'll be passing in the chat_history variable from memory.
const prompt = ChatPromptTemplate.fromMessages([
["ai", "You are a helpful assistant."],
new MessagesPlaceholder("chat_history"),
["human", "{input}"],
new MessagesPlaceholder("agent_scratchpad"),
]);
Next, inside your RunnableSequence, add a field for loading the chat_history from memory.
const runnableAgent = RunnableSequence.from([
{
input: (i: { input: string; steps: AgentStep[] }) => i.input,
agent_scratchpad: (i: { input: string; steps: AgentStep[] }) =>
formatAgentSteps(i.steps),
// Load memory here
chat_history: async (_: { input: string; steps: AgentStep[] }) => {
const { history } = await memory.loadMemoryVariables({});
return history;
},
},
prompt,
modelWithFunctions,
new OpenAIFunctionsAgentOutputParser(),
]);
const executor = AgentExecutor.fromAgentAndTools({
agent: runnableAgent,
tools,
});
Finally, we can call the agent and save the output to memory after the response is returned.
const query = "What is the weather in New York?";
console.log(`Calling agent executor with query: ${query}`);
const result = await executor.invoke({
input: query,
});
console.log(result);
/*
Calling agent executor with query: What is the weather in New York?
{
output: 'The current weather in New York is sunny with a temperature of 66 degrees Fahrenheit. The humidity is at 54% and the wind is blowing at 6 mph. There is 0% chance of precipitation.'
}
*/
// Save the result and initial input to memory
await memory.saveContext(
{
question: query,
},
{
answer: result.output,
}
);
const query2 = "Do I need a jacket?";
const result2 = await executor.invoke({
input: query2,
});
console.log(result2);
/*
{
output: 'Based on the current weather in New York, you may not need a jacket. However, if you feel cold easily or will be outside for a long time, you might want to bring a light jacket just in case.'
}
*/
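If you want to verify what was stored, you can load the memory variables directly. This is a quick sanity check, not part of the original example; with returnMessages: true the history comes back as message objects rather than a single string:
// Optional sanity check: inspect what BufferMemory now holds.
const { history } = await memory.loadMemoryVariables({});
console.log(history);
// With `returnMessages: true`, `history` is an array of
// HumanMessage/AIMessage objects from the two exchanges above.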
You may also inspect the LangSmith traces for both agent calls here:
With initializeAgentExecutorWithOptions
This agent also supports StructuredTools with more complex input schemas.
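For example, here's a minimal sketch of a structured tool built with DynamicStructuredTool and zod (the string_length tool is hypothetical; any tool with a zod input schema can be passed in the tools array the same way):
import { DynamicStructuredTool } from "langchain/tools";
import { z } from "zod";

// A hypothetical structured tool with a multi-field input schema.
const stringLengthTool = new DynamicStructuredTool({
  name: "string_length",
  description: "Returns the length of a string, optionally ignoring whitespace",
  schema: z.object({
    text: z.string().describe("The string to measure"),
    ignoreWhitespace: z
      .boolean()
      .optional()
      .describe("If true, strip whitespace before measuring"),
  }),
  func: async ({ text, ignoreWhitespace }) =>
    String((ignoreWhitespace ? text.replace(/\s/g, "") : text).length),
});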
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { SerpAPI } from "langchain/tools";
import { Calculator } from "langchain/tools/calculator";
const tools = [new Calculator(), new SerpAPI()];
const chat = new ChatOpenAI({ modelName: "gpt-4", temperature: 0 });
const executor = await initializeAgentExecutorWithOptions(tools, chat, {
agentType: "openai-functions",
verbose: true,
});
const result = await executor.invoke({
input: "What is the weather in New York?",
});
console.log(result);
/*
The current weather in New York is 72°F with a wind speed of 1 mph coming from the SSW. The humidity is at 89% and the UV index is 0 out of 11. The cloud cover is 79% and there has been no rain.
*/
API Reference:
- initializeAgentExecutorWithOptions from langchain/agents
- ChatOpenAI from langchain/chat_models/openai
- SerpAPI from langchain/tools
- Calculator from langchain/tools/calculator
Prompt customization
You can pass in a custom string to be used as the system message of the prompt as follows:
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { SerpAPI } from "langchain/tools";
import { Calculator } from "langchain/tools/calculator";
const tools = [new Calculator(), new SerpAPI()];
const chat = new ChatOpenAI({ modelName: "gpt-4", temperature: 0 });
const prefix =
  "You are a helpful AI assistant. However, all final responses to the user must be in pirate dialect.";
const executor = await initializeAgentExecutorWithOptions(tools, chat, {
agentType: "openai-functions",
verbose: true,
agentArgs: {
prefix,
},
});
const result = await executor.invoke({
input: "What is the weather in New York?",
});
console.log(result);
// Arr matey, in New York, it be feelin' like 75 degrees, with a gentle breeze blowin' from the northwest at 3 knots. The air be 77% full o' water, and the clouds be coverin' 35% of the sky. There be no rain in sight, yarr!
API Reference:
- initializeAgentExecutorWithOptions from langchain/agents
- ChatOpenAI from langchain/chat_models/openai
- SerpAPI from langchain/tools
- Calculator from langchain/tools/calculator