Prompt + LLM
One of the most foundational Expression Language compositions is taking:

PromptTemplate / ChatPromptTemplate -> LLM / ChatModel -> OutputParser
Almost all other chains you build will use this building block.
PromptTemplate + LLM
A PromptTemplate -> LLM chain is a core building block used in most larger chains and systems.
import { PromptTemplate } from "langchain/prompts";
import { ChatOpenAI } from "langchain/chat_models/openai";
const model = new ChatOpenAI({});
const promptTemplate = PromptTemplate.fromTemplate(
  "Tell me a joke about {topic}"
);
const chain = promptTemplate.pipe(model);
const result = await chain.invoke({ topic: "bears" });
console.log(result);
/*
  AIMessage {
    content: "Why don't bears wear shoes?\n\nBecause they have bear feet!",
  }
*/
API Reference:
- PromptTemplate from langchain/prompts
- ChatOpenAI from langchain/chat_models/openai
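Because the composed chain is itself a runnable, it also supports the other runnable methods. For example, you can stream the output as it is generated; the following is a minimal sketch, reusing the chain defined above (the exact chunk shape can vary by model, but chat model chunks expose a content string):

// Reusing `chain` (promptTemplate.pipe(model)) from the example above.
// .stream() resolves to an async iterable of message chunks.
const stream = await chain.stream({ topic: "bears" });

for await (const chunk of stream) {
  // Each chunk carries a partial `content` string.
  process.stdout.write(chunk.content);
}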
Oftentimes we want to attach keyword arguments to the model that's passed in. To do this, runnables expose a .bind method. Here's how you can use it:
Attaching stop sequences
import { PromptTemplate } from "langchain/prompts";
import { ChatOpenAI } from "langchain/chat_models/openai";
const prompt = PromptTemplate.fromTemplate(`Tell me a joke about {subject}`);
const model = new ChatOpenAI({});
const chain = prompt.pipe(model.bind({ stop: ["\n"] }));
const result = await chain.invoke({ subject: "bears" });
console.log(result);
/*
  AIMessage {
    content: "Why don't bears use cell phones?"
  }
*/
API Reference:
- PromptTemplate from langchain/prompts
- ChatOpenAI from langchain/chat_models/openai
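Note that .bind does not mutate the model; it returns a new runnable with the arguments attached, so the same model instance can still be used elsewhere without the stop sequence. A minimal sketch, reusing the prompt and model from above:

// Reusing `prompt` and `model` from the example above.
const boundModel = model.bind({ stop: ["\n"] });

// Two independent chains sharing the same underlying model instance:
const singleLineChain = prompt.pipe(boundModel); // stops at the first newline
const fullChain = prompt.pipe(model); // no stop sequence attached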
Attaching function call information
import { PromptTemplate } from "langchain/prompts";
import { ChatOpenAI } from "langchain/chat_models/openai";
const prompt = PromptTemplate.fromTemplate(`Tell me a joke about {subject}`);
const model = new ChatOpenAI({});
const functionSchema = [
  {
    name: "joke",
    description: "A joke",
    parameters: {
      type: "object",
      properties: {
        setup: {
          type: "string",
          description: "The setup for the joke",
        },
        punchline: {
          type: "string",
          description: "The punchline for the joke",
        },
      },
      required: ["setup", "punchline"],
    },
  },
];
const chain = prompt.pipe(
  model.bind({
    functions: functionSchema,
    function_call: { name: "joke" },
  })
);
const result = await chain.invoke({ subject: "bears" });
console.log(result);
/*
  AIMessage {
    content: "",
    additional_kwargs: {
      function_call: {
        name: "joke",
        arguments: '{\n "setup": "Why don\'t bears wear shoes?",\n "punchline": "Because they have bear feet!"\n}'
      }
    }
  }
*/
API Reference:
- PromptTemplate from langchain/prompts
- ChatOpenAI from langchain/chat_models/openai
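If you want the parsed function arguments rather than the raw AIMessage, you can pipe the bound model into a functions output parser as well. The following is a minimal sketch, reusing the prompt, model, and functionSchema from above and assuming the JsonOutputFunctionsParser exported from langchain/output_parsers:

import { JsonOutputFunctionsParser } from "langchain/output_parsers";

// Reusing `prompt`, `model`, and `functionSchema` from the example above.
// The parser reads additional_kwargs.function_call.arguments from the
// AIMessage and JSON-parses it into a plain object.
const parsingChain = prompt
  .pipe(
    model.bind({
      functions: functionSchema,
      function_call: { name: "joke" },
    })
  )
  .pipe(new JsonOutputFunctionsParser());

const parsed = await parsingChain.invoke({ subject: "bears" });
console.log(parsed);
/*
  { setup: "...", punchline: "..." }
*/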
PromptTemplate + LLM + OutputParser
We can also add in an output parser to conveniently transform the raw LLM/ChatModel output into a consistent string format:
import { PromptTemplate } from "langchain/prompts";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { RunnableSequence } from "langchain/schema/runnable";
import { StringOutputParser } from "langchain/schema/output_parser";
const model = new ChatOpenAI({});
const promptTemplate = PromptTemplate.fromTemplate(
  "Tell me a joke about {topic}"
);
const outputParser = new StringOutputParser();
const chain = RunnableSequence.from([promptTemplate, model, outputParser]);
const result = await chain.invoke({ topic: "bears" });
console.log(result);
/*
  "Why don't bears wear shoes?\n\nBecause they have bear feet!"
*/
API Reference:
- PromptTemplate from langchain/prompts
- ChatOpenAI from langchain/chat_models/openai
- RunnableSequence from langchain/schema/runnable
- StringOutputParser from langchain/schema/output_parser
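Like any other runnable, this chain can also be run over several inputs at once. A minimal sketch, reusing the chain defined above (the jokes shown are placeholders; outputs come back in the same order as the inputs):

// Reusing `chain` (promptTemplate -> model -> outputParser) from above.
// .batch() runs the chain over each input and handles concurrency for you.
const jokes = await chain.batch([{ topic: "bears" }, { topic: "cats" }]);

console.log(jokes);
/*
  ["<a bear joke>", "<a cat joke>"]
*/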