How to convert Runnables to Tools
For convenience, Runnables that accept a string or object input can be converted to tools using the `asTool` method, which allows for the specification of names, descriptions, and additional schema information for arguments.

Here we will demonstrate how to use this method to convert a LangChain Runnable into a tool that can be used by agents, chains, or chat models.

This functionality requires `@langchain/core>=0.2.16`. Please see here for a guide on upgrading.
asTool
Tools have some additional requirements over general Runnables:

- Their inputs are constrained to be serializable, specifically strings and objects;
- They contain names and descriptions indicating how and when they should be used;
- They contain a detailed `schema` property for their arguments. That is, while a tool (as a `Runnable`) might accept a single object input, the specific keys and type information needed to populate an object should be specified in the `schema` field.
The `asTool()` method therefore requires this additional information to create a tool from a runnable. Here's a basic example:
```typescript
import { RunnableLambda } from "@langchain/core/runnables";
import { z } from "zod";

const schema = z.object({
  a: z.number(),
  b: z.array(z.number()),
});

const runnable = RunnableLambda.from((input: z.infer<typeof schema>) => {
  return input.a * Math.max(...input.b);
});

const asTool = runnable.asTool({
  name: "My tool",
  description: "Explanation of when to use the tool.",
  schema,
});
```
```typescript
asTool.description;
```

```
Explanation of when to use the tool.
```

```typescript
await asTool.invoke({ a: 3, b: [1, 2] });
```

```
6
```
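Per-argument documentation can be supplied through the schema as well. Below is a small, illustrative variation (the tool name and descriptions here are our own) that uses zod's `.describe()` so that a tool-calling model sees a description for each argument:

```typescript
const describedSchema = z.object({
  a: z.number().describe("The multiplier."),
  b: z.array(z.number()).describe("Candidate values; the maximum is used."),
});

// Same computation as above, but with documented arguments.
const multiplyByMax = RunnableLambda.from(
  (input: z.infer<typeof describedSchema>) => input.a * Math.max(...input.b)
).asTool({
  name: "multiply_by_max",
  description: "Multiply a by the maximum value in b.",
  schema: describedSchema,
});
```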
Runnables that take string inputs are also supported:
```typescript
const firstRunnable = RunnableLambda.from<string, string>((input) => {
  return input + "a";
});

const secondRunnable = RunnableLambda.from<string, string>((input) => {
  return input + "z";
});

const runnable = firstRunnable.pipe(secondRunnable);

const asTool = runnable.asTool({
  name: "append_letters",
  description: "Adds letters to a string.",
  schema: z.string(),
});
```
```typescript
asTool.description;
```

```
Adds letters to a string.
```

```typescript
await asTool.invoke("b");
```

```
baz
```
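A tool created this way can be used anywhere an ordinary tool can. For instance, it can be bound to a tool-calling chat model. Here is a minimal sketch, assuming an OpenAI chat model with credentials configured (the model choice and prompt are illustrative):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Bind the converted tool so the model can emit tool calls targeting it.
const modelWithTools = model.bindTools([asTool]);

const aiMessage = await modelWithTools.invoke(
  "Append letters to the string 'b'."
);
console.log(aiMessage.tool_calls);
```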
In agents

Below we will incorporate LangChain Runnables as tools in an agent application, demonstrating with a tool-calling chat model and a prebuilt LangGraph agent.
We first instantiate a chat model that supports tool calling:
```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-3.5-turbo-0125", temperature: 0 });
```

Following the RAG tutorial, let's first construct a retriever:
```typescript
import { Document } from "@langchain/core/documents";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const documents = [
  new Document({
    pageContent:
      "Dogs are great companions, known for their loyalty and friendliness.",
  }),
  new Document({
    pageContent: "Cats are independent pets that often enjoy their own space.",
  }),
];

const vectorstore = await MemoryVectorStore.fromDocuments(
  documents,
  new OpenAIEmbeddings()
);

const retriever = vectorstore.asRetriever({
  k: 1,
  searchType: "similarity",
});
```
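As an aside, the retriever is itself a Runnable and can be invoked directly. A quick, illustrative sanity check with the documents above (since `k: 1`, a single document comes back):

```typescript
const retrieved = await retriever.invoke("dogs");
console.log(retrieved[0].pageContent);
// -> "Dogs are great companions, known for their loyalty and friendliness."
```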
We next create a pre-built LangGraph agent and provide it with the tool:
```typescript
import { createReactAgent } from "@langchain/langgraph/prebuilt";

const tools = [
  retriever.asTool({
    name: "pet_info_retriever",
    description: "Get information about pets.",
    schema: z.string(),
  }),
];

const agent = createReactAgent({ llm, tools });

const stream = await agent.stream({
  messages: [["human", "What are dogs known for?"]],
});

for await (const chunk of stream) {
  // Log output from the agent or tools node
  if (chunk.agent) {
    console.log("AGENT:", chunk.agent.messages[0]);
  } else if (chunk.tools) {
    console.log("TOOLS:", chunk.tools.messages[0]);
  }
  console.log("----");
}
```
```
AGENT: AIMessage {
  "id": "chatcmpl-9m9RIN1GQVeXcrVdp0lNBTcZFVHb9",
  "content": "",
  "additional_kwargs": {
    "tool_calls": [
      {
        "id": "call_n30LPDbegmytrj5GdUxZt9xn",
        "type": "function",
        "function": "[Object]"
      }
    ]
  },
  "response_metadata": {
    "tokenUsage": {
      "completionTokens": 17,
      "promptTokens": 52,
      "totalTokens": 69
    },
    "finish_reason": "tool_calls"
  },
  "tool_calls": [
    {
      "name": "pet_info_retriever",
      "args": {
        "input": "dogs"
      },
      "type": "tool_call",
      "id": "call_n30LPDbegmytrj5GdUxZt9xn"
    }
  ],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 52,
    "output_tokens": 17,
    "total_tokens": 69
  }
}
----
TOOLS: ToolMessage {
  "content": "[{\"pageContent\":\"Dogs are great companions, known for their loyalty and friendliness.\",\"metadata\":{}}]",
  "name": "pet_info_retriever",
  "additional_kwargs": {},
  "response_metadata": {},
  "tool_call_id": "call_n30LPDbegmytrj5GdUxZt9xn"
}
----
AGENT: AIMessage {
  "id": "chatcmpl-9m9RJ3TT3ITfv6R0Tb7pcrNOUtnm8",
  "content": "Dogs are known for being great companions, known for their loyalty and friendliness.",
  "additional_kwargs": {},
  "response_metadata": {
    "tokenUsage": {
      "completionTokens": 18,
      "promptTokens": 104,
      "totalTokens": 122
    },
    "finish_reason": "stop"
  },
  "tool_calls": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 104,
    "output_tokens": 18,
    "total_tokens": 122
  }
}
----
```
This LangSmith trace shows what's going on under the hood for the above run.
Going further, we can even create a tool from a full RAG chain:
```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";

const SYSTEM_TEMPLATE = `
You are an assistant for question-answering tasks.
Use the below context to answer the question. If
you don't know the answer, say you don't know.
Use three sentences maximum and keep the answer
concise.
Answer in the style of {answer_style}.
Context: {context}`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_TEMPLATE],
  ["human", "{question}"],
]);

const ragChain = RunnableSequence.from([
  {
    context: (input, config) => retriever.invoke(input.question, config),
    question: (input) => input.question,
    answer_style: (input) => input.answer_style,
  },
  prompt,
  llm,
  new StringOutputParser(),
]);
```
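The chain can also be invoked directly before being converted to a tool. A brief, illustrative call (the question and style here are our own; `context` is computed internally by the retriever step, so it is not needed as input):

```typescript
const reply = await ragChain.invoke({
  question: "What are cats known for?",
  answer_style: "a pirate",
});
console.log(reply);
```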
Below we again invoke the agent. Note that the agent populates the required parameters in its `tool_calls`:
```typescript
const ragTool = ragChain.asTool({
  name: "pet_expert",
  description: "Get information about pets.",
  schema: z.object({
    context: z.string(),
    question: z.string(),
    answer_style: z.string(),
  }),
});

const agent = createReactAgent({ llm, tools: [ragTool] });

const stream = await agent.stream({
  messages: [["human", "What would a pirate say dogs are known for?"]],
});

for await (const chunk of stream) {
  // Log output from the agent or tools node
  if (chunk.agent) {
    console.log("AGENT:", chunk.agent.messages[0]);
  } else if (chunk.tools) {
    console.log("TOOLS:", chunk.tools.messages[0]);
  }
  console.log("----");
}
```
```
AGENT: AIMessage {
  "id": "chatcmpl-9m9RKY2nAa8LeGoBiO7N1SR4nAoED",
  "content": "",
  "additional_kwargs": {
    "tool_calls": [
      {
        "id": "call_ukzivO4jRn1XdDpuVTI6CvtU",
        "type": "function",
        "function": "[Object]"
      }
    ]
  },
  "response_metadata": {
    "tokenUsage": {
      "completionTokens": 30,
      "promptTokens": 63,
      "totalTokens": 93
    },
    "finish_reason": "tool_calls"
  },
  "tool_calls": [
    {
      "name": "pet_expert",
      "args": {
        "context": "pirate",
        "question": "What are dogs known for?",
        "answer_style": "short"
      },
      "type": "tool_call",
      "id": "call_ukzivO4jRn1XdDpuVTI6CvtU"
    }
  ],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 63,
    "output_tokens": 30,
    "total_tokens": 93
  }
}
----
TOOLS: ToolMessage {
  "content": "Dogs are known for their loyalty, companionship, and ability to provide emotional support to their owners.",
  "name": "pet_expert",
  "additional_kwargs": {},
  "response_metadata": {},
  "tool_call_id": "call_ukzivO4jRn1XdDpuVTI6CvtU"
}
----
AGENT: AIMessage {
  "id": "chatcmpl-9m9RMwAEc14TTKtitq3CH2x9wpGik",
  "content": "A pirate would say that dogs are known for their loyalty, companionship, and ability to provide emotional support to their owners.",
  "additional_kwargs": {},
  "response_metadata": {
    "tokenUsage": {
      "completionTokens": 26,
      "promptTokens": 123,
      "totalTokens": 149
    },
    "finish_reason": "stop"
  },
  "tool_calls": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 123,
    "output_tokens": 26,
    "total_tokens": 149
  }
}
----
```
See this LangSmith trace for the above run to see what's going on internally.