diff --git a/docs/docs/tutorials/first-agent/0-setup.md b/docs/docs/tutorials/first-agent/0-setup.md new file mode 100644 index 000000000..1a7a536d5 --- /dev/null +++ b/docs/docs/tutorials/first-agent/0-setup.md @@ -0,0 +1,115 @@ +# Build your first agent - Introduction + +In this comprehensive tutorial, we will build an AI support chatbot using LangGraph.js that can: + +- Answer common questions by searching the web +- Maintain conversation state across calls +- Route complex queries to a human for review +- Use custom state to control its behavior +- Rewind and explore alternative conversation paths + +We'll start with a basic chatbot and progressively add more sophisticated capabilities, introducing key LangGraph concepts along the way. Later, we will learn how to iterate on an agent graph using Studio and deploy it using LangGraph Cloud. + +There's a lot of ground to cover, but don't worry! We'll take it step by step across 7 parts. Each part will introduce a single concept that helps improve the chatbot's capabilities. At the end you should feel comfortable building, debugging, iterating on, and deploying an AI agent of your own. Here's an overview of what we'll cover: + +- [**Setup**](/first-agent/0-setup.md) _(You are here)_: Set up your development environment, dependencies, and services needed to build the chatbot. +- [**Part 1: Create a chatbot**](/first-agent/1-create-chatbot.md): Build a basic chatbot that can answer questions using Anthropic's LLM. +- [**Part 2: Add search with Retrieval-Augmented Generation (RAG)**](/first-agent/2-search-RAG.md): Provide the chatbot with a tool to search the web using Tavily. +- [**Part 3: Add persistent state**](/first-agent/3-persistent-state.md): Add memory to the chatbot so it can continue past conversations. +- [**Part 4: Add human-in-the-loop**](/first-agent/4-human-in-the-loop.md): Route complex queries to a human for review. +- [**Part 5: Time-travel debugging**](/first-agent/5-time-travel-debugging.md): Use the persisted state to rewind and debug or explore alternative conversation paths. +- [**Part 6: Iterate using Studio**](/first-agent/6-studio.md): Set up Studio to iterate on and debug the agent using a graphical interface. +- [**Part 7: Deploy to LangGraph Cloud**](/first-agent/7-deploy.md): Deploy the agent to LangGraph Cloud and interact with it over the web. + +## Prerequisites + +To complete this tutorial, you will need a computer set up with Node.js 18 or later. You can download Node.js from the [official website](https://nodejs.org/). + +You will also need a basic understanding of JavaScript and TypeScript, and should be familiar with the command line. + +LangGraph makes it easy to work with a variety of tools and services to build AI agents. In this tutorial, we will use the following: + +- The [Anthropic API](https://console.anthropic.com/) will be used for the base Large Language Model (LLM) that powers the chatbot. +- [Tavily's Search API](https://tavily.com/) will be used as a tool that enables the agent to search the web. + +To complete this tutorial, you will need to sign up for both services and get an API key for each. + +## Setup + +Once you've got Node.js installed and have signed up for Tavily and Anthropic, you are ready to get the project set up. + +First, run the following commands to create a new directory for your project and navigate to it in your terminal.
+ +```bash +mkdir langgraph-chatbot +cd langgraph-chatbot +``` + +### Environment variables + +Next, create a `.env` file in the root of your project and add the API keys you received from Anthropic and Tavily: + +``` +# .env +ANTHROPIC_API_KEY=your-Anthropic-key-here +TAVILY_API_KEY=your-Tavily-key-here +``` + +While we're at it, let's make sure the environment variables defined in the `.env` file are available to our project. We can do this by installing the `dotenv` package: + +```bash +npm install dotenv --save +``` + +Now we need to make sure dotenv loads the environment variables from the `.env` file. To do this, create a new file called `chatbot.ts` and add the following lines at the top of the file: + +```ts +// chatbot.ts +import "dotenv/config"; +``` + +This will load the environment variables from the `.env` file into the global `process.env` object when the project starts. To verify it's working, let's log the environment variables to the console. +Add the following lines to the end of the `chatbot.ts` file: + +```ts +console.log(process.env.ANTHROPIC_API_KEY); +console.log(process.env.TAVILY_API_KEY); +``` + +Now let's run the project using `tsx`, a tool that lets us run TypeScript code without first compiling it to JS. Use the following command: + +```bash +npx tsx chatbot.ts +``` + +You should see the API keys you added to your `.env` file printed to the console. + +### Install dependencies + +You'll also need to install a few dependencies to create an agent: + +- **@langchain/core** provides the core functionality of LangChain that LangGraph depends on +- **@langchain/langgraph** contains the building blocks used to assemble an agent +- **@langchain/anthropic** enables you to use Anthropic's LLMs in LangGraph +- **@langchain/community** contains the Tavily search tool that will be used by the agent + +Let's do that using the Node Package Manager (npm). Run the following command in your terminal: + +```bash +npm install @langchain/core @langchain/langgraph @langchain/anthropic @langchain/community +``` + +### (Encouraged) Set up tracing with LangSmith + +Setting up LangSmith is optional, but it makes it a lot easier to understand what's going on "under the hood." + +To use [LangSmith](https://smith.langchain.com/) you'll need to sign up and get an API key. Once you have an API key, add the following to your `.env` file: + +``` +LANGCHAIN_API_KEY=your-LangSmith-key-here +LANGCHAIN_TRACING_V2=true +LANGCHAIN_PROJECT="LangGraph Tutorial" +LANGCHAIN_CALLBACKS_BACKGROUND=true +``` + +At this point, you should be ready to start building your first agent. When you're ready, move on to [part 1: create a chatbot](/first-agent/1-create-chatbot.md). diff --git a/docs/docs/tutorials/first-agent/1-create-chatbot.md b/docs/docs/tutorials/first-agent/1-create-chatbot.md new file mode 100644 index 000000000..65b80f769 --- /dev/null +++ b/docs/docs/tutorials/first-agent/1-create-chatbot.md @@ -0,0 +1,233 @@ +# Part 1: Create a chatbot + +We'll first create a simple chatbot using LangGraph.js. This chatbot will respond directly to user messages. Though simple, it will illustrate the core concepts of building with LangGraph. By the end of this section, you will have built a rudimentary chatbot. + +## Step 1: Create an LLM agent + +The first thing we need to do is create an LLM agent. LangGraph makes it easy to use any LLM provider, and we will be using Anthropic's Claude 3.5 Sonnet model.
Add the following code to your `chatbot.ts` file: + +```ts +import { ChatAnthropic } from "@langchain/anthropic"; + +const model = new ChatAnthropic({ + model: "claude-3-5-sonnet-20240620", + temperature: 0 +}); +``` + +The `ChatAnthropic` class is a wrapper around the Anthropic API that makes it easy to interact with the LLM. We're setting some options on it to configure the LLM: + +- `model` needs the API model name of the model we want to use. We're using `claude-3-5-sonnet-20240620`. You can learn more in the [Anthropic models documentation](https://docs.anthropic.com/en/docs/about-claude/models#model-comparison-table). +- `temperature` is a parameter that controls the randomness of the model's output. A temperature of 0 will always return the most likely/predictable token, and as the temperature approaches the max value of 1 the LLM will produce more "creative" outputs. For this tutorial, we'll be using a temperature of 0 to produce more consistent outputs, but feel free to experiment. + +## Step 2: Create a StateGraph + +The next thing we're going to implement is a [StateGraph](https://langchain-ai.github.io/langgraphjs/reference/classes/langgraph.StateGraph.html). A `StateGraph` object defines the structure of our chatbot as a "state machine". Nodes can communicate by reading and writing to a shared state. We'll add `nodes` to represent the LLM and the functions our chatbot can call. The nodes are connected using `edges` that specify how the bot should transition between these functions. + +Add the following code to your `chatbot.ts` file: + +```ts +import { StateGraph, MessagesAnnotation } from "@langchain/langgraph"; + +const graphBuilder = new StateGraph(MessagesAnnotation); +``` + +In this code snippet, we're creating a new `StateGraph` object and passing it our state [`Annotation`](https://langchain-ai.github.io/langgraphjs/concepts/low_level/#annotation). It's so common for chatbot state to be an array of messages that LangGraph provides a helper for it: [`MessagesAnnotation`](https://langchain-ai.github.io/langgraphjs/concepts/low_level/#messagesannotation). This helper defines a state schema with a single field, `messages`, which holds an array of messages. It also provides a reducer function that appends new messages to the array. (A sketch of an equivalent hand-written annotation appears at the end of this part.) + +Later, we will use the `graphBuilder` object to build a graph that defines how our chatbot will behave by adding nodes and edges to the graph. + +## Step 3: Create a node that runs the LLM + +Now that we have a basic `StateGraph` and an LLM, we need to define a node that will invoke the LLM with the correct state. That's done using a function that takes the current state and returns the new state. Add the following code to your `chatbot.ts` file: + +```ts +async function callModel(state: typeof MessagesAnnotation.State) { + const response = await model.invoke(state.messages); + + // We return the response in an array and the `MessagesAnnotation` reducer will append it to the state + return { messages: [response] }; +} +``` + +This function is the glue between our `StateGraph` and the LLM. Without it, the LLM wouldn't know what is being asked of it, and the state wouldn't be updated with its response. + +## Step 4: Build and run the graph + +With the LLM, the `StateGraph`, and a way for them to communicate, we're ready to build our first agent graph! In LangGraph, the entrypoint is defined using a node named `"__start__"`. We need to add our LLM node and connect it to the start node.
Add the following code to your `chatbot.ts` file: + +```ts +// Create a graph that defines our chatbot workflow and compile it into a `runnable` +export const app = graphBuilder + .addNode("agent", callModel) + .addEdge("__start__", "agent") + .compile(); +``` + +Notice that we're `export`ing the `app` object. This helps us keep the code organized; the agent is defined in `chatbot.ts` and we will write the code that uses it in a separate file. When we go over how to [iterate on an agent using a GUI](5-iterate-studio.md), we will `import` our agent into [LangGraph Studio](https://github.com/langchain-ai/langgraph-studio) too. + +At this point we have an app object we can invoke to run our chatbot. To try it out, we're going to need a chat loop that lets us interact with the bot. Let's create a new file called `chatloop.ts` and add logic for our chat loop to it: + +```ts +// chatloop.ts +import { BaseMessageLike } from "@langchain/core/messages"; + +// We need to import the chatbot we created so we can use it here +import { app } from "./chatbot.ts"; + +// We'll use these helpers to read from the standard input in the command line +import * as readline from "node:readline/promises"; +import { stdin as input, stdout as output } from "node:process"; + +async function chatLoop() { + const lineReader = readline.createInterface({ input, output }); + + console.log("Type 'exit' or 'quit' to quit"); + const messages = Array(); + while (true) { + const answer = await lineReader.question("User: "); + if (["exit", "quit", "q"].includes(answer.toLowerCase())) { + console.log("Goodbye!"); + lineReader.close(); + break; + } + messages.push({ content: answer, role: "user" }); + + // Run the chatbot, providing it the `messages` array containing the conversation + const output = await app.invoke({ messages }); + messages.push(output.messages[output.messages.length - 1]); + console.log("Agent: ", output.messages[output.messages.length - 1].content); + } +} +chatLoop().catch(console.error); +``` + +This chat loop uses the [`readline`](https://nodejs.org/api/readline.html) module from Node.js to read user input from the command line. It stores the message history in the `messages` array so that each message _continues_ the conversation, rather than starting a new one each time. + +We're calling `app.invoke()` to use the chatbot. Passing it an array of messages containing the conversation history lets us continue a single conversation. In part 3 of this tutorial, we will use a [checkpointer](https://langchain-ai.github.io/langgraphjs/concepts/low_level/?h=messages+annotation#checkpointer) to store conversation history and enable the agent to participate in multiple separate conversation threads. For now, we're manually updating the message history with each new message from the user and agent. + +Now that we have a way to interact with the agent, try it out by running the following command: + +```bash +npx tsx chatloop.ts +``` + +Here's an example chat session: + +``` +User: What's langgraph all about? +Agent: LangGraph is a tool or framework designed to facilitate the development and deployment of applications that leverage large language models (LLMs). It typically focuses on enhancing the capabilities of LLMs by integrating them with various data sources, APIs, and other tools to create more sophisticated and context-aware applications. + +LangGraph may include features such as: + +1. 
**Graph-Based Representation**: It often uses graph structures to represent relationships between different entities, which can help in understanding context and improving the relevance of responses generated by LLMs. + +2. **Integration with APIs**: LangGraph can connect with various APIs to pull in real-time data, allowing applications to provide up-to-date information and contextually relevant responses. + +3. **Custom Workflows**: Users can create custom workflows that define how the LLM interacts with different data sources and processes information, making it adaptable to specific use cases. + +4. **Enhanced Contextual Understanding**: By utilizing graph structures, LangGraph can improve the model's ability to understand and generate responses based on complex relationships and hierarchies within the data. + +5. **Applications**: It can be used in various domains, including customer support, content generation, data analysis, and more, where natural language understanding and generation are crucial. + +For the most accurate and up-to-date information, I recommend checking the official LangGraph website or relevant documentation, as developments in technology can lead to new features and capabilities. +User: what problems does it solve? +Agent: LangGraph addresses several challenges associated with the use of large language models (LLMs) in application development and deployment. Here are some of the key problems it aims to solve: + +1. **Contextual Understanding**: LLMs can struggle with maintaining context over long conversations or complex queries. LangGraph's graph-based representation helps in organizing and managing contextual information, allowing for more coherent and relevant responses. + +2. **Data Integration**: Many applications require data from multiple sources (e.g., databases, APIs). LangGraph facilitates the integration of these diverse data sources, enabling LLMs to access real-time information and provide more accurate and context-aware responses. + +3. **Complex Query Handling**: Users often pose complex queries that involve multiple entities or relationships. LangGraph can help break down these queries and manage the relationships between different pieces of information, improving the model's ability to generate relevant answers. + +4. **Customization and Flexibility**: Different applications have unique requirements. LangGraph allows developers to create custom workflows and interactions tailored to specific use cases, making it easier to adapt LLMs to various domains and tasks. + +5. **Scalability**: As applications grow and require more data and interactions, managing these efficiently can become challenging. LangGraph's architecture can help scale applications by organizing data and interactions in a way that remains manageable. + +6. **Improved User Experience**: By enhancing the LLM's ability to understand context and integrate data, LangGraph can lead to a more satisfying user experience, as users receive more accurate and relevant responses to their queries. + +7. **Error Reduction**: By providing a structured way to manage data and context, LangGraph can help reduce errors in responses generated by LLMs, particularly in scenarios where precision is critical. + +8. **Interactivity**: LangGraph can enable more interactive applications, where users can engage in dynamic conversations or queries that adapt based on previous interactions, leading to a more engaging experience. 
+ +Overall, LangGraph aims to enhance the capabilities of LLMs, making them more effective tools for a wide range of applications, from customer support to content generation and beyond. +User: q +Goodbye! +``` + +**Congratulations!** You've built your first chatbot using LangGraph. This bot can engage in basic conversation by taking user input and generating responses using an LLM. You can inspect a [LangSmith Trace](https://smith.langchain.com/public/29ab0177-1177-4d25-9341-17ae7d94e0e0/r) for the call above at the provided link. + +However, you may have noticed that the bot's knowledge is limited to what's in its training data. In the next part, we'll add a web search tool to expand the bot's knowledge and make it more capable. + +Below is the full code for this section for your reference: + +
+```ts +// chatbot.ts +import { ChatAnthropic } from "@langchain/anthropic"; +import { StateGraph, MessagesAnnotation } from "@langchain/langgraph"; + +// read the environment variables from .env +import "dotenv/config"; + +// Create the chat model +const model = new ChatAnthropic({ + model: "claude-3-5-sonnet-20240620", + temperature: 0, +}); + +// Define the function that calls the model +async function callModel(state: typeof MessagesAnnotation.State) { + const messages = state.messages; + + const response = await model.invoke(messages); + + // We return the response in an array and the `MessagesAnnotation` reducer will append it to the state + return { messages: [response] }; +} + +const graphBuilder = new StateGraph(MessagesAnnotation); + +// Create a graph that defines our chatbot workflow and compile it into a `runnable` +export const app = graphBuilder + .addNode("agent", callModel) + .addEdge("__start__", "agent") + .compile(); +``` +
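As an aside, the `MessagesAnnotation` helper used in `chatbot.ts` above is essentially shorthand for an annotation you could write yourself. Below is a simplified, hand-written sketch of an equivalent definition; the real helper's reducer does a bit more (for example, it can replace an existing message when an incoming one shares its ID), so treat this as illustrative rather than a drop-in replacement:

```ts
import { Annotation } from "@langchain/langgraph";
import { BaseMessage } from "@langchain/core/messages";

// A hand-written stand-in for MessagesAnnotation: a single `messages` field
// whose reducer appends each update to the existing array.
const SimpleMessagesAnnotation = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (existing, incoming) => existing.concat(incoming),
    default: () => [],
  }),
});
```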
+ +
+```ts +// chatloop.ts +import { app } from "./chatbot.ts"; + +import { BaseMessageLike } from "@langchain/core/messages"; + +// We'll use these helpers to read from the standard input in the command line +import * as readline from "node:readline/promises"; +import { stdin as input, stdout as output } from "node:process"; + +async function chatLoop() { + const lineReader = readline.createInterface({ input, output }); + + console.log("Type 'exit' or 'quit' to quit"); + const messages = Array(); + while (true) { + const answer = await lineReader.question("User: "); + if (["exit", "quit", "q"].includes(answer.toLowerCase())) { + console.log("Goodbye!"); + lineReader.close(); + break; + } + + // Add the user's message to the conversation history + messages.push({ content: answer, role: "user" }); + + // Run the chatbot and add its response to the conversation history + const output = await app.invoke({ messages }); + messages.push(output.messages[output.messages.length - 1]); + console.log("Agent: ", output.messages[output.messages.length - 1].content); + } +} +chatLoop().catch(console.error); +``` +
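If you want to smoke-test the compiled graph without the interactive loop, a one-off invocation is enough. The sketch below assumes a scratch file (the name `quick-test.ts` is made up) sitting next to `chatbot.ts`:

```ts
// quick-test.ts (hypothetical scratch file)
import { HumanMessage } from "@langchain/core/messages";
import { app } from "./chatbot.ts";

async function main() {
  // Invoke the graph once with a single user message...
  const result = await app.invoke({
    messages: [new HumanMessage("In one sentence, what can you help me with?")],
  });

  // ...and print the reply the reducer appended to the messages array.
  console.log(result.messages[result.messages.length - 1].content);
}

main().catch(console.error);
```

Run it with `npx tsx quick-test.ts`, the same way you ran the chat loop.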
diff --git a/docs/docs/tutorials/first-agent/2-search-RAG.md b/docs/docs/tutorials/first-agent/2-search-RAG.md new file mode 100644 index 000000000..63d20e3c1 --- /dev/null +++ b/docs/docs/tutorials/first-agent/2-search-RAG.md @@ -0,0 +1,270 @@ +# Part 2: Enhancing the chatbot with tools + +To handle queries our chatbot can't answer "from memory", we'll integrate a web search tool call [Tavily](https://tavily.com/). Our bot can use this tool to find relevant information and provide better responses. At the end of this section, your chatbot will be able to search the web and use the results to answer questions with up-to-date information. + +**Prerequisites** + +If you already completed the [setup steps](/first-agent/0-setup.md) you are ready to get started! To recap, you should have already done the following: + +- Signed up for Tavily and received an API key +- Created a `.env` file in the root of your project and added your Tavily API key to it +- Installed the `dotenv` package to load your environment variables from the `.env` file +- Used npm to install the `@langchain/community` package containing the Tavily search tool + +If you haven't done any of those steps, you'll need to go back and complete them before proceeding. + +Once they're done, you're ready to move on and get your chatbot connected to the internet! + +## Step 1: Define the tool for your LLM to use + +Let's start by setting up the search tool. We'll need to import the `TavilySearchResults` class and use it to construct a tool the LLM can use. + +```ts +// chatbot.ts +import { TavilySearchResults } from "@langchain/community/tools/tavily_search"; + +const searchTool = new TavilySearchResults({ maxResults: 3 }); +``` + +If you want, you can use the tool directly right now! Add the following lines right under the line where you defined the `tools` variable, then run your project using `npx tsx chatbot.ts`: + +```ts +function prettyPrintJSON(json: string) { + console.log(JSON.stringify(JSON.parse(json), null, 2)); +} + +searchTool.invoke("What's a 'node' in LangGraph?").then(prettyPrintJSON); +``` + +The `prettyPrintJSON` function makes the content easier to read for us humans. Your output should look something like this, but may contain different search results: + +```json +[ + { + "title": "Low Level LangGraph Concepts - GitHub Pages", + "url": "https://langchain-ai.github.io/langgraph/concepts/low_level/", + "content": "Nodes¶ In LangGraph, nodes are typically python functions (sync or async) where the first positional argument is the state, and (optionally), the second positional argument is a \"config\", containing optional configurable parameters (such as a thread_id). Similar to NetworkX, you add these nodes to a graph using the add_node method:", + "score": 0.999685, + "raw_content": null + }, + { + "title": "LangGraph Tutorial: What Is LangGraph and How to Use It?", + "url": "https://www.datacamp.com/tutorial/langgraph-tutorial", + "content": "In LangGraph, each node represents an LLM agent, and the edges are the communication channels between these agents. This structure allows for clear and manageable workflows, where each agent performs specific tasks and passes information to other agents as needed. State management. One of LangGraph's standout features is its automatic state ...", + "score": 0.998862, + "raw_content": null + }, + { + "title": "Beginner's Guide to LangGraph: Understanding State, Nodes ... 
- Medium", + "url": "https://medium.com/@kbdhunga/beginners-guide-to-langgraph-understanding-state-nodes-and-edges-part-1-897e6114fa48", + "content": "Each node in a LangGraph graph has the ability to access, read, and write to the state. When a node modifies the state, it effectively broadcasts this information to all other nodes within the graph .", + "score": 0.99819684, + "raw_content": null + } +] +``` + +These search results are the summaries of web pages that our chat bot can use to answer questions. + +When you're getting an output similar to this, you've got it working right! If not, verify that your `TAVILY_API_KEY` is set in your `.env` file and loaded using `dotenv`. Also verify that you have the `dotenv` and `@langchain/community` packages installed. + +You can delete the call to `searchTool.invoke()` and the `prettyPrintJSON()` function and move on to the next step. + +## Step 2: Bind the tool to your LLM + +Now that we've created a tool node, we need to bind it to our LLM. This lets the LLM know the correct JSON format to use if it wants to use the Search Engine. We do this by using the `bindTools()` method on our chat model instance - the one created using `new ChatAnthropic()`. + +In your `chatbot.ts` file, find the following code where you defined your chat model: + +```ts +const model = new ChatAnthropic({ + model: "claude-3-5-sonnet-20240620", + temperature: 0 +}); +``` + +Update it to bind the tool node to the model as follows: + +```ts +const model = new ChatAnthropic({ + model: "claude-3-5-sonnet-20240620", + temperature: 0 +}).bindTools([searchTool]); +``` + +Notice how we passed the tool to `bindTools()`: it was as an array using `[searchTool]`. LLMs can use multiple tools, so the Langchain and LangGraph APIs typically operate on tool _arrays_ rather than individual tools. + +Now the LLM will know about the available tools. If it decides any of them would be helpful it will communicate that by responding with a message asking for the tool to be run. The message will contain structured JSON data for its request. + +## Step 3: Enabling the chatbot to use a tool + +At this point, the chatbot knows how to structure a request to use the search tool, but our graph doesn't provide a way to execute that request. Furthermore, we don't yet have a way to detect when the chatbot wants to use the tool. Let's fix that! + +Next, we need to create a `"tools"` node. It will be responsible for actually running the tool. Add the following import at the top of your `chatbot.ts` file: + +```ts +import { ToolNode } from "@langchain/langgraph/prebuilt"; +``` + +Then, add the following code after the definition of `searchTool`. Notice that we are once again wrapping the tool in an array: + +```ts +const tools = new ToolNode([searchTool]); +``` + +The `ToolNode` helper handles parsing the message from the LLM to extract the request data, crafting the request to the tool, and returns a tool message containing the response from the tool. You can learn more about `ToolNode` from its [API documentation](https://langchain-ai.github.io/langgraphjs/reference/classes/langgraph_prebuilt.ToolNode.html) and the [how-to guide on calling tools using `ToolNode`](https://langchain-ai.github.io/langgraphjs/how-tos/tool-calling/). + +The last step is to update our graph to include the new tool. Recall from [part 1: create a chatbot](/first-agent/1-create-chatbot.md) that `edges` route the control flow from one node to the next. 
**Conditional edges** usually contain `if` statements to route to different nodes depending on the current graph state. These functions receive the current graph state and return a string indicating which node to call next. For our new `tools` node to be run, it's going to need an edge that connects to it. + +Let's create a _conditional edge_ function that detects when the chatbot wants to use a tool and communicates that to the graph. Add the following function to your `chatbot.ts` file: + +```ts +import type { AIMessage } from "@langchain/core/messages"; + +function shouldUseTool({ messages }: typeof MessagesAnnotation.State) { + const lastMessage: AIMessage = messages[messages.length - 1]; + + // If the LLM makes a tool call, then we route to the "tools" node + if (!!lastMessage.tool_calls?.length) { + return "tools"; + } + // Otherwise, we stop (reply to the user) + return "__end__"; +} +``` + +This function will read the last message from the chatbot to check if it asked to use a tool. If it did, it returns the string `"tools"`, which we will need to define as a node in our graph. If the chatbot didn't ask to use a tool, it must be a normal message response, so we return `"__end__"` to indicate that graph's execution is finished. + +Now that we have the node and logic for a conditional edge that connects to it, we just need to add them to our graph. Locate the following code where our graph is currently defined and compiled: + +```ts +// Create a graph that defines our chatbot workflow and compile it into a `runnable` +export const app = graphBuilder + .addNode("agent", callModel) + .addEdge("__start__", "agent") + .compile(); +``` + +Update it to include the new `tools` node and the conditional edge function: + +```ts +export const app = graphBuilder + .addNode("agent", callModel) + .addEdge("__start__", "agent") + .addNode("tools", tools) + .addConditionalEdges("agent", shouldUseTool) + .addEdge("tools", "agent") + .compile(); +``` + +One helpful feature of the graph builder is that if you try to add an edge that connects to a node that doesn't exist, it will result in a type error. This helps you catch bugs in your graph immediately, rather than at runtime. + +Conditional edges start from a single node. This tells the graph that any time the `agent` node runs, it should either go to 'tools' if it calls a tool, or end the loop if it responds directly. When the graph transitions to the special `"__end__"` node, it has no more tasks to complete and ceases execution. + +You may or may not have noticed that this graph has a simple loop in it: `agent` -> `tools` -> `agent`. The presence of loops is a common pattern in LangGraph graphs. They allow the graph to continue running until the agent has nothing left to do. This is a major difference from common AI chat interfaces, where a single message will only receive a single response. The ability to add loops to a graph enables **agentic behavior**, where the agent can perform multiple actions in service of a single request. + +We're ready to put our agent to work! With the update to the graph, it should now be able to use the search tool to find information on the web. Run your project using `npx tsx chatloop.ts` and test it out. You can ask it questions that require current information to answer, like "what's the weather in sf?": + +``` +User: What's the weather in sf? +Agent: The current weather in San Francisco is sunny with a temperature of 82.9°F (28.3°C). 
The wind is coming from the west-northwest at 11.9 mph (19.1 kph), and the humidity is at 32%. There is no precipitation reported, and visibility is good at 16 km (9 miles). + +For more details, you can check the full report [here](https://www.weatherapi.com/). +``` + +Just lovely! If your weather is anything like San Francisco's right now, this is a great opportunity to go outside and enjoy it. You've earned it! + +When you're ready, continue on to part 3, where we'll [add persistent state to the chatbot](/first-agent/3-persistent-state.md). This will allow the chatbot to remember past conversations and have multiple threads of discussion. + +The final code from this section should look something like the below example. We've cleaned this version up a bit to make it easier to follow: + +
+```ts +// chatbot.ts +import { ChatAnthropic } from "@langchain/anthropic"; +import { ToolNode } from "@langchain/langgraph/prebuilt"; +import { StateGraph, MessagesAnnotation } from "@langchain/langgraph"; +import { TavilySearchResults } from "@langchain/community/tools/tavily_search"; +import type { AIMessage } from "@langchain/core/messages"; + +// read the environment variables from .env +import "dotenv/config"; + +const searchTool = new TavilySearchResults({ maxResults: 3 }); +const tools = new ToolNode([searchTool]); + +// Create a model and give it access to the tools +const model = new ChatAnthropic({ + model: "claude-3-5-sonnet-20240620", + temperature: 0, +}).bindTools([searchTool]); + +// Define the function that calls the model +async function callModel(state: typeof MessagesAnnotation.State) { + const messages = state.messages; + + const response = await model.invoke(messages); + + return { messages: response }; +} + +function shouldUseTool(state: typeof MessagesAnnotation.State) { + const lastMessage: AIMessage = state.messages[state.messages.length - 1]; + + // If the LLM makes a tool call, then we route to the "tools" node + if (!!lastMessage.tool_calls?.length) { + return "tools"; + } + // Otherwise, we stop (reply to the user) using the special "__end__" node + return "__end__"; +} + +// Define the graph and compile it into a runnable +export const app = new StateGraph(MessagesAnnotation) + .addNode("agent", callModel) + .addEdge("__start__", "agent") + .addNode("tools", tools) + .addConditionalEdges("agent", shouldUseTool) + .addEdge("tools", "agent") + .compile(); +``` +
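For the curious, here is a rough sketch of what the prebuilt `ToolNode` is doing for us. It is illustrative only (the function name `callToolsByHand` is made up, it assumes the search tool is the only tool, and it skips error handling and parallel tool calls), so keep using `ToolNode` in practice:

```ts
import { AIMessage, ToolMessage } from "@langchain/core/messages";
import { MessagesAnnotation } from "@langchain/langgraph";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

const searchTool = new TavilySearchResults({ maxResults: 3 });

async function callToolsByHand(state: typeof MessagesAnnotation.State) {
  // The tool request sits on the last (AI) message in the state.
  const lastMessage = state.messages[state.messages.length - 1] as AIMessage;
  const toolMessages: ToolMessage[] = [];

  for (const toolCall of lastMessage.tool_calls ?? []) {
    // Run the requested tool with the arguments the model supplied...
    const result = await searchTool.invoke(toolCall.args.input);

    // ...and report the result back, tagged with the id of the request it answers.
    toolMessages.push(
      new ToolMessage({
        content: typeof result === "string" ? result : JSON.stringify(result),
        tool_call_id: toolCall.id ?? "",
        name: toolCall.name,
      })
    );
  }

  // The reducer appends these tool messages to the conversation state.
  return { messages: toolMessages };
}
```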
+ +
+```ts +// chatloop.ts +import { app } from "./chatbot.ts"; + +// Create a command line interface to interact with the chat bot +// We'll use these helpers to read from the standard input in the command line +import * as readline from "node:readline/promises"; +import { stdin as input, stdout as output } from "node:process"; + + +async function chatLoop() { + const lineReader = readline.createInterface({ input, output }); + + console.log("Type 'exit' or 'quit' to quit"); + + const messages = Array(); + while (true) { + const answer = await lineReader.question("User: "); + if ( ["exit", "quit", "q"].includes( answer.toLowerCase() ) ) { + console.log("Goodbye!"); + lineReader.close(); + break; + } + // Add the user's message to the conversation history + messages.push({ content: answer, role: "user" }); + + // Run the chatbot and add its response to the conversation history + const output = await app.invoke({ messages }); + messages.push(output.messages[output.messages.length - 1]); + + console.log("Agent: ", output.messages[output.messages.length - 1].content); + } +} +chatLoop().catch(console.error); +``` +
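One last note on tools: because `bindTools()` and `ToolNode` both operate on arrays, giving the agent another capability later is mostly a matter of growing that array. The sketch below adds a second, made-up tool using the `tool()` helper from `@langchain/core/tools`; it assumes a recent `@langchain/core` and that `zod` is installed, so treat the details as an assumption rather than part of this tutorial's required setup:

```ts
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { ChatAnthropic } from "@langchain/anthropic";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

// A trivial, hypothetical second tool so the agent can answer "what time is it?"
const getCurrentTime = tool(async () => new Date().toISOString(), {
  name: "get_current_time",
  description: "Returns the current date and time in ISO 8601 format.",
  schema: z.object({}),
});

// The ToolNode and the model are given the same array of tools.
const searchTool = new TavilySearchResults({ maxResults: 3 });
const allTools = [searchTool, getCurrentTime];

const tools = new ToolNode(allTools);
const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620",
  temperature: 0,
}).bindTools(allTools);
```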
diff --git a/docs/docs/tutorials/first-agent/3-persistent-state.md b/docs/docs/tutorials/first-agent/3-persistent-state.md new file mode 100644 index 000000000..34cc50bad --- /dev/null +++ b/docs/docs/tutorials/first-agent/3-persistent-state.md @@ -0,0 +1,192 @@ +# Part 3: Adding memory to the chatbot + +Our chatbot can now use tools to answer user questions, but it doesn't remember the context of previous interactions. This limits its ability to have coherent, multi-turn conversations. + +LangGraph solves this problem through **persistent checkpointing**. If you provide a [`checkpointer`](https://langchain-ai.github.io/langgraphjs/concepts/low_level/#checkpointer) when compiling the graph and a `thread_id` when calling your graph, LangGraph automatically saves the state after each step. When you invoke the graph again using the same `thread_id`, the graph loads its saved state, allowing the chatbot to pick up where it left off. + +We will see later that **checkpointing** is _much_ more powerful than simple chat memory - it lets you save and resume complex state at any time for error recovery, human-in-the-loop workflows, time travel interactions, and more. But before we get too ahead of ourselves, let's add checkpointing to enable multi-turn conversations. + +## Step 1: Add a `MemorySaver` checkpointer + +To get started, create a `MemorySaver` checkpointer. `MemorySaver` is an in-memory checkpointer that saves the state of the graph in memory. This is useful for testing and development, but in production, you will want to use a persistent checkpointer like [`SqliteSaver`](https://langchain-ai.github.io/langgraphjs/reference/classes/checkpoint_sqlite.SqliteSaver.html) or [`MongoDBSaver`](https://langchain-ai.github.io/langgraphjs/reference/classes/checkpoint_mongodb.MongoDBSaver.html). For this tutorial, `MemorySaver` is sufficient. + +First, we need to import the `MemorySaver` class from LangGraph. Add the import statement to the top of your `chatbot.ts` file: + +```ts +import { MemorySaver } from "@langchain/langgraph"; +``` + +Then, update the code that creates the runnable agent to use a checkpointer. As a reminder, it should currently look like this: + +```ts +// Define the graph and compile it into a runnable +export const app = new StateGraph(MessagesAnnotation) + .addNode("agent", callModel) + .addEdge("__start__", "agent") + .addNode("tools", tools) + .addConditionalEdges("agent", shouldUseTool) + .addEdge("tools", "agent") + .compile(); +``` + +We need to pass an instance of `MemorySaver` to the `compile` method. Update the call to `compile` to the following: + +```ts +.compile({ checkpointer: new MemorySaver() }); +``` + +This change doesn't affect how the graph runs. What it does is specify that the state of the graph should be saved every time it finishes executing a node. + +## Step 2: Replace manual state track with the checkpointer + +Previously, we were manually tracking the state of the conversation using the `messages` array. Now that the graph has a checkpointer, we don't have to track the state manually. + +Let's remove the `messages` array and the code that updates it with messages from the user and agent. 
Delete the following three pieces of code from near the bottom of your `chatloop.ts` file: + +```ts +const messages = Array(); + +// Add the user's message to the conversation history +messages.push({ content: answer, role: "user" }); + +messages.push(output.messages[output.messages.length - 1]); +``` + +Since `messages` is no longer defined, we're getting an error now on the following line where the chatbot is invoked: + +```ts +const output = await app.invoke({ messages }); +``` + +The app still needs us to pass the _new_ message from the user when we invoke it, but the checkpointer will save it to the graph's state after that. Update the line to the following: + +```ts +const output = await app.invoke( + { + messages: [{ content: answer, role: "user" }] + }, + { configurable: { thread_id: "42" } } +); +``` + +Notice that we are now passing **two** arguments to `invoke()` - the first object contains the messages, and the second object contains the configurable `thread_id`. + +We're using the `MessagesAnnotation` helper, which has a reducer that will append the new message to the graph's `messages` state. This way each time we invoke the chatbot it will get the new message and all the previous messages from this conversation thread. + +The compiled graph now has access to a checkpointer to save progress as it executes the graph. To use it, we are providing a `thread_id` value when calling `.invoke()`. In a real application, you'd probably want to generate unique thread IDs using something like UUID or nanoid. For now, we're using a hardcoded value of "42". + +## Step 3: Test the chatbot + +At this point, the chatbot should be back to a runnable state! Test its memory by asking some questions that depend on the context of the previous question(s). + +As a reminder, you can run it with `npx tsx chatloop.ts`. Let's try asking it about the weather in a few locations, but not tell it we're asking about the weather each time. If it has context of the previous questions, it should be able to figure it out anyway. + +``` +User: what's the weather in seattle? +Agent: The current weather in Seattle is sunny with a temperature of 31.7°C (89.1°F). The wind is coming from the west-northwest at 4.3 mph, and the humidity is at 35%. There is no precipitation, and visibility is good at 16 km (9 miles). + +For more details, you can check the full report [here](https://www.weatherapi.com/). +User: how about ny +Agent: The current weather in New York is clear with a temperature of 20.6°C (69.1°F). The wind is coming from the east-northeast at 6.9 mph, and the humidity is at 57%. There is no precipitation, and visibility is good at 16 km (9 miles). + +For more details, you can check the full report [here](https://www.weatherapi.com/). +User: q +Goodbye! +``` + +Wow, it sure is nice out! And even though we only asked "how about ny", the chatbot was able to infer that we were asking about the weather. This is because it remembered the context of the previous question. + +Great job getting this far! When you're ready to continue, we're going to [add a human in the loop](/first-agent/4-human-in-the-loop.md) for any actions we don't want the chatbot to take with full autonomy. + +Here's what the final code from this section looks like: + +
+```ts +// chatbot.ts +import { ChatAnthropic } from "@langchain/anthropic"; +import { ToolNode } from "@langchain/langgraph/prebuilt"; +import { StateGraph, MessagesAnnotation } from "@langchain/langgraph"; +import { TavilySearchResults } from "@langchain/community/tools/tavily_search"; +import { MemorySaver } from "@langchain/langgraph"; +import type { AIMessage } from "@langchain/core/messages"; + +// read the environment variables from .env +import "dotenv/config"; + +const searchTool = new TavilySearchResults({ maxResults: 3 }); +const tools = new ToolNode([searchTool]); + +// Create a model and give it access to the tools +const model = new ChatAnthropic({ + model: "claude-3-5-sonnet-20240620", + temperature: 0, +}).bindTools([searchTool]); + +// Define the function that calls the model +async function callModel(state: typeof MessagesAnnotation.State) { + const messages = state.messages; + + const response = await model.invoke(messages); + + return { messages: response }; +} + +function shouldUseTool(state: typeof MessagesAnnotation.State) { + const lastMessage: AIMessage = state.messages[state.messages.length - 1]; + + // If the LLM makes a tool call, then we route to the "tools" node + if (lastMessage.tool_calls?.length) { + return "tools"; + } + // Otherwise, we stop (reply to the user) using the special "__end__" node + return "__end__"; +} + +// Define the graph and compile it into a runnable +export const app = new StateGraph(MessagesAnnotation) + .addNode("agent", callModel) + .addEdge("__start__", "agent") + .addNode("tools", tools) + .addConditionalEdges("agent", shouldUseTool) + .addEdge("tools", "agent") + .compile({ checkpointer: new MemorySaver() }); +``` +
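As noted at the start of this part, `MemorySaver` keeps checkpoints in process memory, so they disappear when the process exits. A sketch of swapping in the SQLite checkpointer instead is shown below; it assumes the package name `@langchain/langgraph-checkpoint-sqlite` (install it first) and reuses `callModel`, `tools`, and `shouldUseTool` from the `chatbot.ts` above, so treat it as a variant of that file rather than a standalone script:

```ts
// chatbot.ts (sketch of a persistent-checkpointer variant)
import { SqliteSaver } from "@langchain/langgraph-checkpoint-sqlite";
import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";

// Checkpoints are written to a local SQLite file, so threads survive restarts.
const checkpointer = SqliteSaver.fromConnString("checkpoints.db");

// Same nodes and edges as above; only the checkpointer passed to compile() changes.
export const app = new StateGraph(MessagesAnnotation)
  .addNode("agent", callModel)
  .addEdge("__start__", "agent")
  .addNode("tools", tools)
  .addConditionalEdges("agent", shouldUseTool)
  .addEdge("tools", "agent")
  .compile({ checkpointer });
```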
+
+```ts +// chatloop.ts +import { app } from "./chatbot.ts"; + +// Create a command line interface to interact with the chat bot + +// We'll use these helpers to read from the standard input in the command line +import * as readline from "node:readline/promises"; +import { stdin as input, stdout as output } from "node:process"; + +async function chatLoop() { + const lineReader = readline.createInterface({ input, output }); + + console.log("Type 'exit' or 'quit' to quit"); + + while (true) { + const answer = await lineReader.question("User: "); + if (["exit", "quit", "q"].includes(answer.toLowerCase())) { + console.log("Goodbye!"); + lineReader.close(); + break; + } + + // Run the chatbot with the user's input, using the same thread_id each time. + const output = await app.invoke( + { + messages: [{ content: answer, role: "user" }], + }, + { configurable: { thread_id: "42" } }, + ); + + console.log("Agent: ", output.messages[output.messages.length - 1].content); + } +} +chatLoop().catch(console.error); +``` +
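The chat loop above hardcodes `thread_id: "42"`, so every run shares one conversation. To see why unique thread IDs matter, here is a small sketch (a hypothetical scratch file) that uses `crypto.randomUUID()` to create two independent threads against the same compiled graph:

```ts
// threads-demo.ts (hypothetical scratch file)
import { randomUUID } from "node:crypto";
import { app } from "./chatbot.ts";

async function main() {
  // Two different thread_ids means two completely separate conversations.
  const aliceThread = { configurable: { thread_id: randomUUID() } };
  const bobThread = { configurable: { thread_id: randomUUID() } };

  await app.invoke(
    { messages: [{ content: "Hi, my name is Alice.", role: "user" }] },
    aliceThread
  );
  const reply = await app.invoke(
    { messages: [{ content: "What's my name?", role: "user" }] },
    bobThread
  );

  // The bot can't know: Bob's thread never saw Alice's message.
  console.log(reply.messages[reply.messages.length - 1].content);
}

main().catch(console.error);
```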
diff --git a/docs/docs/tutorials/first-agent/4-human-in-the-loop.md b/docs/docs/tutorials/first-agent/4-human-in-the-loop.md new file mode 100644 index 000000000..8cd219613 --- /dev/null +++ b/docs/docs/tutorials/first-agent/4-human-in-the-loop.md @@ -0,0 +1,342 @@ +# Part 4: Human-in-the-loop + +Agents can be unreliable and may need human input to successfully accomplish tasks. Similarly, you may want to require human confirmation before the agent performs sensitive actions, like making a purchase. + +LangGraph supports `human-in-the-loop` workflows in a number of ways. In this section, we will use LangGraph's `interruptBefore` functionality to always interrupt before the tools node runs. + +We'll continue using the final code from [part 3 - adding memory to the chatbot](/first-agent/3-persistent-state.md). If you haven't been following along, make sure you complete the [setup steps](/first-agent/0-setup.md) and copy the code from part 3 before continuing. + +## Step 1: Interrupt agent execution + +To start, we need to decide _where_ in the graph's execution the agent should wait for human feedback. LangGraph provides two options - _before_ or _after_ a node is run. Let's set up the agent to wait for human feedback before using a tool. + +Interrupts don't change the _structure_ of the graph, only how it is _executed_. As a result, we don't need to change the way we build the graph. Instead, we'll specify where we want interrupts to occur when we _compile_ the graph. + +Locate the code that builds and compiles the graph in your `chatbot.ts` file. It should look like this: + +```ts +// Define the graph and compile it into a runnable +export const app = new StateGraph(MessagesAnnotation) + .addNode("agent", callModel) + .addEdge("__start__", "agent") + .addNode("tools", tools) + .addConditionalEdges("agent", shouldUseTool) + .addEdge("tools", "agent") + .compile({ checkpointer: new MemorySaver() }); +``` + +We need to change the last line - the `.compile()` call - to specify that we want an interrupt before the `"tools"` node runs. It's possible to add interrupts before multiple nodes, so we will specify our interrupts as an array. Since we only want to interrupt before one node, it'll be an array with a single value. Update the `compile` code to the following: + +```ts +.compile({ checkpointer: new MemorySaver(), interruptBefore: ["tools"] }); +``` + +This change will cause the agent's execution to stop before it runs a tool. Before we can try it out to see how it works, we need to make an update to our chat loop. + +Currently, the chat loop prints the `content` of the last message from the agent. This has worked so far because when the agent requested a tool, the `"tools"` node ran it and then the agent got invoked again with the results. Now that execution is interrupted before the `"tools"` node, the last message from the agent will be the one requesting to use a tool. Those messages have no `content`, so trying to chat now will result in what feels like the agent equivalent of a blank stare: + +``` +User: How's the weather today in sf? +Agent: +``` + +Let's update how we log the output of the agent so we can see what's going on. Find the final `console.log` and update it to remove the final `.content` so it matches the following: + +```ts +console.log("Agent: ", output.messages[output.messages.length - 1]); +``` + +Now when the agent wants to use a tool, we can see the full request. Try it out by running `npx tsx chatloop.ts`.
Your result should look something like this: + +``` +User: I'm learning LangGraph. Could you do some research on it for me? +Agent: AIMessage { + "id": "chatcmpl-A4ZQ4tc5ILjuPX8oV0Ovud2W7pqpr", + "content": "", + "response_metadata": { + "tokenUsage": { + "completionTokens": 19, + "promptTokens": 87, + "totalTokens": 106 + }, + "finish_reason": "tool_calls", + "system_fingerprint": "fp_483d39d857" + }, + "tool_calls": [ + { + "name": "tavily_search_results_json", + "args": { + "input": "LangGraph" + }, + "type": "tool_call", + "id": "call_CRsquDkg5zJ5DGwSqXCA7KP5" + } + ], + "invalid_tool_calls": [], + "usage_metadata": { + "input_tokens": 87, + "output_tokens": 19, + "total_tokens": 106 + } +} +``` + +If you try to continue the conversation, you'll get an error because it's expecting the next message to come from a tool, not a human. We'll fix that in the next step. + +## Step 2: Add human confirmation + +Right now, the code in `chatloop.ts` that is responsible for running the agent and printing the result looks something like this: + +```ts +// Run the chatbot and add its response to the conversation history +const output = await app.invoke( + { + messages: [{ content: answer, role: "user" }] + }, + { configurable: { thread_id: "42" } } +); + +console.log("Agent: ", output.messages[output.messages.length - 1]); +``` + +There's nothing here to detect if the agent is trying to run a tool, nor is there a way for a human to confirm that it's okay for the tool to run. We need to change that! + +Notice that in the `AIMessage` object example at the end of step 1, the `AIMessage` object has a `tool_calls` field. If that field contains an array, the agent is requesting a tool run. Otherwise, the field will be `undefined`. Let's update our chat loop to check for it and ask the human if graph execution should continue. + +Add the following code in between the `app.invoke()` call and the subsequent `console.log()` that prints the output: + +```ts +// 1. Check if the AI is trying to use a tool +const lastMessage = output.messages[output.messages.length - 1]; +if (lastMessage instanceof AIMessage && lastMessage.tool_calls !== undefined) { + console.log( + "Agent: I would like to make the following tool calls: ", + lastMessage.tool_calls + ); + + // 2. Let the human decide whether to continue or not + const humanFeedback = await lineReader.question( + "Type 'y' to continue, or anything else to exit: " + ); + if (humanFeedback.toLowerCase() !== "y") { + console.log("Goodbye!"); + lineReader.close(); + break; + } + + // 3. No new state is needed for the agent to use the tool, so pass `null` + const outputWithTool = await app.invoke(null, { + configurable: { thread_id: "42" } + }); + console.log( + "Agent: ", + outputWithTool.messages[outputWithTool.messages.length - 1].content + ); + continue; +} +``` + +We're also using the `AIMessage` class, which you'll need to import at the top: + +```ts +import { AIMessage } from "@langchain/core/messages"; +``` + +Great! There are three things going on in this addition to the chat loop: + +1. We are checking if the agent wants to use a tool. If it does, we print out the details of the requested tool call so the human can make a decision. +2. We ask the human if they want to continue. If they don't, we exit the chat loop. +3. Since the graph execution was simply paused, we don't need to add any new state to continue. Once the human has approved the tool call, we continue execution by calling `app.invoke()` again with `null` as the new state. 
+ +Try running the chatbot again with `npx tsx chatloop.ts`. When the agent requests a tool, you should see the details of the request tool call and be prompted to continue. If you type `y`, the agent will continue and run the tool. If you type anything else, the chat loop will exit. + +Here's an example run: + +``` +User: I'm learning LangGraph. Could you do some research on it for me? +Agent: I would like to make the following tool calls: [ + { + name: 'tavily_search_results_json', + args: { input: 'LangGraph' }, + type: 'tool_call', + id: 'call_pEIxSTbokDU1c1ba0UsEACAH' + } +] +Type 'y' to continue, or anything else to exit: y +Agent: Here are some key resources and information about LangGraph: + +1. **LangGraph Overview**: + - **Website**: [LangChain](https://www.langchain.com/langgraph) + - LangGraph is a framework designed for building stateful, multi-actor agents using large language models (LLMs). It allows for handling complex scenarios and enables collaboration with humans. You can use LangGraph with Python or JavaScript and deploy your agents at scale using LangGraph Cloud. + +2. **Documentation and Features**: + - **GitHub Pages**: [LangGraph Documentation](https://langchain-ai.github.io/langgraph/) + - This documentation provides insights into creating stateful, multi-actor applications with LLMs. It covers concepts like cycles, controllability, and persistence, along with examples and integration with LangChain and LangSmith. + +3. **Tutorials and Guides**: + - **DataCamp Tutorial**: [LangGraph Tutorial](https://www.datacamp.com/tutorial/langgraph-tutorial) + - This tutorial explains how to use LangGraph to develop complex, multi-agent LLM applications. It focuses on creating stateful, flexible, and scalable systems, detailing the use of nodes, edges, and state management. + +These resources should help you get started with LangGraph and understand its capabilities in building advanced applications with LLMs. +User: when should I use langgraph vs langchain? +Agent: I would like to make the following tool calls: [ + { + name: 'tavily_search_results_json', + args: { input: 'when to use LangGraph vs LangChain' }, + type: 'tool_call', + id: 'call_Q0b5YA0I9ibuqDVhG9ftHfu6' + } +] +Type 'y' to continue, or anything else to exit: y +Agent: When deciding between LangGraph and LangChain, consider the following points: + +1. **LangGraph**: + - **Use Case**: LangGraph is specifically designed for building stateful, multi-actor agents. It excels in scenarios where you need to manage complex interactions and workflows among multiple agents or components. + - **Features**: It allows for the creation of intelligent AI agents using graph structures, enabling more powerful and flexible applications. LangGraph is particularly useful for applications that require collaboration between agents and human users. + - **Ideal For**: If your project involves multi-agent environments or requires advanced state management and interaction patterns, LangGraph is the better choice. + +2. **LangChain**: + - **Use Case**: LangChain is a more general framework for building applications powered by large language models (LLMs). It simplifies the development of applications by allowing you to define and execute action sequences (chains) easily. + - **Features**: LangChain supports the creation of directed acyclic graphs (DAGs) for managing workflows, making it suitable for a wide range of LLM applications. 
+ - **Ideal For**: If your project focuses on simpler applications or workflows that do not require the complexity of multi-agent interactions, LangChain may be sufficient. + +In summary, use **LangGraph** for complex, multi-agent applications requiring advanced state management, and use **LangChain** for more straightforward LLM applications that benefit from action chaining. +User: quit +Goodbye! +``` + +## Conclusion + +**Congrats!** You've used an `interrupt` to add human-in-the-loop execution to your chatbot, allowing for human oversight and intervention when needed. In practice, breakpoints like this are used to guard sensitive tools that perform potentially destructive actions against LLM hallucinations or prompt injection. They can be used to present end-users a UI to confirm details before performing a transaction such as buying a stock or booking a flight, or to collect more information by [updating the graph state](https://langchain-ai.github.io/langgraphjs/how-tos/edit-graph-state/) before resuming. + +Now you have built a very capable chatbot. It can handle complex queries using external tools, wait for human confirmation before performing sensitive actions, and can keep track of conversations. At this point, you have a solid foundation to build AI agents for your own applications using LangGraph. We have [a plethora of how-to guides](https://langchain-ai.github.io/langgraphjs/how-tos/) that can help you improve your agent app, such as: + +- [Streaming LLM tokens as they are generated](https://langchain-ai.github.io/langgraphjs/how-tos/stream-tokens/) +- [View and update graph state](https://langchain-ai.github.io/langgraphjs/how-tos/time-travel/) +- [Add a summary of the conversation](https://langchain-ai.github.io/langgraphjs/how-tos/add-summary-conversation-history/#using-the-graph) + +Next, we'll explore our agent graph using [LangGraph Studio](https://github.com/langchain-ai/langgraph-studio). Studio is a specialized agent IDE that enables visualization, interaction, and debugging of complex agentic applications. Dive into part 5 and [iterate in LangGraph Studio (beta)](/first-agent/5-iterate-studio.md)! + +Below is a copy of the final code from this section. + +
+```ts +import { ChatAnthropic } from "@langchain/anthropic"; +import { AIMessage } from "@langchain/core/messages"; +import { ToolNode } from "@langchain/langgraph/prebuilt"; +import { StateGraph, MessagesAnnotation } from "@langchain/langgraph"; +import { TavilySearchResults } from "@langchain/community/tools/tavily_search"; +import { MemorySaver } from "@langchain/langgraph"; + +// read the environment variables from .env +import "dotenv/config"; + +const searchTool = new TavilySearchResults({ maxResults: 3 }); +const tools = new ToolNode([searchTool]); + +// Create a model and give it access to the tools +const model = new ChatAnthropic({ + model: "claude-3-5-sonnet-20240620", + temperature: 0, +}).bindTools([searchTool]); + +// Define the function that calls the model +async function callModel(state: typeof MessagesAnnotation.State) { + const messages = state.messages; + + const response = await model.invoke(messages); + + return { messages: response }; +} + +function shouldUseTool(state: typeof MessagesAnnotation.State) { + const lastMessage: AIMessage = state.messages[state.messages.length - 1]; + + // If the LLM makes a tool call, then we route to the "tools" node + if (lastMessage.tool_calls?.length) { + return "tools"; + } + // Otherwise, we stop (reply to the user) using the special "__end__" node + return "__end__"; + } + +// Define the graph and compile it into a runnable +export const app = new StateGraph(MessagesAnnotation) + .addNode("agent", callModel) + .addEdge("__start__", "agent") + .addNode("tools", tools) + .addConditionalEdges("agent", shouldUseTool) + .addEdge("tools", "agent") + .compile({ checkpointer: new MemorySaver(), interruptBefore: ["tools"] }); +``` +
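If you want to confirm that the graph really is paused at the interrupt (rather than finished), a compiled graph with a checkpointer exposes its saved state through `getState()`. A small sketch, assuming the `app` exported above and the same hard-coded thread id used in the chat loop:

```ts
// inspect-pause.ts (hypothetical scratch file)
import { app } from "./chatbot.ts";

async function inspectPause() {
  const config = { configurable: { thread_id: "42" } };

  // The snapshot holds the checkpointed state for this thread.
  const snapshot = await app.getState(config);

  // `next` lists the node(s) that will run when execution resumes.
  // While we are waiting at the interrupt, it should include "tools".
  console.log("Next nodes:", snapshot.next);

  // The pending tool request is the last message in the saved state.
  console.log(snapshot.values.messages[snapshot.values.messages.length - 1]);
}

inspectPause().catch(console.error);
```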
+
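+And here is the final `chatloop.ts`, which runs the command-line conversation and handles the tool-call confirmation:
+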
+```ts
+// chatloop.ts
+import { AIMessage } from "@langchain/core/messages";
+import { app } from "./chatbot.ts";
+
+// Create a command line interface to interact with the chat bot
+// We'll use these helpers to read from the standard input in the command line
+import * as readline from "node:readline/promises";
+import { stdin as input, stdout as output } from "node:process";
+
+async function chatLoop() {
+  const lineReader = readline.createInterface({ input, output });
+
+  console.log("Type 'exit' or 'quit' to quit");
+
+  while (true) {
+    const answer = await lineReader.question("User: ");
+    if (["exit", "quit", "q"].includes(answer.toLowerCase())) {
+      console.log("Goodbye!");
+      lineReader.close();
+      break;
+    }
+
+    // Run the chatbot and add its response to the conversation history
+    const output = await app.invoke(
+      {
+        messages: [{ content: answer, type: "user" }],
+      },
+      { configurable: { thread_id: "42" } },
+    );
+
+    // Check if the AI is trying to use a tool
+    const lastMessage = output.messages[output.messages.length - 1];
+    if (lastMessage instanceof AIMessage && lastMessage.tool_calls?.length) {
+      console.log(
+        "Agent: I would like to make the following tool calls: ",
+        lastMessage.tool_calls,
+      );
+
+      // Let the human decide whether to continue or not
+      const humanFeedback = await lineReader.question(
+        "Type 'y' to continue, or anything else to exit: ",
+      );
+      if (humanFeedback.toLowerCase() !== "y") {
+        console.log("Goodbye!");
+        lineReader.close();
+        break;
+      }
+
+      // No new state is needed for the agent to use the tool, so pass `null`
+      const outputWithTool = await app.invoke(null, {
+        configurable: { thread_id: "42" },
+      });
+      console.log(
+        "Agent: ",
+        outputWithTool.messages[outputWithTool.messages.length - 1].content,
+      );
+      continue;
+    }
+
+    // Print the text of the agent's reply
+    console.log("Agent: ", lastMessage.content);
+  }
+}
+chatLoop().catch(console.error);
+```
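+
+If you'd like to go a step further, the how-to on [updating the graph state](https://langchain-ai.github.io/langgraphjs/how-tos/edit-graph-state/) linked above shows how a reviewer can correct a pending tool call instead of only approving or rejecting it. As a rough sketch (separate from the chat loop above, and assuming the same `thread_id` convention), it might look like this:
+
+```ts
+// Sketch only: correct the pending tool call before resuming the interrupted run
+import { AIMessage } from "@langchain/core/messages";
+import { app } from "./chatbot.ts";
+
+const config = { configurable: { thread_id: "42" } };
+
+// Read the state saved by the checkpointer at the interrupt
+const snapshot = await app.getState(config);
+const lastMessage = snapshot.values.messages.at(-1) as AIMessage;
+
+// Rewrite the tool call's arguments (a hypothetical correction)
+lastMessage.tool_calls![0].args = { input: "current weather in San Francisco, CA" };
+
+// Save the edited message back to the thread, then resume from the interrupt
+await app.updateState(config, { messages: [lastMessage] });
+const result = await app.invoke(null, config);
+console.log(result.messages.at(-1)?.content);
+```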
diff --git a/docs/docs/tutorials/first-agent/5-iterate-studio.md b/docs/docs/tutorials/first-agent/5-iterate-studio.md
new file mode 100644
index 000000000..3938badfb
--- /dev/null
+++ b/docs/docs/tutorials/first-agent/5-iterate-studio.md
@@ -0,0 +1,191 @@
+# Part 5: Iterate in LangGraph Studio (beta)
+
+In the previous tutorials, we built a chatbot that can answer user questions using a Large Language Model (LLM) and tools, and we added memory so it can hold multi-turn conversations. Now we will use LangGraph Studio, a specialized agent IDE that enables visualization, interaction, and debugging of complex agentic applications. Studio makes it easy to iterate on your agent and debug its behavior.
+
+> **Note:** LangGraph Studio is currently in beta and only available for macOS. If you encounter any issues or have feedback, please let us know!
+
+## Step 1: Set up the prerequisites
+
+To use LangGraph Studio (beta), you will need to have a few things set up on your machine:
+
+- [Node.js v20](https://nodejs.org/en/download/releases/). LangGraph Studio currently only supports Node.js v20.
+- [Docker Desktop](https://www.docker.com/products/docker-desktop). LangGraph Studio runs your agent in a Docker container.
+- A LangSmith account. You can [sign up for a free account](https://smith.langchain.com/) if you don't have one already. LangSmith is a service from LangChain that makes it easy to implement agent observability.
+- The latest release of LangGraph Studio. You can download it [from here](https://github.com/langchain-ai/langgraph-studio/releases).
+- A good network connection, as LangGraph Studio needs to download the Docker container image that is used to run your agent.
+
+To follow along with this tutorial, you should have also completed the previous tutorials in this series, since we'll be using the chatbot built in them. If you haven't been following along, start by [building your first LangGraph agent](0-setup.md). If you're already comfortable with LangGraph and just want to learn about LangGraph Studio, you can grab the code from the bottom of the [previous tutorial](4-human-loop.md) and use that as a starting point to follow along.
+
+Let's start by getting all the dependencies set up.
+
+### Setting up Node.js v20 using Node Version Manager (NVM)
+
+The LangGraph Studio beta currently only supports Node.js version 20. You can check your current node version using the command `node -v`. If you have a different version of node installed, we recommend using [nvm](https://github.com/nvm-sh/nvm) to install multiple node versions side-by-side. Follow the installation steps in the nvm repo to get it installed.
+
+Once you have nvm installed, set the default node version to v20 so LangGraph Studio can use it. You can use the following commands to install Node v20 and set it as the default using nvm:
+
+```bash
+nvm install 20
+nvm alias default 20
+```
+
+After running these commands, you can check that your node version is set to v20 using `node -v`. If you need to change to another version for any reason, you can use the same commands, replacing `20` with the version you want to use.
+
+### Docker Desktop
+
+You will need to have Docker Desktop running on your machine for LangGraph Studio to be able to run your agent. If you don't have Docker installed on your machine, you can [download Docker from the official website](https://www.docker.com/products/docker-desktop/).
Studio will automatically use Docker to pull the container image used to run your agent. Once you've got it downloaded, run the Docker Desktop application. + +Additionally, you'll need to tell LangGraph Studio where your agent is located. This is done using a `langgraph.json` file in the root of your project. This file should contain the following information: + +```json +{ + "node_version": "20", + "dockerfile_lines": [], + "dependencies": ["."], + "graphs": { + "agent": "./chatbot.ts:app" + }, + "env": ".env" +} +``` + +Notice that in the `graphs` field, we have the following value: + +```json +"agent": "./chatbot.ts:app" +``` + +The key, `"agent"`, tells LangGraph Studio how to identify the graph in the UI. You can name it whatever you want, and use different names if you wish to view multiple agents from the same project in LangGraph Studio. The value, `"./chatbot.ts:app"`, tells LangGraph Studio where to find the graph. The first part, `"./chatbot.ts"`, is the path to the file containing the graph. If your `chatbot.ts` file were in a `src` folder, your path might be `./src/chatbot.ts` instead. The second part, separated from the path using a colon `":app"`, is the name of the graph in that file. The graph must be _exported_ as a variable with this name. If you named your graph something other than `app`, you would replace `app` with the name of your graph. + +### LangSmith and LangGraph Studio + +Before you can open your agent in LangGraph Studio, you'll have to complete a few setup steps: + +1. [Sign up for a free LangSmith account](https://smith.langchain.com/). +2. Download the latest release of LangGraph Studio [from here](https://github.com/langchain-ai/langgraph-studio/releases) and install it on your machine. +3. Open the LangGraph Studio app and log in with your LangSmith account. + +## Step 2: Setup LangGraph Studio (beta) + +Once you have logged into LangSmith from inside LangGraph Studio, you should see the following screen: + +![LangGraph Studio project selection screen](./img/langgraph-studio-project-selection.png) + +From this screen, you can choose the LangGraph application folder to use. Either drag and drop or manually select the folder that contains your `langgraph.json`, `chatbot.ts`, and `chatloop.ts` files to open your agent in Studio! + +It may take a few minutes for your project to open. LangGraph Studio runs your agent in a Docker container, and it takes some time for the container image to download and start up. If you encounter any issues, please let us know by [opening an issue on the LangGraph Studio GitHub repository](https://github.com/langchain-ai/langgraph-studio/issues/new). LangGraph Studio is currently in beta, and reporting any issues you encounter helps us resolve them and improve the tool for everyone. + +Once your project is open in Studio, you should see the following screen showing a visual representation of your agent graph: + +![LangGraph Studio screen displaying a visualization of the agent graph and an interface to chat with the agent](./img/langgraph-studio-graph-view.png) + +This visual graph is a valuable tool for understanding the structure of an agent. You can see the nodes and edges that make up the agent's logic, and you can interact with the agent by sending messages to it. This is a much better interface than the basic console interface we built in `chatloop.ts`. Let's try it out! + +## Step 3: Interact with your agent in LangGraph Studio + +Now that your agent is open in Studio, let's try interacting with it here. 
In the bottom left corner, there's a panel labeled "Input" that contains a "+ Message" button. Click the button and type a message. Notice that a dropdown appears that says "Human" in it. It lets you choose what type of message to send, which is very helpful for debugging. We'll come back to this later, but for now leave it as "Human".
+
+Let's try a familiar prompt and take a look at how LangGraph Studio makes it easier to understand how the agent responds to it. Ask the agent "What's the weather like in SF right now?" Press the submit button to send the message and start the execution of your agent graph.
+
+A few things happen as a result of sending your message:
+
+1. The agent graph visualization shows where execution is happening - in the "agent" node - as well as the edge taken to get there.
+2. The right panel shows a conversation thread UI, where each sender is the graph node that produced the message.
+3. Execution stops after the agent node, because it wants to make a tool call - using Tavily to search for the current weather in San Francisco.
+
+You may recall that at the end of [the previous tutorial on human-in-the-loop](4-human-loop.md), we added an interrupt that stops execution before making a tool call. Back then we had to write a bunch of updates in `chatloop.ts` so we could preview the tool request and approve or deny the tool call. LangGraph Studio provides a much nicer way to do this, and the only code it requires is the agent graph!
+
+## Step 4: Human in the loop using LangGraph Studio
+
+In the message thread UI on the right side, you can see your own message asking about the weather as well as the agent's request for a tool call. It should look similar to this:
+
+![The LangGraph Studio thread UI view, showing the initial human request as "__start__" and the agent's request for a tool call as "agent"](./img/langgraph-studio-thread-view.png)
+
+LangGraph Studio formats the tool call as a table that communicates:
+
+- The tool that was requested - `tavily_search_results_json`
+- The argument names and values for the tool request - `{ "input": "current weather in San Francisco" }`
+
+This view is much easier to read than the raw JSON and the CLI output we made previously. If you want to see the full details of the tool request, you can use the toggle at the top to switch from the "Pretty" view to "JSON". The JSON view is formatted and supports expanding and collapsing nested objects, making it easy to explore the structure of the tool request data.
+
+You probably noticed that there is a "Continue" button at the bottom of the message thread. Next to it is helpful information about which node will run next if the "Continue" button is pressed. In our case, that's the "tools" node. When you hover over the "Continue" button or any other message in the thread, the corresponding node in the graph visualization will be highlighted.
+
+The LangGraph Studio UI makes it a lot easier to understand and test human-in-the-loop workflows. When you are ready to continue, press the "Continue" button.
+
+## Step 5: Time travel debugging
+
+Continuing from the interrupt, the tools node runs and fetches the current weather data using Tavily. The result of the tool run is added to the thread as a JSON message from "tools". Studio nicely formats the JSON and provides expand/collapse controls for nested objects, even in the "Pretty" view.
The response is passed to the agent, which uses the search results to provide a response to the initial query. The response may vary based on when your query is run, but it should be something like this: + +```md +The current weather in San Francisco is as follows: + +- **Temperature**: 64°F (17.8°C) +- **Condition**: Overcast +- **Humidity**: 90% +- **Wind**: 2.2 mph from the WSW +- **Visibility**: 9 miles + +For more details, you can check the [Weather API](https://www.weatherapi.com/) or the [National Weather Service](https://forecast.weather.gov/MapClick.php?lon=-122.43573&lat=37.68754). +``` + +Imagine that the agent was producing some weird or unexpected results, such as claiming that San Francisco had anything other than beautiful, warm, sunny weather. We can attempt to debug such a nonsense answer by looking at the message from "tools" to understand what information the LLM is using to generate a response. Tavily should have returned 3 results as specified in the tool config in `chatbot.ts`. One of the results is from [weatherapi.com](https://weatherapi.com), and it contains information such as the following (trimmed for brevity): + +```json +"content": { + "location": { + "name": "San Francisco", + "region": "California", + "country": "United States of America", + }, + "current": { + "temp_c": 17.8, + "temp_f": 64.0, + "condition": { + "text": "Overcast", + }, + "humidity": 90, + "cloud": 100, + "feelslike_c": 17.8, + "feelslike_f": 64.0, + "windchill_c": 14.2, + "windchill_f": 57.5, + "vis_km": 16.0, + "vis_miles": 9.0, + "uv": 3.0, + } +} +``` + +Note that LLM generation and search results may change over time, and your results may not include the same sources. If that's the case, do your best to follow along with whatever search results you have. + +Given these search results claiming that San Francisco is only 64°F and overcast, it's easy to see why the LLM answered with something other than beautiful 75°F and sunny weather. + +Fortunately, LangGraph Studio makes it easy to do time-travel debugging. It's a technique where past state is changed, and then the execution is continued using the modified state. To do this, hover over the "tools" message and click the pencil icon to edit the message. + +![A screenshot of LangGraph Studio with a red arrow drawn on top to draw attention to the edit icon below the message from "tools". On the left side of the window, the "tools" node is highlighted in the graph visualization](img/langgraph-studio-edit-message.png) + +Now update the tool message to reflect the desired weather. I'll be updating it to 75° and sunny with 0 clouds. Once you've updated the tools result with new information, notice that there's a new "Fork" button below the edited message: + +![A view of the message from "tools" with the content collapsed to reveal the buttons below. The primary button is labeled "fork" and the secondary button is labeled "cancel"](img/langgraph-studio-fork-thread.png) + +The "Continue" button from earlier would continue the graph execution in the current thread. The "Fork" button will create an alternate timeline or "thread" where the agent is provided the modified search results instead. This is a powerful debugging tool that allows you to explore different paths your agent could have taken based on different inputs. 
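+For reference, after the edit the relevant fields of the tool result might look something like this (the values here are invented for the example):
+
+```json
+"current": {
+  "temp_c": 23.9,
+  "temp_f": 75.0,
+  "condition": {
+    "text": "Sunny"
+  },
+  "cloud": 0
+}
+```
+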
Forking the conversation using the new and improved San Francisco weather information yields the following answer from the LLM:
+
+```md
+The current weather in San Francisco is sunny with a temperature of 75°F (approximately 24°C). The humidity is at 90%, and there is a light wind coming from the west-southwest at about 2.2 mph.
+
+For more details, you can check the [Weather API](https://www.weatherapi.com/) or the [National Weather Service](https://forecast.weather.gov/MapClick.php?lon=-122.43573&lat=37.68754).
+```
+
+Warm and sunny with a light breeze, much better!
+
+In addition to editing past messages, LangGraph Studio provides easy ways to switch between forks in a conversation thread. Below the "tools" message, you should now see a small interface element labeled "Fork 2 / 2" with arrows to switch between the versions of the conversation that start from the different versions of the tools message. This makes it easy to compare the agent's responses to different inputs.
+
+## Summary
+
+LangGraph Studio has powerful tools that make it easy to start interacting with an agent, debug its behavior, and validate that your graph works as expected before deploying it. While this tutorial provides a nice overview of some of the capabilities, there are others that can be extremely helpful. For example:
+
+- [**Create and edit threads**](https://github.com/langchain-ai/langgraph-studio?tab=readme-ov-file#create-and-edit-threads) easily to have multiple separate chats and explore different execution paths.
+- [**Manage interrupts**](https://github.com/langchain-ai/langgraph-studio?tab=readme-ov-file#how-to-add-interrupts-to-your-graph) without code changes using the Studio GUI.
+- [**Specify custom graph configurations**](https://github.com/langchain-ai/langgraph-studio?tab=readme-ov-file#configure-graph-run) that allow you to customize the runtime behavior of your agent.
+
+Now that you know how to get your agent working exactly the way you want by iterating in LangGraph Studio, the final step is to deploy it! In the next tutorial, we'll cover how to [deploy your agent to the cloud](6-deploy-to-cloud.md) using LangGraph Cloud.
diff --git a/docs/docs/tutorials/first-agent/6-deploy-to-cloud.md b/docs/docs/tutorials/first-agent/6-deploy-to-cloud.md
new file mode 100644
index 000000000..2cfd79f00
--- /dev/null
+++ b/docs/docs/tutorials/first-agent/6-deploy-to-cloud.md
@@ -0,0 +1,244 @@
+# Part 6: Deploy your agent using LangGraph Cloud (beta)
+
+> **Note**
+>
+> - LangGraph is an MIT-licensed open-source library, which we are committed to maintaining and growing for the community.
+> - LangGraph Cloud is an optional managed hosting service for LangGraph, which provides additional features geared towards production deployments.
+> - We are actively contributing improvements back to LangGraph informed by our work on LangGraph Cloud.
+> - You can always deploy LangGraph applications on your own infrastructure using the open-source LangGraph project.
+
+In the previous tutorials, you've learned how to create and iterate on a LangGraph agent. Everything we've done so far has been local to your machine. Once you're happy with your agent, it's time to deploy it to the cloud so that it can be accessed by your application and by other users.
+
+This tutorial will walk you through the process of deploying your agent using LangGraph Cloud (beta). LangGraph Cloud is a managed hosting service from LangChain that makes it easy to deploy and scale your LangGraph agents.
It integrates with LangSmith for observability and tracing, and provides robust infrastructure for production deployments.
+
+> **Note**
+> LangGraph Cloud is a paid service and is currently in beta. To deploy an agent to LangGraph Cloud, you must be on one of the paid LangSmith plans. [Learn more about LangSmith pricing](https://www.langchain.com/pricing).
+
+## Prerequisites
+
+To deploy your agent to LangGraph Cloud, you will need:
+
+- A [GitHub](https://github.com/) account and the Git CLI or some other way to create and push a Git repository to GitHub from your computer.
+- A LangSmith account on any paid plan. [Sign up for LangSmith](https://smith.langchain.com/) if you don't have an account yet.
+- A LangGraph agent that you want to deploy, such as the one built throughout this tutorial series.
+- The [`@langchain/langgraph-sdk`](https://www.npmjs.com/package/@langchain/langgraph-sdk) package, which we'll use to interact with the LangGraph Cloud REST API.
+
+Start by installing the `@langchain/langgraph-sdk` package:
+
+```sh
+npm install @langchain/langgraph-sdk
+```
+
+If you'll be deploying the agent from this tutorial series, there is one small modification you need to make before we move on to deployment. In the part about [human-in-the-loop workflows](4-human-loop.md), we added an `interruptBefore` argument when compiling the graph. That was a great way to learn about human-in-the-loop workflows, but for the deployed version we want the agent to be able to search autonomously. To remove it, locate the code where the agent is compiled in `chatbot.ts`. It should look like this:
+
+```ts
+// Define the graph and compile it into a runnable
+export const app = new StateGraph(MessagesAnnotation)
+  .addNode("agent", callModel)
+  .addEdge("__start__", "agent")
+  .addNode("tools", new ToolNode(tools))
+  .addConditionalEdges("agent", shouldUseTool)
+  .addEdge("tools", "agent")
+  .compile({ checkpointer: new MemorySaver(), interruptBefore: ["tools"] });
+```
+
+Additionally, agents deployed via LangGraph Cloud are automatically checkpointed using a Postgres checkpointer for persistent storage. This means you can also remove the `MemorySaver` checkpointer we have been using for local runs.
+
+Remove the object containing the `checkpointer` and `interruptBefore: ["tools"]` options from the call to `compile` so that it looks like this:
+
+```ts
+// Define the graph and compile it into a runnable
+export const app = new StateGraph(MessagesAnnotation)
+  .addNode("agent", callModel)
+  .addEdge("__start__", "agent")
+  .addNode("tools", new ToolNode(tools))
+  .addConditionalEdges("agent", shouldUseTool)
+  .addEdge("tools", "agent")
+  .compile();
+```
+
+This change will allow the deployed agent to perform its searches autonomously and save its progress checkpoints in a persistent Postgres database managed by LangGraph Cloud. With that, you're ready to get your agent deployed!
+
+## Step 1: Create a new Git repository with your agent code
+
+Start by opening your terminal and navigating to the directory where your agent code is located. From there, run the following command to initialize a Git repository in that folder:
+
+```sh
+git init .
+```
+
+It's important not to commit your environment variables to the repo. They're stored in the `.env` file, so we're going to add it to the `.gitignore` file, which tells Git which files should not be tracked. While we're at it, we'll also ignore the `node_modules` folder.
The following commands create a `.gitignore` file that tells Git to ignore the `.env` file and the `node_modules` folder:
+
+```sh
+touch .gitignore
+echo ".env" >> .gitignore
+echo "node_modules" >> .gitignore
+```
+
+Next, add the files LangGraph Cloud needs to build and run the agent, and commit them to the repo locally:
+
+```sh
+git add chatbot.ts langgraph.json package.json .gitignore
+git commit --message "Initial commit of LangGraph agent"
+git branch --move --force main
+```
+
+Now you've got a local repository with your agent code in the `main` branch. The next step is to push the repository to GitHub.
+
+## Step 2: Push your repository to GitHub
+
+Click [this link](https://github.com/new?name=langgraph-agent&description=My%20first%20langgraph%20agent) to create a new GitHub repository. Use the following options to create it:
+
+- **Repository template**: No template.
+- **Repository name**: This tutorial will assume the repo is named `langgraph-agent`, but feel free to choose something else if you prefer. Wherever this tutorial mentions `langgraph-agent`, replace it with the name you choose.
+- **Description**: You can use any description you'd like.
+- **Visibility**: LangGraph Cloud can deploy from both Public and Private repositories, so choose whichever you prefer.
+- **Initialize this repository with**: None. We've already initialized the repository locally, so there's no need to do so here.
+
+When you're done, click the "Create repository" button at the bottom of the page. You'll be taken to the repository page, which will have a URL like `https://github.com/YOUR_USERNAME/langgraph-agent`. With the repository created, you can push your files to it. Run the following commands in your terminal, making sure to replace `YOUR_USERNAME` with your GitHub username:
+
+```sh
+git remote add origin https://github.com/YOUR_USERNAME/langgraph-agent.git
+git push --set-upstream origin main
+```
+
+This adds the GitHub repository as a remote and pushes your code to it. You can now see your code on GitHub by visiting the repository URL in your browser. More importantly, this means that once you connect your GitHub account to LangSmith, you'll be able to deploy your agent directly from GitHub. Let's do that now!
+
+## Step 3: Connect your GitHub account to LangSmith
+
+Log in to your LangSmith account at [smith.langchain.com](https://smith.langchain.com/). Once you're logged in:
+
+1. Click on the "Deployments" (rocket ship icon) tab in the sidebar on the left.
+2. From the deployments page, click the "+ New Deployment" button in the top right corner.
+
+![Screenshot showing the LangSmith UI with an arrow labeled "1" pointing to the deployments tab and an arrow labeled "2" pointing to the "+ New Deployment" button](./img/langsmith-new-deployment.png)
+
+That will open the "Create New Deployment" panel. To start, you'll need to connect your GitHub account to access the repo with your agent code.
+
+1. Click the "Import with GitHub" button at the top. This will open a new tab where you will be prompted to grant LangGraph Cloud access to your GitHub repositories.
+2. When prompted to install "hosted-langserve", select the account or organization that owns your agent repository.
+3. On the next screen, you will be prompted to decide which repositories to install and authorize access to. It's a good security practice to grant the minimum permissions necessary.
+4. Choose "Only select repositories" and then select the `langgraph-agent` repository you created earlier.
Click the "Install & Authorize" button to grant LangGraph Cloud access to read the code and metadata in your repository. + +## Step 4: Configure your deployment + +After connecting your GitHub account to LangSmith, the GitHub tab will close and return you to the LangSmith UI. + +Now, instead of the GitHub button you should see your account and a prompt to select a repository from a dropdown. Select your `langgraph-agent` repository from the dropdown. Change the "Name" of your deployment to `langgraph-agent` so it's easy to identify. + +The LangGraph API config file and Git reference should both have the correct values by default. You also don't need to change the "Deployment Type" configuration. + +Make sure to add your environment variables. As a reminder, they are in the `.env` file in your project folder on your computer. You can copy the whole `.env` file and paste its contents into the `name` field. + +Once you've added your environment variables you are ready to deploy! Click the "Submit" button in the top right corner. You will be taken to the deployment page for your new deployment! + +## Step 5: Deploy your agent + +The first thing that needs to happen is LangGraph Cloud will build your agent into a deployable application. After the build completes, LangGraph Cloud will deploy your agent. + +Once the agent has finished deploying, you should see a green message saying "Currently deployed" in the entry inside the "Revisions" list. Near the top of the page and towards the middle, locate your "API URL". This is the base URL for the endpoints you will hit to use your agent. Note it down somewhere, as you'll be using it soon. + +The last step before using the deployed agent is to create an API key. Agent deployments are secured using API keys are how you control who has access to run your deployed agent. To create an API key, click the settings button (gear icon) in the bottom left corner of the page to go to the workspace settings page. Then click the "API Keys" tab in the settings page. + +From the API keys page, create a new API Key: + +1. Click the "Create API Key" button in the top right +2. Set the description to something identifiable, like "LangGraph Agent API Key" +3. Select "Personal Access Token". When integrating the agent into your application, use a Service Key instead. +4. Click the "Create API Key" button +5. Add the generated API key to your `.env` file as `LANGSMITH_API_KEY=YOUR_KEY_HERE` + +Now you have everything you need to start using your deployed agent! + +## Step 6: Invoke your deployed agent via the API + +With the agent deployed, we need to write some code that interacts with it via the API using the LangGraph SDK. Create a new file called `deployed-agent.ts` and add the following code to interact with it via API: + +```ts +// deployed-agent.ts +import { BaseMessageLike } from "@langchain/core/messages"; +import { Client } from "@langchain/langgraph-sdk"; +import "dotenv/config"; + +// Create an API client for interacting with the API +const client = new Client({ + apiUrl: process.env.LANGGRAPH_DEPLOY_URL!, + apiKey: process.env.LANGSMITH_API_KEY!, +}); + +// get the default assistant created when deploying the agent +const assistant = (await client.assistants.search())[0]; + +// create a conversation thread +const thread = await client.threads.create(); + +// define the user message that will start the conversation thread +const input = { + messages: [ + { type: "user", content: "What is the weather like in sf right now?" 
+  ] satisfies BaseMessageLike[],
+};
+
+// Initialize the conversation using `.stream()`, which streams each message response as it is generated
+const streamResponse = client.runs.stream(
+  thread.thread_id,
+  assistant.assistant_id,
+  {
+    input,
+  }
+);
+
+// Wait for each message in the stream and print it to the console
+for await (const chunk of streamResponse) {
+  if (chunk.data && chunk.event !== "metadata") {
+    console.log(chunk.data.messages.at(-1).content, "\n\n");
+  }
+}
+```
+
+Previously, we used the `invoke()` method to run our agent, which waits for the entire conversation to resolve before returning the results. In applications where you want the user to see messages as they become available, the `.stream()` API provides a generator that yields each message once it is available. This can make long-running agentic workflows feel more responsive.
+
+To run the deployed agent using this code, use the following command:
+
+```sh
+npx tsx deployed-agent.ts
+```
+
+This will grab the default assistant generated for your deployment, create a new thread, and run the conversation with your deployed agent! You should see a response similar to the following (tool response trimmed for brevity):
+
+```
+What is the weather like in sf right now?
+
+
+[{"title":"Weather in San Francisco", ... }]
+
+
+The current weather in San Francisco is partly cloudy with a temperature of 20.2°C (68.4°F). The wind is coming from the west-southwest at 12.1 mph (19.4 kph), and the humidity is at 68%. There is no precipitation reported, and visibility is good at 16 km (9 miles).
+
+For more details, you can check the full report [here](https://www.weatherapi.com/).
+```
+
+And that's it! You've now successfully deployed your agent using LangGraph Cloud and run it using the `@langchain/langgraph-sdk` package. You're ready to integrate LLM-powered, tool-wielding agents into your applications.
+
+## Step 7: Viewing traces in LangSmith
+
+Now that you've run a deployed agent, you can view traces from its executions using LangSmith.
+
+From the "Deployments" page in [LangSmith](https://smith.langchain.com/), select your `langgraph-agent` deployment. Near the bottom-right corner of the page, you should see a button that says "See tracing project" that looks like this:
+
+![Screenshot showing the "See tracing project" button](./img/langsmith-see-tracing-project.png)
+
+Click that button to be taken to the tracing project for your deployed agent. Here you can access observability info about your agent, such as runs and metrics. Feel free to explore the data available here.
+
+## Summary
+
+Great work getting here! You've learned so much:
+
+- how to build an LLM-powered agent with access to tools
+- how to persist its state across runs and conversations
+- how to add a human in the loop for sensitive actions
+- how to visualize, debug, and iterate on your agent using LangGraph Studio
+- how to deploy your agent using LangGraph Cloud and interact with it via the SDK
+
+This is the end of the tutorial series, but it's just the beginning of your journey building LLM-powered agents for your applications. We hope you've enjoyed the series and are excited to see what you build next!
diff --git a/docs/docs/tutorials/first-agent/img/langgraph-studio-edit-message.png b/docs/docs/tutorials/first-agent/img/langgraph-studio-edit-message.png new file mode 100644 index 000000000..126836ab9 Binary files /dev/null and b/docs/docs/tutorials/first-agent/img/langgraph-studio-edit-message.png differ diff --git a/docs/docs/tutorials/first-agent/img/langgraph-studio-fork-thread.png b/docs/docs/tutorials/first-agent/img/langgraph-studio-fork-thread.png new file mode 100644 index 000000000..f0e589147 Binary files /dev/null and b/docs/docs/tutorials/first-agent/img/langgraph-studio-fork-thread.png differ diff --git a/docs/docs/tutorials/first-agent/img/langgraph-studio-graph-view.png b/docs/docs/tutorials/first-agent/img/langgraph-studio-graph-view.png new file mode 100644 index 000000000..533128520 Binary files /dev/null and b/docs/docs/tutorials/first-agent/img/langgraph-studio-graph-view.png differ diff --git a/docs/docs/tutorials/first-agent/img/langgraph-studio-screen.png b/docs/docs/tutorials/first-agent/img/langgraph-studio-screen.png new file mode 100644 index 000000000..3e9abe0b8 Binary files /dev/null and b/docs/docs/tutorials/first-agent/img/langgraph-studio-screen.png differ diff --git a/docs/docs/tutorials/first-agent/img/langgraph-studio-thread-view.png b/docs/docs/tutorials/first-agent/img/langgraph-studio-thread-view.png new file mode 100644 index 000000000..5daa05e12 Binary files /dev/null and b/docs/docs/tutorials/first-agent/img/langgraph-studio-thread-view.png differ diff --git a/docs/docs/tutorials/first-agent/img/langsmith-new-deployment.png b/docs/docs/tutorials/first-agent/img/langsmith-new-deployment.png new file mode 100644 index 000000000..c06ad3f54 Binary files /dev/null and b/docs/docs/tutorials/first-agent/img/langsmith-new-deployment.png differ diff --git a/docs/docs/tutorials/first-agent/img/langsmith-see-tracing-project.png b/docs/docs/tutorials/first-agent/img/langsmith-see-tracing-project.png new file mode 100644 index 000000000..3e1b5fe5c Binary files /dev/null and b/docs/docs/tutorials/first-agent/img/langsmith-see-tracing-project.png differ diff --git a/examples/how-tos/wait-user-input.ipynb b/examples/how-tos/wait-user-input.ipynb index 9f8220215..7f916748e 100644 --- a/examples/how-tos/wait-user-input.ipynb +++ b/examples/how-tos/wait-user-input.ipynb @@ -569,7 +569,7 @@ "id": "6a30c9fb-2a40-45cc-87ba-406c11c9f0cf", "metadata": {}, "source": [ - "We can now tell the agent to continue. We can just pass in `None` as the input to the graph, since no additional input is needed" + "We can now tell the agent to continue. 
We can just pass in `null` as the input to the graph, since no additional input is needed" ] }, { diff --git a/examples/quickstart.ipynb b/examples/quickstart.ipynb index 29b8fa2bf..df4cd2e70 100644 --- a/examples/quickstart.ipynb +++ b/examples/quickstart.ipynb @@ -248,7 +248,7 @@ " const lastMessage = messages[messages.length - 1];\n", "\n", " // If the LLM makes a tool call, then we route to the \"tools\" node\n", - " if (lastMessage.additional_kwargs.tool_calls) {\n", + " if (lastMessage.tool_calls) {\n", " return \"tools\";\n", " }\n", " // Otherwise, we stop (reply to the user) using the special \"__end__\" node\n", diff --git a/libs/langgraph/src/prebuilt/tool_node.ts b/libs/langgraph/src/prebuilt/tool_node.ts index e030d8d3c..af3b6dfd8 100644 --- a/libs/langgraph/src/prebuilt/tool_node.ts +++ b/libs/langgraph/src/prebuilt/tool_node.ts @@ -7,7 +7,6 @@ import { import { RunnableConfig, RunnableToolLike } from "@langchain/core/runnables"; import { StructuredToolInterface } from "@langchain/core/tools"; import { RunnableCallable } from "../utils.js"; -import { END } from "../graph/graph.js"; import { MessagesAnnotation } from "../graph/messages_annotation.js"; export type ToolNodeOptions = { @@ -200,7 +199,7 @@ export class ToolNode extends RunnableCallable { export function toolsCondition( state: BaseMessage[] | typeof MessagesAnnotation.State -): "tools" | typeof END { +): "tools" | "__end__" { const message = Array.isArray(state) ? state[state.length - 1] : state.messages[state.messages.length - 1]; @@ -210,7 +209,6 @@ export function toolsCondition( ((message as AIMessage).tool_calls?.length ?? 0) > 0 ) { return "tools"; - } else { - return END; } + return "__end__"; }