"Build your first agent" tutorial #461
Draft
Masstronaut
wants to merge
18
commits into
main
Choose a base branch
from
allan/first-agent-tutorial
base: main
Could not load branches
Branch not found: {{ refName }}
Loading
Could not load tags
Nothing to show
Loading
Are you sure you want to change the base?
Some commits from the old base branch may be removed from the timeline,
and old review comments may become outdated.
Draft
Changes from all commits
Commits
Show all changes
18 commits
Select commit
Hold shift + click to select a range
- `2c97c0c` Setup and part 1 of the tutorial (Masstronaut)
- `61b75b3` Updated `toolsCondition` to use `'__end__'` (Masstronaut)
- `1a0f39e` added RAG part of the first agent tutorial (Masstronaut)
- `f499b98` Addressed some feedback on p2 (Masstronaut)
- `2600e52` added part 3 on persistent memory (Masstronaut)
- `bc12e35` updated the quickstart to use the standardized `tool_calls` rather th… (Masstronaut)
- `21ee1e2` swapped value from python `None` to js `null` for js doc (Masstronaut)
- `2d44b98` Part 4 - human in the loop (Masstronaut)
- `36cc4a3` iterating in studio - setup (Masstronaut)
- `d31d3bb` Fix typos (jacoblee93)
- `a58de7e` updated parts 1-4 to split the agent code and chat loop into two files (Masstronaut)
- `e56285a` langgraph studio lesson (Masstronaut)
- `c8367b0` langgraph studio lesson (Masstronaut)
- `2558210` langgraph cloud section using SDK (Masstronaut)
- `8033d3a` Addressed most PR feedback (Masstronaut)
- `c5a9741` Addressed most PR feedback (Masstronaut)
- `ee8e658` wrap awaits in an async fn (Masstronaut)
- `44073a0` wrap awaits in an async fn (Masstronaut)
# Build your first agent - Introduction

In this comprehensive tutorial, we will build an AI support chatbot using LangGraph.js that can:

- Answer common questions by searching the web
- Maintain conversation state across calls
- Route complex queries to a human for review
- Use custom state to control its behavior
- Rewind and explore alternative conversation paths

We'll start with a basic chatbot and progressively add more sophisticated capabilities, introducing key LangGraph concepts along the way. Later, we will learn how to iterate on an agent graph using Studio and deploy it using LangGraph Cloud.

There's a lot of ground to cover, but don't worry! We'll take it step by step across 7 parts. Each part will introduce a single concept that helps improve the chatbot's capabilities. At the end, you should feel comfortable building, debugging, iterating on, and deploying an AI agent of your own. Here's an overview of what we'll cover:
- [**Setup**](/first-agent/0-setup.md) _(You are here)_: Set up your development environment, dependencies, and services needed to build the chatbot.
- [**Part 1: Create a chatbot**](/first-agent/1-create-chatbot.md): Build a basic chatbot that can answer questions using Anthropic's LLM.
- [**Part 2: Add Retrieval-Augmented Generation (RAG) search**](/first-agent/2-rag-search.md): Provide the chatbot with a tool to search the web using Tavily.
- [**Part 3: Add persistent state**](/first-agent/3-persistent-state.md): Add memory to the chatbot so it can continue past conversations.
- [**Part 4: Add human-in-the-loop**](/first-agent/4-human-loop.md): Route complex queries to a human for review.
- [**Part 5: Time-travel debugging**](/first-agent/5-time-travel-debugging.md): Use the persisted state to rewind and debug or explore alternative conversation paths.
- [**Part 6: Iterate using Studio**](/first-agent/6-studio.md): Set up Studio to iterate on and debug the agent using a graphical interface.
- [**Part 7: Deploy to LangGraph Cloud**](/first-agent/7-deploy.md): Deploy the agent to LangGraph Cloud and interact with it over the web.
## Prerequisites

To complete this tutorial, you will need to have a computer set up with Node.js 18 or later. You can download Node.js from the [official website](https://nodejs.org/).

You will also need a basic understanding of JavaScript and TypeScript, and should be familiar with the command line.

LangGraph makes it easy to work with a variety of tools and services to build AI agents. In this tutorial, we will use the following:

- [Anthropic API](https://console.anthropic.com/) will be used for the base Large Language Model (LLM) that powers the chatbot.
- [Tavily's Search API](https://tavily.com/) will be used as a tool that enables the agent to search the web.

To complete this tutorial, you will need to sign up and get an API key for both services.
## Setup

Once you've got Node.js installed and have signed up for Tavily and Anthropic, you are ready to get the project set up.

First, run the following commands to create a new directory for your project and navigate to it in your terminal:

```bash
mkdir langgraph-chatbot
cd langgraph-chatbot
```
### Environment variables

Next, create a `.env` file in the root of your project and add the API keys you received from Anthropic and Tavily:

```
# .env
ANTHROPIC_API_KEY=your-Anthropic-key-here
TAVILY_API_KEY=your-Tavily-key-here
```

While we're at it, let's make sure the environment variables defined in the `.env` file are available to our project. We can do this by installing the `dotenv` package:

```bash
npm install dotenv --save
```
Now we need to make sure dotenv loads the environment variables from the `.env` file. To do this, create a new file called `chatbot.ts` and add the following line at the top of the file:

```ts
// chatbot.ts
import "dotenv/config";
```

This will load the environment variables from the `.env` file into the global `process.env` object when the project starts. To verify it's working, let's log the environment variables to the console. Add the following lines to the end of the `chatbot.ts` file:

```ts
console.log(process.env.ANTHROPIC_API_KEY);
console.log(process.env.TAVILY_API_KEY);
```

Now let's run the project using `tsx`, a tool that lets us run TypeScript code without first compiling it to JS. Use the following command:

```bash
npx tsx chatbot.ts
```

You should see the API keys you added to your `.env` file printed to the console.
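If you see `undefined` instead, the keys aren't being loaded. Rather than debugging a confusing API error later, you can fail fast with an explicit check. Here's a minimal sketch (this guard is our own addition, not part of the tutorial project):

```ts
// Throw early if a required key is missing from .env
for (const key of ["ANTHROPIC_API_KEY", "TAVILY_API_KEY"]) {
  if (!process.env[key]) {
    throw new Error(`Missing ${key} - add it to your .env file`);
  }
}
```

Once you've confirmed the keys load correctly, remember to remove the `console.log` calls so you don't leak secrets into your terminal history.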
### Install dependencies

You'll also need to install a few dependencies to create an agent:

- **@langchain/core** provides the core functionality of LangChain that LangGraph depends on
- **@langchain/langgraph** contains the building blocks used to assemble an agent
- **@langchain/anthropic** enables you to use Anthropic's LLMs in LangGraph
- **@langchain/community** contains the Tavily search tool that will be used by the agent

Let's install them using the Node Package Manager (npm). Run the following command in your terminal:

```bash
npm install @langchain/core @langchain/langgraph @langchain/anthropic @langchain/community
```
### (Encouraged) Set up tracing with LangSmith

Setting up LangSmith is optional, but it makes it a lot easier to understand what's going on "under the hood."

To use [LangSmith](https://smith.langchain.com/) you'll need to sign up and get an API key. Once you have an API key, add the following to your `.env` file:

```
LANGCHAIN_API_KEY=your-LangSmith-key-here
LANGCHAIN_TRACING_V2=true
LANGCHAIN_PROJECT="LangGraph Tutorial"
LANGCHAIN_CALLBACKS_BACKGROUND=true
```

At this point, you should be ready to start building your first agent. When you're ready, move on to [part 1: create a chatbot](/first-agent/1-create-chatbot.md).
# Part 1: Create a chatbot

We'll first create a simple chatbot using LangGraph.js. This chatbot will respond directly to user messages. Though simple, it will illustrate the core concepts of building with LangGraph. By the end of this section, you will have built a rudimentary chatbot.
## Step 1: Create an LLM agent

The first thing we need to do is create an LLM agent. LangGraph makes it easy to use any LLM provider, and we will be using Anthropic's Claude 3.5 Sonnet model. Add the following code to your `chatbot.ts` file:

```ts
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620",
  temperature: 0,
});
```

The `ChatAnthropic` class is a wrapper around the Anthropic API that makes it easy to interact with the LLM. We're setting some options on it to configure the LLM:

- `model` needs the API model name of the model we want to use. We're using `claude-3-5-sonnet-20240620`. You can learn more in the [Anthropic models documentation](https://docs.anthropic.com/en/docs/about-claude/models#model-comparison-table).
- `temperature` is a parameter that controls the randomness of the model's output. A temperature of 0 always returns the most likely/predictable token; as the temperature approaches the maximum value of 1, the LLM produces more "creative" outputs. For this tutorial, we'll use a temperature of 0 to produce more consistent outputs, but feel free to experiment.
## Step 2: Create a StateGraph

The next thing we're going to implement is a [StateGraph](https://langchain-ai.github.io/langgraphjs/reference/classes/langgraph.StateGraph.html). A `StateGraph` object defines the structure of our chatbot as a "state machine". Nodes can communicate by reading and writing to a shared state. We'll add `nodes` to represent the LLM and the functions our chatbot can call. The nodes are connected using `edges` that specify how the bot should transition between these functions.

Add the following code to your `chatbot.ts` file:

```ts
import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";

const graphBuilder = new StateGraph(MessagesAnnotation);
```

In this code snippet, we're creating a new `StateGraph` object and passing it our state [`Annotation`](https://langchain-ai.github.io/langgraphjs/concepts/low_level/#annotation). It's so common for chatbot state to be an array of messages that LangGraph provides a helper for it: [`MessagesAnnotation`](https://langchain-ai.github.io/langgraphjs/concepts/low_level/#messagesannotation). This helper defines a state schema with a single field, `messages`, which holds an array of chat messages. It also provides a reducer function that appends new messages to the array rather than overwriting it.
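To make that concrete, here's roughly what `MessagesAnnotation` gives you, written out as a custom annotation (a sketch for illustration only; the tutorial keeps using the built-in helper):

```ts
import { Annotation, messagesStateReducer } from "@langchain/langgraph";
import { BaseMessage } from "@langchain/core/messages";

// Roughly equivalent to MessagesAnnotation: a `messages` field whose reducer
// appends each update to the existing array instead of replacing it
const CustomMessagesAnnotation = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: messagesStateReducer,
    default: () => [],
  }),
});
```

When a node returns `{ messages: [newMessage] }`, the reducer merges that update into the current state, so the conversation history grows with each step.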
Later, we will use the `graphBuilder` object to build a graph that defines how our chatbot will behave by adding nodes and edges to the graph.
## Step 3: Create a node that runs the LLM

Now that we have a basic `StateGraph` and an LLM, we need to define a node that will invoke the LLM with the correct state. That's done using a function that takes the current state and returns the new state. Add the following code to your `chatbot.ts` file:

```ts
async function callModel(state: typeof MessagesAnnotation.State) {
  const response = await model.invoke(state.messages);

  // We return the response in an array and the `MessagesAnnotation` reducer will append it to the state
  return { messages: [response] };
}
```

This function is the glue between our `StateGraph` and the LLM. Without it, the LLM wouldn't know what is being asked of it, and the state wouldn't be updated with its response.
## Step 4: Build and run the graph

With the LLM, the `StateGraph`, and a way for them to communicate, we're ready to build our first agent graph! In LangGraph, the entrypoint is defined using a node named `"__start__"`. We need to add our LLM node and connect it to the start node. Add the following code to your `chatbot.ts` file:

```ts
// Create a graph that defines our chatbot workflow and compile it into a `runnable`
export const app = graphBuilder
  .addNode("agent", callModel)
  .addEdge("__start__", "agent")
  .compile();
```

Notice that we're `export`ing the `app` object. This helps us keep the code organized; the agent is defined in `chatbot.ts` and we will write the code that uses it in a separate file. When we go over how to [iterate on an agent using a GUI](/first-agent/6-studio.md), we will `import` our agent into [LangGraph Studio](https://github.com/langchain-ai/langgraph-studio) too.
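Before wiring up a full chat loop, you can sanity-check the compiled graph with a one-off invocation. Here's a minimal sketch (the scratch file name and message literal are our own, not part of the tutorial project):

```ts
// smoke-test.ts - a throwaway script to confirm the graph runs end to end
import { app } from "./chatbot.ts";

async function main() {
  // Invoke the graph with a single user message; the reducer seeds the state
  const result = await app.invoke({
    messages: [{ role: "user", content: "Hello!" }],
  });

  // The last message in the returned state is the model's reply
  console.log(result.messages[result.messages.length - 1].content);
}

main().catch(console.error);
```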
At this point we have an `app` object we can invoke to run our chatbot. To try it out, we're going to need a chat loop that lets us interact with the bot. Let's create a new file called `chatloop.ts` and add the logic for our chat loop to it:
```ts
// chatloop.ts
import { BaseMessageLike } from "@langchain/core/messages";

// We need to import the chatbot we created so we can use it here
import { app } from "./chatbot.ts";

// We'll use these helpers to read from the standard input in the command line
import * as readline from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";

async function chatLoop() {
  const lineReader = readline.createInterface({ input, output });

  console.log("Type 'exit' or 'quit' to quit");
  const messages: BaseMessageLike[] = [];
  while (true) {
    const answer = await lineReader.question("User: ");
    if (["exit", "quit", "q"].includes(answer.toLowerCase())) {
      console.log("Goodbye!");
      lineReader.close();
      break;
    }
    messages.push({ content: answer, role: "user" });

    // Run the chatbot, providing it the `messages` array containing the conversation.
    // Named `result` to avoid shadowing the `output` stream imported above.
    const result = await app.invoke({ messages });
    messages.push(result.messages[result.messages.length - 1]);
    console.log("Agent: ", result.messages[result.messages.length - 1].content);
  }
}
chatLoop().catch(console.error);
```
This chat loop uses the [`readline`](https://nodejs.org/api/readline.html) module from Node.js to read user input from the command line. It stores the message history in the `messages` array so that each message _continues_ the conversation, rather than starting a new one each time.

We're calling `app.invoke()` to use the chatbot. Passing it an array of messages containing the conversation history lets us continue a single conversation. In part 3 of this tutorial, we will use a [checkpointer](https://langchain-ai.github.io/langgraphjs/concepts/low_level/?h=messages+annotation#checkpointer) to store conversation history and enable the agent to participate in multiple separate conversation threads. For now, we're manually updating the message history with each new message from the user and agent.
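As a preview of where part 3 is headed, here's a hedged sketch of the checkpointer approach (using `MemorySaver`, the in-memory checkpointer that ships with `@langchain/langgraph`; the `thread_id` value is our own placeholder). Don't add this yet; the tutorial builds up to it step by step.

```ts
import { MemorySaver } from "@langchain/langgraph";

// Compiling with a checkpointer makes LangGraph persist state between calls
const persistentApp = graphBuilder.compile({ checkpointer: new MemorySaver() });

async function demo() {
  // Each thread_id identifies a separate, persisted conversation, so we no
  // longer need to resend the full message history ourselves
  await persistentApp.invoke(
    { messages: [{ role: "user", content: "Hi!" }] },
    { configurable: { thread_id: "conversation-1" } },
  );
}

demo().catch(console.error);
```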
Now that we have a way to interact with the agent, try it out by running the following command:

```bash
npx tsx chatloop.ts
```

Here's an example chat session:
```
User: What's langgraph all about?
Agent: LangGraph is a tool or framework designed to facilitate the development and deployment of applications that leverage large language models (LLMs). It typically focuses on enhancing the capabilities of LLMs by integrating them with various data sources, APIs, and other tools to create more sophisticated and context-aware applications.

LangGraph may include features such as:

1. **Graph-Based Representation**: It often uses graph structures to represent relationships between different entities, which can help in understanding context and improving the relevance of responses generated by LLMs.

2. **Integration with APIs**: LangGraph can connect with various APIs to pull in real-time data, allowing applications to provide up-to-date information and contextually relevant responses.

3. **Custom Workflows**: Users can create custom workflows that define how the LLM interacts with different data sources and processes information, making it adaptable to specific use cases.

4. **Enhanced Contextual Understanding**: By utilizing graph structures, LangGraph can improve the model's ability to understand and generate responses based on complex relationships and hierarchies within the data.

5. **Applications**: It can be used in various domains, including customer support, content generation, data analysis, and more, where natural language understanding and generation are crucial.

For the most accurate and up-to-date information, I recommend checking the official LangGraph website or relevant documentation, as developments in technology can lead to new features and capabilities.
User: what problems does it solve?
Agent: LangGraph addresses several challenges associated with the use of large language models (LLMs) in application development and deployment. Here are some of the key problems it aims to solve:

1. **Contextual Understanding**: LLMs can struggle with maintaining context over long conversations or complex queries. LangGraph's graph-based representation helps in organizing and managing contextual information, allowing for more coherent and relevant responses.

2. **Data Integration**: Many applications require data from multiple sources (e.g., databases, APIs). LangGraph facilitates the integration of these diverse data sources, enabling LLMs to access real-time information and provide more accurate and context-aware responses.

3. **Complex Query Handling**: Users often pose complex queries that involve multiple entities or relationships. LangGraph can help break down these queries and manage the relationships between different pieces of information, improving the model's ability to generate relevant answers.

4. **Customization and Flexibility**: Different applications have unique requirements. LangGraph allows developers to create custom workflows and interactions tailored to specific use cases, making it easier to adapt LLMs to various domains and tasks.

5. **Scalability**: As applications grow and require more data and interactions, managing these efficiently can become challenging. LangGraph's architecture can help scale applications by organizing data and interactions in a way that remains manageable.

6. **Improved User Experience**: By enhancing the LLM's ability to understand context and integrate data, LangGraph can lead to a more satisfying user experience, as users receive more accurate and relevant responses to their queries.

7. **Error Reduction**: By providing a structured way to manage data and context, LangGraph can help reduce errors in responses generated by LLMs, particularly in scenarios where precision is critical.

8. **Interactivity**: LangGraph can enable more interactive applications, where users can engage in dynamic conversations or queries that adapt based on previous interactions, leading to a more engaging experience.

Overall, LangGraph aims to enhance the capabilities of LLMs, making them more effective tools for a wide range of applications, from customer support to content generation and beyond.
User: q
Goodbye!
```
**Congratulations!** You've built your first chatbot using LangGraph. This bot can engage in basic conversation by taking user input and generating responses using an LLM. You can inspect a [LangSmith Trace](https://smith.langchain.com/public/29ab0177-1177-4d25-9341-17ae7d94e0e0/r) for the call above at the provided link.

However, you may have noticed that the bot's knowledge is limited to what's in its training data. In the next part, we'll add a web search tool to expand the bot's knowledge and make it more capable.

Below is the full code for this section for your reference:
<details>
<summary>chatbot.ts</summary>
```ts
// chatbot.ts
import { ChatAnthropic } from "@langchain/anthropic";
import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";

// read the environment variables from .env
import "dotenv/config";

// Create the model that powers the chatbot
const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620",
  temperature: 0,
});

// Define the function that calls the model
async function callModel(state: typeof MessagesAnnotation.State) {
  const messages = state.messages;

  const response = await model.invoke(messages);

  // We return the response in an array, because the reducer appends it to the existing list
  return { messages: [response] };
}

const graphBuilder = new StateGraph(MessagesAnnotation);

// Create a graph that defines our chatbot workflow and compile it into a `runnable`
export const app = graphBuilder
  .addNode("agent", callModel)
  .addEdge("__start__", "agent")
  .compile();
```

</details>
<details>
<summary>chatloop.ts</summary>

```ts
// chatloop.ts
import { app } from "./chatbot.ts";

import { BaseMessageLike } from "@langchain/core/messages";

// We'll use these helpers to read from the standard input in the command line
import * as readline from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";

async function chatLoop() {
  const lineReader = readline.createInterface({ input, output });

  console.log("Type 'exit' or 'quit' to quit");
  const messages: BaseMessageLike[] = [];
  while (true) {
    const answer = await lineReader.question("User: ");
    if (["exit", "quit", "q"].includes(answer.toLowerCase())) {
      console.log("Goodbye!");
      lineReader.close();
      break;
    }

    // Add the user's message to the conversation history
    messages.push({ content: answer, role: "user" });

    // Run the chatbot and add its response to the conversation history.
    // Named `result` to avoid shadowing the `output` stream imported above.
    const result = await app.invoke({ messages });
    messages.push(result.messages[result.messages.length - 1]);
    console.log("Agent: ", result.messages[result.messages.length - 1].content);
  }
}
chatLoop().catch(console.error);
```

</details>
---

**Review comment:** Should this be Node.js 20 to match requirements for LangGraph studio?

**Reply:** Node 18+ is fine for parts 0-4, it's only Studio that needs to be exactly version 20. So I think it makes more sense to defer that to later and keep the first parts simpler. Hopefully we will have node 18+ support for studio in the future and can just remove the nvm section in the studio tutorial.