TUTORIAL

Build AI Agents with LangChain

Create autonomous agents with memory and tools

1. Brief Overview

LangChain is an open-source framework designed to simplify the creation of applications using large language models (LLMs). It provides a standard interface for chains, a broad set of integrations with external tools and services, and end-to-end chains for common applications. At its core, LangChain enables developers to build complex, data-aware, and autonomous agents that can reason, make decisions, and take actions.

The significance of LangChain lies in its ability to connect LLMs to external sources of data and computation. While LLMs possess vast knowledge, they are inherently stateless and disconnected from the real world. LangChain bridges this gap by providing "tools" that agents can use to interact with APIs, databases, and other external systems. This allows developers to create intelligent agents that can not only converse but also perform tasks, such as sending emails, querying databases, or even interacting with other AI systems.

This tutorial is for developers, data scientists, and AI enthusiasts who want to move beyond simple chatbot implementations and build sophisticated AI agents. Whether you're looking to create a personal assistant, automate complex workflows, or build a next-generation application with reasoning capabilities, LangChain provides the tools and abstractions to get you there. A basic understanding of Python and LLMs is recommended to get the most out of this guide.

2. Key Concepts

To effectively build agents in LangChain, it's crucial to understand the following core concepts, all of which appear in the code in Section 3:

  1. LLM: The language model (e.g., ChatOpenAI) that powers the agent's reasoning.
  2. Tool: A function or API the agent can call to interact with the outside world, such as a web search.
  3. Prompt: The template that structures each model call, including placeholders for the chat history and the agent's intermediate work (the "agent scratchpad").
  4. Agent: The decision-making logic that reads the prompt, chooses which tool (if any) to call, and interprets the results.
  5. Agent Executor: The runtime loop that repeatedly invokes the agent and its tools until a final answer is produced.
  6. Memory: Storage for past messages so the agent can maintain context across turns of a conversation.

3. Practical Code Examples

This section provides a complete, step-by-step guide to building a simple LangChain agent.

3.1. Installation

First, let's set up our environment and install the necessary packages. We'll use a virtual environment to keep our dependencies isolated.


# Create and activate a virtual environment
python3 -m venv langchain-env
source langchain-env/bin/activate

# Install LangChain, OpenAI, and other required packages
pip install langchain langchain-community langchain-openai python-dotenv tavily-python

3.2. Environment Setup

We'll be using the OpenAI API and the Tavily Search API. You'll need to get API keys for both services.

  1. OpenAI API Key: Get your key from the OpenAI Platform.
  2. Tavily Search API Key: Get your key from Tavily AI.

Create a file named .env in your project directory and add your API keys:


OPENAI_API_KEY="your_openai_api_key_here"
TAVILY_API_KEY="your_tavily_api_key_here"
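If either key is missing, the agent will fail at runtime with a less obvious error, so it can help to fail fast at startup. A small sketch using only the standard library (`require_env` is a hypothetical helper, not part of LangChain):

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, or raise if it is unset."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# After load_dotenv() has run, both keys should be present:
# openai_key = require_env("OPENAI_API_KEY")
# tavily_key = require_env("TAVILY_API_KEY")
```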

3.3. Building the Agent

Now, let's write the Python code for our agent. Create a file named main.py.


from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.memory import ConversationBufferMemory

# Load environment variables from .env file
load_dotenv()

# Initialize the LLM
llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Define the tools the agent can use
tools = [TavilySearchResults(max_results=3)]

# Create the prompt template
# The prompt must include an `agent_scratchpad` placeholder for the agent's
# intermediate tool calls, and a `chat_history` placeholder whose name
# matches the memory's `memory_key` below
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        MessagesPlaceholder(variable_name="chat_history"),
        ("human", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

# Create the agent
agent = create_tool_calling_agent(llm, tools, prompt)

# Create the memory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# Create the agent executor
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    memory=memory
)

# Function to handle chat
def chat_with_agent():
    print("Chat with your AI agent! Type 'exit' to quit.")
    while True:
        user_input = input("You: ")
        if user_input.lower() == 'exit':
            break
        response = agent_executor.invoke({"input": user_input})
        print(f"Agent: {response['output']}")

if __name__ == "__main__":
    chat_with_agent()

3.4. Running the Agent

Execute the script from your terminal:


python main.py

You should see the following output:


Chat with your AI agent! Type 'exit' to quit.
You:

Now you can interact with your agent. Here's an example conversation:


You: What is the weather in San Francisco?
> Entering new AgentExecutor chain...
... (agent's thought process will be printed here)
Agent: The weather in San Francisco is currently ...

You: What about in New York?
> Entering new AgentExecutor chain...
... (agent's thought process will be printed here)
Agent: The weather in New York is currently ...

4. Best Practices

5. Common Pitfalls to Avoid

6. Next Steps and Additional Resources