MCP Servers: The Complete Guide

Understand Model Context Protocol - Connect AI to your tools and databases seamlessly

1. Brief Overview

What is the Model Context Protocol (MCP)?

The Model Context Protocol (MCP) is an open-source standard designed to be the universal translator between Large Language Models (LLMs) and the outside world. Think of it as a USB-C port for AI. Before MCP, connecting an AI model to your specific database, internal API, or even a local file system required custom, one-off integrations. Each new tool or data source demanded bespoke code, creating a fragmented and brittle ecosystem. MCP changes that by providing a standardized, secure, and scalable way for AI models to interact with external systems.

Why Does It Matter?

LLMs are incredibly powerful, but their knowledge is frozen in time, limited to the data they were trained on. They can't access real-time information, interact with your proprietary systems, or perform actions in the real world without a bridge. MCP is that bridge. It matters because it gives models live access to those systems: an LLM with MCP can fetch current data, query your databases and internal APIs, and trigger actions on your behalf.

This shift transforms AI from a passive knowledge base into an active participant in your workflows, automating complex processes and providing more relevant, context-aware assistance.

Who Should Use It?

MCP is for developers and organizations building AI-powered applications that need to go beyond the pre-trained knowledge of a model, whether that means a chat assistant that can query internal systems or an agent that takes actions on a user's behalf.

If you want your AI to be more than just a text generator, MCP is a technology you need to understand.

2. Key Concepts

MCP's architecture is based on a simple yet powerful client-server model. Here are the core concepts you need to grasp:

  1. MCP Host: This is the AI application or environment where the LLM resides. It could be a chatbot interface, a data analysis tool, or any other application that uses an LLM. The MCP Host contains the MCP Client.
  2. MCP Client: The MCP Client lives within the MCP Host and is responsible for initiating communication with MCP Servers. It translates the LLM's need for information or action into a standardized MCP request.
  3. MCP Server: This is the external service that provides the context, data, or capabilities that the LLM needs. You'll build MCP Servers to expose your tools and data to the AI.
  4. Communication Protocol: MCP messages use JSON-RPC 2.0, a lightweight remote procedure call (RPC) format that is easy to implement in any language. These messages travel over defined transports: standard input/output for local servers, or HTTP for remote ones. Note that while individual JSON-RPC messages are simple, an MCP connection itself is stateful: client and server perform an initialization handshake before exchanging requests.
  5. "Tools": The Heart of MCP: A "Tool" is a schema-defined interface that an MCP server exposes to an LLM. It's essentially a function that the LLM can call to perform a specific operation. Each tool has a clearly defined set of inputs and outputs, which allows the LLM to understand how to use it. For example, you could have a send_email tool that takes a recipient, subject, and body as input.
  6. "Resources" and "Prompts": Beyond "Tools," MCP also defines "Resources" for read-only data access (analogous to a GET endpoint in a REST API) and "Prompts," reusable templates that a user or client can invoke to drive an LLM interaction. In short: Tools let the model act, Resources let it read, and Prompts package common workflows.
  7. Security and User Consent: A critical aspect of MCP is its focus on security and user control. The protocol is designed around user consent: hosts are expected to obtain explicit permission before the AI invokes tools or accesses data, and implementations typically log these interactions to provide an audit trail.
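To make concepts 4 and 5 concrete, here is a sketch of what a tool invocation might look like on the wire. The "tools/call" method follows the MCP convention for invoking tools; the send_email tool and its arguments are the hypothetical example from the list above, not a real server's API:

```python
import json

# A JSON-RPC 2.0 request asking the server to invoke the "send_email"
# tool. The tool and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "send_email",
        "arguments": {
            "recipient": "ada@example.com",
            "subject": "Quarterly report",
            "body": "Please find the report attached.",
        },
    },
}

# Serialize the message exactly as it would be sent over the transport
wire_message = json.dumps(request)
print(wire_message)
```

Because each tool's inputs are schema-defined, the server can validate "arguments" before executing anything, and the LLM knows exactly which fields to supply.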

3. Practical Code Examples

Let's move from theory to practice. We'll build a simple MCP server using the official Python SDK. This server will expose a single tool: a calculator that can add two numbers.

Step 1: Installation

First, you need to install the mcp package from PyPI.

pip install mcp

Step 2: Creating the MCP Server

Now, let's write the code for our server. Recent versions of the official Python SDK provide a high-level FastMCP class that turns plain Python functions into MCP tools: the function's type hints and docstring become the tool's schema and description, so there is no separate schema boilerplate. Create a file named calculator_server.py:

from mcp.server.fastmcp import FastMCP

# Create the server with a human-readable name
mcp = FastMCP("calculator")

# The @mcp.tool() decorator registers the function as an MCP tool.
# Its type hints (a: int, b: int -> int) define the input schema,
# and the docstring becomes the tool's description.
@mcp.tool()
def add(a: int, b: int) -> int:
    """Adds two numbers together."""
    return a + b

if __name__ == "__main__":
    # Serve over stdio, the standard transport for local MCP servers
    mcp.run()

Step 3: Creating an MCP Client

With the stdio transport, you don't start the server by hand. Instead, the client launches it as a subprocess and exchanges JSON-RPC messages with it over standard input and output. Let's create a client that connects to our server, lists its tools, and calls add. Create a new file named client.py in the same directory:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Tell the client how to launch the server as a subprocess
    server_params = StdioServerParameters(
        command="python", args=["calculator_server.py"]
    )

    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Perform the MCP initialization handshake
            await session.initialize()
            print("Connected to MCP server.")

            # Get the list of available tools
            tools = await session.list_tools()
            print(f"Available tools: {[tool.name for tool in tools.tools]}")

            # Call the 'add' tool with its two arguments
            result = await session.call_tool("add", {"a": 5, "b": 7})
            print(f"The result of 5 + 7 is: {result.content[0].text}")

if __name__ == "__main__":
    asyncio.run(main())

Step 4: Running the Client

Run the client from your terminal; it starts the server for you and shuts it down when the session ends:

python client.py

Expected Output

You should see output like the following:

Connected to MCP server.
Available tools: ['add']
The result of 5 + 7 is: 12

Congratulations! You've successfully built and interacted with an MCP server. This simple example demonstrates the core principles of MCP, which you can now apply to more complex, real-world scenarios.

4. Best Practices

As you move from simple examples to production systems, it's crucial to follow best practices to ensure your MCP servers are secure, reliable, and maintainable.

  1. Schema-First Design: Always define your tool schemas (inputs and outputs) first. This ensures that both the client and server have a clear contract, reducing the chances of errors. Use descriptive names and add comments to your schema definitions.
  2. Implement Robust Error Handling: Your tools will inevitably encounter errors. Don't let them crash your server. Implement try-except blocks within your tool functions to catch exceptions and return meaningful error messages to the client.
  3. Secure Your Server: If your MCP server is exposed to the network, you MUST secure it. Implement authentication and authorization to control who can access your tools. For HTTP-based transports, encrypt traffic with TLS, typically by placing the server behind a reverse proxy or load balancer that terminates HTTPS.
  4. Version Your Tools: As your tools evolve, you'll need to manage changes without breaking existing clients. Introduce a versioning system for your tools (e.g., calculator/v1, calculator/v2) to ensure backward compatibility.
  5. Asynchronous by Default: Use async and await for all I/O-bound operations within your tools (e.g., making API calls, querying a database). This will prevent your server from blocking and allow it to handle multiple concurrent requests efficiently.
  6. Logging and Monitoring: Implement comprehensive logging to track requests, responses, and errors. This is invaluable for debugging issues and monitoring the health of your server.
  7. Dependency Management: Use a dependency management tool like Poetry or pip-tools to lock your project's dependencies. This ensures that your server is reproducible and won't break due to unexpected package updates.
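Points 2 and 5 above can be sketched together. The example below is deliberately SDK-agnostic: a hypothetical async tool function that awaits an I/O-bound lookup and converts failures into a structured, machine-readable error payload instead of letting the exception crash the server. The fetch_user helper is a stand-in for a real async database or API call:

```python
import asyncio

# Stand-in for a real async database or API call
async def fetch_user(user_id: int) -> dict:
    users = {1: {"name": "Ada"}}
    await asyncio.sleep(0)  # simulate awaiting real I/O
    return users[user_id]

# A hypothetical async tool body: awaits the I/O-bound call and
# returns a structured result, turning failures into error payloads
async def get_user_tool(user_id: int) -> dict:
    try:
        user = await fetch_user(user_id)
        return {"ok": True, "user": user}
    except KeyError:
        # A meaningful error the client (and the LLM) can act on
        return {"ok": False, "error": f"user {user_id} not found"}

print(asyncio.run(get_user_tool(1)))   # {'ok': True, 'user': {'name': 'Ada'}}
print(asyncio.run(get_user_tool(99)))  # {'ok': False, 'error': 'user 99 not found'}
```

Returning an error object rather than raising keeps the server alive and gives the model enough context to retry sensibly or report the problem to the user.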

5. Common Pitfalls to Avoid

Here are some common mistakes that developers make when building MCP servers, along with how to fix them.

  1. Mismatched Schemas
    • Error Message: You might see a ValidationError on the server or unexpected None values on the client.
    • Problem: The client is sending data that doesn't match the schema defined on the server.
    • Fix: Ensure that the client and server are using the exact same schema definitions. A shared library or code generation from a common source (like a Protobuf or JSON Schema definition) can help prevent this.
  2. Blocking I/O in an Async Server
    • Problem: Your server becomes unresponsive under load. Requests time out, and performance is poor.
    • Cause: You're using a blocking I/O library (like the requests library) in an async function without proper handling.
    • Fix: Use asynchronous libraries for I/O operations (e.g., httpx instead of requests). If you must use a blocking library, run it in a separate thread pool using asyncio.to_thread.
  3. Ignoring Security
    • Problem: Your MCP server is a prime target for abuse if it's not secured. Unauthorized users could access sensitive data or perform malicious actions.
    • Fix: ALWAYS implement authentication and authorization for any server that is not running locally and isolated from the network. Use industry-standard practices like OAuth2 or API keys.
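The fix for pitfall 2 can be sketched as follows: a blocking call is pushed onto a worker thread with asyncio.to_thread, so the event loop stays free to serve other requests. The slow_lookup function here is a stand-in for any blocking library call, such as requests.get or a synchronous database driver:

```python
import asyncio
import time

# Stand-in for a blocking library call
def slow_lookup(key: str) -> str:
    time.sleep(0.1)  # blocks the calling thread
    return f"value-for-{key}"

async def handle_request(key: str) -> str:
    # Wrong: calling slow_lookup(key) directly here would stall the
    # entire event loop for 100 ms per request.
    # Right: run it in a worker thread and await the result.
    return await asyncio.to_thread(slow_lookup, key)

async def main():
    # Ten concurrent requests share a thread pool, so they finish far
    # faster than ten sequential 0.1 s calls would.
    results = await asyncio.gather(
        *(handle_request(f"k{i}") for i in range(10))
    )
    print(results[0])  # value-for-k0

asyncio.run(main())
```

The same pattern applies inside any tool function: keep the coroutine non-blocking, and delegate anything synchronous to a thread.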

6. Next Steps and Additional Resources

You've taken your first steps into the world of MCP. To continue your journey, explore the official specification and SDK documentation, browse the growing catalog of open-source MCP servers for real-world examples, and try connecting your own server to an MCP-capable host application.

The Model Context Protocol is a young but rapidly evolving standard. By learning how to build MCP servers, you're positioning yourself at the forefront of the next wave of AI development. Happy building!