
How do I deploy an MCP server on LangSmith Deployments with my agent?

LangSmith Deployment · 2 min read · Nov 6, 2025

While this setup is not recommended for production, it’s possible to run an MCP server within a LangSmith Deployment for proof-of-concept (PoC) or demo purposes.

This example is adapted from the LangChain MCP Adapters README.

The recommended approach is to stand up a separate MCP server elsewhere and deploy it independently of your LangGraph deployment.

Example: Running an MCP Server with a LangGraph Agent

If you want to run a LangGraph agent that uses MCP tools in a LangGraph API server, you can use the following setup:

# graph.py
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

async def make_graph():
    client = MultiServerMCPClient(
        {
            "weather": {
                # make sure you start your weather server on port 8000
                "url": "http://localhost:8000/mcp",
                "transport": "streamable_http",
            },
            # ATTENTION: MCP's stdio transport was designed primarily to support applications running on a user's machine.
            # Before using stdio in a web server context, evaluate whether there's a more appropriate solution.
            # For example, do you actually need MCP, or can you get away with a simple `@tool`?
            "math": {
                "command": "python",
                # Make sure to update to the full absolute path to your math_server.py file
                "args": ["/path/to/math_server.py"],
                "transport": "stdio",
            },
        }
    )
    tools = await client.get_tools()
    agent = create_react_agent("openai:gpt-4.1", tools)
    return agent

In your langgraph.json, make sure to specify make_graph as your graph entrypoint:

{
  "dependencies": ["."],
  "graphs": {
    "agent": "./graph.py:make_graph"
  }
}

The second MCP server above shows how to point to a FastMCP application in a file in your repo and have it run every time the graph is created.

There are a number of reasons not to do this in production, highlighted in this thread: https://langchain.slack.com/archives/C089RDWTKU4/p1761774370247319