# How to build MCP SSE and deploy with Docker

## Metadata
- **Published:** 3/18/2025
- **Duration:** 15 minutes
- **YouTube URL:** https://youtube.com/watch?v=UsmwdXKy9CU
- **Channel:** nerding.io

## Description
Join the Community: https://nas.io/vibe-coding-retreat
50% off one-click remote-hosted MCP servers: use code NERDINGIO at Supermachine.ai (https://supermachine.ai/)
📞 Book a Call: https://calendar.app.google/vx62Asp9DTk7dRLW7
📰 Newsletter: https://sendfox.com/nerdingio

In this tutorial, we walk through building an MCP server that uses Server-Sent Events (SSE) to enable real-time communication with AI models. You'll also learn how to deploy it using Docker, making it easy to scale and integrate into web applications.

What You'll Learn:
✅ What is MCP SSE? – Understanding Model Context Protocol over SSE
✅ Building an MCP Server – Implementing SSE for AI-powered web apps
✅ Creating Custom MCP Tools – Extending AI models with web-based capabilities
✅ Testing/Debugging MCP SSE – Using the inspector for debugging and testing
✅ Deploying with Docker – Containerizing and running your MCP SSE server

💡 Why This Matters
By leveraging SSE, your AI applications can maintain a persistent, real-time connection with clients—enabling seamless interactions without polling. This approach is ideal for web-based AI assistants, chatbots, and dynamic AI workflows.

🎥 Chapters
00:00 Introduction
00:26 Code
01:05 Tool Spec
03:19 Building with LLM txt
04:10 Server
10:28 Testing
12:20 Docker

🔗 Links
Source: https://github.com/nerding-io/mcp-sse-example
Spec: https://spec.modelcontextprotocol.io/specification/2024-11-05/server/resources/#message-flow

⤵️ Let's Connect
https://everefficient.ai
https://nerding.io
https://twitter.com/nerding_io
https://www.linkedin.com/in/jdfiscus/
https://www.linkedin.com/company/ever-efficient-ai/

## Key Highlights

### 1. Building an MCP SSE Server
The video details the process of building a Server-Sent Events (SSE) server conforming to the Model Context Protocol (MCP), using the JSON-RPC spec for communication, including message sessions and tool invocation.

### 2. Leveraging LLMs for Development
The video demonstrates how to use LLMs (such as those in AI-assisted IDEs) together with the MCP documentation to accelerate server development: copying the URL of the llms-full.txt documentation page or the SDK README lets you pull the full documentation into your IDE.

### 3. Dockerizing and Deploying the Server
The video guides the viewer through Dockerizing the MCP SSE server using a Dockerfile and Docker Compose. It then showcases deployment via Render, highlighting configuration steps like setting the root directory and environment variables.

### 4. Testing and Integration
The video demonstrates how to test the server using the inspector tool to examine available tools and call functions. It also shows how to connect the deployed server to an n8n custom node for seamless integration.

### 5. MCP Specification Deep Dive
An explanation of the MCP specification, including the architecture with server endpoints, message sessions, tool listing, and the invocation process using JSON-RPC.

## Summary

Here's a summary document designed to quickly convey the video's content:

**Document Title: MCP SSE Server Development and Docker Deployment: A Quick Guide**

**1. Executive Summary:**
This video provides a practical walkthrough of building an MCP (Model Context Protocol) server that uses Server-Sent Events (SSE) for real-time AI communication.
It covers the essential steps from creating custom tools to deploying the server with Docker, enabling scalable AI application integration.

**2. Main Topics Covered:**
* **MCP SSE Fundamentals:** The core principles of the Model Context Protocol (MCP) over SSE for real-time AI interaction, with an explanation of the JSON-RPC spec for communication and the session/tool-invocation process.
* **Building an MCP Server:** Implementing SSE endpoints for AI-powered web applications, including message transport and tool handling.
* **Creating Custom MCP Tools:** Developing and defining custom tools for AI models, with example implementations: a basic math function (add) and an API call (fetch). Return messages are formatted as JSON and can contain text, images, or resources.
* **Leveraging LLMs for Development:** How to use Large Language Models (LLMs) with the MCP documentation to accelerate server development, including prompt generation.
* **Testing/Debugging MCP SSE:** Using the inspector tool to examine available tools and call functions for debugging.
* **Docker Deployment:** Containerizing the MCP SSE server with a Dockerfile and Docker Compose, configuring Render for deployment, and setting up environment variables.
* **Integration:** Connecting the deployed server to an n8n custom node for seamless workflow integration.

**3. Key Takeaways:**
* SSE provides a real-time communication channel between clients and AI models, eliminating the need for continuous polling.
* MCP's JSON-RPC specification offers a standardized way to manage communication, tool listing, and invocation.
* LLMs can significantly speed up development by providing code suggestions and scaffolding based on documentation.
* Docker simplifies deployment and scaling of MCP SSE servers.
* The Model Context Protocol Inspector facilitates testing and verifying server functionality.
* MCP SSE servers can integrate with platforms like n8n for automated workflows.

**4. Notable Quotes or Examples:**
* "In the specification, they actually have a really good graph of what tools are in the server... basically you have different types of protocol messages." - Explaining the core architecture of MCP.
* "Another really cool trick with MCP and LLMs is they actually give you an llms-full text... you can actually come in here, copy the link address, and this will give you the full website of the documentation that you can actually pull in." - Demonstrating how to leverage LLMs for rapid development using the MCP documentation.
* The example API tool defines the code to call an external API and return text; it also demonstrates passing environment variables through the server header.
* Return messages "don't necessarily have to be text; you can send images or even resources."

**5. Target Audience:**
* Developers building AI-powered web applications.
* Engineers interested in real-time communication protocols like SSE.
* Individuals looking to integrate AI models into automated workflows using platforms like n8n.
* Those seeking practical guidance on deploying servers with Docker.
* Anyone wanting to understand and implement the Model Context Protocol.
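For concreteness, the JSON-RPC exchange summarized above looks roughly like the following sketch, based on the 2024-11-05 MCP spec. The `add` tool and its argument values are illustrative, not taken verbatim from the video's repo.

```typescript
// Illustrative MCP JSON-RPC message shapes (2024-11-05 spec); values are examples.

// The client asks the server which tools it exposes.
const toolsListRequest = { jsonrpc: "2.0", id: 1, method: "tools/list" };

// The server replies with an array of tool definitions.
const toolsListResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "add",
        description: "Add two numbers",
        inputSchema: {
          type: "object",
          properties: { a: { type: "number" }, b: { type: "number" } },
          required: ["a", "b"],
        },
      },
    ],
  },
};

// The client invokes a tool by name with arguments...
const toolsCallRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: { name: "add", arguments: { a: 1, b: 2 } },
};

// ...and gets back a content array (text here, but images and resources are allowed).
const toolsCallResult = {
  jsonrpc: "2.0",
  id: 2,
  result: { content: [{ type: "text", text: "3" }] },
};
```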
## Full Transcript

Hey everyone, welcome to nerding.io. I'm JD, and based on the response to the MCP n8n node that I built, I'm going to start an MCP series. Today we're going to go through how to build a server-sent events server that can also be deployed. With that, let's go ahead and get started.

The first thing we're going to look at is this repo I put together. It has an example of SSE as well as a Dockerfile so that we can actually deploy this out, and I'm going to try to stay within this repo for the series, so we'll do things like testing and clients as well as some automations, and there are some utility specs we can go through. So the first step is to go ahead and download this repo and get it set up. Then we're going to talk, really quick, about what an MCP SSE server is and how to set up some tools. While the docs are really great, I'll actually show you a fun way to use them: we're going to dive into the spec.

In the specification they actually have a really good graph of what tools are in the server. What this means is you basically have different types of protocol messages. The way SSE works is you have a server, and this server has an endpoint, typically called /sse (though it could even be the root endpoint). That endpoint allows you to create a session, and that session allows you to send messages, so the server can keep track of which messages belong to which session. As part of that, you can say that you want to call tools: you request a list of tools, those tools go back to the client (the LLM, or even a human in the loop) for tool selection, and then you invoke a tool when you're ready. There's also a return path: once the result is complete, it goes to the LLM and can then be returned to the agent. There's also a notification called "tools list changed," which the server sends to the client to let it know that there was an update.

If we scroll up to what listing looks like, everything follows the JSON-RPC spec. That means you send a method along with parameters, and in the response you get back an array describing the tools. So when you send your request over SSE (which is basically a URL), you get this list of tools back, and based on those tools you can make a call, which executes the function by name with the arguments associated with it. So what we're going to do is look at how we can go through a server and actually build something.

Another really cool trick with MCP and LLMs is that they actually give you an llms-full.txt. What this means is that if you're using something like Cursor or Cline or some other AI-assisted IDE, you can come in here, copy the link address, and this gives you the full website of the documentation that you can pull in. We're going to use that when we're looking at our server.

Real quick, everyone: if you haven't already, please remember to like and subscribe; it helps more than you know. Also, please go check out Text Yourself. It's a simple application I built that helps keep me on track: all you have to do is SMS the tasks and reminders you want sent back to yourself. With that, let's get back to it.
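On the wire, the session setup described above is just an HTTP event stream. Below is a hypothetical exchange, assuming the TypeScript SDK's default behavior where the server's first SSE event tells the client which URL to POST its JSON-RPC messages to; the paths and session ID are illustrative.

```text
GET /sse HTTP/1.1
Accept: text/event-stream

HTTP/1.1 200 OK
Content-Type: text/event-stream

event: endpoint
data: /messages?sessionId=3f1a9c2e

--- the client then POSTs its JSON-RPC messages to that endpoint ---

POST /messages?sessionId=3f1a9c2e HTTP/1.1
Content-Type: application/json

{"jsonrpc":"2.0","id":1,"method":"tools/list"}
```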
So the first thing we're going to do is come in here, inside backend and then inside our server, and I'll go through what all of this is. First and foremost, we say we need an MCP server and define what our version is, and then we can go through the process of building a tool. This tool is kind of like the "hello world" of tools: it's a simple add. We pass our parameters, we have the definition of what our function should be, and then we follow the schema, which is the content array: you can have an object with a type of text as well as what that text is going to be. You can also have images returned in this type; again, back in the specification you can see the available content types, and you can send images or even resources. We're going to do more of this in an upcoming video, but I just wanted to show that it doesn't necessarily have to be text. In this example, though, we're just going to use text.

For the next tool, we're going to do an implementation that hits an API. Again, we define what our tool is: we give it a name, the parameters we want to use, as well as the schema itself, and then really all we're doing is going out and calling the API. In this instance we're doing a fetch, and we're passing an environment variable through our header, to then again return a text response. I also put an example of a resource in here, and we'll go through prompts in another part of the series.

What I did want to show you is a way to build an MCP server from scratch: you can paste in that llms-full.txt, and then what I like to do is two different things. I go to the SDK itself, grab the raw file of the README, and paste that into Cursor as well. And if I really want to get good with the specification, I'll sometimes even pull in the package itself. For instance, if I need SSE, I would just come in here and grab the raw file of this. Then we can just add another tool, or we could even add some prompts: so we can say "add a tool that..." and, just for fun, "add a prompt," and now we'll see what it returns.

Cool. I'll probably end up rejecting both of these, but what it did is apply a subtract tool (just a function you can see in the diff), and then it knew what to do for the prompt based on the information we gave it: the documentation as well as the TypeScript SDK README, which actually has examples of what to put in there for a prompt as well as a new tool. So again, this is super helpful when developing. I would definitely say using llms-full.txt is really awesome for pulling in different types of documentation and getting up and running. One of the problems, though, is that it's still AI, so you have to be careful about what it's actually going to do: you need to review the code as well as do some testing.
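Putting those pieces together, a minimal version of the backend might look like the sketch below. It assumes the MCP TypeScript SDK's `McpServer` and SSE transport; the API URL, the `API_KEY` variable name, and port 3001 are stand-ins rather than the repo's exact values, and the Express endpoint wiring is walked through in the next part.

```typescript
import express from "express";
import cors from "cors";
import { z } from "zod";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";

const server = new McpServer({ name: "mcp-sse-example", version: "1.0.0" });

// "Hello world" tool: add two numbers and return a text content item.
server.tool("add", { a: z.number(), b: z.number() }, async ({ a, b }) => ({
  content: [{ type: "text", text: String(a + b) }],
}));

// API-backed tool: fetch an external endpoint, passing a key from the
// environment through a header (the URL and variable name are hypothetical).
server.tool("search", { query: z.string() }, async ({ query }) => {
  const res = await fetch(
    `https://api.example.com/search?q=${encodeURIComponent(query)}`,
    { headers: { Authorization: `Bearer ${process.env.API_KEY ?? ""}` } }
  );
  return { content: [{ type: "text", text: await res.text() }] };
});

const app = express();
app.use(cors()); // CORS open to the world so any client can hit the server

// Root route: a simple outline of the server, handy as a health check.
app.get("/", (_req, res) => {
  res.json({ name: "mcp-sse-example", endpoints: ["/sse", "/messages"] });
});

// A single global transport only supports one client at a time (fine for a demo).
let transport: SSEServerTransport | undefined;

// GET /sse opens the event stream and points the client at /messages.
app.get("/sse", async (_req, res) => {
  transport = new SSEServerTransport("/messages", res);
  await server.connect(transport);
});

// POST /messages feeds the client's JSON-RPC messages into that transport.
app.post("/messages", async (req, res) => {
  if (!transport) {
    res.status(400).send("No active SSE connection");
    return;
  }
  await transport.handlePostMessage(req, res);
});

app.listen(3001, () => console.log("MCP SSE server listening on :3001"));
```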
Now that we have this server up and running, let's take a look at our Docker setup. I put my docker-compose in the root so that it points to my Dockerfile, which is in my backend, and all this is doing is spinning up a simple Express server, which is defined down here. We have our MCP server, but we also have to serve those endpoints, and that's the purpose of using Docker to deploy. What I make sure to do is keep CORS open to the world so that anyone can hit it. I also like having a root route that just gives an outline of my server. Then I have two different endpoints: an /sse endpoint and a /messages endpoint. All you really need to do in the SSE handler is take the request and start a transport pointed at the /messages route; the /messages route then connects to that transport, and the client posts to it. That's why we need this messages endpoint: it takes the information coming in over SSE and essentially starts a message queue.

So let's go ahead and watch this. Cool, I'm on port 3001, and if we go to /sse we can see the event hitting the endpoint and passing us data, so we can see what this message is. There are a couple of different ways of testing (again, I'm going to make a separate video on this), but the best way is the inspector. So we take our URL, come over to the inspector, click connect, refresh, and then we can get our tools. Again we're seeing our add and our search, and we're able to see their information. If we do one and two on add, we should get a result. If we do search with "protocol" and a count of two... oh, it's showing that our variable is not set in our Docker container, so definitely make sure when you're doing your docker up that you actually define that environment variable. But this is essentially what we need to do as far as a quick test. The way you run the inspector is with the Model Context Protocol Inspector; I'm going to have another video on how to leverage this tool a little more.
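As a reference point, a Dockerfile for a backend like this might look roughly like the following. This is a minimal sketch, not the repo's exact file: the Node base image, build script, output path, and port are all assumptions. The compose file in the repo root then just needs to point its build context at the backend directory (for example, `build: ./backend`) and pass through the environment variable the search tool needs.

```dockerfile
# Minimal sketch of a backend Dockerfile (base image, paths, and port are assumptions).
FROM node:20-alpine

WORKDIR /app

# Install dependencies first so they are cached between builds.
COPY package*.json ./
RUN npm ci

# Copy the source and compile the TypeScript server.
COPY . .
RUN npm run build

# The Express server from the sketch above listens on 3001.
EXPOSE 3001

# API_KEY must be supplied at runtime (e.g. via docker-compose or Render).
CMD ["node", "dist/server.js"]
```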
The other piece is deploying. Once we have our Dockerfile and everything set up and we want to deploy, we can just push this to our repo and have it auto-deploy. I really wanted to use Coolify for this, but after a lot of arguments with Cursor I was unable to get it to work (it has to do with the fact that it's Nginx hosting), so instead I went with Render. What you can do is go into your settings; if you want to get this up and running, you can use the free tier. I just put in my repo, made sure that my root directory was set to the backend and that my Dockerfile was associated with it, and then I just hit deploy. Now we can see on this URL that we have our root outline, which lets us do our health check, and we can also go to /sse and see the connection. So we've got our endpoint, and again we can take this back to our inspector, do a test and a refresh, and we've got our tools.

All right, the last thing we're going to do is connect this to the n8n custom node that we previously built. If we come into our n8n, we go to our credentials, where I have an SSE account, and just like when I was running locally, I change this to my deployed version. All I have to do is save and then go to the workflows. I'm going to start from scratch and do something simple: say MCP, we want to get a list of available tools, we have our SSE account, we go ahead and list, and we go ahead and test. It's a little slower, but we actually get our schema back just like we would on localhost, and we know this is hitting the endpoint based on the fact that we have our credentials and we're hitting a live server.

All right, that's it for today, everyone. What we went through was tools, how to build an SSE server, and how you can deploy that server through a Dockerfile. With that, happy nerding!

---
*Generated for LLM consumption from nerding.io video library*