# Use MCP with Images, PDFs, and Databases

#mcp #sse #n8n

## Metadata

- **Published:** 3/31/2025
- **Duration:** 14 minutes
- **YouTube URL:** https://youtube.com/watch?v=Qydl8LbZxXk
- **Channel:** nerding.io

## Description

Join the Community: https://nas.io/vibe-coding-retreat

50% off one-click remote hosted MCP servers: use NERDINGIO at Supermachine.ai https://supermachine.ai/

📞 Book a Call: https://calendar.app.google/M1iU6X2x18metzDeA
📰 Newsletter: https://sendfox.com/nerdingio

In this video, we dive into how to create MCP resource templates designed for Server-Sent Events (SSE), enabling real-time, web-friendly communication between AI models and external tools.

We cover dynamic resource examples including:

🗃️ Databases: query live data and stream results
🖼️ Images: process and serve images on demand
📄 PDFs: send PDFs to OpenAI

🔍 What You'll Learn:

✅ What MCP resource templates are and how they power AI tools
✅ How to build SSE-compatible resources for live, browser-friendly communication
✅ Dynamic examples using databases, images, and PDFs
✅ How these templates help extend your AI assistant with custom tools

💡 Why MCP + SSE? MCP over SSE allows for real-time, standardized, and web-compatible AI workflows. With templates, you can rapidly plug in new tools that make your LLMs smarter and more useful in context-aware applications.

🎥 Chapters

- 00:00 Introduction
- 00:26 SSE Server
- 01:24 Templates
- 03:49 Inspector
- 06:13 n8n
- 10:08 Image
- 12:44 PDF

🔗 Links

- Source: https://github.com/nerding-io/mcp-sse-example
- Spec: https://spec.modelcontextprotocol.io/specification/2024-11-05/server/tools/

⤵️ Let's Connect

- https://everefficient.ai
- https://nerding.io
- https://twitter.com/nerding_io
- https://www.linkedin.com/in/jdfiscus/
- https://www.linkedin.com/company/ever-efficient-ai/

## Key Highlights

### 1. Dynamic Resources with MCP Templates

MCP resource templates allow dynamic content delivery using mustache-style templating for variables, enabling flexibility in serving different data types.

### 2. Serving Images and PDFs via MCP

MCP can serve images and PDFs by encoding them as base64, which allows LLMs to access and process visual data from server-side resources.

### 3. Database Integration with MCP

MCP can act as a resource template for querying and retrieving data sets, which is useful when you only want read-only access to a database.

### 4. N8N Integration for Image Analysis

Shows a workaround that enables OpenAI to analyze images by dynamically fetching them through MCP and encoding them as base64.

### 5. PDF Uploads to OpenAI Assistants

Demonstrates how to upload PDFs accessed via MCP resource templates to OpenAI Assistants for embedding and later use.

## Summary

Here's a comprehensive summary document for the video, designed for quick understanding:

**Document Title: MCP Resource Templates: Dynamic Data Delivery for AI Workflows**

**1. Executive Summary:**

This video explains how to use Model Context Protocol (MCP) resource templates to dynamically serve various data types (databases, images, PDFs) to AI models using Server-Sent Events (SSE) for real-time communication. It demonstrates practical applications, including database queries, image analysis via base64 encoding, and PDF uploads to OpenAI Assistants.

**2. Main Topics Covered:**

* **Introduction to MCP Resource Templates:** Explanation of dynamic resource creation using mustache-style templating.
* **SSE Server Implementation:** Setting up an SSE server for real-time data delivery.
* **Dynamic Data Examples:** Serving different data types:
  * Databases: Querying and streaming database results (read-only).
  * Images: Processing and serving images as base64-encoded strings.
  * PDFs: Sending PDFs to OpenAI for embedding.
* **MCP Inspector:** Using the MCP Inspector to test and validate resource templates.
* **N8N Integration:** Demonstrating how to use MCP resource templates within the n8n workflow automation platform.
* **Workarounds for Image Analysis:** Enabling OpenAI to analyze images fetched via MCP by encoding them as base64 and using the correct OpenAI API parameters.
* **PDF Uploads to OpenAI Assistants:** Uploading PDFs to OpenAI Assistants via MCP for embedding and knowledge-base augmentation.

**3. Key Takeaways:**

* **MCP Resource Templates for Dynamic Content:** Enable flexibility in serving different data types to AI models.
* **SSE for Real-Time Communication:** MCP over SSE allows for real-time, standardized, and web-compatible AI workflows.
* **Base64 Encoding for Images and PDFs:** Allows image and PDF data to be sent to LLMs for analysis.
* **Database Integration for Read-Only Access:** MCP can expose databases as read-only resources.
* **Custom Tools & Workflows:** Templates let you rapidly plug in new tools that make LLMs smarter and more useful in context-aware applications.

**4. Notable Quotes or Examples:**

* **Dynamic Greeting Example:** "In this instance all we're doing is just saying hello to someone but we can also actually pass back files." (illustrates basic templating)
* **Database Integration:** "While we can use tools to read and write from the database there might be an opportunity to say well I just want to do a read only from my database or my data set."
* **Image Analysis with OpenAI (workaround):** "When we typically send images to OpenAI we actually have to send it as a different flag... you have to actually send this through like an image media URL or send it as base64 code."
* **Uploading PDFs to OpenAI Assistants:** Demonstrates how to upload PDFs accessed via MCP resource templates to OpenAI Assistants for embedding and later use.

**5. Target Audience:**

* AI/ML Developers
* Workflow Automation Engineers
* Individuals interested in integrating AI models with external data sources
* Developers using or considering Model Context Protocol (MCP)
* n8n users looking to extend their workflows with AI capabilities

## Full Transcript

Hey everyone, welcome to nerding.io. I'm JD, and today I'm going to continue the MCP series by going through resources, but doing resource templates, which allows us to be a little more dynamic. We're also going to look at some of the more advanced things that you can return: things like databases, images, and even PDFs. With that, let's go ahead and get started.

All right, so last time what we did is we returned a static file, and we were able to pull back our full text for LLMs. Now what we're going to do is expand upon that and make dynamic resources. This uses something called a resource template, and really all you're doing is using mustache-style templating, where you can pass in dynamic variables and then pass that information back. In this instance all we're doing is just saying hello to someone, but we can also pass back files.

What's really interesting about this is we could pass back log information. For instance, if we have information that's happening on our server, we could tell it what the file name would be and then return that as plain text. What's even more interesting is when you start going through the MCP documentation, you can see that there are other types of information you can send. For instance, we can send images and PDFs, and we do that by telling it what our MIME type is going to be and what our blob, or content, is. We can take that information and turn it into base64, and the way we can do that is we can use the
file system and read these paths dynamically, then pass that information directly into, for instance, our LLM. So what we're going to do is take a look at this and see how we can send different PDF files as well as images. You can also see that rather than just sending the text, we can send the text file as a MIME type along with the content. The other interesting thing we can do is call our database. While we can use tools to read and write from the database, there might be an opportunity to say, well, I just want to do a read-only from my database or my data set. In this particular example, all I'm really doing is creating a simple, almost NoSQL-style store: I'm going to pass in the collection that I want to see and the ID. You can see down here this is how I'm populating the database. You basically say that I have a users collection, each one has an ID, here's the information for the user, and then also some product information. I can pull this all back using MCP as a resource template based on our database.

Real quick, everyone: if you haven't already, please remember to like and subscribe, it helps more than you know. Also please go check out Text Yourself, a simple application that I built that helps keep me on track: all you have to do is SMS different tasks and reminders that you need sent back to yourself. With that, let's get back to it.

So what we're going to do is pull this information from our server-sent events, and we can do that by just running our server. Again, we're just in the back end and we're going to say npm run, and then we also need to run our Model Context Protocol inspector. Now that we have our inspector, we can go to our localhost, and as we saw in the previous videos, we could send this
out to a deployed Docker instance, and we can connect. You'll notice that resource templates are actually different than resources. What's interesting, though, is that the way to read a resource is pretty much the same. When we're here and we list, it'll just give us information, but you'll notice that the read is resources/read, while the list here is resources/templates/list. If we did a greeting and we just said "JD" and we read that resource, we're getting the same URI to read the resource, but it's aware that we're passing information in, because we know dynamically this is expected to be a value in our template.

If we look at things like logs, we can enter our log name, like app.log, and we'll get information back about what's in our log file. As for the documents, if we say our document type is images and our file name is build images respect.png, we can read that, and now we're getting the base64-encoded information of our file that we can pass to our LLM. So now we're going to take a look at this and see how we can do it in n8n. Again, the same principles apply to our collection: if we do something like users, with an ID, we can read from the source, and now we're getting our information.

So let's apply this in n8n and see what that looks like. First we're going to test getting a log, and you can see down here it says "get me log files for system one". If we come in here and we look at our assistant, we say "always list the resource templates first, then get the structure and send to the URI". These are pretty basic instructions; you could definitely make these a lot better. This is going to list our resources (this is a new operation that we have), and then we're going to read. Again, the read is exactly the same. We're going to make it dynamic, whereas in the last
video we hardcoded what the resource is. So now we say "give me the logs for system log", go ahead and fetch that information, and you can see that it ran on our input as well as getting the URI for the log system. Again, it knows: I'm going to this URI, which is pointing to my resource template, with the parameter here, and we're getting our information back.

We can do the same kind of thing with a collection and ID. As you can see from the test here, I'm going to say "get me the user from the users collection with ID 1". Same kind of thing: we're just going to pull from the resource, and there we go. Now we have our data of what's coming back from the application via a tool.

Now, I tried doing this same kind of thing with an image, and every single time I try to pull it back I get an error. It can read the image, and it knows it's base64 encoded, but it can't actually process that information. So when we do it in an AI agent and we say "get me the docs for images spec", it's going to try to pull that information back, but it's not able to process it, and there's a reason for that. What's happening is that when we typically send images to OpenAI, we actually have to send them with a different flag. You can see here that it recognizes the fact that it's a document: we got information back right here; this is all the information that we got back. However, it's not passing this to OpenAI correctly. Again, you have to send this through an image media URL or send it as base64 code. There's not a way that I know of to do this in n8n as a tool, but I wanted to experiment with this a little bit more, and the way I got this to work was: if you are not doing it as a tool, you can send it to have OpenAI analyze it. This is all static, but I just wanted to kind of go
through what this actually looks like. Again, we have our list of resources; it knows which resource templates we have available, and we're going to read from that resource. We're just going to pass in the doc images spec statically, as we determined what it was. We could chain this execution to our other flow right here if we wanted to, but just to show you how this works: when we pass that information to the binary here, right now we're hardcoding it. We could change this so the file name comes from our URI and the MIME type comes from our MIME type; we could do a split on the slash, and (I know this is kind of hacky, but it should work) it should give us our name and our file. So now we actually get our image file name. Great.

As we send this information, what we should be able to do now is send it dynamically to OpenAI. All I'm going to do is just say "test", and we'll see what's going on for our node. So now we're getting the information back, and when we say "what's the image", we're getting our content message back from the analysis. The way this happens is: when we look at OpenAI, we have the ability to look at a resource, in this case an image, and then analyze the image. What we do is pass this data field as our binary type instead of an image URL, and we can get information back. This base64-to-binary step is just code that's basically telling the next node that this is binary, it's a data field, and here's all the information you need in order to understand what this image is. Right here we have: this is an image of sequence interactions between resources and discovery, and it kind of explains what we're
actually looking at. If we go back to our code and we look at the images spec, we can see that's what it is: it's a web sequence diagram of our client-to-server flow for MCP, and we can grab this information.

All right, so now let's try it as a PDF. If we come in here and we change this from images to PDFs, build with MCP PDF, and test, we're getting that information back. We shouldn't have to change anything here, but we would need to change something here: we can still say analyze image, but now we need to say file, we can say upload file. Let's test our step here. If we take a look at the output, what we can see is it actually uploaded the file to OpenAI as an assistant file. What this means is the embedding is now done, and we could use this in our OpenAI Assistants and call back for anything related to this file.

All right, that's it for today. What we went through was dynamic resources using templates, looking at things like databases as well as images and PDFs, and how we can use that in things like n8n. With that, happy nerding!

---
*Generated for LLM consumption from nerding.io video library*
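The mustache-style templating described above (e.g. `greeting://{name}`) can be sketched without the MCP SDK as a small URI matcher. This is a minimal illustration; `compileTemplate` and `matchTemplate` are hypothetical helper names, not part of any MCP library:

```typescript
// Minimal mustache-style URI template matcher, in the spirit of the
// resource templates shown in the video. These helpers are illustrative,
// not part of the MCP SDK.
type Params = Record<string, string>;

// Turn "greeting://{name}" into a RegExp with one named group per variable.
function compileTemplate(template: string): RegExp {
  const pattern = template
    .replace(/[.*+?^$()|[\]\\]/g, "\\$&")    // escape regex metacharacters
    .replace(/\{(\w+)\}/g, "(?<$1>[^/]+)");  // {var} -> named capture group
  return new RegExp(`^${pattern}$`);
}

// Match a concrete URI against a template, returning its variables (or null).
function matchTemplate(template: string, uri: string): Params | null {
  const m = compileTemplate(template).exec(uri);
  return m?.groups ? { ...m.groups } : null;
}

console.log(matchTemplate("greeting://{name}", "greeting://JD")); // { name: 'JD' }
```

The same matcher handles multi-variable templates such as `db://{collection}/{id}` from the database example.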
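The read-only database example (a `db://{collection}/{id}` template over a small NoSQL-style store) can be mimicked in memory. The collections and record values below are made up for illustration; only the `{ uri, mimeType, text }` content shape follows the video:

```typescript
// A tiny in-memory "NoSQL" store mirroring the video's db://{collection}/{id}
// resource template. All data values here are invented for illustration.
const store: Record<string, Record<string, object>> = {
  users: {
    "1": { id: 1, name: "Ada", role: "admin" },
    "2": { id: 2, name: "Grace", role: "editor" },
  },
  products: {
    "1": { id: 1, sku: "MCP-001", price: 42 },
  },
};

// Read-only lookup: parse a db:// URI and return the record as an MCP-style
// text resource (JSON), or null when the collection/id is unknown.
function readDbResource(uri: string) {
  const m = /^db:\/\/([^/]+)\/([^/]+)$/.exec(uri);
  if (!m) return null;
  const record = store[m[1]]?.[m[2]];
  if (!record) return null;
  return {
    contents: [{ uri, mimeType: "application/json", text: JSON.stringify(record) }],
  };
}

console.log(readDbResource("db://users/1")?.contents[0].text);
```

Because the function only ever reads from the store, it gives the "read-only from my database" behavior the video contrasts with read/write tools.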
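The video reads images and PDFs back by pairing a MIME type with base64 content. A minimal sketch of that idea, assuming the `{ contents: [{ uri, mimeType, blob }] }` layout MCP uses for binary resource reads; `readBinaryResource` and the small MIME table are illustrative, not SDK code:

```typescript
import { readFileSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join, extname } from "node:path";

// Small illustrative extension-to-MIME table (not exhaustive).
const MIME_TYPES: Record<string, string> = {
  ".png": "image/png",
  ".pdf": "application/pdf",
  ".txt": "text/plain",
};

// Build an MCP-style binary resource entry: the file bytes are base64-encoded
// into `blob` alongside the `uri` and `mimeType`, matching the shape the
// video reads back through the inspector.
function readBinaryResource(uri: string, path: string) {
  const mimeType = MIME_TYPES[extname(path)] ?? "application/octet-stream";
  const blob = readFileSync(path).toString("base64");
  return { contents: [{ uri, mimeType, blob }] };
}

// Demo with a throwaway file standing in for an image or PDF.
const path = join(tmpdir(), "demo.png");
writeFileSync(path, Buffer.from([0x89, 0x50, 0x4e, 0x47])); // PNG magic bytes
const result = readBinaryResource("docs://images/demo.png", path);
console.log(result.contents[0].mimeType); // image/png
```

The consumer (an LLM client, or n8n in the video) decodes `blob` back to bytes before use.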
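The "split on the slash" hack used in n8n to recover a file name and MIME type from a resource URI looks roughly like this in plain code; the helper names and the extension table are assumptions for illustration:

```typescript
// Derive the file name from a resource URI by taking the last "/" segment,
// as the n8n expression hack in the video does.
function fileNameFromUri(uri: string): string {
  const parts = uri.split("/");
  return parts[parts.length - 1];
}

// Map the file extension to a MIME type (small illustrative subset).
const EXT_TO_MIME: Record<string, string> = {
  png: "image/png",
  pdf: "application/pdf",
};

function mimeFromUri(uri: string): string {
  const ext = fileNameFromUri(uri).split(".").pop() ?? "";
  return EXT_TO_MIME[ext] ?? "application/octet-stream";
}

console.log(fileNameFromUri("docs://images/spec.png")); // spec.png
console.log(mimeFromUri("docs://images/spec.png"));     // image/png
```

This is admittedly hacky, as the video says, but it removes the hardcoded file name and MIME type from the workflow.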
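The OpenAI image workaround (sending the picture as base64 data rather than a plain tool response) amounts to wrapping the blob in a `data:` URL inside a Chat Completions vision message. The sketch below only constructs the payload, no request is sent, and the model name is an assumption:

```typescript
// Wrap a base64 blob as a data: URL, the form OpenAI accepts for inline images.
function toDataUrl(mimeType: string, base64Blob: string): string {
  return `data:${mimeType};base64,${base64Blob}`;
}

// Build a Chat Completions vision request body around the encoded image.
// "gpt-4o" is an assumed vision-capable model name; swap in your own.
function buildImageAnalysisPayload(mimeType: string, base64Blob: string, question: string) {
  return {
    model: "gpt-4o",
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: question },
          { type: "image_url", image_url: { url: toDataUrl(mimeType, base64Blob) } },
        ],
      },
    ],
  };
}

const payload = buildImageAnalysisPayload("image/png", "iVBORw0KGgo=", "What's in this image?");
console.log(toDataUrl("image/png", "iVBORw0KGgo=")); // data:image/png;base64,iVBORw0KGgo=
```

Posting a payload shaped like this to the Chat Completions endpoint is the "analyze image" path the video uses once the MCP blob has been converted to binary.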