# Build a Customer Support AI Agent with LangGraph & Portkey Prompt API

## Metadata
- **Published:** 4/25/2025
- **Duration:** 12 minutes
- **YouTube URL:** https://youtube.com/watch?v=6MgPd3O3FXs
- **Channel:** nerding.io

## Description
In this video, we walk through how to build a customer support AI agent using #langgraph for agent orchestration and #portkey Prompt Render API for flexible, dynamic prompt management. This agent isn't just a chatbot—it can reason, route, and respond like a human support rep by chaining tools, memory, and context-aware responses in a modular and scalable graph-based system.

🧠 What You'll Learn:
✅ How to build a multi-step support agent with LangGraph
✅ How to dynamically render prompts using Portkey's Prompt Render API
✅ How to chain together tools, memory, and decisions in a structured flow
✅ Use cases for real-world AI-driven customer support
✅ How to make your agents transparent, debuggable, and adaptable

💡 Why It's Powerful:
LangGraph allows you to create reliable agent workflows with proper control, observability, and modularity—no more spaghetti prompt logic. Portkey helps separate prompt content from code, making it easier to test, version, and maintain over time.

🎯 Perfect for:
- SaaS support chatbots
- Internal AI assistants
- Lead qualification bots
- Tier-1 support routing

🎥 Watch the full walkthrough and deploy your own AI agent!

🔗 Resources:
👉 LangGraph: https://www.npmjs.com/package/@langchain/langgraph-supervisor
👉 Portkey LangGraph integration: https://portkey.ai/docs/integrations/agents/langgraph#2-reliability
👉 Portkey Prompt Render API: https://portkey.ai/docs/product/prompt-engineering-studio/prompt-api#portkey-node-sdk
🧠 Text Yourself: https://textyourself.app
📩 Newsletter: https://sendfox.com/nerdingio
📞 Book a Call: https://calendar.app.google/M1iU6X2x18metzDeA

📌 Chapters:
00:00 Intro
00:20 Docs
03:49 Demo
05:16 Code
10:06 Observability
12:00 Final Thoughts

⤵️ Let's Connect:
🌐 https://nerding.io
🐦 Twitter: https://twitter.com/nerding_io
💼 LinkedIn: https://www.linkedin.com/in/jdfiscus/
🚀 Ever Efficient AI: https://everefficient.ai

💬 Have you tried LangGraph or Portkey? Drop your use cases or questions in the comments! 👇
👍 Like & Subscribe for more AI agent workflows and automation builds!

## Key Highlights

### 1. LangGraph Multi-Agent Supervisor with TypeScript
Demonstrates the pre-built LangGraph multi-agent supervisor with a focus on a TypeScript implementation, creating ReAct agents that mimic a customer support team.

### 2. Portkey AI Gateway Integration
Highlights the integration of Portkey as an AI gateway for prompt management, load balancing, observability, and metrics within the LangGraph setup.

### 3. Prompt Render API for Dynamic Prompts
Explains how to use Portkey's Prompt Render API to dynamically load and update prompts from the prompt management tool, enabling prompt editing without code changes.

### 4. Handling Missing Trace Logging in the TypeScript SDK
Addresses the lack of agent trace logging in Portkey's TypeScript SDK (a feature readily available in the Python SDK) and shows how to implement prompt management regardless.

### 5. AI-First Firebase and Next.js Starter Kit
Briefly mentions a Firebase and Next.js starter kit with pre-built AI components for building AI-powered applications, including built-in prompt instructions and a chatbot.

## Summary

Here's a summary document for the video "Build a Customer Support AI Agent with LangGraph & Portkey Prompt API":

**1. Executive Summary:**

This video demonstrates how to build a customer support AI agent using LangGraph for agent orchestration and Portkey's Prompt Render API for dynamic prompt management. The focus is on creating a multi-step, reasoning agent that can route and respond like a human support rep by leveraging tools, memory, and context-aware prompts in a modular, scalable graph-based system.

**2. Main Topics Covered:**

* **LangGraph Implementation:** Building a multi-step support agent using LangGraph, focusing on a TypeScript implementation of the pre-built multi-agent supervisor.
* **Portkey Integration:** Using Portkey as an AI gateway for prompt management, load balancing, observability, and metrics within the LangGraph setup.
* **Dynamic Prompt Rendering:** Employing Portkey's Prompt Render API to dynamically load and update prompts, enabling prompt editing without code changes.
* **Agent Specialization:** Each agent has access only to the specific tools it needs, so it can answer questions in its domain as precisely as possible.
* **Debugging and Observability:** Emphasizing the importance of transparency and debuggability in AI agent workflows, showcasing Portkey's logging capabilities.
* **AI Starter Kit Mention:** A brief mention of a Firebase and Next.js starter kit with pre-built AI components for AI-powered applications.

**3. Key Takeaways:**

* LangGraph facilitates the creation of reliable and maintainable agent workflows, offering better control, observability, and modularity than ad-hoc prompt chaining.
* Portkey's Prompt Render API moves prompt content outside the code base, enabling easier testing, versioning, and maintenance of prompts.
* Dynamic prompts can be loaded and updated from Portkey's management tool, allowing real-time adjustments without code deployments.
* While Portkey's TypeScript SDK lacks the agent trace logging present in the Python SDK, prompt management can still be implemented effectively.
* The AI agent routes customer inquiries to specialized agents (product, pricing, shipping, general support) based on the question's context.

**4. Notable Quotes or Examples:**

* "LangGraph allows you to create reliable agent workflows with proper control, observability, and modularity—no more spaghetti prompt logic."
* "Portkey helps separate prompt content from code, making it easier to test, version, and maintain over time."
* Example use case: a customer asks, "What are the shipping options for New York?" The supervisor recognizes the need for shipping information, routes to the shipping agent, and responds: "Here are the shipping options for New York."
* "We can actually load the prompts that we're using in the Portkey prompt management tool, and we can do that with something they released called the prompt render API."
* Using API calls to dynamically load prompts instead of embedding them directly in the code, which makes changes easier and reduces code deployments.

**5. Target Audience:**

* Developers interested in building AI-powered customer support solutions.
* Engineers working with LangChain/LangGraph who want better agent orchestration.
* Individuals looking for dynamic prompt management and AI gateway integration.
* Teams building SaaS support chatbots, internal AI assistants, or lead qualification bots.
* Anyone looking to improve AI agent transparency, debuggability, and adaptability.
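To make the supervisor architecture concrete, here is a minimal sketch using the `@langchain/langgraph-supervisor` package linked in the resources above. The tool, prompt strings, and model choice are illustrative assumptions, not taken from the video (the video also routes every model call through Portkey, shown later):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { createSupervisor } from "@langchain/langgraph-supervisor";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// One shared model for brevity; the video creates a separate LLM per agent.
const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Hypothetical shipping lookup tool; the real project has product,
// pricing, and shipping tools.
const getShippingOptions = tool(
  async ({ location }) =>
    `Standard (5-7 days) and express (1-2 days) shipping are available for ${location}.`,
  {
    name: "get_shipping_options",
    description: "Look up shipping options for a customer's location.",
    schema: z.object({ location: z.string() }),
  }
);

// A specialized agent that only handles shipping questions.
const shippingAgent = createReactAgent({
  llm: model,
  tools: [getShippingOptions],
  name: "shipping_agent",
  prompt: "You are a shipping specialist. Answer only shipping questions.",
});

// The supervisor routes each customer question to the right specialist.
const workflow = createSupervisor({
  agents: [shippingAgent],
  llm: model,
  prompt: "You manage a customer support team. Route each question to the right agent.",
  outputMode: "full_history", // keep every intermediate message for debugging
});

const app = workflow.compile();
const result = await app.invoke({
  messages: [{ role: "user", content: "What are the shipping options for New York?" }],
});
console.log(result.messages.at(-1)?.content);
```

Setting `outputMode` to `"full_history"` (rather than `"last_message"`) preserves every intermediate agent message, which matches the video's emphasis on transparent, debuggable flows.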
## Full Transcript

Hey everyone, welcome to nerding.io. I'm JD, and today we're going to go through a customer support AI agent where we're using LangGraph and Portkey's render API so that we can actually do prompt management. With that, let's go ahead and get started.

First things first, we're going to use the LangGraph multi-agent supervisor. This is a pre-built AI agent that we can leverage, and they have a pretty good way to install it: just a simple npm install. They also have a pre-built version for Python, but I've been sticking with TypeScript lately. So, very similar to how their docs create ReAct agents, we're going to make ReAct agents that mimic a customer support bot.

One of the things I always want to include is an AI gateway, and my gateway of choice is Portkey. It's super flexible, and you can port it to multiple different SDKs. They also have a good getting-started page on different AI agents and how to include Portkey in LangGraph. One of the key things I love about Portkey is the ability to quickly ramp up reliability: you can load in load balancers, you can have different targets, you can include all kinds of parameters, you can manage all your prompts, and it has observability and metrics built in. But one thing I noticed is that, even in their example, there's no way to take their prompt management tool and integrate it into LangGraph. The other thing that's missing, at least in the TypeScript SDK, is the ability to log traces in AI agents; this is one of the benefits of going with Python, where you can log all of the steps in your trace.
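Based on Portkey's documented LangChain integration (linked in the resources above), routing a LangChain model through the gateway means overriding the base URL and default headers. This is a minimal sketch; the virtual key and trace ID values are illustrative assumptions:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { createHeaders, PORTKEY_GATEWAY_URL } from "portkey-ai";

// Route every LLM call through the Portkey gateway instead of hitting the
// provider directly. The real provider key lives in Portkey's virtual key,
// so the apiKey passed to ChatOpenAI is just a placeholder.
const llm = new ChatOpenAI({
  apiKey: "X",
  model: "gpt-4o-mini",
  configuration: {
    baseURL: PORTKEY_GATEWAY_URL,
    defaultHeaders: createHeaders({
      apiKey: process.env.PORTKEY_API_KEY ?? "",
      virtualKey: process.env.PORTKEY_VIRTUAL_KEY ?? "", // maps to a provider + key
      traceId: "support-agent", // optional: groups related calls in Portkey's logs
    }),
  },
});
```

Because only the base URL and headers change, the rest of the LangGraph code keeps calling the LLM exactly as it would with OpenAI directly, which is the point the walkthrough makes next.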
So we're going to go through, in Node.js, how we actually load the prompts that we're keeping in the Portkey prompt management tool, and we can do that with something they released called the prompt render API. This is an API we can hit to connect to our Portkey instance and pull down our prompts based on what we have in our management system. With that, let's go ahead and jump into it.

Real quick: this is another project I'm working on. It's a Firebase and Next.js starter kit that comes with AI-powered apps. It lets you get up and running with Firebase, Next.js, and Genkit, plus pre-built AI components. Some of the things I focused on were the ability to start with an AI-first mindset and built-in prompt instructions, so you can build new features, new blog posts, and documentation, and integrate directly into a chatbot built with a chat interface, as well as content generation and different prompts. If you sign up now there's a discount going on where you can get 90% off; this will fluctuate, because it also comes with social proof, which allows dynamic discounts. Right now we're offering 90% off, so definitely check it out, and don't forget to like and subscribe.

All right, before digging into the code, let's look at a quick demo. If we do npm run, we get a terminal that shows the questions we're asking. We're going to ask questions related to the product, the pricing, the shipping, the brand, or just general support. Each one of these is a different agent, and each one has access to different tools. What we're going to do is see how the agent actually works: when we ask it a question, it goes through a conversation flow that we're outputting. So not only do we have access to these agents, we're also sending different prompts to those agents in order to be more explicit. This gives us the added benefit of letting other people edit the prompts without actually editing the code. As you can see, as the supervisor goes through, it finds that it has access to a tool in the shipping information, it responds back, and the customer service agent says, "Here are the shipping options for New York."

So let's take a look at how this is actually operating. Again, we're using the LangGraph pre-built agent, and we're going to set that up as well as integrate it into Portkey. As you can see here, this part controls the terminal: we're loading in Portkey, but we're also loading in a prompt loader, and I'll explain what I'm doing there. The rest of this builds out the agent. All the way down here is still the terminal interface, and then we connect directly to our agent.

When we look at the supervisor, this is where we're creating our LangGraph implementation. You can see we have our memory saver and in-memory store. We initialize all of the agents we have access to, declare a supervisor, and create those agents, and right here you can see the prompt as well as the output mode; we want to get all of our information. We also configure all of our other agents, and by doing that we create different LLMs for each of the models. The reason I wanted to try that is I wanted to see: could I send different models to it, and could I have different traces for each one of those agent models? The only thing we need to do to implement the Portkey portion is tell it to look for the Portkey gateway and create these headers; then, just like we always do with OpenAI or with LangGraph and our LLMs, we pass all of that configuration in, so we don't actually change the call we make to the LLM.

Once we have all of that set up, we'll take a look at our specialized agents. Each one of these agents is what we talked about: product, pricing, shipping, and so on. We create our ReAct agent (again, these are pre-built agents), pass in our tools, pass in our models like we just saw, give it a name, and then we have our prompt, which we actually load from Portkey. We'll take a quick look at what the tools look like. There are different tools for things like product information (what kinds of products do you have) and availability, so we can check whether we have a quantity of a smartwatch, or what the quantity of a smartphone is. All of these tools get passed to different agents so that those specific agents can be the most explicit and specialized in getting this information.

Lastly, if we go to this load-prompt portion, this is actually where we go out and call Portkey so that we can specifically call the prompt render API. I did it this way, which is maybe a little more complicated, because I wanted a fallback: if I called Portkey and there was some error, or maybe I put in a template incorrectly or was in the middle of changing it, I could fall back to constants holding these prompts. But since those constants are constant, I can't change any variables; someone would need to come in and manually put information in. So I want to do that dynamically. Right here, all we're doing is checking whether the prompt ID exists. If it does, great, we continue on and render the prompt right here. Again, the cool thing is that this is just an API: we have our prompt ID, which we get from an environment variable mapped up here, and then we go through and generate the prompts. When you pass the variables in, they get injected; you get a response, and from that response you get your prompt right here, or data messages. If there's the data prompt, which is what we're expecting in this instance, we return the prompt to be used in our LangGraph.
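A minimal sketch of a loader like the one just described, assuming the Portkey Node SDK's `prompts.render` endpoint (documented at the Prompt Render API link above); the fallback text, environment variable names, and template variables are illustrative, and the response-shape handling follows the video's description of checking for a rendered prompt or messages:

```typescript
import Portkey from "portkey-ai";

const portkey = new Portkey({ apiKey: process.env.PORTKEY_API_KEY ?? "" });

// Hard-coded fallback (illustrative) used when no prompt ID is configured
// or the render call fails, so the agent still boots.
const FALLBACK_SUPERVISOR_PROMPT =
  "You are a customer support supervisor. Route questions to the right agent.";

async function loadPrompt(
  promptId: string | undefined,
  variables: Record<string, string>
): Promise<string> {
  if (!promptId) return FALLBACK_SUPERVISOR_PROMPT;
  try {
    // Render the template stored in Portkey, injecting the variables.
    const rendered = await portkey.prompts.render({
      promptID: promptId,
      variables,
    });
    // Depending on how the template is defined, the rendered payload may
    // carry a single prompt string or a list of chat messages.
    const data = rendered.data as {
      prompt?: string;
      messages?: { content: string }[];
    };
    if (data?.prompt) return data.prompt;
    if (data?.messages?.length) {
      return data.messages.map((m) => m.content).join("\n");
    }
    return FALLBACK_SUPERVISOR_PROMPT;
  } catch {
    return FALLBACK_SUPERVISOR_PROMPT;
  }
}

// Usage: the prompt ID comes from an environment variable, as in the video,
// and the returned string is passed as the agent's prompt.
const supervisorPrompt = await loadPrompt(process.env.SUPERVISOR_PROMPT_ID, {
  company: "Acme", // illustrative template variable
});
```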
So let's take a look at what this actually looks like in Portkey. If we go over here and look at our supervisor prompt, we can see everything: someone can log in and make changes, and once they make a change we can update; it'll even give us a little description automatically, and all of our prompts are versioned, so you can see a history of your prompts. I also like to organize them so that we have ones for supervisor, pricing, and product. You can change the foundational models if you want, but as you saw in the code, we can overwrite those, or you can make this part of a configuration, which, again, is one of the reasons I really like Portkey.

This then allows us to take these prompts and look at our logs of what's coming back through the system. When we go in, we see that the system object was "[object Object]" (that was JavaScript), but we come in and see the tool execution that's actually happening, as well as the information getting passed. Here's the assistant, so we can see all of the information being discussed here in our logs. The other cool thing is that it takes the tools we defined in our code and shows them in the request details of how we're executing through this, and, again, it gives us our final response. We're seeing the different calls being made, specifically to our agent.

All right everyone, that's it for us today. What we went through was a Node.js implementation of a LangGraph agent, and we built out the prompt management solution using Portkey. With that, happy nerding!

---
*Generated for LLM consumption from nerding.io video library*