# How to Configure Vapi AI Voice Bot Functions With Next.js

## Metadata
- **Published:** 5/2/2024
- **Duration:** 12 minutes
- **YouTube URL:** https://youtube.com/watch?v=q9eZZA4VCNM
- **Channel:** nerding.io

## Description
Discover the potential of AI voice bots with Vapi and Next.js! In this video, we'll guide you through the process of integrating Vapi's AI voice bot functions into your Next.js applications. You'll learn how to:
- Set up a Vapi Next.js example
- Learn about display and webhook functions
- Create a transcription download button

Unlock a new level of user engagement and interactivity by harnessing the combined power of Vapi and Next.js.

Course: https://forms.gle/PN3RpKD8eHsicL9Z7
News & Resources: https://sendfox.com/nerdingio
Book a Call: https://calendar.app.google/M1iU6X2x18metzDeA

Chapters
00:00 Introduction
00:29 Dashboard
01:21 Demo
03:07 Setup
05:28 Code
09:09 Transcription
10:57 Final

Links
https://vapi.ai/
https://docs.vapi.ai/resources
https://github.com/VapiAI/web
https://github.com/VapiAI/client-side-example-javascript-next

Let's Connect
https://everefficient.ai
https://nerding.io
https://twitter.com/nerding_io
https://www.linkedin.com/in/jdfiscus/
https://www.linkedin.com/company/ever-efficient-ai/

## Key Highlights

### 1. Vapi: Voice Bot with Developer-Friendly Integrations
Vapi provides an easy-to-integrate voice bot solution tailored for developers, with support for ecosystems such as Next.js, so AI voice bot applications can be built quickly.

### 2. Functions Enable Dynamic UI Updates
Vapi uses functions to change the user interface dynamically based on the conversation flow. Specific functions trigger webhooks, and the display updates based on the responses.

### 3. Flexible Model Support with OpenAI Compatibility
Vapi offers the flexibility to switch between different AI models, including any model compatible with the OpenAI chat completion API, allowing developers to choose the best model for their needs.

### 4. Real-Time Transcription and Post-Call Automation
Vapi provides real-time transcription and the ability to extract messages and build downloadable transcripts, enabling post-call automation and analysis.

### 5. Customizable Assistants via Dashboard or JSON
Vapi assistants can be built either through the dashboard interface or by defining them as JSON objects, providing flexibility in configuration and management.

## Summary

**1. Executive Summary:**
This video provides a comprehensive overview of Vapi, an AI voice bot platform, and demonstrates its integration with Next.js for creating interactive and dynamic voice applications. It covers setting up Vapi's Next.js example, leveraging functions for dynamic UI updates, and generating transcript downloads.

**2. Main Topics Covered:**
* **Introduction to Vapi:** Overview of the platform and its capabilities as a developer-friendly AI voice bot.
* **Vapi Dashboard Walkthrough:** Quick tour of the Vapi dashboard for creating and testing voice assistants.
* **Next.js Integration:** Setting up the local Next.js example provided by Vapi.
* **Functions and Webhooks:** How Vapi uses functions and webhooks to update the UI dynamically based on conversation flow.
* **Model Flexibility:** Switching between different AI models, including OpenAI-compatible ones.
* **Transcription and Automation:** Generating real-time transcripts and adding a download button, opening the door to post-call automation.
* **Assistant Configuration:** Building assistants via the dashboard or JSON objects.

**3. Key Takeaways:**
* **Vapi simplifies AI voice bot development:** Offers a developer-friendly approach with integrations such as Next.js.
* **Functions enable dynamic UI:** Functions and webhooks make it possible to update the user interface during a voice interaction.
* **OpenAI compatibility:** Supports a wide range of AI models, including any compatible with the OpenAI chat completion API.
* **Transcription opens automation possibilities:** Real-time transcription and download capability enable post-call automation and data analysis.
* **Flexible configuration:** Assistants can be configured via the dashboard or programmatically through JSON definitions.

**4. Notable Quotes or Examples:**
* "[Vapi is] basically a voice bot that integrates really well so it's perfect for developers to go ahead and put together an AI voice bot application"
* "We can actually use different providers... or other providers, but we can also use OpenAI compatible models which is really interesting."
* "[Functions allow you] just like the assistant API to actually do a call out to something else."
* "We can now change the display based on the functions that are going back and forth from our call."
* Demonstration of booking Broadway show tickets using a voice assistant built with Vapi and Next.js.

**5. Target Audience:**
* Web developers interested in integrating AI voice bot functionality into their Next.js applications.
* Developers exploring voice-based UI and conversational AI.
* Individuals looking for a developer-friendly AI voice bot platform.
* Individuals interested in creating automated transcriptions.
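The JSON-style assistant definition described above can be sketched as a plain object. This is a rough illustration only: the field names (`systemPrompt`, `suggestShows`, the parameter shapes) are assumptions modeled on the OpenAI function-calling schema and on what the video describes, not the exact Vapi API surface.

```typescript
// Hypothetical sketch of the "Paula" Broadway assistant defined as a JSON object.
// Field names are illustrative; consult the Vapi docs for the real schema.
const paulaAssistant = {
  name: "Paula",
  model: {
    provider: "openai", // could be swapped for any OpenAI-compatible provider
    model: "gpt-3.5-turbo",
    systemPrompt:
      "You are Paula, a friendly assistant who helps users book Broadway show tickets.",
    // Functions work like the OpenAI assistant API: when the model invokes one,
    // Vapi fires a webhook back to our application so the UI can react.
    functions: [
      {
        name: "suggestShows",
        description: "Suggest Broadway shows for a given location and date.",
        parameters: {
          type: "object",
          properties: {
            location: { type: "string" },
            date: { type: "string" },
          },
        },
      },
    ],
  },
};

console.log(paulaAssistant.model.functions[0].name);
```

The same assistant could equally be created through the Vapi dashboard; defining it in code keeps the configuration versioned alongside the Next.js app.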
## Full Transcript

hey everyone welcome to nerding IO I'm JD and today we're going to be looking at Vapi which is basically a voice bot that integrates really well so it's perfect for developers to go ahead and put together an AI voice bot application it can also integrate into Twilio we're going to be looking at how it can do functions as well and dive into some of the examples as well as probably put together a transcript so with that let's go ahead and get started all right so we're going to be looking at Vapi which is a voice bot and the first thing I want to point out is just a quick walkthrough of their dashboard where you can actually sign up and get started with a voice bot pretty quickly all you have to do is click to create a new assistant and you can even test right here in the browser but we're actually going to go through setting up a local example of this and so they have a Next.js template that you can get through their resources so they have all these different ecosystems that you can leverage we're going to look at the Next.js one which we've already cloned down and we'll go ahead and get through a demo here so the cool thing about this is it has functions so I'm going to go straight through the demo and then talk about what's happening hi I'm Paula welcome to Broadway shows how are you feeling today great that's wonderful to hear are you interested in seeing a Broadway show if so do you have a specific location and date in mind or are you open to suggestions I'd like to see Lion King the Lion King is a fantastic choice it's a visually stunning show with incredible music and performances do you have a specific date in mind for the show or would you like me to suggest some dates for you suggest one for me great choice The Lion King on December 15th 2023 sounds perfect how many tickets would you like to book for the show two your tickets for The Lion King on December 15th 2023 have been successfully
booked enjoy the show awesome so what you saw there was it was actually going through multiple different functions and changing the display of what we're trying to see you can also see here on the right that our transcription was happening in real time so what we're going to do is take a look at this transcription and see how we can actually get this information and then how they're actually using the functions to change the display itself and to do that we're going to take a look at our install of their Next.js example which all you have to do is clone and we'll go ahead and get started all right so the first thing to note is how we actually build this Paula assistant there's two different ways to do it you can do it through the dashboard that we looked at or you can actually build a JSON object defining what this agent is doing so the first thing is we have our name we have our model we're going to be using our provider of OpenAI we are going to use the GPT-3.5 model and we have a prompt the other thing that we should note here is that this provider can be switched so we can actually use different providers you know different types like Mistral or other providers but we can also use OpenAI compatible models which is really interesting so anything that uses the same API schema as the chat completion API we can actually use inside of Vapi which is super powerful the other piece is this functions piece this allows you just like the assistant API to actually do a call out to something else and what's happening is this is calling a webhook to our system so it's basically saying when I hear or when I acknowledge this function I'm going to go ahead and do a webhook back to my application and this is one of the most powerful pieces of where the displays are changing so it's basically saying okay I want this function then this is the description and
here are the parameters that I want to send I want to send the string and I want to send the date and then on the webhook back it's looking for this particular function and we'll go into this a bit more real quick everyone if you haven't already please remember to like and subscribe it helps more than you know we also have an upcoming course on white labeling custom GPTs we're going to have a section on there specifically around voice so please sign up for that down below with that let's get back to it so if we look at this structure basically what we have is the component we have our assistant which is basically just our display and our assistant button and then we have varying different data and hooks so the data is just like the shows itself so this could be like a database the hook is all the events that are available to us so if we start looking through this we have the ability to set our messages we're looking for speech on start and end and we can actually take these events and do things with them so you'll notice right here we have our set messages and we'll come back to this a little bit later but what's also interesting and really cool about Vapi is that you can use multiple different models so the fact is that you can use an OpenAI compatible API so you can use OpenAI you could use something from DeepInfra you could use Mistral anything that's compatible with that same type of schema in order to do this functionality then you have this concept of functions which means that as this is operating we are able to send information through a webhook to a function call and then actually have that function call operate so you can see here it's saying oh okay if our function call is a suggested show then I know that my response is this result and then based on this result we can actually send that information back to the front end and the way we would send it back to the front end is if we look at our display
we're actually taking into account that when the function type again is being called here then we're going to say oh okay we want to change our display so we have our set status to the show we're sending the message of here is a list of the suggested shows and then we're looking to display what that is so if we were going to display the shows that's our list of all the different types of shows their image and the price and title then once we select our show again we'll be sent a webhook and change our function and it'll notice okay now that I've heard of the show I want to look at confirm tickets as my function name and what should I be doing here and then if I've booked the tickets I'm actually going through the process of this display happening again and we're just changing confirm to ticket so to me that's really awesome because what's happening is we can now change the display based on the functions that are going back and forth from our call we can also use the functions or events at the very end in order to do some sort of summarization and so that's what we're going to do now is we're actually going to go into the hook and change how we would take this message and make some other automation so we can look at the messages maybe give a transcript download for the user or build out an automation after the fact where we could say on voice call end go ahead and download the implementation all right so to build our transcription what we're going to do is we're actually going to pull out of the useVapi hook that we were just looking at and actually get the messages so now we have our messages that's being loaded from our hook and then we're actually going to build out this button so if we look at this button what we're doing is we're actually just creating an element we're taking a blob and we are writing everything that we can see based on the role the transcript itself so the message and then we're
making sure that we're only filtering is this actually a transcript message from our object and then we're just going to download that transcript in txt format again this is just in case somebody wanted to actually have a copy of what they built or we could even take this transcript and send it through some sort of automation after the fact so what we're going to do now is we're just going to take a quick look at the demo so if we go ahead and save this then when we go back we're going to have our download transcript button we're going to go ahead and start hi I'm Paula welcome to Broadway shows how are you feeling today great that's wonderful to hear are you interested in seeing a Broadway show if so do you have a specific location and date in mind yeah I want to see Aladdin in New York great choice you've selected Aladdin in New York do you have a specific so again we were able to go through that video but now we want to actually download our transcript and in order to see this what we did is we just clicked our button it's pulling the messages it knows everything in the object and then is just writing that to a file and this is just defining our role again we could actually take this transcript based on the fact that we're using these kind of attributes or almost like tokens and associating who's speaking and what the transcript itself is so if we were actually logging user information we could take that information and log it in our transcript the cool thing about all of this is since it's in the hook itself we can pull this information directly from these events right and so we are setting this on the message update we know what our active transcript is our active transcript message which is basically our current message and then we're actually taking all the messages and we're piping them into an array all right everyone that's it for us today if you haven't already please remember to like and
subscribe today what we covered was specifically a tool called Vapi which allows you to build a voice bot and has a bunch of different integrations like React Native Next.js etc we also built a way to take that information from the call and create a transcript which we could obviously automate in the future with that happy nerding

---

*Generated for LLM consumption from nerding.io video library*
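The transcript-download step walked through in the video (filter the hook's messages down to transcript events, write them to a blob, and download as txt) can be sketched as a small helper. The message shape here (`type`, `role`, `transcript` fields) and the helper names are assumptions based on the video's description, not verified against the Vapi SDK.

```typescript
// Minimal assumed shape of a message event coming out of the Vapi hook.
type VapiMessage = { type: string; role?: string; transcript?: string };

// Keep only transcript events and render them as "role: text" lines,
// so who is speaking survives into the downloaded file.
function buildTranscript(messages: VapiMessage[]): string {
  return messages
    .filter((m) => m.type === "transcript" && m.transcript)
    .map((m) => `${m.role ?? "unknown"}: ${m.transcript}`)
    .join("\n");
}

// Browser global, typed loosely so this sketch compiles outside the DOM.
declare const document: any;

// Browser-only: wrap the text in a Blob and trigger a .txt download,
// as the video's download button does.
function downloadTranscript(messages: VapiMessage[]): void {
  const blob = new Blob([buildTranscript(messages)], { type: "text/plain" });
  const url = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = url;
  a.download = "transcript.txt";
  a.click();
  URL.revokeObjectURL(url);
}
```

Because the formatting logic is a pure function over the messages array, the same output could just as easily be piped into a post-call automation instead of a file download.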