# Get Started with HumeAI's EVI Next.js Starter Kit

## Metadata

- **Published:** 7/22/2024
- **Duration:** 10 minutes
- **YouTube URL:** https://youtube.com/watch?v=0WCx93I3Pas
- **Channel:** nerding.io

## Description

Kickstart your journey with emotion AI using HumeAI's EVI Next.js Starter Kit! This tutorial provides a comprehensive guide to setting up and using the starter kit to build emotionally intelligent applications with ease.

📌 Key Highlights:
- Introduction to HumeAI's EVI
- Setting up the Next.js starter kit
- Step-by-step code walkthrough
- Best practices for emotional AI integration

Don't forget to like, comment, and subscribe for more cutting-edge tech tutorials!

📰 News & Resources: https://sendfox.com/nerdingio

📞 Book a Call: https://calendar.app.google/M1iU6X2x18metzDeA

🎥 Chapters
00:00 Introduction
00:35 Dashboard
01:57 Starter
04:18 Code
07:25 Demo

🔗 Links
https://www.hume.ai/
Repo: https://github.com/HumeAI/hume-evi-next-js-starter

⤵️ Let's Connect
https://everefficient.ai
https://nerding.io
https://twitter.com/nerding_io
https://www.linkedin.com/in/jdfiscus/
https://www.linkedin.com/company/ever-efficient-ai/

## Key Highlights

### 1. HumeAI Overview: Empathetic AI Toolkit

HumeAI offers three core components: Empathetic Voice Interface, Expression Measurement, and Custom Model creation, all aimed at understanding and responding to human emotion.

### 2. Next.js Starter: Easy Setup & Clean Code

The Next.js starter kit simplifies integration with HumeAI's API. Its clean code, context usage, and pre-built functions (like fetching access tokens) enable rapid prototyping and development.

### 3. WebSocket Communication for Real-time Emotion Analysis

The demo utilizes WebSockets for real-time communication, allowing developers to analyze audio input and receive emotion scores directly, as shown in the network tab analysis.

### 4. API Accessibility & Excellent Documentation

HumeAI boasts a user-friendly API with well-organized documentation and practical examples, making it easy to start building applications that leverage emotional intelligence.

### 5. Interrupt Handling & Advanced Features Preview

The starter kit includes built-in features like interrupt handling for voice interaction. Future videos will delve into custom modeling and function calling within the API.

## Summary

**1. Executive Summary:**

This video provides a concise introduction to HumeAI's Empathetic Voice Interface (EVI) and demonstrates how to quickly build emotionally intelligent applications using the provided Next.js starter kit. It covers setup, a code walkthrough, and highlights the ease of integration and the real-time emotion analysis capabilities of the platform.

**2. Main Topics Covered:**

* **Introduction to HumeAI and EVI:** Overview of HumeAI's core components: Empathetic Voice Interface, Expression Measurement, and Custom Model creation. Emphasis on understanding and responding to human emotion.
* **Setting up the Next.js Starter Kit:** Cloning the repository, obtaining API keys from the HumeAI dashboard, and running the application.
* **Code Walkthrough:** Examination of key code components, including API key management, fetching access tokens, WebSocket implementation, and context usage.
* **Demo and Functionality:** Demonstrating the real-time emotion analysis capabilities through voice interaction and visualizing the data flow via WebSockets.
* **Interrupt Handling and Future Features:** Highlighting the starter kit's interrupt handling capabilities and mentioning future exploration of custom modeling and function calling within the API.

**3. Key Takeaways:**

* HumeAI offers a powerful toolkit for building applications with emotional intelligence.
* The Next.js starter kit simplifies the integration of HumeAI's EVI, allowing for rapid prototyping and development (see the token sketch at the end of this summary).
* Real-time emotion analysis is achieved through WebSocket communication, providing immediate feedback and insights.
* HumeAI's API is user-friendly, well-documented, and provides excellent examples for developers.
* The starter kit includes features like interrupt handling for an enhanced user experience.
* The Next.js starter code is clean, well-structured, and makes extensive use of React Context.

**4. Notable Quotes or Examples:**

* "(HumeAI) has three different parts: it has an empathetic voice interface, an expression measurement, and then it even has the ability to create custom models..." - Defining HumeAI's core offerings.
* "The fetch access token is already a function written for you, so you just need your API key and your client secret and it'll actually generate an access token..." - Describing the starter kit's ease of use.
* "As you can see, we're not getting fetch calls coming across; we're actually opening up a WebSocket..." - Explaining the real-time data transmission.
* "I really liked how everything is mapped correctly and defined, so each one of these emotions gets its own color..." - Highlighting the well-structured and organized code.

**5. Target Audience:**

* Developers interested in building applications with emotion AI capabilities.
* Individuals looking for an introduction to HumeAI's EVI and its features.
* Developers familiar with Next.js and React.
* AI enthusiasts seeking tools for understanding and responding to human emotion in digital interactions.
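As a concrete illustration of the token flow the video describes, here is a minimal server-side sketch. It assumes the pre-built `fetchAccessToken` helper the video mentions (exported by the `hume` npm package) and hypothetical `HUME_API_KEY` / `HUME_SECRET_KEY` environment variables; exact names and the signature may differ from the current starter.

```typescript
// Minimal sketch of the access-token flow described in the video.
// `fetchAccessToken` is the pre-built helper the starter provides;
// the env var names here are illustrative assumptions.
import { fetchAccessToken } from "hume";

export async function getHumeAccessToken(): Promise<string | null> {
  // The API key and client secret come from the Hume dashboard (see the
  // video's dashboard chapter) and should live in server-side env vars.
  const accessToken = await fetchAccessToken({
    apiKey: String(process.env.HUME_API_KEY),
    secretKey: String(process.env.HUME_SECRET_KEY),
  });

  // The token behaves like a short-lived session credential that the
  // client passes to the voice provider when opening the WebSocket.
  return accessToken ?? null;
}
```

The design point the video calls out is that this runs server-side, so the API key and client secret never reach the browser; only the derived access token does.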
## Full Transcript

Hey everyone, welcome to nerding IO. I'm JD, and today we're going to be talking about Hume AI. Hume AI is pretty interesting because it has three different parts: it has an empathetic voice interface, an expression measurement, and then it even has the ability to create custom models. So we're going to take this video as kind of an intro video, and then we're going to go through some more in-depth features in some upcoming videos. With that, let's go ahead and get started.

All right, so the first thing we're going to do is just look at the dashboard once you log into Hume. They have a few different things going on: some pretty awesome demos with the iOS app, the empathetic voice demo, and even this chatter, which is an interactive podcast experience — pretty slick. But the core pieces that make up Hume are the empathetic voice interface (we're actually going to go through this demo), the measure expressions — you can see here it can actually look at you through your webcam and identify some expressions, or even through your voice — and then they also have custom models, which I thought was really awesome. Each one of these has its own playground and its own ability to actually interact with the API.

So if we look in our dashboard, we can get our API keys right here, and in the documentation you can see that they have information for the expression measurement API, the custom models, and then the empathetic voice interface. Super well documented — they've got a lot of good examples too.

So today we're going to go through the Next.js starter. It's pretty easy to get up and running, and they actually even have a demo of it, so we're going to go through the demo really quick and take a look at it. The first thing is we can just start our call, but we're actually going to open up our network tab just so we can see what's going on with the calls as we go through this.

"Hey, how's it going?" "Not too bad, thanks. How about you?" "It's going well. I'm recording a YouTube video talking about you and trying to understand the expression interface." "Ah, that sounds fascinating. I'm intrigued that you're exploring the expression interface." "It's pretty cool tech, right? What I'm finding most interesting is the custom models and also the podcast experience." "Awesome, I'm glad you're finding it cool."

So as you can see, we're not getting fetch calls coming across; we're actually opening up a WebSocket, and that WebSocket is sending information back and forth. You can actually click on the WebSocket and see the messages going back and forth — what the audio output is, as well as the binary messages that are coming back. And I think this is actually giving our message queue as well. Yep, so this was the last part of me sending the message: it's defining which models, and it looks like it's showing all the different emotions here, which is really cool. So we saw that graph going back and forth, and these look like the scores that are actually being presented in each one of these messages.

With that, we're going to jump right in, pull this code down, and get started. So let's go back to the Next.js starter; we're just going to grab our code, and we'll go ahead and get that up and running real quick. Everyone, if you haven't already, please remember to like and subscribe — it helps more than you know. With that, let's get back to it.
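The WebSocket traffic inspected in the network tab can be reproduced with a plain browser WebSocket client. The endpoint URL and message shapes below are assumptions based on what the demo shows (per-message emotion scores, binary audio frames), not a verbatim copy of the starter's code:

```typescript
// Sketch of the WebSocket session observed in the network tab.
// Endpoint and message field names are assumptions for illustration.
const accessToken = "<token from the server-side helper>";
const socket = new WebSocket(
  `wss://api.hume.ai/v0/evi/chat?access_token=${accessToken}`
);

socket.addEventListener("message", (event: MessageEvent) => {
  // Audio output arrives as binary/blob frames; skip those here and
  // only parse the JSON transcript messages.
  if (typeof event.data !== "string") return;
  const msg = JSON.parse(event.data);

  if (msg.type === "user_message" || msg.type === "assistant_message") {
    // Each transcript message carries per-emotion scores, as graphed
    // in the demo UI.
    const scores: Record<string, number> = msg.models?.prosody?.scores ?? {};
    // The UI shows the top three rated emotions for each message.
    const top3 = Object.entries(scores)
      .sort(([, a], [, b]) => b - a)
      .slice(0, 3);
    console.log(msg.message?.content, top3);
  }
});
```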
Now that we've got our code — I've already installed this, but just go ahead and do a git clone. The first thing you're going to have to do is go get those API keys, which, remember, are back in the dashboard: just click on API keys and you can grab them there. Then once you put those in, you can just do an npm start. So we'll go ahead and get our terminal started.

But the thing I wanted to show you is that when you're looking at this, the code is incredibly clean. If you come in here to the page, there's this chat component, but look at the utils: if we go in here, the fetch access token is already a function written for you, so you just need your API key and your client secret and it'll actually generate an access token — similar to a session, you can think of it. And then when you go into the chat, it's importing the components, we're doing the dynamic import with SSR set to false, and we'll just grab this over here: we have a provider which we're passing the authentication, and then on message we're going to be doing a timeout, we are defining what those messages are, and then we have controls to go with it, as well as our start call.

So when we look at it, everything is using context. We have the ability to connect and see what our status is — we even have the little phone icon — but it's just very clean how well it's written. I really liked how everything is mapped correctly and defined, so each one of these emotions gets its own color. All that to say — and we're kind of bouncing around — there's really only a handful of components in here, and you're using the context for basically everything. Right here you're defining what all of this is, coming out of the use voice context, so there's the ability to toggle. And let's just go ahead and see it in action again — the mic FFT, right? Let's take a look at that. And again, all the emotion SVG information and auto sizing — I just thought it was a really cool demo. So there's not a ton of code that we really need to go through to get this up and running. There are some more advanced features that we'll go through in a later video, around things like function calling, where we'll actually go deep into the API.

But let's just get this demo running again, and we'll have it running locally now. So we're going to pull up our network tab, we're going to watch the same thing — we know we're doing a WebSocket — and we're going to say hello. "Hey there." Don't worry, I'm going to go ahead and mute this so we can actually see more information. So we can see the information that's coming back: we've got all our emotions from here — apparently awkwardness, cool. And let's take a look again at our messages. Here we have "what's on your mind today" — that was the most recent — and we've got our emotion scores, and it looks like it's automatically pulling the top three rated ones. Remember, these are the color codes that we were seeing back here in — was it expression colors? Like I said, everything's just defined really cleanly. But basically we're getting our audio, which is our data — it looks like it's coming back as a blob of some kind — and then our message queue.

So what I really liked about this is that the API is so easy to use. It's easy to get up and running in Next.js, and they have, like I said, the SDK and the context in here. It's just really well written and really well documented.
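The provider-plus-context pattern the walkthrough keeps returning to can be sketched roughly like this, assuming the `VoiceProvider` component and `useVoice` hook from the `@humeai/voice-react` package the starter is built on (prop and field names may differ from the current release):

```tsx
// Sketch of the provider + context pattern walked through in the video.
// Assumes VoiceProvider/useVoice from @humeai/voice-react; exact props
// and returned fields may differ from the current release.
"use client";
import { VoiceProvider, useVoice } from "@humeai/voice-react";

function Controls() {
  // Everything comes out of the voice context: connection status,
  // connect/disconnect, the mic FFT data for the visualizer, etc.
  const { status, connect, disconnect, micFft } = useVoice();

  return status.value === "connected" ? (
    <button onClick={() => void disconnect()}>
      End call ({micFft.length} FFT bins)
    </button>
  ) : (
    <button onClick={() => void connect()}>Start call</button>
  );
}

export default function Chat({ accessToken }: { accessToken: string }) {
  return (
    // The provider receives the auth token and manages the WebSocket.
    <VoiceProvider auth={{ type: "accessToken", value: accessToken }}>
      <Controls />
    </VoiceProvider>
  );
}
```

Because everything hangs off the context, the handful of components (controls, messages, the mic FFT visualizer) stay small — which is the cleanliness the video praises.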
You can get up and running in no time. And to play with it, you have the ability to talk back and forth right out of the box. That's one of the hardest things to code — the interrupt sequence where, if you're talking or they're talking, you need to break the connection or the stream — so this is a really good example of how that flows.

And yeah, the other one you should definitely check out is the chatter. So we're just going to ask: "Can you tell me more about your Chatter app?" Uh oh, the connection closed. Okay — whatever. All right, that's it for us today, everyone. What we went through was some of their demos, we looked at their dashboard, and then we even did a local example of using Next.js with the empathetic voice interface API. In future videos we're going to go through some of the custom modeling and even look at some function calling. With that, happy nerding!

---

*Generated for LLM consumption from nerding.io video library*