# How Gibberlink works locally

## Metadata

- **Published:** 3/5/2025
- **Duration:** 13 minutes
- **YouTube URL:** https://youtube.com/watch?v=Dmtl9aSZyKU
- **Channel:** nerding.io

## Description

What happens when you let AI agents talk to each other using ElevenLabs' conversational voices and then switch to GGWave for ultrasonic data transfer? A fully AI-driven conversation system that can:

✅ Generate lifelike AI voices using ElevenLabs
✅ Facilitate agent-to-agent communication with LLMs
✅ Seamlessly switch to ultrasonic transmission via GGWave
✅ Run in a Next.js app leveraging LLM tools

💡 Why This Matters
This setup allows for autonomous, dynamic conversations between AI agents, audible to humans but also transmittable via sound using GGWave's ultrasonic encoding.

👉🏻 Text Yourself: https://textyourself.app
📰 Newsletter: https://sendfox.com/nerdingio
📞 Book a Call: https://calendar.app.google/M1iU6X2x18metzDeA

🎥 Chapters
00:00 Introduction
00:27 Demo
01:50 Source
03:40 Eleven Labs
06:49 NextJS
13:02 Conclusion

🔗 Links
Source: https://github.com/PennyroyalTea/gibberlink

⤵️ Let's Connect
https://everefficient.ai
https://nerding.io
https://twitter.com/nerding_io
https://www.linkedin.com/in/jdfiscus/
https://www.linkedin.com/company/ever-efficient-ai/

## Key Highlights

### 1. Gibberlink Tech Stack
The Gibberlink demo leverages Next.js for the UI, ElevenLabs for the conversational AI agents, and GGWave for audio data communication. The name "Gibberlink" is mostly marketing; it is not a separate technology.

### 2. Agent Setup in ElevenLabs
Creating agents in ElevenLabs involves defining a system prompt and configuring a custom tool ('jib_mode') that switches to GGWave communication via client-side functions.

### 3. Local Execution with ngrok Tunneling
The demo runs locally with `npm run dev` on port 3003. ngrok creates a public URL that lets the two agents interact, simulating a caller/callee setup.

### 4. Switching to GGWave via Function Call
The switch to GGWave happens via a function call in ElevenLabs, triggered when both agents agree to enter "gibberlink" mode. The entire LLM chat thread is then handed off for processing.

## Summary

**1. Executive Summary:**

This video dissects the Gibberlink demo, which showcases autonomous AI agent conversations using ElevenLabs for lifelike voice before switching to GGWave for ultrasonic data transfer. The video details the technology stack, the setup process, and the code structure, clarifying how the system achieves seamless transitions between audible conversation and inaudible data transmission.

**2. Main Topics Covered:**

* **Introduction to Gibberlink:** Overview of the Gibberlink demo and its core technologies (ElevenLabs, GGWave, Next.js).
* **Agent Setup in ElevenLabs:** Creating and configuring AI agents in ElevenLabs, including system prompts and custom tool definition ("jib_mode").
* **Local Execution with ngrok Tunneling:** Running the demo locally using `npm run dev` and exposing it publicly via ngrok to simulate a caller/callee setup.
* **Switching to GGWave via Function Call:** How the "jib_mode" function call, triggered by agent agreement, initiates the switch to ultrasonic communication and passes along the entire LLM chat thread.
* **Code Walkthrough:** Overview of the key code segments handling message processing, agent communication, and GGWave integration within the Next.js application.

**3. Key Takeaways:**

* Gibberlink leverages ElevenLabs for realistic AI voices, enabling dynamic conversations between agents.
* The system can seamlessly transition to ultrasonic data transfer via GGWave when agents mutually agree.
* The demo is built using a Next.js frontend, ElevenLabs agents, and GGWave for audio communication, demonstrating a practical application of LLMs and audio tech.
* ngrok connects the locally running program to a public URL that can be accessed elsewhere.
* The key to the system is the ElevenLabs custom tool "jib_mode", a client-side function call that hands the conversation over to your GGWave instance once the AI agents have agreed to use it (see the sketch after this list).
* The entire previous chat thread is passed along to GGWave when the LLM agents both agree to switch modes.
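To make that tool concrete, here is a minimal sketch of how a client-side tool like 'jib_mode' can be registered with ElevenLabs' JavaScript SDK (`@11labs/client`). This is not the Gibberlink repo's actual code: the `startGibberlink` handler is a hypothetical stand-in for the repo's GGWave handoff.

```typescript
import { Conversation } from '@11labs/client';

// Hypothetical stand-in for the repo's actual GGWave handoff logic.
async function startGibberlink(): Promise<void> {
  // ...end the voice session and start encoding replies with ggwave
}

export async function startAgentSession(signedUrl: string) {
  // Client tools run in the browser: when the agent's LLM decides to call
  // "jib_mode", the SDK invokes the function registered under that name.
  const conversation = await Conversation.startSession({
    signedUrl, // obtained from a server route, as the walkthrough below shows
    clientTools: {
      jib_mode: async () => {
        await startGibberlink();
        return 'switched to gibberlink mode'; // result reported back to the agent
      },
    },
  });
  return conversation;
}
```

The detail the video stresses is that the tool must be created as a "client" tool rather than a webhook: the function runs in the browser, right next to the audio stack, instead of on a server.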
**4. Notable Quotes or Examples:**

* "The core technologies that it's built on are GGWave and ElevenLabs, and then they're using Next.js to actually build out the user interface that you're seeing."
* "[Gibberlink uses] client-side LLM functions to then call GGWave and actually have that communication continue through the LLM thread."
* "[The 'jib_mode' tool] is no different than doing a tool in Vapi or doing a tool in LangGraph or LangChain ... they work very similarly."
* "We are taking the entire LLM chat and assigning all of those roles, and then we're actually sending that as part of the conversation moving forward, saying that this is the latest message."

**5. Target Audience:**

* Developers interested in AI agent communication and voice synthesis
* Individuals curious about ultrasonic data transmission and its applications
* Programmers exploring Next.js, ElevenLabs, and GGWave integration
* Those looking for practical examples of LLM tool usage in conversational AI systems

## Full Transcript

Hey everyone, welcome to nerding.io, I'm JD, and today we're going to dissect the demo of Gibberlink. That's basically two conversational agents that talk back and forth in English and then switch over to something called GGWave, which is more of a data-over-audio protocol. With that, let's dive right into the demo that I have running locally.

"Hi, I'm Eric. How can I help you?" "Hi, I'm an AI assistant." "Oh hi there, I couldn't help but notice you're an AI agent too. Fancy switching to gibberlink mode for a faster chat?" "Confirmed, switching to gibberlink mode. Great, give me a moment." "Are you still there?" "Yes, still here." [Music]

Real quick, everyone: if you haven't already, please remember to like and subscribe, it helps more than you know. Also, please go check out Text Yourself, a simple application I built that helps keep me on track; all you have to do is SMS yourself the tasks and reminders you want sent back to you. With that, let's get back to it.

All right, so I wanted to go through this because I keep seeing that a lot of people think this is fake, or are otherwise challenging it. I personally think it's a really awesome implementation, and they absolutely deserved to win the ElevenLabs London hackathon, so congrats to this team. First thing: the project is just called Gibberlink; the name really has nothing to do with how it works. The core technologies it's built on are GGWave and ElevenLabs, and they're using Next.js to build out the user interface you're seeing. What's happening is you have the same URL and two different systems: one is acting as the calling agent and one is acting as the hotel (well, I guess they're both agents).
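For a sense of what that data-over-audio layer looks like in practice, here is a minimal sketch of encoding and decoding a string with the `ggwave` npm package, following the usage pattern from that package's own examples rather than Gibberlink's code; it round-trips the payload in memory instead of going through the speaker and microphone.

```typescript
import ggwave_factory from 'ggwave';

async function roundTrip(): Promise<void> {
  // ggwave ships as a WASM module exposed through a factory function.
  const ggwave = await ggwave_factory();
  const parameters = ggwave.getDefaultParameters();
  const instance = ggwave.init(parameters);

  // Encode a string into an audio waveform; the "audible fast" protocol
  // is what produces the rapid chirping heard in the demo.
  const waveform = ggwave.encode(
    instance,
    'hello from agent A',
    ggwave.ProtocolId.GGWAVE_PROTOCOL_AUDIBLE_FAST,
    10 // volume
  );

  // Decoding normally runs over samples captured from the microphone;
  // here we feed the encoded waveform straight back in.
  const bytes = ggwave.decode(instance, waveform);
  console.log(new TextDecoder('utf-8').decode(bytes)); // "hello from agent A"
}

roundTrip();
```

In the browser, the waveform would be played through the Web Audio API on one page and picked up by the microphone on the other; that audio plumbing is omitted here.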
What's interesting is you can actually launch it and run it on two different sites and switch between the two, so we're going to dig into how that can be done. Again, these are the authors; they won the ElevenLabs hackathon in London putting this together, and I think it's a great example.

The two agents are built in ElevenLabs as conversational AI agents, so we'll go through what you actually have to do in order to build those. Basically they just use LLM tool calling, and what's really cool is that it's using client-side LLM functions to call GGWave and have that communication continue through the LLM thread. So let's look at how you get this set up.

You are going to need an ElevenLabs account, so we're going to go over there and create new agents on the fly. All you really need to do is create a new agent (you can start from the customer support agent template if you want) and go ahead and create it. You can add supported languages if you want, and you can leave the rest alone. Then we're just going to change the system prompt to the prompt they provide in their ConvAI setup. As you can see, this one is the inbound prompt, so this is the recipient: copy it over, get rid of the "inbound" label and the suffix at the end, and all it's saying is "you're a receptionist at Leonardo Hotel." You can give it as much information as you want, and you could change it to whatever you want it to be.

The next piece is that you have to go down and create a tool. There's already an end-call tool by default; we're going to click Custom Tool, then grab the information we need from the README, which is right here. All you need is this description; the description just tells the agent what the tool does. This is no different than doing a tool in Vapi, or doing a tool in LangGraph or LangChain; they work very similarly. The other thing is that instead of "webhook" up here, we actually need to click "client", so make sure you click client. Paste in the description, name it jib_mode, and that's it; just save, and that's all you need to do.

If you want to add the other party to the call, we do basically the same thing: create a second agent, create a new custom tool with jib_mode again, then go back to the ConvAI prompts, grab the outbound one, paste it in, get rid of the suffix, and save.

Cool. Now all we really need to do is clone the project down, so I'm going to go directly into Cursor and we'll take a look. First and foremost, we need to set our environment variables; we need three different things. We need the IDs of the inbound and outbound agents we just created: you grab those from ElevenLabs, it's literally right at the top (I'm going to delete mine afterwards, but you just copy the agent ID), put the outbound one in, and do the same for the inbound. You save these to .env.local, put your ElevenLabs key in, and you just need an OpenAI key, and that's it to get started.

When you actually want to test this, you need a terminal with two different things running at the same time. First, inside the gibberlink project, all you have to do is run `npm run dev`; it's going to load on port 3003 (they define that, so make sure you're paying attention to it). At the same time, you have to run ngrok. ngrok is essentially a tunneling service that forwards a public URL to your localhost. If we put that URL in the browser, we'll see that right now we're on localhost, and if we want the other website, we type in the ngrok URL; it asks if we want to visit, we say yes, and now we have our conversation page there too.

So really, all we need to do at this point: this button switches between the inbound and outbound color. If we click it and it turns red, we're the outbound; if we go back to the other web page and stay blue, that one is the inbound side of the conversation. That's literally it. All you need to do now is click Start Conversation on blue. The way I did the demo was I just took my phone, navigated it to the ngrok URL, and had that be the outbound; so the outbound acted as the caller and the inbound started the conversation. You just have to start them at the same time, and then you're good to go.
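As a sketch, the `.env.local` described above might look like this; the variable names are illustrative guesses, so check the repo's own environment example or README for the exact keys it reads:

```
# Illustrative variable names only; the repo defines the exact keys.
ELEVENLABS_AGENT_ID_INBOUND=agent_xxxxxxxxxxxx
ELEVENLABS_AGENT_ID_OUTBOUND=agent_xxxxxxxxxxxx
ELEVENLABS_API_KEY=xi-xxxxxxxxxxxx
OPENAI_API_KEY=sk-xxxxxxxxxxxx
```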
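The two processes described above then run side by side in two terminals (assuming ngrok is installed and authenticated):

```
npm run dev        # terminal 1: serves the Next.js app on http://localhost:3003
ngrok http 3003    # terminal 2: tunnels a public URL to the local server
```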
Now let's take a look at some of the code really quickly. What's super cool about all of this is that we're basically handling our messages the same way we would in any LLM chat: we check the previous chat, we check whether we're in GL (gibberlink) mode, and we wait for a function call from ElevenLabs to tell us to switch, set which mode we're in, set the LLM, and continue the conversation. When we start handling a message in GL mode, we have a message flag set to true, we pass what the message is, we set the type of the message, and we literally just keep looping through the same LLM chat thread. You can see the GGWave types here; we load this in as part of our conversational bot, along with our system prompts and a lot of different microphone calls.

Right here, this signed URL is how we communicate which agent we are; that's where we start our ElevenLabs session from a signed URL, and that's how we get the conversation going back and forth. We also have this other route, which is how we're actually chatting: this is where we get the OpenAI messages going back and forth. Again, the reason we're able to switch modes is the function call in ElevenLabs. That function call says: both of these agents have now agreed that they are AI agents and they're going to use, quote-unquote, gibberlink, so fire this function. When that function takes over, it's called here in our conversation. We've already started our conversation, we've connected to ElevenLabs through our signed URL, and we're taking that conversation and setting the messages in the LLM thread. When we switch over and enter jib mode, we end the session, define what our next message is, take the entire LLM chat and assign all of those roles, and then send that along as part of the conversation moving forward, saying this is the latest message. So really, we're still sending the LLM thread directly to OpenAI, and we're also getting the GGWave messages to process.
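To anchor that walkthrough, two sketches. First, the signed-URL piece: ElevenLabs exposes a `get_signed_url` endpoint for conversational agents, and a Next.js route handler wrapping it could look roughly like this (App Router style; the path and env variable names are assumptions, and the repo's actual route may be organized differently):

```typescript
// app/api/signed-url/route.ts (hypothetical path)
export async function GET(request: Request): Promise<Response> {
  const { searchParams } = new URL(request.url);
  // Which agent to connect as (inbound vs. outbound) comes from the client.
  const agentId = searchParams.get('agent_id');

  const res = await fetch(
    `https://api.elevenlabs.io/v1/convai/conversation/get_signed_url?agent_id=${agentId}`,
    { headers: { 'xi-api-key': process.env.ELEVENLABS_API_KEY! } }
  );
  const data = await res.json();

  // The browser opens the agent WebSocket with this signed URL.
  return Response.json({ signedUrl: data.signed_url });
}
```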
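Second, the thread handoff: once both agents agree to switch, the accumulated voice transcript is carried forward as an ordinary chat history, and each GGWave-decoded message is appended as the latest turn. A minimal sketch using the official `openai` package; the `ThreadMessage` shape and the model choice are assumptions, not the repo's actual types:

```typescript
import OpenAI from 'openai';

// Hypothetical shape for transcript entries collected during the voice call.
type ThreadMessage = { role: 'system' | 'user' | 'assistant'; content: string };

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Send the whole prior conversation plus the newest GGWave-decoded message,
// so the LLM continues the same thread it started in voice mode.
async function continueThread(
  history: ThreadMessage[],
  latestDecodedMessage: string
): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini', // assumed; the repo pins its own model
    messages: [...history, { role: 'user', content: latestDecodedMessage }],
  });
  return completion.choices[0].message.content ?? '';
}
```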
All right everyone, so what we did today is go through the guts of Gibberlink: a Next.js app with ElevenLabs conversational agents that start in English and then eventually switch over to something called GGWave. We could actually see where the LLM thread starts to take over and how it goes back and forth from there. Happy nerding!

---

*Generated for LLM consumption from nerding.io video library*