bouncer

ProgrammingPercy · 16.7K views · 577 likes

Analysis Summary

10% Minimal Influence
mild · moderate · severe

“This video is a straightforward technical guide; be aware that the architectural benefits of microservices are presented as strictly superior to monoliths without a deep discussion of the operational complexity they introduce.”

Transparency: Transparent
Human Detected: 95%

Signals

The video features a consistent, natural human voice with personal opinions and spontaneous speech markers that are absent in AI narration. The content is a highly technical, long-form tutorial with live coding and custom diagrams, which is characteristic of human expertise rather than automated content farming.

Natural Speech Patterns The transcript contains natural filler phrases ('kinda', 'super great', 'you know'), informal contractions, and slight grammatical inconsistencies typical of spontaneous speech.
Personal Voice and Anecdotes The narrator identifies as a 'go fanatic' and references personal preferences for building software, showing a distinct personality.
Live Demonstration Context The narrator uses deictic expressions like 'I've drawn here' and 'I'm gonna call mine', indicating a live, human-led walkthrough of a technical process.
Channel Authenticity The channel is linked to a specific developer (Percy Bolmer) with a matching blog and 'Buy Me a Coffee' link, consistent with independent human creators.

Worth Noting

Positive elements

  • This video provides an exceptionally detailed, hands-on guide to RabbitMQ security, CLI management, and Go integration that is rare in free content.

Be Aware

Cautionary elements

  • The creator's 'fanaticism' for Go is stated upfront, but viewers should remember that RabbitMQ is language-agnostic and the patterns shown apply to other ecosystems.

About this analysis

Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed: March 13, 2026 at 16:08 UTC · Model: google/gemini-3-flash-preview-20251217 · Prompt Pack: bouncer_influence_analyzer 2026-03-11a · App Version: 0.1.0


Transcript

hi, welcome to Programming Percy. Today we will be talking about one of my favorite ways of building software: event-driven architecture. The difference between an event-driven architecture and a regular monolith is that with a monolith you usually have one entry point to the software; you start it up, it runs, and components trigger each other internally. The negative part is that this kinda brings a coupling to the stuff that's going on, and you have to redeploy the whole monolith. With event-driven architecture you instead leverage microservices, and these microservices push information to each other using events. RabbitMQ is a great way of delivering these events to and between the microservices. You can see an example I've drawn here: we have a user service which sends out a customer-registered event, which is routed by the exchange (we will cover what an exchange is soon) to all the services. The email service and the analytics service both receive this event because they have said that they are interested in it. This is a super great way to build software because, as you can imagine, the software becomes super flexible: each of these parts is deployed separately. You deploy the user service and you deploy the email service, and they have no connection between each other whatsoever. If the user service were to go down, that wouldn't affect the email service, for instance; the email service wouldn't receive any events, so there would be a coupling that way, but you could easily replace the email service without affecting any other parts. Another thing that's great: if you want to shadow-deploy something and just try it out, you know, you can deploy a new service listening to the customer-registered event, trigger an action whenever that happens, and try it out. In this tutorial we will be covering RabbitMQ. We will be building two microservices which communicate using events, and we will take a look at different paradigms and ways of
using RabbitMQ. We will focus mostly on RabbitMQ, so if you're here to learn RabbitMQ, great, you're in the right place. We will be building the microservices using Go because I'm a go fanatic, but you don't have to know Go to follow along, and what you learn here you should be able to apply in whatever language you want. RabbitMQ supports multiple protocols for sending data; we will be focusing on AMQP in this tutorial, which is one of the network protocols it uses. Over the course of this tutorial we will learn a lot: how to set up RabbitMQ using Docker; about virtual hosts, users, and permissions inside RabbitMQ; how to manage RabbitMQ using the command-line tool called rabbitmqctl; and we will also be using rabbitmqadmin to manage other things inside RabbitMQ. We will learn what producers and consumers are and how to use them, learn about queues, exchanges, and bindings, try out a few work queues, try publish-and-subscribe schemas, try out an RPC-based pattern using callbacks, encrypt the traffic with TLS, and use configurations to declare resources in RabbitMQ. That's a lot, so we better get started. The easiest way to start a RabbitMQ instance is of course using Docker; as always, Docker is the easiest way. So let's go ahead and run an instance of RabbitMQ. I'm gonna call mine rabbitmq. I'm going to use -d so that it runs in the background. We need to expose two ports: 5672 is the AMQP port for connections, so you will need to bind that port to your host machine. Let's also go ahead and bind a second port, which is the same port but with 15 in front, 15672, and this is the port used by the admin and management UI for RabbitMQ. And let's go ahead and use the latest available RabbitMQ image, which is rabbitmq:3.11 in my case, with the -management tag. Let's go ahead and run that. After we've run it, we should verify that everything is working
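The run command as narrated, reconstructed as a sketch (container name, ports, and image tag are taken from the narration; adjust to your setup, and note a Docker daemon must be running):

```shell
# Run RabbitMQ detached, with the AMQP port (5672) and the
# management UI port (15672) bound to the host.
docker run -d --name rabbitmq \
  -p 5672:5672 \
  -p 15672:15672 \
  rabbitmq:3.11-management
```

The `-management` image variant ships with the web UI plugin enabled, which is why the narration uses it.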
as expected. I do recommend that you open up a browser, go to localhost on port 15672, and you should see what I see here. You can go ahead and log in using guest/guest — that's the username guest and the password guest — and you will log into this really fancy UI. You can see a logout button up in the corner; I'm just going to zoom in, we have a logout button up here, so I'm just going to go ahead and log out again. RabbitMQ comes with a guest user pre-installed. We do not want to use this user, and we want to remove it, but before we do that we're going to add our own user. Whenever you want to work with RabbitMQ from a terminal, you use a command-line tool called rabbitmqctl. You can install this on your computer, but the easiest way, since we already have RabbitMQ running in Docker, is to execute the command-line tool from that container. So let's go ahead and do that: docker exec, then the name of your Docker container, which we named rabbitmq, followed by the command, which is rabbitmqctl. If I run that, you can see it prints a bunch of information about what we can do with this command-line tool; mostly it's used to manage and interact with the RabbitMQ server. In our case we're going to add a user, which is as simple as running the add_user command. I'm zooming so you can see everything: we run add_user, we enter the username and then the password. Make sure to use a secure password; I'm gonna use "secret" because it's a secret, and that's super secret. So we have created our first user for RabbitMQ. Users are usually used to limit and manage what permissions each user has, and as you can see it also tells us not to forget to grant permissions to this user, because right now we have added a user but we can't really do anything with said user, because he has no permissions. Let's go ahead and make sure that our new user is an administrator. So go ahead
again: docker exec rabbitmq, and I'm gonna run rabbitmqctl with the set_user_tags command. If you run it, you can see it expects some arguments: the username and then the tag to set. Let's go ahead and tag our user percy as administrator — voilà, percy is now an administrator. We are also going to want to delete the guest user, which is present by default. Remember that if you don't remove that user, or configure it away, your RabbitMQ server will have a guest/guest user. Let's go ahead: docker exec rabbitmq rabbitmqctl delete_user, and as simple as that I'm going to delete the guest user. Once that's deleted we can go back to the UI, enter percy and my secret password, and it should be working. If you've come this far, you have created your first user, but we're not gonna play around in the UI before we have changed some things. In the UI you can see everything you need to see: how many channels there are, the queues, the exchanges, and stuff like that. You can also click on the admin panel, and you can see we have one admin and it's percy, so that's great. There's one thing we have to cover: in RabbitMQ you have your resources — channels, exchanges, etc., which we will discuss soon — and these resources are contained in something called virtual hosts. A virtual host is sort of a namespace: you use virtual hosts to limit and separate resources in a logical way, and it's called virtual because it's done in the logical layer; it's a soft restriction between what resources can reach what resources. If you go up here to the corner you can see virtual hosts, and by default there's a "/" virtual host, which is the global one. We don't want to operate in that one; we want to create our own virtual host. As I said, you use virtual hosts to group certain resources together and restrict
access on those virtual hosts. Let's go ahead and create our virtual host. Back to the terminal, and we're gonna execute the same kind of command: rabbitmqctl again, but this time, instead of add_user, we're going to do add_vhost, short for virtual host. For our case we're going to add a virtual host called customers. The customers virtual host will hold all of our future resources related to anything working on customers. Once we have our virtual host, we need to make sure that we have permissions to operate on it. If I go back to the drawing board: right now we have a user called percy, and we have a virtual restriction called a virtual host, which is called customers. Our user percy wants to communicate with the resources inside the virtual host; however, we are not allowed — we don't have the permissions — so we need to go ahead and make that possible. Back to the terminal: docker exec rabbitmq rabbitmqctl set_permissions. We do -p to tell which virtual host to apply these rules to, and we're going to set permissions on customers; then you need to specify the name of the user. Now, what follows here is a little bit of regex. When we specify permissions we need to specify three different permissions: the configure access, so what the user is allowed to configure; the write permissions, basically what resources the user is allowed to write to; and the read permissions, what resources percy is allowed to read. The way we do it is by using a regex pattern, which is very common in RabbitMQ. So if you want to allow percy to only configure resources starting with, say, customer, we have to do a regex like this: customer.* — and this would allow percy to configure every resource inside the customers virtual host beginning with that name. I hope that makes sense. So if we want to
allow percy to configure all the resources inside this virtual host, we would do .*, and that would match anything. Just to make it clear: say we have a queue in here named, for some reason, details. If I want to allow the user percy to configure details, and maybe to write to details, but only to read resources matching customers.* — these permissions would allow the percy user to configure details and to write to details, but not to read from it, because the read pattern doesn't match the name. I hope this makes sense. So back to the command: we need to specify these three permissions in order. I'm just going to give percy permission to kinda write to everything, so enter the permissions like this — .* three times — so that we have the configure, write, and read permissions set. Go ahead and execute that, and it sets the permissions for percy on the virtual host customers. So now I'm allowed to read, configure, and write everything inside that virtual host, which is great; that is what we want. If you go back to the RabbitMQ UI, we should be able to refresh and see up here in the corner that we have our customers vhost; we can click on that, and we are now only viewing data related to the customers virtual host. So I think it's time to finally start building something, and before we do I just want to make a quick note about all the resources we will be using and what they are, because it's really important to understand each component in RabbitMQ. This is a small flowchart I've drawn of RabbitMQ, and the first component is producers. A producer is any piece of software that is sending messages; anyone who is sending a message is known as a producer. They produce messages. Also,
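These permission strings are regular expressions evaluated against resource names. A quick pure-Go sketch of the idea — my own illustration, not RabbitMQ code, and it assumes whole-name matching for simplicity (see the RabbitMQ access-control docs for the exact matching rules):

```go
package main

import (
	"fmt"
	"regexp"
)

// allowed mimics a RabbitMQ-style permission check: the pattern
// is treated as a regex that must match the entire resource name.
func allowed(pattern, resource string) bool {
	re := regexp.MustCompile("^(?:" + pattern + ")$")
	return re.MatchString(resource)
}

func main() {
	fmt.Println(allowed("customer.*", "customer_details")) // true: name starts with "customer"
	fmt.Println(allowed("customer.*", "orders"))           // false: no match
	fmt.Println(allowed(".*", "anything_at_all"))          // true: .* matches everything
}
```

This is why `.* .* .*` in the narration grants percy configure, write, and read access to every resource in the vhost.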
we have consumers: a consumer is any piece of software that is receiving those messages. So how do they receive the messages? There are two pieces we need to understand. First, the exchange. Producers send their messages to an exchange; think of the exchange as a broker, or a router. The exchange knows which queues are bound to it. To bind something to an exchange we use something called a binding: a binding is basically a rule, or set of rules, for whether a queue should receive the messages. So the exchange is bound to a queue by a set of rules. A producer sends a message, say on the topic customers_registered; the exchange will know if a certain queue is bound on that topic and send it further along. It's really important to understand: you don't send messages to the queue, you send messages to the exchange, which then routes the messages where they should go. The queue is basically a buffer for messages; it's usually first in, first out, so it's a FIFO queue: messages come in and go out in the correct order, so if message one is received, it will be the first message to go out whenever we have a consumer who wants to consume messages. We will be looking more at bindings very shortly, but basically this is what you need to know and the terminology we will use. I think I'm gonna go ahead and start up a Go project so we can start coding. I'm just going to open up my terminal, clear it, and initialize a small folder structure. I'll create the project setup first. I'm really used to using Cobra for my commands, and I'm going to roughly follow the structure that Cobra uses for this project. I'm inside a new folder here for the event-driven architecture project. I'm going to go ahead and create a folder called cmd for the commands, create the producer, and also create an internal folder for our internal libraries. Now we're going to create two
files. Inside cmd/producer I'm going to create a main file for the producer program, and I'm also going to create a rabbitmq file inside internal. The idea here is that the cmd folder will hold the different commands we can execute; in our case we will have a microservice client known as the producer, which will simply shoot out a few messages onto an exchange, and the internal folder will hold any internal code shared among the services. Let's go ahead and initialize a module: I'm going to name it programmingpercy.tech/eventdrivenrabbit — we do like event-driven rabbits. I'm also going to go get the rabbitmq/amqp091-go package, which is the officially supported client that the RabbitMQ team maintains, so I'd recommend using that. Now you should have a folder structure looking like this: we have cmd for our commands and internal for shared libraries. We are going to start by creating the RabbitMQ code that connects to the Docker instance that is running, so I'm gonna go ahead and open up the rabbitmq file that we created. In here we will create a small helper function that connects to RabbitMQ using AMQP, so we will need to specify the credentials, the host, and the virtual host to connect to. Let's go ahead and create this code. We're going to be operating in the internal package, and we're going to need to import a few things: context, fmt, and the amqp package which we downloaded before. Let's go ahead and create a new type; I'm going to call it RabbitClient. Basically this will be a wrapper around the official amqp client, which we can use to add a little bit of functionality. It will hold a connection to amqp, and we're also going to hold a channel. Now, this is the connection used by the client, so we need to discuss connections and channels. A connection in RabbitMQ is a TCP connection, and a good rule of thumb
is that you should reuse the connection across your whole application, and spawn new channels for every concurrent task that is running. So what's a channel? A channel is a multiplexed connection over the TCP connection: think of it as a separate connection, but one that uses the TCP connection we already set up. So the amqp connection is the TCP connection; the channel is a multiplexed sub-connection, you could call it, and that's a really good rule of thumb to just follow. Let's add a comment here that the channel is used to process and send messages. The first thing we need to do is connect to RabbitMQ, so let's create a function called ConnectRabbitMQ. It will accept a username, a password, the host, and the virtual host name, and we're going to be returning an amqp connection and, as always, an error as well. To connect we can use amqp.Dial, which accepts a URL formatted in a specific way, so let's go ahead and do fmt.Sprintf: the format is the protocol, which is amqp, then the username, a colon, the password, an @, the host, followed by a slash and the virtual host. So let's insert the username, password, host, and virtual host as input. The output of Dial is a connection — a pointer to a connection — and an error, so let's just go ahead and return that. Next up: remember we said the connection should be reused and a channel should be spawned for each instance, and our program will often be running many clients, which is very common. So I'm going to create NewRabbitMQClient: this one will accept the connection that we received from the ConnectRabbitMQ function, and will return a RabbitClient and an error. What we will do here is take the connection and spawn a channel from it, and
this channel will be used for the created rabbit client. This allows us to reuse the connection between multiple RabbitMQ clients. So let's grab the channel and error that are returned when we create a channel: conn.Channel() returns the channel or an error, and if the error isn't nil, let's return an empty RabbitClient and the error. I won't be doing error wrapping and stuff here, because that's really not what we will be learning today. Then let's build the new client: it will hold a pointer to the connection, and it will also hold the channel, which we can reuse at will, and we return it. So we can connect to RabbitMQ and spawn a new client that holds a channel for us to reuse. And just to be good boys, let's add a function on the RabbitClient called Close, and it will only close the channel — only the channel, because we don't want to close the connection in case others are using the same connection; we only close the multiplexed part. It's going to be Close with a capital C, so we also expose the ability to close the channel for this particular client. So this is good. Let's jump to the producer. The first thing we will do is simply connect to RabbitMQ and see what's going on, just to know that the connection works. I'm going to create the main function, and we're going to call the internal package and connect to RabbitMQ. Now we need to insert the credentials, which are percy and, in my case, the password secret; we're hosting it on localhost:5672, and the virtual host is named customers. Basically we're only using the function that we just created; it will connect, and if the connection fails I will simply panic, and once we are done with the connection we will go ahead and close it. Let's go ahead
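A minimal sketch of the connection helpers described above. The amqp.Dial call itself requires the rabbitmq/amqp091-go package and a running broker, so it is shown only in comments; the directly runnable part is the URL format, with names (ConnectRabbitMQ, NewRabbitMQClient) taken from the narration:

```go
package main

import "fmt"

// buildAMQPURL formats the connection string that amqp.Dial expects:
// amqp://user:password@host/vhost
func buildAMQPURL(username, password, host, vhost string) string {
	return fmt.Sprintf("amqp://%s:%s@%s/%s", username, password, host, vhost)
}

// With github.com/rabbitmq/amqp091-go installed, the helper from the
// video would look roughly like:
//
//	func ConnectRabbitMQ(username, password, host, vhost string) (*amqp.Connection, error) {
//		return amqp.Dial(buildAMQPURL(username, password, host, vhost))
//	}
//
// and NewRabbitMQClient would call conn.Channel() and wrap the
// connection plus channel in a RabbitClient struct.

func main() {
	fmt.Println(buildAMQPURL("percy", "secret", "localhost:5672", "customers"))
	// → amqp://percy:secret@localhost:5672/customers
}
```

Keeping the URL construction in one place makes the reuse-the-connection rule easy to follow: dial once, then hand the single connection to every client that needs a channel.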
see if there are any errors and again panic, and defer client.Close() so we are closing the channel. Let's just sleep for, say, 10 seconds, then print the client just for fun. We also need to fix a few imports, for the time package and the log package. Bring up a terminal, and we should be able to run the producer and have a connection go up: go run cmd/producer/main.go. Running that, we will see that we imported context but aren't using it currently — my bad — so I'm just going to remove it, jump back to my producer, and rerun the program. We should see that it runs. We can open up the RabbitMQ UI, go to the overview, and we should see that we have one connection and one... oh, they disappeared, I was too slow. Let's run it again; maybe 10 seconds is a bit short. I'm just gonna run it again, jump back to the UI, and you should see that we have one channel and one connection open, so that's great. And I mean, I can't stress this enough: remember that you should recreate the channel for each concurrent task, but reuse the connection. Always have one connection for your particular service and spawn channels from that. The reason is that if you spawn connections instead, you will create so many TCP connections, and that does not scale very well. So it's time to start sending data, and this can be done on the channel that we will be using, and we will begin by creating a queue. Remember what the queue is: the FIFO buffer that we use. Let's go ahead and create a queue, and we can do that inside the code. If you go back to your code, where we have the channel, we can go to the channel and see there's a QueueDeclare function which will help us create a queue from the client. So there's a
few parameters that we need to understand when we create our queue. First of all, QueueDeclare accepts a name parameter; basically this is just a reference to the queue — you use it to specify what the queue is named — and in our case we will use something like customers. You also need to specify whether the queue should be durable: a durable queue is persisted whenever the broker restarts, so if you want your queue to survive a RabbitMQ restart, you need to set it to durable. We can also set the auto-delete parameter: a queue set to auto-delete will be deleted whenever the software that created it shuts down, so whenever the producer shuts down after the 10-second timeout, it would delete the queue. This is very common when you have dynamic queues being created by services you don't know of, where you don't want to clog everything up with a million queues you'd have to delete yourself; auto-delete is good for that. You can also set the exclusive flag, which makes the queue only available to the connection that created it, so if we only expect the queue to be used from this particular piece of software, we can set it to exclusive; nobody else will be able to use the queue. There's also no-wait, which assumes the queue is already created on the server, so if you expect it to already exist, for instance, you can set that to true. It's really important to understand these parameters, and they will be reused when we create messages and exchanges, so all these things will mean the same thing later in the tutorial. Instead of declaring this inline, let's make a small function to help us: I'm going to create a new function attached to our RabbitClient, and we will call it CreateQueue. Basically it's a simple wrapper around QueueDeclare, but we're not exposing our
channel to the users. So we're gonna make this simple function, and I'm not going to be allowing all the parameters: it will let the users set the name, durable, and auto-delete, and the rest will default to false for now, just to make it a little bit easier. Let's reach out to the channel and do a QueueDeclare: we pass in the name, the durable and auto-delete flags; exclusive is set to false because I want everyone to be able to reach this queue; and no-wait will also be set to false by default. Then we can simply return the error, and we have a nice little function in place. Now we need to go back to the producer, and once inside the producer we will create the queue. In here, after we have our client and have deferred its close, I'm going to go client.CreateQueue and create the customers queue, and I'm gonna set it as durable because I want it to survive; I am not going to make it auto-delete because I want it to be there forever, and I'm gonna panic if anything goes wrong. For fun we're also going to create a second queue, which we'll call customers_test, and this one won't be durable, and we will set it to auto-delete just so we can test that out. Again we will have it wait for 10 seconds and then log the client. So we should be able to bring up the terminal and execute again and... oh, my bad, I forgot one parameter inside CreateQueue. Inside QueueDeclare there's actually one final parameter I forgot about, I'm so sorry, and it's the arguments: this is a way of passing extra options — generic data which the server can use internally to manage how things work. We won't be going into that now, so I'm gonna leave it as nil. Once we have entered this, we will create our queues. Let's go
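The CreateQueue wrapper just described, sketched without the broker dependency. The QueueDeclare call needs the amqp091-go package, so it appears in comments; the runnable part models the declare flags and the defaults the video chooses (names follow the narration):

```go
package main

import "fmt"

// QueueOptions mirrors the QueueDeclare flags discussed above.
type QueueOptions struct {
	Name       string
	Durable    bool // survive broker restarts
	AutoDelete bool // delete when the declaring client shuts down
	Exclusive  bool // usable only by the creating connection
	NoWait     bool // don't wait for server confirmation
}

// newQueueOptions applies the video's defaults: only name, durable,
// and auto-delete are exposed; exclusive and no-wait stay false.
func newQueueOptions(name string, durable, autoDelete bool) QueueOptions {
	return QueueOptions{Name: name, Durable: durable, AutoDelete: autoDelete}
}

// With amqp091-go, the real wrapper would be roughly:
//
//	func (rc RabbitClient) CreateQueue(name string, durable, autoDelete bool) error {
//		_, err := rc.ch.QueueDeclare(name, durable, autoDelete, false, false, nil)
//		return err
//	}

func main() {
	fmt.Printf("%+v\n", newQueueOptions("customers", true, false))
}
```

Hiding the last three parameters behind the wrapper keeps call sites readable while still matching the QueueDeclare argument order (name, durable, autoDelete, exclusive, noWait, args).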
ahead and open the terminal. I'm going to clear here and run the producer again, then quickly jump to the UI: you should be able to go to Queues and see that we have the customers queue, and also the customers_test queue. You can see a few features here: this one is durable, so it will survive, and this one is auto-delete, so it will be removed whenever we disconnect. If you want to try this out, to get a better understanding, you can go back to the terminal and do docker restart rabbitmq, which will restart the RabbitMQ server, and if you've guessed right, when we go back to the UI and refresh, we will only see one of the queues: the one which was durable. It's super important to remember what durable and auto-delete do, because it will be really important when you design your architecture; if you want your messages to survive restarts or crashes or whatever happens, make them durable. We will cover this more when we look at how we deal with payloads as well; the same principle applies to them. Now you might be thinking: let's send some messages on that queue. But do you remember what we talked about earlier? You're not sending messages to queues; we send messages to exchanges. So we need to explore exchanges and bindings, two very important parts of RabbitMQ. Sadly, I wish I could just say that it's a router and that's it, but there are a few different exchange types, and I want to quickly cover them so we know what we have to work with. Exchanges are a vital part of RabbitMQ, and that's why we're focusing on them now. To start receiving or sending messages on a queue, you need to bind that queue to an exchange; this is called the binding, and the binding is basically a routing rule. One important thing is that a queue can be bound to multiple exchanges, which makes it even more important to understand what's going on; you can even have exchanges being bound
to exchanges. So whenever you send a message to the message queue, you attach a routing key, and the routing key — sometimes referred to as the topic — is used by the exchange. The direct exchange is the first and simplest exchange that exists: we have a producer who sends a message with a routing key called customer_created, and we have a queue called customer_created, so the message will go from the exchange to that queue because the routing key is an exact match. The customer_email queue won't get the message because the keys don't match; it is as simple as that. The second exchange type is a very nice one: it's called a fan-out, because the message is fanned out to every queue present. The fan-out ignores the routing key, so in this case the customer_created message is sent to the fan-out, and the fan-out sends it to customer_created but also to customer_email, because who gives a about the routing key. Whenever you need to broadcast messages to all the queues, no matter what, you use the fan-out. And then we have the one I like the most, the topic exchange. The topic exchange allows you to create routing keys that are delimited by a dot. In the example you can see that we send messages on customers.created.february, and the exchange will then send out the messages matching the rule customers.created.#, which will match any items on that topic. When using topics there are a few things you need to understand. The dot is really important, because when you subscribe to topics there are two very special characters. You see the hash character here: the hash (#) means zero or more matches, so for example customers.# will match here, because there are zero or more matching words after customers. But we also have the star (*): a binding like customers.*.february, where the star matches exactly one word in that dot segment. This
customers.*.february will also match our message, because we're listening to any event that occurred in February, not only created — maybe a customer deleted in February could also be possible. The topic exchange really allows you to create binding rules like this, and it's very common. You could for instance have customers.created.february, or — another very common pattern — maybe you want all the users from Stockholm, so you would publish to customers.created.stockholm, or maybe customers.created.europe.stockholm to narrow it down even further, because maybe customers from some countries have specific rules you need to apply — say, an email stating that they are now signed up. You can use the topic exchange to set up these very dynamic routes, and it even helps you know where the data came from.

Then, of course, there's the final type: the header exchange. When sending messages in RabbitMQ we are allowed to add headers, which are basically key-value fields, and you can route based on the headers, independent of the routing key. For instance, maybe all your messages set a user-agent header, and you consider Linux users to be scary — why are they using your system? — so you want to get notified about those. Each message carries the user-agent header, and you can listen for those messages. Very good. But let's stop talking about exchanges and create one instead.

To create exchanges we can use the rabbitmqadmin command-line tool. Previously we used rabbitmqctl, but there is also rabbitmqadmin. You can also create queues and exchanges inside your code — it kind of depends on what you like to do. I like to have my exchanges created beforehand so I know what exchanges exist, but you could do it in code as well. To get some practice with rabbitmqadmin, let's open up a terminal — let me make sure the terminal isn't so small — okay, let's do it.

We run docker exec rabbitmq rabbitmqadmin with the declare command, because we want to declare a resource — in this case an exchange. We specify the virtual host to use, which is customers, and then the type of exchange we want; in my case we will use topic for now. We set the user and the password, which authenticate us against RabbitMQ — you need to do this too. Let's also make it durable. Run that, and we see... right, we need to specify a name for the exchange, of course. Let's call it customer_events, as in the example. Run that again and you should see "exchange declared". Great — we have the exchange called customer_events where we can start pushing data.

However, we also need to set permissions, because right now the user we're using doesn't have permission to send data on the customers virtual host's new topics. Let's fix that: docker exec rabbitmq rabbitmqctl set_topic_permissions. This is a new command, used to set topic permissions. We set them on the virtual host customers, for the user percy, and for the exchange called customer_events. Then we specify the read and write permissions, the same way as before: we allow our user to read and write any messages whose topic matches customers.* — so basically anything sent under customers can be read and written. If the customer_events exchange carried billing information on some billing.* topic, the percy user wouldn't be able to listen to those, because we are only allowed topics matching customers.*.

All right, we have the exchange up and running. Let's look at how we can start publishing messages onto it. We need to go back to the RabbitMQ client, where we need to
create a new function that allows us to create bindings — because remember, we need to bind the queues we have created to the exchange. I'm going to create a new function called CreateBinding, and it's going to be a simple wrapper again: it accepts the name of the queue to bind, the binding key (the routing key), and the exchange name, and it simply returns an error. There is a function on the channel called QueueBind, so let's take a look at it: rc.ch.QueueBind takes a name, the routing key to use, the exchange, a noWait flag, and some extra arguments. Let's pass in the name, the binding key, and the exchange. We leave noWait as false, because noWait set to false makes the channel return an error if it fails to bind — and I like that. The final argument is extra options, which we don't really need. Let me also write a comment here — if you're wondering why I sometimes get warnings on this side, it's because I don't comment the code properly; I'm using the revive linter and it doesn't like uncommented code. They're not actually errors, but it's good that it points them out. So: CreateBinding binds the current channel to the given exchange using the routing key provided.

Great — now that we have this functionality, let's go back to the producer. After we have created our queues, we call CreateBinding: first for customers_created — the name is the name of the queue we want to bind; maybe I didn't say that. Then comes the binding key, or topic: let's use customers.created.*, as we had in the example. Then we provide the name of the exchange, which was customer_events. Check for errors and, again, simply panic if something goes wrong — we're not doing proper error handling here, because that's not really what we're trying to learn today. Let's copy that and create another binding for the test queue, and for that one use customers.* — everything under customers, maybe.

That's great — now we can try it, just to see that the binding works. Clear the terminal and go run the producer... and we're getting some errors. Did I mess up the name? Oh, my bad — inside the CreateBinding function we also need to make sure the input arguments are declared as strings. Rerun the program, and I'll jump to my UI: we can see the queues are here, and if we click on a queue we can actually see a binding. You can view the binding here — customer_events is bound by this routing key — and you can of course also unbind, or even create bindings, inside the UI. I like to do it in code — it's more apparent, it's more reusable — but when testing things the UI is really great for quickly adding a binding if you want to consume or listen in on the messages.

Now that we have the bindings — we have bound the queues to the exchange — we can start publishing messages. I will create a new wrapper function; I like wrapper functions, if you haven't noticed, just to make things easier to reuse and to reduce boilerplate. Let's call it Send, and this time we're actually going to pass in a context, so re-add that import. Whenever we send messages we have to provide the exchange and the routing key we want to send on, and also something called options, which we will look at soon — the options are really important, but let's leave them hanging for a bit. The exchange and the routing key will be strings, and for the options we accept a struct from the amqp package called Publishing. amqp.Publishing holds all the options you can apply
to the message you're sending. Then let's return an error if something goes wrong. To send messages we use the channel again — always the channel — so let's return rc.ch.PublishWithContext. I like to use PublishWithContext because it allows us to add timeouts to the sends, so we don't just leave the code hanging forever. Let's look at the input parameters PublishWithContext expects: the context, the exchange name, the routing key, mandatory, immediate, and the message. The first ones we know, so pass in the context, the exchange, and the routing key. Then comes mandatory: mandatory determines whether a failure should simply drop the message you're sending, or return an error. If you leave mandatory as false you won't get an error — it's fire and forget. If you set it to true, then if the message fails to reach the exchange or the queue, an error is returned. That's important to remember. Then there's the immediate flag, and you can just leave that as false — you will never use it with this package, because immediate was actually removed in RabbitMQ 3 and up. Unless you're using a very old version of RabbitMQ, it's deprecated; just leave it. The last parameter is the options, and the options are the actual message we're sending. If you look at PublishWithContext you'll see it returns an error, so we can simply return whatever we get back from the publish. A super, super simple wrapper function: it publishes payloads onto an exchange with the given routing key.

Now the interesting part — let's go back to the publisher. This is becoming one giant function, but bear with me; it's for learning purposes, and we won't build a nice structure around it. Let's publish one message on each queue, just to see a few differences. The reason we're doing this is that we want to discuss something called the delivery mode, and it's a really important parameter to understand if you want your messages to persist. If you send a message, no consumer consumes it, and your server restarts, that message will be deleted — unless it is a persistent message. Non-persistent messages are called transient. So why would you not want your messages to be persistent? Well, it's a matter of performance, and it's up to you to decide: if there's no reason for the event to matter after the server comes back up, there's no reason to persist it, and it should be transient to increase performance — making things durable in RabbitMQ carries overhead. Also remember: if you're sending persistent messages, your queue needs to be durable as well. There's no point sending persistent messages to a queue that isn't durable itself, because if the queue isn't there after a restart, then your message... yeah, you understand.

So let's send some messages. Begin by creating the context we will use: ctx, cancel := context.WithTimeout — a new context with five seconds, which should be a reasonable amount of time to wait — and after five seconds we cancel. Since we're using PublishWithContext, cancelling the context also cancels the publish. Then: if err := client.Send, passing the context, the exchange customer_events, and the routing key customers.created.us — let's have a US customer created. The options are the actual message we're sending, so let's create an amqp.Publishing — amqp dot Publishing — and build the actual message we are publishing. Now you need
to set the content type of the message — this could be JSON, binary, or plain text; in our case, let's just use plain text for now. We want to specify the delivery mode, as discussed: for this one the delivery mode should be persistent — you can see there is Transient and Persistent. Leave it as Persistent; we want this message to survive. Then there's the actual body, and the body is a byte slice — it has to be binary — so let's say "A cool message between services". Then check whether the error is non-nil and panic if it is, and make sure to close the statement properly. So we're sending a message onto the customer_events exchange with the key customers.created.us.

Let's send a transient message as well, so we can view the difference. Copy that block, add a comment — sending a transient message — and send it on the same exchange, but on customers.test instead; remember, we bound customers.*, so it should match this. Change the delivery mode from Persistent to Transient, and make the body "An uncool, undurable message", so we know which is which. Go to the terminal and execute the producer once again. If we visit the UI and go to the queues, we can actually see that both of them have a message. You can go inside and try it out if you want — just remember that if you acknowledge the message, the message will be gone. You can nack instead, fetch the message, and you should see that this was "A cool message".

Now, to show you the difference: remember that we sent one persistent message and one transient message. What we can do is run docker restart rabbitmq — whenever it restarts, we should see that one of the messages is actually gone when it comes back up.
Back in the UI, go to the customers_created queue: there is one message still ready, because we told it to be persistent. This is important to remember — if you want your messages to stay alive across restarts, make sure you set the delivery mode to persistent. If you only want the messages shot out, and don't care whether they're actually consumed, set them to transient.

So we know how to publish messages, but that's no good to us if we cannot consume them in another service. Let's go fix that. Inside the cmd folder I'm going to create a new folder called consumer, and consumer will hold a main.go file which we'll use to consume messages. Before we build the consumer, we need to create a second wrapper, and this wrapper will consume messages. Again, there are some parameters we need to learn. Let's call it Consume; it accepts the queue name and a consumer string — the consumer string is a unique identifier for the consumer of a queue, which we can use to say "this is this particular service", etc. We'll also accept a parameter called autoAck. Auto-acknowledge relates to how messages get acknowledged. The way it works in RabbitMQ is this: a producer sends a message to an exchange, and that message is sent via a queue to a client. The broker needs to know that the service that wanted the message actually received it, and RabbitMQ solves this with an acknowledgement sent back, saying "hey, I got the message" — then the broker knows it can drop the message. It's important to understand that we have to acknowledge messages, and autoAck makes our code, whenever it receives a message, automatically send an acknowledgement back. Auto-ack is a little bit tricky, though,
because it is amazing and it helps you, but sometimes it's tricky: if you have a consumer that acknowledges the message and then fails, that message is lost — the server has delivered it, so it has done its job. So you might not always want autoAck; sometimes you want to acknowledge the message manually, once your service is done processing it. The function returns a channel of amqp.Delivery, which is a struct, and also an error. Let's return rc.ch.Consume, which accepts the queue name, the consumer identity, and the autoAck parameter — remember to use that one the right way: if you have a service that does a lot of processing, might take some time, and can fail, don't auto-ack unless you're sure that's what you want. There's also the exclusive parameter, which can be true or false: if it's set to true, this will be the one and only consumer of that queue; if it's false, the server will distribute messages between consumers using a load-balancing technique. So if you have one consumer that should consume all the messages, set exclusive to true; in this example we'll just default it to false, and we won't even let the caller pass in anything else, because we don't want that. Next, noLocal is actually not supported in RabbitMQ — it's part of AMQP, but not RabbitMQ. The field is meant to avoid publishing and consuming from the same connection, so if you have two servers and only want one of them to publish or consume, you could set noLocal to true — but again, it's not supported in RabbitMQ; I just want you to know what it is. noWait, as before, won't wait for the server to confirm, and then the last parameter is extra arguments, which we leave as nil for now.

So we have our Consume wrapper — let's build the consumer. Jump back to the consumer, write package main and func main, and we want to connect the same way we do in the publisher. I'm just going to go to the publisher, scroll up, copy the connection code, and paste it into the consumer, because we'd be doing the same thing — and also import the packages we use. So: connect, create a client, and then start consuming messages. Let's do messageBus, err := client.Consume — oh sorry, I misspelled that — and we want to consume customers_created. We need to name this consumer: let's call it email-service, because this service will accept the messages and send emails. Let's not auto-acknowledge — leave that as false; we will look into it, as I explained earlier — and just panic if something goes wrong when starting the consumption.

For now, let's make sure we just consume forever. I'll create a blocking channel that accepts empty structs, then start a goroutine that runs in the background, consuming any message we get from the message bus. Let's just log it — print "New Message" and then print the message, like that. Then print a nice little text down here — "Consuming, to close the program press CTRL+C" — and block forever by reading from the blocking channel. Nothing ever sends on that channel, so the read will lock and block forever, which for now is actually great. Let's open up a terminal and run the consumer — remember, we should have a message waiting in the pipe, on the queue. So... oh, my bad, I entered the wrong queue name: I typed customers- when it should be customers_. Run it again, and we should see a message being consumed. That is great — we are consuming messages, and as you can see, there's a
lot of data being printed here, and not in a very nice format. What we can do is rerun the consumer: the same message will be received over and over, because we never acknowledge to the server that we consumed it. If RabbitMQ never receives an acknowledgement, it won't drop the message unless it expires — you can set the expiry time in one of the options. Since we never acknowledge the message, it simply stays there, forever.

We can do one interesting thing here: from the Consume wrapper we can jump into the amqp.Delivery struct and go visit it, because I like going into libraries to see what's going on and what options we have. As you can see there's a bunch of fields: the content type, the delivery mode, the correlation ID — which is really important to understand, because you can use it to understand how a message correlates to other messages that are sent — the exchange it came from, the routing key, and a bunch of other information about the message. You can also see the Acknowledger, and the Acknowledger is what's used to acknowledge messages.

Let's go back to the consumer, and let's acknowledge the message whenever we're done processing it. This is important to understand, and you'll want it when you have long-running processes, or anything you want retried in case it fails. I'll call msg.Ack — the multiple parameter would acknowledge many messages at the same time, but we'll set it to false for now. Check whether it returns an error, and if something goes wrong just log "acknowledge message failed" — we won't really care about the error for now, simply continue. After that, printf "Acknowledged message" and print the ID of the message, because that can be important for knowing what's going on. So we accept the message, and this time we actually acknowledge it — this means we will never see it again. Open up the terminal, close the program, run it, and you can see it acknowledged the message. Close it, rerun: this time we won't receive it; we have already acknowledged it.

I really wanted to do this to push the point that auto-acknowledge can be dangerous; it might have an effect you don't want. Let's run the producer once again inside this project, so it sends a message to the consumer, because this time I really want to highlight how dangerous autoAck is. Remove the acknowledgement from the code — I'll just comment out the block where we send the ack — change autoAck from false to true, and then make the handler panic. Run the consumer: we accepted the message, and we have a crash. Now, you might expect that if you run the code again the message would be redelivered — but the auto-acknowledgement was sent to the server as soon as we received the message. I hope I've really made it clear why auto-ack can be dangerous, but also very handy if used the correct way.

So what do you do when there's an actual failure and you want RabbitMQ to know about it? Instead of sending an ack, we can send a nack, which tells RabbitMQ that we failed to process the message. When using nack we can also tell RabbitMQ whether to retry and requeue the message — should RabbitMQ try sending it out again, or throw it away? Nacking is really important to understand and to be able to use. Jump back to the code: remove the autoAck again, and this time let's update the code so that we throw away every message the first time it arrives. It doesn't make much real-world sense, but for practice it's pretty good. The message actually has a field called Redelivered, which you can check to see whether the message is appearing for a second time.
So, instead of msg.Ack we can now call msg.Nack. Again, multiple is the same as for ack — handling several messages at once — and we don't need it, but you might want multiple if you have a high volume of traffic: what the client does then is buffer a few acknowledgements before sending them, which can help reduce the network load a little. The second parameter is requeue, and this is basically whether the server should retry sending the message out again — set that to true, and then continue. Down below, check the Redelivered field: if the message has been redelivered, call msg.Ack instead, and log "failed to ack message" if that errors. Also, notice that I told you earlier to print msg.MessageId, but we never set a message ID on anything, so it's kind of empty — just so you know. Let's see what the compiler is complaining about... a print line — fixed. If we run this now, each message is first nacked and then, on redelivery, acknowledged. So: run the consumer, run the producer so we have an event to actually send out, and then go run the consumer's main.go. We can see that we got the message, and the second time we print it we actually acknowledge it — the first time we skip it, the second time we acknowledge. This is how you manage your approvals and disapprovals in RabbitMQ — your acks and nacks, basically. Whenever your service fails, you can send a nack back to the RabbitMQ exchange, telling it "hey, you might want to retry this".

Right now our code is kind of messy, and everything is single-threaded: we can only receive one message at a time. I think we should implement a quick little fix to change that, so you can try some more advanced examples. I will be adding an error group to the code — you'll need a reasonably recent Go version to use error groups, and if you haven't used errgroup in Go you should really try it; they are amazing. They are available in the golang.org/x/sync/errgroup package. I'm going to re-update this code so that we use them instead, so let's remove all that old code.

First, create a context with a 15-second timeout per task — this is for long-running tasks. Basically what we're saying is: whatever task you're running, if 15 seconds have elapsed, we want to cancel it. We also create an error group, which comes from the errgroup package — it's under the x packages, but really, if you haven't tried it you should — using errgroup.WithContext to apply our context to it. Error groups let us run concurrent tasks. We'll set the amount of concurrency in Go right now, but you can also control this inside RabbitMQ, which we'll look at later. Set a limit of ten concurrent goroutines, then spawn a go func that loops through the messages, same as we did before — but instead of handling each message inline, we spawn a worker. We need to assign the message to a new variable to avoid overwriting it on each iteration; your IDE should be complaining if you don't do that. With an error group you call g.Go, which spawns a new goroutine, runs your function, and handles all the concurrency bookkeeping for you — really easy to use. Inside the worker, log.Printf the new message, because we want to see every message we get, and then sleep for ten seconds, so we're creating a long-running process. After the ten seconds, we acknowledge the message — this makes sure we have a process that runs for a long time.
Print the message, return the error, and if everything succeeds, print "Acknowledged message" — we can add the message ID again — and return nil. Let's see... we need to import the time package — or the time module, sorry. Also, I removed the blocking channel earlier, so re-add that, and here at the end re-add the print line — use CTRL+C to exit — then block forever. Run it to make sure it works: it's consuming, great. Hopefully this wasn't too advanced: we create the error group, we set a limit of ten messages at a time, and then we have a loop that runs forever, and for each message we receive we spawn a goroutine that waits ten seconds and then executes. This allows us to listen for multiple messages at the same time — and there's a reason we're implementing this.

Let's update the publisher as well, because the publisher is a little bit too friendly right now. Let's wrap the sending so we send more than one message: I'll create a small for loop that sends ten messages and then exits — wrap it, fix the indenting, add a comment ("wrap the publisher with a for loop that sends 10 messages"), just so we see a bit more of what's going on — and remove the time.Sleep, because we don't need it anymore. I want to showcase one little thing here, so I'm also going to add another terminal, and with everything in the same folder I'll run two consumers and then one publisher. So: go run cmd/consumer/main.go, start that one up, move up to the next terminal, start a second consumer — now I have two consumers running — and then run the producer in another terminal. We should see a lot of messages being printed, and after a while you can see that each of the services acknowledges five messages. Right now, it won't send all ten messages to every consumer, because we haven't set that up: what we have registered are what you'd call workers. We publish ten messages, and those ten messages are load-balanced across the listeners — the consumers that are registered. This is why we went through the hassle of changing all this: I wanted to show you exactly that.

And maybe you noticed — if I run the producer, the producer exits right away: it sends the messages and then exits. So what if we want the producer to actually wait until the server is done? We know the messages take ten seconds to complete, but the producer doesn't care: it sends the messages and forgets about them. Sometimes we want to wait and see what's going on in the producer, because if something failed, maybe the producer wants to do something about it. We can change that really easily. Go into client.Send: we are using PublishWithContext, but there's actually another function called PublishWithDeferredConfirmWithContext — yes, that's a long name. A deferred confirm allows the function to return an object with information about the message that was produced. We cannot simply return the call anymore, because now we're accepting a confirmation and an error. The message part stays the same, but down below we check if the error is non-nil and return it, and then we block on the confirmation: confirmation.Wait() actually blocks until we receive information from the server, and we return nil if everything goes fine. This won't wait until the work is completed, but it will at least wait until the server acknowledges that it has received the messages.
I hope this makes a little bit sense so now our Hub producers will actually wait for information about what's going on this function will always return nil on confirmation if the queue isn't set to something called confirm mode so we need to enable confirm mode Let's go ahead and do that so the way we enable confirm mode is that we go up to where we actually create the channel so going to new rabbitmq client where we spawn the channel and then let's put the channel in confirm mode so call the function confirm that's passing false because we don't want to wait which is again it's the no wait so same as before it will wait for the server but we don't want that so if we cannot put it into confirm mode in our case we will want to return an error because now we're using the Deferred send and we will want to wait for that so to see what's going on let's add a print line to the confirm let's then jump back to the terminals that we have and let's produce some messages oh my bad we haven't imported log I'm just going to go ahead and import log module I'm going to go back execute the producer and now you see each time the message is um confirmed then we print true because that's what happened now I want to be very clear this is not the same as when the consumer acknowledges the message this is when the server acknowledges that it has published the message on the exchange they are different but it's good to know and this way with the Deferred confirm we can actually make sure that the message we sent is actually sent and the server has accepted it so that's why you want to do with deferred confirm sometimes if it's really important that the message is actually produced so up until this point we have only been using fifo queues first in first out that's the reason why each consumer is receiving five messages each because they're load balanced on the exchange but what if you want all the consumers to receive all the messages what if you want a pub sub schema to be used for 
instance? In a publish and subscribe schema you want all the consumers to receive the same messages: if you're interested in customer registered or customer created events, you want all of those messages, not a load-balanced share of them. To do this we will use a fanout exchange. The fanout exchange, as explained before, skips everything regarding topics and routing keys; it simply pushes the messages out to every bound consumer. So whenever you want publish and subscribe, you use a fanout. Fanouts are also a great example of when creating queues inside the code is perfect, because you will have a dynamic set of consumers: each consumer should create its own queue, and the fanout exchange won't care — as long as there's a bound queue, it will receive the messages. Let's go ahead and do that. We'll start by deleting the current exchange, because it's the wrong type. Run docker exec rabbitmq rabbitmqadmin with the delete exchange command: pass in the name, which is customer_events, the virtual host, which is customers, and authenticate as the administrator user — so my user Percy with our password. So we delete the exchange customer_events on the virtual host customers. Once that's done we need to redeclare a new exchange. You cannot change the type of an existing exchange like customer_events — you have to delete it and redeclare it, because exchange types are immutable. So let's redeclare an exchange on the virtual host customers. The name will be the same as before — I don't want to edit all the code — but this time we use the type fanout. We authenticate as the user Percy, and we also make sure the exchange is durable: I want it to survive between
restarts this time. Again we need to update the permissions: docker exec rabbitmq rabbitmqctl set_topic_permissions — this is the same as before, using the virtual host customers, the user Percy, and the exchange customer_events, applying full permissions for everything. With the exchange created, we go back to the code, to the CreateQueue function, and make a few modifications. We won't know the name of the queue when we create unnamed queues, which is what we'll be doing now, so we need to make sure the QueueDeclare call actually returns a queue: accept the returned value, do some error checking, and if the error isn't nil return an empty amqp.Queue. Then return the queue that was created for us. We also need to change the function's output parameters: it now returns an amqp.Queue and an error — let me just make sure whether it's a pointer; no, it's a plain amqp.Queue, perfect. The reason we want the declared queue back is that queues declared without a name get a unique name generated by RabbitMQ: pass in a blank name and RabbitMQ makes up a name for the queue for us, which in this case is exactly what we want. Now let's jump to the producer, or the publisher — we need to update it because we've updated CreateQueue. One thing we need to change is who creates the queue: in publish and subscribe it's most likely the subscriber who creates the queue, because the publisher won't know what queues exist. So let's simply remove this from the publisher — it will no longer create the queues or the bindings — and it should look like this now. For the sake of it we can also remove the customer test event, because we
don't need that anymore — we know that works. So this is the new producer, much slimmer; we have removed a lot of code. Sadly, that code moves into the consumer. Back in the consumer we create the queue and accept an error: client.CreateQueue with an empty string for the name, because we don't care about the name — we'll be generating random names. You can use known names, but with publish and subscribe you might end up with many subscribers you don't know about in advance, so you might as well have the names generated — it's up to you. The reason we want the returned queue is that in the consumer, when we create the binding, we need the name, and this one is unknown to us. So: client.CreateBinding, using the randomly generated queue name. The binding key can be empty because this is a fanout — we no longer care about the binding key, we simply receive all the messages on this exchange. We have created the queue and bound it to the fanout exchange customer_events, which lets this client receive all of the events that are produced. Before we try this out we need to take the consuming part of the code and push it down a little, so consuming sits at the bottom: we create the queue, we create the binding, and then we start consuming the queue, using the randomly generated name. Everything else stays the same. Let's go into our terminals, clear them, and run the consumers — I'm starting both — then run the producer. This time we'll see that all 10 messages are delivered to both consumers. I hope you see the difference here, and why we went through all this hassle. The direct setup makes every client receive one message — or five messages — because it
will split the workload equally across the consumers, and that might not be what you want if you're expecting publish and subscribe. In the case where you want all the consumers to receive all the messages, you should use a fanout: publish and subscribe really pushes the data out to everyone listening. Now, there's another thing that is very common when using RabbitMQ, and that's building a kind of RPC system — remote procedure call. The way it works is that the producer sends a reply-to, which is the name of a queue the producer is listening on. Each message is sent with this reply-to, and the email service knows that whenever it is done, it replies to that queue. We'll see how to implement that — it's actually not too much, it's pretty easy. But first we need to go back to the terminal. I'll grab my docker exec rabbitmq rabbitmqadmin command, take the declare exchange command we ran previously, and modify it a bit: this time we create a direct type — same command, but a direct type — and the name of the exchange is going to be customer_callbacks. Execute that, then fix the permissions: docker exec rabbitmq, use rabbitmqctl to set the topic permissions on customers for Percy on customer_callbacks, which allows the Percy user to do whatever he wants on customer_callbacks. Before we start building this — because now we're actually going to start consuming and publishing in the same program — it's really important to cover one other rule of thumb. Earlier I said you should reuse your connections, but that is only true if you're either publishing or consuming on the connection. If you're doing both at the same time, which we will be in an RPC, because we will be producing messages and we will also be
listening on the callback — so both consuming and producing — you should never do that on the same connection. Never. Why, you might ask? Because if you have a producer spamming more messages than the server can handle, the TCP connection will start accumulating too many messages and RabbitMQ will apply back pressure, storing messages in a backlog. A consumer that wants to send an acknowledgment to the server over that same connection will suffer from the same back pressure — the back pressure stops the consumer from telling the server it has processed a message, and the whole pipeline clogs up. So, an important rule of thumb: never use the same connection for publishing and consuming. In software like our RPC example, we should create two connections: one for consuming, one for publishing. Let's go back to the producer. We will create two connections now, one used for publishing and one used for consuming; we will keep using the unnamed queues, and we will also add a reply-to to the messages. Copy this piece of code down here, but call this one the consume connection — all consuming will be done on this connection — and defer closing it. Copy the part where we create a client as well, and call it the consume client; don't forget to change the connection we pass in to the consume connection instead, and let's also close the consume client. I hope this makes sense: one connection for reading, one for writing. Now, whenever we send a message we will expect a callback: we send a message, the consumer does something, and then we want the response from that consumer. What we need to do is create a queue — an unnamed queue, persistent, checking for errors and panicking if there are any. We will
need to bind this queue, so: consumeClient.CreateBinding — we create the binding on the consume client because it's related to consuming. We pass in the queue name, and the binding key will also be the queue name, because this is a direct exchange — remember that with a direct exchange we only receive routing keys that match our binding exactly. The exchange is customer_callbacks. Check if the error is not nil and panic if so, and then start consuming: messageBus, err := consumeClient.Consume on the queue name, with the consumer name customer-api — it makes sense that a customer API would push a new member registering on your website, or whatever, and then expect a callback to notify the user whether it was successful. Let's create a goroutine — I'm not going to create multiple routines here, just a single-threaded reader — and simply print the messages, say "message callback", and we'll use the correlation ID this time. Then, at the bottom after we have sent, create a blocking channel again, same as in the consumer, and just block. So the producer is creating the queue and waiting for a callback on that queue — but we also need to tell the consumer which queue we are waiting on, where we want the reply. This is actually built into RabbitMQ, or rather AMQP: we can specify the ReplyTo field, so let's just pass in the queue name there; whoever receives this message will then know where to publish the response. Let's also add a correlation ID. The correlation ID is used to track and know which event a message relates to. So let's do fmt.Sprintf — I'll just call this customer_created and append the integer for now, which is a bit crude, but it works. So each message that is published has a reply-to and a correlation ID, which can be used further along to keep track of the messages. We need to make sure that the consumer
also has two connections, because now the consumer will publish messages back as well. I'm going to go ahead and force you to copy again: we do the same thing, but this time name it the publish connection — you could probably factor this out so it's shared between both pieces. We do the same for the client: a second client, the publish client, and we pass the publish connection into the publish client, and let's also close the publish client, like that. Now, we are already creating the queues here, same as before, so we don't have to change that; but we're not sending anything back at all yet. Let's go here, after we acknowledge — say we are done with the work at this point — and use the publish client to send back the callback, since this service has now finished its task. So: publishClient.Send — let's just reuse the same context — sending the callback to the customer_callbacks exchange. When we reply, we reply to the queue named inside message.ReplyTo — that's the queue name the producer gave us, so we're responding to the very queue we were given. Then create an amqp.Publishing and set some data: the ContentType will be text/plain, the DeliveryMode will be amqp.Persistent, and the Body will contain the text "RPC complete" — which is really creative, but again, this is just to show what's going on. And let's re-add the CorrelationId to the struct, so the producer can trace back which message it got a response to — if you don't add the correlation ID, the producer won't know which request the callback relates to when it receives it, which is why you really want to pass the correlation ID back and forth. If we now go back to the consumers and restart them, we should be
able to run the producer, which should generate callbacks back to the producer after a few seconds, and we should get the responses back and print them. We can see here that everything seems to be working — we are getting the responses back: message callback, customer_created_0. We now have an RPC pattern: a producer sending a message to a consumer and waiting for a response, and it's just amazing. One thing we should look at before we move on: do you remember the hacky way we limited the amount of work we could accept? We used an errgroup in Go to do that — but actually we don't have to, because there's a way of imposing limits inside RabbitMQ itself. RabbitMQ allows us to set something called a prefetch limit. A prefetch limit tells the RabbitMQ server how many unacknowledged messages it may send on one channel at a time, so we can set a hard limit and not overwhelm a service. This is something RabbitMQ refers to as quality of service. Let's see how we can use it. We're going to go back to the RabbitMQ client and create a new function on the RabbitClient, called ApplyQos. ApplyQos accepts a few parameters: the prefetch count, an integer for how many unacknowledged messages the server may send; the prefetch size, an integer for how many bytes of unacknowledged messages are allowed before the server must wait; and the global flag, a boolean that determines whether the rule applies globally or only to this channel. So those are the input parameters: count and size as integers, global as a boolean, and we return an error. The body is simply return rc.ch.Qos(count, size, global), which allows us to set the
limit on the RabbitMQ server instead. And that's great, because it lets us limit the message flow much more easily — you avoid cluttering your network and spamming one service until it goes down, so this is important to apply and I really recommend that you do it. Back in the consumer we can simply do client.ApplyQos with, say, 10 messages and 0 bytes — a size of zero means the byte limit is ignored, so don't worry about that. We have now applied a hard limit on the server: it knows we can only accept 10 unacknowledged messages and won't send more than that, which is nice. Now, it's the year 2023, so before going into production with this amazing piece of code I think it's safe to say that we should add encryption. RabbitMQ actually provides a GitHub repository to help you create a root certificate — a root CA — and the certificates you need to get going. So let's open up a terminal. I'm going to clone the official RabbitMQ repository called tls-gen; go ahead and do that. Inside that repository we'll see a few profiles; there's one called basic which generates basic certificates for you, so let's go into the basic folder, tls-gen/basic. You can read the README here, where they explain what's going on and what's not. Let's just create some basic certificates while we're testing: run make, optionally passing PASSWORD=, and then make verify, and this will have generated a root CA and all the files we need to apply TLS. You need to change the permissions on these files — you can read the README if you really want to know what all the commands are doing, but since this video is already way too long I won't go into the details; you should really read up if you want to know all the internal workings. So let's go ahead and change the permissions
on the generated files — they are stored in the result folder. Oh, my bad: sudo chmod 644 on the files in the result folder, enter your password, boom — now we have the right permissions. Next we need to delete the currently running RabbitMQ instance — I know — so let's remove it. Goodbye, Mr. RabbitMQ. We'll need to reapply everything, but we're going to see how to do that using configuration files instead, and it'll be a lot easier, trust me. Go back to your project; in the root we create a new file called rabbitmq.conf — this will be our configuration for RabbitMQ. We need to mount this file into RabbitMQ when we start it up, and we also need to make sure we mount the certificates that tls-gen just generated for us. We run the same docker run command as before, but add a few things: docker run --name rabbitmq, then -v for volume. Let me go back to the root first — make sure you're in the project root, otherwise the mounts won't work. So, docker run again, with the same name as before, and this time we add the -v flag, which is used to mount a folder from your host into the Docker container. We mount the rabbitmq.conf we created — it's empty for now, but you know — into /etc/rabbitmq/rabbitmq.conf, because that's where RabbitMQ expects the configuration to be, and we also mark it read-only with :ro. Then we add a second mount, this time for the certificates that the script generated. You can find these commands in my blog post, by the way, so if you don't have the time or don't want to pause all the time, you can find them in my articles. We mount the result folder into the /certs folder in the Docker container, which makes sure the container has access to the
certificates that were generated for us. Again we add the same ports as we did before — the port for the AMQP protocol and the port for the admin UI — and again we use rabbitmq:3.11-management. Once the container is created we can start adding TLS configuration. Let's go inside the configuration file — I'm inside my rabbitmq.conf, which is now also present in the Docker container. First of all we want to disable any listener that isn't TLS: listeners.tcp = none. We want TLS traffic to go to the default port 5671 when using SSL, so set the SSL listener accordingly. Then add the certificates: ssl_options.cacertfile = /certs/ca_certificate.pem — these are the files the script generated for you inside the result folder. You need to make sure you're using the right file names: if I look inside basic/result, my computer is named blackbox, and these file names change depending on the name of your machine, so check the correct names whenever inserting them. So for ssl_options.certfile I'll be using server_blackbox_certificate.pem, and the key file, ssl_options.keyfile, will be server_blackbox_key.pem. We also need to make sure we're using peer verification: ssl_options.verify = verify_peer. Peer verification is related to mTLS, which I won't explain in detail here, because again, that's outside the scope of this tutorial. Then we add ssl_options.fail_if_no_peer_cert = true. The short version of mTLS is that the client also presents its certificate, allowing both sides of the connection to verify each other's identity. Once we have edited all this, we have basically told RabbitMQ where to find the certificates, told it to use SSL by default, and said that both sides of the communication must be using it. Now let's open the terminal and
do a clear first, then docker restart rabbitmq to make sure everything works, and docker logs rabbitmq to see the logs — you should see what I'm seeing here: started TLS listener on 5671. If you're going into production with your own certificates, simply skip the part where we generate the certificates and insert your own into these locations: you mount the folder into /certs, add your certificates to that folder, and mount it using the volume as we did. We can verify this works by closing the running programs, clearing the terminal, and running the producer again: we should see a connection refused, because we are not using the certificates yet and the RabbitMQ server now expects encrypted traffic. So we need to update the code a little to use the certificates. The easiest way: go into the RabbitMQ client and modify the connection function, because this is where the magic will happen. We need new parameters: the CA certificate, the client certificate, and the client key. If you are familiar with TLS — I'm sorry if this goes a little fast; I can't cover it in detail here, but if you want a video where we explain TLS in detail, let me know. I'm just going to return the error here. So we read in the CA certificate file, then we load the client key pair, which is done using the crypto/tls package in Go. This is the same regardless of whether you're using, say, a regular HTTP client — loading certificates isn't related to RabbitMQ, so you've probably seen this code before. Now let's add the root CA to a cert pool: rootCAs := x509.NewCertPool(), and append the certificate — did I mistype? yes — with AppendCertsFromPEM, because that's the format we loaded. Then we create a TLS configuration where
we apply the root CAs we have loaded and insert the certificates, like that. So basically: we load the key pair, we load the CA cert, we append it to the cert pool, and then we create the tls.Config. The TLS configuration is important because when we dial AMQP we now have to use DialTLS instead. We also need to change the protocol: it shouldn't be amqp:// anymore, it should be the secure amqps:// — add an s and you're fine. Then, at the end, the DialTLS function also takes the actual TLS config as an argument — oh, I mistyped that, let's name it tlsCfg — so make sure we pass that to DialTLS. I hope I'm not speeding through this too much, and that it's fairly understandable what's going on. So now we have changed the protocol and loaded the certificates; next we need to update the consumer and the producer to pass these in. In my example I will be using hard-coded absolute paths to these certificates, which you should not do in real production — I'll be doing it the hard-coded way because it's actually a lot easier, and we don't want to be here all night watching me do strange stuff. So I'm just going to add the path to the result folder: the first parameter is the CA cert, so let's add that, then copy it two more times and change the last part. The second parameter will be client_blackbox_certificate.pem and the third parameter should be the key, client_blackbox_key.pem — and let's make sure we type those right: the CA cert, the client certificate, and the key. Of course we need to update this in all the places where we connect to the service, so I'm just going to jump to the consumer and copy-paste the same thing. Let's see if this works like
that. So both the consumer and the producer are now loading the certificates from that folder. Let's jump back to the terminal and run it — we forgot to import something, so I'll add the os package to the imports (for os.ReadFile). With all the imports managed, let's run again. Oh — unexpected newline in argument in the producer: a missing comma. Let's go to the producer and see what it's on about. All right, my bad, I forgot a comma — this is a really ugly connect call right now, but bear with me. Actually I forgot the commas in the consumer as well, so we need to add those too, like that. One more thing we need to change before connecting is the port: the TLS port is 5671 instead of 5672, so change that for both the consumer and the producer. So if we run the producer now, with the ports changed and the certificates added and everything, it will time out and tell us that the user's credentials are not allowed. Run docker logs rabbitmq and you can see that the user Percy has invalid credentials. Now why is that? Well, we have no infrastructure in place — remember, we set up the permissions and the users and so on using the command line, and that container is gone. So let's look at how we can use definitions to do that instead. Trust me: if you manage RabbitMQ using only the command line, it's going to be repetitive, it's going to be hard, it's going to be annoying. There's a way to define configuration files for RabbitMQ in which we can define the virtual hosts, the users, the permissions, the exchanges, and so on. To do this we need to begin by creating a hash of our password. I'm going to create a new script called encode_password.sh and paste in a script I found on Stack Overflow — I will have a link in the description, of course. What it does is encode the password that
you put here using RabbitMQ's defined hashing algorithm; we will need the output from it. My password is secret, so I'm just going to use that. I'll open the terminal and run bash encode_password.sh, and I get this little string, which is my encoded password — I'm going to copy it because we'll need it in a moment. Now, to make RabbitMQ load a definitions file on startup, we update the configuration: add load_definitions — these files are called definitions, by the way — and point it at /etc/rabbitmq/rabbitmq_definitions.json. We need to create this definitions file ourselves, and we'll use it to define all the resources that we need. So I'm going to create a new file called rabbitmq_definitions.json — it's a JSON file, so it should be fairly straightforward what's going on. We can define users like I'm doing here: create an array of objects called users, with the name Percy, the password_hash being the output from the command we ran before, and tags — remember that we added the administrator tag before; we can do the same thing here simply by using the tags field. And don't forget the commas — this is JSON, it will be angry. We can add virtual hosts the same way. As you probably start to understand, this is a lot easier than using the command line tools — but I really wanted to show you the command line tools, because, you know, it's good to know them. We have a customers virtual host, and we want to set permissions: permissions is an array again, and we can add many entries here, but I will add a permission for the user Percy, targeting customers, and specify the configure, write, and read permissions. Basically this is the same thing as using the
command line, it's just a lot smoother in my opinion, and it makes everything easier to maintain. So let's create an exchange — and as you see, if you know what the resources are called, it's pretty straightforward: permissions go into permissions, exchanges into exchanges, it's really straightforward. Let's create customer_events as a fanout: we want it to be durable, auto_delete false, internal false, and no arguments on the exchange. Then we copy this and add a second exchange, which will be the callbacks — so that we have a piece of software that actually functions after the tutorial and you can play more with it. The callbacks exchange should be a direct type, with everything else false. We can also create queues if we want to. We are creating queues inside the code right now, but just so you know, you can declare them here too: for instance, maybe I don't want the customers_created queue to be generated by code because it should always be there, so there's no reason to have the clients generate it. Sometimes — with RPC or pub/sub, for instance — it makes sense to have the clients generate the queues, but otherwise I like having the queues declared in the definitions file. One last thing: we're going to add bindings. You can have bindings pre-generated so that you know the wiring you need really exists. Bind on the virtual host customers with source customer_events, destination customers_created, destination_type queue — let me fix the typo in destination — and the routing key bound to customers.created.* like this. I think this is it — oh, let's add the arguments field as well, just so we have everything; to be honest I don't know whether you need to set arguments to an empty object or whether it will crash without it. We have created the definitions.
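Putting the pieces from the last few steps together, the definitions file could look roughly like this. This is a sketch assembled from what's described in the video: the password_hash value is a placeholder for the output of the encode script, and the hashing_algorithm key is RabbitMQ's default SHA-256 scheme.

```json
{
  "users": [
    {
      "name": "percy",
      "password_hash": "<output of encode_password.sh>",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "administrator"
    }
  ],
  "vhosts": [{ "name": "customers" }],
  "permissions": [
    {
      "user": "percy",
      "vhost": "customers",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    }
  ],
  "exchanges": [
    {
      "name": "customer_events",
      "vhost": "customers",
      "type": "fanout",
      "durable": true,
      "auto_delete": false,
      "internal": false,
      "arguments": {}
    },
    {
      "name": "customer_callbacks",
      "vhost": "customers",
      "type": "direct",
      "durable": true,
      "auto_delete": false,
      "internal": false,
      "arguments": {}
    }
  ],
  "bindings": [
    {
      "source": "customer_events",
      "vhost": "customers",
      "destination": "customers_created",
      "destination_type": "queue",
      "routing_key": "customers.created.*",
      "arguments": {}
    }
  ]
}
```

RabbitMQ applies this file on startup when rabbitmq.conf points load_definitions at it, which is what replaces all the rabbitmqctl and rabbitmqadmin commands from earlier.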
Now we need to mount it, however, so let's open up the terminal again. I'm going to do docker container rm -f rabbitmq — removing the container that's running RabbitMQ right now — and re-run docker run. The reason is that we need to add a third volume, the volume for the definitions. You could mount the whole folder if you want, I suppose, to make it a little easier to mount everything, but let's just do this for now: mount rabbitmq_definitions.json into /etc/rabbitmq/rabbitmq_definitions.json. Add the mount — I see a typo, fix that — and run it. After you restart it with the new volume, you should be able to run docker logs rabbitmq and see the logs for the container; if you scroll up you should see that it successfully set the permissions and so on for the user, so we know the definitions ran successfully. Now that we have fixed everything — the configuration and the definitions — let's try the producer once more: it's sending messages. Try one of the consumers and see that everything works as expected, then go back to the producer, send a few new messages — and everything is working, everything is encrypted, and we have configured everything using configuration files. Sadly, this is the end of the tutorial. It's been a long one — a really long one, I'm sorry — but it's been thrilling and exciting. Let's take a look at what we have learned: how to configure RabbitMQ, virtual hosts, creating users, what permissions are and how to create them. We have learned how to produce and consume messages on queues and exchanges, and hopefully it's really clear how to use connections and channels, the difference between them, and to remember to reuse connections — but never consume and publish on the same connection — and to reuse a channel per parallel process. We have learned about
RPC and how to use RPC with RabbitMQ, we have learned about publish and subscribe and about the different exchange types, and how to use TLS and configure it. Hopefully you have enjoyed this video. If you have, feel free to like and subscribe to my channel, and I will hear from you. Thank you!

Video description

A beginner-friendly tutorial on how RabbitMQ works and how to use RabbitMQ in Go. In this video, we will cover how to use RabbitMQ. These are the items that we will learn about:

* Setup RabbitMQ using Docker
* Virtual hosts, users, and permissions
* Managing RabbitMQ using the CLI with [rabbitmqctl](https://www.rabbitmq.com/rabbitmqctl.8.html) and [rabbitmqadmin](https://www.rabbitmq.com/management-cli.html)
* Learn about producers, consumers, and how to write them
* Learn about queues, exchanges, and bindings
* Using work queues (first in, first out)
* Using Pub/Sub with RabbitMQ
* Using RPC-based patterns and callbacks
* Encrypting traffic with TLS
* Using configurations to declare resources in RabbitMQ

You can also find this video as a written article on https://programmingpercy.tech/blog/event-driven-architecture-using-rabbitmq/

If you liked this video, feel free to buy me a coffee to support this channel. https://www.buymeacoffee.com/percybolmer

## Links mentioned or used

https://www.rabbitmq.com/management-cli.html
https://www.rabbitmq.com/rabbitmqctl.8.html
https://www.rabbitmq.com/download.html
https://www.rabbitmq.com/networking.html
https://www.rabbitmq.com/channels.html
https://github.com/rabbitmq/tls-gen
https://www.rabbitmq.com/passwords.html
https://stackoverflow.com/questions/41306350/how-to-generate-password-hash-for-rabbitmq-management-http-api
https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol
https://go.dev/dl/

00:00 Introduction
03:42 Docker and Users
09:07 Virtual Hosts
15:25 Queues, Producers, Exchanges and Consumers
18:39 Building the RabbitMQ Client And A Producer
30:26 Queues, Durability and Auto Delete
37:40 Exchanges, Exchange Types and rabbitmqadmin
37:43 Bindings
47:25 Publishing Messages
01:03:05 Consuming Messages And Acknowledgement
01:15:08 Nacking and Retries
01:18:35 Multithreading using Errgroup
01:25:54 Deferred Confirm And Confirm Mode
01:30:04 Fanout And Publish And Subscribe
01:39:10 RPC Procedures
01:50:59 Limiting amount of Requests using Prefetch and Quality Of Service
01:54:23 Encrypting Traffic With TLS
02:17:16 Configuring RabbitMQ With Definitions
02:19:54 Ending
