Analysis Summary
Worth Noting
Positive elements
- This video provides a clear technical breakdown of how data structures like doubly linked lists and sorted sets can be repurposed for complex messaging patterns.
Be Aware
Cautionary elements
- The presenters emphasize their separation from AWS's commercial side to gain 'open source' credibility, which obscures the strategic interest AWS has in Valkey's success over Redis.
About this analysis
Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.
This analysis is a tool for your own thinking — what you do with it is up to you.
Transcript
[music] Yeah, I'm Kyle, and actually we don't work for the Valkey offering; we work for the open source office at AWS. Our entire job is to work on the open source project for Valkey. They don't let us touch any of the AWS stuff, actually, so that's probably for the best. This is Roberto. >> Hello. >> Roberto's going to kick in. >> Hello. My name is Roberto. Gut. Danke. That's all of my German. [laughter] >> I wish I had as much skill with German; I don't have any. Okay, so we're going to do a bit of a speedrun introducing you to a number of different things here, so bear with us, and Roberto is going to show you how it all connects together. This is what's on the Valkey website. I'll read you the definition of Valkey, but before I do: how many people know what Valkey is? Raise your hands. A few. How many people know what Redis is? Raise your hands. Okay, that knowledge will come in handy. So, Valkey is an open source, BSD-licensed, high-performance key-value data store that supports a variety of workloads such as caching and message queues, and can act as a primary database. The project is backed by the Linux Foundation, ensuring that it remains open source forever. Let's break this down for a moment. Key-value database: everything is related to a key. It has a type, a structure, and a series of commands you can use to access the data. It's a Linux Foundation project that is open source forever. Valkey is the fork of Redis: a couple of years ago now, Redis had a license change, and half the maintainers left to form Valkey. That's where we come from, so if you know Redis, a lot of this is going to be very familiar. There are a few differences, and it has diverged in the intervening time. High performance: about a million operations per second on a single box.
If we want to go to the largest cluster we know is possible, 2,000 nodes will give you about a billion operations per second. No one in the world needs that, so don't even try it; it's silly. But you can do it, for some reason. Valkey is primarily used for caching because it's an in-memory-first, byte-addressable database. Usually I'd go on to talk about how it's a message queue, but this whole event is about message queues and you know what they are, so let's just talk a little about in-memory messaging. Why in-memory? In-memory is low latency and high throughput; it's how we do everything when writing an application, so it makes sense if you want high performance. Messaging is a fairly balanced read/write workload, and what's great about in-memory stores is that we read and write at approximately the same speed, so it matches that, and it's pretty easy to scale things that live in memory. What that means is we have less to worry about: there are no disks to think about. There is a way to persist to disk, but it's not part of the scaling, so you can scale out to thousands of nodes without losing hair in the process. What people know most about Valkey, when it comes to messaging, is fire-and-forget pub/sub, so let's look at how that works. Many of you have flown here; I flew here, so we're using a whole series of airline examples in this presentation. Here we have a number of applications subscribed to a number of channels; you can see these flights, FRA, whatever. Then another application will publish to one of those channels.
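Fire-and-forget pub/sub can be modeled as a map from channel to currently connected subscribers: messages published while nobody is listening simply vanish. This is a minimal pure-Python sketch of the semantics, not a real Valkey client:

```python
subscribers = {}   # channel -> list of callbacks standing in for connected apps

def subscribe(channel, callback):
    subscribers.setdefault(channel, []).append(callback)

def publish(channel, payload):
    # PUBLISH delivers only to whoever is subscribed *right now* and
    # returns the receiver count; there is no buffering and no replay.
    receivers = subscribers.get(channel, [])
    for cb in receivers:
        cb(payload)
    return len(receivers)

assert publish("flights.FRA", "delayed") == 0   # nobody listening: message is lost

seen = []
subscribe("flights.FRA", seen.append)
subscribe("flights.FRA", seen.append)           # fan-out to multiple apps
assert publish("flights.FRA", "boarding") == 2
assert seen == ["boarding", "boarding"]
```

The zero return on the first publish is the "radio in a tunnel" behavior the speakers describe: an offline application never sees the message.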
They'll have a payload, and when they publish, those payloads will be delivered instantly to the different subscribers; you can see here that's one, and you can also deliver to multiple subscribers as well. What people forget about this: we say it's fire-and-forget, and people don't understand that. They try things and ask, "Well, what happens when an application is offline?" Well, nothing. It goes nowhere; it's gone. It's kind of like having the radio on and going through a tunnel: while you're in the tunnel, you don't hear the radio, and there's no way to retrieve what you missed when you exit the tunnel. It's fire-and-forget. It has its uses, but it's not really useful for direct messaging on its own; it's usually a component of something else. Now, where Valkey is used more is when you start using some of our other data types, such as lists, which are used to build queues. Lists are just doubly linked lists, the kind you probably learned about in CompSci 101. In this case, we're using a command called LPUSH. We are pushing into a key called flights, and we have two items; we're pushing to the left side, so each one goes to the left. It would look something like this: our first argument is LH444, which goes to the left side of the existing elements, and then we have EW112, which again goes to the left of that. Then we can pop items off the left side, and in this example we get the item directly off the left end, and we have that value. Not that complicated. We can also do it from the right side; this is the exact same set of commands, but working on the right side instead of the left. Everybody with me? Everything should be good here. Next we're going to talk about how to use these in an actually useful way.
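Because a Valkey list is a doubly linked list, both ends support O(1) push and pop, which is exactly what `collections.deque` gives you. A small sketch of the LPUSH/LPOP/RPOP semantics just described (pure Python, not a client):

```python
from collections import deque

flights = deque()   # models the Valkey key "flights"

def lpush(lst, *items):
    # LPUSH key a b: each item lands at the left in turn,
    # so the last argument ends up leftmost.
    for item in items:
        lst.appendleft(item)

def lpop(lst):
    return lst.popleft() if lst else None   # None plays the role of nil

def rpop(lst):
    return lst.pop() if lst else None

lpush(flights, "LH444", "EW112")
assert list(flights) == ["EW112", "LH444"]   # EW112 was pushed last
assert lpop(flights) == "EW112"              # pop from the left end
assert rpop(flights) == "LH444"              # or drain from the right
```

Pushing left and popping right gives FIFO queue behavior; pushing and popping the same side gives a stack.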
So when we start looking at this, we're going to use a concept called blocking in Valkey. Basically, that is a way to save you from having to poll: the command will wait if nothing exists yet. Here we are doing a blocking left-side pop (BLPOP) from the key flights with a timeout of 100. If nothing exists, we wait for up to 100 seconds, and if in the intervening time something LPUSHes into that key, it will be instantly delivered to the blocking client. Keep that in the back of your mind: the rest of the commands I'll talk about in this presentation have the same blocking idea available in some form. Moving forward, we can LPUSH a number of items in and build a queue with this. Right now we have some data, and it represents what you see here: three items in that list, plus any additional items. We also have a command to move an element between lists atomically, and in this case we are moving from the left side of flights to the right side of a new key called process-flights. It returns the item that we moved, and we now have two different keys; this is what it looks like now: we removed EW112, and it's in the new key. As we move on, we can do some processing on that returned value, whatever that might be: sending it to an API, doing some sort of calculation on it, whatever takes time. When we're done with it, we can remove it from the processing queue, and we are safe; everything has happened here. What's really good about this is that, since we only remove it from the processing queue at the end, we can tolerate any type of failure in between.
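The reliable-queue pattern just described (pop from the pending list, park the item in a processing list, delete it only after the work succeeds) can be sketched in plain Python. The two deques are stand-ins for the two Valkey keys; this is a simplified model, not the real LMOVE command:

```python
from collections import deque

flights = deque(["EW112", "LH444", "FR556"])   # pending queue (left = head)
processing = deque()                            # items currently being worked on

def lmove_left_right(src, dst):
    # Models LMOVE src dst LEFT RIGHT: atomically move the head of src
    # to the tail of dst and return it (None if src is empty).
    if not src:
        return None
    item = src.popleft()
    dst.append(item)
    return item

item = lmove_left_right(flights, processing)
assert item == "EW112"
# ... do the real work on `item` here (API call, calculation, ...) ...
processing.remove(item)   # acknowledge by deleting from the processing list
# Had the worker crashed before this line, EW112 would still sit in
# `processing` and could be retried, which is the point of the pattern.
```

In a real deployment the move is atomic on the server, so a crash between "pop" and "park" cannot lose the item.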
Because we moved it atomically inside the key-value store, nothing is lost if the worker fails: we can go and process it again, and remove it at the end of the processing. So this is good and useful; people have built entire queues on this in their applications, and we see a lot of people using it. But we have other features as well. One thing we see people using is sorted sets, another of our data types, which we use for priority queues. Sorted sets are an interesting concept for those who haven't met them: a set with a series of members, with no repeats among those members. Each member can have a score; the score is a numeric value, and scores can be repeated. In this example, we have members foo and bar, and they both have the score 1000, but foo can't have two different scores; that's not allowed, because when you try to change a score, the set basically does an upsert on that member. So let's see how we can make a priority queue out of this. In this case we're doing a very simple priority queue: each additional request increases the priority of that item. We're going to use the command ZINCRBY (Z is how we denote all sorted set commands) on the key flights, incrementing by one, with the member EW112. This is what the data looks like at this point: the key wasn't there, so it was created; we just assume flights started as an empty key. Here we do a ZINCRBY for an additional flight, LH444, and now we have two members in our set, each with a score of one. Then another app increments EW112.
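A sorted set behaves like a map from member to score that you can read back ordered by score. Here is a small pure-Python model of the ZINCRBY-based priority queue (simplified: ties are not broken lexicographically as a real server would):

```python
scores = {}  # member -> score, modeling the sorted set at key "flights"

def zincrby(zset, increment, member):
    # ZINCRBY key increment member: create the member at `increment` if
    # absent, otherwise add to its existing score; return the new score.
    zset[member] = zset.get(member, 0) + increment
    return zset[member]

def zrevrange(zset, start, stop):
    # ZREVRANGE key start stop: members in descending score order,
    # with an inclusive stop index, like the real command.
    ordered = sorted(zset, key=lambda m: zset[m], reverse=True)
    return ordered[start:stop + 1]

zincrby(scores, 1, "EW112")
zincrby(scores, 1, "LH444")
zincrby(scores, 1, "EW112")                   # EW112 now outranks LH444
assert zrevrange(scores, 0, 0) == ["EW112"]   # the highest-priority item
```

`zrevrange(scores, 0, 0)` is the "descending order, zeroth to zeroth element" call the talk uses to peek at the top of the priority queue.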
And so now we have moved it up in the priority queue; we can do this over and over again. To process it, we have a command called ZREVRANGE: basically, get members in descending order by score, from the zeroth element to the zeroth element (the additional arguments specify that range). It returns that member, and you can then do additional processing. What most people do is add it to a processing list, as in the previous example, and then remove it. Now, the problem you can see is that this sequence is not atomic on its own. Valkey has a built-in Lua scripting engine that lets us run these steps transactionally, so if there's any failure, it wouldn't add the item to the processing queue; it would be all or nothing. That lets us go back and look at it again if we have a problem. People have built entire queues off this, and there are applications that Roberto is going to show. But we have yet another thing you can use for messaging: streams. I saw some of the presentations earlier; this is going to look very familiar to a lot of people. It's basically the stream data model, where each key contains a series of entries. Each entry has a unique ID based on a millisecond-precision timestamp plus an incrementing sequence number, so if multiple items come in within the same millisecond, they still get unique IDs. Each entry has a series of fields and values, which are the payload. You read these entries by range, from one or more stream keys (so you can read multiple streams at a time), and you manage the processing and acknowledgement through a consumer group. Now, we've said the K-word several times.
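The entry-ID scheme just described can be modeled as a (millisecond timestamp, sequence) pair. A tiny sketch of how XADD's auto-generated IDs stay unique when entries land in the same millisecond (an assumption-laden simplification, ignoring server details like ID monotonicity across restarts):

```python
import time

class StreamIdGen:
    # Models stream auto IDs of the form "<ms>-<seq>", where seq
    # increments whenever two entries share the same millisecond.
    def __init__(self):
        self.last_ms = -1
        self.seq = 0

    def next_id(self, now_ms=None):
        ms = int(time.time() * 1000) if now_ms is None else now_ms
        if ms == self.last_ms:
            self.seq += 1          # same millisecond: bump the sequence
        else:
            self.last_ms, self.seq = ms, 0
        return f"{ms}-{self.seq}"

gen = StreamIdGen()
# Three entries arriving in the same millisecond (forced for the demo):
ids = [gen.next_id(now_ms=1700000000000) for _ in range(3)]
assert ids == ["1700000000000-0", "1700000000000-1", "1700000000000-2"]
```

Because IDs are timestamps, "read from 0" means "from the beginning of time", which is exactly how the talk's first stream read works.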
If you know Kafka, these are all very familiar concepts. So let's look at how you would build something with this in Valkey. In this case, we've added three things. All the stream commands start with an X, so XADD, and we're going to add three different items to what will become our queue, each with one of these IDs: they came in within the same millisecond, so they each get incrementing sequence numbers. Then we're going to read from those streams, basically saying: give me two values (COUNT 2) from STREAMS flights, starting from 0, which is the beginning of time, and it returns this value. Your application would turn that into whatever representation it uses; this is how it looks in our CLI, but it would probably come out as a series of dicts in Python. Now, this is useful, but we can go further, because we haven't yet talked about how to acknowledge and process those items. So we have the concept of groups. We're going to create a few groups here: one for meals and one for cleaning, on the idea that maybe you need to stock or clean a flight. Here we read from a group, in this case the group meals, and every reader has an identifier it uses; here I'll choose my name, Kyle (it would not typically be a person's name). When we do this, it returns the first entry we added, because nothing in that group has read it before. Then we can read from a different group and get those same values back out again; we get the same values because the two groups, cleaning and meals, each keep their own track of where they are.
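Each consumer group keeps its own pending-entries list: reading puts an entry on it, claiming transfers ownership, and acknowledging removes it. A simplified pure-Python model of that bookkeeping (hypothetical function names mirroring XREADGROUP, XCLAIM, and XACK; the real XCLAIM also checks a minimum idle time, omitted here):

```python
# group -> {entry_id: consumer_name}: one pending list per group
pending = {"meals": {}, "cleaning": {}}

def xreadgroup(group, consumer, entry_id):
    # Reading a new entry in a group places it on that group's pending list.
    pending[group][entry_id] = consumer

def xclaim(group, new_consumer, entry_id):
    # Models XCLAIM: transfer ownership of a still-pending entry.
    if entry_id in pending[group]:
        pending[group][entry_id] = new_consumer

def xack(group, entry_id):
    # Models XACK: acknowledged entries leave the pending list.
    pending[group].pop(entry_id, None)

xreadgroup("meals", "kyle", "1700000000000-0")
assert pending["meals"] == {"1700000000000-0": "kyle"}
xclaim("meals", "joe", "1700000000000-0")     # reclaim a stuck entry
assert pending["meals"]["1700000000000-0"] == "joe"
xack("meals", "1700000000000-0")
assert pending["meals"] == {}                  # nothing left pending
```

Note that acknowledging in "meals" leaves the "cleaning" group untouched, which is why both groups in the talk see the same entries independently.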
From here, we can look at the pending list. This tells you what hasn't been processed yet for a specific reader in a group, and it says: this entry we read for meals, Kyle has not processed it. There's an idle-time value in it; that's the 65,000 at the bottom here. From here, we can start thinking about how we're going to acknowledge or reclaim those values. In this case, we take the values we had before and do an XCLAIM (I wish it were called XRECLAIM, because it's really about reclaiming data): we transfer the entry from Kyle to Joe. There's an idle-time threshold on this too, so it will only claim the entry if it has been idle longer than that, and then we get the value back with its new owner: Joe owns that item. When we're done with it, we can acknowledge it, and when we check pending again, it's an empty array; there's nothing there. And if we read with our group again, we'll basically see the same values back out, except EW112, our first one, is now gone, because it's been processed; we can't read it through the group any longer, although you could read it without a group and still see it in the actual data structure. Now, I've thrown a lot at you all at once. This is like a speedrun; this is what I'd usually do in about an hour. But I know people in this room know messaging. To put it all together, I'm going to bring Roberto up here; he's going to show you how it's actually done with a common package. [snorts] >> Thank you. I know it's just me between you and lunch, so I'm going to try to make this quick. And like Kyle said, we probably all came... I'm going backwards.
>> You're going backwards. >> Yeah, sorry about that. We all probably came here on trains or flights, and maybe you had the luck, like I did, to be a passenger on a flight like this one. In this case we have two characters, Jane and Joe, and something happens that happens very often, especially in the US: you get delayed, and then you need to send a bunch of different notifications, and this causes people to react like this. Essentially: "oh my gosh, I have to go do something." Like Kyle mentioned, there are libraries that put all these data structures together for you: Celery for Python, BullMQ for JavaScript, Sidekiq for Ruby, and even, for Kubernetes deployments, something called KEDA for autoscaling, using Valkey as one of the data stores that helps you scale. How do we do this? In this case, we're using Valkey as both a broker and a backend. We have a producer, our Celery application, and an event: a flight gets delayed. We use the list data structure to push into a queue called celery, passing all the data about the task, and we also add it to a sorted set as well as a hash to track the unacknowledged tasks. With the sorted set, we now have a priority queue, so if some things get delayed and don't get processed, we can use the unacked index to make sure the first ones in are also the first to go out and get processed. And we have a worker reading from that queue: we pop the elements from the celery task queue and get a meta-task with three different tasks; remember, we're doing SMS and push notifications as well as email. Then we remove it from the unacked queue as well as the hash where we're keeping track of the values.
And once we're done processing, we store the results on the backend, in this case also Valkey, and we publish a message; remember, that's just a broadcast, a channel where we say: okay, we're done with this task, [clears throat] and this is the result. So the producer can read from that channel and know it has the results of that given task. So I'm going to go to the console and run some commands for you. Oh, I'm sorry, I'm going to have to unplug and plug in again. Sorry about that. Oh no, it did not work. I have to make it a single screen so I can see what I'm doing here. Mirror, and that should work. Can you see this font size correctly, or should I go a little bit closer? Is it good? Thank you. Perfect. Excellent. So let's run our demo. First, let's clean up our databases. Perfect. Then, if we start the demo: this is a Python project, so it's super simple. We're using two different servers in this case: we have Valkey 9 over here as our Valkey core, and we're also using open source Redis version 7.2.4, from which Valkey was forked, and on this second tab over here we can see both running. We have Valkey as well as Redis, and this demonstrates the compatibility between the two: essentially, Valkey is a drop-in replacement for most applications that currently use open source Redis. Like I said, it's a fairly simple Python project. These are some of the packages we're using: the celery library, pydantic just to keep track of the models that we have, and in this case the redis and valkey clients as well. Okay, so we synced our packages. We have a couple of models: the passenger, which has a few items about the person, and the flight, which has the airline, departure, and arrival airports, as well as the status, which can be on time, delayed, cancelled, or boarding.
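The demo uses pydantic for its models; a stdlib dataclass sketch of the two models as described could look like this (field names are assumptions based on the talk, not the project's actual code):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class FlightStatus(str, Enum):
    ON_TIME = "on_time"
    DELAYED = "delayed"
    CANCELLED = "cancelled"
    BOARDING = "boarding"

@dataclass
class Passenger:
    name: str
    email: str
    phone: Optional[str] = None
    push_token: Optional[str] = None  # one passenger in the demo has no token

@dataclass
class Flight:
    airline: str
    departure_airport: str
    arrival_airport: str
    status: FlightStatus

flight = Flight("LH", "CDG", "BER", FlightStatus.DELAYED)
assert flight.status is FlightStatus.DELAYED
```

With pydantic instead of dataclasses you would additionally get validation and JSON serialization, which matters since the tasks travel over the broker as JSON.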
And we're going to take a look at our configuration. First, we're telling Celery that it can use both as broker and backend, in this case Valkey and Redis (or it could be Valkey for both), and we're going to use JSON for most of this. Then we have the three different tasks: sending an email, which we simulate with a sleep of two seconds (of course, you'd use an API to actually deliver the emails); sending an SMS, which could take a little longer depending on the carrier, so we simulate about three seconds; and finally the push notification, which is a little faster, just to demonstrate how these tasks operate asynchronously, or non-blocking, as I learned from the keynote. And then finally, the main task, which processes the flight status updates: it takes a flight, and in this case we mock the passengers, Jane and Joe, whom we saw earlier, though of course they could come from a database or arrive as the payload of the function call. All we do is, for each of those passengers, send the three different notifications. Okay. Then this is our main program, with two functions: flight delay, which tells you that you're going to be delayed, from Paris to Berlin, like happened to me; and boarding, when you're departing, for example from Berlin to Marseille. We also add a quick one-second pause just to see things on the screen. Once we have that, we can start our worker; we can see that Celery is using both backends via the Redis connector, in this case pointing at both Valkey and open source Redis. So I'm going to log in here, in the same folder. Okay, GitHub. Perfect. That's where I want to be. Let's do the demo, step two.
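The Celery wiring just described might look roughly like this; since Valkey speaks the Redis protocol, the standard redis:// transport works as-is. The URLs, ports, task names, and sleep durations are assumptions for illustration, not the demo's actual code:

```python
import time
from celery import Celery

app = Celery(
    "notifications",
    broker="redis://localhost:6379/0",    # Valkey as the broker
    backend="redis://localhost:6380/0",   # second server as the result backend
)
app.conf.update(task_serializer="json", result_serializer="json")

@app.task
def send_email(passenger, flight):
    time.sleep(2)          # simulate a slow email API call
    return f"email to {passenger}"

@app.task
def send_sms(passenger, flight):
    time.sleep(3)          # SMS can be slower depending on the carrier
    return f"sms to {passenger}"

@app.task
def send_push(passenger, flight):
    time.sleep(1)          # push is the fastest of the three
    return f"push to {passenger}"
```

Calling `send_email.delay(...)` would then LPUSH a JSON task message onto the broker's celery list, which is exactly the traffic the MONITOR session in the next step shows.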
If you recall, what I'm doing here is running the MONITOR command on Redis as well as Valkey. In this case, Valkey is the broker, and as you can see, Celery is already doing some status updates, for example a heartbeat, and checking whether there are any items on the queue. I need to spell my demo properly. There you go. In this case, we do not have any tasks currently running, but if I run the next command, it will process those two notifications that we saw. And they go super fast, right? In this case, I was able to wait for one second and then get the payload of the active tasks. Above, we see the worker, and we can see the different messages that were sent: all the notifications for the flight GA that has been delayed. We can see that the push task for Joe did not happen, because he does not have a token, but it completed really quickly; we didn't even have to wait the 1.5 seconds. Whereas for Jane, we did wait; it took 1.5 seconds, and even though hers was the last task we sent, it was the first to get processed. That's asynchronicity, or parallel processing, at work. And all of this, as you can see, without us writing any Valkey-specific code; it was just Celery doing the work, and it all happened behind the scenes. We can see a bunch of LPUSH commands, and then we can also see the BRPOP; remember, we talked about how we block until there are elements on the queue. And then over here, on the storage or backend side, I can SCAN some of the tasks that happened; these are the results of the task, and I can go and GET the payload.
And there is something else we didn't discuss that I would like you to learn, which is very important in the in-memory world: you can have a time-to-live on every key that you set. In this case I have a key [clears throat] called key with a value called value, but I could just as well have a key called country with the value Germany, so I can say: get me the key, or get me the country. Okay, that's fine. But what if I want to see how long the data is going to stay in memory? I check the TTL on the key: it's negative one, which means it will stay in memory indefinitely. So I can say: I want to expire the key in five seconds. If I run TTL again, I see the value being decremented, and eventually the key is no longer available, so when I do the GET, there's no value; and negative two means the key did have a time-to-live but it has now lapsed. If we go to the task key, I can get it too and see its TTL: it's close to 24 hours, a bit over 23 hours. So the idea is that we're showcasing how Valkey can be used as an extremely low-latency and reliable system for messaging, using all the different structures we talked about today, and we would love for you to contribute. There are a lot of things we can do to enhance, for example, the use of streams, and [clears throat] we would like you to get involved. Go to the valkey.io website, download it and test it out, and, yeah, contribute if you can. Thank you so much. I think we're good to go. Yeah! [applause] [music]
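The TTL behavior in that closing demo can be modeled with per-key expiry timestamps. A small pure-Python sketch of SET/GET/EXPIRE/TTL semantics, including the -1 (no expiry) and -2 (missing or lapsed key) return codes the speaker points out:

```python
import time

store = {}     # key -> value
expiry = {}    # key -> absolute expiry time in seconds

def set_key(key, value):
    store[key] = value
    expiry.pop(key, None)        # SET clears any previous TTL

def expire(key, seconds, now=None):
    now = time.time() if now is None else now
    if key in store:
        expiry[key] = now + seconds

def _evict_if_lapsed(key, now):
    if key in expiry and now >= expiry[key]:
        store.pop(key, None)
        expiry.pop(key, None)

def get(key, now=None):
    now = time.time() if now is None else now
    _evict_if_lapsed(key, now)
    return store.get(key)        # None plays the role of nil

def ttl(key, now=None):
    # TTL: -2 if the key is gone, -1 if it never expires, else seconds left.
    now = time.time() if now is None else now
    _evict_if_lapsed(key, now)
    if key not in store:
        return -2
    if key not in expiry:
        return -1
    return int(expiry[key] - now)

set_key("country", "Germany")
assert ttl("country", now=0) == -1       # no expiry set yet
expire("country", 5, now=0)
assert ttl("country", now=1) == 4        # counting down
assert get("country", now=6) is None     # lapsed after 5 seconds
assert ttl("country", now=6) == -2       # the key no longer exists
```

The `now` parameter is only there so the countdown is deterministic in this sketch; a real server tracks the clock itself.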
Video description
✨ This talk was recorded at MQ Summit 2025. If you're curious about our upcoming event, check https://mqsummit.com/ ✨ You've probably used Valkey (or its predecessor) as a cache; however, hiding below the surface is a messaging powerhouse built for scale, low latency, and high throughput. In this session, you'll learn how Valkey, a Linux Foundation-backed, high-speed key-value datastore, provides both fundamental primitives and complete solutions for a variety of messaging use cases. Let's keep in touch! Follow us on: 💥 Twitter: / MQSummit 💥 BlueSky: / mqsummit.bsky.social 💥 LinkedIn: / mqsummit